\section{Introduction}
\smallskip
Let $E$ be an elliptic curve defined over ${\mathbb Q}$.
For a prime $p$ of good reduction for $E$
the reduction of $E$ modulo $p$ is an elliptic curve $E_p$ defined over the finite field ${\mathbb F}_p$
with $p$ elements.
Denote by $E_p({\mathbb F}_p)$ the group of ${\mathbb F}_p$-rational points of $E_p$.
Its structure as a group, for example, the existence of large cyclic subgroups, especially of prime order, is of interest because of applications to elliptic curve cryptography \cite{Koblitz1987, Miller1986}.
It is well known that the finite abelian group $E_p({\mathbb F}_p)$ has structure
\begin{equation}\label{Structure}
E_p({\mathbb F}_p)\simeq ({\mathbb Z}/d_p{\mathbb Z}) \oplus ({\mathbb Z}/e_p{\mathbb Z})
\end{equation}
for uniquely determined positive integers $d_p$ and $e_p$ with $d_p\mid e_p$.
Here $e_p$ is the size of the maximal cyclic subgroup of $E_p({\mathbb F}_p)$, called the exponent of $E_p({\mathbb F}_p)$.
The study of $e_p$ as a function of $p$ has received considerable attention
\cite{Schoof1991, Duke2003, Co2003, CoMu2004}, where the following problems were considered:
\begin{itemize}
\item{lower bounds for the maximal values of $e_p$,}
\item{the frequency of $e_p$ taking its maximal value,
i.e., the density of the primes $p$ for which $E_p({\mathbb F}_p)$ is a cyclic group,}
\item{the smallest prime $p$ for which the group $E_p({\mathbb F}_p)$ is cyclic (elliptic curve analogue of Linnik's problem).}
\end{itemize}
Very recently, motivated by a question of Silverman,
Freiberg and Kurlberg \cite{FK2012} investigated the average order of $e_p$.
Before stating their results, let us fix some notation.
Given a positive integer $k$,
let $E[k]$ denote the group of $k$-torsion points of $E$
(called {\it the $k$-division group of $E$}) and
let $L_k := {\mathbb Q}(E[k])$ be the field obtained by adjoining to ${\mathbb Q}$ the coordinates of the points of $E[k]$
(called {\it the $k$-division field of $E$}).
Write
\begin{equation}\label{defnLk}
n_{L_k} := [L_k : {\mathbb Q}].
\end{equation}
Denote by $\mu(n)$ the M\"obius function, by $\pi(x)$ the prime-counting function and
by $\zeta_{L_k}(s)$ the Dedekind zeta function associated with $L_k$, respectively.
Assuming the Generalized Riemann Hypothesis (GRH) for $\zeta_{L_k}(s)$ for all positive integers $k$,
Freiberg and Kurlberg \cite[Theorem 1.1]{FK2012} showed that
\begin{equation}\label{FK}
\frac{1}{\pi(x)} \sum_{p\leqslant x} e_p
= \frac{1}{2} C_E x + O_E\big(x^{9/10} (\log x)^{11/5}\big)
\end{equation}
for all $x\geqslant 2$,
where
\begin{equation}\label{defCE}
C_E
:= \sum_{k=1}^{\infty} \frac{1}{n_{L_k}} \sum_{dm=k} \frac{\mu(d)}{m}
= \prod_p \bigg(1-\sum_{\nu=1}^{\infty} \frac{p-1}{p^\nu n_{L_{p^\nu}}}\bigg).
\end{equation}
The implied constant depends on $E$ at most.
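As an illustration (not part of the argument of \cite{FK2012}), the Euler product in \eqref{defCE} can be evaluated numerically once the degrees $n_{L_{p^\nu}}$ are known. The following Python sketch (ours) assumes a ``generic'' non-CM curve with $n_{L_k}=|\hbox{{\rm GL}}_2({\mathbb Z}/k{\mathbb Z})|$ for every $k$ (cf. Lemma \ref{lem2}(c) below); in that case the local factor at $p$ sums in closed form to $1-p^{-4}\big((1-p^{-2})(1-p^{-5})\big)^{-1}$.
\begin{verbatim}
# Sketch (ours): truncated Euler product for C_E in the "generic" case
# n_{L_k} = |GL_2(Z/kZ)|; illustrative only, not code from the paper.
def local_factor(p):
    # 1 - sum_{nu>=1} (p-1)/(p^nu |GL_2(Z/p^nu Z)|), summed in closed form
    return 1.0 - p**-4 / ((1.0 - p**-2) * (1.0 - p**-5))

def primes(limit):
    sieve = bytearray([1]) * limit
    sieve[:2] = b"\x00\x00"
    for i in range(2, int(limit**0.5) + 1):
        if sieve[i]:
            sieve[i*i::i] = bytearray(len(sieve[i*i::i]))
    return [i for i in range(limit) if sieve[i]]

C = 1.0
for p in primes(10**5):
    C *= local_factor(p)
print(C)  # the factors are 1 - O(p^{-4}), so the product converges quickly
\end{verbatim}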
When $E$ has complex multiplication (CM), they \cite[Theorem 1.2]{FK2012} also proved that \eqref{FK} holds unconditionally with a weaker error term
\begin{equation}\label{FKCM}
O_E\bigg(x\frac{\log_3x}{\log_2x}\bigg),
\end{equation}
where $\log_\ell$ denotes the $\ell$-fold iterated logarithm.
\vskip 2mm
The aim of this short note is to prove results more precise than \eqref{FK} and \eqref{FKCM}.
\begin{theorem}\label{thm}
Let $E$ be an elliptic curve over ${\mathbb Q}$.
\par
{\rm (a)}
Assuming GRH for the Dedekind zeta function $\zeta_{L_k}$ for all positive integers $k$, we have
\begin{equation}\label{thm(a)}
\frac{1}{\pi(x)} \sum_{p\leqslant x} e_p
= \frac{1}{2} C_E x + O_E\big(x^{5/6} (\log x)^{4/3}\big).
\end{equation}
{\rm (b)}
If $E$ has CM, then we have unconditionally
\begin{equation}\label{thm(b)}
\frac{1}{\pi(x)} \sum_{p\leqslant x} e_p
= \frac{1}{2} C_E x + O_E\bigg(\frac{x}{(\log x)^{1/14}}\bigg).
\end{equation}
Here $C_E$ is given as in \eqref{defCE} and the implied constants depend on $E$ at most.
\end{theorem}
{\bf Remark}.
(a)
Our proof of Theorem \ref{thm} is a refinement of Freiberg and Kurlberg's method \cite{FK2012}
with some simplification.
(b)
To compare \eqref{FK} and \eqref{thm(a)}, note that $\tfrac{9}{10}=0.9$ and $\tfrac{5}{6}=0.833\cdots$.
(c)
The quality of \eqref{thm(b)} can be compared with the following result of Kurlberg and Pomerance \cite[Theorem 1.2]{KP2012} concerning the multiplicative order of a number modulo $p$:
Given a rational number $g\not=0, \pm 1$ and a prime $p$ not dividing the numerator of $g$,
let $\ell_g(p)$ denote the multiplicative order of $g$ modulo $p$.
Assuming GRH for $\zeta_{{\mathbb Q}(g^{1/k}, {\rm e}^{2\pi{\rm i}/k})}(s)$ for all positive integers $k$, one has
$$
\frac{1}{\pi(x)} \sum_{p\leqslant x} \ell_g(p)
= \frac{1}{2} C_g x + O\bigg(\frac{x}{(\log x)^{1/2-1/\log_3x}}\bigg),
$$
where $C_g$ is a positive constant depending on $g$.
\vskip 5mm
\section{Preliminary}
Let $E$ be an elliptic curve over ${\mathbb Q}$ with conductor $N_E$ and let $k\geqslant 1$ be an integer.
For $x\geqslant 1$, define
$$
\pi_E(x; k)
:= \sum_{\substack{p\leqslant x\\ p\nmid N_E, \, k\mid d_p}} 1.
$$
The evaluation of this function will play a key role in the proof of Theorem \ref{thm}.
Using the Hasse inequality (see \eqref{Hasse} below),
it is not difficult to check that $p\nmid d_p$ for $p\nmid N_E$.
Thus the conditions $p\nmid N_E$ and $k\mid d_p$ are equivalent to $p\nmid kN_E$ and $k\mid d_p$,
that is $p\nmid kN_E$ and $E_p({\mathbb F}_p)$ contains a subgroup isomorphic to ${\mathbb Z}/k{\mathbb Z}\times {\mathbb Z}/k{\mathbb Z}$.
Hence by \cite[Lemma 1]{Mu1983}, we have
$$
\sum_{\substack{p\leqslant x\\ \text{$p$ splits completely in $L_k$}}} 1
= \pi_E(x; k) + O(\log(N_Ex)).
$$
In order to evaluate the sum on the left-hand side,
we need effective versions of the Chebotarev density theorem.
They were first derived by Lagarias and Odlyzko \cite{LaOd1979},
refined by Serre \cite{Serre1981}, and subsequently improved by M. Murty, V. Murty and Saradha \cite{MuMuSa1988}.
With the help of these results, one can deduce the following lemma
(cf. \cite[Lemma 3.3]{FK2012}).
\begin{lemma}\label{lem1}
Let $E$ be an elliptic curve over ${\mathbb Q}$ with conductor $N_E$.
\par
{\rm (a)}
Assuming GRH for the Dedekind zeta function $\zeta_{L_k}(s)$, we have
\begin{equation}\label{Eq1lem1}
\pi_E(x; k) = \frac{\hbox{{\rm Li}}(x)}{n_{L_k}} + O\big(x^{1/2} \log(N_Ex)\big)
\end{equation}
uniformly for $x\geqslant 2$ and $k\geqslant 1$,
where the implied constant is absolute.
\par
{\rm (b)}
There exist two absolute constants $B>0$ and $C>0$ such that
\begin{equation}\label{Eq2lem1}
\pi_E(x; k)= \frac{\hbox{{\rm Li}}(x)}{n_{L_k}} + O\big(x {\rm e}^{-B(\log x)^{5/14}}\big)
\end{equation}
uniformly for $x\geqslant 2$ and $C N_E^2 k^{14}\leqslant \log x$,
where the implied constant is absolute.
\end{lemma}
The next lemma (cf. \cite[Proposition 3.2]{FK2012} or \cite[Propositions 3.5 and 3.6]{CoMu2004})
gathers some properties of the division fields $L_k$ of $E$ and estimates for $n_{L_k}$,
which will be useful later.
Denote by $\varphi(k)$ the Euler function.
\begin{lemma}\label{lem2}
{\rm (a)}
The field $L_k$ contains ${\mathbb Q}({\rm e}^{2\pi {\rm i}/k})$.
Therefore $\varphi(k)\mid n_{L_k}$ and
a rational prime $p$ which splits completely in $L_k$ satisfies $p\equiv 1 ({\rm mod}\,k)$.
\par
{\rm (b)}
$n_{L_k}$ divides $|\hbox{{\rm GL}}_2({\mathbb Z}/k{\mathbb Z})|=k^3 \varphi(k) \prod_{p\mid k} (1-p^{-2})$.
\par
{\rm (c)}
If $E$ is a non-CM curve, then there exists a constant $B_E\geqslant 1$ (depending only on $E$)
such that $|\hbox{{\rm GL}}_2({\mathbb Z}/k{\mathbb Z})|\leqslant B_E n_{L_k}$ for each $k\geqslant 1$.
Moreover, we have $|\hbox{{\rm GL}}_2({\mathbb Z}/k{\mathbb Z})| = n_{L_k}$ whenever $(k, M_E)=1$
$($where $M_E$ is Serre's constant\,$)$.
\par
{\rm (d)}
If $E$ has CM, then $\varphi(k)^2\ll n_{L_k}\leqslant k^2$.
\end{lemma}
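For instance, part (b) gives $|\hbox{{\rm GL}}_2({\mathbb Z}/2{\mathbb Z})|=2^3\,\varphi(2)\,(1-2^{-2})=6$, which is indeed the order of $\hbox{{\rm GL}}_2({\mathbb F}_2)\simeq S_3$.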
\vskip 5mm
\section{Proof of Theorem \ref{thm}}
Let $a_E(p) := p+1- |E_p({\mathbb F}_p)|$; then
$$
e_p = \begin{cases}
(p+1-a_E(p))/d_p & \text{if $\,p\nmid N_E$},
\\\noalign{\vskip 1mm}
0 & \text{otherwise}.
\end{cases}
$$
By using Hasse's inequality
\begin{equation}\label{Hasse}
|a_E(p)|<2\sqrt{p}
\end{equation}
for all primes $p\nmid N_E$, it is easy to see that
\begin{equation}\label{Eq1}
\sum_{p\leqslant x} e_p
= \sum_{p\leqslant x, \, p\nmid N_E} \frac{p}{d_p} + O\bigg(\frac{x^{3/2}}{\log x}\bigg).
\end{equation}
In order to evaluate the last sum,
we first notice that the Hasse inequality \eqref{Hasse} implies $d_p\leqslant 2\sqrt{p}$.
Thus we can use the formula
$$
\frac{1}{k}
= \sum_{dm\mid k} \frac{\mu(d)}{m}
$$
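(For instance, for $k=p$ prime, the pairs $(d,m)$ with $dm\mid p$ are $(1,1)$, $(1,p)$ and $(p,1)$, so the right-hand side equals $1+\tfrac1p-1=\tfrac1p$.)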
to write
\begin{equation}\label{Eq2}
\sum_{\substack{p\leqslant x\\ p\nmid N_E}} \frac{p}{d_p}
= \sum_{\substack{p\leqslant x\\ p\nmid N_E}} p \sum_{dm\mid d_p} \frac{\mu(d)}{m}
= \sum_{k\leqslant 2\sqrt{x}} \sum_{dm=k} \frac{\mu(d)}{m}
\sum_{\substack{p\leqslant x\\ p\nmid N_E, \, k\mid d_p}} p.
\end{equation}
Let $y\leqslant 2\sqrt{x}$ be a parameter to be chosen later and define
\begin{align*}
S_1
& := \sum_{k\leqslant y} \sum_{dm=k} \frac{\mu(d)}{m} \sum_{\substack{p\leqslant x\\ p\nmid N_E, \, k\mid d_p}} p,
\\
S_2
& := \sum_{y<k\leqslant 2\sqrt{x}} \sum_{dm=k} \frac{\mu(d)}{m} \sum_{\substack{p\leqslant x\\ p\nmid N_E, \, k\mid d_p}} p.\end{align*}
With the help of Lemma \ref{lem1}(a), a simple partial integration allows us to deduce
(under GRH)
\begin{equation}\label{sump}
\begin{aligned}
\sum_{\substack{p\leqslant x\\ p\nmid N_E, \, k\mid d_p}} p
& = \int_{2-}^x t \,{\rm d} \pi_E(t; k)
= x \pi_E(x; k)
- \int_{2}^x \pi_E(t; k) \,{\rm d} t
\\\noalign{\vskip -3mm}
& = \frac{x \hbox{{\rm Li}}(x)}{n_{L_k}}
- \frac{1}{n_{L_k}} \int_{2}^x \hbox{{\rm Li}}(t) \,{\rm d} t
+ O_E\big(x^{3/2}\log x\big)
\\\noalign{\vskip 1mm}
& = \frac{\hbox{{\rm Li}}(x^2)}{n_{L_k}}
+ O_E\big(x^{3/2}\log x\big).
\end{aligned}
\end{equation}
On the other hand, by Lemma \ref{lem2} we infer that
\begin{equation}\label{CE}
\sum_{k\leqslant y} \frac{1}{n_{L_k}} \sum_{dm=k} \frac{\mu(d)}{m}
= C_E + O(y^{-1}).
\end{equation}
Thus combining \eqref{sump} with \eqref{CE} and using the following trivial inequality
\begin{equation}\label{trivial}
\bigg|\sum_{dm=k} \frac{\mu(d)}{m}\bigg|
\leqslant \frac{\varphi(k)}{k}
\leqslant 1,
\end{equation}
we find
\begin{equation}\label{S1}
\begin{aligned}
S_1
& = \hbox{{\rm Li}}(x^2) \sum_{k\leqslant y} \frac{1}{n_{L_k}}\sum_{dm=k} \frac{\mu(d)}{m}
+ O_E\bigg(x^{3/2}\log x \sum_{k\leqslant y} \bigg|\sum_{dm=k} \frac{\mu(d)}{m}\bigg|\bigg)
\\
& = C_E \hbox{{\rm Li}}(x^2)
+ O_E\bigg(\frac{x^2}{y\log x} + x^{3/2}y\log x\bigg).
\end{aligned}
\end{equation}
Next we treat $S_2$.
By \cite[Lemma 3.1 and Proposition 3.2(a)]{FK2012},
we see that $k\mid d_p$ implies that $k^2\mid (p+1-a_E(p))$ and also $k\mid (p-1)$,
hence $k\mid (a_E(p)-2)$.
With the aid of this and the Brun-Titchmarsh inequality, we can deduce that
\begin{align*}
S_2
& \ll x \sum_{y<k\leqslant 2\sqrt{x}}
\bigg(
\sum_{\substack{|a|\leqslant 2\sqrt{x}, a\not=2\\ a\equiv 2 ({\rm mod} k)}}
\sum_{\substack{p\leqslant x, a_E(p)=a\\ k^2\mid p+1-a}} 1
+
\sum_{\substack{p\leqslant x, a_E(p)=2\\ k^2\mid p-1}} 1
\bigg)
\\
& \ll x \sum_{y<k\leqslant 2\sqrt{x}}
\bigg(
\frac{\sqrt{x}}{k}\cdot\frac{x}{k\varphi(k)\log(8x/k^2)}
+
\frac{x}{k^2}
\bigg).
\end{align*}
By virtue of the elementary estimate
$$
\sum_{n\leqslant t} \frac{1}{\varphi(n)}
= D\log t + O(1)
\qquad(t\geqslant 1)
$$
with some positive constant $D$,
a simple integration by parts leads to
\begin{equation}\label{S2}
S_2
\ll \frac{x^{5/2}}{y^2 \log(8x/y^2)} + \frac{x^{2}}{y}\cdot
\end{equation}
Inserting \eqref{S1} and \eqref{S2} into \eqref{Eq2}, we find
\begin{equation}\label{Eq3}
\begin{aligned}
\sum_{p\leqslant x, \, p\nmid N_E} \frac{p}{d_p}
= C_E \hbox{{\rm Li}}(x^2)
+ O_E\bigg(x^{3/2}y\log x + \frac{x^{5/2}}{y^2 \log(8x/y^2)} + \frac{x^{2}}{y}\bigg),
\end{aligned}
\end{equation}
where we have used the fact that the term $x^2y^{-1}(\log x)^{-1}$
can be absorbed by $x^{5/2}y^{-2}(\log(8x/y^2))^{-1}$
since $y\leqslant 2\sqrt{x}$.
Now the asymptotic formula \eqref{thm(a)} follows from \eqref{Eq1} and \eqref{Eq3}
with the choice of $y=x^{1/3}(\log x)^{-2/3}$.
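Indeed, equating the first two error terms in \eqref{Eq3} leads to $y^3=x(\log x)^{-2}$, i.e. the above choice of $y$, for which
$$
x^{3/2}y\log x
\asymp \frac{x^{5/2}}{y^{2}\log(8x/y^{2})}
\asymp x^{11/6}(\log x)^{1/3}\:,
$$
while the third term $x^{2}/y=x^{5/3}(\log x)^{2/3}$ is of smaller order.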
\vskip 1mm
The proof of \eqref{thm(b)} is very similar to that of \eqref{thm(a)};
we shall only point out the important differences.
Similar to \eqref{sump}, we can apply Lemma \ref{lem1}(b) to prove (unconditionally)
$$
\sum_{\substack{p\leqslant x\\ p\nmid N_E, \, k\mid d_p}} p
= \frac{\hbox{{\rm Li}}(x^2)}{n_{L_k}}
+ O_E\big(x^{2}\exp\{-B(\log x)^{5/14}\}\big)
$$
for $k\leqslant (C^{-1}N_E^{-2}\log x)^{1/14}$.
As before, from this and \eqref{CE}--\eqref{trivial} we can deduce that
\begin{equation}\label{S1(b)}
S_1
= C_E \hbox{{\rm Li}}(x^2)
+ O_E\big(x^2y^{-1}(\log x)^{-1} + x^{2}y{\rm e}^{-B(\log x)^{5/14}}\big)
\end{equation}
for $y\leqslant (C^{-1}N_E^{-2}\log x)^{1/14}$.
The treatment of $S_2$ is different.
First we divide the sum over $k$ in $S_2$ into two parts accroding to
$y<k\leqslant x^{1/4}(\log x)^{3/4}$ or $x^{1/4}(\log x)^{3/4}<k\leqslant 2\sqrt{x}$.
When $E$ has CM, we have (see \cite[page 692]{Duke2003})
$$
\sum_{\substack{p\leqslant x\\ p\nmid N_E, \, k\mid d_p}} 1
\ll \frac{x}{\varphi(k)^2 \log x}
$$
for $k\leqslant x^{1/4}(\log x)^{3/4}$.
Thus the contribution from $y<k\leqslant x^{1/4}(\log x)^{3/4}$ to $S_2$ is
\begin{align*}
& \ll \frac{x^2}{\log x} \sum_{y<k\leqslant x^{1/4}(\log x)^{3/4}} \frac{1}{\varphi(k)^2}
\ll \frac{x^2}{y\log x}\cdot
\end{align*}
Clearly the inequality \eqref{S2} (taking $y=x^{1/4}(\log x)^{3/4}$) implies that the contribution from
$x^{1/4}(\log x)^{3/4}<k\leqslant 2\sqrt{x}$ to $S_2$ is
\begin{align*}
\ll \sum_{x^{1/4}(\log x)^{3/4}<k\leqslant 2\sqrt{x}}
\sum_{\substack{p\leqslant x\\ p\nmid N_E, \, k\mid d_p}} p
& \ll \frac{x^2}{(\log x)^{5/2}}\cdot
\end{align*}
By combining these two estimates, we obtain
\begin{equation}\label{S2(b)}
S_2\ll \frac{x^2}{y\log x} + \frac{x^2}{(\log x)^{5/2}}\cdot
\end{equation}
Inserting \eqref{S1(b)} and \eqref{S2(b)} into \eqref{Eq2}, we find
\begin{equation}\label{Eq3(b)}
\begin{aligned}
\sum_{p\leqslant x, \, p\nmid N_E} \frac{p}{d_p}
= C_E \hbox{{\rm Li}}(x^2)
+ O_E\bigg(\frac{x^2}{y\log x} + \frac{x^2}{(\log x)^{5/2}} + x^{2}y{\rm e}^{-B(\log x)^{5/14}}\bigg)
\end{aligned}
\end{equation}
for $y\leqslant (C^{-1}N_E^{-2}\log x)^{1/14}$.
Now the asymptotic formula \eqref{thm(b)} follows from \eqref{Eq1} and \eqref{Eq3(b)}
with the choice of $y = (C^{-1}N_E^{-2}\log x)^{1/14}$.
\vskip 10mm
\section{Introduction}
\label{sec:intro}
The $\alpha+d$ radiative capture
\begin{equation}
\alpha+d \rightarrow ^6{\rm Li} +\gamma
\label{eq:alphad}
\end{equation}
has recently received quite some interest, triggered by the so-called
$^6$Li~ problem. In the theory of Big Bang Nucleosynthesis (BBN), this reaction is important, even though it proceeds through a weak electric quadrupole transition, since it represents the main $^6$Li production process. In 2006 Asplund {\it et al.} performed high
resolution observations of Li absorption lines in old halo stars~\cite{asp06}.
The $^6$Li/$^7$Li ratio was found to be about $5\times 10^{-2}$,
more than two orders of magnitude larger than the BBN prediction. Since the analysis is performed on old stars, the present
$^6$Li~ abundance should be a good estimate of the abundance at
star formation, i.e.\ essentially that left after BBN. This large discrepancy
is the so-called second Lithium problem. However, recent analyses with
three-dimensional modelling of stellar atmospheres, which do not assume local
thermodynamical equilibrium and include surface
convection effects, show that convection can explain
the observed line asymmetry. The $^6\rm{Li}$ problem, therefore,
would be weakened~\cite{cayrel,perez,steffen,lind}.
We recall that the BBN relevant energy window is located
between 50 and 400 keV, and experimental studies of Eq.~(\ref{eq:alphad})
at these energies are very difficult, due to the exponential drop of the reaction
cross section as a consequence of the Coulomb barrier. Furthermore, this reaction is
affected by the
isotopic suppression of the electric dipole operator, as it will be discussed
in Sec.~\ref{subsec:transop}. The reaction~(\ref{eq:alphad}) was first studied
experimentally in the early 1980s~\cite{rob81}
and then through the 1990s~\cite{kie91,moh94,cec96,iga00}.
However the data in the BBN energy range were affected by large
uncertainties. The latest measurement is that performed
by the LUNA Collaboration~\cite{and14,tre17}.
The theoretical study of this reaction is also very difficult, since,
in principle, we
should solve a six-body problem, i.e. we should consider the six nucleons
contained in the $\alpha+d$ and $^6$Li~ particles, and their interaction
with the photon. Such an approach
is known as the {\it ab-initio} method, and it has been used
only by Nollett {\it et al.} in Ref.~\cite{nol01}.
However, the numerical techniques used in Ref.~\cite{nol01} to solve the
six-body problem, i.e.\ the
variational Monte Carlo method,
provide solutions for the initial and final state wave functions
with uncertainties at
the 10-20\% level. Since {\it ab-initio} methods can still
hardly be implemented for $A>4$ radiative captures, the study of the reaction has
been done using a simplified model, where $^6$Li~ is seen as
an $\alpha+d$ system and the problem is reduced to a two-body
problem. Then a crucial input for the calculation is represented
by the potential model, which describes the $\alpha+d$ interaction.
Five different potential models have been considered in this work,
four of them taken from
Refs.~\cite{ham10,tur15,muk11,dub98}, and a last one
constructed here starting
from the model of Ref.~\cite{dub98}, and then modifying it in order to
reproduce the asymptotic
normalization coefficient (ANC), i.e.\ the ratio between the $\alpha+d$
relative radial wave function in $^6$Li~ and the Whittaker function
for large distances. It describes the bound-state wave function in
the asymptotic region. To be noticed that only the potential
of Ref.~\cite{dub98} and this last model
have a tensor component, necessary to describe the experimental values
for the $^6$Li~ magnetic dipole and electric quadrupole moments.
Our calculations have been performed using two methods to solve the two-body
Schr\"odinger equation, both for the bound and the scattering states,
in order to verify that our results are
not affected by significant numerical uncertainties.
The paper is organized as follows: in Sec.~\ref{sec:s-factor} we introduce
all the main ingredients of the present calculation for the astrophysical
$S$-factor and we present in Sec.~\ref{subsec:res}
our results. In Sec.~\ref{sec:li6PA} we discuss the implications
of the present calculated $S$-factor for the BBN prediction of
$^6$Li~ abundance. We give our final remarks in Sec.~\ref{sec:concl}.
\section{The $\alpha + d $ \textit{S}-factor}
\label{sec:s-factor}
The $\alpha+d$ astrophysical $S$-factor $S(E)$, $E$ being the initial
center-of-mass energy, is defined as
\begin{equation}
S(E) = E\sigma(E)\exp(2\pi\eta)\:,
\label{eq:sfactor}
\end{equation}
where $\sigma(E)$ is the capture cross section, and
$\eta=2\alpha/v_{\rm rel}$ is the Sommerfeld parameter, $\alpha$ being the
fine structure constant and $v_{\rm rel}$ the $\alpha+d$ relative velocity.
With this definition, the $S$-factor has a smooth dependence on $E$
and can be easily extrapolated at low energies of
astrophysical interest.
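As a concrete illustration of Eq.~(\ref{eq:sfactor}), the following Python sketch (ours; the units and the non-relativistic relation $v_{\rm rel}/c=\sqrt{2E/\mu c^2}$ are our assumptions) converts a capture cross section into an $S$-factor:
\begin{verbatim}
# Sketch (ours): S(E) = E sigma(E) exp(2 pi eta) for the alpha+d system.
import math

ALPHA = 7.2973525693e-3   # fine-structure constant
MU_C2 = 1251.96518        # alpha+d reduced mass in MeV (value for V_T, V_D, V_G)

def sommerfeld_eta(E):
    # eta = Z_alpha Z_d alpha c / v_rel, with v_rel/c = sqrt(2E/(mu c^2))
    return 2.0 * ALPHA / math.sqrt(2.0 * E / MU_C2)

def s_factor(E, sigma):
    # E in MeV, sigma in barn -> S in MeV b (assumed units)
    return E * sigma * math.exp(2.0 * math.pi * sommerfeld_eta(E))
\end{verbatim}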
The reaction cross section $\sigma(E)$ is given by
\begin{equation}
\sigma(E)=\int {\mathrm{d}}\Omega_{\hat{\bf q}}
\frac{{\mathrm{d}}\sigma}{{\mathrm{d}}\Omega_{\hat{\bf q}}}\:,
\label{eq:sigma}
\end{equation}
where the differential cross section
${\mathrm{d}}\sigma/{\mathrm{d}}\Omega_{\hat{\bf q}}$ can be written as
\begin{equation}
\frac{\mathrm{d}\sigma}{\mathrm{d} \Omega_{\hat{\bf{q}}}} =
\frac{e^2}{24 \pi^2 v_{\rm rel}}\frac{q}{1+q/m_6}
\sum_{M_i \lambda M}\left| \hat{\epsilon}^{\dagger \lambda}_\mathbf{q}\cdot
\left\langle \Psi_{^6\mathrm{Li}}(M) | \mathbf{J}^\dagger(\mathbf{q})|
\Psi_{\alpha d} (M_i)
\right\rangle \right|^2 \:.
\label{eq:crosssection}
\end{equation}
Here $m_6$ is the $^6$Li~ mass, ${\bf q}$ is the photon momentum and
$\hat{\epsilon}^{\dagger \lambda}_\mathbf{q}$ its polarization vector,
$\mathbf{J}^\dagger(\mathbf{q})$ is the Fourier transform
of the nuclear electromagnetic current,
and $\Psi_{\alpha d}(M_i)$ and $\Psi_{^6\mathrm{Li}}(M)$ are the initial $\alpha+d$
and final $^6$Li~ wave functions, with spin projection $M_i$ and $M$.
In Eq.~(\ref{eq:crosssection}),
we have averaged over the initial spin projections and summed
over the final ones.
In order to calculate the $\alpha+d$ cross section, it is necessary to
evaluate the $^6$Li~ and $\alpha+d$ wave functions. This point is
described in the next Subsection.
\subsection{The $^6$Li and $\alpha+d$ systems}
\label{subsec:li6-ad}
A crucial input for our calculation is represented by the $^6$Li~
and $\alpha+d$ wave functions. We consider first the bound state.
The nucleus of $^6$Li~ has $J^\pi=1^+$, a binding energy $B$
with respect to the $\alpha+d$ threshold of 1.475 MeV~\cite{dub98}, a
non-null electric quadrupole moment
$Q_6=-0.0644(7)$ fm$^2$ and a
magnetic dipole moment $\mu_6=0.822$ $\mu_N$~\cite{dub98}.
As it was shown in Ref.~\cite{muk11}, the astrophysical $S$-factor
at low energies is highly sensitive not only to the $^6$Li~ binding energy
$B$ and the $\alpha+d$~ scattering phase shifts, but also to the $^6$Li~
$S$-state asymptotic normalization coefficient (ANC).
This quantity is crucial due to the peripheral nature
of the $\alpha+d$ reaction at low energies, where only the tail of
the $^6$Li~ wave function gives most of the contribution in the
matrix element of Eq.~(\ref{eq:crosssection}).
The $S$-state ANC is defined as
\begin{equation}
C_{\ell=0} =
\lim_{r\rightarrow+\infty} \frac{\varphi(r)}{W_{-\eta,\ell+1/2}(r)}\bigg|_{\ell=0}
\:,
\label{eq:anc0}
\end{equation}
where $\varphi(r)$ is the $S$-state $^6$Li~ reduced wave function,
$W_{-\eta,\ell+1/2}(r)$ is the Whittaker function, $\eta$ is the
Sommerfeld parameter and $\ell=0$ for the $S$-state ANC.
Its experimental value for $^6$Li~ is ANC$_{\rm exp}=(2.30\pm 0.12)$
fm$^{1/2}$~\cite{tur15}.
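Numerically, the limit in Eq.~(\ref{eq:anc0}) can be estimated by evaluating the ratio at a few large radii and checking for a plateau. A minimal sketch (ours; we assume the standard convention $W_{-\eta_B,\ell+1/2}(2\kappa r)$, with $\kappa=\sqrt{2\mu B}/\hbar c$ in fm$^{-1}$ and bound-state Coulomb parameter $\eta_B=Z_\alpha Z_d\,\alpha\,\mu c^2/(\hbar c\,\kappa)$) could read:
\begin{verbatim}
# Sketch (ours): estimate the S-state ANC from a tabulated reduced
# wave function phi(r); conventions are assumptions, see text.
from mpmath import whitw, sqrt

HBARC, ALPHA = 197.327, 7.2973526e-3     # MeV fm, -
MU_C2, B = 1251.965, 1.475               # MeV, MeV

def anc_estimate(r, phi_r):
    kappa = sqrt(2.0 * MU_C2 * B) / HBARC            # fm^-1
    eta_b = 2.0 * ALPHA * MU_C2 / (HBARC * kappa)    # Coulomb parameter
    return phi_r / whitw(-eta_b, 0.5, 2.0 * kappa * r)

# evaluate at, e.g., r = 10, 15, 20 fm and check that the values plateau
\end{verbatim}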
In the present study we consider the $^6$Li~ nucleus as a compound system, made
of an $\alpha$ particle and a deuteron. In fact, as shown in
Ref.~\cite{dub94}, the
$\alpha+d$ clusterization percentage in $^6$Li~ can be as large as about 60-80\%.
Therefore, we solve in this work a two-body problem, including both
$S-$ and $D$-states in the $\alpha+d$ bound system. The first observable that
we will try to reproduce is the binding energy, but we will consider
also the above mentioned observables of $^6$Li~.
At this point, an important input for the calculation is represented by the
$\alpha+d$ potential. The different models considered in this work will
be discussed below.
\subsubsection{The $\alpha+d$ Potentials}
\label{subsubsec:potentials}
For our calculation we consider five different potential models.
The use of so many models allows us to get a hint on the theoretical
uncertainty arising from the description of the $^6$Li~ nucleus and
the $\alpha+d$ scattering system. Four of these potentials are taken
from Refs.~\cite{ham10,muk11,tur15,dub98}, while the last one has
been constructed in the present work as described below. The
physical constants present in each potential as listed on the original
references are summarized in Table~\ref{tab:dataauthor}.
\begin{table}[t]
\centering
\begin{tabular}{|c|c|c|c|}
\hline
~ ~ ~~~~~~~~~& ~~~~~~units~~~~~~ & $V_H$ and $V_M$ & $V_T$, $V_D$ and $V_{G}$\\
\hline \hline
$A_d$ &-& 2.01411 & 2 \\
$A_\alpha$ &-& 4.00260 & 4 \\
$m_u $ & MeV & 931.494043 & \underline{938.973881}\\
$\mu$ & MeV & \underline{1248.09137} & \underline{1251.96518} \\
$\hbar^2/2\mu$ & MeV fm$^2$ & \underline{15.5989911176} & 15.5507250000 \\
$\alpha$ &-& 7.297352568$\times$10$^{-3}$ & \underline{7.297405999$\times$10$^{-3}$} \\
$\alpha \hbar c $ &MeV fm& \underline{1.4399644567} & 1.4399750000\\
$B$ & MeV & 1.474 & 1.475 ($V_T$) \\
& & & 1.4735 ($V_D$ and $V_G$)\\
\hline
\end{tabular}
\caption{Set of the constants present in the five $\alpha+d$
potential models,
labelled as $V_H$, $V_T$, $V_M$, $V_D$,
taken from Refs.~\cite{ham10,tur15,muk11,dub98}, respectively,
and $V_{G}$,
constructed in the present work.
$A_\alpha$ ($A_d$) is the mass number of the $\alpha$ ($d$) particle,
$m_u$ is the mass unit, equal to the atomic mass unit for $V_H$ and $V_M$,
and to the average nucleon mass for $V_T$, $V_D$ and $V_{G}$,
$\mu$ is the $\alpha+d$ reduced mass, $\alpha$ is the fine-structure
constant and $B$ is the $^6$Li~ binding energy with respect to the
$\alpha+d$ threshold. The underlined quantities are deduced from other
data given by the authors
in the original
references~\cite{ham10,tur15,muk11,dub98}.\vspace{0.2 cm}}
\label{tab:dataauthor}
\end{table}
The first potential used in our study has been taken from Ref.~\cite{ham10}
and has the form
\begin{equation}
V_H(r) = -V_C^{\ell} \left[1+\exp\left(\frac{r-r_0}{a}\right)\right]^{-1}
+ V_{SO} \frac{\lambda^2 \mathbf{L}\cdot \mathbf{S}}{r}
\frac{\mathrm{d}}{\mathrm{d}r}
\left[1+\exp\left(\frac{r-r_0}{a}\right)\right]^{-1} + V_{Coul}^{(m)}(r)\:.
\label{eq:potHam}
\end{equation}
It contains a spin-independent Woods-Saxon component, a spin-orbit interaction
term, and a modified Coulomb potential, which is written as
\begin{equation}
V_{Coul}^{(m)}(r)=Z_\alpha Z_d \:\alpha \:\twopartdef
{ \left[3-\left(r/r_0\right)^2 \right]/\left(2 r_0\right) } {r \le r_0} {1/r}
{r > r_0}\:.
\label{eq:vcmod}
\end{equation}
The values for all the parameters present in Eqs.~(\ref{eq:potHam})
and~(\ref{eq:vcmod}), as well as those of the following
potentials, are listed in Table~\ref{tab:potentialConstants},
apart from $r_0$, which is $r_0=1.25\,A^{1/3}$ fm, with $A=6$.
Note that this potential does not reproduce the experimental
value of the ANC, as pointed out in Ref.~\cite{muk11}, and as we
have verified ourselves by calculating the $^6$Li~ properties (see below).
The second potential is taken from Ref.~\cite{tur15},
and can be written as
\begin{equation}
V_T(r) = -V_0^{\ell} \exp\left(-\frac{r^2}{a_\ell^2}\right) + V_{Coul}(r)\:.
\label{eq:potTur}
\end{equation}
It is therefore the sum of a Gaussian function and a Coulomb point-like
interaction $V_{Coul}(r)=Z_\alpha Z_d \:\alpha/r$. It reproduces
the experimental ANC for the $^6$Li (see below).
The third potential is obtained by adding to the $V_H$ potential
of Ref.~\cite{ham10}, a new term $V_N(r)$, such that the new potential
\begin{equation}
V_M(r)=V_H(r)+V_N(r)
\label{eq:potMuk}
\end{equation}
reproduces the experimental ANC~\cite{muk11}.
The procedure to obtain $V_N(r)$ is discussed at length in Ref.~\cite{muk11}.
Here we have generalized it to the coupled-channel case,
and it will be discussed below.
The potentials $V_H$, $V_T$ and $V_M$ considered so far are
central potentials, which include at
most a spin-orbit term. Therefore, these potentials are unable to
give rise to the $^3D_1$ component in the $^6$Li~ wave function. The
non-zero $^6$Li~ quadrupole moment has induced us to consider also potentials
which include a tensor term. In this study, we have used the potential
of Ref.~\cite{dub98}, which can be written as
\begin{equation}
V_D(r) = -V_0^{\ell J} \exp\left(-\frac{r^2}{a^2}\right) - V_1^{\ell}
\exp\left(-\frac{r^2}{b^2}\right)\left[ 6\frac{(\mathbf{S}\cdot
\mathbf{r})^2}{r^2} - 2 \mathbf{S^2}\right]+V_{Coul}(r)\:,
\label{eq:potDub}
\end{equation}
where $\mathbf S$ is the spin operator acting on $^6$Li.
The coefficients $V_0^{01}$ and $V_0^{21}$
have been taken from Ref.~\cite{dub98}. However, in Ref.~\cite{dub98}
this potential was used only for the bound-state problem.
Therefore, we have modified
the potential in order to reproduce also the scattering phase-shifts up to
$\ell=2$. In order to do so, the depth $V_0^{\ell J}$ has been
fitted to the experimental scattering phase-shifts
for every initial channel, minimizing the $\chi^2$ of the calculated
phase shifts with respect to the available experimental data taken from
Refs.~\cite{jen83,mci67,gru75,bru82,kel70}. The minimization over
$V_0^{\ell J}$ has been carried out with both the bisection and
Newton's methods, which gave the same values of $V_0^{\ell J}$. These
have been listed in Table~\ref{tab:potentialConstants}.
As in the case of $V_H$, the $V_D(r)$ potential also
does not reproduce the $^6$Li~ ANC. Therefore,
we have constructed a new model
generalizing the procedure of Ref.~\cite{muk11} to the
coupled-channel case.
We start from a generic Hamiltonian operator $H_0$, for which we know the
bound state radial eigenfunction $\vec \varphi(r)$, the corresponding binding
energy $B$ and the ANC $C_0$ for the $S$-state. We have defined
$\vec \varphi(r)$ to be the vector containing the $S$- and $D$-state bound
wave functions, i.e.\ $\vec \varphi(r) = (\varphi_0,~ \varphi_2)$
and normalized it to unity, i.e.\
\begin{equation}
\int_0^\infty dx x^2 (\vec \varphi(x)\cdot \vec \varphi(x))=1
\label{eq:norm}\ .
\end{equation}
We want
to find a new Hamiltonian which has the same binding energy,
but yields the correct value for $C_0$, which we will call $C_0^N$.
As an \textit{Ansatz}, we assume that our new solution has the form
\begin{equation}
\vec \phi(r) = \vec \varphi(r)/\gamma(r)
\label{eq:verphi}\ ,
\end{equation}
with
\begin{equation}
\gamma (r ) \equiv \tau^{-1/2}\:\left[1+(\tau-1)\int_0^r\:\mathrm{d}x\:
x^2(\vec \varphi(x)\cdot \vec \varphi(x))\right]\:,
\label{eq:gamma}
\end{equation}
where $\tau$ is a parameter to be fitted to the experimental ANC value.
This solution is correctly normalized and the new ANC $C_0^N$ is given by
\begin{equation}
C_0^N=
\lim_{r\rightarrow+\infty} \frac{ \phi_0(r)}{W_{-\eta,1/2}(r)}=
\frac{1}{\sqrt{\tau}}\lim_{r\rightarrow +\infty}\frac{ \varphi_0(r)}{W_{-\eta,1/2}(r)}=\frac{C_0}{\sqrt{\tau}}\:.
\end{equation}
It is then enough to choose
$\tau=(C_0/C_0^{exp})^2$, so that $C_0^N=C_0^{exp}$.
For the $V_M$ potential, $\tau=1.378$~\cite{muk11},
while for this coupled-channel case $\tau=1.181$.
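In practice, given the unit-normalized reduced waves on a radial grid, the rescaling of Eqs.~(\ref{eq:verphi}) and~(\ref{eq:gamma}) is straightforward to implement. A minimal sketch (ours) for the coupled-channel case:
\begin{verbatim}
# Sketch (ours): ANC rescaling -- given normalized reduced waves phi0,
# phi2 on a grid r and the parameter tau, return phi/gamma.
import numpy as np

def rescale_anc(r, phi0, phi2, tau):
    dens = r**2 * (phi0**2 + phi2**2)
    # cumulative trapezoidal integral F(r) = int_0^r x^2 phi.phi dx
    F = np.concatenate(([0.0],
        np.cumsum(0.5 * (dens[1:] + dens[:-1]) * np.diff(r))))
    gamma = (1.0 + (tau - 1.0) * F) / np.sqrt(tau)
    return phi0 / gamma, phi2 / gamma
\end{verbatim}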
In order to obtain the new wave function $\vec \phi(r)$, we define
a new Hamiltonian operator as
\begin{equation}
H = H_0 + V_N\:,
\label{eq:hnew}
\end{equation}
and we impose
\begin{equation}\label{appeq:h}
H\vec \phi(r) = -B\vec \phi(r)\:,
\end{equation}
knowing that
\begin{equation}\label{appeq:h0}
H_0\vec \varphi(r) = - B \vec \varphi(r)\:.
\end{equation}
Subtracting Eq.~(\ref{appeq:h0}) from Eq.~(\ref{appeq:h}), we obtain
\begin{equation}
\frac{\hbar^2}{2\:\mu}\left\{\left[-2\left(\frac{\gamma'(r)}
{\gamma(r)}\right)^2
+\frac{\gamma''(r)}{\gamma(r)}\right]\vec\varphi(r)+2
\frac{\gamma'(r)}{\gamma(r)}\vec\varphi\:'(r)\right\}+
V_N(r) \vec\varphi(r)=0\:,
\end{equation}
which can be re-written as
\begin{align}
\nonumber V_N(r)\vec\varphi(r) &= -\frac{\hbar^2}{2\:\mu}\left\{\left[
-2\left(\frac{\gamma'(r)}{\gamma(r)}\right)^2
+\frac{\gamma''(r)}{\gamma(r)}\right]\vec\varphi(r)+2
\frac{\gamma'(r)}{\gamma(r)}\vec\varphi\:'(r)\right\}\\
\nonumber&= -\frac{\hbar^2}{2\:\mu}\left\{2\left[-\left(
\frac{\gamma'(r)}{\gamma(r)}\right)^2
+\frac{\gamma''(r)}{\gamma(r)}\right]\vec\varphi(r)+2
\frac{\gamma'(r)}{\gamma(r)}\vec\varphi\:'(r)-
\frac{\gamma''(r)}{\gamma(r)}\vec\varphi(r)\right\}\\
&= -\frac{\hbar^2}{2\:\mu}\left\{2\left[
\frac{\mathrm{d}^2}{\mathrm{d}r^2}\log\gamma(r)\right]\vec\varphi(r)+2
\frac{\gamma'(r)}{\gamma(r)}\vec\varphi\:'(r)-
\frac{\gamma''(r)}{\gamma(r)}\vec\varphi(r)\right\}\:.
\end{align}
Writing explicitly $\vec \varphi(r)$ and $\gamma(r)$, and
assuming for simplicity $V_N(r)$ to be diagonal, we get
\begin{align}
[V_N(r)]_{11}& = -2\frac{\hbar^2}{2\:\mu}\bigg\{
\frac{\mathrm{d}^2}{\mathrm{d}r^2}\ln\gamma(r) + \frac{\tau-1}{\gamma(r)}
\varphi_2^2(r)\frac{\mathrm{d}}{\mathrm{d}r}\ln
\frac{\varphi_0(r)}{\varphi_2(r)}\bigg\}\:,
\label{newp1}
\\
[V_N(r)]_{22}& = -2\frac{\hbar^2}{2\:\mu}\bigg\{
\frac{\mathrm{d}^2}{\mathrm{d}r^2}\ln\gamma(r) - \frac{\tau-1}{\gamma(r)}
\varphi_0^2(r)\frac{\mathrm{d}}{\mathrm{d}r}\ln
\frac{\varphi_0(r)}{\varphi_2(r)}\bigg\}\:.
\label{newp2}
\end{align}
Note that if we consider only central potentials, then
the potential $V_N(r)$ acts only on the $^6$Li~ $^3$S$_1$ state, and reduces to
\begin{equation}
V_N(r) = -2 \frac{\hbar^2}{2\mu} \frac{\mathrm{d}^2}{\mathrm{d}r^2}
\ln\left[1+(\tau-1)\int_0^r~\varphi^2(x) \mathrm{d}x\right]\:,
\label{eq:potMukN}
\end{equation}
as obtained in Ref.~\cite{muk11}. This is the term added in
Eq.~(\ref{eq:potMuk}).
As we have seen, this new term should give rise to no changes in the
binding energy, nor
in the scattering phase-shifts, with respect to those
evaluated with $H_0$; this has been verified
with a direct calculation.
Finally, this last potential is defined as
\begin{equation}
V_G(r)=V_D(r)+V_N(r)\ ,
\label{eq:potGr}
\end{equation}
where $V_N(r)$ is given in Eqs.~(\ref{newp1}) and~(\ref{newp2}).
\begin{table}[t!]
\centering
\begin{tabular}{|c|c|c|c|c|c|c|cccccccc|}
\hline
\multicolumn{1}{|l|}{{~Potential~}} & \multicolumn{14}{c|}{{Parameters}} \\
\hline
\multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}$V_H$ \& $V_M$\end{tabular}}
& $V_c$ & $V_c^{\ell\neq 0}$ & $V_{SO}$ & $R$ & $\lambda$
& $a$ & & &
& & & &
& \\
& 60.712 & 56.7 & 2.4 & 2.271 & 2 & 0.65 &
& & & & & & & \\
\hline
\multirow{2}{*}{$V_T$}
& $V_0$ & $a_0$ & $V_0^{10}$ & $a_1$ & $V_0^{11}$ & $a_1$ &
\multicolumn{1}{c|}{$V_0^{12}$} & \multicolumn{1}{c|}{$a_2$} &
\multicolumn{1}{c|}{$V_0^{21}$} & \multicolumn{1}{c|}{$a_2$} &
\multicolumn{1}{c|}{$V_0^{22}$} & \multicolumn{1}{c|}{$a_2$} &
\multicolumn{1}{c|}{$V_0^{23}$} & $a_2$ \\
& 92.44 & 0.25 & 68.0 & 0.22 & 79.0
& 0.22 & \multicolumn{1}{c|}{85.0} & \multicolumn{1}{c|}{0.22}
& \multicolumn{1}{c|}{63.0} & \multicolumn{1}{c|}{0.19}
& \multicolumn{1}{c|}{69.0} & \multicolumn{1}{c|}{0.19}
& \multicolumn{1}{c|}{80.88} & 0.19 \\
\hline
\multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}$V_D$ \& $V_G$\end{tabular}}
& $V_0$ & $a$ & $V_1$ & $b$ & $V_0^{10}$ & $V_0^{11}$
& \multicolumn{1}{c|}{$V_0^{12}$} & \multicolumn{1}{c|}{$V_0^{22}$}
& \multicolumn{1}{c|}{$V_0^{23}$} & & & & & \\
& 71.979 & 0.2 & 27.0 & 1.12 & 77.4 & 73.08
& \multicolumn{1}{c|}{78.42} & \multicolumn{1}{c|}{72.979}
& \multicolumn{1}{c|}{86.139} & & & & & \\ \hline
\end{tabular}
\caption{Parameters present in the five potential models used in this work.
The parameters $V_c$, $V_{SO}$, $V_0$, $V_1$ and $V_0^{\ell J}$
are given in MeV, all the others are in fm. We have used the notation
$\ell$ for the orbital angular momentum and $J$ for the total angular
momentum. $V_0$ and $V_c$ are used for the $\ell=0$ state.
The Gaussian width $a_\ell$
for the $V_T$ potential is written to the right
of each potential depth.\vspace{0.2 cm}}
\label{tab:potentialConstants}
\end{table}
\subsubsection{Numerical methods}
\label{subsubsec:nummeth}
In order to solve the Schr\"odinger equation,
both for the initial and final states,
two methods have been adopted, Numerov's method and the variational
method. In particular, we have used Numerov's method for the
bound-state problem, and the variational one for both the bound- and
the scattering-state problems.
The convergence of the two methods has been tested, verifying that both
give the same numerical results for the $S$-factor
with very good accuracy. The choice of the variational method for the
scattering states is motivated by the fact that it is simpler to
extend to the coupled-channel case.
In fact, Numerov's method, even for the bound-state problem, needs some
modification with respect to the single-channel case. Here we have
proceeded as follows.
The reduced radial waves solutions for the $^3S_1$ ($\varphi_0$) and
$^3D_1$ ($\varphi_2$) states of $^6$Li~ must satisfy the coupled
equations
\begin{eqnarray}
\varphi_0''(r)+ \frac{2\mu}{\hbar^2}\,[E-V_{00}(r)]\:\varphi_0(r) & =&
\frac{2\mu}{\hbar^2}\, V_{02}(r)\:\varphi_2(r)\:, \label{eq:num1}\\
\varphi_2''(r)+ \bigg\{\frac{2\mu}{\hbar^2}\,[E-V_{22}(r)]-
\frac{6}{r^2}\bigg\}\:\varphi_2(r) & =& \frac{2\mu}{\hbar^2}\,
V_{20}(r)\:\varphi_0(r)\:.
\label{eq:num2}
\end{eqnarray}
We solve this system of equations iteratively. First we consider
$\varphi_0(r)=0$. Eq.~(\ref{eq:num2}) then becomes
\begin{equation}
\varphi_2''(r)+ \bigg\{\frac{2\mu}{\hbar^2}\,[E-V_{22}(r)]-
\frac{6}{r^2}\bigg\}\:\varphi_2(r) = 0\:,
\label{eq:varphi2}
\end{equation}
which we solve with the standard Numerov's algorithm, obtaining
$E\equiv E_2$ and $\varphi_2(r)$.
Then we calculate the solution of Eq.~(\ref{eq:num1})
giving an initial value for the normalization ratio $a$, defined as
\begin{equation}
a=\lim_{r\rightarrow +\infty} \frac{\varphi_2(r)}{\varphi_0(r)}\ ,
\label{eq:def_asynnorm}
\end{equation}
and applying again the Numerov's algorithm to obtain
$\varphi_0(r)$.
With the evaluated $\varphi_0(r)$, we calculate again
$\varphi_2(r)$ with Eq.~(\ref{eq:num2}), and so on, until both $E$ and
$a$ have converged within the required accuracy.
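For reference, a single outward sweep of the algorithm for one channel can be sketched as follows (ours; here \texttt{k2} denotes the grid values of the coefficient multiplying the wave function in Eqs.~(\ref{eq:num1})-(\ref{eq:num2})):
\begin{verbatim}
# Sketch (ours): standard Numerov recursion for u'' + k2(r) u = 0,
# integrated outward on a uniform grid of step h.
import numpy as np

def numerov_outgoing(k2, h, u0=0.0, u1=1e-6):
    u = np.empty_like(k2)
    u[0], u[1] = u0, u1
    w = 1.0 + (h * h / 12.0) * k2
    for n in range(1, len(k2) - 1):
        u[n+1] = ((12.0 - 10.0 * w[n]) * u[n] - w[n-1] * u[n-1]) / w[n+1]
    return u
\end{verbatim}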
The method for the single-channel scattering problem is straightforward,
as the outgoing Numerov solution from $r=0$ to the final grid point
is matched to the function
\begin{equation}
\varphi(r)=\cos\delta_l F_l(\eta;kr)+\sin\delta_l G_l(\eta;kr)\ ,
\label{eq:asymp-scatt}
\end{equation}
which is normalized to unit flux. Here $F_l(\eta;kr)$
and $G_l(\eta;kr)$ are the regular and irregular Coulomb functions,
and $k$ the $\alpha+d$ relative momentum.
The scattering phase-shift $\delta_l$ is then easily obtained.
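The matching to Eq.~(\ref{eq:asymp-scatt}) can be performed, for instance, at two grid points outside the range of the nuclear potential. A sketch (ours; the Coulomb functions $F_\ell$ and $G_\ell$ are available, e.g., in the mpmath library):
\begin{verbatim}
# Sketch (ours): extract delta_l by matching the Numerov solution
# u(r) = cos(d) F_l + sin(d) G_l at two points r1 < r2.
from mpmath import coulombf, coulombg, atan2

def phase_shift(l, eta, k, r1, u1, r2, u2):
    F1, G1 = coulombf(l, eta, k*r1), coulombg(l, eta, k*r1)
    F2, G2 = coulombf(l, eta, k*r2), coulombg(l, eta, k*r2)
    return atan2(u2*F1 - u1*F2, u1*G2 - u2*G1)
\end{verbatim}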
The variational method has been used for both the bound and
the scattering states.
For the bound state, we expand the wave function as
\begin{equation}
\Psi(\mathbf{r})=\sum_{\alpha i}~c_{\alpha i}\:f_{\alpha i}(r)\:|\alpha\rangle\:,
\label{eq:varExp}
\end{equation}
where
$|\alpha\rangle \equiv \sum_{m\,\sigma}\: \langle \ell m S \sigma | J M \rangle
Y_{\ell m}(\hat{\mathbf{x}})\chi_{S\sigma}$, $f_{\alpha i}(r)$ are orthonormal
functions and $c_{\alpha i}$ are unknown coefficients.
We use the Rayleigh-Ritz variational principle to
reduce the problem to an eigenvalue-eigenvector problem, which can be solved
with standard techniques (see Ref.~\cite{Mar11} for more details).
Here we use a basis function defined as
\begin{equation}
f_{\alpha i}(r)=\sqrt{\frac{\gamma_\alpha^3\, i!}{(i+2)!}}~
L_i^{(2)}(\gamma_\alpha r)~{\rm e}^{-\gamma_\alpha\:r/2}\:,
\label{eq:laguerrre}
\end{equation}
with $\gamma_\alpha=4$ fm$^{-1}$ for each $\alpha$,
and $L_i^{(2)}(\gamma_\alpha r)$ are Laguerre polynomials.
Note that, so defined, these functions are orthonormal with respect to the measure $r^2\,{\rm d}r$.
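This orthonormality can also be checked numerically; a sketch (ours, relying on scipy's generalized Laguerre polynomials):
\begin{verbatim}
# Sketch (ours): the basis f_{alpha i}(r) and a numerical check of
# int_0^inf f_i f_j r^2 dr = delta_ij.
import numpy as np
from math import factorial, sqrt
from scipy.special import eval_genlaguerre

def f_basis(i, r, gamma=4.0):
    norm = sqrt(gamma**3 * factorial(i) / factorial(i + 2))
    return norm * eval_genlaguerre(i, 2, gamma * r) * np.exp(-gamma * r / 2.0)

r = np.linspace(1e-6, 40.0, 20001)
print(np.trapz(f_basis(0, r) * f_basis(1, r) * r**2, r))  # ~ 0
print(np.trapz(f_basis(3, r) * f_basis(3, r) * r**2, r))  # ~ 1
\end{verbatim}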
For the scattering problem the wave function is decomposed as
\begin{equation}
\Psi(\mathbf{r}) = \Psi_c(\mathbf{r})+\frac{F_\ell(\eta;kr)}{kr}
\:|\alpha\rangle +
\sum_{\beta}\:{^J}R_{\alpha\beta}\:\frac{\tilde{G}_{\ell_\beta}(\eta; kr)}{kr}|
\beta\rangle\:,
\label{eq:asymp-wf}
\end{equation}
where $^JR_{\alpha\beta}$ are unknown coefficients, $\Psi_c(\mathbf{r})$
has the same form as $\Psi(\mathbf{r})$ in Eq.~(\ref{eq:varExp}),
while ${F}_\ell(\eta;kr)$ and
$\tilde{G}_\ell(\eta;kr)=G_\ell(\eta;kr)(1-{\rm e}^{-r/r_0})^{2\ell +1}$
are regular and (regularized for $r\rightarrow 0$) irregular Coulomb functions,
with $r_0$ a non-linear parameter of the order of 4 fm.
The Kohn variational principle is used to obtain
the unknown coefficients ${^J}R_{\alpha\alpha'}$ and $c_{\alpha i}$
of Eq.~(\ref{eq:varExp}), with a standard procedure as
outlined in Ref.~\cite{Mar11}.
\begin{figure}[t!]
\centering
\includegraphics[width=12 cm]{ham_var.eps}
\caption{The modulus of the $^6$Li wave function, on a logarithmic scale,
obtained with the variational (black) and Numerov's (red dashed)
methods using the $V_H$ potential.}
\label{fig:boundWaveVar}
\end{figure}
In Fig.~\ref{fig:boundWaveVar} we show a comparison of the $^6$Li~
reduced radial wave functions obtained with the $V_H$ potential, using
either Numerov's method or the variational method. Similar results are found for the
other potentials. As can be seen by inspection of the figure,
the variational method is unable to reproduce the $^6$Li~ wave function
at large distances, of the order of 30-40 fm.
In this case the reduced wave function has been corrected in order to
recover the proper asymptotic behaviour; within Numerov's method,
the long-range wave function is instead constructed by hand.
The agreement between the two methods is much better for the scattering
problem, although there Numerov's method has been
used only for the single channels. In those cases, the
agreement between the two methods is at the level
of 0.1\%.
\subsubsection{The $^6$Li~ nucleus and the $\alpha+d$ scattering state}
\label{subsubsec:results}
The $^6$Li~ static properties, i.e.\ the binding energy with respect to
the $\alpha+d$ threshold, the $S$-state ANC, the magnetic dipole moment
$\mu_6$ and the electric quadrupole moment $Q_6$ are given in
Table~\ref{tab:li6res}.
By inspection of the table we can conclude that each potential
nicely reproduces the experimental binding energy, while only
$V_T$, $V_M$ and $V_G$ give good values for the ANC. Also, the $V_D$ and
$V_G$ potentials are the only ones which include the $D$-state contributions
in the $^6$Li~ wave function. Therefore, the values of
$\mu_6$ and $Q_6$ obtained with these potentials are closer to the
experimental values, while $\mu_6$ and $Q_6$ calculated with the
$V_H$, $V_T$, and $V_M$ potentials are simply those of
the deuteron.
Finally, we show in Fig.~\ref{fig:bndwaves} the $^6$Li~ reduced wave function
evaluated with each potential.
The differences between the various potentials are quite pronounced
for $r\leq 6$ fm. However, this is not too relevant for
our reaction, which is peripheral and therefore most sensitive to the tail
of the wave function and to the $S$-state ANC.
\begin{table}[t]
\begin{tabular}{|c|c|c|c|c|c|c|} \hline
$\qquad$&$V_H$~ & $V_T$~ & $V_M$~ & $V_D$~ & $V_G$~ & EXP.\\
\hline
$B$ & 1.474 & 1.475 & 1.474 & 1.4735 & 1.4735 & 1.474 \\
$C_0$ & 2.70 & 2.31 & 2.30 & 2.50 & 2.30 & 2.30\\
$\mu_6$ & 0.857 & 0.857 & 0.857 & 0.848 & 0.848 & 0.822\\
$Q_6$ & 0.286 & 0.286 & 0.286 & -0.066 &-0.051 & -0.082\\
\hline
\end{tabular}
\caption{The $^6$Li~ binding energy ($B$) in MeV, $S$-state ANC ($C_0$)
in fm$^{1/2}$, magnetic dipole moment $\mu_6$ in $\mu_N$ and electric quadrupole moment $Q_6$ in fm$^2$
are calculated with the five different
potential models $V_H$, $V_T$, $V_M$, $V_D$, and $V_G$. The available
experimental data are also shown.}
\label{tab:li6res}
\end{table}
\begin{figure}
\centering
\includegraphics[width=1\textwidth]{confronto.eps}
\caption{The $^6$Li~ reduced wave function evaluated with each potential model
considered in this work.}
\label{fig:bndwaves}
\end{figure}
For the initial $\alpha+d$ scattering state,
the scattering phase shifts obtained with each potential
are in good agreement with the experimental data, as can be seen
in Fig.~\ref{fig:scatteringresults} for the single channels
and in Fig.~\ref{fig:scatteringresultsCC} for the coupled channels.
In particular, the results obtained with the $V_H$ ($V_D$)
and $V_M$ ($V_G$) potentials coincide.
\begin{figure}[tp]
\centering
\includegraphics[height=0.75\textheight]{Every_single_channel.eps}
\caption{The phase shift $\delta_\ell ^J$ for every partial wave $^3\ell_J$,
where $\ell=\{0,~1,~2\}$. The data have been taken from
Refs.~\cite{jen83,mci67,gru75,bru82,kel70}.
The phase shifts are given in degrees as a function of the
center-of-mass relative energy in MeV.
The shape of the experimental points indicates the article
from which the data were taken:
we use circles~\cite{jen83}, triangles down~\cite{mci67},
diamonds~\cite{gru75}, squares~\cite{bru82} and triangles up~\cite{kel70}.
The calculated phase shifts are obtained with the $V_H$ and $V_M$
(black solid line), $V_T$ (blue dash-dotted line),
and $V_D$ and $V_{G}$ (red dashed line) potentials.}
\label{fig:scatteringresults}
\end{figure}
\begin{figure}[t!]
\includegraphics[width=1.\textwidth]{Coupled_final.eps}
\caption{Same as Fig.~\ref{fig:scatteringresults}, but for the coupled
channels in the $J^\pi=1^+$ state. The phase shift results evaluated with
the $V_D$ and the $V_{G}$ potentials for the coupled channels
$^3S_1$ and $^3D_1$ are displayed with the two red solid lines,
the results for the mixing angle $\varepsilon$ are displayed with the
blue dashed line.}
\label{fig:scatteringresultsCC}
\end{figure}
\subsection{The transition operator}
\label{subsec:transop}
To evaluate the reaction cross section, we need to write down the
nuclear electromagnetic current operator $\mathbf{J}^\dagger(\mathbf{q})$
of Eq.~(\ref{eq:crosssection}).
This can be written as
\begin{equation}
\mathbf{J}^\dagger(\mathbf{q})=\int\mathrm{d}\mathbf{x}~
\mathrm{e}^{i \mathbf{q} \mathbf{x}}~\mathbf{J}(\mathbf{x})
\label{eq:jq}
\end{equation}
with
\begin{equation}
\mathbf{J}(\mathbf{x}) = \sum_i ~q_i\,\frac{\mathbf{p}_i}{m_i}~
\delta^3(\mathbf{x}-\mathbf{x}_i)\:,
\label{eq:jx}
\end{equation}
where $\mathbf{p}_i$, $m_i$, $\mathbf{x}_i$ and $q_i$ are respectively
the momentum,
the mass, the position and the charge of the \textit{i}-th particle.
The matrix element appearing in Eq.~(\ref{eq:crosssection}),
$\hat{\mathbf{\epsilon}}^{\dagger \lambda}_{\mathbf{q}}\cdot \left<\Psi_{^6{\rm Li}}(M)|\mathbf{J}^\dagger(\mathbf{q})|\Psi_{\alpha d}(M_i)\right>$, can be rewritten
expressing $\Psi_{^6{\rm Li}}(M)$ and $\Psi_{\alpha d}(M_i)$ as
\begin{eqnarray}
\Psi_{^6{\rm Li}}(M)&=& \frac{\varphi_{0}(r)}{r}Y_{00}(\theta,\phi)\chi_{1 M}+
\frac{\varphi_{2}(r)}{r}\sum_{m \sigma}\left<2 m, 1 \sigma | 1 M\right>
Y_{2m}(\theta, \phi)\chi_{1 \sigma}
\label{eq:psi6}\\
\Psi_{\alpha d}(M_i)&=& \sum_{\ell_i J_i}\:i^{\ell_i}
\sqrt{4\pi(2\ell_i+1)}\left<\ell_i 0, 1 M_i | J_i M_i \right>\:\nonumber\\
&\times& \frac{\varphi_{\alpha+d}^{\ell_i J_i}(r)}{kr}\sum_{m' \sigma'}
\left<\ell_i m', 1 \sigma' | J_i M_i\right>
Y_{\ell_i m'}(\theta, \phi)\chi_{1 \sigma'}\:,
\label{eq:initialwave}
\end{eqnarray}
where $\varphi_{\ell_f}(r)$ and $\varphi_{\alpha+d}^{\ell_i J_i}(r)$
are the $^6$Li~ and
$\alpha+d$~ reduced radial functions discussed in Sec.~\ref{subsec:li6-ad}.
In the partial wave decomposition of Eq.~(\ref{eq:initialwave}),
we have retained all the contributions up to $\ell_i=2$.
By then performing a multipole expansion of the
$\mathbf{J}^\dagger(\mathbf{q})$ operator, we obtain
\begin{equation}
\mathbf{\hat{\epsilon}}_\mathbf{q}^{\dagger \lambda}\cdot\mathbf{J}^\dagger(\mathbf{q})=-\sqrt{2\pi}\:\sum_{\Lambda\ge 1}(-i)^\Lambda\sqrt{2\Lambda+1}\left[E_{\Lambda\lambda}(q)+\lambda M_{\Lambda\lambda}(q)\right]\:,
\end{equation}
where $\Lambda$ is the multipole index, while $E_{\Lambda\lambda}(q)$ and
$M_{\Lambda\lambda}(q)$ are the so-called electric and magnetic multipoles of
order $\Lambda$. They are defined as
\begin{eqnarray}
E_{\Lambda\lambda}(q) &= &\frac{1}{q}
\int\mathrm{d}\mathbf{x}\:[\nabla\times(j_\Lambda(qx)
\mathbf{Y}^\lambda_{\Lambda\Lambda 1}
(\mathbf{\hat{x}}))]\cdot\mathbf{J}(\mathbf{x})\:,\\
M_{\Lambda\lambda}(q) &= &\int\mathrm{d}\mathbf{x}\:j_\Lambda(qx)
\mathbf{Y}^\lambda_{\Lambda\Lambda 1}
(\mathbf{\hat{x}})\cdot\mathbf{J}(\mathbf{x})\:,
\end{eqnarray}
where $j_\Lambda(qx)$ is the spherical Bessel function of order $\Lambda$ and $\mathbf{Y}^\lambda_{\Lambda\Lambda 1}(\mathbf{\hat{x}})$ is the vector spherical harmonic of order $\Lambda$.
In this work we adopt the so-called long wavelength approximation (LWA),
since, for the energy range of interest, the wavelength of
the emitted photon is much larger than the $^6$Li~ dimension. This means that
we can expand the multipoles in powers of $qr$. Furthermore,
in the present calculation, we include only electric dipole and
quadrupole multipoles, since it has been shown in Ref.~\cite{nol01}
that the magnetic multipoles are expected to give small contributions
to the $S$-factor.
With this approximation, $E_{\Lambda\lambda}(q)$ can be written as
\begin{equation}
E_{\Lambda\lambda}(q) = Z_e^{(\Lambda)}\:\sqrt{\frac{\Lambda+1}{\Lambda}}
f_\Lambda(qr)Y_{\Lambda\lambda}(\mathbf{\hat{x}})\:,
\label{eq:electric}
\end{equation}
where~\cite{muk16}
\begin{eqnarray}
f_1(x) &=& 3\frac{[(x^2-2)\sin x+2 x\cos x]}{x^2}\:,\label{eq:f1x}\\
f_2(x) &=& 15\frac{[(5x^2-12)\sin x+(12-x^2) x\cos x]}{x^3}\:,
\label{eq:f2x}
\end{eqnarray}
and $Z_e^{(\Lambda)}$ is the so-called effective charge, and is given
by
\begin{equation}
Z_e^{(\Lambda)}\equiv~Z_d\left(\frac{m_\alpha}{m_\alpha+m_d}\right)^\Lambda +Z_\alpha\left(-\frac{m_d}{m_\alpha+m_d}\right)^\Lambda\:.
\label{eq:zeff}
\end{equation}
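For the $\alpha+d$ system ($Z_\alpha=2$, $Z_d=1$, $m_\alpha\simeq 2m_d$), Eq.~(\ref{eq:zeff}) gives explicitly
$$
Z_e^{(1)}=\frac{m_\alpha-2m_d}{m_\alpha+m_d}\simeq 0\:,\qquad
Z_e^{(2)}=\frac{m_\alpha^2+2m_d^2}{(m_\alpha+m_d)^2}\simeq\frac{2}{3}\:,
$$
so that the $E_1$ effective charge nearly vanishes: this is the isotopic suppression of the electric dipole operator anticipated in Sec.~\ref{sec:intro}.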
Note that when only the first order contribution in the LWA
is retained, $f_\Lambda(x)$ reduces to
\begin{equation}
f_\Lambda(x) = x^\Lambda\:.
\label{eq:flx}
\end{equation}
The use of Eqs.~(\ref{eq:f1x}) and~(\ref{eq:f2x}) instead of Eq.~(\ref{eq:flx})
leads to an increase in the $S$-factor of the order of 1 \%.
This has been shown in Ref.~\cite{muk16} and has been confirmed in the
present work.
In the formalism of the LWA the total cross section of Eq.~(\ref{eq:sigma})
can be written as
\begin{equation}
\sigma(E) = \sum_{\ell_i J_i \Lambda}~\sigma_{\ell_iJ_i}^{(\Lambda)}(E)\:,
\label{eq:sigmal}
\end{equation}
where $\sigma_{\ell_iJ_i}^{(\Lambda)}(E)$ is the cross section evaluated with
the electric $\Lambda$-multipole and the initial $\alpha+d$ state
with orbital (total) angular momentum $\ell_i$ ($J_i$). It can be written as
\begin{multline}
\sigma_{\ell_iJ_i}^{(\Lambda)}(E) \,=\,\frac{8\pi\:\alpha}{v_{\mathrm{rel}}\,k^2}
\frac{q}{1+q/m_6}\,\,\frac{Z_e^{(\Lambda)\,2}}{[(2\Lambda+1)!!]^2}
\frac{(\Lambda+1)(2\Lambda+1)}{\Lambda}
(2\ell_i+1)(2J_i+1)
\\ \times
\bigg[
\sum_{\ell_f}~(-)^{\ell_f}\sqrt{2\ell_f+1}
\begin{pmatrix}
\ell_f &\Lambda&\ell_i\\
0&0&0
\end{pmatrix}
\begin{Bmatrix}
J_i & \ell_i & 1\\
\ell_f & J_f & \Lambda
\end{Bmatrix}\,
\:
\int\:\mathrm{d}r
\:\varphi_{^6{\rm Li}}^{\ell_f}(r)\:f_{\Lambda}(qr)\:\varphi_{\alpha+d}^{\ell_i J_i}(r)
\bigg]^2\:.
\label{crosspartial}
\end{multline}
For simplicity we define the partial $S$-factor as
\begin{equation}
S^{(\Lambda)}_{\ell_iJ_i}(E) = E\:\sigma^{(\Lambda)}_{\ell_iJ_i}(E)\:
\exp{\left(2\pi\eta\right)}\:.
\label{eq:partialSfac}
\end{equation}
The results for these quantities evaluated with the $V_G$ potential are
shown in Fig.~\ref{fig:sfactorchan}. The ones for the other potentials have
the same shapes and properties. The only difference arises for $V_H$, $V_T$ and
$V_M$, for which the contribution to the $S$-factor from the $\ell_i=0$ initial state
is zero, the transition $^3S_1\rightarrow{^3}S_1$ being forbidden for every
multipole term.
Due to the nature of the LWA, the largest contribution to the total
cross section, and therefore to the astrophysical $S$-factor,
should be given by the $E_1$ transition, but, as we can see from
Fig.~\ref{fig:sfactorchan}, the $E_1$ transition dominates only
at energies of the order of a few keV. This is due to the so-called $E_1$
isotopic suppression. As we have seen the multipole expansion
at $\Lambda$-th order for the electric terms depends on the
square of the effective charge $Z_e^{(\Lambda)}$ and, for our reaction,
$[Z_e^{(1)}]^2 \simeq 1.6 \times 10^{-5}$ and $[Z_e^{(2)}]^2\simeq 0.44$.
Therefore the $E_1$ contribution to the $S$-factor is strongly suppressed,
except for very low energies,
where the other multipoles are reduced due to their energy dependence.
\begin{figure}
\centering
\includegraphics[width=1\textwidth]{grassi.eps}
\caption{The partial astrophysical $S$-factors $S^{(\Lambda)}_{\ell_i J_i}(E)$,
as defined in Eq.~(\ref{eq:partialSfac}). On the left (central) panel
the separate contribution for the dipole (quadrupole) transition are shown.
The shape and color of the lines indicate the initial angular momentum.
The red dotted and orange dash-dotted lines are used to indicate
the transitions with $\ell_i=0$ and $\ell_i=2$, respectively, for $J_i=1$.
For $J_i=0$, $J_i=2$, and $J_i=3$ solid black, green dashed and
blue dot-dashed-dashed lines are used, respectively. On the right panel the
total contribution for the dipole (quadrupole) is shown with a maroon
dashed (blue solid) line.}
\label{fig:sfactorchan}
\end{figure}
\subsection{The theoretical astrophysical $S$-factor}
\label{subsec:res}
The calculated astrophysical $S$-factor is compared in Fig.~\ref{fig:allS}
with the available experimental data from
Refs.~\cite{rob81,kie91,moh94,cec96,iga00,and14,tre17}.
By inspection of the figure we can conclude that
the tail of the $S$-factor at low energies
depends strongly on the ANC value.
In fact, the three potentials which reproduce the ANC give very close results.
The $V_H$ and $V_D$ potentials, giving a larger value for the ANC
than the other potentials, predict higher values for the $S$-factor.
Thanks to the relatively large number of considered potentials,
we can give a rough estimate of the theoretical uncertainty of our
predictions. Therefore, in Fig.~\ref{fig:allSerr} we show the same results of
Fig.~\ref{fig:allS} as two bands, one obtained using all the five
potentials and a much narrower one calculated with only the three potentials
which reproduce the correct ANC value.
As we can see from the figure, the theoretical uncertainty
for the $S$-factor is much smaller in this second case:
at center-of-mass energies $E\simeq 10$ keV, it is of the order
of 2\%, and it decreases to the 1\% level at the energies accessible to LUNA,
i.e.\ for $E\simeq 100$ keV.
On the other hand, if we consider all of the potentials, the previous
estimates grow to 25\% and 24\% at $E\simeq 10$ and 100 keV, respectively.
The available experimental data, though, are not
accurate
enough to discriminate among the results obtained
with these five potentials.
Therefore, in the following Section, where the primordial $^6$Li~
abundance is discussed, we consider conservatively the results for the
astrophysical $S$-factor obtained with all the five potentials.
\begin{figure}[t!]
\includegraphics[width=1.\textwidth]{Snew.eps}
\caption{The total astrophysical $S$-factor evaluated with the five
potential
models considered in this work is compared with the data
of Ref.~\cite{rob81} (blue triangles), Ref.~\cite{kie91}
(black circles),
Ref.~\cite{moh94} (green circles), Ref.~\cite{cec96}
(magenta X),
Ref.~\cite{iga00} (cyan diamonds) and Ref.~\cite{and14,tre17}
(red squares).
The data from Refs.~\cite{kie91,cec96} are upper limits
to the $S$-factor.
In the inset, we show the tail of the $S$-factor in the energy range
10-50
keV. The dotted (black), dashed (red), dot-dashed (green),
dot-dot-dashed (orange) and solid (blue) lines correspond
to the results
obtained with the $V_H$, $V_T$, $V_M$, $V_D$ and $V_G$
potentials, respectively.}
\label{fig:allS}
\end{figure}
\begin{figure}[t!]
\includegraphics[width=1.\textwidth]{stutti_plus_data2.eps}
\caption{Same as Fig.~\ref{fig:allS} but with the
theoretical results shown as a band. The (gray) dotted band
is obtained using all the
five potentials considered in this work, while the
narrower (cyan) full band is obtained
using only those three potentials ($V_T$, $V_M$, and $V_G$)
which reproduce the experimental
ANC value.}
\label{fig:allSerr}
\end{figure}
\section{The $^6$Li~ primordial abundance}
\label{sec:li6PA}
$^6$Li is expected to be produced during BBN with a rather low number
density, $^6$Li/H $\sim 10^{-14}$, for the baryon density as obtained by the
2015 Planck results \cite{Ade:2015xua}. This
result still holds using the $S$-factor described in the previous
Section (see below), and it is too small to be detectable at present.
Actually, some positive measurements in old halo stars at the level of
$^6$Li/$^7$Li $\simeq 0.05$ were obtained in the last decade~\cite{asp06},
but they may reflect the post-primordial production of this nuclide in Cosmic
Ray spallation nucleosynthesis. Moreover, as we mentioned already, a more
precise treatment of stellar atmospheres, including convection, shows that
stellar convective motions can generate asymmetries in the line shape that
mimic the presence of $^6$Li, so that the value 0.05 should rather be
understood as a robust upper limit on the $^6$Li primordial abundance. This does
not mean that the issue is irrelevant for BBN studies, since the study of the
chemical evolution of the fragile isotopes of Li, Be and B could constrain
the $^7$Li primordial abundance, and clarify the observational situation of
the Spite Plateau, see e.g. Ref.~\cite{Olive:2016xmw}.
Essentially all $^6$Li is produced via the $\alpha+d$ process,
which is thus the leading reaction affecting the final yield of this isotope.
The new theoretical $S$-factors detailed so far have been used
to compute the thermal rate in the BBN temperature range, by folding the cross
section with the Maxwell-Boltzmann distribution of involved nuclides. We have
then modified the PArthENoPE code~\cite{Pisanti:2007hk} accordingly, and
analyzed the effect of each different $S$-factor on the final abundance of
$^6$Li, as a function of the baryon density. For comparison, we also consider
the value of the $S$-factor as obtained from fitting experimental data
from Refs.~\cite{RO81,MO94,IG00,AN14}
\begin{eqnarray}
S(E) &=& 10^{-9} \left( 3.19368 + 6.94243\, E + 32.204\, E^2 \right)
\nonumber \\
&+& \frac{9.96936 \times 10^{-7}}{1 + 4800.46 \,(E - 0.694061)^2} \ ,
\label{e:astro}
\end{eqnarray}
as well as the NACRE 1999 fit~\cite{nacre99},
which is used as the benchmark rate in the public PArthENoPE code.
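For definiteness, the folding of the cross section with the
Maxwell--Boltzmann distribution mentioned above takes the standard
textbook form
$$
N_A\langle\sigma v\rangle = N_A \left(\frac{8}{\pi\mu}\right)^{1/2}
(k_B T)^{-3/2} \int_0^\infty \sigma(E)\, E\, e^{-E/k_B T}\, dE \ ,
\qquad
\sigma(E)=\frac{S(E)}{E}\, e^{-2\pi\eta(E)}\ ,
$$
where $\mu$ is the $\alpha+d$ reduced mass and $\eta$ is the Sommerfeld
parameter.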
The results are shown in Fig.~\ref{f:rates}, normalized to NACRE 1999.
\begin{figure*}
\begin{center}
\includegraphics[width=.9\textwidth]{Rates.ps}
\end{center}
\caption{Rates vs. the temperature $T$ in units of $10^9$ K ($T_9$),
corresponding to the astrophysical $S$-factors of the data
fit (solid/magenta), and of the theoretical calculations with the
five potentials used in this work
(dotted/black, dashed/red, dot-dashed/green, long-dashed/orange and
solid/blue, corresponding to $V_H$, $V_T$, $V_M$, $V_D$ and $V_G$ potentials,
respectively), normalized to the standard rate used in PArthENoPE
(NACRE 1999). \label{f:rates}}
\end{figure*}
As we can see, the change is in the 10--20\% range. If we adopt the
Planck 2015 best fit for the baryon density parameter
$\Omega_b h^2= 0.02226$~\cite{Ade:2015xua}, we obtain values for the
$^6$Li/H density ratio in the range $(0.9 - 1.4)\times 10^{-14}$, slightly
smaller than what would be the result if the experimental data fit
is used, as can be seen in Table~\ref{op}.
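As a minimal illustration of how Eq.~(\ref{e:astro}) can be evaluated
numerically, the following sketch assumes (this is our reading of the
size of the coefficients, not stated explicitly above) that $E$ is
expressed in MeV and $S(E)$ in MeV\,b:
\begin{verbatim}
# Sketch: evaluate the S-factor fit of Eq. (e:astro).
# Assumed units: E in MeV, S(E) in MeV b.
def S_fit(E):
    poly = 1e-9 * (3.19368 + 6.94243 * E + 32.204 * E**2)
    resonance = 9.96936e-7 / (1.0 + 4800.46 * (E - 0.694061)**2)
    return poly + resonance

for E in (0.01, 0.1, 0.694061):  # 10 keV, 100 keV, resonance peak
    print(E, S_fit(E))
\end{verbatim}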
\begin{table}[b]
\centering
\begin{tabular}{|c|c|c|c|c|c|c|c|} \hline
& bench & ~data~ & ~~H~~ & ~~T~~ & ~~M~~ & ~~D~~ & ~~G~~ \\
\hline
$^6$Li/H $\times 10^{14}$ & $1.1$ & $1.7$
& $1.4$ & $1.1$ & $1.1$ & $1.0$ & $0.93$ \\ \hline
\end{tabular}
\caption{Values of the final yield of $^6$Li (relative to H) for the
five potential models considered in this paper, as well as for the
NACRE 1999 rate, used as benchmark in PArthENoPE (bench)
and using a fit of experimental data (data).}
\label{op}
\end{table}
Notice that, at least with the present sensitivity on $^6$Li yields, the
dependence on the baryon density, or equivalently, on the baryon to photon
density ratio $\eta_{10} \sim 273.49 \, \Omega_b h^2$ (so that
$\eta_{10} \simeq 6.1$ for the Planck 2015 best fit), is quite mild, as
shown in Fig.~\ref{f:Li6}. The lower band in this plot covers the range of
values obtained when the five potential models are used,
and we can conservatively say
that standard BBN predicts $^6$Li/H$ = (0.9 - 1.8) \times 10^{-14}$. This
range is also in good agreement with the results of other
studies~\cite{ham10,Cyburt:2015mya}. In Fig.~\ref{f:Li6} we also show the
final abundance of $^7$Be+$^7$Li (upper band), which remains
in the range $(4.2 - 4.7) \times 10^{-10}$,
and it is, as expected, almost independent of the potential model adopted
for the $\alpha+d$ radiative capture reaction considered here.
\begin{figure*}
\begin{center}
\includegraphics[width=.9\textwidth]{Li6.ps}
\end{center}
\caption{The X/H abundance for X=$^6$Li (lower band) and X= $^7$Be+$^7$Li
(upper band). The theoretical uncertainty arising from the use of
the five potential models considered in this paper is shown as a band.
\label{f:Li6}}
\end{figure*}
\section{Conclusions}
\label{sec:concl}
The $\alpha+d$ radiative capture has been studied within a two-body
framework, where the $\alpha$ particle and the deuteron are considered
as structureless constituents of $^6$Li. The long-wavelength approximation (LWA)
has been used, and the electric $E_1$ and $E_2$ multipoles have been retained.
In order to study the accuracy that the present theoretical framework can reach,
we have used five different models for the $\alpha+d$ interaction,
including, for the first time, potential models with a tensor term,
able to reproduce the magnetic dipole and electric quadrupole moments
of $^6$Li~, as
well as the $S$-state ANC and the $\alpha+d$ scattering phase shifts. The
theoretical uncertainty on the astrophysical $S$-factor, the observable
of interest, is of the order of $\sim$ 20\% if all the five potential
models are retained, but reduces to a few \% if only those potentials
which reproduce the $S$-state ANC are considered. The experimental data,
however, are affected by an uncertainty much larger than the theoretical one.
The calculated values for the $\alpha+d$ astrophysical $S$-factor
have been used in the PArthENoPE public code in order to estimate the
$^6$Li~ and $^7$Li+$^7$Be primordial abundances. The $^6$Li~
abundance is predicted to be slightly smaller
than what would result from the available experimental data and from
the NACRE 1999 compilation,
but still in the range of $(0.9 - 1.8) \times 10^{-14}$.
We conclude that this result of standard BBN is thus quite robust.
Further astrophysical measurements of $^6$Li may be needed
to check the claim of a much larger ratio $^6$Li/$^7$Li obtained in
Ref.~\cite{asp06}. On the other hand, the
final $^7$Li+$^7$Be abundance is almost independent of the result for
the astrophysical $S$-factor presented here, and is found to be
in the range of $(4.2 - 4.7) \times 10^{-10}$.
Finally, we would like to note that
the present calculation for the astrophysical $S$-factor
is, to our knowledge,
the most up-to-date one working within
a two-body framework. However, the assumption that the deuteron is
a structureless constituent of $^6$Li~ can be considered rather weak,
and the present study could be improved if
the six-body system is viewed as a core of an $\alpha$ particle
and two nucleons, i.e.\ as a three-body system. The first steps within
this three-body framework have been done in Ref.~\cite{Tur16},
and further work along this line is currently underway.
\section{Introduction}
Braverman and Gaitsgory~\cite{BG} gave conditions for an
algebra
to be a PBW deformation of a Koszul algebra.
Etingof and Ginzburg~\cite{EG} adapted these conditions
to the setting of a Koszul ring over a semisimple group ring $\CC G$
using results of Beilinson, Ginzburg, and Soergel~\cite{BGS}
in order to study symplectic reflection algebras.
These are certain kinds of
deformations of a skew group algebra $\CC [x_1,\ldots, x_{2n}]\rtimes G$
that preserve a symplectic group action.
More generally, Drinfeld~\cite{Drinfeld} considered such deformations
of a skew group algebra $\CC [x_1,\ldots,x_n]\rtimes G$
for $G$ an arbitrary finite group acting linearly.
We showed~\cite{quad} how to adapt the techniques
of Braverman and Gaitsgory
to an algebra defined over a group ring $kG$ that is not necessarily
semisimple, aiding exploration of deformations
of a skew group algebra $S\rtimes G$ for $S$ any Koszul algebra
and $G$ any finite group.
There, we examined deformations preserving the
action of $G$ on the Koszul algebra $S$.
However, other types of
deformations are possible, some
arising only in the modular setting, where the
characteristic of the field $k$ divides the order of $G$.
Here,
we study deformations of $S\rtimes G$
that deform not only the generating relations of the Koszul algebra
$S$ but also deform the action of $G$ on $S$.
This construction is reminiscent of
the graded affine Hecke algebras of Lusztig~\cite{Lusztig89}, in which
a group action is deformed; in the nonmodular setting, these
were shown by Ram and the first author~\cite{RamShepler} to be
isomorphic to Drinfeld's deformations.
Every deformation of an algebra defines a Hochschild
2-cocycle of that algebra.
A central question in deformation theory asks which
cocycles may be lifted to deformations.
We use homological techniques in this paper to answer this question
in our context: For $S$ any Koszul algebra with action of a finite group $G$,
we show in Theorem~\ref{thm:main}
that obstructions to lifting cocycles on $S\rtimes G$
correspond to concrete conditions on parameter functions defining
potential PBW deformations.
Such deformations are filtered algebras with associated
graded algebra precisely $S\rtimes G$.
Our theorem generalizes
\cite[Theorem 5.4]{quad} to include deformations of the group action.
It applies to many algebras of interest that are
deformations of algebras of the form
$S\rtimes G$.
For example, one might take
$S$ to be the symmetric algebra (polynomial ring) $S(V)$
on a finite
dimensional vector space $V$,
or a skew (quantum) polynomial ring
$S_{\bf q}(V)$ with multiplication skewed
by a tuple ${\bf q} = (q_{ij})$ of scalars,
or a skew exterior algebra,
or even the Jordan plane or a Sklyanin algebra.
Our primary tool is a twisted product resolution
constructed by Guccione, Guccione, and Valqui~\cite{GGV}
and adapted in~\cite{quad}.
We use it here to prove Theorem~\ref{thm:hom}, a more homological version
of our main Theorem~\ref{thm:main} from which we prove Theorem~\ref{thm:main}
as a corollary.
In the nonmodular setting, a simpler resolution
suffices, one that is induced directly from the Koszul resolution
of $S$ itself.
The twisted product resolution we use here
partitions homological information according to type;
cochains corresponding to deformations of the group action
and to deformations of the Koszul relations live on two distinct parts
of the resolution. Conditions for PBW deformations include interaction
among the parts.
We obtain explicit conditions in
the special case that the Koszul
algebra $S$ is a polynomial ring in Theorem~\ref{RawPBWConditions},
generalizing \cite[Theorem~3.1]{ueber}.
Our result may also be proven directly via the Composition-Diamond Lemma,
used by Khare~\cite[Theorem~27]{Khare} for
deformations of the action of a cocommutative algebra on a polynomial ring.
An advantage of our approach is that it yields conditions
much more generally for all Koszul algebras.
When the characteristic does not divide the group order, we
strengthen \cite[Theorem~4.1]{ueber}
by showing in Theorem~\ref{thm:nonmod}
that a deformation of the group action and Koszul relations together
is isomorphic to one in which only the Koszul relations are deformed.
We give an example to show that Theorem~\ref{thm:nonmod} is
false in the modular setting.
Let $k$ be any field. We assume the characteristic of $k$
is not 2 throughout to make some results easier to state.
All tensor products are over $k$ unless otherwise indicated,
that is, $\otimes=\otimes_k$. We assume that in each graded or
filtered $k$-algebra, elements of $k$ have
degree~$0$.
\section{PBW Deformations of Koszul algebras twisted by groups}\label{sec:PBW}
In this section, we recall some definitions and state our
main result giving Braverman-Gaitsgory style conditions for
PBW deformations. The proof will be given in Section~\ref{sec:hom2} after
we recall and develop the needed homological algebra.
\subsection*{PBW deformations}
Let $\kk$ be a ring with unity (for example,
the field $k$ or a group ring $kG$).
Let $\cH$ be a finitely generated filtered $\kk$-algebra,
so that we may write $\cH=T_{\kk}(U)/(P)$
for some finite dimensional $\kk$-bimodule $U$ and ideal
$(P)$ generated by a subset
$P$ of the tensor algebra $T_{\kk}(U)$
consisting of filtered
elements.
Thus elements of $P$ may be
nonhomogeneous with respect to the
grading on the free algebra $T_{\kk}(U)$
with $U$ in degree $1$.
An element of $T_{\kk}(U)$ has {\em filtered degree $d$} if
it lies in the $d$-th filtered piece
$\oplus_{i\leq d} \, (U)^{\otimes_{\kk} i}$
of $T_{\kk}(U)$
but not in the $(d-1)$-st.
We associate to any presentation of a filtered algebra a homogeneous
version,
$$\text{HomogeneousVersion}\big(T_{\kk}(U)/(P)\big)
=T_{\kk}(U)/(R),$$
where
$R=\cup _d\, \{\pi_d(p): p\in P \text{ of filtered degree } d\}$
and $\pi_d:T_{\kk}(U)\rightarrow (U)^{\otimes_{\kk} d}$
projects onto the homogeneous component of degree $d$.
We say that a filtered algebra $\cH$ with a given
presentation
is a {\em PBW deformation} of its homogeneous version
if it has the {\em PBW property},
i.e., the associated graded algebra of $\cH$ coincides
with the homogeneous version:
$$
\text{Gr}(\cH)\cong \text{HomogeneousVersion}(\cH)
\quad\text{ as graded algebras.}
$$
Given a fixed presentation in terms
of generators and relations,
we often merely say that $\cH$ is
a PBW deformation.
This terminology originated from the Poincar\'e-Birkhoff-Witt Theorem,
which states that the associated graded algebra of the
universal enveloping algebra of a Lie algebra is its homogeneous
version, namely, a polynomial ring.
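For instance, for a Lie algebra $\mathfrak{g}$ over $k$,
$$
U(\mathfrak{g})=T_k(\mathfrak{g})/\big(x\ot y - y\ot x - [x,y] :
x,y\in\mathfrak{g}\big),
$$
whose homogeneous version is
$T_k(\mathfrak{g})/(x\ot y - y\ot x : x,y\in\mathfrak{g})\cong
S(\mathfrak{g})$, the symmetric algebra; the classical theorem asserts
exactly that $\text{Gr}(U(\mathfrak{g}))\cong S(\mathfrak{g})$.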
\begin{remark}{\em
The reader is cautioned that
authors use the adjective {\em PBW}
in slightly different ways.
For example,
in Braverman-Gaitsgory~\cite{BG}
and also in~\cite{quad},
the homogeneous version of a filtered
{\em quadratic} algebra is defined
by
projecting every generating relation
onto its degree $2$ part, instead of its
highest homogeneous part.
This merely means that filtered
relations of degree $1$ must be considered
separately in PBW theorems there.
}\end{remark}
\subsection*{Group twisted Koszul algebras}
Let $S$ be a finitely generated graded Koszul $k$-algebra.
Then $S$ is a quadratic
algebra generated by some finite dimensional $k$-vector
space $V$ (in degree $1$)
with generating quadratic relations
$R$, some $k$-subspace of $V\ot V$:
$$S=T_k(V)/(R)\, . $$
Let $G$ be a finite group acting by graded automorphisms on $S$.
This is equivalent to $G$ acting linearly
on $V$ with the relations $R$ preserved set-wise.
We denote the action of $g$ in $G$ on $v$ in $V$ by $^gv$ in $ V$.
The {\em skew group algebra} (or semidirect product algebra)
$S\rtimes G$ (also written $S\# G$) is the $k$-algebra generated
by the group algebra $kG$ and the vector space $V$
subject to the relations given by $R$
together with the relations
$gv-\, ^gvg$ for $g$ in $G$ and $v$ in $V$.
We identify
$S\rtimes G$ with a filtered algebra
over the ring $\kk=kG$ generated by $U=kG\ot V\ot kG$:
$$
S\rtimes G \cong T_{kG}(kG\ot V\ot kG)/(R\cup R')
$$
as graded algebras,
where elements of $G$ have degree 0 and
elements of $V$ have degree 1,
and where
\begin{equation}\label{eqn:R-prime}
R'=\Span_k \{g\otimes v\otimes 1 - 1\otimes \, ^gv\otimes g:
v\in V,\ g\in G \}\subset kG \ot V\ot kG\, .
\end{equation}
Here we identify $R\subset V\ot V$ with a subspace of $$k\ot V\ot k\ot V\ot k
\subset kG\ot V\ot kG\ot V\ot kG\cong
(kG\ot V\ot kG)\ot _{kG} (kG\ot V\ot kG). $$
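As a simple illustration, let $S=k[x,y]=T_k(V)/(x\ot y - y\ot x)$ with
$V=kx\oplus ky$, and let $G=\{1,g\}$ act by $^gx=-x$ and $^gy=-y$.
Then $S\rtimes G$ is generated by $x$, $y$, and $g$ subject to
$xy=yx$, $g^2=1$, $gx=-xg$, and $gy=-yg$,
the last two relations being exactly those arising from $R'$.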
\subsection*{PBW deformations of
group twisted Koszul algebras}
Now
suppose $\mathcal{H}$ is a PBW deformation
of $S\rtimes G$.
Then $\mathcal{H}$ is
generated by $kG$ and $V$
subject to nonhomogeneous relations
of degrees $2$ and $1$ of the form
$$
\begin{aligned}
P&=\{ r-\alpha(r)-\beta(r): r\in R \} \ \ \ \mbox{ and } \\
P'&=\{ r'-\lambda(r'): r'\in R' \}
\end{aligned}
$$
for some $k$-linear parameter functions
$$\alpha:R\rightarrow V\otimes kG, \
\beta: R\rightarrow kG, \
\lambda: R'\rightarrow kG \, . $$
That is, $\mathcal{H}$
can be realized as the quotient
$$
\mathcal{H}=T_{kG}(kG\ot V\ot kG)/ ( P \cup P').
$$
Note we may assume that
$\alpha$ takes values in $V\ot kG\cong k\ot V\ot kG$,
rather than more generally in $kG\ot V\ot kG$,
without changing the $k$-span of $P\cup P'$, since the relations $P'$
allow us to replace elements in $kG\ot V\ot kG$ with those in $k\ot V\ot kG$.
In our main theorem below, we determine which
such quotients define PBW deformations of $S\rtimes G$.
We first need some notation for
decomposing
any functions $\alpha, \beta, \lambda$ as above.
We identify $\lambda: R'\rightarrow kG$ with the
function
(of the same name)
$\lambda: kG\otimes V\rightarrow kG$
mapping
$g\ot v$ to $\lambda(g\ot v\ot 1 - 1\ot \, {}^gv\ot g)$
for all $g$ in $G$ and $v$ in $V$.
We write
$$\alpha(r)=\sum_{g\in G} \alpha_g(r) g , \ \ \
\beta(r) =\sum_{g\in G} \beta_g(r)g, \ \ \
\lambda(h\ot v) = \sum_{g\in G} \lambda_g(h\ot v) g
$$ for functions
$\alpha_g:R\rightarrow V$,
$\beta_g:R\rightarrow k$, $\lambda_g : kG\ot V\rightarrow k$ (identifying $V$ with $V\ot k$
in $V\ot kG$).
Write
$\lambda(g\ot - ):V\rightarrow kG$ for the function
induced from $\lambda$ by fixing $g$ in $G$.
Let $m:kG\ot kG\rightarrow kG$ be multiplication on $kG$
and let $\sigma: kG\ot V\rightarrow V\ot kG$ be
the twist map given
by
\[
\sigma ( g\ot v) = {}^gv\ot g
\quad\text{ for } g\text{ in }G, \ v\text{ in }V.
\]
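With this notation, the relations $R'$ simply record that $\sigma$
implements commutation of group elements past vectors: in $S\rtimes G$
one has $gv={}^gv\, g$, that is, multiplication sends
$g\ot v-\sigma(g\ot v)$ to zero.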
For the statement of the theorem, we
set
\begin{equation}\label{eqn:Habc}
\begin{aligned}
\chabl = T_{kG}(kG\ot V \ot kG)
\ /\ \big(r-\alpha(r)-\beta(r),\ r' - \lambda(r'):r\in R, \ r' \in R' \big)
\end{aligned}
\end{equation}
\medskip
\noindent
for linear parameter functions
$\alpha: R\rightarrow V\ot kG$,
$\beta: R\rightarrow kG$,
$\lambda: R'\rightarrow kG$
and for $R$ the space of Koszul relations and $R'$ the space of
group action relations~(\ref{eqn:R-prime}).
The functions $\alpha$ and $\beta$ are extended uniquely to
right $kG$-module homomorphisms from $R\ot kG$
to $V\ot kG$ and $kG$, respectively.
\newpage
\begin{thm}\label{thm:main}
Let $G$ be a finite group and let $V$ be a $kG$-module.
Let $S=T_k(V)/(R)$ be a Koszul algebra
with subspace $R\subset V\ot V$ closed under the action of $G$.
Then a filtered algebra $\mathcal{H}$
is a PBW deformation of $S\rtimes G$ if and only if
\[
\mathcal{H}\cong \chabl
\]
for some linear parameter functions
$\alpha:R\rightarrow V\otimes kG, \
\beta: R\rightarrow kG, \
\lambda: kG\ot V \rightarrow kG $
satisfying
\begin{itemize}
\item[(1)]
$ \ 1\ot\lambda-\lambda (m\ot 1)+(\lambda\ot 1) (1\ot\sigma) = 0 $,
\item[(2)]
$ \ \lambda (\lambda\ot 1)-\lambda(1\ot \alpha)=(1\ot\beta)-(\beta\ot 1)
(1\ot\sigma)(\sigma\ot 1)$,
\item[(3)]
$ \ (1\ot\alpha)-(\alpha\ot 1)(1\ot\sigma)(\sigma \ot 1)
=\lambda\ot 1+(1\ot \lambda)(\sigma\ot 1)$,
\item[(4)]
$ \ \alpha ((1\ot\sigma)(\alpha\ot 1)- 1\ot\alpha) + \sum_{g\in G} \alpha_g\ot \lambda(g\ot -)
=1\ot \beta-\beta\ot 1$,
\item[(5)]\rule{0ex}{2ex}
$ \beta ((1\ot \sigma)(\alpha\ot 1) - 1\ot\alpha)= - \lambda (\beta\ot 1) $,
\item[(6)]
$ \ \alpha\ot 1-1\ot\alpha=0$,
\end{itemize}
upon projection of images of the maps to $S\rtimes G$.
Here,
the map in $(1)$ is defined on $kG\ot kG\ot V$,
the maps in $(2)$ and $(3)$ are defined on $kG\ot R$,
the map in $ (6)$ is defined on $(V\ot R)\cap (R\ot V)\subset V\ot V\ot V$,
and
$(6)$ implies that the maps in $(4)$ and $(5)$ are
also defined on $(V\ot R)\cap (R\ot V)$.
\end{thm}
We will prove the theorem in Section~\ref{sec:hom2} as a corollary
of Theorem~\ref{thm:hom}, after first developing
some homological algebra in Sections~\ref{sec:def} and~\ref{sec:hom}.
The theorem above includes
the case of filtered quadratic algebras
defined over the ring $kG$ instead of the field $k$.
Such algebras preserve the action of $kG$
and
correspond to the case
$\lambda = 0$ in the theorem above.
We recover a result from~\cite{quad}
which we rephrase below to highlight the role of the twisting map $\sigma$.
The theorem was developed to provide tools particularly in the case
that $kG$ is not semisimple.
Note that the action of $G$ on itself
by conjugation induces an action
on the parameter functions $\alpha$ and $\beta$
(with $(^g\alpha)(r)=\, ^g(\alpha(\, ^{g^{-1}}r))$
as usual and $^g(v\otimes h)=\, ^gv \otimes ghg^{-1}$
for $r$ in $R$, $g$ in $G$, and $v$ in $V$).
\begin{thm}\cite[Theorem 5.4]{quad}
Let $G$ be a finite group and let $V$ be a $kG$-module.
Let $S=T_k(V)/(R)$ be a Koszul algebra
with subspace $R\subset V\ot V$ closed under the action of $G$.
Then a filtered quadratic algebra
$\mathcal{H}$ is a PBW deformation of $S\rtimes G$ preserving
the action of $G$
if and only if
\[
\mathcal{H}\cong \mathcal{H}_{0,\alpha,\beta}
\]
for some $G$-invariant linear parameter functions
$\alpha:R\rightarrow V\otimes kG, \
\beta: R\rightarrow kG$
satisfying, upon projection to $S\rtimes G$,
\begin{itemize}
\item[(i)]
$ \ \alpha\ot 1-1\ot\alpha=0$ ,
\item[(ii)]
$ \ \alpha ( (1\ot \sigma)(\alpha\ot 1)- 1\ot\alpha)
=1\ot \beta-\beta\ot 1$,
\item[(iii)]\rule{0ex}{2ex}
$ \beta ((1\ot \sigma)(\alpha\ot 1) - 1\ot\alpha)= 0$.
\end{itemize}
Here, the map in (i) is defined on $(V\ot R)\cap (R\ot V)$, and
(i) implies that the maps in (ii) and (iii) are also defined
on $(V\ot R)\cap (R\ot V)$.
\end{thm}
\begin{proof}
The additional hypothesis, that the action of $G$ is
preserved in the deformation, is equivalent to setting $\lambda =0$
in Theorem~\ref{thm:main}. In this case,
Condition~(1) of Theorem~\ref{thm:main} is vacuous,
and Conditions~(2) and (3) are equivalent to $G$-invariance of $\alpha$
and $\beta$.
Conditions (4), (5), (6) become Conditions (ii), (iii), (i) here,
respectively.
\end{proof}
\begin{remark}
{\em The conditions of the above theorems generalize those of Braverman and
Gaitsgory~\cite[Lemma~3.3]{BG}
from Koszul algebras $S$ to skew group algebras $S\rtimes G$.
Their Condition (I) corresponds to our
Conditions
(1), (2), and (3) in Theorem~\ref{thm:main};
these conditions limit the possible relations of filtered degree~1.
The nonmodular case can be proven using the theory of Koszul rings
over the semisimple ring $kG$, as in \cite{EG}.
In the modular case, when char$(k)$ divides $|G|$,
we found in~\cite{quad} that
more complicated homological information
is required to obtain PBW conditions
using this approach.
}\end{remark}
\section{Deformations}\label{sec:def}
In this section, we recall the general theory of deformations and Hochschild
cohomology that we will need and show how it applies to the
algebras $\chabl$ of Theorem~\ref{thm:main}.
Recall that for any $k$-algebra $A$,
the Hochschild cohomology of an $A$-bimodule $M$ in degree $n$ is
$$
\HH^n (A,M) = \Ext^n_{A^e}(A,M),
$$
where $A^e=A\ot A^{op}$ is the enveloping algebra of $A$,
and the bimodule structure of $M$ defines it as an $A^e$-module.
In the case that $M=A$, we abbreviate $\HH^n(A)= \HH^n(A,A)$.
\subsection*{Bar and reduced bar resolutions}
Hochschild cohomology can be defined using
the {\em bar resolution}, that is, the free resolution
of the $A^e$-module $A$ given as:
\[
\cdots \stackrel{\delta_3}{\longrightarrow} A\ot A\ot A\ot A
\stackrel{\delta_2}{\longrightarrow} A\ot A\ot A
\stackrel{\delta_1}{\longrightarrow} A\ot A \stackrel{\delta_0}{\longrightarrow}
A\rightarrow 0 ,
\]
where
\begin{equation}\label{eqn:delta}
\delta_n (a_0\ot \cdots\ot a_{n+1}) = \sum_{i=0}^n (-1)^i a_0\ot \cdots
\ot a_i a_{i+1}\ot \cdots\ot a_{n+1}
\end{equation}
for all $n\geq 0$ and $a_0,\ldots,a_{n+1}\in A$.
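For instance, in the low homological degrees used repeatedly below,
$$
\delta_1(a_0\ot a_1\ot a_2)=a_0a_1\ot a_2 - a_0\ot a_1a_2
\quad\text{ and }\quad
\delta_2(1\ot a_1\ot a_2\ot 1)=a_1\ot a_2\ot 1 - 1\ot a_1a_2\ot 1
+ 1\ot a_1\ot a_2 .
$$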
If $A$ is an $\NN$-graded algebra, then each tensor power of $A$ is canonically a
graded $A$-bimodule. The
Hochschild cohomology of $A$ inherits this grading from
the bar resolution and thus is bigraded:
$\HH^i(A)=\bigoplus_j \HH^{i,j}(A)$
with
$\HH^{i,j}(A)$ the subspace consisting of
homogeneous elements of graded degree $j$ as maps.
For our arguments, we will need to use the {\em reduced bar resolution},
which replaces the $A^e$-module $A^{\ot (n+2)}$, for each $n$, by its vector
space quotient $A\ot (\overline{A})^{\ot n}\ot A$, where
$\overline{A} = A/k$ (the vector space quotient by all scalar multiples of the
multiplicative identity $1_A$ in $A$).
The differentials on the bar resolution factor through these quotients
to define differentials for the reduced bar resolution.
\subsection*{Deformations}
A {\em deformation of $A$ over $k[t]$} is an associative
$k[t]$-algebra $A_t$ with underlying vector space $A[t]$
such that $A_t|_{t=0}\cong A$ as algebras.
The product $*$ on a deformation $A_t$ of $A$ is determined by its
values on pairs of elements of $A$,
\begin{equation}\label{star-formula}
a_1 * a_2 = a_1a_2 + \mu_1(a_1\ot a_2) t + \mu_2(a_1\ot a_2) t^2 + \cdots
\end{equation}
where $a_1a_2$ is the product of $a_1$ and $a_2$ in $A$ and
each $\mu_j: A\ot A \rightarrow A$
is some $k$-linear map (called the {\em $j$-th multiplication map})
extended to be linear over $k[t]$.
(We require that only finitely many terms in the above
expansion
for each pair $a_1, a_2$ are nonzero.)
We may (and do) assume that $1_A$
is the multiplicative identity with respect to the multiplication
$*$ of $A_t$.
(Each deformation
is equivalent to one with $1_A$ serving as the multiplicative identity;
see~\cite[p.\ 43]{GerstenhaberSchack83}.)
We identify the maps $\mu_i$ with 2-cochains
on the reduced bar resolution
using the
canonical isomorphism
$
\Hom_{k}(\overline{A}\ot \overline{A}, A)\cong
\Hom_{A^e}(A\ot \overline{A} \ot \overline{A}\ot A, A).
$
(Our assumptions
imply that the value of $\mu_i$ is 0 if either argument is the
multiplicative identity of $A$.)
We will use the same notation for elements of $A$ and $\overline{A}$
when no confusion will arise.
Associativity of the multiplication $*$ implies
certain conditions on the maps $\mu_i$ which
are elegantly phrased in~\cite{Gerstenhaber64} in terms of the differential
$\delta$ and the
Gerstenhaber bracket $[ \ , \ ]$, as we explain next.
The {\em Gerstenhaber bracket} for 2-cochains $\xi, \nu$
on the (reduced) bar resolution
is the 3-cochain defined by
\begin{equation}\label{eqn:G-brack}
\begin{aligned}
{[} \xi, \nu {]} (a_1\ot a_2\ot a_3)
\ \ = \ \
\xi(\nu(a_1\ot a_2)\ot a_3)
-\xi(a_1\ot \nu(a_2\ot a_3)) \\
\ \ \ + \nu(\xi(a_1\ot a_2)\ot a_3)
-\nu(a_1\ot \xi(a_2\ot a_3))
\end{aligned}
\end{equation}
for all $a_1, a_2, a_3\in A$.
See~\cite{Gerstenhaber63} for the
definition of Gerstenhaber bracket in other degrees.
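In particular, bracketing a 2-cochain $\mu$ with itself gives
$$
[\mu,\mu](a_1\ot a_2\ot a_3)
= 2\big( \mu(\mu(a_1\ot a_2)\ot a_3)-\mu(a_1\ot \mu(a_2\ot a_3)) \big),
$$
which accounts for the factor $\tfrac{1}{2}$ in the first obstruction
below.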
\subsection*{Obstructions}
If $A_t$ is a deformation of a $k$-algebra $A$ over $k[t]$,
associativity of multiplication $*$ implies in particular
(see \cite{Gerstenhaber64})
that
\begin{align}
\label{obst0}
\delta_3^*(\mu_1)&=0
&\text{ ($\mu_1$ is a Hochschild 2-cocycle),}&\\
\label{obst1}
\delta_3^*(\mu_2)&=\tfrac{1}{2}[\mu_1, \mu_1]
&\text{ (the first obstruction vanishes)},& \text{ and }\\
\label{obst2}
\delta_3^*(\mu_3)&=[\mu_1, \mu_2]
&\text{ (the second obstruction vanishes).}&
\end{align}
Associativity of the multiplication $*$
also implies that higher degree ``obstructions''
vanish, i.e., it forces necessary
conditions on all the $\mu_j$.
We will only need to look closely at the above beginning obstructions:
Higher degree obstructions relevant to our setting
will automatically vanish because
of the special nature of Koszul algebras
(see the proof of Theorem~\ref{thm:hom}).
\subsection*{Graded deformations}
Assume that the $k$-algebra $A$ is $\NN$-graded.
Extend the grading
on $A$ to $A[t]$ by setting $\deg (t) = 1$.
A {\em graded deformation of $A$ over $k[t]$}
is a deformation of $A$ over $k[t]$ that is graded, i.e.,
each map $\mu_j:A\otimes A \rightarrow A$ is homogeneous of degree $-j$.
An {\em $i$-th level graded deformation of $A$} is a deformation
over $k[t]/(t^{i+1})$, i.e., an algebra $A_i$ with underlying
vector space $A[t]/(t^{i+1})$ and multiplication as
in (\ref{star-formula}) in which terms involving powers of $t$ greater
than $i$ are 0.
An $i$-th level graded deformation $A_i$ of $A$ {\em lifts} (or {\em extends})
to an $(i+1)$-st level graded deformation $A_{i+1}$
if the $j$-th multiplication maps of $A_i$
and $A_{i+1}$ coincide for all $j\leq i$.
We next point out that the algebra $\chabl$
defined in (\ref{eqn:Habc})
gives rise to a graded deformation
of $S\rtimes G$ in case it has the PBW property.
\begin{prop}\label{prop:PBW}
If $\chabl$ is a PBW deformation of $S\rtimes G$,
then $\chabl$ is the fiber of a deformation of
$A=S\rtimes G$:
$$\chabl\cong A_t\big|_{t=1}$$
as filtered algebras, for $A_t$ a graded
deformation of $S\rtimes G$ over $k[t]$.
\end{prop}
\begin{proof}
We define the algebra $A_t$ by
$$
A_t=
T_{kG}(kG\ot V\ot kG)[t] /
(r-\alpha(r)t-\beta(r)t^2, \ r'-\lambda(r')t:
r\in R, r'\in R').
$$
Since $\chabl$ has the PBW property,
$A_t$ and $(S\rtimes G)[t]$
are isomorphic as $k[t]$-modules:
Define a $k$-linear map from $S\rtimes G$ to $T_{kG}(kG\ot V\ot kG)$ so
that composition with the quotient map onto $\chabl$ is an isomorphism
of filtered vector spaces, and extend to a $k[t]$-module homomorphism
from $S\rtimes G [t]$ to $T_{kG}(kG\ot V\ot kG)[t]$.
The composition of this map with the quotient map onto $A_t$
can be seen to be an isomorphism of vector spaces by a degree argument.
The rest of the proof is a straightforward generalization of
\cite[Proposition~6.5]{ueber}, which is the case $S=S(V)$ and $\alpha = 0$.
Here, $r$ replaces $v\ot w - w\ot v$ and
the first and second multiplication maps $\mu_1$ and $\mu_2$
satisfy
\[
\begin{aligned}
& \lambda(g\ot v\ot 1 - 1\ot {}^gv\ot g)=\mu_1( g\ot v)-\mu_1( {}^gv\ot g ),\\
& \alpha(r) = \mu_1(r), \ \ \
\mbox{ and } \ \ \ \beta(r)=\mu_2( r)
\end{aligned}
\]
for all $g$ in $G$, $v$ in $V$, and $r$ in $R$.
\end{proof}
\section{Hochschild cohomology of group twisted Koszul algebras}\label{sec:hom}
We will look more closely at the Hochschild 2-cocycle condition (\ref{obst0})
and the obstructions (\ref{obst1})
and (\ref{obst2}) in the case that $A$ is a group twisted
Koszul algebra $S\rtimes G$.
A convenient resolution for this purpose was introduced by Guccione, Guccione,
and Valqui~\cite{GGV}.
We now recall from \cite{quad} a modified version of this construction.
\subsection*{Twisted product resolution}
Again, let $S$ be a Koszul algebra with finite dimensional
generating
$k$-vector space $V$ and subspace of relations $R\subset V\otimes V$:
$$
S=T_k(V)/(R)\, .$$
Since $S$ is Koszul, the complex
$$\cdots\rightarrow
{K}_3\overset{d_3}{\longrightarrow}
{K}_2\overset{d_2}\longrightarrow
{K}_1 \overset{d_1}\longrightarrow
{K}_0\overset{d_0}{\longrightarrow}
S\rightarrow 0
$$
is a free $S^e$-resolution of $S$, where $K_n = S\ot \tilde{K}_n\ot S$ with
$\tilde{K}_0=k$, $\tilde{K}_1 = V$, and
$$\tilde{K}_n=
\bigcap_{j=0}^{n-2}(V^{\otimes j}\otimes R\otimes V^{\otimes(n-2-j)}),
\quad n\geq 2.
$$
Identify $K_0$ with $S\ot S$.
The differential is restricted from that of the (reduced) bar resolution of $S$,
defined in (\ref{eqn:delta}),
so that $d_n=\delta_n|_{{K}_n}$.
Let $G$ be a finite group acting by graded automorphisms on $S$ and
set $A=S\rtimes G$.
The {\em twisted product resolution} $X_{\DOT}$
of $A$ as an $A^e$-module
is the total complex
of the double complex $X_{\DOT,\DOT}$,
where
\begin{equation}\label{xij2}
X_{i,j} = A\ot (\overline{kG})^{\ot i} \ot \tilde{K}_j\ot A ,
\end{equation}
and $A^e$ acts by left and right multiplication on the outermost
tensor factors $A$:
$$
\begin{small}
\xymatrixcolsep{9ex}
\xymatrixrowsep{9ex}
\xymatrix{
&
X_{0,3}\ar[d]^{d^v_{0,3}} &
\vdots \ar[d] &
\vdots \ar[d] \\
&
X_{0, 2} \ar[d]^{d^v_{0,2}} &
X_{1, 2} \ar[l]^{d^h_{1, 2}} \ar[d]^{d^v_{1,2}} &
X_{2, 2} \ar[l]^{d^h_{2, 2}} \ar[d]^{d^v_{2,2}} &
\ar[l]\cdots \\
&
X_{0, 1} \ar[d]^{d^v_{0, 1}} &
X_{1, 1} \ar[l]^{d^h_{1,1}} \ar[d]^{d^v_{1, 1}} &
X_{2, 1} \ar[l]^{d^h_{2,1}} \ar[d]^{d^v_{2, 1}} &
\ar[l] \cdots \\
&
X_{0, 0} &
X_{1, 0} \ar[l]^{d^h_{1,0}} &
X_{2, 0} \ar[l]^{d^h_{2,0}} &
\ar[l] \cdots
}
\end{small}
$$
To define the differentials, we first identify each $X_{i,j}$ with a
tensor product over $A$ (see~\cite[Section~4]{quad}),
\begin{equation}\label{eqn:xij1}
X_{i,j}\cong (A\ot (\overline{kG})^{\ot i}\ot kG)\ot _A (S\ot\tilde{K_j}\ot A),
\end{equation}
where the right action of $A$ on $A\ot (\overline{kG})^{\ot i}\ot kG$ is given by
\[
(a\ot g_1\ot \cdots \ot g_i\ot g_{i+1}) sh =
a ( {}^{g_1\cdots g_{i+1}} s ) \ot g_1\ot \cdots
\ot g_i\ot g_{i+1} h
\]
and
the left action of $A$ on $S\ot\tilde{K}_j\ot A$ is given by
\[
sh (s'\ot x\ot a) = s ( {}^h s')\ot {}^hx \ot ha
\]
for all $g_1,\ldots, g_{i+1}, h$ in $G$,
$s,s'$ in $S$, and $a$ in $A$.
(We have suppressed tensor symbols in writing elements
of $A$ to avoid confusion with tensor
products defining the resolution.)
The horizontal and vertical differentials
on the bicomplex $X_{\DOT, \DOT}$, given as a tensor product
over $A$ via (\ref{eqn:xij1}), are then defined by
$
d_{i,j}^h=d_i\ot 1
$
and
$d^v_{i,j}=(-1)^i \ot d_j$,
respectively, where the notation $d$ is used for both the differential
on the reduced bar resolution of $kG$ (induced to an $A\ot (kG)^{op}$-resolution)
and on the Koszul resolution of $S$ (induced to an $S\ot A^{op}$-resolution).
Setting $X_n = \oplus_{i+j=n} X_{i,j}$ for each $n\geq 0$
yields the total complex $X_{\DOT}$:
\begin{equation}\label{resolution-X}
\cdots\rightarrow X_2\rightarrow X_1\rightarrow X_0\rightarrow A\rightarrow 0,
\end{equation}
with differential in positive degrees $n$ given by
$d_n= \sum_{i+j=n} (d_{i,j}^h + d_{i,j}^v)$,
and in degree 0 by the multiplication map.
By~\cite[Theorem~4.3]{quad}, $X_{\DOT}$ is a free resolution
of the $A^e$-module $A=S\rtimes G$.
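For later reference, in low homological degrees,
$$
\begin{aligned}
X_1&=X_{1,0}\oplus X_{0,1}
=(A\ot\overline{kG}\ot A)\oplus(A\ot V\ot A),\\
X_2&=X_{2,0}\oplus X_{1,1}\oplus X_{0,2}
=(A\ot(\overline{kG})^{\ot 2}\ot A)\oplus(A\ot\overline{kG}\ot V\ot A)
\oplus(A\ot R\ot A),\\
X_3&=X_{3,0}\oplus X_{2,1}\oplus X_{1,2}\oplus X_{0,3},
\end{aligned}
$$
with $\tilde{K}_3=(V\ot R)\cap(R\ot V)$, so that
$X_{0,3}=A\ot\tilde{K}_3\ot A$.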
\subsection*{Chain maps between reduced bar
and twisted product resolutions}
We found in~\cite{quad} useful chain maps
converting between the bar resolution
and the (nonreduced) twisted product resolution $X_{\DOT}$
of $A=S\rtimes G$.
We next extend~\cite[Lemma~4.7]{quad},
adding more details and adapting it for
use with the
reduced bar resolution. See also~\cite[Lemma~7.3]{ueber}
for the special case $S=S(V)$.
We consider elements of $\tilde{K}_j
\subset V^{\otimes j} $
to have graded degree $j$ and elements of $(\overline{kG})^{\ot i}$ to have graded degree 0.
\begin{lemma}\label{lem:maps}
For $A=S\rtimes G$, there exist $A$-bimodule homomorphisms
$\phi_n: X_n\rightarrow A\ot \overline{A}^{\,\ot n }\ot A$
and $\psi_n: A\ot \overline{A}^{\,\ot n }\ot A \rightarrow X_{n}$
such that the diagram
$$
\begin{small}
\xymatrix{
\cdots\ar[r] &
X_{2} \ar[r]^{d_2} \ar@<-.5ex>[d]_{\phi_2}&
X_{1} \ar[r]^{d_1} \ar@<-.5ex>[d]_{\phi_1}&
X_{0} \ar[r]^{d_0}\ar@<-.5ex>[d]_{\phi_0} &
A \ar[r] \ar@<-.5ex>[d]& 0 \\
\cdots\ar[r] &
A\ot \overline{A}\ot \overline{A} \ot A \ar[r]^{\delta_2} \ar@<-.5ex>[u]_{\psi_2}&
A\ot \overline{A} \ot A \ar[r]^{\delta_1}\ \ar@<-.5ex>[u]_{\psi_1}&
A\ot A \ar[r]^{\delta_0} \ar@<-.5ex>[u]_{\psi_0}&
A \ar[r] \ar@<-.5ex>[u]_{=} & 0& \\
}
\end{small}
$$
commutes, the maps $\phi_n$, $\psi_n$ are of graded degree~0, and $\psi_n\phi_n$
is the identity map on $X_n$ for $n=0,1,2$.
\end{lemma}
In fact, it can be shown that there are chain maps such that
$\psi_n\phi_n$ is the identity map on $X_n$
for each $n$.
We will not need this more general statement here, but
rather some of the explicit values of the maps in low degrees as given in
the proof of the lemma.
\begin{proof}
We again suppress tensor symbols in writing elements
of $A$ to avoid confusion with tensor
products defining the resolution.
In degree 0, $\psi_0$ and $\phi_0$
may be chosen to be identity
maps on $A\ot A$.
As in \cite[Lemma~4.7]{quad}, we may set
\[
\phi_1(1\ot g\ot 1)=1\ot g\ot 1 \ \ (\mbox{on }X_{1,0}),
\quad \ \phi_1(1\ot v\ot 1) = 1\ot v\ot 1 \ \ (\mbox{on }X_{0,1}) ,
\]
for all nonidentity $g$ in $G$ and $v$ in $V$,
and these values determine $\phi_1$ as an $A$-bimodule map.
Moreover, we set
\[
\begin{aligned}
\psi_1(1\ot g\ot 1) & = 1\ot g\ot 1 \ \ \ (\mbox{in } X_{1,0}),\\
\psi_1(1\ot vg\ot 1) &=1\ot v\ot g + v\ot g\ot 1 \ \
\ (\mbox{in } X_{0,1}\oplus X_{1,0})
\end{aligned}
\]
for nonidentity $g$ in $G$ and $v$ in $V$ and
use the identification (\ref{eqn:xij1}) for evaluating the differential
to check that $d_1\psi_1 = \psi_0\delta_1$ on these arguments.
In order to extend $\psi_1$ to an $A$-bimodule map
on $A\ot \overline{A}\ot A$,
we first choose a homogeneous
vector space basis of
$\overline{A}$
consisting of
the elements $g\neq 1_G$, $vg$,
and $sg$
as $g$ ranges
over the elements of $G$,
$v$ ranges over a $k$-basis of $V$,
and $s$ ranges over a $k$-basis
of homogeneous elements of $S$ of degree $\geq 2$.
Tensoring each of these elements
on the left and right by $1$ then
gives a free $A$-bimodule basis of $A\ot\overline{A}\ot A$.
The function $\psi_1$ is already defined
on elements of the form $1\ot g\ot 1$ and
$1\ot vg\ot 1$;
we may define $\psi_1$ on elements of the form $1\ot s\ot 1$ so that
$d_1\psi_1= \psi_0 \delta_1$ on these elements
and then define
\[
\psi_1(1\ot sg\ot 1) = \psi_1(1\ot s\ot g) + s\ot g \ot 1
\]
so that
$d_1\psi_1= \psi_0 \delta_1$ on these elements as well.
Then $\psi_1\phi_1$ is the identity map on $X_1$,
by construction.
Define $\phi_2$ by setting
\[
\begin{aligned}
& \phi_2(1\ot g\ot h\ot 1) & &=& &1\ot g\ot h\ot 1&
\ \ \ \ &(\mbox{on } X_{2,0}), \\
& \phi_2(1\ot g\ot v\ot 1) & &=& &1\ot g\ot v\ot 1 - 1\ot {}^gv\ot g\ot 1&
\ \ \ \ &(\mbox{on }X_{1,1}),\\
& \phi_2(1\ot r\ot 1) & &=& &1\ot r\ot 1& \ \ \ \
&(\mbox{on } X_{0,2})
\end{aligned}
\]
for all nonidentity $g,h$ in $G$, $v$ in $V$, and $r$ in $R$.
One may check that $\delta_2\phi_2 = \phi_1 d_2$.
Now set
\[
\begin{aligned}
&\psi_2(1\ot g\ot h\ot 1)& &=& &1\ot g\ot h\ot 1&
\ \ \ \ \ \ \ \ &(\mbox{in }X_{2,0}),\\
&\psi_2(1\ot vh\ot g\ot 1)& &=& &v\ot h\ot g\ot 1&
&(\mbox{in } X_{2,0}),\\
&\psi_2(1\ot g\ot vh\ot 1)& &=& &1\ot g\ot v \ot h + {}^gv\ot g\ot h\ot 1&
&(\mbox{in }X_{1,1}\oplus X_{2,0}),\\
&\psi_2(1\ot r\ot 1)& &=& &1\ot r\ot 1&
&(\mbox{in }X_{0,2}),\\
\end{aligned}
\]
for all $g,h$ in $G$, $v$ in $V$, and
$r$ in $R$.
A calculation shows that $d_2\psi_2=\psi_1\delta_2$ on these elements.
Letting $g,h$ range over the elements of $G$, $v$ over
a $k$-basis of $V$, and $r$ over a $k$-basis of $R$, we
obtain a linearly independent set consisting of elements of the form
$1\ot g\ot h\ot 1$,
$1\ot vh\ot g\ot 1$,
$1\ot g\ot vh\ot 1$,
and $1\ot r\ot 1$
on which $\psi_2$ has already been defined.
Extend the $k$-basis of $R$ to a $k$-basis of $V\ot V$ by including
additional elements of the form $v\ot w$ for $v,w$ in $V$. Now
define $\psi_2(1\ot v\ot w\ot 1)$
arbitrarily subject to the condition that $d_2\psi_2 = \psi_1\delta_2$
on these elements.
Let
\begin{equation}\label{explicitpsivalues}
\begin{aligned}
\psi_2(1\ot vg\ot w\ot 1)\ &=& &\psi_2(1\ot v\ot {}^gw\ot g) + v\ot g\ot w\ot 1,&
\ \text{ and }\\
\psi_2(1\ot v\ot wg\ot 1)\ &=& &\psi_2(1\ot v\ot w\ot g)&
\end{aligned}
\end{equation}
for all $g$ in $G$ and $v,w$ in $V$.
One checks that $d_2\psi_2 = \psi_1\delta_2$ on these elements as well.
Extend these elements to a free $A$-bimodule basis of $A\ot\overline{A}
\ot\overline{A}\ot A$.
We may
define $\psi_2$ on the remaining free basis elements so that
$d_2\psi_2 =\psi_1\delta_2$. By construction, $\psi_2\phi_2$ is the
identity map on $X_2$.
\end{proof}
We will need some further values of $\phi$ in homological degree 3,
which we set in the next lemma.
The lemma is proven by directly checking the chain map condition.
Other values of $\phi_3$ may be defined by
extending to a free $A$-bimodule basis of $A\ot (\overline{A})^{\ot 3}\ot A$.
\begin{lemma}
\label{eqn:phi3}
We may choose the map
$\phi_3$ in Lemma~\ref{lem:maps}
so that
$$
\begin{aligned}
\phi_3(1\ot x\ot 1) & = 1\ot x\ot 1 \ \ \ (\mbox{on }X_{0,3}),\\
\phi_3(1\ot g\ot r\ot 1)
& = 1\ot g\ot r\ot 1 - (1\ot \sigma\ot 1\ot 1)(1\ot g\ot r\ot 1)\\
&\hspace{.4cm} + (1\ot 1\ot\sigma\ot 1)(1\ot \sigma\ot 1\ot 1)(1\ot g\ot r
\ot 1) \ \ \ (\mbox{on }X_{1,2})
\end{aligned}
$$
for all nonidentity $g$ in $G$, $r$ in $R$,
and $x$ in $(V\ot R)\cap (R\ot V)$.
\end{lemma}
\section{Homological PBW conditions}\label{sec:hom2}
We now give homological conditions for a filtered algebra
to be a PBW deformation of a Koszul algebra twisted
by a group.
These conditions are a translation of the necessary
homological Conditions~(\ref{obst0}),
(\ref{obst1}), and (\ref{obst2}) into conditions on the parameter
functions $\alpha,\beta, \lambda$ defining a potential deformation; we prove these conditions
are in fact sufficient.
Again, let $S$ be a Koszul algebra generated by a finite dimensional
vector space $V$ with defining relations
$R$ and an action of a finite group $G$ by graded automorphisms.
Let $R'$ be the space of group action relations defined in
(\ref{eqn:R-prime}). Let $A=S\rtimes G$.
We use the resolution $X_{\DOT}$ of (\ref{xij2}) to express
the Hochschild cohomology $\HH^{\DOT}(A)$.
\begin{remark}\label{rk:extend-fcns}
{\em
Just as in \cite[Lemma 8.2]{ueber}, we may
identify the $k$-linear functions
$$\alpha : R\rightarrow V\ot kG, \ \
\beta: R\rightarrow kG , \ \ \mbox{ and } \lambda: R'\rightarrow kG$$
with
2-cochains on the resolution $X_{\DOT}$, i.e.,
$A$-bimodule homomorphisms from $X_2$ to $A$.
Indeed, both $\alpha$ and $\beta$
extend uniquely to
cochains on $X_{0,2}= A\ot R\ot A$
since a cochain is an $A$-bimodule homomorphism
and thus determined there by its values on $R$.
Similarly,
$\lambda$ corresponds to a unique
cochain on $X_{1,1}$
taking the value $\lambda (g\ot v\ot 1- 1\ot {}^gv\ot g)$
on elements of the form $1\ot g \ot v\ot 1$.
Here we identify the target spaces of $\alpha,\beta,\lambda$
with subspaces of $A$.
We extend these cochains
defined by $\alpha, \beta,\lambda$ to all of $X_{\DOT}$
by setting them to be 0 on
the components of $X_{\DOT}$
on which we did not already define them.
}\end{remark}
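Concretely, as cochains on $X_{\DOT}$, the function $\alpha+\lambda$ is
supported on $X_{0,2}\oplus X_{1,1}$ and is homogeneous of graded degree
$-1$, while $\beta$ is supported on $X_{0,2}$ and has graded degree $-2$;
these degrees underlie the various degree arguments in the proofs below.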
We fix choices of chain maps $\phi$, $\psi$ satisfying Lemmas~\ref{lem:maps}
and~\ref{eqn:phi3}.
We define the Gerstenhaber bracket of cochains on $X_{\DOT}$
by transferring
the Gerstenhaber bracket~(\ref{eqn:G-brack}) on the (reduced) bar resolution
to $X_{\DOT}$ using these chain maps:
If $\xi,\nu$ are Hochschild cochains on $X_{\DOT}$, we define
\begin{equation}\label{eqn:K-brack}
[\xi,\nu] = \phi^* ( [\psi^*(\xi), \psi^*(\nu)]) ,
\end{equation}
another cochain on $X_{\DOT}$.
At the chain level, this bracket depends on the choice of chain maps
$\phi,\psi$, although at the level of cohomology, it does not.
The choices we have made in Lemmas~\ref{lem:maps} and~\ref{eqn:phi3}
provide valuable information, as we see next.
\begin{thm}\label{thm:hom}
Let $S$ be a Koszul algebra over the field $k$ generated by
a finite dimensional vector space $V$.
Let $G$ be a finite group acting on $S$ by
graded automorphisms.
The algebra $\chabl$ defined in (\ref{eqn:Habc}) is a PBW deformation of $S\rtimes G$
if and only if
\begin{itemize}
\item[(a)]
$d^*(\alpha+\lambda)=0$,
\item[(b)]
$[\alpha+\lambda, \alpha+\lambda]=2d^*\beta$, and
\item[(c)]
$[\lambda+\alpha, \beta]=0$,
\end{itemize}
where $\alpha,\beta,\lambda$ are identified with
cochains on the
twisted product resolution $X_{\DOT}$ as in Remark~\ref{rk:extend-fcns}.
\end{thm}
\begin{proof}
We adapt ideas of~\cite[Theorem 4.1]{BG}, first
translating the above Conditions~(a), (b), and~(c)
to conditions
on the reduced bar resolution itself.
The proof is similar to that of~\cite[Theorem 5.4]{quad},
but certain arguments must be
altered to allow for the additional parameter function $\lambda$.
If $\chabl$ is a PBW deformation of $S\rtimes G$, then by Proposition~\ref{prop:PBW},
there are Hochschild 2-cochains $\mu_1$ and $\mu_2$ on the
(reduced) bar resolution such that
the Conditions~(\ref{obst0}),
(\ref{obst1}), and (\ref{obst2}) hold, that is,
$\mu_1$ is a Hochschild 2-cocycle, $[\mu_1,\mu_1]=2\delta^*(\mu_2)$,
and $[\mu_1,\mu_2]$ is a coboundary.
By the proofs of Proposition~\ref{prop:PBW} and Lemma~\ref{lem:maps},
$$
\alpha+\lambda =\phi_2^*(\mu_1) \ \ \mbox{ and } \ \
\beta =\phi_2^*(\mu_2) .
$$
Since $\mu_1$ is a cocycle, it follows that $d^*(\alpha +\lambda)=0$,
that is, Condition (a) holds.
For Condition (b), note that each side of the equation is automatically 0
on $X_{3,0}$ and on $X_{2,1}$, by a degree argument.
We will evaluate each side of the equation on $X_{1,2}$ and on $X_{0,3}$.
By definition,
\begin{eqnarray*}
[\alpha + \lambda, \alpha+\lambda] & = & \phi^*[\psi^*(\alpha+\lambda), \psi^*(\alpha + \lambda) ] \\
&=& \phi^*([\psi^*\phi^*(\mu_1),\psi^*\phi^*(\mu_1)])\\
& = & 2 \phi^*(\psi^*\phi^*(\mu_1) (\psi^*\phi^*(\mu_1)\ot 1 - 1\ot \psi^*\phi^*(\mu_1)).
\end{eqnarray*}
We evaluate on $X_{1,2}$. By Lemma~\ref{eqn:phi3}, the image of $\phi_3$ on $X_{1,2}$
is contained in
\[
(kG\ot \Ima (\phi_2\big|_{X_{0,2}}))\oplus
(V\ot \Ima(\phi_2\big|_{X_{1,1}}))\cap
(\Ima (\phi_2\big|_{X_{0,2}})\ot kG)\oplus
(\Ima(\phi_2\big|_{X_{1,1}})\ot V) .
\]
Therefore, since $\psi_2\phi_2$ is the identity map, applying $\psi^*\phi^*(\mu_1)\ot 1 - 1\ot \psi^*\phi^*(\mu_1)$
to an element in the image of $\phi_3$ is
the same as applying $\mu_1\ot 1 - 1\ot \mu_1$.
Since $\mu_1$ is a Hochschild 2-cocycle, the image of $\mu_1\ot 1 - 1\ot \mu_1$ is 0 upon projection to $S\rtimes G$,
which implies that the image of $\mu_1\ot 1 - 1\ot \mu_1$ on $\phi_3(X_{1,2})$ is
contained in the subspace of $\overline{A}\ot \overline{A}$ spanned by all $g\ot v - {}^g v\ot g$
for nonidentity $g$ in $G$ and $v$ in $V$.
This is in the image of $\phi_1$, and so again, applying $\psi^*\phi^*(\mu_1)$ is the same as applying $\mu_1$.
Hence $[\alpha+\lambda,\alpha+\lambda] = \phi^*([\mu_1,\mu_1])$ on $X_{1,2}$.
Condition~(\ref{obst1})
then implies that Condition (b) holds on $X_{1,2}$.
A similar argument verifies
Condition~(b) on
$X_{0,3}$.
Condition (c) holds by a degree argument:
The bracket $[\lambda+\alpha, \beta]$ is cohomologous to $[\mu_1,\mu_2]$, which
by (\ref{obst2}) is a coboundary. So $[\alpha+\lambda,\beta]$ is itself a coboundary:
$[\lambda+\alpha, \beta]=d^*(\xi)$ for some 2-cochain $\xi$.
Now $[\lambda +\alpha,\beta]$ is of graded degree~$-3$, and
the only 2-cochain on $X_{\DOT}$ of graded degree~$-3$ is 0.
For the converse, assume Conditions (a), (b), and (c) hold.
We may now set
$\mu_1=\psi^*(\alpha + \lambda)$ and $\mu_2=\psi^*(\beta)$.
Set
\[
\gamma =\delta_3^*(\mu_2)-\tfrac{1}{2}[\mu_1,\mu_1] .
\]
Condition~(a) of the theorem implies that $\alpha+\lambda$
is a 2-cocycle and thus $\mu_1$ is a 2-cocycle
on the reduced bar resolution of $A$.
The 2-cocycle $\mu_1$ then is a first multiplication map on $A$ and
defines a first level deformation $A_1$
of $A=S\rtimes G$.
Next we will see that Condition~(b) implies
this first level deformation can be extended to a second
level deformation.
By Lemma~\ref{lem:maps},
$$
\begin{aligned}
\phi^*_3(\gamma) &= \phi^*_3 (\delta_3^*\left(\psi_2^*(\beta)\right))
-\tfrac{1}{2}\phi_3^*[\psi^*_2(\alpha+\lambda), \psi^*_2(\alpha+\lambda)]\\
&= d^*(\phi^*_2 \psi_2^*(\beta))
-\tfrac{1}{2}[\alpha+\lambda, \alpha+\lambda] \\
&= d_3^*(\beta)
-\tfrac{1}{2}[\alpha+\lambda, \alpha+\lambda] . \\
\end{aligned}
$$
Hence $\phi_3^*(\gamma)=0$ by Condition~(b).
This forces $\gamma$ to be a coboundary, say
$\gamma=\delta^*(\mu)$ for some 2-cochain $\mu$ on the reduced bar resolution,
necessarily of graded degree $-2$.
Now,
$$
d^*(\phi^*(\mu))= \phi^* (\delta^*( \mu))=\phi^*(\gamma) =0\, ,
$$
so $\phi^*(\mu)$ is a 2-cocycle.
Then there must be a 2-cocycle $\mu'$ on the reduced bar resolution
with $\phi^*\mu'=\phi^*\mu$.
We replace $\mu_2$ by $\tilde{\mu}_2=\mu_2-\mu+\mu'$ so that
$\phi^*(\tilde{\mu}_2)=\beta$
but
$$2\delta^*(\tilde{\mu}_2)= 2\delta^*(\mu_2+\mu')-2\gamma
= [\mu_1,\mu_1] $$
by the definition of $\gamma$,
since $\mu'$ is a cocycle.
Thus the obstruction to lifting $A_1$ to a second level deformation
using the multiplication map $\tilde{\mu}_2$ vanishes, and
$\mu_1$ and $\tilde{\mu}_2$ together define
a second level deformation $A_2$ of $A$.
We now argue that Condition~(c) implies $A_2$ lifts
to a third level deformation of $A$.
Adding the coboundary $\mu'-\mu$ to $\mu_2$ adds a coboundary to
$[\mu_2, \mu_1]$, and hence
$[\tilde{\mu}_2, \mu_1] = \delta^*_3(\mu_3)$
for some cochain $\mu_3$ on the reduced bar resolution of graded degree $-3$.
Thus the obstruction to lifting $A_2$ to a third level deformation
vanishes and the multiplication maps
$\mu_1, \tilde{\mu}_2, \mu_3$ define a third level deformation $A_3$ of $A$.
The obstruction to lifting $A_3$ to a fourth level deformation
of $A$ lies in $\HH^{3,-4}(A)$ by~\cite[Proposition 1.5]{BG}.
Applying the map $\phi^*$ to this obstruction gives
a cochain of graded degree $-4$ on $X_3$,
as $\phi$ is of graded degree $0$ as a chain map by Lemma~\ref{lem:maps}.
But $X_3$ is generated, as an $A$-bimodule, by elements of graded degree 3
or less,
and thus $\phi^*$ applied to
the obstruction is 0, implying that the obstruction
is a coboundary. Thus the deformation $A_3$ lifts
to a fourth level deformation $A_4$ of $A$. Similarly,
the obstruction to lifting an $i$-th level deformation $A_i$ of $A$
lies in $\HH^{3, -(i+1)}(A)$, and again since $S$ is Koszul,
the obstruction is a coboundary.
So the deformation $A_i$ lifts to $A_{i+1}$,
an $(i+1)$-st level deformation of $A$, for all $i\geq 1$.
The corresponding graded deformation
$A_t$ of $A$
is the vector
space $A[t]$ with multiplication determined by
$$
a*a'=aa'+\mu_1(a,a')t+\mu_2(a, a')t^2 + \mu_3(a, a')t^3+\ldots
$$
for all $a,a'\in A$.
We next explain that $\chabl$ is isomorphic, as a filtered algebra,
to the fiber $A_t|_{t=1}$.
First note that $A_t|_{t=1}$
is generated by $V$ and $G$
(since
the associated graded algebra of $A_t$ is $A$).
Thus we may define an algebra homomorphism
$$T_{kG}(kG \ot V \ot kG) \longrightarrow A_t\big|_{t=1}$$
and then
use Lemma~\ref{lem:maps} to verify that the elements
$$
\begin{aligned}
&r-\alpha(r)-\beta(r) \quad & &\text{ for }r\in R, \text{ and }\\
&g\ot v\ot 1- 1\ot {}^gv \ot g-\lambda(g \ot v) \quad & &\text{ for } g\in G, v\in V
\end{aligned}
$$
lie in the kernel.
We obtain a surjective homomorphism of filtered algebras,
$$\chabl\longrightarrow A_t\big|_{t=1}\, . $$
We consider the dimension over $k$ of
each of the filtered components
in the domain and range:
Each filtered component of
$\chabl$ has dimension at most that of the corresponding
filtered component of $S\rtimes G$
since its associated graded algebra
is necessarily a quotient of $S\rtimes G$.
But the associated graded algebra of $A_t|_{t=1}$
is precisely $S\rtimes G$, and so
$$
\dim_k(F^d(S\rtimes G))
\geq \dim_k(F^d(\chabl))
\geq \dim_k(F^d(A_t\big|_{t=1}))
= \dim_k(F^d(S\rtimes G)),
$$
where $F^d$ denotes the filtered component of degree $d$,
for each $d$ in $\NN$.
Thus these dimensions are all equal. It follows that
$\chabl\cong A_t\big|_{t=1}$, and $\chabl$ is a PBW deformation.
\end{proof}
We now prove Theorem~\ref{thm:main} as a consequence of Theorem~\ref{thm:hom},
translating the homological conditions into Braverman-Gaitsgory style
conditions.
\begin{proof}[Proof of Theorem~\ref{thm:main}]
We explained in Section~\ref{sec:PBW} that each PBW deformation of $S\rtimes G$
has the form $\chabl$ as defined in (\ref{eqn:Habc}) for some parameter
functions $\alpha,\beta,\lambda$.
Theorem~\ref{thm:hom} gives necessary and sufficient conditions for
such an algebra $\chabl$ to be a PBW deformation of $S\rtimes G$.
We will show that the Conditions (a), (b), and (c)
of Theorem~\ref{thm:hom} are equivalent
to those of Theorem~\ref{thm:main}.
When convenient, we identify
$$\Hom_{A^e}(A\ot \overline{A}^{\,\ot n}\ot A,A)
\cong
\Hom_{k}(\overline{A}^{\,\ot n}, A) \, .
$$
\subsection*{Condition (a): $d^*(\alpha +\lambda) = 0$}
The cochain $d^*(\alpha+\lambda)$ has homological degree~3 and is the
zero function
if and only if it is 0 on each of $X_{3,0}$, $X_{2,1}$, $X_{1,2}$, and
$X_{0,3}$. It is automatically 0 on $X_{3,0}$ since $d(X_{3,0})$
trivially intersects $X_{1,1}\oplus X_{0,2}$ on which $\alpha
+\lambda$ is defined.
On $X_{2,1}$,
$d^*(\alpha)= 0$
automatically, as $\alpha$ is 0 on $X_{2,0}\oplus X_{1,1}$.
We evaluate $d^*(\lambda)$
on the elements of a free $A^e$-basis of $X_{2,1}$, using the
identification (\ref{eqn:xij1}) for evaluating the differential:
\[
\begin{aligned}
d^*(\lambda) &(1\ot g\ot h\ot v\ot 1) \\
& = \lambda(g\ot h\ot v\ot 1 - 1\ot gh\ot v\ot 1
+ 1\ot g\ot {}^hv\ot h \\
&\quad\quad + {}^{gh}v\ot g\ot h\ot 1
- 1\ot g\ot h\ot v ) \\
&= g \lambda(h\ot v)-\lambda(gh\ot v)+ \lambda(g\ot {}^h v) h
\end{aligned}
\]
in $A$ for all $g,h$ in $G$ and $v$ in $V$,
which can be rewritten as Theorem~\ref{thm:main}(1).
Therefore $d^*(\alpha + \lambda) |_{X_{2,1}} = 0$
if and only if
Theorem~\ref{thm:main}(1) holds.
(If $g$ or $h$ is the identity group element $1_G$, then in the evaluation above,
some of the terms are 0 as we are working with the reduced bar resolution.
The condition remains the same in these cases, and merely corresponds to the
condition $\lambda(1_G \ot v) =0$ for all $v$ in $V$.)
On $X_{1,2}$, $d^*(\alpha+\lambda)=0$
if and only if
\[
\begin{aligned}
d^*(\alpha +\lambda)& (1\ot g\ot r\ot 1)\\
& = (\alpha + \lambda) ( g\ot r \ot 1 - 1\ot {}^gr \ot g
- (\sigma\ot 1\ot 1)(g\ot r\ot 1) - 1\ot g\ot r ) \\
&= g \alpha(r) - \alpha( {}^gr)g
- (1\ot \lambda)(\sigma\ot 1)(g\ot r) - (\lambda\ot 1)(g\ot r) \,
\end{aligned}
\]
vanishes for all $g$ in $G$ and $r$ in $R$.
(Note that the multiplication map takes $r$
to 0 in $A$.)
This is equivalent to the equality
\[
1\ot \alpha - (\alpha \ot 1) (1\ot \sigma)(\sigma\ot 1)
= (1\ot \lambda)(\sigma\ot 1) + \lambda\ot 1
\]
as functions on $kG\ot R$ with values in $A$.
Thus $d^*(\alpha + \lambda)|_{X_{1,2}} = 0$ if and only if
Theorem~\ref{thm:main}(3) holds.
On $X_{0,3}$, $d^*(\lambda)$ is automatically 0 since $\lambda$
is 0 on $X_{0,2}$. So we compute $d^*(\alpha)|_{X_{0,3}}$.
Consider $x$ in $(R\ot V)\cap (V\ot R)$. Then
\begin{equation}\label{image}
d^*(\alpha) (1\ot x\ot 1)
= \alpha (x\ot 1 - 1\ot x)
= (1\ot \alpha-\alpha\ot 1)(x).
\end{equation}
So $d^*(\alpha + \lambda)|_{X_{0,3}}= 0$ if and only if
$ 1\ot \alpha - \alpha\ot 1$ has image 0 in $A$, i.e.,
Theorem~\ref{thm:main}(6) holds.
\subsection*{Condition (b): $[\alpha+\lambda, \alpha + \lambda] =
2d^*(\beta)$}
On $X_{3,0}$ and on $X_{2,1}$, both sides of this equation
are automatically 0, as their graded degree is $-2$.
We will compute their values on $X_{1,2}$ and on $X_{0,3}$.
First note that since $\lambda$ and $\alpha$ each have homological
degree 2, by the definition~(\ref{eqn:G-brack}) of bracket, $[\alpha,\lambda]
=[\lambda,\alpha]$ and so
\[
[\alpha + \lambda, \alpha +\lambda]
=[\alpha,\alpha]+2[\alpha,\lambda]+[\lambda,\lambda].
\]
We will compute $[\lambda,\lambda]$, $[\alpha,\lambda]$,
and $[\alpha,\alpha]$.
Note that $[\lambda,\lambda]$ can take nonzero values only on
$X_{1,2}$. We will compute its values on elements of the form
$1\ot g\ot r\ot 1$ for $g$ in $G$ and $r$ in $R$.
By (\ref{eqn:G-brack}), (\ref{eqn:K-brack}), and Lemmas~\ref{lem:maps} and~\ref{eqn:phi3},
$[\lambda , \lambda](1\ot g\ot r\ot 1)
=2\lambda(\lambda\ot 1)(g\ot r)$.
Similarly,
\[
[\alpha,\lambda ] (1\ot g\ot r\ot 1)
= - \lambda (1\ot \alpha) (g\ot r).
\]
Finally, note that $[\alpha,\alpha]|_{X_{1,2}}= 0$ automatically
for degree reasons.
Just as in our earlier calculation, we find that
\[
d^*(\beta)(1\ot g\ot r\ot 1) =
(1\ot \beta - (\beta\ot 1)(1\ot \sigma)(\sigma\ot 1))
(g\ot r).
\]
Therefore, $[\alpha+\lambda,\alpha+\lambda] = 2d^*(\beta)$
on $X_{1,2}$ if and only if
\[
2\lambda (\lambda\ot 1) - 2 \lambda (1\ot \alpha) =
2(1\ot \beta) - 2(\beta\ot 1)(1\ot \sigma)(\sigma\ot 1)
\]
on $kG\ot R$.
This is equivalent to Theorem~\ref{thm:main}(2).
On $X_{0,3}$, the bracket $[\lambda,\lambda]$ vanishes.
We compute $[\alpha,\lambda]$ and $[\alpha,\alpha]$
on an element $1\ot x\ot 1$ of $X_{0,3}$
with $x$ in $(V\ot R)\cap (R\ot V)$.
Note that $\psi^*(\alpha)(r)=\alpha\psi(r)=
\alpha(\psi\phi)r=\alpha(r)$ for all $r$ in $R$.
Thus
$$
(\psi^*(\alpha)\ot 1-1\ot \psi^*(\alpha))(x)
=
(\alpha\ot 1-1\ot\alpha)(x)
$$
and therefore
\begin{eqnarray*}
[ \alpha , \alpha ](1\ot x\ot 1) & = & 2\psi^*(\alpha)(\alpha\ot 1
- 1\ot \alpha)(x) \ \mbox{ and}\\
{[} \alpha , \lambda {]} (1\ot x\ot 1) & = & \psi^*(\lambda)(\alpha\ot 1 -
1\ot \alpha)(x).
\end{eqnarray*}
We apply $\psi$ to $(\alpha\ot 1 - 1\ot \alpha)(x)$
using Lemma~\ref{lem:maps}.
Since $(\alpha\ot 1)(x)$ lies in $(V\ot kG)\ot V \subset A\ot A$
and $(1\ot \alpha)(x)$ lies in $V\ot (V\ot kG)\subset A\ot A$,
we use~(\ref{explicitpsivalues}) to apply $\psi$:
$$
\psi(\alpha\ot 1 - 1\ot \alpha)(x)
=
(\psi(1\ot \sigma)(\alpha\ot 1) - \psi(1\ot \alpha))(x) \ + \ y
$$
for some element $y$ in $X_{1,1}$. However, $\alpha$ is zero on $X_{1,1}$,
so
$$
\psi^*(\alpha)(\alpha\ot 1 - 1\ot \alpha)(x)
=
\alpha\psi((1\ot \sigma)(\alpha\ot 1) - 1\ot \alpha)(x) \, .
$$
We assume Condition (a) which we have shown implies Condition~(6) of
Theorem~\ref{thm:main}, i.e.,
$((1\ot \sigma)(\alpha\ot 1) - 1\ot \alpha)(x)$
lies in $R\ot kG$ since it is zero upon projection to $A$.
By the proof of Lemma~\ref{lem:maps},
$$\phi((1\ot \sigma)(\alpha\ot 1) - 1\ot \alpha)(x)
=
((1\ot \sigma)(\alpha\ot 1) - 1\ot \alpha)(x),
$$
and applying $\alpha\psi$ gives
$\alpha((1\ot \sigma)(\alpha\ot 1) - 1\ot \alpha)(x)$
since
$\psi\phi=1$.
Hence
$$
[ \alpha, \alpha ](1\ot x\ot 1) =
2\alpha ((1\ot \sigma)(\alpha\ot 1) - 1\ot \alpha) (x)\, .
$$
Similarly, we apply $\psi^*(\lambda)$ to $(\alpha\ot 1 - 1\ot \alpha)(x)$
again using~(\ref{explicitpsivalues}).
Recall that $\lambda$ is only nonzero on $X_{1,1}$,
and $\psi(1\ot\alpha)$ intersects $X_{1,1}$ at $0$;
hence $\psi^*(\lambda)(\alpha\ot 1-1\ot\alpha)(x) =
\psi^*(\lambda)(\alpha\ot 1)(x)$
and
$$
[\alpha,\lambda](1\ot x\ot 1)
=
\psi^*(\lambda)(\alpha\ot 1)(x)=
(\sum_{g\in G}\alpha_g\ot
\lambda(g\ot - )) (x) \, .
$$
Therefore $[\alpha+\lambda,\alpha+\lambda]=2d^*(\beta)$ on
$X_{0,3}$ if and only if
Theorem~\ref{thm:main}(4) holds.
\subsection*{Condition (c): $[\alpha+\lambda, \beta]= 0$}
On $X_{3,0}$, $X_{2,1}$, and $X_{1,2}$, the left side of
this equation is automatically 0 for degree reasons. We will compute values
on $X_{0,3}$. Similar to our previous calculation,
we find
\[
[\lambda,\beta ] = \lambda (\beta\ot 1)
\quad \mbox{ and } \quad
[\alpha,\beta] = \beta ((1\ot \sigma)(\alpha\ot 1) - 1\ot \alpha)
\]
on $(V\ot R)\cap (R\ot V)$.
So $[\alpha+\lambda,\beta]= 0$ if and only
if $\beta ((1\ot\sigma)(\alpha\ot 1) - 1\ot \alpha) =
- \lambda(\beta\ot 1) $.
This is precisely Theorem~\ref{thm:main}(5).
\end{proof}
\section{Application:
Group actions on polynomial rings}
\label{sec:CDL}
We now consider the special case when
$S$ is the symmetric algebra $S(V)$ of a finite dimensional
$k$-vector space $V$.
Let $G$ be a finite group
acting on $S(V)$ by graded automorphisms.
Let $\chlk$ be the $k$-algebra
generated by the group ring $kG$ together with
the vector space $V$ and subject to the relations
\begin{itemize}
\item
$gv-\, ^gv g -\lambda(g,v), \quad\ \ \text{ for }g\text{ in } G, \ v\text{ in } V$
\item
$vw-wv-\kappa(v,w), \quad\text{ for }v,w \text{ in } V,$
\end{itemize}
where
$$
\lambda:kG \times V \rightarrow kG, \quad
\kappa: V \times V \rightarrow kG \oplus (V\otimes kG)
$$
are bilinear functions.
Letting $\kappa^C$ and $\kappa^L$ be the projections of $\kappa$
onto $kG$ and $V\ot kG$, respectively,
$\chlk$ is the algebra $\chabl$ from earlier sections
with $\alpha=\kappa^L$ and $\beta=\kappa^C$.
Its homogeneous version
is the algebra
$$\text{HomogeneousVersion}(\chlk)=S(V)\rtimes G
=\mathcal{H}_{0,0}\ .$$
We say that $\chlk$ is a {\em Drinfeld orbifold algebra}
if it has the PBW property:
$$\text{Gr}\,{\chlk} \cong S(V)\rtimes G
$$
as graded algebras.
Thus Drinfeld orbifold algebras
are PBW deformations of $S(V)\rtimes G$ .
In characteristic zero,
our definition of Drinfeld orbifold algebra coincides with that in~\cite{doa},
up to isomorphism, even though no parameter $\lambda$ appears there.
This is a consequence of Theorem~\ref{thm:nonmod} in the next section:
In this nonmodular case, $\chlk$ is isomorphic to $\cH_{0,\kappa'}$ for some $\kappa'$.
The algebras $\mathcal{H}_{\lambda,\kappa}$ include as special cases many algebras
of interest in the
literature, and our Theorem~\ref{RawPBWConditions}
below unifies results giving necessary
and sufficient conditions on parameter functions for $\mathcal{H}_{\lambda,\kappa}$ to
have the PBW property.
When $\lambda =0$ and $\kappa^L=0$,
Drinfeld orbifold algebras $\mathcal{H}_{0,\kappa}$
include Drinfeld's Hecke algebras~\cite{Drinfeld} and
Etingof and Ginzburg's symplectic reflection algebras~\cite{EG}.
When $\lambda=~0$ and $\kappa^C=0$, Drinfeld orbifold algebras $\mathcal{H}_{0,\kappa}$
exhibit a Lie type structure: Many of the conditions of
Theorem~\ref{RawPBWConditions} below
are vacuous in this case, while
Condition~(3) states that $\kappa^L$ is $G$-invariant and
Conditions~(4) and~(6) are analogs of the Jacobi identity twisted by the group action.
When $\kappa =0$, Drinfeld orbifold algebras $\mathcal{H}_{\lambda,0}$
include Lusztig's graded affine Hecke algebras \cite{Lusztig89}.
The following theorem simultaneously generalizes \cite[Theorem 3.1]{doa}
and \cite[Theorem 3.1]{ueber}.
\begin{thm}
\label{RawPBWConditions}
Let $G$ be a finite group acting linearly on $V$, a finite dimensional
$k$-vector space. Then $\chlk$
is a PBW deformation of $S(V)\rtimes G$ if and only if
\begin{itemize}
\item[(1)]
$\lambda(gh,v)=\lambda(g,\, ^hv)h+g\lambda(h,v)$,
\item[(2)]\rule{0ex}{4ex}
$
\kappa^C(\, ^gu,\, ^gv)g-g\kappa^C(u, v)
\ =\
\lambda\big(\lambda(g,v), u\big)-\lambda\big(\lambda(g,u), v\big)
+\displaystyle{\sum_{a\in G}}\lambda\big(g,\kappa^L_a(u,v)\big)a\,
$
\item[(3)]\rule{0ex}{4ex}
$\ ^g\big(\kappa^L_{g^{-1}h}(u,v)\big)
-\kappa^L_{hg^{-1}}(^gu,\ ^gv)
=
(^hv-\ ^gv)\lambda_h(g,u)-
(^hu-\ ^gu)\lambda_h(g,v)$,
\item[(4)]\rule{0ex}{4ex}
\vspace{-3.3ex}
$$
\hspace{-17ex}
\begin{aligned}\hphantom{x}
0\ =\ & \ 2\sum_{\sigma\in\Alt_3}
\kappa^C_g(v_{\sigma(1)},v_{\sigma(2)})(v_{\sigma(3)}-\, ^gv_{\sigma(3)})
\\
& +\ \sum_{\substack{a\in G\\ \sigma\in \Alt_3\rule{0ex}{1.5ex}}}
\kappa_{ga^{-1}}^L\big(v_{\sigma(1)}+\, ^a{v_{\sigma(1)}},
\kappa_a^L(v_{\sigma(2)},v_{\sigma(3)})\big)
\\
&-2\sum_{\substack{a\in G\\ \sigma\in \Alt_3\rule{0ex}{1.5ex}}}
\kappa_a^L(v_{\sigma(1)},v_{\sigma(2)})\lambda_g(a,v_{\sigma(3)})\, ,
\end{aligned}
$$
\item[(5)]\rule{0ex}{3ex}
\vspace{-3.5ex}
$$
\hspace{-10ex}
\begin{aligned}
&2\sum_{\sigma\in\Alt_3}
\lambda\big(\kappa^C(v_{\sigma(1)},v_{\sigma(2)}),v_{\sigma(3)}\big)
\\ & \quad\quad\quad
= -\sum_{\substack{a\in G\\ \sigma\in \Alt_3\rule{0ex}{1.5ex}}}
\kappa_{ga^{-1}}^C\big(v_{\sigma(1)}+\ ^av_{\sigma(1)},
\kappa_a^L(v_{\sigma(2)},v_{\sigma(3)})\big) ,
\end{aligned}
$$
\item[(6)]\rule[-2ex]{0ex}{3ex}
$0=
\kappa_g^L(u,v)(w-\ ^gw)+
\kappa_g^L(v,w)(u-\ ^gu)+
\kappa_g^L(w,u)(v-\ ^gv) $\, ,
\end{itemize}
in $S(V)\rtimes G$, for all $g,h$ in $G$ and all $u,v,w, v_1, v_2, v_3$ in $V$.
\end{thm}
\begin{proof}
The theorem follows from
Theorem~\ref{thm:main} by rewriting the conditions explicitly on
elements.
\end{proof}
Alternatively,
the conditions of the theorem follow from
strategic and tedious application of
the Composition-Diamond Lemma
(such as in the proof of \cite[Theorem~3.1]{doa}).
Condition (1) follows from consideration of overlaps of the
form $ghv$ for $g,h$ in $G$, $v$ in $V$.
For Conditions~(2) and~(3),
we consider overlaps of the form $gwv$ for $w$ in $V$;
terms of degree~$1$ give rise to Condition~(3) while
those of degree~$0$ give rise to Condition~(2).
Overlaps of the form $uvw$ for $u$ in $V$
give the other conditions:
Terms of degree~$0$ give rise to Condition~(5),
terms of degree~$1$ give rise to Condition~(4),
and terms of degree~$2$ give rise Condition~(6).
Note that we assume Condition~(6) to deduce Conditions~(4)
and~(5).
In the theorem above,
we may set $\kappa^L = 0$ to obtain the conditions
of~\cite[Theorem~3.1]{ueber}
or instead set $\lambda = 0$
to obtain the conditions of~\cite[Theorem~3.1]{doa}.
Note that in Theorem~\ref{RawPBWConditions},
Condition~(3) measures the extent to which
$\kappa^L$ is $G$-invariant. Indeed,
the failure
of $\kappa^L$ to be $G$-invariant is recorded by $d^*(\lambda)$, and
so $\lambda$ is a cocycle if and only if $\kappa^L$
is invariant.
Condition~(3) in particular implies that $\kappa_{1_G}^L$ is
$G$-invariant.
The conditions in the theorem also generalize a special case of
Theorem~2.7 in \cite{Khare} by Khare:
He more generally considered actions of cocommutative algebras, while we
restrict to actions of group algebras $kG$.
Khare more specifically restricted $\kappa^L$ to take values in the subspace
$V\cong V\ot k$ of $V\ot kG$.
We next give some examples of Drinfeld orbifold algebras.
The first example exhibits parameters $\kappa^C$, $\kappa^L$, and $\lambda$
all nonzero. The second example shows that a new
class of deformations is possible in the modular setting;
see Remark~\ref{counterexample}.
\begin{example}
{\em Let $k$ have prime characteristic $p>2$,
and
$V=kv_1 \oplus kv_2\oplus kv_3$.
Let $G\leq \text{GL}_3(k) $ be the cyclic group of order $p$
generated by the transvection $g$ in $\text{GL}(V)$
fixing $v_1,v_2$ and mapping $v_3$ to $v_1 + v_3$:
$$
G=\left<g=\left(\begin{smallmatrix} 1 & 0 & 1\\0 & 1 & 0 \\ 0 & 0 & 1
\end{smallmatrix}\right)\right>\, .
$$
Define $$\lambda(g^i,v_3)=ig^{i-1},\
\kappa^C(v_1,v_3)=g=-\kappa^C(v_3,v_1),\
\kappa^L(v_1, v_3)=v_2=-\kappa^L(v_3, v_1),$$
and set $\lambda, \kappa^C, \kappa^L$ to be zero on all other pairs
of basis vectors.
Then
\[
\begin{aligned}
\chlk=T_{kG}(kG \ot V\ot kG)/&(gv_1-v_1g, \ gv_2-v_2g, \ gv_3-v_1g-v_3g-1,\\
& \hspace{.3cm}
v_1v_3-v_3v_1-v_2-g, \ v_1v_2-v_2v_1, \ v_2v_3-v_3v_2 )
\end{aligned}
\]
is a PBW deformation of $S(V)\rtimes G$
by Theorem~\ref{RawPBWConditions}.
}\end{example}
\begin{example}\label{counterexampleexplicit}
{\em
Let $k$ have prime characteristic $p>2$ and $V = kv\oplus kw$.
Suppose $G\leq {\rm{GL}}_2(k)$ is the cyclic group of order $p$ generated
by $g = \left(\begin{smallmatrix} 1 & 1 \\ 0 & 1 \end{smallmatrix}\right)$ so that
${}^g v =v$ and ${}^g w= v+w$.
Define
\[
\lambda( g^i,v) = i g^i,\
\lambda(1,w)= \lambda( g,w) = 0,\
\lambda( g^i,w) = \tbinom{i}{2}\, g^i \text{ for }i>2,
\]
and $\kappa = 0$.
Then one may check the conditions of Theorem~\ref{RawPBWConditions} to conclude
that
$$\mathcal{H}_{\lambda,0}
= T_{kG}(kG\ot V\ot kG) / (gv-vg - g , \ gw-vg-wg , \ vw-wv)
$$
is a PBW deformation of $S(V)\rtimes G$.
}\end{example}
\section{Comparison of modular and nonmodular settings}
We now turn to the nonmodular setting, when the characteristic
of the underlying field $k$ does not divide the order of the acting
group $G$. We compare algebras modelled on Lusztig's
graded affine Hecke algebra~\cite{Lusztig89}
to algebras modelled
on Drinfeld's Hecke algebra~\cite{Drinfeld} (such as the symplectic reflection
algebras of Etingof and Ginzburg~\cite{EG}).
The following theorem strengthens Theorem~4.1 of~\cite{ueber}
while simultaneously generalizing it to the setting
of Drinfeld
orbifold algebras (see~\cite{doa}) in the nonmodular setting.
The theorem was originally shown for Coxeter groups
and Lusztig's graded affine Hecke algebras in~\cite{RamShepler}.
\begin{thm}\label{thm:nonmod}
Suppose $G$ acts linearly on a finite dimensional vector
space $V$
over a field $k$ whose characteristic is coprime to $|G|$.
If the algebra $\mathcal{H}_{\lambda,\kappa}$ defined
in Section~\ref{sec:CDL} is a PBW deformation
of $S(V)\rtimes G$ for some parameter functions
$$\lambda: kG\times V\rightarrow kG\quad
\text{and}\quad \kappa: V\times V \rightarrow kG \oplus
(V\otimes kG),$$
then there exists a parameter function
$$
\kappa': V\times V \rightarrow kG \oplus (V\otimes kG)$$
such that
$$
\mathcal{H}_{\lambda, \kappa}\cong \mathcal{H}_{0,\kappa'}
$$
as filtered algebras and thus $\mathcal{H}_{0,\kappa'}$
also exhibits the PBW property.
\end{thm}
\begin{proof}
As in~\cite{ueber},
define $\gamma:V\otimes kG\rightarrow kG$ by
$$
\gamma(v\otimes g)=\tfrac{1}{|G|}\sum_{a,b\, \in G}
\lambda_{ab}\big(b, \, ^{b^{-1}}v\big) ag
=\sum_{a\in G}\ \gamma_a(v)\, ag
$$
for $\gamma_a:V \rightarrow k$ giving the
coefficient of $ag$ in $G$, and as
before, for each $h$ in $G$, $\lambda_h: kG\times V\rightarrow k$ is
defined by $\lambda(b, v) = \sum_{h\in G}\lambda_h(b, v) h$
for $b$ in $G$ and $v$ in $V$.
We abbreviate $\gamma(u)$ for $\gamma(u\ot 1)$
for $u$ in $V$ in what follows for simplicity of notation.
Define a parameter function $\kappa':V\times V
\rightarrow kG\oplus (V\otimes kG)$
by
$$
\begin{aligned}
\kappa'(u,v)\ =\
&\ \ \gamma(u)\gamma(v) -\gamma(v)\gamma(u)
+\lambda(\gamma(u), v)- \lambda(\gamma(v), u)\\
&+\kappa(u,v)- \kappa^L(u,v)\\
&+\tfrac{1}{|G|}\sum_{g\in G}
(1-\gamma)\big((\, ^g\kappa^L)(u, v)\big)
\end{aligned}
$$
for $u,v$ in $V$. Here, $\kappa^L(u,v)$ is again the
degree $1$ part of $\kappa$, i.e.,
the projection of $\kappa(u,v)$
to $V\otimes kG$,
and we take the $G$-action on $\kappa^L$
induced from the action of $G$ on itself
by conjugation,
i.e.,
$(^g\kappa^L)(u,v)=\, ^g(\kappa^L(\, ^{g^{-1}}u,\, ^{g^{-1}}v))$
with $^g(v\otimes h)=\, ^gv \otimes ghg^{-1}$
for $g,h$ in $G$.
Let
$$
F=T_{kG}(kG\ot V\ot kG)
$$
and identify $v$ in $V$ with $1\ot v\ot 1$ in $F$.
Define an algebra homomorphism
$$
f:F\rightarrow \mathcal{H}_{\lambda, \kappa}
\quad\text{ by }\quad
v\mapsto v + \gamma(v)
\text{ and }
g\mapsto g
\quad\text{for all } g\in G, v\in V,
$$
after identifying $\chlk$ with a quotient of $F$.
We will use Theorem~\ref{RawPBWConditions}
to verify that the relations defining $\mathcal{H}_{0,\kappa'}$
as a quotient of $F$ lie in the kernel of $f$.
It will follow that
$f$ extends to a filtered algebra homomorphism
$$f:\mathcal{H}_{0,\kappa'}\rightarrow \mathcal{H}_{\lambda,\kappa}\, .$$
We first check that
elements $uv-vu-\kappa'(u,v)$ in $F$ for $u,v$ in $V$
are mapped to zero under $f$.
On one hand,
$\kappa'(u,v)$ in $F$ is mapped under
$f$ to
$$
\kappa'(u,v)+\tfrac{1}{|G|}\sum_{g\in G}
\gamma\big((\, ^g\kappa^L)(u,v)\big).
$$
On the other hand, the commutator $[u,v]=uv-vu$ in $F$ maps to the
commutator
$$[u+\gamma(u),v+\gamma(v)]
=
[u,v] + [\gamma(u), \gamma(v)]
+ \big(u\gamma(v)-\gamma(v)u-v\gamma(u)+\gamma(u)v\big)
$$ in $\chlk$. But $[u,v]$ is $\kappa(u,v)$
in $\chlk$, and $\kappa'(u,v)$ by definition expresses
the commutator $[\gamma(u),\gamma(v)]$
in terms of $\kappa(u,v)$
and other terms.
Hence $[u,v]+[\gamma(u),\gamma(v)]$
simplifies
to
$$
\kappa'(u,v) - \lambda(\gamma(u),v)+\lambda(\gamma(v),u)
+\kappa^L(u,v)-\tfrac{1}{|G|}
\sum_{g\in G}\,(1-\gamma)\big((^g\kappa^L)(u, v)\big)
$$
in $\chlk$.
We may also rewrite
$$
u\gamma(v)-\gamma(v)u-v\gamma(u)+\gamma(u)v
$$
as
$$
\lambda(\gamma(u),v)-\lambda(\gamma(v),u)
-
\sum_{g\in G} \big(\gamma_g(v)(\,^gu-u)g
-\gamma_g(u)(\,^gv-v)g \big).
$$
Hence, the relation $uv-vu-\kappa'(u,v)$
in $F$ maps under $f$
to
\begin{equation}
\label{relationzero}
\kappa^L(u,v)
-\tfrac{1}{|G|}\sum_{g\in G}\,(^g\kappa^L)(u,v)
- \sum_{g\in G} \big(\gamma_g(v)(\,^gu-u)g
-\gamma_g(u)(\,^gv-v)g \big).
\end{equation}
We may then argue as in
the proof of Theorem~4.1 of~\cite{ueber}
to show that Condition~(3) of Theorem~\ref{RawPBWConditions}
implies that
$$
\begin{aligned}
\sum_{g\in G} \big(\gamma_g(v)(\,^gu-u)g
-\gamma_g(u)(\,^gv-v)g \big)
& =\kappa^L(u,v)-\tfrac{1}{|G|}
\sum_{g,a\in G} \ ^g\big(\kappa^L_a(\,^{g^{-1}}u, \,^{g^{-1}}v)\big)gag^{-1}\\
&=\kappa^L(u,v)-\tfrac{1}{|G|}
\sum_{g\in G} \ (^g\kappa^L)(u, v)\, .
\end{aligned}
$$
Thus expression~(\ref{relationzero}) above is zero
and $uv-vu-\kappa'(u,v)$ lies in the kernel of $f$ for all
$u,v$ in $V$.
We may follow the rest of the proof of Theorem~4.1 of~\cite{ueber}
to see that $gv - \, ^g v g$ lies in the kernel of $f$
for all $g$ in $G$ and $v$ in $V$
and that $f$ is an isomorphism.
\end{proof}
\begin{remark}\label{counterexample}
{\em
Theorem~\ref{thm:nonmod} above is false in the modular setting,
i.e., when char~$(k)$ divides $|G|$.
Indeed, Example~\ref{counterexampleexplicit}
gives an algebra $\mathcal{H}_{\lambda,0}$
exhibiting the PBW property for some parameter function
$\lambda$, but we claim that there is no parameter
$\kappa ' : V\times V \rightarrow kG\oplus (V\ot kG)$
for which $\mathcal{H}_{\lambda,0}\cong \mathcal{H}_{0, \kappa'}$ as filtered algebras.
If there were,
then $\mathcal{H}_{0,\kappa'}$ would exhibit the PBW property and
any isomorphism $f: \mathcal{H}_{\lambda,0}\rightarrow \mathcal{H}_{0, \kappa'}$
would map the relation
$$gv-vg-g=gv-\, ^gvg-\lambda(g,v)=0$$
in $\mathcal{H}_{\lambda,0}$ to $0$ in
$\mathcal{H}_{0, \kappa'}$.
But $f$ is an algebra homomorphism
and takes the filtered degree $1$ component of
$\mathcal{H}_{\lambda,0}$ to that of $\mathcal{H}_{0, \kappa'}$,
giving a relation
$$f(g) f(v) -f(v)f(g)-f(g)=0$$
in $ \mathcal{H}_{0, \kappa'}$ with first two
terms of the left hand side of filtered degree $1$.
In particular, the sum of the terms
of degree $0$ vanish. But this implies that $f(g)=0$
since the degree $0$
terms of $f(g)f(v)-f(v)f(g)$ cancel with each other
as $kG$ is commutative.
This contradicts the assumption that $f$ is an isomorphism. }
\end{remark}
|
2,869,038,153,931 | arxiv | \section{\uppercase{Introduction}}
\label{sec:introduction}
Lung sounds auscultation is the first and most common examination carried out by every general practitioner or family doctor. It is fast, easy and well known procedure, popularized by La\"ennec \cite{stethoscopeInventor}, who invented the stethoscope. Nowadays, different variants of such tool can be found on the market, both analog and electronic, but regardless of the type of stethoscope, this process still is highly subjective. Indeed, an auscultation normally involves the usage of a stethoscope by a physician, thus relying on the examiner's own hearing, experience and ability to interpret psychoacoustical features. Another strong limitation of standard auscultation can be found in the stethoscope itself, since its frequency response tends to attenuate frequency components of the lung sound signal above nearly $120 \ Hz$, leaving lower frequency bands to be analyzed and to which the human ear is not really sensitive \cite{intro1} \cite{auscultation}. A way to overcome this limitation and inherent subjectivity of the diagnosis of diseases and lung disorders is by digital recording and subsequent computerized analysis \cite{AI4lungs}.
Historically many efforts have been reported in literature to automatically detect lung sound pathologies by means of digital signal processing and simple time-frequency analysis \cite{AI4lungs}. In recent years, however, machine learning techniques have gained popularity in this field because of their potential to find significant diagnostic information relying on statistical distribution of data itself \cite{intro2}. Palaniappan et al. (2013) report state of the art results are obtained by using supervised learning algorithms such as support vector machine (SVM), decision trees and artificial neural networks (ANNs) trained with expert-engineered features extracted from audio signals. However, more recent studies \cite{CNNsLungSounds} have proved that such benchmark results can be obtained through end-to-end learning, by means of deep neural networks (DNNs), a type of machine learning algorithm that attempts to model high-level abstractions in complex data, composing its processing from multiple non-linear transformations, thus incorporating the feature extraction itself in the training process. Among the most successful deep neural network architectures, convolutional neural networks (CNNs) together with recurrent neural networks (RNNs) have been shown to be able to find useful features in the lung sound signals as well as to track temporal correlations between repeating patterns \cite{CNNsLungSounds}.
However, information fusion between different auscultation points (APs) and integration of decision making processes to guide the examiner throughout the auscultation seems to be absent in the literature. Reinforcement Learning (RL), a branch of machine learning inspired by behavioral psychology \cite{Shteingart2014ReinforcementLA}, can possibly provide a way to integrate auscultation path information, interactively, at data acquisition stage. In the common RL problem setting, the algorithm, also referred as agent, learns to solve complex problems by interacting with an environment, which in turn provides positive or negative rewards depending on the results of the actions taken. The objective of the agent is thus to find the best \textit{policy}, which is the best action to take, given a state, in order to maximize received reward and minimize received penalty. We believe that RL framework is the best choice for the solution of our problem, i.e finding the lowest number of APs while maintaining a minimum acceptable diagnosis accuracy. As a result, the agent will learn what are the optimal locations to auscultate the patient and in which order to examine them. As far as we know, this is the first attempt to use RL to perform interactive lung auscultation. \\
\indent RL has been successful in solving a variety of complex tasks, such as computer vision \cite{RLAcomputervision}, video games \cite{RLAvideogames}, speech recognition \cite{RLAspeech} and many others. RL can also be effective in feature selection, defined as the problem of identifying the smallest subset of highly predictive features out of a possibly large set of candidate features \cite{HAZRATIFARD20131892}. A similar problem was further investigated \cite{7175757} where the authors develop a Markov decision process (MDP) \cite{Puterman:1994:MDP:528623} that through dynamic programming (DP) finds the optimal feature selecting sequence for a general classification task. Their work motivated us to take advantage of this framework with the aim of applying it to find the lowest number of APs, i.e the smallest set of features, while maximizing the accuracy in classifying seriousness of breath phenomena detected during auscultation, which is in turn directly proportional to diagnosis accuracy.
This work is organized as follows. Section \ref{sec:mathematics} recalls the mathematical background useful to follow the work. Section \ref{sec:RL-based-interactive-auscultation} formally defines our proposed solution and gives a systematic overview of the interactive auscultation application. Section \ref{evaluation} describes the experimental framework used to design the interactive agent and evaluate its performance. Section \ref{results} shows the results of our experiments, where we compare the interactive agent against its static counterpart, i.e an agent that always takes advantage of all auscultation points. Finally, Section \ref{conclusions} presents our conclusions.
\section{\uppercase{Mathematical Background}}
\label{sec:mathematics}
\subsection{Reinforcement Learning}
\label{rl_theory}
The RL problem, originally formulated in \cite{Sutton1988}, relies on the theoretical framework of MDP, which consists on a tuple of $(S, A, P_{SA}, \gamma, R)$ that satisfies the Markov property \cite{Puterman:1994:MDP:528623}. $S$ is a set of environment states, $A$ a set of actions, $P_{SA}$ the state (given an action) transitions probability matrix, $\gamma$ the discount factor, $R(s)$ the reward (or reinforcement) of being in state $s$. We define the policy $\pi(s)$ as the function, either deterministic or stochastic, which dictates what action to take given a particular state. We also define a \textit{value function} that determines the value of being in a state $s$ and following the policy $\pi$ till the end of one iteration, i.e an episode. This can be expressed by the expected sum of discounted rewards, as follows:
\begin{equation}
V^{\pi}(s) = E[R(s_0) + \gamma R(s_1) + \gamma^2 R(s_2) + ... | s_0 = s, \pi ]
\end{equation}
\noindent where $s_0, \ s_1, \ s_2, \ \dots$ is a sequence of states within the episode. The discount factor $\gamma$ is necessary to moderate the effect of observing the next state. When $\gamma$ is close to $0$, there is a shortsighted condition; when it tends to $1$ it exhibits farsighted behaviour \cite{Sutton:1998:IRL:551283}. For finite MDPs, policies can be partially ordered, i.e $\pi\geq\pi'$ if and only if $V^\pi (s) \geq V^{\pi'} (s)$ for all $s \in S$. There is always at least one policy that is better than or equal to all the others. This is called optimal policy and it is denoted by $\pi^*$. The optimal policy leads to the optimal value function:
\begin{align}
V^*(s) = \max_\pi V^\pi (s) = V^{\pi^*}(s)
\end{align}
\indent In the literature the algorithm solving the RL problem (i.e finding $\pi^*$) is normally referred as the agent, while the set of actions and states are abstracted as belonging to the environment, which interacts with the agent signaling positive or negative rewards (Figure \ref{fig:RL-workflow}). Popular algorithms for its resolution in the case of finite state-space are \textit{Value iteration} and \textit{Policy iteration} \cite{Sutton1988}.
\begin{figure}[ht!]
\centering
\includegraphics[scale=0.95]{figure-01-eps-converted-to}
\caption{RL general workflow: the agent is at state $s_0$, with reward $R[s_0]$. It performs an action $a$ and goes from state $s_0$ to $s_1$ getting the new reward $R[s_1]$.}
\label{fig:RL-workflow}
\end{figure}
\subsection{Q-Learning}
\label{q-learn}
Q-learning \cite{Watkins92q-learning} is a popular algorithm used to solve the RL problem. In Q-learning actions $a \in A$ are obtained from every state $s \in S$ based on an action-value function called \textit{Q function}, $Q: S \times A \rightarrow{\mathbb{R}}$, which evaluates the quality of the pair $(s,a)$.
The Q-learning algorithm starts arbitrarily initializing $Q(s,a)$; then, for each episode, the initial state $s$ is randomly chosen in $S$, and $a$ is taken using the policy derived from $Q$. After observing $r$, the agent goes from state $s$ to $s'$ and the $Q$ function is updated following the Bellman equation \cite{bellman}:
\begin{equation}
\label{eq:bellman}
Q(s,a) \leftarrow Q(s,a) + \alpha [r + \gamma \cdot \max_{a'} Q(s',a') - Q(s,a)]
\end{equation}
where $\alpha$ is the learning rate that controls algorithm convergence and $\gamma$ is the discount factor. The algorithm proceeds until the episode ends, i.e a terminal state is reached. Convergence is reached by recursively updating values of $Q$ via temporal difference incremental learning \cite{Sutton1988}.
\subsection{Deep Q network}
\label{dqn}
If the states are discrete, $Q$ function is represented as a table. However, when the number of states is too large, or the state space is continuous, this formulation becomes unfeasible. In such cases, the Q-function is computed as a parameterized non-linear function of both states and actions $Q(s,a;\theta)$ and the solution relies on finding the best parameters $\theta$. This can be learned by representing the Q-function using a DNN as shown in \cite{RLAvideogames} \cite{mnih2015humanlevel}, introducing deep Q-networks (DQN).
The objective of a DQN is to minimize the mean square error (MSE) of the Q-values:
\begin{align}
\label{q-learn-loss}
L(\theta) = \frac{1}{2} [r + \max_{a'}Q(s',a'; \theta') - Q(s,a;\theta)]^2 \\
J(\theta) = \max_{\theta}[L(\theta)]
\end{align}
\noindent Since this objective function is differentiable w.r.t $\theta$, the optimization problem can be solved using gradient based methods, e.g Stochastic Gradient Descent (SGD) \cite{Bottou2018OptimizationMF}.
\begin{figure*}[ht!]
\centering
\includegraphics[scale=1.0]{figure-02-eps-converted-to}
\caption{Interactive auscultation: the examiner starts auscultating the patient from the initial point (in this case point number 3), using our proprietary digital and wireless stethoscope, connected via Bluetooth to a smartphone. The recorded signal is sent to the server where a fixed set of features are extracted. These features represent the input to the agent that predicts the best action that should be taken. The prediction is then sent back to device and shown to the user, in this case to auscultate point number 8. The auscultation continues until agent is confident enough and declares predicted alarm value. The application works effectively even if the device is temporary offline: as soon as the connection is back, the agent can make decisions based on all the points that have been recorded so far.}
\label{fig:system_overview}
\end{figure*}
\section{\uppercase{RL-based Interactive Auscultation}}
\label{sec:RL-based-interactive-auscultation}
\subsection{Problem Statement}
\label{problem-statmenent}
In our problem definition, the set of states $S$ is composed by the list of points already auscultated, each one described with a set of fixed number of features that characterize breath phenomena detected in that point. In other terms, $ S\in \mathbb{R}^{n \times m}$, where $n$ is the number of auscultation points and $m$ equals the number of extracted features per point, plus one for number of times this point has been auscultated.
The set of actions $A$, conversely, lie in a finite space: either auscultate another specified point (can be one of the points already auscultated), or predict diagnosis status of the patient if confident enough.
\subsection{Proposed Solution}
\label{proposed-solution}
With the objective of designing an agent that interacts with the environment described above, we adopted deep Q-learning as resolution algorithm. The proposed agent is a deep Q-network whose weights $\theta$ are updated following Eq. \ref{eq:bellman}, with the objective of maximizing the expected future rewards (Eq. \ref{q-learn-loss}):
\begin{equation}
\label{eq:updates}
\theta \leftarrow \theta + \alpha [r + \gamma \cdot \max_{a'} Q(s',a';\theta) - Q(s,a; \theta)] \nabla Q(s,a; \theta)
\end{equation}
where the gradient in Eq. \ref{eq:updates} is computed by backpropagation.
Similarly to what shown in \cite{RLAvideogames}, weight updates are performed through \textit{experience replay}. Experiences over many plays of the same game are accumulated in a replay memory and at each time step multiple Q-learning updates are performed based on experiences sampled uniformly at random from the replay memory. Q-network predictions map states to next action. Agent's decisions affect rewards signaling as well as the optimization problem of finding the best weights following Eq. \ref{eq:updates}.
The result of the auscultation of a given point is a feature vector of $m$ elements. After the auscultation, values from the vector are assigned to the appropriate row of the state matrix. Features used to encode agent's states are obtained after a feature extraction module whose core part consists of a convolutional recurrent neural network (CRNN) trained to predict breath phenomena events probabilities. The output of such network is a matrix whose rows show probability of breath phenomena changing over time. This data structure, called probability raster, is then post-processed in order to obtain $m$ features, representative of the agent's state.
Finally, reinforcement signals ($R$) are designed in the following way: rewards are given when the predicted diagnosis status is correct, penalties in the opposite case. Moreover, in order to discourage the agent of using too many points, a small penalty is provided for each additional auscultated point. The best policy for our problem is thus embodied in the best auscultation path, encoded as sequence of most informative APs to be analyzed, which should be as shortest as possible.
\subsection{Application}
\label{application}
The interactive auscultation application consists of two entities: the pair digital stethoscope and smartphone, used as the interface for the user to access the service; and a remote server, where the majority of the computation is done and where the agent itself resides. An abstraction of the entire system is depicted in Figure \ref{fig:system_overview}.
The first element in the pipeline is our proprietary stethoscope \cite{stethome}. It is a digital and wireless stethoscope similar to Littmann digital stethoscope \cite{littmann} in functionality, but equipped with more microphones that sample the signal at higher sampling rate, which enables it to gather even more information about the patient and background noise. The user interacts with the stethoscope through a mobile app installed on the smartphone, connected to the device via Bluetooth Low Energy protocol \cite{Gomez12overviewand}. Once the auscultation has started, a high quality recording of the auscultated point is stored on the phone and sent to the remote server. Here, the signal is processed and translated into a fixed number of features that will be used as input for the agent. The agent predicts which is the best action to perform next: it can be either to auscultate another point or, if the confidence level is high enough, return predicted patient's status and end the examination. Agent's decision is being made dynamically after each recording, based on breath phenomena detected so far. This allows the agent to make best decision given limited information which is crucial when the patient is an infant and auscultation gets increasingly difficult over time.
\begin{figure*}[ht!]
\centering
\includegraphics[scale=0.85]{figure-03-eps-converted-to}
\caption{Feature extraction module: audio signal is first converted to spectrogram and subsequently fed to a CRNN, which outputs a prediction raster of $5$ classes: inspiration, expiration, wheezes, crackles and noise. This raster is then post-processed with the objective of extracting values representative of detection and intensity level of the critical phenomena, i.e wheezes and crackles. More specifically, maximum probability value and relative duration of tags are computed per each inspiration/expiration and the final features are computed as the average of these two statistics along all inspirations/expirations.}
\label{fig:feature-extractor}
\end{figure*}
\section{\uppercase{Evaluation}}
\label{evaluation}
This section describes the experimental framework used to simulate the interactive auscultation application described in Subsection \ref{application}. In particular, in Subsection \ref{dataset} the dataset used for the experiments is described. In Subsection \ref{feature_extractor} a detailed description of the feature extraction module already introduced in Subsection \ref{proposed-solution} is provided. In Subsection \ref{RLA} the interactive agent itself is explained, while in Subsection \ref{exp} the final experimental setup is presented.
\subsection{Dataset}
\label{dataset}
Our dataset consists of a total of $570$ real examinations conducted in the Department of Pediatric Pulmonology of \textit{Karol Jonscher Clinical Hospital} in Pozna\'n (Poland) by collaborating doctors. Data collection involved young individuals of both genders ($46 \%$ females and $54 \%$ males.) and different ages: $13 \%$ of them were infants ($[0,1)$ years old), $40 \%$ belonging to pre-school age ($[1, 6)$) and $43 \%$ in the school age ($[6, 18)$).
Each examination is composed of $12$ APs, recorded in pre-determined locations (Figure \ref{fig:system_overview}). Three possible labels are present for each examination: $0$, when no pathological sounds at all are detected and there's no need to consult a doctor; $1$, when minor (innocent) auscultatory changes are found in the recordings and few pathological sounds in single AP are detected, but there's no need to consult a doctor; $2$, significant auscultatory changes, i.e major auscultatory changes are found in the recordings and patient should consult a doctor. This ground truth labels were provided by 1 to 3 doctors for each examination, in case there was more than one label the highest label value was taken. A resume of dataset statistics is shown in Table \ref{tab:dataset}. \\
\begin{table}[ht!]
\centering
\caption{Number of examinations for each of the classes.}
\begin{tabular}{llllr}
\cmidrule{1-3}
Label & Description & $N_{examinations}$\\
\midrule
$0$ & no auscultatory \\ & changes - \textit{no alarm} & $200$\\
$1$ & innocent auscultatory \\ & changes - \textit{no alarm} & $85$ \\
$2$ & significant auscultatory \\ & changes - \textit{alarm} & $285$ \\
\bottomrule
\end{tabular}
\label{tab:dataset}
\end{table}
\subsection{Feature Extractor}
\label{feature_extractor}
The features for the agent are extracted by a feature extractor module that is composed of three main stages, schematically depicted in Figure \ref{fig:feature-extractor}: at the beginning of the pipeline, the audio wave is converted to its magnitude spectrogram, a representation of the signal that can be generated by applying short time fourier transform (STFT) to the signal (a). The time-frequency representation of the data is fed to a convolutional recurrent neural network (b) that predicts breath phenomena events probabilities in form of predictions raster. Raster is finally post-processed in order to extract $8$ interesting features (c).
\subsubsection{CRNN}
This neural network is a modified implementation of the one proposed by {\c{C}}akir et al. (2017), i.e a CRNN designed for polyphonic sound event detection (SED). In this structure originally proposed by the convolutional layers act as pattern extractors, the recurrent layers integrate the extracted patterns over time thus providing the context information, and finally the feedforward layer produce the activity probabilities for each class \cite{DBLP:journals/corr/CakirPHHV17}. We decided to extend this implementation including dynamic routing \cite{Sabour2017DynamicRB} and applying some key ideas of Capsule Networks (CapsNet), as suggested in recent advanced studies \cite{capsulenetworkspsed} \cite{capsulenetworkspsed2}. The CRNN is trained to detect $5$ types of sound events, namely: inspirations, expirations, wheezes, crackles \cite{auscultation} and noise. \\
\begin{figure*}[ht!]
\centering
\subfigure[]{\includegraphics[scale=0.45]{figure-04-a.pdf}}\quad
\subfigure[]{\includegraphics[scale=0.45]{figure-04-b.pdf}}
\caption{Interactive agent learning curves: in the very first episodes the agent randomly guesses the state of the patient, without auscultating any point. Next comes the exploration phase when the agent auscultates many points, often reaching the 12-point limit which results in high penalties. Finally, as the agent plays more and more episodes, it starts learning the optimal policy using fewer and fewer points until he finds the optimal solution}
\label{learning-curves}
\end{figure*}
\subsubsection{Raster Post-processing}
Wheezes and crackles are the two main classes of pathological lung sounds. The purpose of raster post-processing is to extract a compact representation that will be a good descriptions of their presence/absence and level of intensity. Thus, for each inspiration and expiration event we calculate two values for each pathological phenomena (wheezes and crackles): maximum probability within said inspiration/expiration and relative duration after thresholding (the level in which the inspiration/expiration is filled, or covered with the pathological phenomenon). All extracted values are then averaged across all inspirations and expirations separately. We therefore obtain $8$ features: average maximum wheeze probability on inspirations (1) and expirations (2), average relative wheeze length in inspirations (3) and expirations (4) and the same four features (5, 6, 7 and 8) for crackles.
\begin{figure*}[ht!]
\centering
\includegraphics[scale=1.01]{figure-05-eps-converted-to}
\caption{Histograms: we compared the distribution of most used points by the agent against the ones that doctors would most often choose at examination time. Results show that the agent learned importance of points without any prior knowledge about human physiology}
\label{fig:histograms}
\end{figure*}
\subsection{Reinforcement Learning Agent}
\label{RLA}
Our agent consists of a deep fully connected neural network. The network takes as input a state matrix of size $12$ rows $\times \ 9$ columns; then processes the input through $3$ hidden layers with $256$ units each followed by ReLU nonlinearity \cite{Nair:2010:RLU:3104322.3104425}; the output layer is composed by $15$ neurons which represent expected rewards for each of the possible future actions. This can be either to request one of the $12$ points to be auscultated, or declare one of the three alarm status, i.e predict one of the three labels and finish the examination.
The state matrix is initially set to all zeros, and the $i_{th}$ row is updated each time $i_{th}$ AP is auscultated. First $8$ columns of state matrix correspond to eight features described in previous section, while the last value is a counter for the number of times this auscultation point was auscultated. At every interaction with the environment, the next action $a$ to be taken is defined as $arg \max$ of the output vector. At the beginning of the agent's training we ignore agent's preferences and perform random actions, as the training proceeds we start to use agent's recommended actions more and more often. For a fully trained model, we always follow agent's instructions. The agent is trained to predict three classes, but classes $0$ and $1$, treated as agglomerated \textit{not alarm} class, are eventually merged at evaluation phase.
The agent is trained with objective of minimizing Eq. \ref{q-learn-loss}, and Q-values are recursively updated by temporal difference incremental learning. There are two ways to terminate the episode: either make a classification decision, getting the reward/penalty that follows table \ref{rla-rewards-table}; or reaching a limit of $12$ actions which results in a huge penalty of $r = -10.0$. Moreover, when playing the game a small penalty of $r = -0.01$ is given for each requested auscultation, this is to encourage the agent to end the examination if it doesn't expect any more information coming from continued auscultation.
\begin{table}[ht]
\caption{Reward matrix for Reinforcement Learning Agent final decisions.}
\label{rla-rewards-table}
\centering
\begin{tabular}{cllll}
& & \multicolumn{3}{l}{predicted} \\ \cline{3-5}
& \multicolumn{1}{l|}{} & \multicolumn{1}{l|}{0} & \multicolumn{1}{l|}{1} & \multicolumn{1}{l|}{2} \\ \cline{2-5}
\multicolumn{1}{c|}{\multirow{3}{*}{\rotatebox{90}{actual}}} & \multicolumn{1}{l|}{0} & \multicolumn{1}{l|}{2.0} & \multicolumn{1}{l|}{0.0} & \multicolumn{1}{l|}{-1.0} \\ \cline{2-5}
\multicolumn{1}{c|}{} & \multicolumn{1}{l|}{1} & \multicolumn{1}{l|}{0.0} & \multicolumn{1}{l|}{2.0} & \multicolumn{1}{l|}{-0.5} \\ \cline{2-5}
\multicolumn{1}{c|}{} & \multicolumn{1}{l|}{2} & \multicolumn{1}{l|}{-1.0} & \multicolumn{1}{l|}{-0.5} & \multicolumn{1}{l|}{2.0} \\ \cline{2-5}
\end{tabular}
\end{table}
\subsection{Experimental Setup}
\label{exp}
We compared the performance of reinforcement learning agent, from now on referred to as \textit{interactive} agent, to its \textit{static} counterpart, i.e an agent that always performs an exhaustive auscultation (uses all $12$ APs).
In order to compare the two agents we performed $5$-fold cross validation for $30$ different random splits of the dataset into training ($365$ auscultations), validation ($91$) and test ($114$) set. We trained the agent for $200$ episodes, setting $\gamma=0.93$ and using Adam optimization algorithm \cite{Kingma2014AdamAM} to solve Eq. \ref{q-learn-loss}, with learning rate initially set to $0.0001$.
Both in validation and test phase, $0$ and $1$ labels were merged as single $not \ alarm$ classes. Therefore results shown in the following refer to the binary problem of alarm detection: we chose as comparative metrics balanced accuracy (BAC) defined as unweighted mean of sensitivity and specificity; and F1-score, harmonic mean of precision and recall, computed for each of the two classes.
\section{\uppercase{Results}}
\label{results}
In Table \ref{results-experiments} we show the results of the experiments we conducted. The interactive agent performs the auscultation using on average only $3$ APs, effectively reducing the time of the examination $4$ times. This is a very significant improvement and it comes at a relatively small cost of 2.5 percent point drop in classification accuracy.
\begin{table}[ht!]
\caption{Results of experiments.}
\centering
\begin{tabular}{llllrr}
\cmidrule{1-5}
Agent & $BAC$ & $F1_{alarm}$ & $F1_{not \ alarm}$ & $APs$ \\
\midrule
Static & $84.8 \ \% $ & $82.6 \ \%$ & $85.1 \ \%$ & $12$ \\
Interactive & $82.3 \ \%$ & $81.8 \ \%$ & $82.6 \ \%$ & $3.2$ \\
\bottomrule
\end{tabular}
\label{results-experiments}
\end{table}
Figure \ref{learning-curves} shows learning curves of rewards and number of points auscultated by the agent. In the very first episodes the agent directly guesses the state of the patient, declaring the alarm value without auscultating any point. As soon as it starts exploring other possible action-state scenarios, it often reached the predefined limit of $12$ auscultation points which significantly reduces its average received reward. However, as it plays more episodes, it starts converging to the optimal policy, using less points on average.
In order to assess the knowledge learned by the agent, we conducted a survey involving a total of $391$ international experts. The survey was distributed among the academic medical community and in hospitals. In the survey we asked each participant to respond a number of questions regarding education, specialization started or held, assessment of their own skills in adult and child auscultation, etc. In particular, we asked them which points among the $12$ proposed would be auscultated more often during an examination. Results of the survey are visible in Figure \ref{fig:histograms} where we compare collected answers with most used APs by the interactive agent. It's clear that the agent was able to identify which APs carry the most information and are the most representative to the overall patient's health status. This is the knowledge that all human experts gain from many years of clinical practice. In particular the agent identified points 11 and 12 as very important. This finding is confirmed by the doctors who strongly agree that these are the two most important APs on patient's back. On the chest both doctors and the agent often auscultate point number 4, but the agent prefers point number 2 instead of 3, probably due to the distance from the heart which is a major source of interference in audio signal.
The agent seems to follow two general rules during the auscultation: firstly, it auscultates points belonging both to the chest and to the back; secondly, it tries to cover as much area as possible, visiting not-subsequent points. For instance, the top $5$ auscultation paths among the most repeating sequences that we observed are: $[4,9,11]$, $[8,2,9]$, $[2,11,12]$, $[7,2,8]$, $[4,11,12]$. These paths cover only $3 \%$ of the possible paths followed by the agent: this means the agent does not follow a single optimal path or even couple of paths, but instead uses a wide variety of paths depending on breath phenomena detected during the examination.
\section{\uppercase{Conclusions}}
\label{conclusions}
We have presented a unique application of reinforcement learning for lung sounds auscultation, with the objective of designing an agent being able to perform the procedure interactively in the shortest time possible.
Our interactive agent is able to perform an intelligent selection of auscultation points. It performs the auscultation using only $3$ points out of a total of $12$, reducing fourfold the examination time. In addition to this, no significant decrease in diagnosis accuracy is observed, since the interactive agent gets only $2.5$ percent points lower accuracy than its static counterpart that performs an exhaustive auscultation using all available points.
Considering the research we have conducted, we believe that further improvements can be done in the solution proposed. In the near future, we would like to extend this work to show that the interactive solution can completely outperform any static approach to the problem. We believe that this can be achieved by increasing the size of the dataset or by more advanced algorithmic solutions, whose investigation and implementation was out of the scope of this publication.
\section*{\uppercase{Copyright Notice}}
This contribution has been published in the proceedings of the 11\textsuperscript{th} International Conference on Agents and Artificial Intelligence (ICAART) - Volume 1. Conference link: http://www.icaart.org/?y=2019
\bibliographystyle{apalike}
{\small
|
2,869,038,153,932 | arxiv | \section{Introduction}
The most massive galaxies form earliest in the Universe, known as downsizing of galaxy formation (e.g., \cite{1996AJ....112..839C}, \cite{2010MNRAS.404.1775T}).
This naturally motivates us to explore massive mature galaxies at the highest redshift.
The current record redshift of spectroscopically confirmed quiescent galaxies (QGs) is $z=4.01$ \citep{2019ApJ...885L..34T} and many QGs have been identified at $z=3-4$ (e.g., \cite{2018A&A...618A..85S}).
They are extremely compact in the rest-frame optical with an effective radius of less than 1 kpc (e.g., \cite{2018ApJ...867....1K}).
The compact stellar distribution could be related to bursty star formation histories with star formation rate (SFR) of several hundreds $M_\odot$yr$^{-1}$ in the center, as implied by near-infrared spectroscopic studies (e.g., \cite{2017Natur.544...71G}, \cite{2020ApJ...889...93V}).
These findings suggest that starburst galaxies at $z=5-7$ such as bright submillimeter galaxies (SMGs) are good candidates for the progenitors of massive QGs at $z=3-4$ (see also \cite{2014ApJ...782...68T, 2015ApJ...810..133I}).
For understanding how the most massive galaxies grow in such an early universe, we study the star-forming activities and the physical conditions in the interstellar medium (ISM) of a strongly-lensed SMG at $z=6.027$, G09.83808 \citep{2018NatAs...2...56Z}.
G09.83808 is one of three bright SMGs so far discovered at $z>6$, and is a more common populations of starburst galaxies with the intrinsic 870 $\mu$m flux density of $S_\mathrm{870,intr}\sim4$ mJy, compared to the other two extreme ones with $S_\mathrm{870,intr}>10$ mJy \citep{2013Natur.496..329R, 2018Natur.553...51M}.
In this work, we focus on the far-infrared fine structure lines of nitrogen \nii and oxygen [O~{\sc iii}]$_{88}$.
Nitrogen line emission is especially important for understanding the chemical evolution of galaxies because nitrogen is mainly produced from carbon and oxygen already present in stars through the CNO cycle, referred to as secondary element (e.g., \cite{2020ApJ...900..179K}).
Nitrogen is formed in intermediate-mass stars which are longer-lived than massive stars, inducing a time-delay.
Thus, an enhanced ratio between \nii and \oiii luminosity implies that galaxies experienced many cycles of star formation.
Recent ALMA observations have detected nitrogen lines (\nii or [N~{\sc ii}]$_{122}$) in galaxies at $z\sim5$ (e.g., \cite{2019ApJ...882..168P}, \cite{2020MNRAS.494.4090C}).
But galaxies where both nitrogen and oxygen lines are detected are limited to $z<5$ \citep{2019A&A...631A.167D,2019ApJ...876....1T}, except for bright quasars \citep{2019ApJ...881...63N, 2020ApJ...900..131L}.
For pushing studies of ISM in starburst galaxies to higher redshift,
we observe the \oiii line emission ($\nu_\mathrm{obs}$=482.9 GHz) and \nii ($\nu_\mathrm{obs}$=207.9 GHz), as well as the 0.6 mm and 1.5 mm continuum emission, in G09.83808.
\begin{figure*}[!t]
\begin{center}
\includegraphics[scale=1.1]{image_Fig1.pdf}
\end{center}
\caption{
Top: from left to right, the ALMA maps of 0.6 mm, 1.5 mm continuum, [O~{\sc iii}], and \nii line emission are displayed.
The mask region is shown in the top left panel.
Middle and bottom panels show the best-fit models produced by {\tt GLAFIC} and the residuals, respectively.
White dashed and solid contours show the -4$\sigma$, -3$\sigma$ and +3$\sigma$, +4$\sigma$ levels in the residual maps.
We use a scientific color scale from \citet{2020NatCo..11.5444C}.
}
\label{fig;map}
\end{figure*}
\section{Observations}
ALMA observations were executed on 2019 December (Band-8) and 2019 October--2020 January (Band-5).
On-source time was 2.4 h and 1.6 h, respectively.
The maximum recovery scale is 5\farcs4 and 6\farcs2, respectively.
The data were calibrated in the standard manner using {\tt CASA} \citep{2007ASPC..376..127M}.
We first construct a clean mask with the 0.6 mm continuum data by applying {\tt CASA/AUTO-MULTITHRESH} \citep{2020PASP..132b4505K} and Briggs weighting with robust=+0.5.
We use this mask for all imaging of continuum and line emission.
Then, we clean the emission down to the 1.5$\sigma$ level to create continuum and 100 km s$^{-1}$ channel maps with robust=$-$0.5 and robust=+2.0, respectively.
Figure \ref{fig;map} shows the ALMA maps of the 0.6 mm, 1.5 mm continuum, \oiii and \nii line emission in G09.83808, where two arcs of counter-images are evident.
The line emission is integrated over the velocity range of $-$350 km s$^{-1}$ to +150 km s$^{-1}$.
The beam size is 0\farcs56$\times$0\farcs38 for the 0.6 mm continuum, 0\farcs48$\times$0\farcs41 for the 1.5 mm continuum, 0\farcs76$\times$0\farcs64 for the \oiii, and 0\farcs84$\times$0\farcs77 for the \nii line map.
The peak flux densities and noise levels are 7.46$\pm$0.17 mJy beam$^{-1}$ (44$\sigma$) for the 0.6 mm continuum, 2.10$\pm$0.03 mJy beam$^{-1}$ (75$\sigma$) for the 1.5 mm continuum, 1.59$\pm$0.12 Jy km s$^{-1}$ beam$^{-1}$ (14$\sigma$) for the \oiii, and 0.38$\pm$0.03 Jy km s$^{-1}$ beam$^{-1}$ (12$\sigma$) for the \nii line map.
The total flux densities and line fluxes in the mask region are 38.69$\pm$1.13 mJy for the 0.6 mm continuum, 9.91$\pm$0.19 mJy for the 1.5 mm continuum, 8.13$\pm$0.50 Jy km s$^{-1}$ for the \oiii, and 1.48$\pm$0.12 Jy km s$^{-1}$ for the \nii line map.
The uncertainties are calculated as 1$\sigma\times\sqrt{(N_\mathrm{mask}/N_\mathrm{beam})}$ where $N_\mathrm{mask}$ and $N_\mathrm{beam}$ is the areas of the mask region and the clean beam, respectively.
\section{Analysis and results}
\subsection{Gravitational lens modeling}
\label{sec;lens_modeling}
Strong gravitational lensing produces multiple images of a background source, G09.83808 at $z=6.027$, in the ALMA maps.
A big advantage of submillimeter observations is that the flux contribution of a foreground source, a massive quiescent galaxy at $z = 0.776$ \citep{2017MNRAS.472.2028F}, is negligible in this wavelength unlike optical and near-infrared observations.
For mass models of foreground (lens) source, we assume a singular isothermal ellipsoid with five parameters ($xy$-coordinates, ellipticity $e$, position angle measured counterclockwise from North $\theta_e$, velocity dispersion $\sigma_v$) and external perturbation with two parameters (tidal shear $\gamma$ and position angle $\theta_\gamma$) in a similar way as in \citet{2015PASJ...67...72T}.
The background source is assumed to have an exponential disk with a S$\acute{\mathrm{e}}$rsic index of $n=1$, characterized by six parameters ($xy$-coordinates, flux, effective radius $R_\mathrm{eff}$, major-to-minor axis ratio $q$ and position angle $\theta_q$), for both the continuum and line emissions.
First, we determine the parameters of the foreground source by using the 1.5 mm continuum image, where the spatial resolution and signal-to-noise ratio are better than those of other images.
We then use {\tt GLAFIC2} software \citep{2010PASJ...62.1017O} to optimize the mass model of the foreground source.
Only the clean mask region is used for $\chi^2$ minimization.
To estimate the uncertainties of the best-fit parameters, we add a $1\sigma$ noise map convolved by a dirty beam to the clean image and repeat to fit the noise added images.
The best-fit parameters are $e=0.11^{+0.02}_{-0.08}$, $\theta_e=78^{+33}_{-3}$ deg, $\sigma_v=257.9^{+0.3}_{-2.0}$ km s$^{-1}$ for an isothermal ellipsoid and $\gamma=4.1^{+2.8}_{-0.0}\times10^{-2}$ and $\theta_\gamma=47^{+11}_{-3}$ deg for an external shear.
The uncertainties are based on the 16th and 84th percentile of 500 MonteCarlo runs.
The derived position of the foreground source is nicely consistent with the position in a deep $z$-band image (0\farcs7 seeing) from the second public data release of the Hyper Suprime-Cam in Subaru Strategic Program \citep{2019PASJ...71..114A}.
We also obtain the central position and the shape of the 1.5 mm continuum emission for the background source.
The spatial distribution is well characterized by an exponential disk with $q=0.93^{+0.02}_{-0.08}$ and $\theta_q=108^{+13}_{-22}$ deg.
Even if $n$ is a free parameter, the best-fit value is $n=1.17^{+0.13}_{-0.10}$, supporting that the dust continuum emission has an exponential profile \citep{2016ApJ...833..103H, 2018ApJ...861....7F}.
The total magnification factor, given by a ratio between flux densities in the image and source plane, is $\mu=8.38^{+0.74}_{-0.27}$, which is consistent with the previous result from 0\farcs1-resolution observations of 870 $\mu$m continuum emission ($\mu=9.3\pm1.0$; \cite{2018NatAs...2...56Z}).
Next, we measure the intrinsic sizes of the continuum and line emissions for the background source by fixing the other parameters to the values obtained above.
The effective radii are 0\farcs112$^{+0.002}_{-0.002}$ for the 1.5 mm continuum, 0\farcs117$^{+0.004}_{-0.003}$ for the 0.6 mm continuum, 0\farcs21$^{+0.08}_{-0.08}$ for the \oiii and 0\farcs20$^{+0.04}_{-0.04}$ for the \nii line emission.
The dust emissions at different wavelength have a similar size and both are more compact than the ionized gas emissions.
Figure \ref{fig;map} shows the best-fit model and residuals for each emission.
The residual image of \oiii emission shows three $+3\sigma$ peaks in the edges of the two arcs, corresponding to the direction of the minor axis of the disk component in the source plane.
Even if the position angle is not fixed, the $3\sigma$ residuals still remain.
Given that the emission peak is detected at 12$\sigma$,
the $3\sigma$ deviation from an exponential disk may suggest that G09.83808 has subcomponents (clumps or small satellite galaxies) of ionized gas.
We require deeper and higher-resolution observations to confirm the existence of these components.
\begin{figure}[!t]
\begin{center}
\includegraphics[scale=1.0]{image_Fig2_arXiv.pdf}
\end{center}
\caption{
Intrinsic infrared luminosities versus circularized effective radii for G09.83808 at $z=6$, strongly-lensed SMGs (\cite{2016ApJ...826..112S}), SMGs at $z=1-4$ (\cite{2019MNRAS.490.4956G, 2020MNRAS.494.3828D}), DSFGs at $z=2$ (\cite{2020ApJ...901...74T}) and LIRGs at $z=0$ (\cite{2016A&A...591A.136L}).
A shaded region shows the effective radius of the rest-frame optical emission in massive quiescent galaxies at $z\sim4$ \citep{2018ApJ...867....1K}.
Dashed lines correspond to the median values of the infrared surface densities for each sample.
}
\label{fig;LIR_size}
\end{figure}
\subsection{Dust SED modeling}
\label{sec;sed}
Combining the new ALMA continuum data with photometry from Herschel, JCMT, and LMT observations \citep{2016ApJ...832...78I, 2017MNRAS.472.2028F, 2018NatAs...2...56Z}, we constrain the spectral energy distribution (SED) of dust emission to estimate the total infrared luminosity, $L_\mathrm{IR}$.
We assume flux calibration uncertainties of 5\% and 10\% for the ALMA Band-5 and Band-8 observations, respectively.
We fit a modified blackbody radiation model, characterized by dust temperature $T_\mathrm{dust}$ and an emissivity index $\beta$ \citep{2012ApJ...761..140C}, to the observed SEDs by using the {\tt CIGALE} code \citep{2019A&A...622A.103B}.
We fix the wavelength where the optical depth is unity to 150 $\mu$m and the power law slope to 2.0.
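For reference, a minimal sketch of this functional form, assuming a general (optically-thick) modified blackbody whose optical depth reaches unity at $\lambda_0=150$ $\mu$m (normalization omitted), is:
\begin{verbatim}
import numpy as np

def modified_blackbody(nu, t_dust, beta, lam0_cm=0.015):
    """S_nu ~ (1 - exp(-tau)) * B_nu(T), tau = (nu/nu0)^beta, tau(nu0) = 1."""
    h, k, c = 6.626e-27, 1.381e-16, 2.998e10   # cgs units
    nu0 = c / lam0_cm                          # frequency where tau = 1 (150 um)
    tau = (nu / nu0) ** beta
    b_nu = 2.0 * h * nu**3 / c**2 / np.expm1(h * nu / (k * t_dust))
    return (1.0 - np.exp(-tau)) * b_nu
\end{verbatim}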
The best-fit model gives a total infrared luminosity (8--1000 $\mu$m) of $L_\mathrm{IR}=(4.6^{+0.6}_{-0.7})\times10^{12}~L_\odot$ after correcting for the magnification, $T_\mathrm{dust}=51\pm4$ K, and $\beta$=$2.5^{+0.3}_{-0.2}$.
As the 0.6 mm continuum emission corresponds to the peak of the dust SED, its spatial distribution probes where stars are intensively formed.
We find G09.83808 to have an infrared surface density of $\Sigma_\mathrm{IR}=(1.8\pm0.3)\times10^{12}~L_\odot$ kpc$^{-2}$ within the circularized effective radius, $R_\mathrm{eff, 0.6 mm}$=0.64$\pm$0.02 kpc.
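For reference, this surface density follows from placing half of the total infrared luminosity within the circularized effective radius:
\begin{equation}
\Sigma_\mathrm{IR}=\frac{L_\mathrm{IR}/2}{\pi R_\mathrm{eff}^2}
=\frac{2.3\times10^{12}~L_\odot}{\pi\,(0.64~\mathrm{kpc})^2}
\simeq 1.8\times10^{12}~L_\odot~\mathrm{kpc}^{-2}.
\end{equation}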
We also estimate the total infrared luminosities of other strongly lensed SMGs with measured 870 $\mu$m continuum sizes and magnifications \citep{2013ApJ...767...88W, 2016ApJ...822...80S, 2016ApJ...826..112S}, in the same way as for G09.83808.
Figure \ref{fig;LIR_size} shows the total infrared luminosity and circularized effective radius of dust continuum emission for four different galaxy populations: 1) strongly-lensed SMGs at $z=3-6$ including G09.83808, 2) SMGs at $z=1-4$ \citep{2019MNRAS.490.4956G, 2020MNRAS.494.3828D}, 3) massive dusty star-forming galaxies at $z=2$ (DSFGs; \cite{2020ApJ...901...74T}), and 4) LIRGs at $z=0$ \citep{2016A&A...591A.136L}.
The four populations occupy different regions in the $L_\mathrm{IR}-R_\mathrm{eff}$ plane.
For any combination of the two populations, a KS test shows that the probability that they are drawn from the same distribution is less than 1\%.
The median values of the infrared surface densities are $2.5\times10^{12}~L_\odot$ kpc$^{-2}$ for lensed SMGs, $1.0\times10^{12}~L_\odot$ kpc$^{-2}$ for SMGs, $1.6\times10^{11}~L_\odot$ kpc$^{-2}$ for DSFGs, and $0.8\times10^{11}~L_\odot$ kpc$^{-2}$ for LIRGs.
Thus, lensed SMGs and SMGs are undergoing intense starbursts with higher infrared surface densities than LIRGs and DSFGs.
The small difference between lensed SMGs and SMGs may be due to the different redshift range (i.e. emissions at different rest-frame wavelengths) and/or a large magnification near caustics of gravitational lenses in extremely bright objects with $S_\mathrm{1.4~mm}>$20 mJy \citep{2013ApJ...767...88W}.
The effective radii of lensed SMGs are comparable to those of massive QGs at $z\sim4$ ($0.52\pm0.18$ kpc in the rest-frame optical; \cite{2018ApJ...867....1K}).
The intense starburst in the central compact region supports an evolutionary link between G09.83808 at $z=6$ and massive QGs at $z\sim4$.
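The population comparison above reduces to a standard two-sample test; a minimal sketch with scipy, where the radius arrays are hypothetical stand-ins for any two of the measured samples, is:
\begin{verbatim}
import numpy as np
from scipy.stats import ks_2samp

# hypothetical example values; in practice these are the measured radii
r_lensed_smg = np.array([0.5, 0.6, 0.7, 0.8, 0.9])   # kpc
r_smg        = np.array([1.2, 1.6, 1.9, 2.3, 3.0])   # kpc

stat, p_value = ks_2samp(r_lensed_smg, r_smg)
print(f"KS statistic = {stat:.2f}, p = {p_value:.3g}")  # distinct if p < 0.01
\end{verbatim}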
\begin{figure*}[!t]
\begin{center}
\includegraphics[scale=1.0]{image_Fig3_arXiv.pdf}
\end{center}
\caption{
Left: \nii line-to-infrared luminosity ratio as a function of the $S_{63}/S_{158}$ continuum ratio for local LIRGs and strongly-lensed SMGs.
Right: \oiii line-to-infrared luminosity ratios, compared with the results of single starburst models with ages of 10, 20, and 30 Myr.
A dash-dotted line shows a continuous star formation model with an age of 20 Myr.
The ionization parameter varies from $U_\mathrm{ion}=10^{-4}$ to $U_\mathrm{ion}=10^{-2}$.
}
\label{fig;line_IR_ratio}
\end{figure*}
\section{Far-infrared line properties}
\label{sec;fir}
UV photons from massive stars ionize the surrounding gas and at the same time heat the dust.
Then, thermal radiation of the dust can be observed in the infrared.
A combination of fine structure lines such as [N~{\sc ii}]$_{205}$ and [O~{\sc iii}]$_{88}$ with infrared continuum emission can therefore provide information about the physical properties of the ISM and the ionizing sources (e.g., metallicity, gas density, and ionization parameter) in galaxies.
We compare the line-to-infrared luminosity ratio, $L_\mathrm{[NII]205}/L_\mathrm{IR}$ and $L_\mathrm{[OIII]88}/L_\mathrm{IR}$, in G09.83808 and other lensed SMGs at $z=3-6$ with those in local LIRGs.
Although the spatial distributions of ionized gas and dust are different, we use the galaxy-integrated properties for straightforward comparison.
\subsection{Nitrogen line emission}
\label{sec;nii_ir}
$Herschel$ observations of [N~{\sc ii}]$_{205}$ line in LIRGs at $z\sim0$ show that $L_\mathrm{[NII]205}/L_\mathrm{IR}$ is anticorrelated with the continuum flux density ratio between 63 $\mu$m and 158 $\mu$m, $S_{63}/S_{158}$ \citep{2017ApJ...846...32D,2017ApJS..230....1L}.
For lensed SMGs, we derive $S_{63}/S_{158}$ from the best-fit SED (section \ref{sec;sed}).
In both galaxy populations, there is a decreasing trend of $L_\mathrm{[NII]205}/L_\mathrm{IR}$ with increasing $S_{63}/S_{158}$ (Figure \ref{fig;line_IR_ratio}).
The trends could be related to a variation in ionization parameter, defined as $U_\mathrm{ion}=\phi(H)/c/n_H$, where $\phi(H)$ is the flux of hydrogen-ionizing photons, $n_H$ is the hydrogen density at the illuminated face of a cloud, and $c$ is the speed of light.
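As a concrete scale, for the density adopted below ($n_H=50$ cm$^{-3}$), an ionization parameter of $U_\mathrm{ion}=10^{-3}$ corresponds to an ionizing photon flux of
\begin{equation}
\phi(H)=U_\mathrm{ion}\,c\,n_H=10^{-3}\times(3\times10^{10}~\mathrm{cm~s^{-1}})\times(50~\mathrm{cm^{-3}})=1.5\times10^{9}~\mathrm{photons~cm^{-2}~s^{-1}}.
\end{equation}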
We use photoionization code {\tt Cloudy~v17.01} \citep{2017RMxAA..53..385F} to compare the observations with models with different ionization parameters.
We generate the input spectra from a single-age starburst model of 20 Myr using the Binary Population and Spectral Synthesis ({\tt BPASS v2.0}) code \citep{2016MNRAS.462.3302E}.
The initial gas density at the illuminated face is fixed to $n_H$=50 cm$^{-3}$, the typical value in local LIRGs \citep{2017ApJ...846...32D}.
We adopt solar elemental abundance ratios and gas-phase depletion factors, taking into account the secondary production of nitrogen \citep{2011A&A...526A.149N}, and assume solar metallicity.
For dust, we assume Orion-type graphite and silicate grains with a size distribution and abundance appropriate for those along the line of sight to the Trapezium stars in Orion.
We stop calculations at the total hydrogen column density of $N(H)=10^{22}$ cm$^{-2}$ to avoid the dust temperature becoming too low.
We do not intend to determine each parameter by fitting, but rather aim to interpret the observed trends through comparison with the models.
The decreasing trend of $L_\mathrm{[NII]205}/L_\mathrm{IR}$ is successfully reproduced by photoionization models in the range of $U_\mathrm{ion}=10^{-4}-10^{-2}$ (Figure \ref{fig;line_IR_ratio}).
As the ionization parameter becomes larger, the H$^+$ region expands.
However, the volume of the H$^+$ region does not increase linearly with $U_\mathrm{ion}$, because UV photons are increasingly used to heat the dust in the expanded H$^+$ region (\cite{2009ApJ...701.1147A}).
The fraction of UV photons available for ionization becomes smaller, while all of their energy is eventually converted into dust emission, resulting in a decrease of $L_\mathrm{[NII]205}/L_\mathrm{IR}$.
On the other hand, since the number of UV photons per dust grain increases, the dust temperature becomes higher, and $S_{63}/S_{158}$ becomes larger \citep{2009ApJ...701.1147A, 2018MNRAS.473...20R}.
Therefore, in both local and high-redshift galaxies, the decreasing trend can be explained by higher ionization parameters.
\subsection{Oxygen line emission}
Unlike $L_\mathrm{[NII]205}/L_\mathrm{IR}$, the photoionization models predict increasing $L_\mathrm{[OIII]88}/L_\mathrm{IR}$ with increasing ionization parameter (Figure \ref{fig;line_IR_ratio}).
This is because only a small fraction ($<$10\%) of oxygen is doubly ionized at low ionization parameter.
Nevertheless, local LIRGs do not show any correlation between $L_\mathrm{[OIII]88}/L_\mathrm{IR}$ and $S_{63}/S_{158}$.
The $L_\mathrm{[OIII]88}/L_\mathrm{IR}$ values of two lensed SMGs (G09.83808 at $z=6.0$ and SPT 0418--47 at $z=4.2$; \cite{2019A&A...631A.167D}) are also consistent with those in local LIRGs.
From comparisons with photoionization models, we find that $L_\mathrm{[OIII]88}/L_\mathrm{IR}$ is very sensitive to a variation in age of star formation, which changes the energy distribution of incident radiation.
The contribution of massive stars to the incident radiation is larger for ages younger than 20 Myr, leading to higher $L_\mathrm{[OIII]88}/L_\mathrm{IR}$.
The trend with changing age is almost orthogonal to the trend with changing ionization parameter.
Therefore, the ionization parameter dependence of $L_\mathrm{[OIII]88}/L_\mathrm{IR}$ quickly disappears with even a small variation in the age of star formation.
The impact of age variation on \nii is small because both trends are parallel in the $L_\mathrm{[NII]205}/L_\mathrm{IR}$--$S_{63}/S_{158}$ plane.
Single starburst models with even younger ages ($<10$ Myr) and continuous star formation models predict a much higher $L_\mathrm{[OIII]88}/L_\mathrm{IR}$ of $10^{-3}-10^{-2}$, similar to the values in local dwarf galaxies \citep{2015A&A...578A..53C}.
We also note that these arguments are based on simple spherical models in which all of [N~{\sc ii}]$_{205}$, [O~{\sc iii}]$_{88}$ and dust emissions are radiated from the same clouds.
If [O~{\sc iii}]$_{88}$ emission arises from high-density gas ($n_H>$1000 cm$^{-3}$), unlike [N~{\sc ii}]$_{205}$, photoionization models with a different column density can reproduce a large variation in the $L_\mathrm{[OIII]88}/L_\mathrm{IR}$ ratio \citep{2014ApJ...795..117F}.
\section{Gas-phase metallicity}
In this section, we estimate the gas-phase metallicity by using two measurements of [N~{\sc ii}]$_{205}$/\oiii and $S_{63}/S_{158}$ ratio in G09.83808.
Since the \nii and \oiii lines have different critical densities and ionization potentials, their ratio depends not only on metallicity but also on gas density and ionization parameter.
In local LIRGs, the gas density is mostly in a narrow range of $n_e$=20--100 cm$^{-3}$ \citep{2017ApJ...846...32D}.
We assume that the $z=6$ galaxy has a line ratio of $\log(L_\mathrm{[NII]122}/L_\mathrm{[NII]205}) = 0.20 \pm 0.18$, which is the median value in LIRGs.
This assumption is consistent with previous observations of SMGs at $z\sim4$ \citep{2019A&A...631A.167D, 2019ApJ...883L..29L}.
The ionization parameter dependence more seriously affects the estimates of metallicities when a [N~{\sc ii}]$_{122}$/\oiii line ratio is used.
[N~{\sc iii}]$_{57}$/\oiii with similar ionization potential is considered to be a better indicator of metallicity \citep{2011A&A...526A.149N, 2017MNRAS.470.1218P}.
However, at $z>6$, one or both of the [N~{\sc iii}]$_{57}$ and \oiii lines are redshifted into frequency ranges where the atmospheric transmission is low or even zero, except at $z=6.8-6.9$ and $z=7.1-7.3$.
In addition, as [N~{\sc iii}]$_{57}$ emission at $z\sim7$ must be observed with the ALMA Band-9 receivers, the required integration time becomes a factor of 60 longer than for Band-7 observations of the [N~{\sc ii}]$_{122}$ line at the same limiting flux.
An approach using the [N~{\sc ii}]$_{122}$/\oiii line ratio therefore has a great advantage for future metallicity measurements of large samples, once the ionization parameter dependence is taken into account.
In local galaxies where both lines are detected \citep{2015A&A...578A..53C,2016ApJS..226...19F, 2018ApJ...861...94H}, the [N~{\sc iii}]$_{57}$/\oiii ratio is correlated with the [N~{\sc ii}]$_{122}$/\oiii ratio, though with a large dispersion (Figure \ref{fig;NIII57}).
At a similar [N~{\sc iii}]$_{57}$/\oiii ratio, galaxies with a higher [N~{\sc ii}]$_{122}$/\oiii ratio tend to have a lower $S_{63}/S_{158}$ ratio, corresponding to a lower ionization parameter.
We therefore introduce a scaling relation to predict the [N~{\sc iii}]$_{57}$/\oiii ratio: $\log(L_\mathrm{[NIII]57}/L_\mathrm{[OIII]88})_\mathrm{pred}=A+B\log(L_\mathrm{[NII]205}/L_\mathrm{[OIII]88})+C\log(S_{63}/S_{158})$.
We determine the coefficients to minimize the difference between the predicted and observed [N~{\sc iii}]$_{57}$/\oiii in local galaxies by ordinary least squares regression.
Thus, we obtain $A$=--0.21$\pm$0.02, $B$=0.45$\pm$0.03 and $C$=0.32$\pm$0.07, with a dispersion of $\pm$0.14 dex in $\log(L_\mathrm{[NIII]57}/L_\mathrm{[OIII]88})$.
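A minimal numpy sketch of this regression follows; the arrays are hypothetical stand-ins for the logarithmic line and continuum ratios of the local calibration sample.
\begin{verbatim}
import numpy as np

# hypothetical calibration arrays for local galaxies:
# y = log([NIII]57/[OIII]88), x1 = log([NII]205/[OIII]88), x2 = log(S63/S158)
x1 = np.array([-0.8, -0.3,  0.1, -0.5, 0.4])
x2 = np.array([ 0.2, -0.1,  0.3,  0.0, 0.5])
y  = np.array([-0.6, -0.4,  0.0, -0.5, 0.2])

# ordinary least squares for y = A + B*x1 + C*x2
X = np.column_stack([np.ones_like(x1), x1, x2])
coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
A, B, C = coeffs
scatter = np.std(y - X @ coeffs)   # dispersion about the relation
\end{verbatim}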
By using the scaling relation, we infer log([N~{\sc iii}]$_{57}$/[O~{\sc iii}]$_{88}$)=--0.55$\pm$0.09 where the uncertainty includes that due to the conversion from [N~{\sc ii}]$_{205}$ to [N~{\sc ii}]$_{122}$ as well as the measurement errors.
This ratio is relatively low compared to local LIRGs (Figure \ref{fig;hist}), but implies $Z=0.5-0.7~Z_\odot$ according to the photoionization models (section \ref{sec;nii_ir}).
Our result is consistent with previous studies, where it is claimed that SMGs at $z=3-4$ are chemically evolved with nearly solar metallicity (e.g., \cite{2018MNRAS.473...20R,2019ApJ...876....1T, 2019A&A...631A.167D}).
Numerical simulations also predict $Z\sim0.5~Z_\odot$ at $z=6$ in the stellar mass range of $\log(M_\star/M_\odot)=10-10.5$ \citep{2019MNRAS.484.5587T}.
High-resolution 3--4 $\mu$m observations with the James Webb Space Telescope (JWST) will allow us to measure the stellar masses of strongly-lensed galaxies at $z=4-6$ separately from the foreground object.
Therefore, ALMA--JWST synergetic observations will allow us to probe the massive end of the stellar mass--metallicity relation for galaxies at $z=6$, which has not been explored so far.
\begin{figure}[!t]
\begin{center}
\includegraphics[scale=1.0]{image_Fig4_arXiv.pdf}
\end{center}
\caption{
A comparison between $L_\mathrm{[NIII]57}/L_\mathrm{[OIII]88}$ and $L_\mathrm{[NII]122}/L_\mathrm{[OIII]88}$ for galaxies at $z=0$ \citep{2015A&A...578A..53C,2016ApJS..226...19F, 2018ApJ...861...94H}.
Color coding shows the $S_{63}/S_{158}$ continuum ratio.
}
\label{fig;NIII57}
\end{figure}
\begin{ack}
We thank the referee for constructive comments that improved the paper.
We wish to thank Jacqueline Fischer for advice about photoionization modeling with {\tt Cloudy}.
We also would like to thank Tanio D\'{i}az-Santos and Rodrigo Herrera-Camus for kindly providing catalogs of galaxies at $z\sim0$.
This paper makes use of the following ALMA data: ADS/JAO.ALMA\#2019.1.01307.S. ALMA is a partnership of ESO (representing its member states), NSF (USA) and NINS (Japan), together with NRC (Canada), MOST and ASIAA (Taiwan), and KASI (Republic of Korea), in cooperation with the Republic of Chile. The Joint ALMA Observatory is operated by ESO, AUI/NRAO and NAOJ.
We thank the ALMA staff and in particular the EA-ARC staff for their support.
This work was supported by JSPS KAKENHI Grant Numbers 20K14526, 17H06130.
Data analysis was in part carried out on the Multi-wavelength Data Analysis System operated by the Astronomy Data Center (ADC), National Astronomical Observatory of Japan.
\end{ack}
\begin{figure}[!t]
\begin{center}
\includegraphics[scale=1.0]{image_Fig5_arXiv.pdf}
\end{center}
\caption{
A histogram of [N~{\sc iii}]$_{57}$/\oiii ratio inferred from the scaling relation for local LIRGs (blue; \cite{2017ApJ...846...32D}).
A red hatched region shows the range of G09.83808 at $z=6$, including uncertainties on conversion from [N~{\sc ii}]$_{205}$ to [N~{\sc ii}]$_{122}$.
The top x-axis denotes gas-phase metallicities based on the photoionization models (section \ref{sec;nii_ir}).
}
\label{fig;hist}
\end{figure}
\bibliographystyle{apj}
\section{Introduction}
Contrary to conventional wisdom, recent studies have shown that elliptical and lenticular galaxies also have an interesting interstellar medium. Not only do they have X-ray-emitting hot gas, many of them also contain substantial amounts of warm and cold gas, and dust \citep[e.g.][]{Phillips86,Kim89,Buson93,Goudfrooij94,Macchetto96,Zeilinger96,Lauer05,Sarzi06,DavisT11, Singh13, Gomes16}.
The evidence for the warm gas in early-type galaxies comes from optical emission line measurements. Unlike in late-type galaxies, in most cases the emission lines are not due to star formation. Most early-type galaxies with line emission show a line ratio pattern similar to Low-ionization Nuclear Emission-line Regions (LINER, \citealt{Heckman80}). This often leads to them being classified as a type of active galactic nuclei. However, many recent studies have shown that this line emission is not only spatially extended, but has a spatial gradient in line ratios that is consistent with photoionization by sources that are spatially distributed like the stars, rather than by a central AGN \citep{Sarzi10, YanB12, Singh13, Belfiore15, Gomes16}. Therefore, the name LINER is no longer appropriate. In some recent papers, these phenomena have been referred to as LINER-like. \cite{Belfiore15} recently renamed them Low-Ionization Emission Regions, or LIERs for short. LINER-like and LIER both refer to the same phenomenon, namely the spatially-extended optical emission lines observed in a large fraction of quiescent red galaxies whose line ratios do not match those of star-forming regions.
The exact ionization mechanism could certainly differ in individual galaxies. But for the majority of early-type galaxies, where the emission is dominated by an extended component with uniform line ratios and similar equivalent widths (EWs), there is likely a common mechanism shared by most of them. Although the AGN photoionization model has been ruled out for the majority of low-ionization emission-line galaxies, several candidate mechanisms are spatially distributed, such as photoionization by hot evolved stars, collisional ionization by shocks, and turbulent mixing layers.
Photoionization by hot evolved stars is a potential explanation for this warm ionized gas. It was originally proposed by \cite{Binette94}, and further developed by \cite{Stasinska08} and \cite{CidFernandes11}. Several lines of observational evidence favor this explanation, two of which are particularly strong. One is the reasonably tight correlation between emission-line surface brightness and stellar surface density shown by \cite{Sarzi10}. The other is that the ionization parameter gradient measured statistically by \cite{YanB12} matches the prediction for a distributed ionizing source following the stellar density profile. The latter work also found that the luminosity dependence of the ionization parameter gradient matches the theoretical prediction. Together, these appear to be very strong evidence.
However, there are also serious problems associated with this explanation.
First, the ionizing photon budget does not work out. Current stellar evolution theory suggests that the number of ionizing photons from an evolved stellar population is roughly $10^{41}$ per second per solar mass, and that it changes little with age after 1 Gyr. In order to power all the line emission, the gas has to absorb 100\% of the ionizing photons from post-AGB stars. This would require the gas to surround each individual star. However, in many galaxies the gas is observed to have different kinematics from the stars \citep{Sarzi06, DavisT11, Gomes16}, suggesting an external origin. If the neutral gas originated externally, we do not expect it to provide a complete covering fraction. Thus, we cannot explain the observed level of emission given the current predictions of stellar evolution theory. Second, if the gas originated externally and is randomly positioned relative to the hot evolved stars, we would also have trouble reproducing the kind of ionization parameters that are seen. For detailed calculations, see \cite{YanB12}.
Shocks are another popular mechanism for the ionization. Given the amount of stellar mass loss and the large stellar velocity dispersions in these early-type galaxies, cloud-cloud collisions will produce shocks, but it is unclear whether they are the dominant contributor to the emission-line luminosity. In an early-type galaxy observed by the SDSS-IV MaNGA (Mapping Nearby Galaxies at Apache Point Observatory) survey \citep{Bundy15,Yan16b}, \cite{Cheung16} found narrow bi-symmetric features in its \mbox{H$\alpha$}\ equivalent width map. The proposed theory is that winds powered by a central radio AGN produce shocks along the wind directions, enhancing the line emission there.
Turbulent mixing layers may also produce the observed emission lines \citep{Slavin93}. Shear flows at the boundaries between hot and cold gas could produce gas at intermediate temperatures and yield line ratios similar to those observed in LIERs and in the diffuse ionized gas of star-forming galaxies.
One reliable way to distinguish these ionization mechanisms is through the temperature of the gas. Photoionized gas usually has a temperature around $10^4$ K, while shocked gas and turbulent mixing layers have higher temperatures, approaching $10^5$ K. If we can measure the temperature, we can tell which ionization mechanism is at work. There are a few temperature-sensitive line ratios in an optical spectrum, such as \mbox{[\ion{O}{iii}] $\lambda$5007}/\mbox{[\ion{O}{III}] $\lambda$4363} and \mbox{[\ion{N}{ii}] $\lambda$6583}/\mbox{[\ion{N}{ii}] $\lambda$5755}. The weaker line in each of these ratios is usually too weak to be detected in an SDSS-quality spectrum. In this paper, we measure these temperature-sensitive lines in carefully stacked spectra of quiescent red galaxies. This allows us to measure the gas temperature and put strong constraints on the ionization mechanisms.
The paper is organized as follows. We describe the data in Section 2. The methods of sample selection, line measurement and zero-point correction, subsample selection, stacking, stellar continuum subtraction, and weak-line measurement in the stacked spectra are described in Section 3. We derive the extinction and gas temperatures in Section 4. We compare the results with models in Section 5 and conclude in Section 6.
Throughout the paper, we assume a flat $\Lambda$CDM cosmology with $\Omega_m=0.3$ and a Hubble constant $H_0=100h$~km~s$^{-1}$ Mpc$^{-1}$. The magnitudes used are all in the AB system.
\section{Data}
The Sloan Digital Sky Survey \citep{York00} obtained five-band optical imaging over one quarter of the whole sky and spectra for more than half a million galaxies in the local Universe. For this paper, we use the spectra from SDSS DR7 \citep{SDSSDR7}. Our spectra are obtained from the Science Archive Server of SDSS and are read in using the {\it readspec.pro} routine in the IDLSPEC2D package. We make use of the New York University Value Added Catalog \citep{BlantonSS05}, which is a magnitude-limited sample brighter than 17.9 in the $r$-band.
As we pointed out in \cite{Yan11flux}, the standard flux calibration of the spectra shows small residuals on large wavelength scales. For this work, we have applied the correction derived by \cite{Yan11flux} to each spectrum. We also correct the spectra for Galactic extinction using the dust map measured by \cite{SchlegelFD98} and the extinction curve determined by \cite{O'Donnell94}.
\section{Methods}
\subsection{Selection of Quiescent Red Galaxies}
To measure the pure low-ionization emission-line spectrum, we need a sample of quiescent red galaxies. By `quiescent', we refer to galaxies that are not forming stars. Low-ionization diffuse emission may also occur in star-forming galaxies, but there it is overwhelmed by line emission from star-forming HII regions. In quiescent galaxies, it can be studied without contamination.
\begin{figure}
\includegraphics[width=\columnwidth]{cmd.png}
\caption{Colour-magnitude diagram for all SDSS main sample galaxies with $0.09<z<0.1$. The colour bi-modality can be clearly seen. The tilted lines indicate our thresholds for selecting red sequence galaxies.}
\label{fig:cmd}
\end{figure}
We choose a redshift range of $0.06<z<0.15$. We choose $z>0.06$ so that the 3\arcsec\ SDSS fibers cover a scale of more than 2.5 kpc/$h$. As shown by \cite{YanB12}, on these scales the total emission-line luminosity is dominated by large-scale emission, rather than by the nuclear emission, which may have a contribution from AGN photoionization. We choose $z<0.15$ to ensure sufficient S/N for the majority of the sample.
First, we select a red galaxy sample from SDSS using a cut in the colour-magnitude diagram. Figure~\ref{fig:cmd} shows the $^{0.1}(g-r)$ vs. $M_{^{0.1}r}-5\log h$ plot for all SDSS main sample galaxies in a narrow redshift bin. The colour bi-modality is evident in this figure. We apply empirical cuts to select the red galaxies in-between the two lines. The cuts are defined by the following inequalities.
\begin{eqnarray}
^{0.1}(g-r) &>& -0.02 (M_{^{0.1}r}-5\log h) +0.49 \\
^{0.1}(g-r) &<& -0.02 (M_{^{0.1}r}- 5\log h) +0.59
\end{eqnarray}
Red sequence galaxies can also include star-forming galaxies that appear red due to dust extinction. Here, we demonstrate that an additional cut based on $D_n(4000)$ is very effective in removing these dusty star-forming galaxies. Figure~\ref{fig:d4000_gr} shows the distribution of the red sequence galaxies selected above in the $D_n(4000)$ vs. $^{0.1}(g-r)$ space. Our definition of $D_n(4000)$ is the same as those used in \cite{Balogh99} and \cite{KauffmannHT03}: the ratio of the average flux density ($f_\nu$) in the two windows bracketing 4000\AA, $4000-4100$\AA\ and $3850-3950$\AA. Figure~\ref{fig:d4000_gr} clearly shows a correlation between $D_n(4000)$ and $^{0.1}(g-r)$ colour for the majority of red galaxies. But there are also galaxies with lower $D_n(4000)$ that fall below the correlation.
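A minimal sketch of this index for a rest-frame spectrum follows; the input arrays are assumed to be in $f_\nu$ units (a spectrum in $f_\lambda$ must first be converted via $f_\nu\propto\lambda^2 f_\lambda$).
\begin{verbatim}
import numpy as np

def dn4000(wave, fnu):
    """Dn(4000): mean f_nu in 4000-4100 A over mean f_nu in 3850-3950 A."""
    red  = (wave >= 4000.0) & (wave <= 4100.0)
    blue = (wave >= 3850.0) & (wave <= 3950.0)
    return np.mean(fnu[red]) / np.mean(fnu[blue])
\end{verbatim}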
\begin{figure}
\begin{center}
\includegraphics[width=0.45\textwidth]{d4000_gr.png}
\caption{D4000 vs. $^{0.1}g-r$ for red sequence galaxies at $0.09<z<0.1$. The tilted dashed lines indicate our cut for selecting the quiescent red sequence galaxies.}
\label{fig:d4000_gr}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=0.45\textwidth]{dustyredgal.png}
\caption{Panel (a): $H\alpha$ emission equivalent width (EW) vs. $D_n(4000)$ for red sequence galaxies with $0.09<z<0.1$. Low $D_n(4000)$ galaxies have stronger emission. (b): Balmer decrement for stronger line-emitting red-sequence galaxies in the same redshift range. { The dashed line indicates the Case B ratio for 10,000K.} Only the brightest 40\% of galaxies in \mbox{H$\alpha$}\ luminosity are plotted. Low $D_n(4000)$ red galaxies have higher dust extinction. (c) \mbox{[\ion{N}{ii}]}/\mbox{H$\alpha$}\ ratio for red galaxies as a function of D4000. Low $D_n(4000)$ galaxies have lower \mbox{[\ion{N}{ii}]}/\mbox{H$\alpha$}. (d) The \mbox{H$\alpha$}\ luminosity vs. \mbox{[\ion{N}{ii}]}/\mbox{H$\alpha$}\ ratio distribution of low $D_n(4000)$ galaxies follow the expectation of a composite of low-ionization emission and star-forming region emission. In all panels, the red points indicate those galaxies above the lower cut shown in Figure~\ref{fig:d4000_gr}.}
\label{fig:dustyredgal}
\end{center}
\end{figure}
What are these galaxies? As shown in Fig.~\ref{fig:dustyredgal}, among all red galaxies, the low-$D_n(4000)$ ones have stronger line emission and higher Balmer decrements than the high-$D_n(4000)$ ones, indicating that they are dusty star-forming galaxies. The younger the stellar population (the lower $D_n(4000)$), the higher the extinction needed to make its colour as red as that of an unextincted old population. The low-$D_n(4000)$ galaxies also have lower \mbox{[\ion{N}{ii}]}/\mbox{H$\alpha$}\ ratios, but not as low as pure star-forming galaxies. This intermediate line ratio is probably due to a combination of low-ionization emission and star formation, as pointed out by \cite{YanB12} in their Figure 2 and shown here again in panel (d) of Figure~\ref{fig:dustyredgal}.
Because the two windows defining $D_n(4000)$ are separated by only 150\AA, the index is much less sensitive to dust extinction than the $^{0.1}(g-r)$ colour, whose two bands are separated by $\sim1300$\AA. Therefore, $D_n(4000)$ better reflects the intrinsic properties of the stellar population.
Therefore, to exclude dusty star-forming galaxies from our analysis, we apply two cuts on $D_n(4000)$ { as a function of $^{0.1}(g-r)$ and select only galaxies in-between the two cuts}. The cuts we adopt are shown in Figure~\ref{fig:d4000_gr} and can be expressed by the following inequalities.
\begin{eqnarray}
D_n(4000) &>& 1.6~^{0.1}(g-r) +0.26 \\
D_n(4000) &<& 1.6~^{0.1}(g-r) +0.52
\end{eqnarray}
\subsection{Emission line measurements in individual spectra}
We measure the strong emission lines (\mbox{[\ion{O}{ii}] $\lambda$3727}, \text{H$\beta$}, \mbox{[\ion{O}{iii}] $\lambda$5007}, \mbox{[\ion{O}{i}]}, \mbox{H$\alpha$}, \mbox{[\ion{N}{ii}] $\lambda$6583}, \mbox{[\ion{S}{II}] $\lambda \lambda$6716,6731}) after subtracting the stellar continuum in each spectrum. We fit the stellar continuum with a linear combination of templates. The templates are made with \cite{BC03} stellar population synthesis models with solar metallicity and span a range of ages. The fitting is done in the same way as described by \cite{Yan06}, except for the use of multiple templates. After the continuum subtraction, the residuals around a line may still differ systematically from zero. We thus fit the residual continuum with a linear function in two sidebands around the line and subtract the linear fit from the residual spectrum. Finally, we sum the flux within a narrow window around each line to get the line flux. Note that we do not use Gaussian-fitted fluxes because Gaussian fitting is ill-behaved when the line flux is zero or very low. The center and sideband windows for each emission line are listed in Table~\ref{tab:linedef}.
The \text{H$\beta$}\ line flux is measured within a fairly narrow window because of possible imperfections in the stellar continuum subtraction around \text{H$\beta$}. As the narrow window does not always cover the complete line profile, we correct for the missing flux assuming a Gaussian profile with a dispersion equal to that measured for \mbox{H$\alpha$}.
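The per-line measurement reduces to a few steps; a sketch follows, where {\tt wave} and {\tt resid} are assumed to be the rest-frame wavelength grid and the continuum-subtracted residual spectrum, and the windows are those of Table~\ref{tab:linedef}.
\begin{verbatim}
import numpy as np

def summed_line_flux(wave, resid, center, left_sb, right_sb):
    """Sum residual flux in the line window after removing a linear fit
    to the residual continuum in the two sidebands."""
    sb = ((wave > left_sb[0]) & (wave < left_sb[1])) | \
         ((wave > right_sb[0]) & (wave < right_sb[1]))
    slope, intercept = np.polyfit(wave[sb], resid[sb], 1)
    win = (wave > center[0]) & (wave < center[1])
    dlam = np.gradient(wave)   # per-pixel wavelength widths
    return np.sum((resid[win] - (slope * wave[win] + intercept)) * dlam[win])
\end{verbatim}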
\begin{table*}
\begin{tabular}{llll}
\hline\hline
Line & Center window (\AA) & Left sideband(\AA) & Right sideband (\AA)\\ \hline
\mbox{[\ion{O}{ii}] $\lambda$3727} & 3717.36---3739.36 & 3697.36---3717.36 & 3739.36---3759.36\\
\mbox{[\ion{Ne}{iii}] $\lambda$3869} & 3859.85---3879.85 & 3839.85---3859.85 & 3879.85---3899.85\\
\text{H$\beta$} & 4857.46---4867.04 & 4798.88---4838.88 & 4885.62---4925.62\\
\mbox{[\ion{O}{iii}] $\lambda$5007} & 4998.2---5018.2 & 4978.2---4998.2 & 5018.2---5038.2 \\
\mbox{[\ion{O}{i}] $\lambda$6300} & 6292.05---6312.05 & 6272.05---6292.05 & 6312.05---6332.05 \\
\mbox{H$\alpha$} & 6554.61---6574.61 & 6483---6538 & 6598---6653 \\
\mbox{[\ion{N}{ii}] $\lambda$6583} &6575.28---6595.28 & 6483---6538 & 6598---6653 \\
\mbox{[\ion{S}{II}] $\lambda \lambda$6716,6731} & 6705.48---6745.48 & 6687.48---6705.48 & 6745.48---6763.48 \\
\hline\hline
\end{tabular}
\caption{Definition of windows for strong emission line measurements in both individual and stacked spectra.}
\label{tab:linedef}
\end{table*}
\subsection{Zero-point correction for emission lines}
The galaxies we selected have very uniform continuum shapes and line ratios. The standard deviation of the ratio between the \mbox{[\ion{O}{ii}]}\ continuum level and the \mbox{H$\alpha$}\ continuum level is only 0.046 dex (10.6\%).
Because the emission lines of these galaxies are relatively weak, it is extremely important to remove as much of the systematics as possible from the emission-line measurements. Otherwise, the line ratios will be biased, and there could be artificial correlations between line ratio and line strength (e.g., between the \text{H$\beta$}/\mbox{H$\alpha$}\ ratio and \mbox{H$\alpha$}\ strength, or between the \text{[\ion{O}{iii}]}/\mbox{[\ion{O}{ii}]}\ ratio and \mbox{[\ion{O}{ii}]}\ strength). We noticed that there are systematics associated with our emission-line EW measurements: even for galaxies in which every emission line other than \text{[\ion{O}{iii}]}\ is measured to be negative, the median \text{[\ion{O}{iii}]}\ line flux is still positive.
Another example: for a sample of galaxies with nearly zero \mbox{H$\alpha$}\ EW, the \text{H$\beta$}\ EW has a median of 0.16\,\AA, which is unphysical.
This means that there are zero-point offsets in our measurements of the emission-line EWs. Based on this logic, we first determine the zero-point offsets of all the emission-line EWs. The systematics likely result from inaccurate stellar continuum subtraction; thus, they should be removed in EW rather than in flux.
The logic we adopt is that the EWs of different emission lines should be positively correlated with each other, as shown by the EW vs. EW plots in Figure~\ref{fig:ew_vs_ew}. The true EW of a given emission line should thus be among the lowest of all galaxies when all the other emission lines are among the lowest. In addition, we assume there is a population of quiescent galaxies in which all line emission is consistent with zero. There are several strong lines in the spectra: \mbox{H$\alpha$}, \mbox{[\ion{N}{ii}] $\lambda$6583}, \mbox{[\ion{S}{II}] $\lambda \lambda$6716,6731}, \mbox{[\ion{O}{iii}] $\lambda$5007}, and \mbox{[\ion{O}{ii}] $\lambda$3727}. To get the EW zero-point for each strong line, we select a sample with the lowest EWs in all the other strong lines, excluding the one under consideration.
For example, to evaluate the \mbox{[\ion{O}{ii}]}\ EW zero-point, we first select all galaxies whose other strong lines (\mbox{H$\alpha$}, \mbox{[\ion{N}{ii}]}, \text{[\ion{O}{iii}]}, and \mbox{[\ion{S}{ii}]}) have EWs below the 10th percentile, respectively for each emission line, among all the quiescent galaxies. We measure the median \mbox{[\ion{O}{ii}]}\ EW among these galaxies. We then increase the percentile threshold to the 15th percentile, 20th percentile, etc. Finally, we plot the median \mbox{[\ion{O}{ii}]}\ EW values as a function of the percentile threshold in Fig.~\ref{fig:zeropoint}. As we go to higher percentile thresholds, the median \mbox{[\ion{O}{ii}]}\ EW should monotonically increase; at low enough thresholds, it should be consistent with being flat, because those line measurements are completely due to noise. As shown in Figure~\ref{fig:zeropoint}, the median \mbox{[\ion{O}{ii}]}\ EW first decreases and then increases. The initial decline is basically consistent with being flat; it arises from noisy measurements and small-sample statistics. We thus set the zero-point of the \mbox{[\ion{O}{ii}]}\ EW to the median value at the 25th percentile, which is a good estimate for galaxies whose line emission is consistent with zero.
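A schematic version of this procedure follows; per-galaxy EW arrays are assumed, and the adopted zero-point is the median at the 25th-percentile threshold.
\begin{verbatim}
import numpy as np

def zero_point_curve(ew, other_ews, thresholds=(10, 15, 20, 25, 30)):
    """Median EW of one line among galaxies below a given percentile
    in ALL other strong lines, for a series of percentile thresholds."""
    medians = []
    for p in thresholds:
        low = np.all([o < np.percentile(o, p) for o in other_ews], axis=0)
        medians.append(np.median(ew[low]))
    return dict(zip(thresholds, medians))   # adopt the value at p = 25
\end{verbatim}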
\begin{figure*}
\begin{center}
\includegraphics[width=\textwidth]{zeropoint.png}
\caption{Median EW for each emission line for subsamples with the lowest EWs in the other strong lines, plotted as a function of the percentile threshold for selecting such subsamples. Here, the EWs shown are before the zero-point correction is applied. The horizontal lines indicate the adopted zero-point correction for each line.}
\label{fig:zeropoint}
\end{center}
\end{figure*}
We do this for all other strong lines. When determining the zero-point for \mbox{H$\alpha$}\ or \mbox{[\ion{N}{ii}]}, we exclude both lines when selecting the lowest-EW samples (i.e., we only use \mbox{[\ion{O}{ii}]}, \text{[\ion{O}{iii}]}, and \mbox{[\ion{S}{ii}]}\ to select the subsample for each percentile threshold), because \mbox{H$\alpha$}\ and \mbox{[\ion{N}{ii}]}\ are measured with the same continuum subtraction and their EW zero-points could therefore be correlated. If the line under consideration is relatively weak (\text{H$\beta$}, \mbox{[\ion{O}{i}]}), we use all the strong lines to select the subsample for each percentile threshold.
We take the median EWs of the 25th-percentile subsamples as the zero-points for all the lines. These values are subtracted from the original EW measurements. Afterwards, we recompute the corrected line fluxes by multiplying the zero-point-corrected EWs by the corresponding continuum levels in each spectrum.
After this correction to the EWs, we plot the EWs of the lines against each other (Fig.~\ref{fig:ew_vs_ew}). With the correction, there is a population of galaxies whose distribution in every emission line centers around the origin. These galaxies are consistent with having zero line emission.
\begin{figure*}
\begin{center}
\includegraphics[width=\textwidth]{ew_vs_ew.pdf}
\caption{Zero-point-corrected EWs of six different emission lines are compared to \mbox{H$\alpha$}\ EW for the high-D$_{\rm n}$(4000) red galaxy sample. All lines show positive correlation with \mbox{H$\alpha$}. The contour levels indicate the number density of points. They are equally-spaced in log density. The crosses indicate the location of the origin [0,0]. Before the zero-point correction, the highest density regions do not center around the origin.}
\label{fig:ew_vs_ew}
\end{center}
\end{figure*}
Note that this method assumes that the intrinsic emission-line EWs of the galaxies with the weakest lines are zero. It is, however, possible that all early-type galaxies have an intrinsic minimum line strength. We think this is unlikely. Even if there were such a mechanism giving every galaxy a positive minimum line strength, our method would simply remove the effect of that constant EW and probe only the line emission above this minimum level.
\subsection{Constructing the strong-line subsamples}
First, we bin the quiescent red galaxy sample by the strength of their emission lines.
The equivalent widths of the sample show very little dependence on the galaxy luminosity (Figure~\ref{fig:ew_mag}). To detect the weak emission lines such as \mbox{[\ion{N}{ii}] $\lambda$5755}\ and \mbox{[\ion{O}{III}] $\lambda$4363}, we focus on galaxies with relatively strong line emission. Later, we will use galaxies without line emission as a control sample for stellar continuum subtraction.
\begin{figure}
\begin{center}
\includegraphics[width=0.45\textwidth]{ew_mag.pdf}
\caption{This figure shows that the equivalent width (EW) of \mbox{H$\alpha$}\ is nearly independent of galaxy luminosity. { The lines show the 5th-, 25th-, 50th-, 75th-, and 95th-percentiles as a function of absolute magnitude.}}
\label{fig:ew_mag}
\end{center}
\end{figure}
As shown by \cite{Yan06} and in the top left panel of Figure~\ref{fig:ew_vs_ew}, most of these galaxies follow a very narrow sequence in the \mbox{[\ion{O}{ii}]}\ EW vs. \mbox{H$\alpha$}\ EW plot, i.e., they span a very narrow range in \mbox{[\ion{O}{ii}]}/\mbox{H$\alpha$}\ ratio. We suspect that the small fraction of galaxies with low \mbox{[\ion{O}{ii}]}/\mbox{H$\alpha$}\ ratios is a slightly different population. Their low \mbox{[\ion{O}{ii}]}/\mbox{H$\alpha$}\ ratios could be due to several different reasons: contamination from another ionization mechanism, a different abundance pattern, or more dust extinction. We exclude this population with the same cut as used in \cite{Yan06}, keeping only galaxies with ${\rm EW}(\mbox{[\ion{O}{ii}]}) > 5\,{\rm EW}(\mbox{H$\alpha$})-7$. This cut removes 3.3\% of the galaxies. It removes most of the Seyferts in the red sequence, which have high \text{[\ion{O}{iii}]}/\mbox{[\ion{O}{ii}]}\ ratios (\text{[\ion{O}{iii}]}/\mbox{[\ion{O}{ii}]}$>1$) compared to the majority of quiescent galaxies.
We then limit the sample to spectra with a median S/N per pixel greater than 15, a cut chosen to maximize the S/N of the stacked spectra. It removes 27.5\% of the galaxies remaining after the previous cuts.
Next, we select the strongest emission-line galaxies, basing our selection on the equivalent widths of the lines.
We do not base the selection on \mbox{H$\alpha$}\ EW alone, as that would bias the line ratio distribution of the sample; for example, selecting strong-\mbox{H$\alpha$}\ galaxies without considering \mbox{[\ion{N}{ii}]}\ may bias the sample towards low \mbox{[\ion{N}{ii}]}/\mbox{H$\alpha$}\ galaxies. Therefore, we construct a `Total EW index' by combining the EWs of all the strong emission lines (\mbox{H$\alpha$}, \mbox{[\ion{N}{ii}]}, \mbox{[\ion{O}{ii}]}, \text{[\ion{O}{iii}]}, \mbox{[\ion{S}{ii}]}) according to the median EW ratios among these lines. This population forms a narrow sequence in many of the EW vs. EW plots (Figure~\ref{fig:ew_vs_ew}). For example, in the \mbox{[\ion{O}{ii}]}\ EW vs. \mbox{H$\alpha$}\ EW plot, these galaxies populate a narrow sequence with a slope of approximately 5, so ordering the galaxies by $5\,{\rm EW}(\mbox{[\ion{O}{ii}]})+{\rm EW}(\mbox{H$\alpha$})$ sorts them along that sequence. Similarly, we obtain the EW ratios between the other lines and \mbox{H$\alpha$}, and construct a total EW index with the following formula.
\begin{eqnarray}
{\rm Total~EW~index} &=& {\rm EW}(\mbox{H$\alpha$}) + 1.03\,{\rm EW}(\mbox{[\ion{N}{ii}]}) \nonumber\\
& & +\,5.0\,{\rm EW}(\mbox{[\ion{O}{ii}]}) + 0.5\,{\rm EW}(\text{[\ion{O}{iii}]}) \nonumber\\
& & +\,{\rm EW}(\mbox{[\ion{S}{ii}]})
\end{eqnarray}
We select a sample of strong emission-line galaxies by requiring this total EW index to be above the 75th percentile in each redshift bin of $\Delta z= 0.1$.
This leads to a sample of 18,664 galaxies.
We will bin the strong-line sample by the \mbox{[\ion{N}{ii}]}/\mbox{[\ion{O}{ii}]}\ and \mbox{[\ion{N}{ii}]}/\mbox{H$\alpha$}\ line ratios. Therefore, we require the uncertainties on the \mbox{[\ion{N}{ii}]}/\mbox{[\ion{O}{ii}]}\ and \mbox{[\ion{N}{ii}]}/\mbox{H$\alpha$}\ ratios to be smaller than 0.3 dex, which removes 21\% of the strong-line sample. We further require the \text{[\ion{O}{iii}]}/\mbox{[\ion{O}{ii}]}\ ratio to be less than 1, which removes 1.2\% of the remaining sample. This yields our final strong-line sample of 14,645 galaxies.
\begin{figure}
\begin{center}
\includegraphics[width=0.45\textwidth]{n2o2_n2ha_den.png}
\caption{Distribution of the selected strong-line quiescent red galaxies in \mbox{[\ion{N}{ii}]}/\mbox{[\ion{O}{ii}]}\ vs. \mbox{[\ion{N}{ii}]}/\mbox{H$\alpha$}\ space. The gray scale indicates density of points. The dashed line is an approximation (not a fit) to the trend. The dotted lines mark our separation thresholds to split the sample into the high-\mbox{[\ion{N}{ii}]}/\mbox{H$\alpha$}, mid-\mbox{[\ion{N}{ii}]}/\mbox{H$\alpha$}, and low-\mbox{[\ion{N}{ii}]}/\mbox{H$\alpha$}\ subsamples.}
\label{fig:n2o2_n2ha_den}
\end{center}
\end{figure}
With these cuts, the selected strong-line quiescent red galaxies populate a relatively narrow locus in the \mbox{[\ion{N}{ii}]}/\mbox{[\ion{O}{ii}]}\ vs. \mbox{[\ion{N}{ii}]}/\mbox{H$\alpha$}\ diagram, as shown in Figure~\ref{fig:n2o2_n2ha_den}. \mbox{[\ion{N}{ii}]}/\mbox{[\ion{O}{ii}]}\ is an excellent metallicity indicator, because N and O have very similar ionization potentials at multiple ionization levels; the two species therefore always trace roughly the same spatial regions, whether the gas is photoionized or collisionally ionized. In addition, this line ratio is sensitive to metallicity for two reasons. First, nitrogen is a secondary element, so the N/O abundance ratio increases in the high-metallicity regime. Second, the two lines differ greatly in transition energy, making their ratio very sensitive to the temperature variations that trace metallicity variations. Both factors increase the \mbox{[\ion{N}{ii}]}/\mbox{[\ion{O}{ii}]}\ ratio with increasing metallicity.
The drawback with this line ratio is that it is quite sensitive to dust extinction. { However, this is of little consequence for our analysis of quiescent galaxies since the extinction vector goes vertically in Figure~\ref{fig:n2o2_n2ha_den}, which therefore differs strikingly from the general slope apparent in the data. As a result, such trend does not appear to be due to variations in dust extinction. At a fixed \mbox{[\ion{N}{ii}]}/\mbox{H$\alpha$}\ ratio, the small observed variations in \mbox{[\ion{N}{ii}]}/\mbox{[\ion{O}{ii}]}\ imply that the extinction variations are relatively small among the selected quiescent galaxies.}
Gas with different metallicities can have different electron temperatures. It therefore makes more sense to separate the whole sample into different metallicity bins. We separate this sample into three bins according to their position in the \mbox{[\ion{N}{ii}]}/\mbox{[\ion{O}{ii}]}\ vs. \mbox{[\ion{N}{ii}]}/\mbox{H$\alpha$}\ space, which should roughly trace metallicity variations.
We find an approximation to the trend in this space and cut the sample into 3 equal portions along the dotted lines as shown in Figure~\ref{fig:n2o2_n2ha_den}. Below, we refer to the three subsamples as high-\mbox{[\ion{N}{ii}]}/\mbox{H$\alpha$}, mid-\mbox{[\ion{N}{ii}]}/\mbox{H$\alpha$}, and low-\mbox{[\ion{N}{ii}]}/\mbox{H$\alpha$}\ subsamples.
\subsection{Constructing the matching zero-line subsamples}
In order to measure the weak emission lines, we have to accurately subtract the stellar continuum underneath the lines. One of the best ways for stellar continuum subtraction is to use other galaxies without emission lines to build a stellar continuum template. Thus, we construct a sample of red quiescent galaxies without emission lines and select a matching subsample for each strong-line subsample.
We select those galaxies whose EWs center around zero in all the strong emission lines (\mbox{H$\alpha$}, \mbox{[\ion{N}{ii}] $\lambda$6583}, \mbox{[\ion{S}{II}] $\lambda \lambda$6716,6731}, \mbox{[\ion{O}{iii}] $\lambda$5007}, \text{H$\beta$}, and \mbox{[\ion{O}{ii}] $\lambda$3727}). We define a multi-dimensional ellipsoid centered on the origin, with each semi-axis equal to twice the median EW uncertainty of the corresponding line. Again, the selection is done in each redshift bin, as the uncertainties change with redshift. This yields a zero-line sample of 13,353 galaxies.
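In schematic form (columns of the assumed input arrays {\tt ews} and {\tt ew\_errs} correspond to the six lines):
\begin{verbatim}
import numpy as np

def zero_line_mask(ews, ew_errs):
    """Galaxies inside an ellipsoid around zero EW whose semi-axes are
    twice the median EW uncertainty of each line."""
    semi_axes = 2.0 * np.median(ew_errs, axis=0)
    return np.sum((ews / semi_axes) ** 2, axis=1) < 1.0
\end{verbatim}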
Because there are weak but non-zero correlations between the emission-line strength and the stellar population, the high-\mbox{[\ion{N}{ii}]}/\mbox{H$\alpha$}\ and low-\mbox{[\ion{N}{ii}]}/\mbox{H$\alpha$}\ strong-line subsamples may have different stellar continua. To accurately reproduce the stellar continuum of each strong-line subsample, the galaxies making up the corresponding zero-line subsample need to have the same stellar population and the same velocity dispersion. We experimented with several different matching methods. The best continuum subtraction is achieved by matching the 3-d distribution of galaxies in the space of $^{0.1}r$-band absolute magnitude, $D_n(4000)$, and stellar velocity dispersion.
In detail, we first choose the bin size in each dimension (each property to be matched) so that there are 20 bins between the 5-th and 95-th percentile points in that property for the zero-line sample. This corresponds to $\Delta M_{0.1r}=0.119$, $\Delta D_n(4000)=0.011$, and $\Delta \log V_{\rm disp}= 0.016$.
For each strong-line subsample, we go through each bin in this 3-d parameter space and randomly draw twice as many galaxies in the zero-line sample belonging to that bin as there are in the given strong-line subsample for that bin. If a bin has fewer galaxies in the zero-line sample than there are in the strong-line subsample, then we discard all galaxies in that bin from both samples. If the zero-line sample has more than twice the number of the strong-line subsample, we do the random drawing without replacement. If the zero-line sample has more galaxies in that bin than the strong-line subsample but less than twice as many, we allow each galaxy in the zero-line sample to be selected at most twice. This way, we minimize repetition of spectra in constructing the corresponding zero-line subsample.
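A simplified sketch of this draw follows; per-galaxy 3-d bin indices are assumed precomputed as lists of tuples, and the at-most-twice rule is approximated here by sampling with replacement when the pool is short.
\begin{verbatim}
import numpy as np

def matched_draw(strong_bins, zero_bins, rng):
    """strong_bins/zero_bins: per-galaxy (M_r, Dn4000, logVdisp) bin tuples.
    Returns indices of the matched zero-line subsample."""
    picked = []
    for key in set(strong_bins):
        n_strong = strong_bins.count(key)
        pool = [i for i, k in enumerate(zero_bins) if k == key]
        if len(pool) < n_strong:
            continue   # discard under-filled bins from both samples
        replace = len(pool) < 2 * n_strong
        picked.extend(rng.choice(pool, size=2 * n_strong, replace=replace))
    return picked
\end{verbatim}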
\subsection{Stacking the Spectra}
We stack the spectra of all galaxies in each strong-line subsample and each zero-line subsample. For the stacking, we first correct each spectrum for the Milky Way galactic extinction using the extinction map of \cite{SchlegelFD98} and the extinction law of \cite{O'Donnell94}. We then correct each spectrum by the correction vector derived by \cite{Yan11flux} (done in observed wavelength space), normalize it by the median flux in the window of 6000-6100\AA, and re-sample it to a common wavelength grid with the same logarithmic spacing. Finally, we sum together all spectra in a given sample and propagate the errors accordingly.
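In outline, the stacking step is as follows (a sketch; the extinction and flux-calibration corrections are assumed already applied to each input spectrum):
\begin{verbatim}
import numpy as np

def stack(spectra, variances, waves, loglam_grid):
    """Normalize by the median flux in 6000-6100 A, resample onto a common
    log-lambda grid, and sum fluxes with error propagation."""
    wave_out = 10.0 ** loglam_grid
    total = np.zeros_like(wave_out)
    var = np.zeros_like(wave_out)
    for flux, v, wave in zip(spectra, variances, waves):
        norm = np.median(flux[(wave > 6000.0) & (wave < 6100.0)])
        total += np.interp(wave_out, wave, flux / norm)
        var += np.interp(wave_out, wave, v / norm**2)  # add in quadrature
    return total, np.sqrt(var)
\end{verbatim}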
Figure~\ref{fig:coaddD_comparison} shows the stacked spectra for the high \mbox{[\ion{N}{ii}]}/\mbox{H$\alpha$}\ sample, the medium \mbox{[\ion{N}{ii}]}/\mbox{H$\alpha$}\ sample, and the low \mbox{[\ion{N}{ii}]}/\mbox{H$\alpha$}\ sample.
\begin{figure*}
\begin{center}
\includegraphics[width=1.0\textwidth]{coaddD_comparison.png}
\caption{Stacked spectra for the strong-line subsamples in black and for the zero-line subsamples in red. The three panels are for the three \mbox{[\ion{N}{ii}]}/\mbox{H$\alpha$}\ bins, respectively. The spectra for the zero-line subsamples are offset vertically for clarity.}
\label{fig:coaddD_comparison}
\end{center}
\end{figure*}
To measure the emission lines, we need to subtract the continuum. Although we matched the zero-line galaxies to the strong-line galaxies before stacking their spectra, the resulting continua still have slightly different broadband shapes at the few-percent level, which would introduce residual features in the continuum. To avoid this problem, we first divide the stacked spectrum of each strong-line subsample by the stacked spectrum of its corresponding zero-line subsample, then fit a b-spline through the ratio spectrum using only the line-free regions, with a spacing of 400 pixels between break points. We then multiply the zero-line stack by this smooth ratio curve before subtracting it from the strong-line stack. This is done separately for each strong-line subsample. The resulting residual spectra are shown in Figure~\ref{fig:overallresidual}. One can see that the residuals are flat to much better than 1\%.
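Schematically, the rescaling step looks as follows; for simplicity a smoothing spline stands in for the break-point b-spline, and {\tt line\_free} is an assumed boolean mask over the wavelength grid.
\begin{verbatim}
import numpy as np
from scipy.interpolate import UnivariateSpline

def rescaled_zero_stack(wave, strong, zero, line_free):
    """Fit a smooth spline to (strong/zero) over line-free pixels and
    rescale the zero-line stack to match the strong-line continuum."""
    ratio = strong / zero
    spl = UnivariateSpline(wave[line_free], ratio[line_free], k=3,
                           s=line_free.sum())   # heavy smoothing
    return zero * spl(wave)
\end{verbatim}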
\begin{figure*}
\begin{center}
\includegraphics[width=1.0\textwidth]{coaddDresidual.png}
\caption{Continuum-subtracted residual stack spectra for the three strong-line subsamples with different \mbox{[\ion{N}{ii}]}/\mbox{H$\alpha$}\ ratios. }
\label{fig:overallresidual}
\end{center}
\end{figure*}
\subsection{Emission line measurements in the stacked spectra}
\begin{figure*}
\begin{center}
\includegraphics[width=0.49\textwidth]{coaddD_OIIIzoomin.png}
\includegraphics[width=0.49\textwidth]{coaddD_NIIzoomin.png}
\includegraphics[width=0.49\textwidth]{coaddD_OIIzoomin.png}
\includegraphics[width=0.49\textwidth]{coaddD_SIIzoomin.png}
\caption{Zoom-in of the continuum-subtracted stack spectra around \mbox{[\ion{O}{III}] $\lambda$4363} (top left), \mbox{[\ion{N}{ii}] $\lambda$5755} (top right), \mbox{[\ion{O}{II}] $\lambda\lambda$7320,7330} (bottom left), and \mbox{[\ion{S}{II}] $\lambda\lambda$4068,4076} (bottom right) for the three strong-lined subsamples. The light grey zones indicate the wavelength windows used to determine the residual continua, which are fit by straight lines (red solid lines). The dark gray zones indicate the windows over which the line flux is integrated. The expected line centers are shown by the vertical dashed lines.}
\label{fig:coronalzoomin}
\end{center}
\end{figure*}
We measure the emission lines in the residual spectra using an algorithm similar to that of \cite{Yan06}. For each emission line, we measure the residual continuum level in the two sidebands given in Tables~\ref{tab:linedef} and \ref{tab:weaklinedef}. We fit a linear function through the two sidebands and subtract it from the spectrum before measuring the line flux. We sum the residual flux in the line window to obtain the line flux. { For the Balmer lines, we adopt the Gaussian-fitted line flux rather than the summed flux, as visual inspection suggests that Gaussian fitting yields better fits to the continuum levels for the weaker Balmer lines.}
\begin{table*}
\begin{tabular}{llll}
\hline\hline
Line & Center window (\AA) & Left sideband(\AA) & Right sideband (\AA)\\ \hline
\text{H$\zeta$} & 3884.17---3896.17 & 3835.00---3860.00 & 3896.18---3921.18 \\
\text{H$\epsilon$} & 3963.20---3979.20 & 3952.20---3960.20 & 3979.20---3987.20 \\
\mbox{[\ion{S}{II}] $\lambda\lambda$4068,4076} & 4058.625---4088.625 & 4000---4060 & 4140---4200\\
\text{H$\delta$} & 4094.892---4110.892& 4086.892---4094.892 & 4110.892---4118.892\\
\text{H$\gamma$} & 4334.692---4348.692& 4326.692---4334.692 & 4348.692---4356.692 \\
\mbox{[\ion{O}{III}] $\lambda$4363} & 4354.435---4374.435 & 4248.435---4328.435 & 4374.435---4454.435 \\
\mbox{[\ion{N}{ii}] $\lambda$5755} & 5746.2---5766.2 & 5666.2---5746.2 & 5766.2---5846.2 \\
\mbox{[\ion{O}{II}] $\lambda\lambda$7320,7330} & 7312.08---7342.08 & 7252.08---7312.08 & 7342.08---7402.08\\
\hline\hline
\end{tabular}
\caption{Definition of windows for weak emission line measurements in the continuum-subtracted stack spectra.}
\label{tab:weaklinedef}
\end{table*}
The fluctuations in the line-free regions of the residual spectra show the level of uncertainty associated with our continuum subtraction technique. For the strong forbidden lines, the uncertainty in the summed line flux is propagated from the uncertainties in the stacked spectra. For the weaker coronal lines, we adopt a more conservative estimate that takes into account any systematics resulting from imperfect continuum subtraction. We simulate the flux measurements with a sliding boxcar placed in the sidebands with the same width as the line window. We take the root-mean-square of about 100 to 160 such integrated flux measurements in regions without any emission lines as the uncertainty of the line measurement. We show in Table \ref{tab:lines} our measurements for all emission lines in the continuum-subtracted spectra. { The first three columns give the raw values relative to the \text{H$\beta$}\ flux, without correcting for intrinsic dust extinction. The second set of three columns shows the values after extinction correction according to the \mbox{H$\alpha$}/\text{H$\beta$}\ ratio. However, as we will discuss below, we have reasons to suspect that this extinction correction can be unreliable; thus, we do not recommend using these extinction-corrected values. }
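A sketch of this error estimate follows, with the line-window width and sidebands taken from Table~\ref{tab:weaklinedef}; the input arrays are assumed as before.
\begin{verbatim}
import numpy as np

def boxcar_rms(wave, resid, sidebands, width):
    """RMS of integrated fluxes in a sliding window of the line's width,
    stepped through the line-free sidebands."""
    dlam = np.median(np.diff(wave))
    npix = int(round(width / dlam))
    fluxes = []
    for lo, hi in sidebands:
        idx = np.where((wave > lo) & (wave < hi))[0]
        for start in range(len(idx) - npix):
            fluxes.append(np.sum(resid[idx[start:start + npix]]) * dlam)
    return np.std(fluxes)
\end{verbatim}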
Figure~\ref{fig:coronalzoomin} shows the residual spectra around the four temperature-sensitive coronal lines, which are the key measurements for this paper. In the bottom panels, one can see that the \mbox{[\ion{O}{II}] $\lambda\lambda$7320,7330}\ and \mbox{[\ion{S}{II}] $\lambda\lambda$4068,4076}\ lines are significantly detected in all three stacked spectra. The bump around 7290\AA\ in the low-\mbox{[\ion{N}{ii}]}/\mbox{H$\alpha$}\ stack could be due to \mbox{[\ion{Ca}{II}] $\lambda$7291}. The \mbox{[\ion{N}{ii}] $\lambda$5755}\ line is detected in the mid- and low-\mbox{[\ion{N}{ii}]}/\mbox{H$\alpha$}\ stacks. There is a bump around the wavelength of \mbox{[\ion{N}{ii}] $\lambda$5755}\ in the high-\mbox{[\ion{N}{ii}]}/\mbox{H$\alpha$}\ stack but it is of a similar level as other features that are due to noise. According to our simulated flux measurements in the sidebands as described above, it is less than 3$\sigma$. The features to the right of the shaded region are \mbox{\ion{He}{i} $\lambda$5876}\ in emission and \mbox{\ion{Na}{I} $\lambda\lambda$5890,5896}\ in absorption.
{ For \mbox{[\ion{O}{III}] $\lambda$4363}, the bumps in all three spectra around that wavelength do not look significant. This spectral region could be affected by an imperfect subtraction of the \text{H$\gamma$}\ absorption feature present in the stellar continuum. Only in the high \mbox{[\ion{N}{ii}]}/\mbox{H$\alpha$}\ stack is the \mbox{[\ion{O}{III}] $\lambda$4363}\ marginally detected, at a 3$\sigma$ level.}
\begin{table*}
\begin{tabular}{ m{4cm} |ccc|ccc}
\hline\hline
Lines & Hi-\mbox{[\ion{N}{ii}]}/\mbox{H$\alpha$}\ & Mid-\mbox{[\ion{N}{ii}]}/\mbox{H$\alpha$}\ & Low-\mbox{[\ion{N}{ii}]}/\mbox{H$\alpha$}\ & Hi-\mbox{[\ion{N}{ii}]}/\mbox{H$\alpha$}\ & Mid-\mbox{[\ion{N}{ii}]}/\mbox{H$\alpha$}\ & Low-\mbox{[\ion{N}{ii}]}/\mbox{H$\alpha$}\ \\
& \multicolumn{3}{c|}{Raw measurements} & \multicolumn{3}{c}{Extinction-corrected (not recommended)}\\
\hline
\mbox{H$\alpha$}\ & $ 439.2\pm 8.0$& $ 393.6\pm 6.1$& $ 390.6\pm 5.7$& $ 287.0\pm 5.2$& $ 287.0\pm 4.4$& $ 287.0\pm 4.2$\\
\text{H$\beta$}\ & $100$ & $100$ & $100$ & $100$ & $100$ & $100$\\
\mbox{[\ion{O}{iii}] $\lambda$5007}\ & $ 229.0\pm 4.5$& $ 200.9\pm 3.4$& $ 191.7\pm 3.0$& $ 217.0\pm 4.2$& $ 193.0\pm 3.2$& $ 184.3\pm 2.9$\\
\mbox{[\ion{N}{ii}] $\lambda$6583}\ & $ 687.7\pm 12.3$& $ 459.4\pm 7.0$& $ 312.1\pm 4.6$& $ 447.6\pm 8.0$& $ 334.0\pm 5.1$& $ 228.7\pm 3.4$\\
\mbox{[\ion{S}{II}] $\lambda \lambda$6716,6731}\ & $ 466.9\pm 8.5$& $ 394.8\pm 6.2$& $ 363.1\pm 5.4$& $ 296.0\pm 5.4$& $ 281.4\pm 4.4$& $ 261.0\pm 3.9$\\
\mbox{[\ion{O}{ii}] $\lambda$3727}\ & $ 583.6\pm 10.6$& $ 620.9\pm 9.5$& $ 686.0\pm 9.8$& $ 925.2\pm 16.8$& $ 874.2\pm 13.4$& $ 957.9\pm 13.7$\\
\mbox{[\ion{O}{III}] $\lambda$4363}\ & $ 10.9\pm 3.1$& $ 4.1\pm 3.1$& $ 2.9\pm 2.8$& $ 13.6\pm 3.8$& $ 4.8\pm 3.6$& $ 3.4\pm 3.2$\\
\mbox{[\ion{N}{ii}] $\lambda$5755}\ & $ 7.7\pm 5.2$& $ 9.3\pm 2.3$& $ 8.2\pm 1.7$& $ 5.9\pm 4.0$& $ 7.7\pm 1.9$& $ 6.7\pm 1.4$\\
\mbox{[\ion{S}{II}] $\lambda\lambda$4068,4076}\ & $ 24.3\pm 3.6$& $ 25.4\pm 2.4$& $ 19.7\pm 2.1$& $ 34.1\pm 5.0$& $ 32.7\pm 3.0$& $ 25.3\pm 2.7$\\
\mbox{[\ion{O}{II}] $\lambda\lambda$7320,7330}+\mbox{[\ion{Ca}{II}] $\lambda$7324}\ & $ 24.6\pm 2.8$& $ 23.2\pm 2.3$& $ 18.9\pm 2.1$& $ 13.9\pm 1.6$& $ 15.2\pm 1.5$& $ 12.5\pm 1.4$\\
\mbox{[\ion{O}{II}] $\lambda\lambda$7320,7330}\ & \multirow{2}{4em}{$24.6\pm 4.0$}& \multirow{2}{4em}{$20.6\pm 2.5$}& \multirow{2}{4em}{$ 14.0\pm 4.5$}& \multirow{2}{4em}{$13.9\pm 2.3$}& \multirow{2}{4em}{$ 13.5\pm 1.7$}&\multirow{2}{4em}{ $ 9.3\pm 3.0$}\\
(corrected for \mbox{[\ion{Ca}{II}] $\lambda$7324})& & & & & &\\\hline
\mbox{[\ion{S}{ii}] $\lambda$6731}/\mbox{[\ion{S}{ii}] $\lambda$6716}\ & $0.778\pm0.007$ & $0.741\pm0.007$ & $0.719\pm0.009$ & $0.776\pm0.007$ & $0.740\pm0.007$ & $0.717\pm0.009$ \\
\text{H$\beta$}\ Flux & $ 0.276\pm0.005$& $ 0.340\pm0.005$& $ 0.372\pm0.005$& $ 0.382\pm0.007$& $ 0.433\pm0.006$& $ 0.471\pm0.006$\\
\hline\hline
\end{tabular}
\caption{{ Measurements of strong and weak lines in the continuum-subtracted stacked spectra. The measurements are given relative to the flux of the \text{H$\beta$}\ line, which is given in the bottom row. The left three columns show the raw ratios. The right three columns show the ratios after correcting for intrinsic dust extinction according to the \mbox{H$\alpha$}/\text{H$\beta$}\ ratios. As discussed in the text, this extinction correction may not be reliable as it could lead to unphysical ratios between other lines.} The unit of the \text{H$\beta$}\ line fluxes is Angstrom multiplied by the median flux density between 6000\AA\ and 6100\AA, since each spectrum is normalized by the median flux in the window 6000-6100\AA\ before being stacked.}
\label{tab:lines}
\end{table*}
\section{Densities and Temperatures of the Low-ionization Gas}\label{sec:measurements}
\subsection{Balmer Decrements and Extinction Correction}
The average level of dust extinction in each subsample can be measured by comparing the observed Balmer decrements with the theoretical values expected under Case B recombination. Multiple Balmer lines, from \mbox{H$\alpha$}\ to H$\zeta$, are detectable in the stacked, continuum-subtracted spectra. Therefore, we have multiple decrements we can use to evaluate extinction. Figure~\ref{fig:BalmerDecrements} shows how the ratios between various Balmer lines and \text{H$\beta$}\ compare with the Case B expectations in the low-density limit, { assuming $T=10,000$K}. Here we use the Gaussian-fitted line fluxes rather than the summed fluxes to make the measurements less affected by residual Balmer absorption. H$\epsilon$ ($\lambda$3970\AA) overlaps with the \mbox{[\ion{Ne}{iii}] $\lambda$3967}\ line, which is part of a doublet with the \mbox{[\ion{Ne}{iii}] $\lambda$3869}\ line with a fixed ratio. We measure the \mbox{[\ion{Ne}{iii}] $\lambda$3869}\ line and subtract 31\% of its flux from the Gaussian fit at the wavelength of H$\epsilon$.
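{ For reference, with an attenuation curve $k(\lambda)$ such that $A_\lambda = k(\lambda)\,E(B-V)$, and adopting the low-density Case B value $(\mbox{H$\alpha$}/\text{H$\beta$})_0 = 2.86$ at $T=10,000$K, the reddening implied by the observed decrement is
\begin{equation}
E(B-V) = \frac{2.5}{k({\rm H}\beta) - k({\rm H}\alpha)}\, \log_{10}\left[\frac{(\mbox{H$\alpha$}/\text{H$\beta$})_{\rm obs}}{2.86}\right],
\end{equation}
with $A_{\rm V} = R_{\rm V}\,E(B-V)$. This is the standard relation behind the extinction levels quoted in this section. }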
\begin{figure}
\begin{center}
\includegraphics[width=0.5\textwidth]{BalmerDecrements.png}
\caption{We show the ratio between the observed Balmer-line to \text{H$\beta$}\ ratio and the theoretical case B Balmer-line to \text{H$\beta$}\ ratio. The error bars indicate $\pm1\sigma$ uncertainties. The three panels are for the three \mbox{[\ion{N}{ii}]}/\mbox{H$\alpha$}\ subsamples. If there were no dust extinction, the points would lie on the horizontal dashed line. The curves indicate the expected trend under the level of dust extinction constrained from the \mbox{H$\alpha$}/\text{H$\beta$}\ ratio. }
\label{fig:BalmerDecrements}
\end{center}
\end{figure}
The curves in Fig.~\ref{fig:BalmerDecrements} show the expected trend under a level of dust extinction constrained by the \mbox{H$\alpha$}/\text{H$\beta$}\ ratio. The higher order Balmer lines tend to show a smaller level of extinction or zero extinction, but are broadly consistent within 1-2$\sigma$ with the curve determined through the \mbox{H$\alpha$}/\text{H$\beta$}\ ratio. Given the high S/N on the \mbox{H$\alpha$}\ and \text{H$\beta$}\ lines, one would think the \mbox{H$\alpha$}/\text{H$\beta$}\ ratio would give a reliable extinction correction. However, the reality is more complex. As we will see below, the combination of the two temperature-sensitive line ratios, \mbox{[\ion{O}{II}] $\lambda\lambda$7320,7330}/\mbox{[\ion{O}{ii}] $\lambda$3727}\ and \mbox{[\ion{S}{II}] $\lambda\lambda$4068,4076}/\mbox{[\ion{S}{II}] $\lambda \lambda$6716,6731}, can also provide an extinction estimate (also see \citealt{Dopita82}). They yield much smaller extinction estimates than that given by the \mbox{H$\alpha$}/\text{H$\beta$}\ ratio, with upper limits in $A_{\rm V}$ of 0.48, 0.14, and 0.0 mag for the three subsamples. Thus, we decide not to correct the line ratios for extinction. We will come back and discuss this point below.
\subsection{ Density Derivation}
{ We compute the density of the gas using the \mbox{[\ion{S}{ii}] $\lambda$6731}/\mbox{[\ion{S}{ii}] $\lambda$6716}\ line ratio from the stacked spectra. This line ratio is a good density indicator and is insensitive to temperature. We compute the density using the {\it nebular.temden} routine \citep{ShawD95} in PyRAF, which is based on the 5-level atom program written by \cite{deRobertis87}. Assuming $T=10,000$K, we obtain the mean density values for the three subsamples, listed in Table~\ref{tab:tempraw}. The mean electron densities for the high-, mid-, and low-\mbox{[\ion{N}{ii}]}/\mbox{H$\alpha$}\ samples are $136\pm14$ cm$^{-3}$, $71\pm12$ cm$^{-3}$ and $34\pm15$ cm$^{-3}$, respectively.
The 1-$\sigma$ uncertainties in density are propagated from the uncertainty of the line ratio measurements. They represent the uncertainty of the mean density among all galaxies in each subsample, and indicate neither the uncertainty for an individual galaxy nor the range of densities among galaxies.
This line ratio is also measurable in individual galaxies with significant \mbox{[\ion{S}{ii}]}\ detection. If we limit ourselves to only those strong-line quiescent galaxies with \mbox{[\ion{S}{ii}]}\ detected above 10$\sigma$, we find the RMS scatter in the \mbox{[\ion{S}{ii}] $\lambda$6731}/\mbox{[\ion{S}{ii}] $\lambda$6716}\ ratio is 0.11 with a median ratio of 0.79. This scatter still includes significant measurement noise, as the median \mbox{[\ion{S}{ii}]}\ EW for these galaxies is only 4.2\AA. But we can see that the density of the \mbox{[\ion{S}{ii}]}-emitting gas in these galaxies is always in the low density regime.
The fact that the line-emitting gas in these quiescent galaxies { lies} in the low density regime { with respect to the optical \mbox{[\ion{N}{ii}]}\ and \text{[\ion{O}{iii}]}\ lines} means that their { strengths} are not affected by collisional deexcitation. Since we expect O$^{++}$ to lie at a higher temperature than S$^+$, we expect it to be emitted from an even lower density plasma than that of O$^+$ and S$^+$. Thus, it is unlikely for density stratification to affect the use of \mbox{[\ion{O}{III}] $\lambda$4363}/\mbox{[\ion{O}{III}] $\lambda \lambda$4959,5007}\ as a temperature indicator, or the use of other coronal-to-strong line ratios in this paper. We are safe to use them to measure temperatures. }
{ As we will discuss below, the implication of this density measurement for shock models is that a much lower pre-shock density than what we infer here for \mbox{[\ion{S}{ii}]}\ is required to match the \mbox{[\ion{S}{ii}]}\ and \mbox{[\ion{O}{ii}]}\ data.}
\subsection{Temperature Derivation}
Different ions trace regions with different ionization in an ionized cloud. The \text{[\ion{O}{iii}]}\ lines trace the more highly ionized regions which have more O$^{++}$. The \mbox{[\ion{O}{ii}]}, \mbox{[\ion{N}{ii}]}, and \mbox{[\ion{S}{ii}]}\ lines trace the partially ionized regions, as O$^0$, N$^0$, and S$^0$ have an ionization potential similar to H$^0$. In the following discussion, we should keep in mind the temperature measured from \text{[\ion{O}{iii}]}\ could be different from the temperatures measured from \mbox{[\ion{O}{ii}]}, \mbox{[\ion{N}{ii}]}, and \mbox{[\ion{S}{ii}]}, as they trace different emission line regions.
We derive the temperature of the gas using several line ratios. These are also computed using the {\it nebular.temden} routine mentioned above.
Among the four temperature indicators we use, \mbox{[\ion{O}{III}] $\lambda \lambda$4959,5007}/\mbox{[\ion{O}{III}] $\lambda$4363}\ and \mbox{[\ion{N}{ii}] $\lambda \lambda$6548,6583}/\mbox{[\ion{N}{ii}] $\lambda$5755}\ have very little secondary dependence on density, while \mbox{[\ion{S}{II}] $\lambda \lambda$6716,6731}/\mbox{[\ion{S}{II}] $\lambda\lambda$4068,4076}\ and \mbox{[\ion{O}{ii}] $\lambda$3727}/\mbox{[\ion{O}{II}] $\lambda\lambda$7320,7330}\ have slightly higher secondary dependence on density, where a factor of 10 difference in the assumed density would induce a 5-10\% change in the derived temperature. { We consistently assumed the density inferred from the \mbox{[\ion{S}{II}] $\lambda \lambda$6716,6731}\ doublet in the evaluation of all these temperature indicators. }
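{ As an illustration (not the code used here, which relies on the PyRAF {\it nebular.temden} routine), the analogous computation in the PyNeb Python package would look like the sketch below; we assume PyNeb's {\tt Atom.getTemDen} interface and take the raw mid-\mbox{[\ion{N}{ii}]}/\mbox{H$\alpha$}\ ratios from Table~\ref{tab:lines}: }
\begin{verbatim}
import pyneb as pn  # assumed interface; illustration only

S2 = pn.Atom('S', 2)
N2 = pn.Atom('N', 2)

# Density of the S+ zone from [S II] 6731/6716 at T = 10,000 K
ne = S2.getTemDen(0.741, tem=1.0e4, wave1=6731, wave2=6716)

# Temperature of the N+ zone from [N II] 5755/6583,
# adopting the [S II]-based density
te = N2.getTemDen(9.3 / 459.4, den=ne, wave1=5755, wave2=6584)
print(ne, te)
\end{verbatim}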
We list the temperatures derived and the 1-$\sigma$ uncertainties in Table~\ref{tab:tempraw}.{ We list the values before and after extinction correction. However, we recommend using the set without extinction correction as there are reasons to believe the Balmer-decrements are yielding unreliable extinction measurements.}
\begin{table*}
\begin{tabular}{l|ccc|ccc}
\hline\hline
Indicators & Hi-\mbox{[\ion{N}{ii}]}/\mbox{H$\alpha$}\ & Mid-\mbox{[\ion{N}{ii}]}/\mbox{H$\alpha$}\ & Low-\mbox{[\ion{N}{ii}]}/\mbox{H$\alpha$}\ &Hi-\mbox{[\ion{N}{ii}]}/\mbox{H$\alpha$}\ & Mid-\mbox{[\ion{N}{ii}]}/\mbox{H$\alpha$}\ & Low-\mbox{[\ion{N}{ii}]}/\mbox{H$\alpha$}\ \\
(Unit: cm$^{-3}$ for $n$ and $10^4$K for $T$)& \multicolumn{3}{c|}{Before extinction-correction} & \multicolumn{3}{c}{Extinction-corrected (not recommended)}\\ \hline
$n_e$ from \mbox{[\ion{S}{ii}] $\lambda$6731}/\mbox{[\ion{S}{ii}] $\lambda$6716} & $136\pm14$ & $71\pm12$ & $34\pm15$ & $133\pm13$ & $70\pm12$ & $31\pm15$\\
$T_{\mbox{[\ion{O}{III}] $\lambda$4363}/\mbox{[\ion{O}{iii}] $\lambda$5007}}$ & $2.59^{+0.93}_{-0.42}$ & $<1.93$ & $<1.87$ & $3.29^{+1.70}_{-0.66}$ & $<2.17 $ & $ <2.07 $ \\
$T_{\mbox{[\ion{N}{ii}] $\lambda$5755}/\mbox{[\ion{N}{ii}] $\lambda$6583}}$ & $<1.05$ &$1.20^{+0.19}_{-0.12}$ & $1.37^{+0.20}_{-0.13}$ & $<1.12$ &$1.27^{+0.23}_{-0.13}$ & $1.46^{+0.24}_{-0.14}$\\
$T_{\mbox{[\ion{S}{II}] $\lambda\lambda$4068,4076}/\mbox{[\ion{S}{II}] $\lambda \lambda$6716,6731}}$& $0.72^{+0.07}_{-0.05}$& $0.84^{+0.06}_{-0.05}$ &$0.78^{+0.05}_{-0.05}$ & $1.29^{+0.27}_{-0.16}$& $1.35^{+0.17}_{-0.12}$ &$1.16^{+0.14}_{-0.10}$\\
$T_{\mbox{[\ion{O}{II}] $\lambda\lambda$7320,7330}/\mbox{[\ion{O}{ii}] $\lambda$3727}}$&$1.71^{+0.3}_{-0.23}$ & $1.48_{-0.14}^{+0.20}$ & $1.10^{+0.35}_{-0.16}$ &$0.87^{+0.09}_{-0.06}$ & $0.92_{-0.06}^{+0.07}$ & $0.77^{+0.15}_{-0.08}$\\
\hline\hline
\end{tabular}
\caption{Electron density and temperature measurements derived using line ratios before (left three columns) and after (right three columns) correcting for intrinsic dust extinction. We do not recommend using the second set of values as the extinction derived from the Balmer decrement could be unreliable. We discuss this in detail in Sections \ref{sec:measurements} and \ref{sec:constraints}.}
\label{tab:tempraw}
\end{table*}
When interpreting the line ratios, we should keep in mind { that each line ratio in the stacked spectra is a weighted average. }
Spectral stacking is done after normalizing each spectrum at 6000-6100\AA. Therefore, the line ratio in the stacked spectra is a weighted average with the weight proportional to the ratio between the denominator line and the 6000-6100\AA\ continuum. This can be shown by the derivation below, where we use $F_{\rm A}$ and $F_{\rm B}$ to denote the flux of two emission lines in the stacked spectra. The fluxes in individual spectra are denoted as $F_{i{\rm A}}$ and $F_{i{\rm B}}$. The continuum level between 6000-6100\AA\ is denoted as $C_i$.
\begin{align}
({F_{\rm A}\over F_{\rm B}})_{\rm stack} &= {\sum_i {F_{i{\rm A}}\over C_i} \over \sum_i {F_{i{\rm B}}\over C_i}} \\
&= {\sum_i {F_{i{\rm A}} \over F_{i{\rm B}}} {F_{i{\rm B}} \over C_i} \over \sum_i {F_{i{\rm B}}\over C_i}}
\end{align}
For example, the \mbox{H$\alpha$}/\text{H$\beta$}\ ratio in the stack is effectively a weighted average of the Balmer decrement, with the weight being the ratio between \text{H$\beta$}\ and the 6000-6100\AA\ continuum. Galaxies with stronger lines will have a higher weight and galaxies with less extinction will have a higher weight.
This means that the various line ratios use slightly different weighting schemes to compute the weighted average. This could potentially introduce small inconsistencies between line ratios when we compare them to models. However, the galaxies in our sample have fairly homogeneous continuum shapes: the RMS variation in the ratios between the continuum around \mbox{[\ion{O}{ii}]}\ and that around \mbox{H$\alpha$}\ is only 10.6\%. The emission line EWs also have very small dynamic ranges: the RMS scatter in the logarithm of the EWs is only 0.2 dex. Thus, any effect this effective weighting has on the final line ratio is expected to be small and does not affect our main conclusions.
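The weighted-average identity above is straightforward to verify numerically; a minimal sketch with synthetic fluxes (purely illustrative):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
FA = rng.uniform(0.5, 2.0, 1000)  # line A fluxes per galaxy
FB = rng.uniform(0.5, 2.0, 1000)  # line B fluxes per galaxy
C = rng.uniform(0.8, 1.2, 1000)   # 6000-6100 A continuum levels

stack_ratio = np.sum(FA / C) / np.sum(FB / C)
w = (FB / C) / np.sum(FB / C)     # weights proportional to F_B/C
assert np.isclose(stack_ratio, np.sum(w * FA / FB))
\end{verbatim}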
\begin{enumerate}
\item Concerning the \mbox{[\ion{O}{III}] $\lambda \lambda$4959,5007}/\mbox{[\ion{O}{III}] $\lambda$4363}\ ratio. We only detect significant \mbox{[\ion{O}{III}] $\lambda$4363}\ emission in the high-\mbox{[\ion{N}{ii}]}/\mbox{H$\alpha$}\ subsample, at a 3.5$\sigma$ significance. This yields a temperature of $2.59^{+0.93}_{-0.42}\times10^4$K without extinction correction.
In the mid-\mbox{[\ion{N}{ii}]}/\mbox{H$\alpha$}\ and low-\mbox{[\ion{N}{ii}]}/\mbox{H$\alpha$}\ subsamples, the \mbox{[\ion{O}{III}] $\lambda$4363}\ lines are undetected (below 2$\sigma$), indicating the temperatures are below 1.93$\times10^4$K and 1.87$\times10^4$K (2$\sigma$ upper limits), respectively. The overall trend seems to be that the temperature in the O$^{++}$ zone is getting lower as metallicity decreases, which is inconsistent with our general expectation of higher temperature in lower metallicity gas.
Our measurement of \mbox{[\ion{O}{III}] $\lambda$4363}\ could potentially be contaminated by the \mbox{[\ion{Fe}{II}] $\lambda$4359}\ line, which is very close in wavelength. The signal-to-noise ratio of the \mbox{[\ion{O}{III}] $\lambda$4363}\ line does not allow a reliable decomposition. In principle, we could estimate the strength of the line based on the strength of \mbox{[\ion{Fe}{II}] $\lambda$4287}, which originates from the same upper level as \mbox{[\ion{Fe}{II}] $\lambda$4359}. { However, as we do not see any visible bump at 4288\AA\ in the residual spectrum (see Fig.~\ref{fig:coronalzoomin}), we do not attempt to make such a correction. In order to evaluate the potential impact of the \mbox{[\ion{Fe}{II}] $\lambda$4359}\ line, we included it in the model calculation and compared it directly to the data.} The ratio of \mbox{[\ion{Fe}{II}] $\lambda$4359}/\mbox{[\ion{O}{III}] $\lambda$4363}\ is strongly dependent on temperature, and decreases with increasing temperature. Given our temperature measurements based on \mbox{[\ion{S}{ii}]}\ and \mbox{[\ion{N}{ii}]}, the contamination by \mbox{[\ion{Fe}{II}] $\lambda$4359}\ is expected to be less than 2\% of \mbox{[\ion{O}{III}] $\lambda$4363}\ in the photoionization model. The shock models and turbulent mixing models are expected to produce even higher temperatures, thus we expect \mbox{[\ion{Fe}{II}] $\lambda$4359}\ to have little impact on the \mbox{[\ion{O}{III}] $\lambda$4363}\ line measurements.
We also note that the marginally-detected \mbox{[\ion{O}{III}] $\lambda$4363}\ line in the high-\mbox{[\ion{N}{ii}]}/\mbox{H$\alpha$}\ subsample is about 30\% wider than the \mbox{[\ion{O}{III}] $\lambda \lambda$4959,5007}\ lines. This could be due either to noise or to the line having significant contributions from gas with a much higher velocity dispersion that is presumably much hotter. { This could weaken our temperature measurement inferred from \mbox{[\ion{O}{III}] $\lambda$4363}. We cannot rule out either that the wider \mbox{[\ion{O}{III}] $\lambda$4363}\ profile may originate from a much denser region around the nuclei of these galaxies. }
We warn the reader that our marginal detection of that line is not { fully} robust. It should be emphasized, however, that our final conclusions do not depend on it.
\item Concerning the \mbox{[\ion{N}{ii}] $\lambda \lambda$6548,6583}/\mbox{[\ion{N}{ii}] $\lambda$5755}\ ratio. We detect the \mbox{[\ion{N}{ii}] $\lambda$5755}\ emission line significantly in the mid-\mbox{[\ion{N}{ii}]}/\mbox{H$\alpha$}\ and low-\mbox{[\ion{N}{ii}]}/\mbox{H$\alpha$}\ subsamples, at 4.0$\sigma$ and $4.9\sigma$ respectively, indicating temperatures in the range between $1.2\times10^4$K and $1.4\times10^4$K. We do not detect it significantly in the high-\mbox{[\ion{N}{ii}]}/\mbox{H$\alpha$}\ subsample, indicating a $2\sigma$ upper limit in temperature of $1.05\times10^4$K. The overall trend is that the temperature in the N$^+$ zone gets lower as metallicity increases, consistent with our general expectations. This is the opposite trend compared to that in the O$^{++}$ zone.
Both the \mbox{[\ion{N}{ii}] $\lambda \lambda$6548,6583}/\mbox{[\ion{N}{ii}] $\lambda$5755}\ and \mbox{[\ion{O}{III}] $\lambda \lambda$4959,5007}/\mbox{[\ion{O}{III}] $\lambda$4363}\ ratios are insensitive to extinction, as the lines involved are not far apart from each other in wavelength. Extinction corrections would not significantly change the temperatures inferred from our \mbox{[\ion{N}{ii}]}\ and \text{[\ion{O}{iii}]}\ estimators (Table~\ref{tab:tempraw}).
\item Concerning the \mbox{[\ion{S}{II}] $\lambda \lambda$6716,6731}/\mbox{[\ion{S}{II}] $\lambda\lambda$4068,4076}\ and\\ \mbox{[\ion{O}{ii}] $\lambda$3727}/\mbox{[\ion{O}{II}] $\lambda\lambda$7320,7330}\ ratios. First, the \mbox{[\ion{O}{II}] $\lambda\lambda$7320,7330}\ feature is actually a quadruplet. Second, it could be contaminated by the \mbox{[\ion{Ca}{II}] $\lambda$7324}\ line. A correction can be performed based on the strength of the \mbox{[\ion{Ca}{II}] $\lambda$7291}\ line, which is seen in the low-\mbox{[\ion{N}{ii}]}/\mbox{H$\alpha$}\ spectrum of the bottom-left panel of Figure~\ref{fig:coronalzoomin}. In both the photoionization and the shock models, the \mbox{[\ion{Ca}{II}] $\lambda$7324}\ line is always about 68-70\% of the \mbox{[\ion{Ca}{II}] $\lambda$7291}\ line. Hence, we have measured the \mbox{[\ion{Ca}{II}] $\lambda$7291}\ line and subtracted 69\% of it from our \mbox{[\ion{O}{II}] $\lambda\lambda$7320,7330}\ measurements. The corrected value constitutes our adopted \mbox{[\ion{O}{II}] $\lambda\lambda$7320,7330}\ measurements (see Table~\ref{tab:lines}).
Interestingly, we detect the \mbox{[\ion{O}{ii}]}\ and \mbox{[\ion{S}{ii}]}\ coronal lines at a significant level. The temperature inferred for the S$^+$ zone without extinction correction is around 8000K. The temperature inferred for the O$^+$ zone without extinction correction is around 11,000-17,000K.
Note that these line ratios are very sensitive to the extinction correction. If we applied the extinction correction derived from Balmer decrements, it would reverse the temperature relationship between the two zones, making the S$^+$ zone { apparently} hotter than the O$^+$ zone. { This would be unphysical as it would contradict photoionization calculations, in which the S$^+$ zone is either at a similar temperature as the O$^+$ zone, or cooler because it extends deeper into the photoionized slab, in the transition zone towards neutral gas where the heating rate has dropped much lower. The ionization of Sulphur is maintained by photons of energy lower than those absorbed by H$^0$. An even stronger argument could be made with shock models.}
\end{enumerate}
\section{Constraints on the Ionization Mechanisms}\label{sec:constraints}
In this section, we compare our measurements of temperature sensitive line ratios with the predictions from theoretical calculations for three different mechanisms: photoionization by hot evolved stars, shocks, and turbulent mixing.
\begin{enumerate}
\item{Photoionization models: }
We model the line ratios produced by photoionization by hot evolved stars using CLOUDY { v17.00}, last described by \cite{Ferland17}. The input ionizing spectrum is a 13 Gyr old simple stellar population spectrum with solar metallicity, computed by \cite{BC03}. { We also include the cosmic radio to X-ray background { radiation} in the simulation, as well as the galactic background cosmic rays, using the default values provided by CLOUDY.} The ionization parameter ($\log U$) spans from -4.5 to -2. The metallicity spans $-1.2 < [{\rm O/H}] < 0.6$, where $[{\rm O/H}] = \log ({\rm O/H}) - \log ({\rm O/H})_\odot$. The solar $12+\log(O/H)$ is assumed to be 8.69 \citep{GrevesseASS10}. The abundance pattern is assumed to follow the solar abundance pattern except for Nitrogen. The N/O ratio is assumed to increase with the O/H ratio, as Nitrogen has a secondary nucleosynthesis contribution when O/H is high. We adopt the fitting formula given by \cite{VilaCostasE93}, as shown in the following equation.
\begin{equation}
\log (N/O) = \log (0.034 + 120 (O/H))
\end{equation}
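{ For reference, at the adopted solar abundance, $12+\log({\rm O/H}) = 8.69$ (i.e. ${\rm O/H} \simeq 4.9\times10^{-4}$), this prescription gives $\log({\rm N/O}) \simeq -1.03$. }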
{ For Carbon abundance, we follow \cite{Dopita13} and assume it is always 0.6 dex higher than Nitrogen. We use the default dust depletion factors in CLOUDY. Oxygen is depleted by 0.4 dex and Carbon is depleted by 0.6 dex, while Nitrogen is not depleted. }
{ The simulation assumes constant gas pressure, with an initial electron density of 100 cm$^{-3}$, and an open geometry. This assumption results in \mbox{[\ion{S}{II}] $\lambda \lambda$6716,6731}\ doublet line ratios consistent with what we observe. We have also verified that changing the density to 10 cm$^{-3}$ would hardly change the key line ratios we use in this paper. We include attenuation by dust and photoelectric heating by dust in the ionized zone, adopting the Milky Way ISM grain type and size distribution.}
{The model calculation for \mbox{[\ion{O}{II}] $\lambda\lambda$7320,7330}\ includes all 4 lines in the quadruplet: 7318\AA, 7319\AA, 7329\AA, 7330\AA. The computation of all forbidden lines also includes any potential recombination contribution and charge transfer contribution to the lines.}
\item{Shock Models: }
The shock models come from \cite{Allen08} who presented a series of fully radiative fast shock models computed with the MAPPINGS III code, spanning a range of shock velocity, density, and magnetic field parameter ($B/n^{1/2}$). { The model data are available as part of the software ``IDL Tool for Emission-line Ratio Analysis (ITERA)''\citep{GrovesA10,GrovesA13}\footnote{https://github.com/astrobrent/itera}. }
{ In shock models, the post-shock cooling zone can have a much higher density than the pre-shock density.
When a shock passes through the gas, the density of the gas is compressed by a factor of 4. It is then compressed even further when the post-shock gas cools. The S$^+$ zone only occurs when the gas has { sufficiently cooled and recombined,} and thus corresponds to a much higher density than the pre-shock gas. To fit the data, the models first need to satisfy the measured density, which is in the low density regime, with \mbox{[\ion{S}{ii}] $\lambda$6731}/\mbox{[\ion{S}{ii}] $\lambda$6716}\ having a mean ranging from 0.72 to 0.78 among the three subsamples. The RMS scatter of this ratio among individual galaxies is about 0.12 { if we consider only the \mbox{[\ion{S}{ii}]}\ fluxes that reach $10\sigma$ or higher. Hence, we will require all models to produce a \mbox{[\ion{S}{ii}] $\lambda$6731}/\mbox{[\ion{S}{ii}] $\lambda$6716}\ ratio less than 0.9. Among the shock models available, all those with a pre-shock density of either n=0.01 or 0.1 cm$^{-3}$ turn out to comply with our \mbox{[\ion{S}{ii}] $\lambda$6731}/\mbox{[\ion{S}{ii}] $\lambda$6716}\ ratio limit (extending from the theoretical minimum value of 0.68 up to 0.9). All shock models with a pre-shock density of n=1 cm$^{-3}$ also match our constraints as long as they have $B/n^{1/2} \ge 0.5 {\rm \mu G~cm}^{3/2}$. Among the n=10 cm$^{-3}$ models, only those with a magnetic field parameter $B/n^{1/2}$ larger than 10 ${\rm \mu G~cm}^{3/2}$ satisfy our density limit.} A strong magnetic field would reduce the density compression factor because the gas is partially supported by the magnetic pressure \citep{DopitaS95}. In the following sections, we { will evaluate} two sets of shock models, those with n=1 cm$^{-3}$ and those with n=10 cm$^{-3}$. { It turns out that} shock models with { lower densities of} n=0.01 cm$^{-3}$ and n=0.1 cm$^{-3}$ would end up populating nearly the same area { in many of our diagnostic plots} as the n=1 cm$^{-3}$ models.
}
\item{Conductive heating and Turbulent mixing models: }
The turbulent mixing models come from \cite{Slavin93}, with predictions only available for a few lines. { So we cannot do full justice to this category of models and one should consider our qualitative evaluation as tentative}. There are 6 models in total, for three different average temperatures and two different transverse velocities between the hot and the cold gas components. The lowest temperature model ($T=10^5$K) yields too low an \text{[\ion{O}{iii}]}/\mbox{[\ion{O}{ii}]}\ ratio to produce the kind of line ratios we see. Thus, we only consider the models with higher temperatures ($T=10^{5.3}$K and $10^{5.5}$K).
\end{enumerate}
We first { compare the data with each of the three ionization models using} the strong-line \text{[\ion{O}{iii}]}/\mbox{[\ion{O}{ii}]}\ vs. \mbox{[\ion{N}{ii}]}/\mbox{[\ion{O}{ii}]}\ diagram of Figure~\ref{fig:n2o2_o3o2}. In the case of the photoionization model, the data are best { reproduced} by models with $\log U\sim-3.5$ and [O/H] within $\pm0.3$ dex of the solar value. The grid shown assumes a { front slab} density of n=100~cm$^{-3}$. We found that changing the density to n=10~cm$^{-3}$ hardly changes the grid at all. { These models roughly fit the observed \mbox{[\ion{O}{i}]}/\mbox{H$\alpha$}, \mbox{[\ion{S}{ii}]}/\mbox{H$\alpha$}, \mbox{[\ion{N}{ii}]}/\mbox{H$\alpha$}, and \text{[\ion{O}{iii}]}/\text{H$\beta$}\ ratios. }
The turbulent mixing models { on the other hand} yield a rather low \mbox{[\ion{N}{ii}]}/\mbox{[\ion{O}{ii}]}\ ratio. In order to { reproduce the data with such a model, one would have to} significantly increase the N/O abundance ratio.
{ As for the shock models}, as mentioned earlier, { they are required to} have a sufficiently low pre-shock density or, { alternatively, a sufficiently} strong magnetic field in order to match the observed \mbox{[\ion{S}{ii}]}\ doublet ratio. We here plot { models with} $n=10$ cm$^{-3}$, a solar metallicity and a strong magnetic field. { These succeed in matching} the \text{[\ion{O}{iii}]}/\mbox{[\ion{O}{ii}]}\ ratio but result in too low a \mbox{[\ion{N}{ii}]}/\mbox{[\ion{O}{ii}]}\ ratio. { Other shock models of equal solar metallicity but} with lower pre-shock density all fall in almost the same region in this diagram, which for clarity's sake we do not show.
To match the observed \mbox{[\ion{N}{ii}]}/\mbox{[\ion{O}{ii}]}\ ratios, one might invoke dust extinction or increase the { Nitrogen abundance}.
\begin{figure}
\begin{center}
\includegraphics[width=0.48\textwidth]{Tgrids_n2o2_o3o2.png}
\caption{ Comparison between the data and ionization models for the \mbox{[\ion{O}{iii}] $\lambda$5007}/\mbox{[\ion{O}{ii}] $\lambda$3727}\ vs. \mbox{[\ion{N}{ii}] $\lambda$6583}/\mbox{[\ion{O}{ii}] $\lambda$3727}\ line ratios. The blue grid at the bottom corresponds to photoionization models by hot evolved stars. The solid blue lines connect models with constant [O/H] and the dashed blue lines connect models with constant ionization parameter ($\log U$). The red grid in the middle corresponds to shock models with a density of $n=10$ cm$^{-3}$, in which solid red lines connect models with constant magnetic field, ranging from 10 to 100 ${\rm \mu G~cm}^{3/2}$, and the dashed red lines connect models with constant shock velocity, which ranges from 150 to 500 km/s.
The magenta crosses indicate the turbulent mixing layer models.
The three data points (big symbols) correspond to the three subsamples, going from the low-\mbox{[\ion{N}{ii}]}/\mbox{H$\alpha$}\ (left) to the high-\mbox{[\ion{N}{ii}]}/\mbox{H$\alpha$}\ (right). The error bars are not shown as they are too small to be visible in this plot. The data are consistent with both photoionization and shock models. { The arrow at the bottom left shows how one magnitude of extinction in $A_V$ might impact the data.} Any extinction correction would therefore move the data points towards the lower left.}
\label{fig:n2o2_o3o2}
\end{center}
\end{figure}
In Figure~\ref{fig:n2o2_o3o2_metal}, we show three sets of shock models with the same pre-shock density ($n=1$ cm$^{-3}$) but different sets of elemental abundances: Large Magellanic Cloud (LMC) abundances, solar abundances, and twice the solar abundances.
{ We can clearly see the offsets in \mbox{[\ion{N}{ii}]}/\mbox{[\ion{O}{ii}]}\ between models}. Thus increasing the N/O abundance ratio does indeed bring the shock models { into agreement with} the data in this strong-line diagram. { Since \cite{Allen08} covered different metallicities only in the case of preshock n=1 cm$^{-3}$ models, we cannot strictly confirm how models with different pre-shock densities would behave, although we expect the general trend to remain the same.}
\begin{figure}
\begin{center}
\includegraphics[width=0.48\textwidth]{Tgrids_n1_n2o2_o3o2.png}
\caption{ { This plot is similar to Figure~\ref{fig:n2o2_o3o2}, except that the three sets of shock models (with $n=1$ cm$^{-3}$) possess different metallicities: the right-most grid (dark red) corresponds to models with twice the solar metallicity, the middle grid (red) to solar metallicity, and the left-most grid (light red) to LMC metallicity.} In each grid, solid lines connect models with constant magnetic field, which range from 0.5 to 10 ${\rm \mu G~cm}^{3/2}$, with the weakest magnetic parameter yielding the largest \mbox{[\ion{N}{ii}]}/\mbox{[\ion{O}{ii}]}\ ratio, while dashed lines connect models with constant shock velocity, which range from 150 to 500 km/s, with the slowest shock yielding the lowest \mbox{[\ion{N}{ii}]}/\mbox{[\ion{O}{ii}]}\ ratio.}
\label{fig:n2o2_o3o2_metal}
\end{center}
\end{figure}
The strong line diagrams are not very useful for distinguishing between photoionization and shock models. Thus, we now focus on the temperature-sensitive line ratios, of which we have four at our disposal. \mbox{[\ion{O}{II}] $\lambda\lambda$7320,7330}\ and \mbox{[\ion{S}{II}] $\lambda\lambda$4068,4076}\ are detected { at a significant level} in all subsamples. We also have \mbox{[\ion{N}{ii}] $\lambda$5755}\ and \mbox{[\ion{O}{III}] $\lambda$4363}, but these lines are only detected in some subsamples. { Let us first analyze the \mbox{[\ion{S}{II}] $\lambda\lambda$4068,4076}/\mbox{[\ion{S}{II}] $\lambda \lambda$6716,6731}\ vs. \text{[\ion{O}{iii}]}/\mbox{[\ion{O}{ii}]}\ ratios as shown in Figure~\ref{fig:st2_o3o2} and compare
the raw data (the three data points with error bars) with both photoionization and shock models. In the case of photoionization (blue grid), the calculated \mbox{[\ion{S}{II}] $\lambda\lambda$4068,4076}/\mbox{[\ion{S}{II}] $\lambda \lambda$6716,6731}\ ratio favours subsolar metallicities. This contrasts sharply with} what we saw in the strong line plot (Figure~\ref{fig:n2o2_o3o2}). It is also surprising given our expectation of high metallicities for these massive galaxies. The n=10 cm$^{-3}$ shock models are able to match the observed ratio although shocks with lower preshock densities cannot. We note that models with a preshock density of 0.01, 0.1, and 1 cm$^{-3}$ populate approximately the same area in this plot. { For these models, decreasing the metallicity to that of the LMC can raise the upper limit of the grid by 0.1 dex, which would bring it closer to some of the observed \mbox{[\ion{S}{II}] $\lambda\lambda$4068,4076}/\mbox{[\ion{S}{II}] $\lambda \lambda$6716,6731}\ ratios. If we compare the models with the reddening-corrected data (the three data points without error bars), none of the models come close.}
\begin{figure}
\begin{center}
\includegraphics[width=0.48\textwidth]{Tgrids_st2_o3o2.png}
\caption{Comparison between the data and models for the \mbox{[\ion{S}{II}] $\lambda\lambda$4068,4076}/\mbox{[\ion{S}{II}] $\lambda \lambda$6716,6731}\ vs. \mbox{[\ion{O}{III}] $\lambda \lambda$4959,5007}/\mbox{[\ion{O}{ii}] $\lambda$3727}\ line ratios. The blue grid corresponds to photoionization by hot evolved stars with the same convention as used in Fig.~\ref{fig:n2o2_o3o2}, while the red grid is for shock models with n=10 cm$^{-3}$ and the magenta grid for shock models with n=1 cm$^{-3}$. The solid lines { connect models of the same} magnetic field parameter, with $B/n^{1/2} = 10$, $30$, and $100~{\rm \mu G~cm}^{3/2}$ for the n=10 cm$^{-3}$ models and $0.5$, $1$, $3.2$, and $10~{\rm \mu G~cm}^{3/2}$ for the n=1 cm$^{-3}$ models, respectively. The dashed lines represent shock velocities of $V = 150$, $200$, $300$, $400$, $500$, and $1000$~km/s. { The three large symbols with error bars represent the raw data while those without correspond to extinction-corrected ratios. The black arrow indicates the effect of 1 magnitude of extinction in $A_V$.}}
\label{fig:st2_o3o2}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=0.48\textwidth]{Tgrids_ot2_st2.png}
\caption{ Comparison between the data and models for the \mbox{[\ion{S}{II}] $\lambda\lambda$4068,4076}/\mbox{[\ion{S}{II}] $\lambda \lambda$6716,6731}\ vs. \mbox{[\ion{O}{II}] $\lambda\lambda$7320,7330}/\mbox{[\ion{O}{ii}] $\lambda$3727}\ line ratios. The blue grid corresponds to photoionization by hot evolved stars with the same convention as used in Fig.~\ref{fig:n2o2_o3o2}. while the red and magenta grids correspond to the same shock models as shown in Figure~\ref{fig:st2_o3o2}. The three large symbols with error bars represent the raw data while those without correspond to extinction-corrected ratios. The black arrow indicates the effect of 1 magnitude extinction in $A_V$. The marked discrepancy between models and the dereddened ratios using the Balmer decrement suggests that the adopted extinction correction may be problematic. See text for discussion.}
\label{fig:ot2_st2}
\end{center}
\end{figure}
{ In Figure~\ref{fig:ot2_st2}, shock and photoionization models are compared with the data in a plot of the \mbox{[\ion{S}{II}] $\lambda\lambda$4068,4076}/\mbox{[\ion{S}{II}] $\lambda \lambda$6716,6731}\ vs. \mbox{[\ion{O}{II}] $\lambda\lambda$7320,7330}/\mbox{[\ion{O}{ii}] $\lambda$3727}\ line ratios. Interestingly, all models appear limited to a diagonal boundary line traced by the position of the upper leftmost photoionization models. In the case of shock models, these turn back after reaching this diagonal boundary.
This simply results from the position of the S$^+$ zone, which, as compared to O$^+$, occurs deeper into the ionized cloud layer, in a region where the gas gets colder and is partially ionized, no matter whether it is photoionized or shocked. Thus, regardless of the ionization mechanism, we expect in shock or photoionization models that the O$^+$ zone should either be co-spatial with the S$^+$ zone, or be slightly offset from it towards the warmer and more highly ionized zones. Therefore, the O$^+$ zone should either have a similar or a slightly higher temperature than the S$^+$ zone. Based on this argument, physical models are expected to populate only the diagonal domain, where the temperatures are in fact equal, or the region to the lower right, where the O$^+$ zone is hotter than S$^+$. }
In Figure~\ref{fig:ot2_st2}, we can see that the raw line ratios before extinction correction (points with error bars) appear to be consistent with both the solar metallicity, n=10 cm$^{-3}$ shock models, and the subsolar photoionization models. The high-\mbox{[\ion{N}{ii}]}/\mbox{H$\alpha$}\ subsample point can alternatively be explained by a subsolar photoionization model assuming slight extinction. However, once the extinction correction is applied, the ratios are moved too far up into the upper left of the diagram to be explained by either type of model, as it would be unphysical to expect the S$^+$ zone to be hotter than the O$^+$ zone. We therefore strongly suspect that the extinction corrections derived from Balmer decrements significantly overestimate the amount of dust reddening.
This temperature relationship between O$^+$ and S$^+$, and the opposite extinction-dependence of \mbox{[\ion{O}{II}] $\lambda\lambda$7320,7330}/\mbox{[\ion{O}{ii}] $\lambda$3727}\ and \mbox{[\ion{S}{II}] $\lambda\lambda$4068,4076}/\mbox{[\ion{S}{II}] $\lambda \lambda$6716,6731}, make the combination of these two line ratios a useful extinction estimator. This was first pointed out by \cite{Dopita82}. Basically, in most photoionization and shock models, the two ratios should fall roughly along the diagonal zone from lower left to upper right in Figure~\ref{fig:ot2_st2}. Departures from this could be attributed to extinction. If we assume all points should fall along the diagonal line in the extinction-free case, we find the extinction in the high-, mid-, and low-\mbox{[\ion{N}{ii}]}/\mbox{H$\alpha$}\ subsamples would be 0.48, 0.14, and 0.0 mag in $A_{\rm V}$, respectively. Though, as shown here, very low and very high shock velocities could also offset the ratios from the diagonal line. Thus, these values should be considered upper limits on dust extinction.
On the other hand, what could make the extinction estimates from Balmer decrements so unreliable? { It is worth noticing that in the case of the high- and low-\mbox{[\ion{N}{ii}]}/\mbox{H$\alpha$}\ subsamples (see Figure~\ref{fig:BalmerDecrements}), smaller amounts of extinction are in better agreement with the decrements found with the higher order Balmer lines. Hence some reddening might be present, coupled with a significant enhancement of \mbox{H$\alpha$}\ through collisional excitation. Turbulent mixing layers may play a part in overheating the partly neutral zone, increasing the Balmer decrement to as high as 5, as shown by \cite{Binette99}.
Or perhaps the extinction is non-uniform and the Hydrogen lines would tend to originate preferentially from more dust-absorbed regions than the low ionization lines of O and S. Another possibility is that there may remain stellar absorption in the residual spectrum, which would make us underestimate the \text{H$\beta$}\ strength. We do not observe, however, any trace of absorption side wings around \text{H$\beta$}\ and so do not think this is likely either.
This conspicuous problem with extinction is extremely interesting and needs to be investigated further in the future, as we do not have a satisfying answer for the current paper.}
If we focus on shock models in Figure~\ref{fig:ot2_st2}, we find that those with n=1 cm$^{-3}$ and solar metallicity can be ruled out using these two temperature-sensitive line ratios. Even at the largest shock velocity (1000 km/s), the solar metallicity models could not produce as high a temperature in the S$^+$ zone as observed. One needs to invoke lower metallicity models to increase the temperature. However, with lower metallicity, the strong line ratio \mbox{[\ion{N}{ii}]}/\mbox{[\ion{O}{ii}]}\ moves further away from the observed values (see Figure~\ref{fig:n2o2_o3o2_metal}). Therefore, n=1 cm$^{-3}$ models cannot match both \mbox{[\ion{S}{II}] $\lambda\lambda$4068,4076}/\mbox{[\ion{S}{II}] $\lambda \lambda$6716,6731}\ and \mbox{[\ion{N}{ii}]}/\mbox{[\ion{O}{ii}]}\ simultaneously, and are ruled out.
The photoionization models similarly cannot match both the strong line ratios and the temperature-sensitive line ratios simultaneously using the same metallicity. Strong line ratios favour super solar metallicity while the O$^+$ and S$^+$ coronal-to-strong line ratios favour subsolar metallicity. The temperatures indicated by these line ratios might even be too high to be adequately fit using such photoionization models. So far, the only models that survive these data tests are the shock models with n=10 cm$^{-3}$ and strong magnetic parameters. We will see below that these face challenges from the other temperature-sensitive line ratios.
\begin{figure}
\begin{center}
\includegraphics[width=0.45\textwidth]{Tgrids_o3t_n2t.png}
\caption{Comparison of the data with both types of ionization models in a diagram of \mbox{[\ion{O}{III}] $\lambda$4363}/\mbox{[\ion{O}{III}] $\lambda \lambda$4959,5007}\ vs. \mbox{[\ion{N}{ii}] $\lambda$5755}/\mbox{[\ion{N}{ii}] $\lambda \lambda$6548,6583}. See the caption of Figure~\ref{fig:st2_o3o2} for the definition of the grid lines. The large symbols represent our three subsamples. The arrows indicate $2\sigma$ limits when the line is undetected; otherwise, the measurement is shown using $\pm1\sigma$ error bars. Note that for the photoionization model shown, we have added the \mbox{[\ion{Fe}{II}] $\lambda$4359}\ flux to the \mbox{[\ion{O}{III}] $\lambda$4363}\ line in order to take into account possible contamination by \mbox{[\ion{Fe}{ii}]}. Its contribution is significant only at the low temperature end (i.e. high metallicity and low ionization parameter).
}
\label{fig:o3t_n2t_shock10}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=0.45\textwidth]{Tgrids_n1metal_o3t_n2t.png}
\caption{Comparison of the data with the ionization models in a diagram of \mbox{[\ion{O}{III}] $\lambda$4363}/\mbox{[\ion{O}{III}] $\lambda \lambda$4959,5007}\ vs. \mbox{[\ion{N}{ii}] $\lambda$5755}/\mbox{[\ion{N}{ii}] $\lambda \lambda$6548,6583}. The three shock grids shown have a preshock density of n=1 cm$^{-3}$ for three different abundance patterns. The light red grid (top right) is for the LMC abundance, the red grid (top middle) is for the solar abundance, and the dark red (top left) is for twice the solar abundance. The large symbols represent the data of the three subsamples, { with a pointing arrow indicating a $2\sigma$ limit when the line is undetected, or a $\pm1\sigma$ error bar when detected.} Note that for the photoionization model predictions (in blue), we have added the \mbox{[\ion{Fe}{II}] $\lambda$4359}\ flux to the \mbox{[\ion{O}{III}] $\lambda$4363}\ line to take into account potential contamination by \mbox{[\ion{Fe}{ii}]}. Its contribution is only significant at the low temperature end (high metallicity and low ionization parameter).
}
\label{fig:o3t_n2t_n1_metal}
\end{center}
\end{figure}
Let us now compare the data with the models in a diagram of \mbox{[\ion{O}{III}] $\lambda$4363}/\mbox{[\ion{O}{III}] $\lambda \lambda$4959,5007}\ vs. \mbox{[\ion{N}{ii}] $\lambda$5755}/\mbox{[\ion{N}{ii}] $\lambda \lambda$6548,6583}\ line ratios, as shown in Figure~\ref{fig:o3t_n2t_shock10}. The \mbox{[\ion{O}{III}] $\lambda$4363}\ line is only detected above 3$\sigma$ in the high-\mbox{[\ion{N}{ii}]}/\mbox{H$\alpha$}\ subsample, so only for this bin is the measurement shown with $\pm1\sigma$ error bars. For the other two subsamples, we instead show the $2\sigma$ upper limits using a pointing arrow. Similarly, the \mbox{[\ion{N}{ii}] $\lambda$5755}\ line, which is undetected in the high-\mbox{[\ion{N}{ii}]}/\mbox{H$\alpha$}\ subsample, is here shown using a $2\sigma$ upper limit pointing arrow.
In this figure, both line ratios reveal some inadequacies in the models.
We see that the n=10 cm$^{-3}$ or the n=1 cm$^{-3}$ solar metallicity shock models, for instance, cannot match the \mbox{[\ion{N}{ii}] $\lambda$5755}/\mbox{[\ion{N}{ii}] $\lambda$6583}\ line ratios of the mid- and low-\mbox{[\ion{N}{ii}]}/\mbox{H$\alpha$}\ subsamples, although they provide a good match to the marginally-detected \mbox{[\ion{O}{III}] $\lambda$4363}\ line strength of the high-\mbox{[\ion{N}{ii}]}/\mbox{H$\alpha$}\ subsample.
If this marginal detection turned out to be real, the derived temperature would also be too high to be fit by photoionization models at any metallicity considered.
All shock models discussed above are for solar metallicity.
In Figure~\ref{fig:o3t_n2t_n1_metal}, we show how metallicity would affect the line ratios predicted by shock models. They suggest that the three \mbox{[\ion{N}{ii}]}/\mbox{H$\alpha$}\ subsamples probably have different metallicities, increasing from the low to the high \mbox{[\ion{N}{ii}]}/\mbox{H$\alpha$}\ bin.
The N$^+$ zone temperature (indicated by the X-axis) turns out to be very sensitive to metallicity, as expected. The models show that a metal-rich environment yields a lower temperature (i.e. a lower \mbox{[\ion{N}{ii}] $\lambda$5755}/\mbox{[\ion{N}{ii}] $\lambda$6583}\ ratio). The O$^{++}$ zone temperature does not appear to vary much between the three metallicities, but it is sensitive to magnetic field and shock velocity. { We find it promising that a metallicity gradient in shock models could account for the observed trend across the three subsamples. However, the low-\mbox{[\ion{N}{ii}]}/\mbox{H$\alpha$}\ data point shows a higher N$^+$ zone temperature than the lowest metallicity shock models (the LMC models for instance, or the SMC models which are not plotted). It would be difficult to imagine that quiescent massive galaxies could be of lower metallicity than the LMC. In addition, the indication in favour of a subsolar metallicity conflicts with that shown by the observed \mbox{[\ion{N}{ii}]}/\mbox{[\ion{O}{ii}]}\ ratios, which appear to favour a supersolar metallicity (Figure~\ref{fig:n2o2_o3o2_metal}). Therefore, even when taking into account the interesting possibility of metallicity variations across subsamples, there still remain discrepancies overall between the data and the shock models.}
\section{Conclusions}
Using the spectral stacking technique, we have obtained high signal-to-noise emission-line spectra of quiescent red sequence galaxies from the SDSS. After careful continuum subtraction using a control sample without emission lines, we have detected multiple temperature-sensitive coronal emission lines or provided meaningful upper limits.
{ From reliable measurements of} the \mbox{[\ion{S}{II}] $\lambda\lambda$4068,4076}, \mbox{[\ion{O}{II}] $\lambda\lambda$7320,7330}, and \mbox{[\ion{N}{ii}] $\lambda$5755}\ lines, we { inferred that the emission from these ions comes from regions} with a temperature of around $10^4$K. The measured \mbox{[\ion{S}{II}] $\lambda\lambda$4068,4076}/\mbox{[\ion{S}{II}] $\lambda \lambda$6716,6731}\ and \mbox{[\ion{O}{II}] $\lambda\lambda$7320,7330}/\mbox{[\ion{O}{ii}] $\lambda$3727}\ line ratios provide us with interesting constraints on the levels of extinction, which are much lower than the values estimated from Balmer decrements. If the extinction values estimated from Balmer decrements { turned out to be the correct ones}, they would lead to an unphysical temperature relationship between the S$^+$ and O$^+$ emission zones. The unresolved discrepancy between the two extinction estimates is one of the puzzles we uncovered in this paper.
The \mbox{[\ion{S}{ii}] $\lambda$6731}/\mbox{[\ion{S}{ii}] $\lambda$6716}\ ratio indicates { an electron density of} the line-emitting S$^+$ gas { that is smaller than or of the order of} 100 cm$^{-3}$. This result was used to set the physical conditions for our photoionization models { or to limit shock models to those characterized by} a very low preshock density $n\le 1$ cm$^{-3}$ and/or a strong magnetic field, since only such shock models can reproduce the observed \mbox{[\ion{S}{ii}]}\ doublet ratios.
We compared the temperature-sensitive line ratios with model predictions for three ionization mechanisms: photoionization by hot evolved stars, radiative shocks, and turbulent mixing layers. We found that neither the photoionization models nor the shock models can simultaneously explain all the line ratios we observe. Photoionization models with solar and supersolar metallicities can account for the strong line ratios. { On the other hand,} all of the temperature-sensitive line ratios indicate high temperatures, { which would imply} subsolar metallicities. The marginally-detected \mbox{[\ion{O}{III}] $\lambda$4363}\ line in the high-\mbox{[\ion{N}{ii}]}/\mbox{H$\alpha$}\ sample, if it is real, would indicate too high a temperature to be fit at all by photoionization models. Shock models, which have one more degree of freedom than photoionization models, similarly cannot explain all the line ratios. Among models that { reproduce the observed} low S$^+$ density, those with solar metallicity, n=10 cm$^{-3}$, and strong magnetic field can match the observed temperatures in O$^+$, S$^+$, and O$^{++}$ zones, but fail to match the high temperature of the N$^+$ zone. Lower pre-shock density ($n\le 1 {\rm cm}^{-3}$) shock models require significantly subsolar metallicities to match the temperature of the N$^+$ and S$^{+}$ zones, but would require supersolar metallicity in order to match the \mbox{[\ion{N}{ii}]}/\mbox{[\ion{O}{ii}]}\ ratios. The main discrepancy is between the observed high \mbox{[\ion{N}{ii}]}/\mbox{[\ion{O}{ii}]}\ ratios and the relatively hot temperatures we derived from coronal lines of singly ionized ions. Neither photoionization nor shock models could { reproduce both temperatures observed}. Given that both photoionization and shock models are failing to account for all observed line ratios for similar reasons, the combination of the two processes would face { similar difficulties.}
{ We could not do full justice to turbulent mixing models given that model predictions were only available for a few lines. }
Our work illustrates the powerful constraints provided by temperature-sensitive line ratios. They reveal significant discrepancies between data and models. Although the interpretation { of our data might be questioned on the grounds of the inevitable level of uncertainty associated with} the stacking process, whereby different galaxies are averaged together, it encourages us to push further with { deeper observations that would aim at detecting the reported lines} in individual galaxies or, { if a significantly improved S/N were achieved}, would allow us to reduce the number of galaxies required for detecting them. Our work also highlights the need for more detailed and consistent modeling that would provide us with more stringent comparisons with observations when better data become available.
\section*{Acknowledgements}
I thank the referee, Luc Binette, whose comments helped to significantly improve this paper. I am also grateful for the hospitality of the Tsinghua Center for Astrophysics at Tsinghua University during an extended visit. RY acknowledges the support of NSF Grant AST-1715898.
Funding for the SDSS and SDSS-II has been provided by the Alfred P. Sloan Foundation, the Participating Institutions, the National Science Foundation, the U.S. Department of Energy, the National Aeronautics and Space Administration, the Japanese Monbukagakusho, the Max Planck Society, and the Higher Education Funding Council for England. The SDSS Web Site is http://www.sdss.org/.
The SDSS is managed by the Astrophysical Research Consortium for the Participating Institutions. The Participating Institutions are the American Museum of Natural History, Astrophysical Institute Potsdam, University of Basel, University of Cambridge, Case Western Reserve University, University of Chicago, Drexel University, Fermilab, the Institute for Advanced Study, the Japan Participation Group, Johns Hopkins University, the Joint Institute for Nuclear Astrophysics, the Kavli Institute for Particle Astrophysics and Cosmology, the Korean Scientist Group, the Chinese Academy of Sciences (LAMOST), Los Alamos National Laboratory, the Max-Planck-Institute for Astronomy (MPIA), the Max-Planck-Institute for Astrophysics (MPA), New Mexico State University, Ohio State University, University of Pittsburgh, University of Portsmouth, Princeton University, the United States Naval Observatory, and the University of Washington.
\bibliographystyle{yahapj}
\subsection{Classification of extremals}
We are now in a position to determine extremals $(q^*(t),\psi^*(t),u^*(t))$. For any extremal, the function $H^*$ is maximized at each time instant $t$ (necessary condition $N3$), implying that
\begin{dgroup}
\begin{dmath}
u_1(t) = u_m \ \mathrm{sign}\bigpar{\frac{r}{m} \psi_1(t) + \frac{2 r}{J_r b} \psi_3(t) }
\end{dmath}
\begin{dmath}
u_2(t) = u_m \ \mathrm{sign}\bigpar{\frac{r}{m} \psi_1(t) - \frac{2 r}{J_r b} \psi_3(t) }
\end{dmath}
\label{eq:pmptorques}
\end{dgroup}
The initial condition $\psi(0)$ determines $\psi^*(t)$ according to \eqref{eq:solnauxillary}, which in turn determines $u^*(t)$ according to \eqref{eq:pmptorques} and hence $q^*(t)$, given $q(0)$. The possible extremals are thus determined by the initial conditions $\psi(0)$. Clearly, three possibilities exist:
\begin{enumerate}
\item $\psi(t) \equiv 0$
\item $\psi(t) \equiv \psi(0) \neq 0$
\item $\psi(t) \neq \psi(0)\ \forall t > 0$
\end{enumerate}
For convenience, we define the following \emph{switching functions}:
\begin{dgroup}
\begin{dmath}
\sigma_1(t) = \bigpar{\frac{r}{m} \psi_1(t) + \frac{2 r}{J_r b} \psi_3(t) }
\end{dmath}
\begin{dmath}
\sigma_2(t) = \bigpar{\frac{r}{m} \psi_1(t) - \frac{2 r}{J_r b} \psi_3(t) }
\end{dmath}
\label{eq:switchfuncs}
\end{dgroup}
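As an illustration, the bang-bang law \eqref{eq:pmptorques} reads off directly from the switching functions \eqref{eq:switchfuncs}. A minimal Python sketch (all parameter values are hypothetical, chosen purely for illustration):
\begin{verbatim}
import numpy as np

# Hypothetical robot parameters (illustration only)
r, m, b, J_r, u_m = 0.05, 10.0, 0.3, 0.5, 1.0

def extremal_controls(psi1, psi3):
    # Switching functions sigma_1, sigma_2 from the costates
    sigma1 = (r / m) * psi1 + (2 * r / (J_r * b)) * psi3
    sigma2 = (r / m) * psi1 - (2 * r / (J_r * b)) * psi3
    # Bang-bang torques; np.sign(0) = 0 flags a singular instant
    return u_m * np.sign(sigma1), u_m * np.sign(sigma2)
\end{verbatim}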
\noindent \emph{Case 1:} If $\psi(t) \equiv 0$ then $H = -\mu + \psi_2 \omega$ and any control $u^*(t) \in U\ \forall t \in I$ would satisfy $N3$. Such a case is known as the doubly-singular control. However, such a control cannot be an extremal control, because $N1$ and $N4$ cannot simultaneously hold; thus $\psi(t) \equiv 0$ cannot be part of a valid extremal.
\noindent \emph{Case 2:} Suppose $\psi(t) \equiv \psi(0) \neq 0$, which occurs when $\psi_2(0) = 0$. Consider either of the two mutually exclusive possibilities:
\begin{enumerate}
\item[$S1$] $\sigma_1(t) = \sigma_1(0) = 0$
\item[$S2$] $\sigma_2(t) = \sigma_2(0) = 0$
\end{enumerate}
When $S1$ ($S2$) holds, the coefficient of $u_1$ ($u_2$) in \eqref{eq:hamiltonian2} is zero, while the coefficient of $u_2$ ($u_1$) is a non-zero constant. This implies that extremal controls may be of the form where one motor torque is $\pm u_m$ over the interval of definition of the trajectory, while the other control is arbitrary.
Suppose instead that $\psi(t) \equiv \psi(0) \neq 0$ but neither $S1$ nor $S2$ holds. Then, according to \eqref{eq:pmptorques}, the motor torques are constant with maximum possible magnitude.
\noindent \emph{Case 3:} Finally, suppose that $\psi_2(0) \neq 0$, implying that $\psi(t) \neq \psi(0)\ \forall t > 0$. Since $\psi_1(t)$ is constant and $\psi_3(t)$ is linear in time $t$, $\sigma_1(t)$ (or $\sigma_2(t)$) either monotonically increases or monotonically decreases, with exactly one time instant where it vanishes and the corresponding control is undefined. Since $u_1 = u_m \mysign{\sigma_1}$ and $u_2 = u_m \mysign{\sigma_2}$, this implies that the motor torques are piecewise constant (with value $\pm u_m$) with no more than one switch each.
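Concretely, each switching function is affine in $t$, so its single candidate switching instant is just its zero crossing. A small helper (reusing the hypothetical parameters $r$, $m$, $b$, $J_r$ from the sketch above, and assuming $\psi_3(t)=\psi_3(0)+\dot{\psi}_3 t$ with $\dot{\psi}_3$ constant):
\begin{verbatim}
def switch_times(psi1, psi3_0, psi3_dot):
    # sigma_1,2(t) = a*psi1 +/- c*(psi3_0 + psi3_dot*t), affine in t
    a, c = r / m, 2 * r / (J_r * b)
    times = []
    for s in (+1.0, -1.0):
        s0 = a * psi1 + s * c * psi3_0   # sigma_i(0)
        sd = s * c * psi3_dot            # d sigma_i / dt
        times.append(None if sd == 0.0 else -s0 / sd)
    return times  # candidate switch instants for (u1, u2)
\end{verbatim}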
To summarize, the application of the Pontryagin Maximum Principle\ results in the conclusion that all extremal controls fall into one of only two possible cases:
\begin{enumerate}
\item[$C1$] At least one motor has a constant torque with value $u_m$ or $-u_m$ over $I$.
\item[$C2$] Both motors have piecewise constant torques (with possible values in $\{-u_m,+u_m \}$) with exactly one switch for each motor, at time instants $t_1$ and $t_2$ such that $t_1$, $t_2 \in (0,t_f)$.
\end{enumerate}
We know that a time-optimal control between any two states exists. We also know that such a control must necessarily be of the form $C1$ or $C2$. Given a desired initial state $q_0$ and target state $q_d$, we attempt to find a control of the form $C1$ or $C2$ such that the control induces the desired change in state. This procedure generates an extremal $(q^*(t),\psi^*(t),u^*(t))$, defined on $t \in I = [0,t_f]$ such that $q(0) = q_0$ and $q(t_f) = q_d$.
We now introduce a notation for the four possible combinations when both motor torques are at their maximum values. We name the combinations $\beta^{+}$, $\beta^{-}$, $\alpha^{+}$, and $\alpha^{-}$ as described in Table \ref{tab:motortorques}. When $u_1 = u_m$ and $u_2 = -u_m$, for example, we refer to this situation as control being $\alpha^{+}$. For any interval of time where $\beta^{+}$ or $\beta^{-}$ control is used, the robot's linear speed changes while the angular velocity remains constant. For any interval of time where $\alpha^{+}$ or $\alpha^{-}$ control is used, the linear speed remains the same, and the angular velocity changes. Let the rates of angular acceleration and linear acceleration be $\alpha = 4 r u_m / ( J_r b)$ and $\beta = 2 r u_m / m$ respectively. Whenever the motor torques are equal to $\pm u_m$, either $\ddot{\theta} = \pm \alpha$ and $\dot{v} = 0$, or $\ddot{\theta} = 0$ and $\dot{v} = \pm \beta$.
\begin{table}
\caption{Notation for four torque modes}
\centering
\begin{tabular}{c|cc}
\hline
&$ u_1 = u_m$ & $ u_1 = -u_m$ \\
\hline
$u_2 = u_m$ & $\beta^{+}$ & $\alpha^{-}$\\
$u_2 = -u_m$ & $\alpha^{+}$ & $\beta^{-}$\\
\hline
\end{tabular}
\label{tab:motortorques}
\end{table}
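As an illustration of Table \ref{tab:motortorques}, the following minimal sketch (in Python; all numeric parameter values are hypothetical, not taken from any specific robot) maps each torque mode to the resulting accelerations $(\dot{v},\ddot{\theta})$ using the rates $\alpha$ and $\beta$ defined above.
\begin{verbatim}
# Illustrative sketch: torque modes of the table above and the
# accelerations they produce. All parameter values are assumed.
u_m, r, m, J_r, b = 1.0, 0.05, 2.0, 0.1, 0.15

alpha = 4 * r * u_m / (J_r * b)   # angular acceleration magnitude
beta = 2 * r * u_m / m            # linear acceleration magnitude

modes = {"beta+": (+u_m, +u_m), "beta-": (-u_m, -u_m),
         "alpha+": (+u_m, -u_m), "alpha-": (-u_m, +u_m)}

for name, (u1, u2) in modes.items():
    v_dot = beta * (u1 + u2) / (2 * u_m)   # +/-beta in beta modes, else 0
    w_dot = alpha * (u1 - u2) / (2 * u_m)  # +/-alpha in alpha modes, else 0
    print(name, v_dot, w_dot)
\end{verbatim}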
For a $C2$ control, since each motor will switch exactly once, any $C2$ extremal consists of at most two instants of switching, and therefore at most three time intervals in which one of the four controls in Table \ref{tab:motortorques} is used. We call each time interval a phase of the extremal. We will refer to the controls used during any such interval using Table \ref{tab:motortorques}. If the control $u_1 = u_m$, $u_2 = -u_m$ is used during a phase, for example, we refer to that phase as an $\alpha^{+}$ phase.
The possible sequences of control phases that are valid $C2$ extremal controls sequences are
\begin{enumerate}
\item $\alpha^{\pm}$ $\rightarrow$ $\beta^{\pm}$ $\rightarrow$ $\alpha^{\mp}$
\item $\alpha^{\pm}$ $\rightarrow$ $\alpha^{\mp}$
\item $\beta^{\pm}$ $\rightarrow$ $\alpha^{\pm}$ $\rightarrow$ $\beta^{\mp}$
\item $\beta^{\pm}$ $\rightarrow$ $\beta^{\mp}$
\end{enumerate}
\noindent where the arrow denotes a transition from one control phase (on the left of the arrow) to another control phase (on the right) at some time instant. Since both motors must switch exactly once, the control in the last phase is always the reversal of the control in the first phase.
Thus, we can introduce a further classification of the extremals, based on the possible combinations of motor torques listed above. The first two sequences are classified as $C2a$ controls. The last two are $C2b$ controls. The classification is based on whether the motor torques in the first phase have the same sign or not.
For any $C1$ extremal control, one motor torque is always $+u_m$ or $-u_m$ and never switches during the transition from initial to goal state. The other motor torque can be an arbitrary function of time (bounded by $u_m$). Thus, extremal controls of the form $C1$ include singular controls, where one motor torque is arbitrary. We want to identify a special subset of $C1$ controls where the non-constant motor torque switches between $\pm u_m$ no more than twice. We will say that such controls are of type $C1_{ns}$. The possible $C1_{ns}$ extremal control sequences are
\begin{enumerate}
\item $\beta^{\pm}$ $\rightarrow$ $\alpha^{\pm}$ $\rightarrow$ $\beta^{\pm}$
\item $\beta^{\pm}$ $\rightarrow$ $\alpha^{\pm}$
\item $\beta^{\pm}$
\item $\alpha^{\pm}$ $\rightarrow$ $\beta^{\pm}$
\item $\alpha^{\pm}$
\end{enumerate}
The next two subsections deal with the construction of extremal controls given $q_0$ and $q_d$, under the assumption that the desired angular velocity is zero.
\section{Conclusion}
We have derived time-optimal controls that enable a torque-controlled differential drive wheeled mobile robot to reach a desired constant velocity in the plane in minimum time, for any initial velocity. These controls can be implemented as functions of time (planning problem) or as a feedback control based on a state-based switching rules.
\section{Acknowledgements}
The authors wish to thank Prof. Oleg Makarenkov at the University of Texas at Dallas for his helpful discussions. The first author became familiar with the details of the Pontryagin Maximum Principle thanks to Prof. Makarenkov's course on Switched Systems in the Spring of 2015.
\section{Theorem 4, $\S$10, \cite{Filippov88}}
\label{app:filippv}
\noindent Condition 1:\hfill\\
Let a domain $G \subset \mathbb{R}^3$ be separated by smooth hypersurfaces $S_i^k$ into domains $S_j^n$, $j = 1,\dots, r$. The superscript denotes the dimension of the surface, the subscript denotes the index of the surface or domain. The boundary of each hypersurface does not belong to the surface, and consists of a finite number of smooth hypersurfaces of smaller dimensions or points.
The vector-valued function $f(t,q)$ is continuous in $(t,q)$ for $a < t < b$ in each of the domains $S_j^n$ up to the boundary, that is, $f(t,q) = f_j^n(t,q)$ for $q \in S_j^n$, and the function $f_j^n$ is continuous in $\bar{S}_j^n$. On some or all of the hypersurfaces $\bar{S}_j^k$, $0 \leq k \leq n-1$, or on some of their closed subsets, continuous vector-valued functions $f_i^k(t,q)$ are given; the vector $f_i^k(t,q)$ lies in the $k$-dimensional plane tangent to $S_i^k$ at the point $q$.
\section{Introduction}
\label{sec:intro}
Differential drive systems are a popular choice for mobile robot platforms. This can be attributed largely to their ability to turn in place, which makes them ideal for navigation in cluttered environments. Another advantage is the simplicity of construction, especially when compared to holonomic wheeled mobile robots. The control of nonholonomic wheeled mobile robots has a long history~\cite{Brockett83,Ryan94,Bloch03,Kuipers11}, with the differential drive robot system being a common example. The most important controls problem typically considered for this robot is the point stabilization problem~\cite{Samson91} or the tracking of a reference trajectory~\cite{Jiang97,dAndrea92,Fierro95}. The point stabilization problem is particularly interesting due to the impossibility of solving it using a smooth time-invariant feedback law~\cite{Brockett83}.
In recent years the field of multi-robot coordination has been an active area of research. Control methods such as consensus algorithms and behaviour-based controls can achieve a wide variety of tasks. In general, these methods often consider single integrator dynamics, and the commanded control for each robot is a velocity in the plane. Such control laws can be implemented in an exact manner only on holonomic wheeled mobile robots. Further, consider a team of multiple differential drive robots that are to be operated by a human using some input device. Typically, the human may command a motion towards a particular direction. Depending on the headings of the robots, they may or may not be able to move in that direction instantaneously.
Thus, in this paper, we are concerned with controlling the planar velocity of the differential drive robot. The goal is to find controls that change the current velocity of the robot to some desired velocity in the plane as fast as possible. The effect of implementing such controls is to make the robots `appear' to be holonomic, with as small a delay as possible in tracking of commanded velocities. Previous work on time-optimal control for the differential drive robot has focused on control of the robot's position~\cite{Reister94,Renaud97,VanLoock13}.
The contribution of this paper lies in applying the Pontryagin Maximum Principle\ to the differential drive robot with bounded torque inputs in order to derive time-optimal controls that drive the forward speed, heading angle and angular velocity to desired values.
\section{Preliminaries}
In this section we describe the differential drive robot system and recount the Pontryagin Maximum Principle, which will be applied to this system.
\subsection{Differential Drive Robot}
A sketch of a differential drive robot is shown in Figure \ref{fig:ddwmr}. The desired velocity $\mathbf{v}_d \in \mathbb{R}^2$ is given by the blue vector, with magnitude $v_d = \| \mathbf{v}_d\|$. However, the robot's velocity lies along the green vector, with magnitude $v \in \mathbb{R}$. The robot heading $\theta$ must be controlled such that the robot velocity matches the desired one.
The kinematic equations of motion of the wheeled mobile robot are
\begin{dmath}
\bmat{\dot{x}\\\dot{y}\\ \dot{\theta}} = \bmat{\cos{(\theta)} & 0 \\ \sin{(\theta)} & 0 \\0 & 1} \bmat{v \\ \omega}
\label{eq:ddkinematics}
\end{dmath}
\noindent where $(x,y)$ is the cartesian position of the centroid of the robot, $v$ is the forward speed and $\omega$ is the angular velocity of the robot. The non-holonomic nature of the equations is due to the fact that the equations \eqref{eq:ddkinematics} satisfy the constraint
\begin{dmath}
\dot{x} \sin{(\theta)} - \dot{y} \cos{(\theta)} = 0
\end{dmath}
We assume that the wheels do not slip. This corresponds to the two constraints
\begin{dgroup*}
\begin{dmath}
\dot{x} \cos{(\theta)} + \dot{y} \sin{(\theta)} + b \dot{\theta} = r \dot{\phi}_R
\end{dmath}
\begin{dmath}
\dot{x} \cos{(\theta)} + \dot{y} \sin{(\theta)} - b \dot{\theta} = r \dot{\phi}_L
\end{dmath}
\end{dgroup*}
The linear speed $v$ and angular velocity $\omega$ are then obtained from the right and left wheel velocities ($\dot{\phi}_R$ and $\dot{\phi}_L$ respectively) as
\begin{dgroup}
\begin{dmath}
v = \dot{\phi}_R \frac{r}{2} + \dot{\phi}_L \frac{r}{2}
\end{dmath}
\begin{dmath}
\omega = \frac{\dot{\phi}_R r}{2 b} - \frac{\dot{\phi}_L r}{2 b}
\end{dmath}
\label{eq:ddrkinematics}
\end{dgroup}
\noindent where $r$ is the radius of the wheels and $2 b$ is the distance between the wheels.
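For concreteness, the following minimal sketch (in Python; the values of $r$ and $b$ are assumptions for illustration) implements the map \eqref{eq:ddrkinematics} from wheel speeds to $(v,\omega)$ together with its inverse.
\begin{verbatim}
r, b = 0.05, 0.15  # assumed wheel radius and half axle length [m]

def body_vel(phi_dot_R, phi_dot_L):
    # wheel speeds -> forward speed v and angular velocity omega
    v = 0.5 * r * (phi_dot_R + phi_dot_L)
    omega = 0.5 * r * (phi_dot_R - phi_dot_L) / b
    return v, omega

def wheel_speeds(v, omega):
    # inverse map: wheel speeds realizing a desired (v, omega)
    return (v + b * omega) / r, (v - b * omega) / r

# round trip: recover (0.3 m/s, 0.8 rad/s)
v, w = body_vel(*wheel_speeds(0.3, 0.8))
assert abs(v - 0.3) < 1e-12 and abs(w - 0.8) < 1e-12
\end{verbatim}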
\begin{figure}[tb]
\centering
\begin{tikzpicture}[scale=0.6]
\input{NHWMRsketch.tex}
\end{tikzpicture}
\caption{The differential drive robot with linear speed $v$, angular velocity $\omega$ and desired velocity $\mathbf{v}_d$.}
\label{fig:ddwmr}
\end{figure}
Some commercially available differential drive robots, such as the iRobot Create, can only be commanded wheel speeds. Further, the wheel speeds that can be obtained are bounded. That is, $| \dot{\phi}_R | \leq \dot{\phi}_{max}$ and $| \dot{\phi}_L | \leq \dot{\phi}_{max}$ for some $\dot{\phi}_{max} > 0$.
A second possibility is when the motors are torque controlled. Let $u_1$ and $u_2$ be the net torques at the right and left wheels respectively. We assume that these torques are bounded, that is, $| u_1 | \leq u_m$ and $| u_2 | \leq u_m$ for some $u_m > 0$. In this case, we can derive
\begin{dgroup}
\begin{dmath}
m \dot{v} = \frac{r}{2} u_1 + \frac{r}{2} u_2
\end{dmath}
\begin{dmath}
J_r \dot{\omega} = \frac{ r}{2 b} u_1 - \frac{r}{2 b} u_2
\end{dmath}
\label{eq:ddwmrdynamics}
\end{dgroup}
\noindent where $m$ is the effective mass of the robot and $J_r$ is the effective rotational inertia of the robot about the vertical axis through the center of the wheel base. The parameters $m$ and $J_r$ are functions of the robot and wheel parameters (see \cite{Sarkar94} for details). Further, the right and left wheel speeds change according to the equations
\begin{dmath}
\bmat{c_1 & c_2 \\ c_2 & c_1} \bmat{\dot{\phi}_R \\ \dot{\phi}_L} = \bmat{u_1 \\ u_2}
\label{eq:wheelspeeddynamics}
\end{dmath}
\noindent where $c_1$ and $c_2$ are strictly positive constants which depend on the robot parameters (see \cite{Sarkar94} for details). Note that $c_1$ and $c_2$ cannot be equal, since equality would require the robot to have no rotational inertia about the vertical axis. Also, $c_2 = 0 \iff m b^2 = I$ (see \cite{Sarkar94} for details).
We will use both models \eqref{eq:ddrkinematics} and \eqref{eq:ddwmrdynamics} separately to address the goals outlined in Section \ref{sec:problemstatement}.
\subsection{Time-optimal Control}
Consider a dynamical system consisting of state $q \in \mathbb{R}^n$.
The dynamics are given by
\begin{dmath}
\dot{q} = f(q,u)
\label{eq:dynamicalsystem}
\end{dmath}
\noindent where $u \in U \subset \mathbb{R}^p$ is the control input and $f: \mathbb{R}^n \times U \rightarrow \mathbb{R}^n$ is a vector field on $\mathbb{R}^n$.
Consider an initial state $q_0 \in \mathbb{R}^n$ and a target state $q_d \in \mathbb{R}^n$. Assume that there exists some control $u(t)$ defined on $[0,t_f]$ such that the corresponding trajectory $q(t)$ defined on $[0,t_f]$ has the property that $q(0) = q_0$ and $q(t_f) = q_d$. The pair $(q(t),u(t))$, where $t$ is defined on $[0,t_f]$, is called a controlled trajectory. Out of all controlled trajectories that achieve the desired change of state, the time-optimal control problem consists of finding one for which the final time $t_f$ is the least possible.
The Pontryagin Maximum Principle \cite{Schattler,Mauder,SussTang,Wu2000} can be used as a tool to find these time-optimal controlled trajectories, by specifying necessary conditions that they must satisfy. Any controlled trajectory meeting the necessary conditions is called an extremal. The time-optimal controls are a subset of the extremals, hence application of the Pontryagin Maximum Principle\ is a good first step to finding them. Further sufficiency conditions need to be applied in order to conclude that an extremal is time-optimal.
We can introduce the adjoint state $\psi \in \mathbb{R}^n$, and the Hamiltonian $H$ given by
\begin{dmath}
H(q,\mu,\psi,u) = - \mu + \psi^T f(q,u)
\label{eq:hamiltonian}
\end{dmath}
\noindent where $\mu \in \{0,1\}$.
The principle states that:
\begin{thm}
Consider system \eqref{eq:dynamicalsystem} with $U$ a compact subset of $\mathbb{R}^{p}$. Let there exist an adjoint state $\psi \in \mathbb{R}^{n}$, a Hamiltonian function $H$ given by \eqref{eq:hamiltonian}, an extremal denoted by the triple $(q^*(t),\psi^*(t),u^*(t))$ and the extremal Hamiltonian $H^*(t) = H(q^*(t),\mu,\psi^*(t),u^*(t))$ defined on $t \in I = [0,t_f]$. Then the following are true
\begin{enumerate}
\item[N1] For all $t \in I $, $(\mu, \psi^*(t)) \neq 0$ holds.
\item[N2] For almost all $t \in I$, the adjoint state satisfies
\begin{dmath}
\dot{\psi} = - \pd{H}{q}(q^*(t),\mu,\psi^*(t),u^*(t))
\label{eq:costatedynamics}
\end{dmath}
\item[N3] For almost all $t \in I$, $u^*(t)$ satisfies
\begin{dmath}
H^*(t) = \max_{u \in U} H(q^*(t),\mu,\psi^*(t),u)
\end{dmath}
\item[N4] For almost all $t \in I$, $H^*(t) = 0$ holds.
\end{enumerate}
\end{thm}
\section{Velocity Control through Wheel speeds}
Suppose that the robot is such that:
\begin{enumerate}
\item The control interface accepts commanded wheel speeds
\item There is a maximum allowed commanded wheel speed
\item The commanded wheel speeds are obtained practically instantly
\end{enumerate}
As mentioned in Section \ref{sec:intro}, these assumptions are satisfied by platforms such as the iRobot Create. Since the commanded wheel speeds can be achieved instantaneously, the state is simply $\theta$, whose dynamics are
\begin{dmath}
\dot{\theta} = \omega
\label{eq:kinematicstaticvd}
\end{dmath}
\noindent and the forward speed $v$ can change instantaneously.
The kinematics of the system are given by \eqref{eq:ddrkinematics}. The right and left wheel speeds ($\dot{\phi}_R$ and $\dot{\phi}_L$ respectively) are bounded. Thus, $| \dot{\phi}_R | \leq \dot{\phi}_{max}$ and $| \dot{\phi}_L | \leq \dot{\phi}_{max}$ for some $\dot{\phi}_{max} > 0$. The maximum forward speed $v_{max}$ and the maximum angular velocity $\omega_{max}$ are given by
\begin{dmath}
{v_{max}= r \dot{\phi}_{max},\ \omega_{max}= \frac{ r}{b} \dot{\phi}_{max}}
\end{dmath}
The robot may have a current heading $\theta$ and forward speed $v$. Our goal is to control the robot such that $\theta \rightarrow \theta_d$ and the forward speed $v$ equals the desired forward speed $v_d$.
We first look at the case when the desired velocity has a constant heading.
\subsection{Constant desired heading}
A straightforward application of the Pontryagin Maximum Principle\ shows that in order to change to a linear speed $\| \mathbf{v}_d \|$ with heading $\theta_d$ in the least amount of time, we can simply rotate in the required direction at maximum angular velocity until the heading $\theta$ matches the desired value, and then change wheel speeds to achieve the desired forward speed $v_d$.
To see this, notice that the right-hand side of \eqref{eq:kinematicstaticvd} does not depend on the state $\theta$, and thus the adjoint equation is given by $\dot{\psi} = 0$, where $\psi \in \mathbb{R}$. Thus, the adjoint state is always $\psi(t) = \psi(0)$. The Hamiltonian is simply
\begin{dmath}
H = -\mu + \psi \omega
\end{dmath}
\noindent and since $| \omega | \leq \omega_{max}$, we can see that $H$ is maximized by selecting
\begin{dmath}
\omega = \omega_{max} \mysign{\psi(t)}\\
= \omega_{max} \mysign{\psi(0)}
\end{dmath}
Thus, the extremal control consists of a constant angular velocity. Clearly, then, the time-optimal control to change the heading $\theta$ and forward speed $v$ is as mentioned above.
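A minimal sketch of this strategy (in Python; the parameter values and the angle-wrapping convention are assumptions for illustration, and we assume instantaneous wheel-speed tracking) is:
\begin{verbatim}
import math

r, b, phi_dot_max = 0.05, 0.15, 10.0  # assumed parameters
omega_max = r * phi_dot_max / b

def heading_then_speed(theta, theta_d, v_d, tol=1e-6):
    # Rotate at maximum angular velocity until theta = theta_d, then
    # command the desired forward speed. At omega_max the wheel-speed
    # bound forces v = 0 during the rotation. Returns (v, omega).
    err = (theta_d - theta + math.pi) % (2 * math.pi) - math.pi
    if abs(err) > tol:
        return 0.0, math.copysign(omega_max, err)
    return v_d, 0.0
\end{verbatim}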
\subsection{Time varying desired heading and desired forward speed}
Suppose the desired velocity is time-varying.
If the goal state $\mathbf{v}_d$ cannot be predicted, the Pontryagin Maximum Principle\ cannot be applied to such a system. The problem of tracking a time-varying trajectory has been tackled in previous research work \cite{}. Some results even achieve exponential tracking. However, these works often do not account for saturation, and are not concerned with shortest-time paths.
An example of a continuous-time velocity tracking controller for a differential drive wheeled mobile robot is given by:
\begin{dgroup}
\begin{dmath}
v = \| v_d \| \cos (\theta - \theta_d)
\end{dmath}
\begin{dmath}
\omega = - k_{\omega} (\theta - \theta_d)
\end{dmath}
\label{eq:continuouskinematic}
\end{dgroup}
\noindent and in the absence of saturations, $v \rightarrow v_d$ and $\theta \rightarrow \theta_d$. If these desired quantities are time varying, such that the rates of change are bounded, then the error in tracking is also bounded. The bound can be reduced by increasing $k_{\omega}$. We now investigate the effect of saturated wheel speeds.
Due to the limits on the wheel speed, we can use \eqref{eq:ddrkinematics} to determine that the achievable forward and angular velocities are constrained to satisfy the relation
\begin{dmath}
\frac{| v |}{r} + \frac{b | \omega |}{r} \leq \dot{\phi}_{max}
\label{eq:constrainedvw}
\end{dmath}
\noindent which is represented as the shaded region in Figure \ref{fig:vwphase}.
\begin{figure}[tb]
\centering
\begin{tikzpicture}[scale = 0.75]
\input{vwphase.tex}
\end{tikzpicture}
\caption{The shaded region represents the achievable forward and angular velocities of the differential drive wheeled mobile robot}
\label{fig:vwphase}
\end{figure}
Given the desired values of $v$ and $\omega$ from \eqref{eq:continuouskinematic}, the desired wheel speeds may be computed as
\begin{dmath*}
\bmat{\dot{\phi}_{R,d}\\ \dot{\phi}_{L,d}} = \frac{1}{r} \bmat{1 & b\\ 1 & -b } \bmat{v \\ \omega}
\end{dmath*}
If these wheel speeds are commanded, then the actual wheel speeds obtained are
\begin{dmath*}
{\dot{\phi}_{R} = \mathrm{sat}(\dot{\phi}_{R,d} , \dot{\phi}_{max}) ,\dot{\phi}_{L} = \mathrm{sat}(\dot{\phi}_{L,d}, \dot{\phi}_{max} )}
\end{dmath*}
\noindent where
\begin{dmath}
\mathrm{sat}(x,\alpha) = \begin{cases} x & \mbox{ if } |x| < \alpha \\ \mysign{x} \alpha & \mbox{ otherwise }\end{cases}
\end{dmath}
The actual forward velocity $v_{out}$ and $\omega_{out}$ achieved by the robot can be computed from these saturated wheel velocities using \eqref{eq:ddrkinematics}. They must satisfy \eqref{eq:constrainedvw}.
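The pipeline just described, commanded $(v,\omega)$ $\to$ desired wheel speeds $\to$ saturation $\to$ achieved $(v_{out},\omega_{out})$, can be sketched as follows (parameter values assumed for illustration):
\begin{verbatim}
import math

r, b, phi_dot_max = 0.05, 0.15, 10.0  # assumed parameters

def sat(x, a):
    # saturation function from the displayed equation above
    return x if abs(x) < a else math.copysign(a, x)

def achieved_velocity(v_cmd, w_cmd):
    pr = sat((v_cmd + b * w_cmd) / r, phi_dot_max)  # right wheel
    pl = sat((v_cmd - b * w_cmd) / r, phi_dot_max)  # left wheel
    v_out = 0.5 * r * (pr + pl)
    w_out = 0.5 * r * (pr - pl) / b
    return v_out, w_out  # always satisfies the constraint above
\end{verbatim}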
Thus, if $\| v_d\| > v_{max}$ then the heading angle does not converge. Let $\| v_d\| = v_{max} + \epsilon$, where $\epsilon > 0$. If $| \omega | < \frac{2 \epsilon}{b}$ then both desired wheel speeds are still greater than $\dot{\phi}_{max}$, implying that the actual forward and angular velocities due to the saturated wheel speeds will be $\pm v_{max}$ and $0$ respectively. Thus, when the error is non-zero yet small, the angular velocity remains zero, and the heading will not converge.
One can immediately see that this situation can be remedied by saturating the magnitude of the desired velocity. That is:
\begin{dgroup}
\begin{dmath}
v = \mathrm{sat}(\| v_d \| , v_{max}) \cos (\theta - \theta_d)
\end{dmath}
\begin{dmath}
\omega = - k_{\omega} (\theta - \theta_d)
\end{dmath}
\end{dgroup}
\noindent which allows the robot heading to converge, since $\epsilon = 0$ and hence the angular velocity after saturation is never $0$, unless the desired angular velocity is zero.
Given that our goal is to re-orient the robot to match a desired heading and speed, the above method of computing $v$ and $\omega$ can be improved upon. The idea is to recognize that the achievable forward and angular velocities satisfy \eqref{eq:constrainedvw}. Then, given $v$ and $\omega_d$, we can compute $\bar{v}$ and $\bar{\omega}$ as
\begin{dmath}
\bar{\omega} = \begin{cases} \mathrm{sign}(\omega_d) \omega_{max} & \mbox{ if } | \omega_d | \geq \omega_{max} \\ \omega_d &\mbox{ if } | \omega_d | < \omega_{max} \end{cases}
\end{dmath}
\noindent and
\begin{dmath}
\bar{v} = \begin{cases} \mysign{v} \mathrm{max} \left( 0, v_{max} - b | \omega_d | \right) & \mbox{ if } \frac{| v |}{r} + \frac{b | \omega_d |}{r} > \dot{\phi}_{max} \\
v & \mbox{otherwise} \end{cases}
\end{dmath}
The effect is to always prioritize rotational motion when given $v$ and $\omega_d$ outside the shaded region in Figure \ref{fig:vwphase}.
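A sketch of this rotation-prioritizing clamp (in Python; parameter values assumed) is:
\begin{verbatim}
import math

r, b, phi_dot_max = 0.05, 0.15, 10.0  # assumed parameters
v_max = r * phi_dot_max
omega_max = r * phi_dot_max / b

def prioritize_rotation(v, omega_d):
    # omega is clamped first; v is then reduced only as much as
    # needed to keep the wheel speeds feasible.
    w_bar = omega_d if abs(omega_d) < omega_max \
            else math.copysign(omega_max, omega_d)
    if (abs(v) + b * abs(omega_d)) / r > phi_dot_max:
        v_bar = math.copysign(max(0.0, v_max - b * abs(omega_d)), v)
    else:
        v_bar = v
    return v_bar, w_bar
\end{verbatim}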
A further improvement may be obtained through a heuristic solution that can be viewed as a hybrid control. Suppose that the desired heading is $\theta_d(t)$ and $|\dot{\theta}_d (t)|< \omega_{max}$. A bang-bang control can be used until $\theta(t) = \theta_d(t)$. After this time, the modified continuous tracking controller can be used. The benefit of this heuristic is that the lack of forward motion during the bang-bang phase minimizes the drift in position.
\subsection{Simulations}
Consider a robot with heading angle $0$. Let $r = 1\,m$, $b = 5\,m$, and $\dot{\phi}_{max} = 0.5\,rad/sec$. The robot is commanded to move with a velocity of $5\,m/s$ in the positive $y$-axis direction. This implies that $\| \mathbf{v}_d \| > v_{max}$, and $\theta_d = \frac{\pi}{2}$. The results for the time-optimal control, the continuous control (without saturation of $\| \mathbf{v}_d \|$), the continuous control (with saturation of $\| \mathbf{v}_d \|$) and the modified continuous control are given in Figure \ref{fig:vwphase1}.
\begin{figure}[tb]
\centering
\includegraphics[width=0.35\textwidth]{wheelspeedstatic.pdf}
\caption{Plot of $\theta$ versus time for different control strategies with a static desired velocity $\mathbf{v}_d$.}
\label{fig:vwphase1}
\end{figure}
Next, we command a time-varying velocity $\mathbf{v}_d(t) = [1\ \ (1+t)]^T$. The results for the continuous control, the modified continuous control (with saturation on $\mathbf{v}_d$) and the hybrid control strategy are given in Figure \ref{fig:vwphase2}. The modified and hybrid controls have noticeably smaller error compared to the continuous control.
\begin{figure}[tb]
\centering
\includegraphics[width=0.35\textwidth]{wheelspeeddynamics.pdf}
\caption{Plot of $\theta$ versus time for different control strategies with a time-varying desired velocity $\mathbf{v}_d$.}
\label{fig:vwphase2}
\end{figure}
\section{Problem Statement}
\label{sec:problemstatement}
The position of the centroid of the wheeled mobile robot at time $t$ is denoted by $(x(t),y(t)) \in \mathbb{R}^2$. We are given a desired velocity $\mathbf{v}_d \in \mathbb{R}^2$. The goal is to design control strategies such that the derivative of the position matches the desired velocity, and that the convergence is achieved as fast as possible.
\begin{dmath}
\bmat{\dot{x}\\\dot{y}} \rightarrow \mathbf{v}_d
\end{dmath}
Given the robot kinematics \eqref{eq:ddrkinematics}, this is equivalent to requiring that
\begin{dmath}
{\theta \rightarrow \theta_d,\ v \rightarrow \| \mathbf{v}_d \|}
\end{dmath}
\noindent where $\theta_d$ is the angle that $\mathbf{v}_d$ makes with the $x$-axis of the coordinate axis in which $(x,y)$ is defined.
We want to solve the control problem for two types of inputs:
\begin{enumerate}
\item The control inputs are the wheel (or motor) speeds
\item The control inputs are the wheel (or motor) torques
\end{enumerate}
\noindent and two types of reference signals
\begin{enumerate}
\item $\mathbf{v}_d$ is constant
\item $\mathbf{v}_d$ is time-varying
\end{enumerate}
In the rest of the paper, we address the case of wheel speed inputs for both types of reference signals, and torque inputs for the case of constant reference signal $\mathbf{v}_d$.
\subsection{Synthesis for $\omega(0) \neq 0$}
In the previous subsection, we have found controls that satisfy the necessary conditions of the Pontryagin Maximum Principle\ when the initial angular velocity is zero, and these controls result in $v \rightarrow v_d, \theta \rightarrow \theta_d$. These trajectories can be found for any $\theta_0$ and $v_0$. More precisely, we can specify the motor torques as functions of time that achieve the change in state in the least possible time.
The case when $\omega(0) = 0$ corresponds to the mobile robot moving with a constant heading in the plane. We would like to accommodate the case when $\omega(0) \neq 0$ for various reasons, listed below:
\begin{itemize}
\item The robot could have been following a circular trajectory when a new desired linear velocity is commanded
\item Due to disturbances on the input or state, the robot may need to compute new switching times for an initial condition that corresponds to $\omega(0) \neq 0$
\item We wish to develop a state-based feedback control law that is time-optimal.
\end{itemize}
In Section \ref{ssec:synthwzero} we have found the time-optimal control for initial conditions of the form $(v_0,\theta_0,0)$. For each such initial condition, there is a unique extremal $(q^*(t),\psi^*(t),u^*(t))$ defined on $I = [0,t_f]$ such that the initial condition is $q^*(0)$ and $q^*(t_f) = (0,0,0)$. Thus, this extremal trajectory is clearly the time-optimal one.
In this subsection, we will see that we can derive more than one extremal for some initial conditions, and hence we need to establish which one is time-optimal when possible. Our strategy will be to determine sets of initial conditions where extremal controls of type $C1_{ns}$, $C2a$ or $C2b$ are such that the corresponding extremal trajectories start at the initial condition and end at the origin.
Let the initial condition be $q_0 = (v,\theta,\omega)$. We can define the following subsets of $\mathbb{R}^3$:
\begin{dmath}
\Omega_1 = \left\{ (v,\theta,\omega) \hiderel{\in} \mathbb{R}^3 \hiderel{:} H_1(v,\theta,\omega) \hiderel{<} 0 \textrm{ and } {H_2(v,\theta,\omega) \hiderel{<} 0 }\right\}
\end{dmath}
\begin{dmath}
\Omega_2 = \left\{ (v,\theta,\omega) \hiderel{\in} \mathbb{R}^3 \hiderel{:} H_1(v,\theta,\omega) \hiderel{>} 0 \textrm{ and } {H_2(v,\theta,\omega) \hiderel{>} 0 }\right\}
\end{dmath}
\begin{dmath}
\Omega_3 = \left\{ (v,\theta,\omega) \hiderel{\in} \mathbb{R}^3 \hiderel{:} \omega H_1(v,\theta,\omega) \hiderel{<} 0 \right\}
\end{dmath}
\begin{equation}
\Omega_4 = \left\{ (v,\theta,\omega) \hiderel{\in} \mathbb{R}^3 \hiderel{:} H_1(v,\theta,\omega) H_2(v,\theta,\omega) \hiderel{<} 0 \right\}
\end{equation}
\begin{dmath}
S_5 = \left\{ (v,\theta,\omega) \hiderel{\in} \mathbb{R}^3 \hiderel{:} H_1(v,\theta,\omega) \hiderel{=} 0, H_2(v,\theta,\omega) \hiderel{\neq} 0 \right\}
\end{dmath}
\begin{dmath}
S_{6} = \left\{ (v,\theta,\omega) \hiderel{\in} \mathbb{R}^3 \hiderel{:} H_1(v,\theta,\omega) \hiderel{\neq} 0, H_2(v,\theta,\omega) \hiderel{=} 0 \right\}
\end{dmath}
\begin{dmath}
L_v = \left\{ (v,\theta,\omega) \hiderel{\in} \mathbb{R}^3 \hiderel{:} H_1(v,\theta,\omega) \hiderel{=} H_2(v,\theta,\omega) \hiderel{=} 0, v \hiderel{\neq} 0 \right\}
\end{dmath}
\begin{dmath}
L_\omega = \left\{ (v,\theta,\omega) \hiderel{\in} \mathbb{R}^3 \hiderel{:} H_1(v,\theta,\omega) \hiderel{=} H_2(v,\theta,\omega) \hiderel{=} 0, \omega \hiderel{\neq} 0 \right\}
\end{dmath}
\noindent where
\begin{dmath}
H_1(v,\theta,\omega) = 2 \alpha \theta + \omega |\omega|
\end{dmath}
\noindent and
\begin{dmath}
H_2(v,\theta,\omega) = \frac{\omega | \omega | }{2 \alpha } + \theta + \frac{\omega |v| }{\beta}
\end{dmath}
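For concreteness, the following sketch (in Python, using the same $\alpha = 0.5$, $\beta = 1$ as in the figures below) evaluates $H_1$ and $H_2$ and classifies a state into the disjoint sets $\Omega_1$, $\Omega_2$, $\Omega_4$, $S_5$, $S_{6}$, $L_v$ and $L_\omega$; note that $\Omega_3$ overlaps these sets and is therefore not part of this partition.
\begin{verbatim}
alpha, beta = 0.5, 1.0  # values used in the figures below

def H1(v, theta, w):
    return 2 * alpha * theta + w * abs(w)

def H2(v, theta, w):
    return w * abs(w) / (2 * alpha) + theta + w * abs(v) / beta

def region(v, theta, w, tol=1e-12):
    h1, h2 = H1(v, theta, w), H2(v, theta, w)
    if abs(h1) < tol and abs(h2) < tol:
        if abs(w) > tol: return "L_omega"
        if abs(v) > tol: return "L_v"
        return "origin"
    if abs(h1) < tol: return "S5"
    if abs(h2) < tol: return "S6"
    if h1 < 0 and h2 < 0: return "Omega1"
    if h1 > 0 and h2 > 0: return "Omega2"
    return "Omega4"  # h1 and h2 have opposite signs
\end{verbatim}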
The surfaces $H_1(v,\theta,\omega) = 0$ and $H_2(v,\theta,\omega) = 0$ are plotted in Figures \ref{fig:H1eq0} and \ref{fig:H2eq0} respectively. For comparison, they are superimposed in Figure \ref{fig:H1H2eq0}.
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{H1equals03.pdf}
\caption{The surface $H_1(v,\theta,\omega) = 0$ for $\alpha = 0.5, \beta = 1$.}
\label{fig:H1eq0}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=0.4\textwidth]{H2equals03.pdf}
\caption{The surface $H_2(v,\theta,\omega) = 0$ for $\alpha = 0.5, \beta = 1$.}
\label{fig:H2eq0}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=0.4\textwidth]{H1H2equals03.pdf}
\caption{The surfaces $H_1(v,\theta,\omega) = 0$ and $H_2(v,\theta,\omega) = 0$ for $\alpha = 0.5, \beta = 1$.}
\label{fig:H1H2eq0}
\end{figure}
\begin{lem}
Let $q_0 \in \Omega_1 \cup \Omega_2$. Then, an extremal $(q^*(t),\psi^*(t),u^*(t))$ defined on $I = [0,t_f]$ exists such that $q^*(0) = q_0$ and $q^*(t_f) = (0,0,0)$, and $u^*(t)$ is of type $C2a$.
\label{lem:omega12}
\end{lem}
\input{proofOmega12}
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{Omega1Omega2.pdf}
\caption{The shaded region corresponds to the set $\Omega_1 \cup \Omega_2$ for some fixed value of $v$.}
\label{fig:Omega12}
\end{figure}
\begin{rem}
$\Omega_1 \cap \Omega_2 = \emptyset $
\end{rem}
\begin{lem}
Let $q_0 \in \Omega_3$. Then, an extremal $(q^*(t),\psi^*(t),u^*(t))$ defined on $I = [0,t_f]$ exists such that $q^*(0) = q_0$ and $q^*(t_f) = (0,0,0)$, and $u^*(t)$ is of type $C2b$.
\label{lem:omega3}
\end{lem}
\input{proofOmega3}
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{Omega3.pdf}
\caption{The shaded region corresponds to the set $\Omega_3$ for some fixed value of $v$.}
\label{fig:Omega3}
\end{figure}
We have already defined sets $\Omega_1$ and $\Omega_2$, and fortunately $\Omega_1 \cap \Omega_2 = \emptyset$. This means that for initial conditions in either of those sets, we have a unique $C2a$ control which results in a transition of the state to the origin. Unfortunately, $\Omega_1 \cap \Omega_3 \neq \emptyset$ and $\Omega_2 \cap \Omega_3 \neq \emptyset$. This means that for $(v,\theta,\omega) \in \Omega_1 \cap \Omega_3 $ or $(v,\theta,\omega) \in \Omega_2 \cap \Omega_3 $, we must be able to decide whether the $C2a$ control or the $C2b$ control is faster. We will now show that the $C2a$ control is always faster.
\begin{lem}
Let $q_0 \in (\Omega_1 \cup \Omega_2 ) \cap \Omega_3$. Let the duration of the $C2a$ extremal corresponding to $q_0$ be $t_{C2a}$ and the duration of the $C2b$ extremal be $t_{C2b}$. Then
\begin{dmath}
t_{C2a} < t_{C2b}
\end{dmath}
\label{lem:C2afasterC2b}
\end{lem}
\input{prooftC2atC2b}
We have determined initial conditions for which $C2a$ or $C2b$ controls exist such that the state transitions to $(0,0,0)$. We now determine initial conditions for which $C1_{ns}$ controls exist. Recall that an extremal control is of the form $C1_{ns}$ if one of the motor torques is always $+u_m$ or $-u_m$ for the duration of the trajectory, and the other motor switches no more than twice.
\begin{lem}
Let $q_0 \in \Omega_4$. Then, an extremal $(q^*(t),\psi^*(t),u^*(t))$ defined on $I = [0,t_f]$ exists such that $q^*(0) = q_0$ and $q^*(t_f) = (0,0,0)$, and $u^*(t)$ is of the form $\beta^{\pm} \rightarrow \alpha^{\pm} \rightarrow \beta^{\pm}$.
\label{lem:omega4bab}
\end{lem}
\input{proofOmega4}
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{Omega4.pdf}
\caption{The shaded region corresponds to the set $\Omega_4$ for some fixed value of $v$.}
\label{fig:Omega4}
\end{figure}
\begin{lem}
Let $q_0 \in \Omega_4$. Then, a unique extremal $(q^*(t),\psi^*(t),u^*(t))$ defined on $I = [0,t_f]$ exists such that $q^*(0) = q_0$ and $q^*(t_f) = (0,0,0)$, and $u^*(t)$ is of the form $\alpha^{\pm} \rightarrow \beta^{\pm} \rightarrow \alpha^{\pm}$.
\label{lem:omega4aba}
\end{lem}
\input{proofOmega4aba}
\begin{lem}
Let $q_0 \in S_5$. Then, an extremal $(q^*(t),\psi^*(t),u^*(t))$ defined on $I = [0,t_f]$ exists such that $q^*(0) = q_0$ and $q^*(t_f) = (0,0,0)$, and $u^*(t)$ is of the form $\alpha^{\pm} \rightarrow \beta^{\pm}$.
\label{lem:omega5}
\end{lem}
\begin{pf}
Since $H_1 = 0$ and $H_2 \neq 0$, we have $\omega v \neq 0$. Consider the equations \eqref{eq:wnonzeroc1ns2} wherein $t_3 = 0$. The solutions for $t_1$ and $t_2$ are given by
\begin{dgroup}
\begin{dmath}
t_1 = \frac{|\omega | }{\alpha}
\label{eq:H1orH20sol1}
\end{dmath}
\begin{dmath}
t_2 = \frac{|v | }{\beta}
\label{eq:H1orH20sol2}
\end{dmath}
\end{dgroup}
Clearly, $t_1$ and $t_2$ are positive precisely when $\omega v \neq 0$. Thus, $q_0 \in S_5$ implies that a control of the form above exists such that the resulting trajectory reaches the origin.
\end{pf}
\begin{lem}
Let $q_0 \in S_{6}$. Then, an extremal $(q^*(t),\psi^*(t),u^*(t))$ defined on $I = [0,t_f]$ exists such that $q^*(0) = q_0$ and $q^*(t_f) = (0,0,0)$, and $u^*(t)$ is of the form $\beta^{\pm} \rightarrow \alpha^{\pm}$.%
\label{lem:omega6}
\end{lem}
\begin{pf}
Since $H_2 = 0$ and $H_1 \neq 0$, we can again conclude that $\omega v \neq 0$. The equations \eqref{eq:wnonzeroc1nsd714} wherein $t_3 = 0$ have solutions
\begin{dgroup}
\begin{dmath}
t_1 = \frac{|v | }{\beta}
\end{dmath}
\begin{dmath}
t_2 = \frac{|\omega | }{\alpha}
\end{dmath}
\end{dgroup} Again, $t_1$ and $t_2$ are positive precisely when $\omega v \neq 0$. Thus, $q_0 \in S_{6}$ implies that a control of the form above exists such that the trajectory reaches the origin.
\end{pf}
\begin{lem}
Let $q_0 \in L_v$. Then, an extremal $(q^*(t),\psi^*(t),u^*(t))$ defined on $I = [0,t_f]$ exists such that $q^*(0) = q_0$ and $q^*(t_f) = (0,0,0)$, and $u^*(t)$ is of the form $\beta^{\pm}$.%
\label{lem:Lv}
\end{lem}
\begin{pf}
Since $q_0 \in L_v$, $\omega = \theta =0$ and $v \neq 0$. Substituting $\omega = \theta =0$ in \eqref{eq:wnonzeroc1nsd714} we immediately see that $t_2 = t_3 = 0$ and $t_1 = \frac{| v | }{\beta}$. Thus a control of the form $\beta^{\pm}$ exists such that the trajectory from $q_0$ reaches the origin.
\end{pf}
\begin{lem}
Let $q_0 \in L_\omega$. Then, an extremal $(q^*(t),\psi^*(t),u^*(t))$ defined on $I = [0,t_f]$ exists such that $q^*(0) = q_0$ and $q^*(t_f) = (0,0,0)$, and $u^*(t)$ is of the form $\alpha^{\pm}$.%
\label{lem:Lomega}
\end{lem}
\begin{pf}
Since $q_0 \in L_\omega$, $H_1(v,\theta,\omega) = H_2(v,\theta,\omega) = v = 0$. From \eqref{eq:proofO4c2t3} we see that $t_3 = 0$ and from \eqref{eq:proofO4c2s1s2t2} we see that $t_2 = 0$. Further, $t_3 = 0$ implies that $t_1 = \frac{| \omega | }{\alpha}$, due to \eqref{eq:proofO4c2s1s2t2}. Thus, a control of the form $\alpha^{\pm}$ exists such that the trajectory from $q_0$ reaches the origin.
\end{pf}
\begin{lem}
Let $q_0 = (v_0,\theta_0,\omega_0) \in \mathbb{R}^3$. Let there exist an extremal $(q^*(t),\psi^*(t),u^*(t))$ defined on $I = [0,t_f]$ such that $q^*(0) = q_0$ and $q^*(t_f) = (0,0,0)$, such that $u^*(t)$ is a singular control. Then there exists a $C1_{ns}$ control $\bar{u}^*(t)$ and corresponding extremal $(\bar{q}^*(t),\bar{\psi}^*(t),\bar{u}^*(t))$ defined on $[0,\bar{t}_f]$ such that $\bar{q}^*(0) = q_0$, $\bar{q}^*(\bar{t}_f) = (0,0,0)$ and $\bar{t}_f = t_f$.
\label{lem:ignoreC1s}
\end{lem}
\input{proofSingular_v5.tex}
\begin{rem}
Lemma \ref{lem:ignoreC1s} implies that if we consider all possible $C1_{ns}$ trajectories from an initial state $q_0$, then we do not need to consider singular controls even if they exist, since one of the $C1_{ns}$ controls will result in the trajectory reaching the goal state in the same time.
\end{rem}
\begin{lem}
Let $q_0 \in \Omega_3 \cap ( \Omega_4 \cup S_5 \cup S_{6} \cup L_v \cup L_\omega )$. Let the duration of the $C2b$ extremal corresponding to $q_0$ be $t_{C2b}$ and the duration of the $C1_{ns}$ extremal be $t_{C1ns}$. Then
\begin{dmath}
t_{C1ns} \leq t_{C2b}
\end{dmath}
\label{lem:C2bslow}
\end{lem}
\input{prooftC1atC2b}
\section{Regular Synthesis}
\label{sec:regularsynthesis}
Given a point $q \in \mathbb{R}^3$, we can determine the form of the time-optimal control $u_{q}^*(t)$ defined on $[0,t_f]$ which results in a trajectory $q^*(t)$ corresponding to a minimum-time transition from $q$ to the origin. The trajectory $q^*(t)$ is the solution to a differential equation driven by the discontinuous input signal $u_q^*(t)$.
For each $t \in [0,t_f]$, $u_{q}^*(t) \in \{\alpha^{-},\alpha^{+},\beta^{+},\beta^{-} \}$. One can immediately define a feedback control $v^*(q) = u_{q}^{*}(0)$. The resulting closed-loop system under $v^*(q)$ is now a differential equation with a discontinuous right-hand side. It is not necessary that the solutions of this new dynamical system correspond to $q^*(t)$.
In order to prove that our feedback law results in time-optimal behaviour, we must construct an optimal regular synthesis~\cite{Piccoli00} and show that the solutions of the closed-loop dynamical system under the feedback law only produces the trajectories defined by the optimal regular synthesis.
\input{regsynthdef}
We can construct two different optimal regular syntheses $\Gamma_1$ and $\Gamma_2$. The difference between them is seen in the control for points in the set $\Omega_4$.
\begin{prop}
For every $x \in (\Omega_1 \cup \Omega_2)$, $\Omega_4$, $S_5$, $S_{6}$, $L_v$, or $L_\omega$, let $(q_x^*(t),u^*_x(t))$ be the unique extremal defined for $x$ by Lemma \ref{lem:omega12}, \ref{lem:omega4aba}, \ref{lem:omega5}, \ref{lem:omega6}, \ref{lem:Lv}, or \ref{lem:Lomega} respectively. Let $\Gamma_1 = \cup_{x \in \mathbb{R}^3} (q_x^*(t),u^*_x(t))$. Then, $\Gamma_1$ defines an optimal regular synthesis for the time-optimal control problem.
\label{prop:regsynth1}
\end{prop}
\begin{pf}
The set $(\Omega_1 \cup \Omega_2) \cup \Omega_4 \cup S_5 \cup S_{6} \cup L_v \cup L_\omega \cup \{ 0 \}$ is equal to $\mathbb{R}^3$. For each $x \in \mathbb{R}^3$, the controlled trajectory $(q_x^*(t),u^*_x(t))$ is unique and extremal, as shown in the appropriate Lemma mentioned in the proposition. Thus, $\Gamma_1$ forms a total extremal presynthesis. It is straightforward to check that $\Gamma_1$ is memoryless. Thus, $\Gamma_1$ is a total extremal synthesis.
Next, we show that $\Gamma_1$ is a regular synthesis (Definition~$2.12$,~\cite{Piccoli00}). The conditions used to define a regular synthesis rely on numerous other definitions. Any term appearing in the rest of this proof which has not been defined in this paper has been defined in~\cite{Piccoli00}. We refer the reader to~\cite{Piccoli00} for these definitions.
To show that a synthesis is regular, we must show that a certain cost function satisfies weak continuity conditions (Definition $2.8$, \cite{Piccoli00}) and that $\Gamma_1$ is $(f,L)$-differentiable (Definition $2.9$, \cite{Piccoli00}) at all points in $\mathrm{Dom}(\Gamma_1)$ excluding a thin set (Definition $2.10$,~\cite{Piccoli00}).
The cost function $V_{\Gamma} \colon \mathbb{R}^3 \to \mathbb{R}$ is simply the time taken to transition from a given initial condition to the origin. The function is continuous on $\mathbb{R}^3$, as can be seen from the analytical expressions obtained. Thus, $V_{\Gamma}$ satisfies the conditions of Definition $2.8$ in~\cite{Piccoli00}.
The property of $(f,L)$-differentiability is more complicated to show. The Lagrangian function $L \colon \mathbb{R}^3 \times U \to \mathbb{R}$ common in optimal control problems reduces to the constant function $L(q,u) = 1$ in the case of time-optimal control for reaching a single goal state. The dynamics $f(q,u) = A q + B u$ is linear. Define $\tilde{f}(q,u) = [f(q,u)^T L(q,u)]^T$. If a control $\eta(t)$ is given, then $\tilde{f}_{\eta}(q,t) = \tilde{f}(q,\eta(t))$. Then,
\begin{dmath}
\tilde{f}_{\eta}(q,t) = \bmat{A q + B \eta(t) \\ 1}
\end{dmath}
We define the function $\rho_{\tilde{f},\Gamma_1,\bar{v},t_f}(v)$ as in~\cite{Piccoli00}, and compute it as
\begin{dmath}
\rho_{\tilde{f},\Gamma_1,\bar{v},t_f}(v)= \tilde{f}_{\eta_{\bar{x}} + v}(\gamma_{\bar{x}}(t),t) -\tilde{f}_{\eta_{\bar{x}}} (\gamma_{\bar{x}}(t),t) \hfill\\
= \bmat{A \gamma_{\bar{x}}(t) + B \eta_{\bar{x}}(t) + B v - A \gamma_{\bar{x}}(t) - B \eta_{\bar{x}}(t) \\ 1-1} \hfill\\
= \bmat{B v\\0}\hfill
\label{eq:fldiffres1}
\end{dmath}
The set $S_{thin} = \{ q \in \mathbb{R}^3 \colon q \in S_5 \cup S_{6} \cup L_v \cup L_{\omega} \cup \{0\} \} = \mathbb{R}^3 \backslash (\Omega_1 \cup \Omega_2 \cup \Omega_4) $ is a thin set based on Definition $2.10$ in~\cite{Piccoli00}, where the only measure-zero set is the singleton containing the origin. Because the extremal controls are constant on each of the sets $\Omega_1$, $\Omega_2$ and $\Omega_4$, we have that any extremal control is piecewise constant, with no more than two points of discontinuity. Thus,
\begin{dmath}
{ D \tilde{f}_\eta(q,t) = \bmat{A & 0 \\ 0 &0 } \mbox{ if } q \in \mathbb{R}^3 \backslash S_{thin} }
\label{eq:Dftilde}
\end{dmath}
We compute the following norm for points $y \in \mathbb{R}^3 \backslash S_{thin}$
\begin{dmath}
\left\| \tilde{f}_{\eta_{x}}(y ,t) - \tilde{f}_{\eta_{x}}(\gamma_{\bar{x}}(t) ,t) - D \tilde{f}_{\bar{x}}(\gamma_{\bar{x}}(t) ,t) (y - \gamma_{\bar{x}}(t)) \right\| \\
= \left\| \bmat{A y + B \eta_x(t) - A \gamma_{\bar{x}}(t) - B \eta_x(t)\\1 - 1 } - D \tilde{f}_{\bar{x}}(\gamma_{\bar{x}}(t) ,t) (y - \gamma_{\bar{x}}(t)) \right\| \\
= \left\| \bmat{ A (y - \gamma_{\bar{x}}(t)) \\ 0 } - \bmat{A & 0 \\ 0 &0 } (y - \gamma_{\bar{x}}(t)) \right\|
=0
\label{eq:fldiffres2}
\end{dmath}
\noindent where we have used \eqref{eq:Dftilde}.
The right-hand sides of \eqref{eq:fldiffres1} and \eqref{eq:fldiffres2} immediately show that the conditions $DC1$ and $DC2$ of Definition $2.9$ in~\cite{Piccoli00} are satisfied by $f$ and $L$, for points $q \in \mathbb{R}^3 \backslash S_{thin}$. Thus, $\Gamma_1$ is $(f,L)$-differentiable at any point $\bar{x} \in \mathbb{R}^3 \backslash S_{thin}$. Thus, based on Definition $2.12$ in~\cite{Piccoli00}, $\Gamma_1$ is regular, where the thin set is $S_{thin}$ defined above.
We have established that $\Gamma_1$ is a total extremal regular synthesis. Since we are concerned with the time-optimal control problem with a single goal state, we have that $V_{\Gamma_1} (0) = 0$, and so condition $(2.11)$ in~\cite{Piccoli00} is satisfied. By condition (b) of Theorem $2.13$ in~\cite{Piccoli00}, we can conclude that $\Gamma_1$ is an optimal regular synthesis.
\end{pf}
\begin{prop}
For every $x \in (\Omega_1 \cup \Omega_2)$, $\Omega_4$, $S_5$, $S_{6}$, $L_v$, or $L_\omega$, let $(q_x^*(t),u^*_x(t))$ be the unique extremal defined for $x$ by Lemma \ref{lem:omega12}, \ref{lem:omega4bab}, \ref{lem:omega5}, \ref{lem:omega6}, \ref{lem:Lv}, or \ref{lem:Lomega} respectively. Let $\Gamma_2 = \cup_{x \in \mathbb{R}^3} (q_x^*(t),u^*_x(t))$. Then, $\Gamma_2$ defines an optimal regular synthesis for the time-optimal control problem.
\end{prop}
\begin{pf}
The arguments are similar to those in the proof of Proposition \ref{prop:regsynth1} and therefore the proof is omitted.
\end{pf}
Lemmas \ref{lem:omega12}-\ref{lem:C2bslow} have been used to construct two distinct optimal regular syntheses. We now define two control laws, one corresponding to each of these syntheses.
Consider the following (sub)sets:
\begin{align}
\Omega_{4}^{v+} &= \{ q\in \Omega_{4} : v > 0 \} \\
\Omega_{4}^{v-} &= \{ q\in \Omega_{4} : v < 0 \} \\
\Omega_{4}^{\omega+} &= \{ q\in \Omega_{4} : \omega > 0 \} \\
\Omega_{4}^{\omega-} &= \{ q\in \Omega_{4} : \omega < 0 \} \\
S_{5}^{\omega+} &= \{ q\in S_{5} : \omega > 0 \} \\
S_{5}^{\omega-} &= \{ q\in S_{5} : \omega < 0 \} \\
S_{6}^{v+} &= \{ q\in S_{6} : v > 0 \} \\
S_{6}^{v-} &= \{ q\in S_{6} : v < 0 \} \\
L_{v}^{+} &= \{ q\in L_v : v > 0 \} \\
L_{v}^{-} &= \{ q\in L_v : v < 0 \} \\
L_{\omega}^{+} &= \{ q\in L_{\omega} : \omega > 0 \} \\
L_{\omega}^{-} &= \{ q\in L_{\omega} : \omega < 0 \}
\label{eq:subsetsforfb}
\end{align}
The feedback law corresponding to $\Gamma_1$ is
\begin{dmath}
u_{fb1}(q) =
\begin{cases}
(+u_m,+u_m) &\mbox{ if } q \in \Omega_4^{v-} \cup L_v^{-} \cup S_{6}^{v-} \\
(+u_m,-u_m) &\mbox{ if } q \in \Omega_2 \cup L_{\omega}^{-} \cup S_5^{\omega-} \\
(-u_m,+u_m) &\mbox{ if } q \in \Omega_1 \cup L_{\omega}^{+} \cup S_5^{\omega+}\\
(-u_m,-u_m) &\mbox{ if } q \in \Omega_4^{v+} \cup L_v^{+} \cup S_{6}^{v+} \\
(0,0) &\mbox{ if } q=(0,0,0)
\end{cases}
\label{eq:statebasedcontrol1}
\end{dmath}
The feedback law corresponding to $\Gamma_2$ is
\begin{dmath}
u_{fb2}(q) =
\begin{cases}
(+u_m,+u_m) &\mbox{ if } q \in L_v^{-} \cup S_{6}^{v-} \\
(+u_m,-u_m) &\mbox{ if } q \in \Omega_2 \cup \Omega_4^{\omega-} \cup L_{\omega}^{-} \cup S_5^{\omega-} \\
(-u_m,+u_m) &\mbox{ if } q \in \Omega_1 \cup \Omega_4^{\omega+} \cup L_{\omega}^{+} \cup S_5^{\omega+}\\
(-u_m,-u_m) &\mbox{ if } q \in L_v^{+} \cup S_{6}^{v+} \\
(0,0) &\mbox{ if } q=(0,0,0)
\end{cases}
\label{eq:statebasedcontrol2}
\end{dmath}
Controls \eqref{eq:statebasedcontrol1} and \eqref{eq:statebasedcontrol2} differ when $(v,\theta,\omega) \in \Omega_4$. The first one is such that the resulting trajectory intersects the surface $H_1(v,\theta,\omega) = 0$ and the second one has a resulting trajectory which intersects the surface $H_2(v,\theta,\omega) = 0$.
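A sketch of the feedback law \eqref{eq:statebasedcontrol1} in this spirit (in Python; the values of $\alpha$, $\beta$ and $u_m$ are assumed, and a tolerance stands in for exact set membership) is given below.
\begin{verbatim}
alpha, beta, u_m = 0.5, 1.0, 1.0  # assumed values

def u_fb1(v, theta, w, tol=1e-9):
    h1 = 2 * alpha * theta + w * abs(w)
    h2 = w * abs(w) / (2 * alpha) + theta + w * abs(v) / beta
    on1, on2 = abs(h1) < tol, abs(h2) < tol
    if on1 and on2:                 # on L_v, L_omega, or at the origin
        if abs(w) > tol:            # L_omega: alpha modes
            return (-u_m, +u_m) if w > 0 else (+u_m, -u_m)
        if abs(v) > tol:            # L_v: beta modes
            return (-u_m, -u_m) if v > 0 else (+u_m, +u_m)
        return (0.0, 0.0)           # origin
    if on1:                         # S5: alpha mode chosen by sign(w)
        return (-u_m, +u_m) if w > 0 else (+u_m, -u_m)
    if on2:                         # S6: beta mode chosen by sign(v)
        return (-u_m, -u_m) if v > 0 else (+u_m, +u_m)
    if h1 < 0 and h2 < 0:           # Omega1 -> alpha^-
        return (-u_m, +u_m)
    if h1 > 0 and h2 > 0:           # Omega2 -> alpha^+
        return (+u_m, -u_m)
    # Omega4: beta mode chosen by sign(v)
    return (-u_m, -u_m) if v > 0 else (+u_m, +u_m)
\end{verbatim}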
Note that the closed loop system can be viewed as a continuous system with a discontinuous control input. This results in a right-hand side which is continuous except for a measure-zero set $M$. For such systems, one can define solutions in multiple ways~\cite{Cortes08}, including Filippov and Caratheodory solutions. In order to take advantage of a right-uniqueness theorem in~\cite{Filippov88}, we utilize definition a) in $\S$4. We define the set-valued map $F(t,q)$ for each $t \in \mathbb{R} $ and $q \in \mathbb{R}^3$ as the smallest convex closed set containing the limit values of the vector valued function $f(t,q^*)$ for $(t,q^*) \notin M$, $q^* \rightarrow q$, and constant $t$. A solution of the closed loop system is defined to be a solution of the differential inclusion
\begin{dmath}
\dot{q} \in F(t,q)
\label{eq:diffincl}
\end{dmath}
Furthermore, we are concerned with the notion of right-uniqueness of the solutions of the closed-loop system (see 1, $\S$10 in~\cite{Filippov88}). For equation $\dot{q} = f(t,q)$, right uniqueness holds at a point $(t_0,q_0)$ if there exists $t_1 > t_0$ such that each two solutions $q(t)$ of this equation satisfying $q(t_0) = q_0$ coincide on the interval $[t_0,t_1]$ or on the part of the interval on which both solutions are defined.
\begin{lem}
Consider the feedback law \eqref{eq:statebasedcontrol1} for the system \eqref{eq:dynamicalsystem}. For every initial condition $q_0 \in \mathbb{R}^3$, the solutions of the closed-loop system correspond to the unique controlled trajectory in $\Gamma_1$ corresponding to $q_0$.
\label{lem:prooffb1}
\end{lem}
\begin{pf}
The feedback system $\dot{q} = f(q,u)$ is converted into the closed loop system
\begin{equation}
\dot{q} = g(q)
\label{eq:closedloop1}
\end{equation}
\noindent by use of feedback \eqref{eq:statebasedcontrol1}. The vector field $g$ is discontinuous on the surfaces $H_1(q) = 0$ and $H_2(q) = 0$. It is easy to show that the unique extremal solution for any initial condition $q_0 \in \mathbb{R}^3$ is a solution of the closed loop system $\dot{q} = g(q)$ based on the differential inclusion \eqref{eq:diffincl}. If we show that the solutions of \eqref{eq:closedloop1} are unique for any initial condition, then we have proved the lemma.
The right uniqueness of the solutions of \eqref{eq:closedloop1} can be determined based on Theorems 2 and 4, $\S$10 in ~\cite{Filippov88}. Theorem 2 provides conditions which determine when the solutions to a system $\dot{q} = f(t,q)$ defined on a domain $G$ (where $f$ is discontinuous on a surface $S$ of codimension $1$) are (right) unique. Theorem 4 provides conditions which guarantee that solutions evolving along the intersection of multiple such surfaces are unique.
In order to apply Theorem 4, $\S$10 in ~\cite{Filippov88}, we must partition $\mathbb{R}^3$ appropriately (see Appendix \ref{app:filippv}) and show that the vector fields defined on these partitions meet certain conditions. Furthermore, the solutions of the discontinuous system must be, loosely speaking, compatible with this partition. The partition is based on the surfaces $H_1(q)=0$ and $H_2(q) = 0$ as follows. These surfaces divide $\mathbb{R}^3$ into six regions, and intersect along the two lines $L_v$ and $L_\omega$. In turn, these two lines intersect at the origin, and divide the surfaces into four regions.
First, consider the following subsets of $S_5$:
\begin{align}
S_5^{++} &= \{ q \in \mathbb{R}^3 \colon H_1(q) = 0, \omega>0, v >0\}\\
S_5^{+-} &= \{ q \in \mathbb{R}^3 \colon H_1(q) = 0, \omega>0, v <0\}\\
S_5^{-+} &= \{ q \in \mathbb{R}^3 \colon H_1(q) = 0, \omega<0, v >0\}\\
S_5^{--} &= \{ q \in \mathbb{R}^3 \colon H_1(q) = 0, \omega<0, v <0\}
\end{align}
\noindent and the following subsets of $S_{6}$:
\begin{align}
S_{6}^{++} &= \{ q \in \mathbb{R}^3 \colon H_2(q) = 0, \omega>0, v >0\}\\
S_{6}^{+-} &= \{ q \in \mathbb{R}^3 \colon H_2(q) = 0, \omega>0, v <0\}\\
S_{6}^{-+} &= \{ q \in \mathbb{R}^3 \colon H_2(q) = 0, \omega<0, v >0\}\\
S_{6}^{--} &= \{ q \in \mathbb{R}^3 \colon H_2(q) = 0, \omega<0, v <0\}
\end{align}
Condition 1) is satisfied immediately, since the solutions, which are extremals, have only two points of switching. Condition 2) is met by applying Theorem 2, $\S$10, \cite{Filippov88} to the hypersurfaces. Condition 3) is satisfied by proper construction of the hypersurfaces $S_i^k$.
\end{pf}
\begin{lem}
Consider the feedback law \eqref{eq:statebasedcontrol2} for the system \eqref{eq:dynamicalsystem}. For every initial condition $q_0 \in \mathbb{R}^3$, the solutions of the closed-loop system correspond to the unique controlled trajectory in $\Gamma_2$ corresponding to $q_0$.
\end{lem}
\begin{rem}
The case when $\omega_d \neq 0$ is treated in the appendix. The total durations of all valid extremals resulting in a transition from any $q_0$ to any $q_d$ have been derived, along with the motor switching times. No feedback law is developed, unlike the case when $\omega_d = 0$.
\end{rem}
\subsection{Simulations}
For any initial condition, we can compute the time-optimal control using the method above, and simulate the open-loop implementation of this control. The results for six initial conditions are plotted in the plane $v = 0$ in Figure~\ref{fig:phase}. For all plotted trajectories, $v(0) = 1\,m/s$. The circle indicates the initial values of $\theta$ and $\omega$ for each trajectory. The time-optimal control for the initial condition $(1\,m/s, 4\,rad, -2\,rad/s)$ is a $C1_{ns}$ control. The time-optimal controls for the remaining initial conditions are $C2$ controls. The open-loop controls result in all trajectories reaching the origin, as can be seen in Figure~\ref{fig:phase}.
\begin{figure}[tb]
\centering
\includegraphics[width=0.4\textwidth]{shortphase6.pdf}
\caption{Time-optimal trajectories for different initial conditions plotted in the $\theta-\omega$ plane. For all simulations, $v(0)=1 m/s$. The initial and final values of $(\theta,\omega)$ are marked by circles and squares respectively. Trajectories corresponding to control phases $\beta^{\pm}$ are represented by dashed-dotted lines and those for $\alpha^{\pm}$ are represented by dashed or dotted lines.}
\label{fig:phase}
\end{figure}
For the same initial conditions, instead of implementing open-loop controls expressed as functions of time, we can use the state-based feedback controls \eqref{eq:statebasedcontrol1} and \eqref{eq:statebasedcontrol2}. The results are plotted in Figures \ref{fig:simfb1} and \ref{fig:simfb2} respectively. We can see that the closed-loop trajectories plotted in Figure \ref{fig:simfb1} are identical to the time-optimal trajectories plotted in Figure \ref{fig:phase}. The difference between the two feedback control laws can be seen in the resulting closed-loop trajectory for the initial condition $(1\,m/s, 4\,rad, -2\,rad/s)$. The first trajectory leaves the region $\Omega_4$ by reaching the surface $H_1(v,\theta,\omega) = 0$ (see Figure \ref{fig:simfb1}) while the second trajectory instead reaches the surface $H_2(v,\theta,\omega) = 0$ upon leaving $\Omega_4$ (see Figure \ref{fig:simfb2}).
\begin{figure}[tb]
\centering
\includegraphics[width=0.4\textwidth]{simfb1.pdf}
\caption{Closed loop trajectories using \eqref{eq:statebasedcontrol1} for initial conditions in Figure \ref{fig:phase} (red circles) plotted in the $\theta-\omega$ plane. All trajectories reach the origin. These trajectories are identical to the open-loop time-optimal trajectories in Figure \ref{fig:phase}.}
\label{fig:simfb1}
\end{figure}
\begin{figure}[tb]
\centering
\includegraphics[width=0.4\textwidth]{simfb2.pdf}
\caption{Closed loop trajectories using \eqref{eq:statebasedcontrol2} for initial conditions in Figure \ref{fig:phase} (red circles) plotted in the $\theta-\omega$ plane. All trajectories reach the origin.}
\label{fig:simfb2}
\end{figure}
\subsection{Synthesis when $\omega(0) = 0$}
\label{ssec:synthwzero}
We are interested in target states where the robot has some desired velocity $\mathbf{v}_d$ in the plane. This corresponds to a desired forward speed $v_d = \| \mathbf{v}_d \|$ and orientation $\theta_d$, with zero angular velocity. In this subsection, we will focus on initial states where the robot angular velocity $\omega(0)$ is zero. Due to the fact that $f(q,u) = f(q + [v,\theta,0]^T,u)$, we can change coordinates such that the target state is $(0,0,0)$. The initial state $(v(0),\theta(0),0)$ becomes $(v_0,\theta_0,0)$ in the new coordinates. Thus, we are interested in transitioning from $(v_0,\theta_0,0)$ to the origin $(0,0,0)$, the latter corresponding to $(v_d,\theta_d,0)$ in the original coordinates. Clearly, $v_0 = v(0) - v_d$ and $\theta_0 = \theta(0) - \theta_d$.
Assume that $\theta_0 = 0$. In order to avoid the trivial case, we must have $v_0 \neq 0$. Clearly, all we need to do is change the forward speed at the fastest possible rate to reach the origin. Thus, the extremal control is simply $\beta^{+}$ or $\beta^{-}$ for $t = \frac{|v_0|}{\beta}$ seconds.
Note that the extremal control is of the form $C1_{ns}$.
Suppose that $\theta_0 \neq 0$. Since $\omega_0 = 0$, in order to change the robot's heading from $\theta_0$, we must increase or decrease the angular velocity. However, since we wish to end with zero angular velocity, we must also decelerate by applying the opposite torques. This means that each motor switches exactly once, and hence we expect the extremal control that achieves the desired change in state to be of the form $C2a$.
Since there are two switches, we can divide the interval $[0,t_f]$ into three sub-intervals of length $t_1$, $t_2$ and $t_3$, where the motors switch at time instants $t_1$ and $\bar{t}_2 = t_1+t_2$ respectively. During the first and third intervals, the control is of the form $u_1 = -u_2$. This does not specify what the motor torques are, but merely that they are opposite in sign ($\alpha^{-}$ or $\alpha^{+}$). During the second interval, the control is of the form $u_1 = u_2$ ($\beta^{+}$ or $\beta^{-}$).
The total duration of the trajectory is $\bar{t}_3 = t_1 + t_2 + t_3$. We can compute the final state at $\bar{t}_3$ due to a $C2a$ control through straightforward integration of the equations of motion as follows:
\begin{dgroup}
\begin{dmath}
\theta(\bar{t}_3) = \frac{s_1 \alpha t_1^2}{2}+ s_1 \alpha t_1 t_2 + s_1 \alpha t_1 t_3 -\frac{s_1 \alpha t_3^2}{2} + \theta_0
\label{eq:wzerothetat3}
\end{dmath}
\begin{dmath}
\omega(\bar{t}_3) = s_1 \alpha t_1 - s_1 \alpha t_3
\label{eq:wzerowt3}
\end{dmath}
\begin{dmath}
v(\bar{t}_3) = s_2 \beta t_2 +v_0
\label{eq:wzerovt3}
\end{dmath}
\label{eq:wzero}
\end{dgroup}
\noindent where $s_1 \in \{1,-1\}$ determines whether the first phase is $\alpha^{-}$ ($s_1 = -1$) or $\alpha^{+}$ ($s_1 = 1$), and $s_2 \in \{1,-1\} $ determines whether the second phase is $\beta^{+}$ ($s_2 = 1$) or $\beta^{-}$ ($s_2 = -1$). We can set $v(\bar{t}_3)=0$ in \eqref{eq:wzerovt3}, which results in the conclusion that
\begin{dgroup}
\begin{dmath}
t_2 = \frac{| v_0 |}{\beta}
\label{eq:wzerot2}
\end{dmath}
\begin{dmath}
s_2 = -\mysign{v_0}
\label{eq:wzeros2}
\end{dmath}
\label{eq:wzerot2s2}
\end{dgroup}
Similarly, setting $\omega(\bar{t}_3)=0$ in \eqref{eq:wzerowt3} yields
\begin{dmath}
t_3 = t_1
\label{eq:wzerot3eqt1}
\end{dmath}
We can substitute \eqref{eq:wzerot2} and \eqref{eq:wzerot3eqt1} in \eqref{eq:wzerothetat3} along with the fact that we want $\theta(\bar{t}_3)=0$ to obtain
\begin{dmath}
s_1 \alpha t_1^2 + \frac{s_1 \alpha |v_0|}{\beta} t_1 + \theta_0 = 0
\label{eq:wzerot1quadratic}
\end{dmath}
\noindent for which the solutions are
\begin{equation}
t_{1,i} = \frac{1}{2 \alpha} \bigpar{ -\frac{\alpha |v_0|}{\beta} + (-1)^i \sqrt{\frac{\alpha^2 |v_0|^2}{\beta^2} - \frac{4 \alpha \theta_0}{s_1} } }
\end{equation}
\noindent for $i \in \{1,2\}$. A non-negative solution always exists when choosing $s_1 = -\mysign{\theta_0}$, and is given by
\begin{dmath}
t_1 = \frac{1}{2 \alpha} \sqrt{\frac{\alpha^2 |v_0|^2}{\beta^2} + 4 \alpha |\theta_0 | } -\frac{|v_0|}{2 \beta}
\end{dmath}
We can then compute the total time $\bar{t}_3 = t_1 + t_2 + t_3$ as
\begin{dmath}
\bar{t}_3 = \frac{1}{\alpha} \sqrt{\frac{\alpha^2 |v_0|^2}{\beta^2} + 4 \alpha |\theta_0 | }
\end{dmath}
The switching times for the motors are $t_1$ and $t_1 + t_2$ respectively. The phases of the motor torques are determined by $v_0$ and $\theta_0$, as described in Table \ref{tab:motorsequences}.
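For concreteness, the switching-time computation derived above can be sketched in a few lines of Python (a minimal illustration; the function and variable names are ours and are not part of the derivation):
\begin{verbatim}
import math

def c2a_times(v0, theta0, alpha, beta):
    """Phase durations and signs of the C2a extremal when omega(0) = 0.

    Returns (t1, t2, t3, s1, s2); the degenerate cases theta0 = 0 or
    v0 = 0 simply yield zero-length phases.
    """
    s1 = -math.copysign(1.0, theta0) if theta0 != 0 else 0.0
    s2 = -math.copysign(1.0, v0) if v0 != 0 else 0.0
    t2 = abs(v0) / beta
    # positive root of alpha*t1^2 + (alpha*|v0|/beta)*t1 - |theta0| = 0
    t1 = (math.sqrt((alpha * v0 / beta) ** 2 + 4.0 * alpha * abs(theta0))
          - alpha * abs(v0) / beta) / (2.0 * alpha)
    return t1, t2, t1, s1, s2
\end{verbatim}
The total duration $2 t_1 + t_2$ reproduces the expression for $\bar{t}_3$ given above.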
\begin{table}
\centering
\caption{Control phases of extremals when $\omega(0) = 0$. A blank entry indicates that the corresponding phase is non-existent.}
\begin{tabular}{ccccc}
\hline
$\theta_0$ & $v_0$ & $ 0 < t <t_1 $ & $t_1 < t <\bar{t}_2 $ & $ \bar{t}_2 <t < \bar{t}_3$ \\
\hline
$<0$ & $ < 0 $ & $\alpha^{+}$ & $\beta^{+}$ & $\alpha^{-} $ \\
$<0$ & $ > 0 $ & $\alpha^{+}$ & $\beta^{-}$ & $\alpha^{-} $ \\
$<0$ & $ = 0 $ & $\alpha^{+}$ & - & $\alpha^{-} $ \\
$>0$ & $ < 0 $ & $\alpha^{-}$ & $\beta^{+}$ & $\alpha^{+} $ \\
$>0$ & $ > 0 $ & $\alpha^{-}$ & $\beta^{-}$ & $\alpha^{+} $ \\
$>0$ & $ = 0 $ & $\alpha^{-}$ & - & $\alpha^{+} $ \\
$=0$ & $ < 0 $ & - & $\beta^{+}$ & - \\
$=0$ & $ > 0 $ & - & $\beta^{-}$ & - \\
\hline
\end{tabular}
\label{tab:motorsequences}
\end{table}
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{threeDphase.pdf}
\caption{Three optimal trajectories starting from the purple dots, marked by colored lines. The first trajectory consists of the green, blue and red curves. The second trajectory corresponds to the orange curve, for which there is no desired change in linear speed. The third trajectory corresponds to the case when $\theta_0=0$, and is represented by a cyan line. For the initial condition where $\theta_0 < 0$, the first phase of the trajectory represented by the green line lies in the (vertical) green plane $v = v_0$. The second phase of the trajectory represented by the blue line lies in a (horizontal) blue plane $\omega = c$, where $c$ is some constant. The third phase of the trajectory represented by the red line lies in the (vertical) red plane $v = 0$, as does the orange trajectory.}
\label{fig:maybe3d2}
\end{figure}
We can plot a sketch of these extremals in $\mathbb{R}^3$, as done in Figure \ref{fig:maybe3d2}. Three extremals starting from the initial conditions marked by the three purple dots are seen in this figure. All three initial conditions are such that $\omega(0) = 0$. For the initial condition where $\theta_0 < 0$, the target linear speed is different from the initial linear speed. The trajectory consists of a sequence of phases $\alpha^{+}$, $\beta^{+}$ (or $\beta^{-}$), and $\alpha^{-}$. These are indicated by the green, blue and red lines respectively. For the initial condition where $\theta_0 > 0$, the target linear speed is identical to the initial linear speed. As such, there are only two control phases: $\alpha^{-}$ and $\alpha^{+}$ (equivalently, $t_2 = \frac{| v_0 |}{\beta} = 0$). The case where only the linear speed needs to be changed is indicated by the cyan line.
These same trajectories are projected onto the $\theta-\omega$ and $\theta -v$ planes in Figures \ref{fig:omegathetaplane} and \ref{fig:vthetaplane} respectively for more clarity. In Figure \ref{fig:omegathetaplane}, the dashed line corresponds to points from which a single control phase $\alpha^{-}$ or $\alpha^{+}$ would be sufficient to reach the origin. The case when $\theta_0 = 0$ is plotted as a cyan dot at the origin in the $\theta - \omega$ plane. Note that for the case when $v_0 = 0$, the problem reduces to time-optimal control of the double integrator, with the target state being the origin \cite{Schattler}.
\begin{figure}
\centering
\begin{tikzpicture}[scale = 1]
\draw[->] (-4,0) -- (4,0);
\draw[->] (0,-2.5) -- (0,2.5);
\draw (4cm,0cm) node[anchor = north east]{$\theta$};
\draw (0cm,2.5cm) node[anchor = north east]{$\omega$};
\begin{scope}[rotate=90]
\draw[dashed,thick] (0,0) parabola (2,4);
\draw[dashed,thick] (0,0) parabola (-2,-4);
\draw[-<-,blue,thick] (1,1) -- (1,3) ;
\end{scope}
\draw[->-,shift={(-4cm,0cm)},rotate=90,green,thick, domain=0:1] plot (\x, {-\x*\x});
\draw[-<-,rotate=90,red,thick, domain=0:1] plot (\x, {\x*\x});
\draw[-<-,shift={(0cm,0cm)},rotate=-90,orange,thick, domain=0:1.2] plot (\x, {\x*\x});
\draw[->-,shift={(2.88cm,0cm)},rotate=-90,orange,thick, domain=0:1.2] plot (\x, {-\x*\x});
\draw[color = purple,line width = 0.1mm,fill = purple] (-4,0) circle (0.05cm);
\draw[color = purple,line width = 0.1mm,fill = purple] (2.88cm,0) circle (0.05cm);
\draw[color = cyan,line width = 0.1mm,fill = cyan] (0,0) circle (0.05cm);
\end{tikzpicture}
\caption{Projection of the trajectories in Figure \ref{fig:maybe3d2} on to the plane $v=0$. The trajectory which corresponds to the case when $\theta_0=0$ gets projected to a point at the origin, represented by the cyan dot. The dashed curve represents points $(0,\theta,\omega)$ which would reach the origin if only the control $\alpha^{-}$ or $\alpha^{+}$ was used, for a suitable finite time period.}
\label{fig:omegathetaplane}
\end{figure}
\begin{figure}
\centering
\begin{tikzpicture}[scale = 1]
\draw[->] (-4,0) -- (4,0);
\draw[->] (0,-2.5) -- (0,2.5);
\draw (4cm,0cm) node[anchor = north east]{$\theta$};
\draw (0cm,2.5cm) node[anchor = north east]{$v$};
\draw[->-,blue,thick] (-3,-2) -- (-1,0) ;
\draw[->-,green,thick] (-4,-2) -- (-3,-2) ;
\draw[->-,red,thick] (-1,0) -- (0,0) ;
\draw[->-,orange,thick] (3,0) -- (1.5,0) ;
\draw[->-,orange,thick] (1.5,0)-- (0,0) ;
\draw[->-,cyan,thick] (0,1.5)-- (0,0) ;
\draw[color = purple,line width = 0.1mm,fill = purple] (-4,-2) circle (0.05cm);
\draw[color = purple,line width = 0.1mm,fill = purple] (3cm,0) circle (0.05cm);
\draw[color = purple,line width = 0.1mm,fill = purple] (0,1.5) circle (0.05cm);
\end{tikzpicture}
\caption{Projection of the trajectories in Figure \ref{fig:maybe3d2} on to the plane $\omega=0$.}
\label{fig:vthetaplane}
\end{figure}
In summary, for the case when the robot must change from moving with one constant velocity in the plane to some other desired velocity in the plane, the extremal control is of the form $C2a$ unless the desired heading is the same as the initial heading, in which case the extremal control is of the form $C1_{ns}$. The switching times for the $C2a$ control have been derived.
\section{Extremals for goal states with non-zero angular velocity}
\label{app:wdnonzero}
We have presented a suitable feedback law that drives the differential drive robot from any initial forward speed $v$, angular velocity $\omega$ and heading $\theta$ to a desired forward speed $v_d$ and heading $\theta_d$, where the desired angular velocity is zero. The feedback law was derived after analyzing the set of extremals from any initial conditions that reached the desired goal.
In what follows, given initial and goal states, we can determine whether a $C1_{ns}$ or $C2$ control exists that results in a transition from the initial state to the goal state.
Consider an initial condition $(v,\theta,\omega)$. We can apply a control consisting of a sequence of three control phases of duration $t_1$, $t_2$ and $t_3$ respectively. Note that any of $t_1$, $t_2$ and $t_3$ may be zero. The first control phase consists of $\alpha^{+}$ or $\alpha^{-}$, with control $\alpha^{+}$ or $\alpha^{-}$ in the third phase. The second phase consists of $\beta^{+}$ or $\beta^{-}$ control.
We can solve for the time evolution of the state under such a control quite easily, as follows.
\begin{dgroup}
\begin{dmath}
v (\bar{t}_3) = v + s_2 \beta t_2
\end{dmath}
\begin{dmath}
\theta(\bar{t}_3) = \theta +\omega t_1 + \frac{1}{2} s_1 \alpha t_1^2+ \omega(t_1) t_2 + \omega(\bar{t}_2) t_3 + \frac{1}{2} s_3 \alpha t_3^2
\end{dmath}
\begin{dmath}
\omega(\bar{t}_3) = \omega + s_1 \alpha t_1 + s_3 \alpha t_3
\end{dmath}
\end{dgroup}
\noindent where $s_1,s_2, s_3 \in \{1,-1\}$ and $\bar{t}_2 = t_1 + t_2$, $\bar{t}_3 = t_1+t_2+t_3$. If the first (third) control phase is $\alpha^{+}$, then $s_1 = 1$ ($s_3 = 1$) , otherwise $s_1 = -1$ ($s_3 = -1$). If the second control phase is $\beta^{+}$, then $s_2 = 1$, otherwise $s_2=-1$.
We wish to solve for $t_1$, $t_2$, $t_3$ such that
\begin{dgroup*}
\begin{dmath*}
{t_1 \geq 0, \hspace{5mm} t_2 \geq 0, \hspace{5mm} t_3 \geq 0}
\end{dmath*}
\begin{dmath*}
{v(\bar{t}_3) = 0, \theta(\bar{t}_3) = 0, \omega(\bar{t}_3) = \omega_d}
\end{dmath*}
\end{dgroup*}
We can immediately solve for $t_2$ and $s_2$:
\begin{dmath}
t_2 = \frac{-v}{s_2 \beta}\\
= \frac{|v|}{\beta}
\label{eq:t2}
\end{dmath}
\noindent where $s_2 = \mysign{-v}$. We can then express $t_3$ in terms of $t_1$ as
\begin{dmath}
t_3 = -\frac{s_1}{s_3} t_1 + \frac{\omega_d - \omega}{s_3 \alpha}
\label{eq:t3}
\end{dmath}
We can substitute this into the expression for $\theta(\bar{t}_3)$ to obtain
\begin{dmath}
\theta(\bar{t}_3) = \theta(\bar{t}_2)+ \omega(\bar{t}_2) t_3 + \frac{1}{2} s_3 \alpha t_3^2\\
= \theta(t_1) + \omega(t_1) t_2+ \omega(\bar{t}_2) t_3 + \frac{1}{2} s_3 \alpha t_3^2\\
= \theta + \omega t_1 + \frac{1}{2} s_1 \alpha t_1^2 + \omega(t_1) t_2+ \omega(\bar{t}_2) t_3 + \frac{1}{2} s_3 \alpha t_3^2\\
= \theta + \omega t_1 + \frac{1}{2} s_1 \alpha t_1^2 + \left( \omega + s_1 \alpha t_1 \right) \frac{|v|}{\beta}+ \left(\omega + s_1 \alpha t_1 \right) \left( -\frac{s_1}{s_3} t_1 + \frac{\omega_d - \omega}{s_3 \alpha} \right) + \frac{1}{2} s_3 \alpha \left( -\frac{s_1}{s_3} t_1 + \frac{\omega_d - \omega}{s_3 \alpha} \right)^2\\
= \frac{s_1-s_3}{2} \alpha t_1^2 + \bigpar{\omega \frac{(s_3-s_1)}{s_3} + \frac{s_1 \alpha |v|}{\beta}} t_1 + \bigpar{ \theta + \frac{\omega |v|}{\beta}} + \frac{\omega_d^2 - \omega^2}{2 s_3 \alpha}
\end{dmath}
Setting $\theta(\bar{t}_3)$ equal to the desired value of zero, we obtain
\begin{dmath}
\frac{s_1-s_3}{2} \alpha t_1^2 + \bigpar{ \omega \frac{(s_3-s_1)}{s_3} + \frac{s_1 \alpha |v|}{\beta}} t_1 + \bigpar{ \theta + \frac{\omega |v|}{\beta}} + \frac{\omega_d^2 - \omega^2}{2 s_3 \alpha} = 0
\label{eq:t1}
\end{dmath}
If $s_1 = -s_3$ then the equation is quadratic with solutions
\begin{dgroup}
\begin{dmath}
t_1 = \frac{1}{2 s_1 \alpha} \bigpar{ - \bigpar{2 \omega + \frac{s_1 \alpha | v | }{\beta}} \pm \sqrt{\Delta} }
\end{dmath}
\begin{dmath}
t_3 = \frac{1}{2 s_1 \alpha} \bigpar{ - \bigpar{2 \omega_d + \frac{s_1 \alpha | v | }{\beta}} \pm \sqrt{\Delta} }
\end{dmath}
\begin{dmath}
\Delta = 2 \omega^2 + 2 \omega_d^2 + \frac{\alpha^2 | v |^2 }{\beta^2} - 4 s_1 \alpha \theta
\end{dmath}
\label{eq:alphabetaalphaquadsol}
\end{dgroup}
Thus, there are four solutions for $t_1$ and $t_3$ (two for $s_1=1$ and two for $s_1=-1$). Not all solutions may be such that both $t_1$ and $t_3$ are non-negative. The total time for any valid solution can be computed to be
\begin{dmath}
\bar{t}_3 = t_1 + t_2 + t_3\\
= \frac{\pm \sqrt{\Delta} - (\omega_d + \omega)}{s_1 \alpha}
\end{dmath}
Instead, if $s_1 = s_3$, then \eqref{eq:t1} reduces to a linear equation. The solution is
\begin{dgroup}
\begin{dmath}
t_1 = \frac{\beta (\omega^2-\omega_d^2)}{2 \alpha^2 |v|} - \frac{\theta \beta}{s_1 \alpha |v| } - \frac{\omega}{s_1 \alpha}
\end{dmath}
\begin{dmath}
t_3 = \frac{\beta (\omega_d^2-\omega^2)}{2 \alpha^2 |v|} + \frac{\theta \beta}{s_1 \alpha |v| } + \frac{\omega_d}{s_1 \alpha}
\end{dmath}
\begin{dmath}
s_1 = \mysign{\omega_d - \omega}
\end{dmath}
\label{eq:alphabetaalphalinsol}
\end{dgroup}
\noindent for which there is only one possible solution. Thus, we have five possible solutions for $(t_1, t_2, t_3)$.
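As a sanity check, any candidate triple $(t_1,t_2,t_3)$ can be verified by forward-integrating the piecewise dynamics. A minimal Python sketch of this integration (ours, for illustration only) is:
\begin{verbatim}
def propagate(q, s1, s2, s3, t1, t2, t3, alpha, beta):
    """Integrate the alpha-beta-alpha control from q = (v, theta, w)."""
    v, th, w = q
    th += w * t1 + 0.5 * s1 * alpha * t1 * t1   # first alpha phase
    w += s1 * alpha * t1
    v += s2 * beta * t2                         # beta phase (w constant)
    th += w * t2
    th += w * t3 + 0.5 * s3 * alpha * t3 * t3   # second alpha phase
    w += s3 * alpha * t3
    return v, th, w
\end{verbatim}
A valid solution must return $(0, 0, \omega_d)$ up to numerical precision.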
\begin{figure}[tb]
\centering
\includegraphics[width=0.4\textwidth]{wdnonzero.pdf}
\caption{Five extremals in the $\theta - \omega$ plane for initial condition $q_0 = (3,-\pi,2)$ (red square) and goal state $q_d = (0,0,2.4)$ (red circle). }
\label{fig:wdnonzero}
\end{figure}
We follow a similar procedure when the first and third control phases consist of $\beta^{+}$ or $\beta^{-}$ control, and the second phase consists of $\alpha^{+}$ or $\alpha^{-}$ control. The integrated equations are:
\begin{dgroup}
\begin{dmath}
v (\bar{t}_3) = v + s_4 \beta t_1 + s_6 \beta t_3
\end{dmath}
\begin{dmath}
\theta(\bar{t}_3) = \theta + \omega t_1 + \omega(t_1) t_2 + \frac{1}{2} s_5 \alpha t_2^2+ \omega(\bar{t}_2) t_3
\end{dmath}
\begin{dmath}
\omega(\bar{t}_3) =\omega + s_5 \alpha t_2
\end{dmath}
\end{dgroup}
\noindent where $s_4, s_5, s_6 \in \{1,-1\}$ and $\bar{t}_2 = t_1 + t_2$, $\bar{t}_3 = t_1+t_2+t_3$. If the second control phase is $\alpha^{+}$, then $s_5 = 1$, otherwise $s_5 = -1$. If the first control phase is $\beta^{+}$, then $s_4 = 1$, otherwise $s_4=-1$. Similarly, if the third control phase is $\beta^{+}$, then $s_6 = 1$, otherwise $s_6=-1$.
We can compute
\begin{dmath}
t_2 = \frac{\omega_d - \omega}{s_5 \alpha}
\end{dmath}
\noindent which implies that $s_5 = \mysign{\omega_d - \omega}$.
Setting $v (\bar{t}_3) =0$ and $\theta(\bar{t}_3) = 0$, we obtain
\begin{dgroup}
\begin{dmath}
t_1 = \bigpar{\omega - \frac{s_4}{s_6} \omega_d}^{-1} \bigpar{\frac{\omega_d v}{s_6 \beta} + \frac{\omega^2 - \omega_d^2}{2 s_5 \alpha} - \theta}
\end{dmath}
\begin{dmath}
t_3 = \bigpar{\omega_d - \frac{s_6}{s_4} \omega}^{-1} \bigpar{\frac{\omega v}{s_4 \beta} + \frac{\omega^2 - \omega_d^2}{2 s_5 \alpha} - \theta}
\end{dmath}
\label{eq:finaltimesc1}
\end{dgroup}
The total time taken for such a solution is
\begin{dmath}
\bar{t}_3 = \begin{cases} -\frac{v}{s_4 \beta} + \frac{|\omega_d - \omega|}{\alpha} & \mbox{ if } s_4 s_6 = 1\\
\frac{\omega-\omega_d}{\omega+\omega_d} \frac{v}{s_4 \beta} - \frac{2 \theta}{\omega+\omega_d} & \mbox{ if } s_4 s_6 = -1
\end{cases}
\end{dmath}
Thus, given $(v,\theta, \omega)$, we can compute \eqref{eq:alphabetaalphaquadsol} for $s_1 \in \{1,-1 \}$, \eqref{eq:alphabetaalphalinsol} which has only one possible solution, and \eqref{eq:finaltimesc1} for the four possible values of the pair ($s_4$,$s_6$). This results in nine values of the triplet $(t_1,t_2,t_3)$. The control corresponding to the least value of $t_1+t_2+t_3$, among those for which all three durations are non-negative, is selected as the time-optimal control.
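The resulting search over the nine candidates can be sketched in Python as follows (a compact illustration of the procedure just described; the helper names are ours, and degenerate denominators are simply skipped):
\begin{verbatim}
import math

def sgn(x):
    return (x > 0) - (x < 0)

def candidates(v, th, w, wd, al, be):
    """Candidate (t1, t2, t3) triples from the three solution families."""
    out = []
    t2 = abs(v) / be
    for s1 in (1, -1):                 # alpha-beta-alpha, s3 = -s1
        D = 2*w*w + 2*wd*wd + (al*v/be)**2 - 4*s1*al*th
        if D < 0:
            continue
        for pm in (1, -1):
            t1 = (-(2*w + s1*al*abs(v)/be) + pm*math.sqrt(D)) / (2*s1*al)
            t3 = (-(2*wd + s1*al*abs(v)/be) + pm*math.sqrt(D)) / (2*s1*al)
            out.append((t1, t2, t3))
    if v != 0 and wd != w:             # alpha-beta-alpha, s3 = s1
        s1 = sgn(wd - w)
        t1 = be*(w*w - wd*wd)/(2*al*al*abs(v)) \
             - th*be/(s1*al*abs(v)) - w/(s1*al)
        t3 = be*(wd*wd - w*w)/(2*al*al*abs(v)) \
             + th*be/(s1*al*abs(v)) + wd/(s1*al)
        out.append((t1, t2, t3))
    s5 = sgn(wd - w)                   # beta-alpha-beta family
    t2b = abs(wd - w) / al
    R = (w*w - wd*wd)/(2*s5*al) - th if s5 else -th
    for s4 in (1, -1):
        for s6 in (1, -1):
            den1, den3 = w - (s4/s6)*wd, wd - (s6/s4)*w
            if den1 == 0 or den3 == 0:
                continue
            out.append(((wd*v/(s6*be) + R)/den1, t2b,
                        (w*v/(s4*be) + R)/den3))
    return [c for c in out if min(c) >= 0]

def time_optimal(v, th, w, wd, al, be):
    # raises ValueError if no feasible candidate exists
    return min(candidates(v, th, w, wd, al, be), key=sum)
\end{verbatim}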
Consider the initial condition $q_0 = (3~\mathrm{m/s}, -\pi~\mathrm{rad}, 2~\mathrm{rad/s})$ and goal state $q_d = (0~\mathrm{m/s}, 0~\mathrm{rad}, 2.4~\mathrm{rad/s})$ when $\alpha = 2/3$, $\beta=2/3$. There are five extremals that achieve the transition from $q_0$ to $q_d$, and only one is time-optimal. These extremals are plotted in the $\theta - \omega$ plane in Figure \ref{fig:wdnonzero}. The initial condition $q_0 = (-1~\mathrm{m/s}, -\pi~\mathrm{rad}, 4~\mathrm{rad/s})$ with goal state $q_d = (0~\mathrm{m/s}, 0~\mathrm{rad}, 4.4~\mathrm{rad/s})$ also has five extremal solutions; however, two of them are time-optimal. A future goal is to propose a feedback control law for the case when the goal angular velocity is non-zero, as was done for the case when it is zero.
\section{Torque Control}
The state is taken as $q = ( v, \theta, \omega)^T \in \mathbb{R}^3$, and its dynamics are given by \eqref{eq:ddwmrdynamics}. Note that $\theta$ is treated as a real number instead of an element of $\mathcal{S}^1$. The input space $U \subset \mathbb{R}^2$ is $[-u_m,u_m] \times [-u_m,u_m]$. We can write the dynamics in the form $\dot{q} = f(q,u)$ as shown:
\begin{dmath*}
\bmat{\dot{v}\\ \dot{\theta} \\ \dot{\omega}} = \bmat{\frac{r}{m} (u_1 + u_2) \\ \omega \\ \frac{2 r}{J_r b} ( u_1 - u_2) }
\end{dmath*}
This is a linear system
\begin{align}
\bmat{\dot{v}\\ \dot{\theta} \\ \dot{\omega}} &= \bmat{0&0&0\\0&0&1\\0&0&0} q + \bmat{\frac{r}{m}&\frac{r}{m}\\0&0\\\frac{2 r}{J_r b}&-\frac{2 r}{J_r b}} \bmat{u_1 \\ u_2} \\
&= A q + B u
\label{eq:qdynamics}
\end{align}
We first check whether time-optimal controls that change the state from any initial state to any target state exist. We can use the Filippov Existence Theorem~\cite{Hartl95,Mauder} to do this.
\begin{thm}[Filippov Existence Theorem] Consider state $q \in \mathbb{R}^n$ with dynamics $\dot{q} = f(q, u)$, where $u \in U \subset \mathbb{R}^p$ and $U$ is compact. Time-optimal solutions exist if the control system is controllable, $f(q, u)$ satisfies the linear growth condition $\| f(q, u) \| \leq c (1 + \|q\|)$ for some constant $c > 0$ and all $(q, u) \in \mathbb{R}^n \times U$, and the velocity sets $F_U(q) := \{f(q, u) \mid u \in U\}$ are convex for all $q \in \mathbb{R}^n$.
\label{thm:fet}
\end{thm}
We can now show that our system does possess time-optimal controlled trajectories between any two states:
\begin{prop} There exist time-optimal trajectories between any two states for the dynamical system \eqref{eq:qdynamics} with input space $U$.
\end{prop}
\begin{pf}
The system \eqref{eq:qdynamics} is a controllable linear system, which is trivial to check. The set of allowable inputs $U = [-u_m,u_m] \times [-u_m,u_m]$ is compact and convex. Thus, $\{B u \mid u \in U\}$ is a convex subset of $\mathbb{R}^3$, and hence $F_U(q) = \{A q + B u \mid u \in U\}$ is convex for each $q \in \mathbb{R}^3$. The norm of $f(q,u)$ can be bounded as follows:
\begin{align*}
\| A q + B u \| & \leq \| A q \| + \| B u \|\\
& \leq \| q \| + \|c\|\\
& \leq \max( \{1,\| c \| \}) ( 1 + \| q \|)
\end{align*}
\noindent where $c = u_m \bmat{\frac{2 r}{m}&0&\frac{4 r}{J_r b}}^T$, so that $\|Bu\| \leq \|c\|$ for all $u \in U$. Thus, $f(q,u)$ satisfies a linear growth condition. The system \eqref{eq:qdynamics} satisfies the conditions of Theorem \ref{thm:fet}, and hence time-optimal controls that change any initial state to any target state exist.
\end{pf}
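The controllability claim in the proof is easily verified numerically; for instance, the following Python sketch (with placeholder values for the physical constants) checks that the Kalman controllability matrix has full rank:
\begin{verbatim}
import numpy as np

r, m, Jr, b = 0.05, 1.0, 0.1, 0.2   # placeholder constants
A = np.array([[0., 0., 0.],
              [0., 0., 1.],
              [0., 0., 0.]])
B = np.array([[r/m,             r/m],
              [0.,              0.],
              [2*r/(Jr*b), -2*r/(Jr*b)]])
C = np.hstack([B, A @ B, A @ A @ B])   # [B, AB, A^2 B]
assert np.linalg.matrix_rank(C) == 3   # fully controllable
\end{verbatim}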
We can conclude that it makes sense to search for time-optimal controlled trajectories. We begin by constructing the extremals through application of the Pontryagin Maximum Principle. We assume that the extremals are defined over some compact time interval $I = [0,t_f]$. Given an extremal $(q^*(t),\psi^*(t),u^*(t))$, we refer to $q^*(t)$ as the extremal trajectory and $u^*(t)$ as the extremal control. The adjoint state dynamics are derived using $N2$. The partial derivative $\pd{H}{q}$ is computed as:
\begin{dmath}
\pd{H}{q} = \pd{\ }{q} \bigpar{ - \mu + \psi^T (A q + B u)}\\
= A^T \psi
\end{dmath}
The adjoint state dynamics are thus $\dot{\psi} = -A^T \psi$:
\begin{dmath}
\bmat{\dot{\psi}_1\\ \dot{\psi}_2 \\ \dot{\psi}_3} = \bmat{0 \\ 0 \\ -\psi_2}
\label{eq:quxnofriction}
\end{dmath}
The solution of this system given initial condition $\psi(0) = (\psi_1(0),\psi_2(0),\psi_3(0))$ is simply
\begin{dmath}
{\psi_1(t) = \psi_1(0), \psi_2(t) = \psi_2(0),\psi_3(t) = \psi_3(0) - \psi_2(0) t }
\label{eq:solnauxillary}
\end{dmath}
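The linearity of the adjoint system makes this solution easy to cross-check against the matrix exponential $\psi(t) = e^{-A^T t}\,\psi(0)$, e.g.\ with the following Python sketch (ours, for illustration):
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

A = np.array([[0., 0., 0.], [0., 0., 1.], [0., 0., 0.]])
psi0 = np.array([1.0, 2.0, 3.0])   # arbitrary initial costate
t = 0.7
psi_t = expm(-A.T * t) @ psi0
assert np.allclose(psi_t,
                   [psi0[0], psi0[1], psi0[2] - psi0[1] * t])
\end{verbatim}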
The Hamiltonian function becomes
\begin{dmath}
{H = - \mu + \psi^T f(q,u) = - \mu + \psi^T A q + \psi^T B u}\\
= - \mu +\psi_1 \bigpar{\frac{r}{ m} (u_1 + u_2)} + \psi_2 \omega + \psi_3 \bigpar{\frac{2 r}{J_r b} ( u_1 - u_2)} \\
= \bigpar{\frac{r}{m} \psi_1 + \frac{2 r}{J_r b} \psi_3 } u_1 + \bigpar{\frac{r}{m} \psi_1 - \frac{2 r}{J_r b} \psi_3 }u_2 + \psi_2 \omega - \mu
\label{eq:hamiltonian2}
\end{dmath}
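Since $H$ is affine in $u_1$ and $u_2$, maximizing it pointwise over $U$ yields bang-bang controls determined by the signs of the two bracketed switching functions, away from singular arcs on which a switching function vanishes over an interval. A minimal Python sketch of this rule (our illustration, not a complete synthesis) is:
\begin{verbatim}
import numpy as np

def extremal_control(psi, r, m, Jr, b, um):
    """u_i = um * sign(Phi_i) wherever the switching functions
    Phi_1, Phi_2 are nonzero (singular arcs need separate care)."""
    phi1 = (r / m) * psi[0] + (2 * r / (Jr * b)) * psi[2]
    phi2 = (r / m) * psi[0] - (2 * r / (Jr * b)) * psi[2]
    return um * np.sign(phi1), um * np.sign(phi2)
\end{verbatim}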
\section{Introduction}
The baryons built of four quarks and an antiquark as the lowest
Fock component, referred to as pentaquarks, are not forbidden by theory
and have been discussed ever since the appearance of the quark model
\cite{forerunners}.
The critical prediction by Diakonov, Petrov and
Polyakov \cite{DPP} has been that the lightest explicitly exotic
baryon with positive strangeness, the $\Theta^+(uudd\bar{s})$, must
be relatively light and narrow, which would have made its experimental
observation rather difficult.
Specifically, they predicted the mass $m \approx 1530$ MeV
and width $\Gamma < 15$ MeV for the $\Theta^+$, the lightest member of
the pentaquark antidecuplet, that should decay to the $nK^+$ and $pK^0$
final states. More recent theoretical analyses suggest that the
$\Theta^+$ intrinsic width may be on the order of 1 MeV or even
less \cite{width}.
Narrow peaks near 1540 MeV
in the $nK^+$ and $pK^0$ mass spectra were initially detected in
low-energy photoproduction by LEPS \cite{Nakano-old} and in the
charge-exchange reaction $K^+n \rightarrow pK^0$ by DIANA
\cite{DIANA-2003}. Subsequently, both experiments were able to confirm
their initial observations \cite{Nakano,DIANA-2007,DIANA-2010}.
Moreover, increasing the statistics of the charge-exchange reaction
allowed DIANA to directly estimate the $\Theta^+$ intrinsic width:
$\Gamma \simeq 0.4\pm0.1$ MeV \cite{DIANA-2007,DIANA-2010}.
More recently, observation of a narrow peak near 1.54 GeV in the
missing mass of the $K^0_S$ meson in the reaction
$\gamma p \rightarrow K^0_S K^0_L p$ was reported by a group from
the CLAS collaboration \cite{Amaryan}.
Other searches for the $\Theta^+$ baryon in different reactions and
experimental conditions yielded both positive and negative results,
see the review papers \cite{Burkert} and \cite{Danilov-Mizuk} and
references therein. The bulk of null results can be probably explained
by the extreme smallness of the $\Theta^+$ width that implies the
smallness of production cross-sections \cite{Diakonov}.
Azimov {\it et al.} \cite{AziGoStra} argue that the published null
results fail to rule out the existing positive evidence, and advocate
a new approach to detecting the $\Theta^+$ in hard collisions.
The charge-exchange reaction \under\ on a bound neutron, which
is investigated by DIANA and BELLE \cite{BELLE}, is particularly
interesting because it allows one to probe the $\Theta^+$ intrinsic width
in a model-independent manner. The existing data on low-energy $K^+ d$
scattering have been found to leave room for a $pK^0$ resonance with
mass near 1540 MeV, provided that its width is less than 1 MeV
\cite{Strakovsky,Cahn-Trilling,Sibirtsev-width,Gibbs,Azimov}.
An important advantage of the reaction \under\ is that the
strangeness of the final-state $pK^0_S$ system is {\it a priori} known
to be positive. In this paper, the DIANA data on the charge-exchange
reaction \charex\ are analyzed using nearly 2.5 times more statistics
than in \cite{DIANA-2003}.
\section{The experiment and the data}
The DIANA bubble chamber \cite{chamber} filled with liquid Xenon was
exposed to a separated $K^+$ beam with momentum of 850 MeV from the
10-GeV proton synchrotron at ITEP, Moscow. The density and radiation
length of the fill were 2.2 g/cm$^3$ and 3.7 cm, respectively. The
chamber had a fiducial volume of $70\times70\times140$ cm$^3$ viewed by
four optical cameras, and operated without magnetic field.
In the fiducial volume of the bubble chamber, $K^+$ momentum varies
from $\simeq730$ MeV for entering kaons to zero for those that range
out through ionization. Throughout this momentum interval, all collisions
and decays of incident $K^+$ mesons are efficiently detected.
The $K^+$ momentum at interaction point is determined from the spatial
distance between the detected vertex and the mean position of the
vertices due to decays of stopping $K^+$ mesons.
The estimate of the $K^+$ momentum based on the measured position of the
interaction vertex has been verified by detecting and reconstructing the
$K^+ \rightarrow \pi^+\pi^+\pi^-$ decays in flight, which provided an
independent estimate of the $K^+$ momentum. Charged secondaries are
identified by ionization and momentum-analyzed by their range in Xenon.
The detection efficiency for $\gamma$-quanta with $p_\gamma > 25$ MeV
is close to 100\%.
In total, some $10^6$ tracks of incident $K^+$ mesons have been
recorded on film. Scanning of the film yielded nearly 55,000 events
with visible $K^0_S$ decays, $K^0_S \rightarrow \pi^+\pi^-$ and
$K^0_S \rightarrow \pi^0\pi^0$, inside the fiducial volume of the
bubble chamber. The ratio between the numbers of detected
$K^0_S \rightarrow \pi^+\pi^-$ and $K^0_S \rightarrow \pi^0\pi^0$
decays is consistent with the ratio between the corresponding $K^0_S$
branching fractions \cite{PDG}.
These $K^0_S$ decays could be associated with primary
$K^+$Xe vertices with various multiplicities of secondary particles.
Finally, events that featured a $K^0_S \rightarrow \pi^+\pi^-$ decay,
a measurable proton with track length over some 3.5 mm, and no
additional measurable or stub-like protons in the final state, were
selected as candidates for the charge-exchange reaction \under\ free of
intranuclear rescatterings. The $K^0_S \rightarrow \pi^+\pi^-$ decays
with a large spatial angle between the decay pions,
$\Theta_{\pi\pi} > 150^{0}$, were rejected.
The selected events are then fully
measured and reconstructed in space using specially designed
stereo-projectors. In a selected event, we measure the
polar and azimuthal angles of the $K^0_S$ and proton
with respect to the $K^+$ direction, similar angles of the
$\pi^+$ and $\pi^-$ with respect to the parent $K^0_S$
direction, and the proton and pion ranges in Xenon. We additionally
measure the opening angle between the $K^0_S$ and proton directions
which allows the most accurate estimate of the $pK^0_S$ effective mass.
The momentum is estimated by range for the proton, and by kinematic
reconstruction for the $K^0_S$ using the ranges and emission angles of
decay pions. For further
rejection of $K^0_S$ mesons that may have scattered by small angles in
liquid Xenon but passed the pointback criteria, we then apply the
selection $\tau < 3\tau_0$ where $\tau$ is the $K^0_S$ measured proper
lifetime and $\tau_0$ is its tabulated mean value \cite{PDG}.
The quality of the data is best reflected by the experimental
resolution on effective mass of the $pK^0_S$ system, estimated as
$\sigma_m \simeq 3.5$ MeV by error propagation for observed events
and by a simulation.
As expected, the resolution on the $pK^0_S$ effective mass is similar
to the instrumental width of the $\Lambda \rightarrow p \pi^-$ peak,
$\sigma = 3.3\pm1.0$ MeV, previously observed in the antiproton
exposure of DIANA \cite{lambda,DIANA-2003}.
Further details on the experimental procedure may be found in
\cite{exp-procedure-1,exp-procedure-2,DIANA-2007}.
\begin{figure}[!t]
\renewcommand{\baselinestretch}{1.}
\vspace{4.5 cm}
\special{psfile=pbeam-openangle.eps
hoffset=-10
voffset=10
hscale=80
vscale=80}
\caption {(Color online)
Incident $K^+$ momentum at collision point (a) and the opening angle
between the $K^0_S$ and proton directions in lab (b). The crosses show
the simulated distributions that have been normalized to the number of
live events (see Section 3).}
\label{pbeam}
\end{figure}
The measurements have been restricted to the region
$L(K^+) > 520$ mm, where $L(K^+)$ is the length of the $K^+$ path in
liquid Xenon before the collision. (Note that there is no one-to-one
correspondence between $L(K^+)$ and $K^+$ momentum, because the original
beam momentum varied by some $\pm20$ MeV in different exposures.)
The laboratory momentum of the incident $K^+$ at collision point
is shown in Fig.~\ref{pbeam}a for all measured events of the reaction
\charex\ with $K^0_S$ and proton momenta above 155 and 165 MeV,
respectively (instrumental thresholds). The measured opening angle
between the $K^0_S$ and proton directions is shown in Fig.~\ref{pbeam}b.
The dataset comprises the data treated in our initial analysis
\cite{DIANA-2003} and the subsequent measurements. The statistics of
the charge-exchange reaction has been increased by a factor $\simeq2.5$
as compared to \cite{DIANA-2003}.
\section { The Monte-Carlo description of the data }
Rescattering of either the $K^0$ or proton in the Xenon nucleus
distorts the effective mass of the $pK^0$ system originally formed
in the charge-exchange reaction \under\ on a bound neutron.
In formulating the selection criteria for unrescattered events, we
rely on a Monte-Carlo simulation of $K^+n$ and $K^+p$ collisions in
nuclear medium. We simulate the original collision that may be either
\under, $K^+n \rightarrow K^+n$, or $K^+p \rightarrow K^+p$, and then
follow the development of the intranuclear cascade that also involves
the elastic NN reactions $np \rightarrow np$, $nn \rightarrow nn$,
$pn \rightarrow pn$, and $pp \rightarrow pp$. In order to reproduce
the experimental selections for the measured events, we then select
those simulated events that feature a final-state
$K^0 \rightarrow \pi^+\pi^-$ with lab momentum $p_K > 155$ MeV
and $\Theta_{\pi\pi} < 150^{0}$,
a proton with $p_p > 165$ MeV, and no extra
protons with $p_p > 100$ MeV which corresponds to the experimental
threshold for proton detection. On the other hand, any number of
emitted neutrons is allowed.
\begin{figure}[!t]
\renewcommand{\baselinestretch}{1.}
\vspace{9.9cm}
\special{psfile=momenta.eps
hoffset=-10
voffset=10
hscale=80
vscale=80}
\caption {(Color online)
Laboratory momenta of the $K^0$ (a) and proton (b) and the cosines
of the $K^0$ (c) and proton (d) emission angles with respect to the
beam. The crosses show the corresponding distributions of all simulated
events that have been normalized to the number of measured events.
Additionally shown by dots in (a) and (c) are the simulated spectra of
unrescattered $K^0$ mesons, and in (b) and (d) --- of unrescattered
protons.}
\label{momenta}
\end{figure}
The cross-sections of the aforementioned $KN$ and $NN$ reactions
as functions of collision energy are parametrized using the existing
data \cite{cross-section,Dover,hadronic-xsections}. We substitute
$\sigma(K^+n\rightarrow K^+n) = \sigma(K^+d\rightarrow K^+d) -
\sigma(K^+p\rightarrow K^+p)$, and invoke the isospin relations
$\sigma(K^0n\rightarrow K^0n) = \sigma(K^+p\rightarrow K^+p)$ and
$\sigma(K^0p\rightarrow K^0p) = \sigma(K^+n\rightarrow K^+n)$ for
the $K^0N$ elastic cross sections that have not been measured
directly. The effective radius
of the target nucleus is taken in the form $r = 1.25 \times A^{1/3}$ fm
where $A = 131$ for Xenon, and the neutron and proton densities are
assumed to be uniform throughout the nucleus volume. For the same nucleus,
we use a realistic form of the Fermi-momentum distribution with maximum
near 160 MeV \cite{Zhalov}. For the unbound nucleons, Pauli blocking is
approximated by the cut $p_N > 180$ MeV on nucleon momentum, and
absorption is treated according to \cite{Bernardini}.
For the real intranuclear potentials of the nucleon and the $K^+$ meson
in the Xenon nucleus, we assume $V_N = -40$ MeV and $V_K = +25$ MeV
\cite{Dover,kaon-potential}. The flux of
incident $K^+$ mesons as a function of $K^+$ momentum at collision point
is inferred from the observed distribution of $K^+$ range in Xenon before
interaction or decay, see \cite{DIANA-2003}. The experimental losses
of soft protons and $K^0_S$ mesons, that largely occur at lab momenta
below some 200 MeV, are accounted for. The experimental
uncertainties and measurement errors are included in the simulation.
The simulation adequately reproduces the proportion among the numbers
of scanned events with different multiplicities of detected protons.
\begin{figure}[!t]
\renewcommand{\baselinestretch}{1.}
\vspace{9.9cm}
\special{psfile=others.eps
hoffset=-10
voffset=10
hscale=80
vscale=80}
\caption {(Color online)
The absolute lab momentum of the $pK^0$ system (a) ;
the cosine of the $pK^0$ emission angle in lab (b) ; and
the transverse (c) and longitudinal (d) momenta of the $pK^0$ system.
The crosses show the corresponding distributions of all
simulated events that have been normalized to the number of live
events. Depicted by dots are the simulated distributions of
rescattering-free events.}
\label{others}
\end{figure}
In Figures \ref{momenta} and \ref{others}, some distributions
of measured (or live) events are compared with those of simulated events.
Here and in what follows, the total number of simulated events is
normalized to that of live events prior to analysis selections.
Laboratory momenta of the $K^0$ and proton are shown in
Figs.~\ref{momenta}a and \ref{momenta}b, and their emission angles
with respect to the incident $K^+$ --- in Figs.~\ref{momenta}c and
\ref{momenta}d. Shown by dots in Figs.~\ref{momenta}a and
\ref{momenta}c are the simulated spectra of unrescattered $K^0$
mesons (in the same event, the proton may rescatter or not).
Similarly, the dots in Figs.~\ref{momenta}b and \ref{momenta}d are
the spectra of unrescattered protons (the $K^0$ may rescatter or
not). More originally-produced $K^0$ mesons than protons
are seen to escape from the nucleus without rescattering.
On average, the rescattered $K^0$ mesons and protons have
smaller momenta and broader emission angles than the unrescattered
ones. Therefore, rejecting the $K^0$ mesons and protons that travel
in the backward hemisphere in lab will enhance the fraction of
rescattering-free events (those in which both products of the
primary \under\ collision escaped from the nucleus without
rescattering).
The quantities that describe the $pK^0$ system as a whole
are plotted in Fig.~\ref{others}. Here, the dots refer to
the rescattering-free \under\ collisions. The distributions of
rescattering-free and rescattered events have similar shapes for the
absolute lab momentum of the $pK^0$ pair (Fig.~\ref{others}a),
but very different shapes for its transverse and longitudinal
components $p_T$ and $p_L$, see Figs. \ref{others}c and \ref{others}d.
The bulk of rescattering-free events lie in the region $p_T < 300$ MeV,
whereas the rescattered events extend beyond the domain of target
Fermi motion, reaching up to some 600 MeV. On the other hand, the
simulation predicts that rescattering-free events should populate the
region $p_L > 100$ MeV unlike the rescattered ones that extend to
negative values of $p_L$. As a result, the rescattered $pK^0$ systems
are emitted at broader angles to the $K^+$ beam than the unrescattered
ones, see Fig.~\ref{others}b. Therefore, the fraction of rescattered
events will be reduced by rejecting events with large $p_T$ and small
$p_L$ of the emitted $pK^0$ system, or those emitted at broad angles
to the beam.
\section{The signal of the $\Theta^+$ baryon prior to analysis selections}
The $pK^0$ effective-mass spectrum for all measured events of
the reaction \charex, that is shown in Fig.~\ref{1dim-all-and-sele}a,
is enhanced in the region \dimass\ $\simeq 1538$ MeV.
This distribution is fitted to a Gaussian plus a background
function, constructed by scaling the simulated \dimass\ distribution
by a factor that is treated as a free parameter of the fit. (In this
and subsequent fits, the maximum-likelihood algorithm is used.)
The fitted position of the enhancement is close to
1538 MeV, and its fitted width is consistent with the simulated
experimental resolution on \dimass\ : $\sigma_m \simeq 3.5$ MeV.
As compared with our initial analysis \cite{DIANA-2003}, the fitted
signal has increased in magnitude according to the increase of the
total statistics of measured events. The same distribution is
also fitted to the background function alone, which corresponds to
the null hypothesis. (This is shown by the dashed line in
Fig.~\ref{1dim-all-and-sele}a.) The naive estimate of statistical
significance is $S/\sqrt{B} = 4.8\sigma$, where the signal $S$ and
background $B$ have been derived from the signal hypothesis alone
over the 90\% area of the Gaussian.
For formation of the putative pentaquark baryon $\Theta^+(1540)$
in the reaction \under\ on a free stationary neutron, the resonant
value of beam momentum $p(K^+)$ is $\simeq445$ MeV. For $\Theta^+$
formation on a bound neutron, $p(K^+)$ will be shifted up by some 50
MeV by the $K^+$ intranuclear potential, and smeared by Fermi motion of
the neutron target. Despite the smearing of the resonance lineshape in
$p(K^+)$ by nuclear effects, a combined analysis of the two variables may
prove to be more sensitive to $\Theta^+$ formation than the analysis of
\dimass\ alone. The scatter plot in \dimass\ and $p(K^+)$ for all live
events, shown in Fig.~\ref{2dim-all}a, is indeed enhanced in the region
\dimass\ $\simeq 1540$ MeV and $p(K^+) \simeq 500$ MeV. The corresponding
scatter plot for simulated events proves to be regular over the full area
of \dimass\ and $p(K^+)$, see Fig.~\ref{2dim-all}b.
In Fig.~\ref{2dim-all}a, the distribution of live events is fitted
to a two-dimensional Gaussian plus a background function, again
constructed by scaling the simulated distribution by a factor which
is a free parameter of the fit. The same distribution has also been
fitted to the background function alone, which corresponds to the
null hypothesis.
\begin{figure}[!b]
\renewcommand{\baselinestretch}{1.}
\vspace{15.5cm}
\special{psfile=1dim-all-and-sele.eps
hoffset=-10
voffset=10
hscale=82
vscale=82 }
\caption
{In (a), the original \dimass\ distribution is fitted to a Gaussian
plus a background function, obtained by scaling the simulated
distribution by a factor which is a free parameter of the fit.
The dashed line shows the null fit to the background function alone.
Shown and fitted in (c) and (e) are the $pK^0$ effective-mass spectra
under the selections $\Theta_K, \Theta_p < 100^0$ and $p_L > 120$ MeV,
respectively (see Section 5).
The selection $p_T < 300$ MeV is additionally applied in the
right-hand panels (b), (d), and (f).}
\label{1dim-all-and-sele}
\end{figure}
\begin{figure}[!t]
\renewcommand{\baselinestretch}{1.}
\vspace{12.5cm}
\special{psfile=2dim-all.eps
hoffset=-10
voffset=10
hscale=80
vscale=75}
\caption {(Color online)
The scatter plots in \dimass\ and $p(K^+)$ for all live (a) and
simulated (b) events. Also shown in (a) is the fit to a two-dimensional
Gaussian plus the background function. The ellipse in (a) is the 90\%
contour for the two-dimensional Gaussian.}
\label{2dim-all}
\end{figure}
The correlation parameter of the two-dimensional Gaussian (line 7
in the box in Fig.~\ref{2dim-all}a) is consistent with $\rho = 0$, as
physically expected for formation of a narrow $pK^0$ resonance.
The enhancement is centered at \dimass $\simeq 1538$ MeV and
$p(K^+) \simeq 490$ MeV, see lines 3 and 5 in the box. The rms width
of the enhancement in \dimass\ is consistent with the experimental
resolution, and that in $p(K^+)$ is $\simeq 28$ MeV (lines 4 and 6 in
the box). The observed spread of the signal in $p(K^+)$ is consistent
with the smearing of a narrow $pK^0$ resonance by nuclear effects
\cite{DIANA-2007}. The fitted signal (line 8) is in good agreement
with the one-dimensional signal in Fig.~\ref{1dim-all-and-sele}a,
but proves to be more significant: $S/\sqrt{B} = 5.1$. This is
because the fitted signal is spread over a narrower interval of
$p(K^+)$ than the nonresonant background.
\section { Applying additional selections }
\begin{figure}[!b]
\renewcommand{\baselinestretch}{1.}
\vspace{13.5cm}
\special{psfile=2dim-sele.eps
hoffset=-10
voffset=10
hscale=80
vscale=80}
\caption {(Color online)
Shown in (a) and (b) are the scatter plots in \dimass\ and $p(K^+)$
under the additional selections $\Theta_K, \Theta_p < 100^0$ and
$p_L > 120$ MeV, respectively. Either scatter
plot is fitted to a two-dimensional Gaussian plus a background
function obtained by scaling the simulated distribution under
similar selections (not shown). The ellipses are the 90\% contours
for the two-dimensional Gaussian.}
\label{2dim-sele}
\end{figure}
In order to verify that the enhancement at \dimass\ $\simeq 1538$
MeV is formed by rescattering-free events, as expected for the signal
of a narrow $pK^0$ resonance, one has to use additional selections
that reduce the fraction of rescattered events. We apply the following
selections:
\begin{enumerate}
\item
$\Theta_K < 100^0$ and $\Theta_p < 100^0$ for the $K^0$ and proton
emission angles, suggested by the distributions of these variables
shown in Figs.~\ref{momenta}c and \ref{momenta}d. This selection has
already been used in our previous papers \cite{DIANA-2003,DIANA-2007,
DIANA-2010}, and was found to produce no artificial structures in the
\dimass\ spectrum by an independent theoretical analysis \cite{Sibirtsev}.
The simulation predicts that this selection retains 77\% of all
rescattering-free events.
\item
$p_L > 120$ MeV for the longitudinal lab momentum of the $pK^0$
system, as suggested by the data shown in Fig.~\ref{others}d.
The acceptance to simulated rescattering-free events is $\simeq96$\%.
\end{enumerate}
The effects of the selections $\Theta_K, \Theta_p < 100^0$ and
$p_L > 120$ MeV are shown in Figs.
\ref{1dim-all-and-sele}c and \ref{1dim-all-and-sele}e, respectively.
Each mass spectrum is fitted to a Gaussian plus a background
function, that is constructed by scaling the simulated distribution
under similar selections. The signal and null fits are shown by the
solid and dashed lines, respectively. The value of $S/\sqrt{B}$ is
5.6 for the enhancement in Fig.~\ref{1dim-all-and-sele}c, and 5.5
for that in Fig.~\ref{1dim-all-and-sele}e. Each additional selection
is seen to render the signal more significant than in
Fig.~\ref{1dim-all-and-sele}a. The selection $p_T < 300$ MeV, that
is suggested by the data of Fig.~\ref{others}c, is additionally
applied in the right-hand panels of Fig.~\ref{1dim-all-and-sele}.
This further increases the $S/\sqrt{B}$ value by $\simeq 0.4$.
The experimental scatter plots in \dimass\ and $p(K^+)$
under the selections $\Theta_K, \Theta_p < 100^0$ and $p_L > 120$ MeV
are shown and fitted in Figs. \ref{2dim-sele}a and \ref{2dim-sele}b,
respectively. (The simulated scatter plots under these selections are
again regular throughout the full area of \dimass\ and $p(K^+)$.)
The positions and rms widths of the enhancement are consistent
with those in Fig.~\ref{2dim-all}. The two-dimensional signals in
Figs \ref{2dim-sele}a and \ref{2dim-sele}b are similar in magnitude
to the corresponding one-dimensional signals in Figs.
\ref{1dim-all-and-sele}c and \ref{1dim-all-and-sele}e, but have
higher values of $S/\sqrt{B}$ (5.8 and 6.2, respectively).
The fits of the scatter plots in \dimass\ and $p(K^+)$
shown in Figs. \ref{2dim-all} and \ref{2dim-sele}
suggest that the signal populates a limited range of $p(K^+)$, as it
should for formation of a narrow $pK^0$ resonance \cite{DIANA-2007}.
In Fig.~\ref{1dim-window-tra} the $pK^0$ effective mass is plotted
under the selections $\Theta_K, \Theta_p < 100^0$ or $p_L > 120$ MeV
plus the common selections $p_T < 300$ MeV and $445 < p(K^+) < 535$ MeV.
The null fits demonstrate that the extra selection $445 < p(K^+) < 535$
MeV does not produce any spurious structures in the \dimass\ spectrum,
while substantially increasing the signal-to-background ratio:
we have $S/\sqrt{B} = 6.8$ and 6.4 for the signals in Figs.
\ref{1dim-window-tra}a and \ref{1dim-window-tra}b, respectively.
\begin{figure}[t]
\renewcommand{\baselinestretch}{1.}
\vspace{5.1cm}
\special{psfile=1dim-window-tra.eps
hoffset=-10
voffset=10
hscale=82
vscale=82 }
\caption
{Shown in (a) and (b) are the $pK^0$ effective-mass spectra
under the selections $\Theta_K, \Theta_p < 100^0$ and $p_L > 120$ MeV
plus the common selections
$p_T < 300$ MeV and $445 < p(K^+) < 535$ MeV. The signal and
null fits are shown by the solid and dashed lines, respectively.}
\label{1dim-window-tra}
\end{figure}
\section{Statistical significance of the signal}
In all one- and two-dimensional fits shown in the previous
sections, the fitted width of the enhancement in \dimass\ is consistent
with being entirely due to the experimental resolution. So in order to
reduce the number of free parameters, the mass width is constrained to
the simulated value of $\sigma_m = 3.5$ MeV when estimating the
statistical significance of the signal.
For the fits of the scatter plots in \dimass\ and $p(K^+)$ shown
in Figs. \ref{2dim-all} and \ref{2dim-sele}, the correlation parameter
of the two-dimensional Gaussian is always consistent with $\rho = 0$,
as physically expected for the signal from formation of a narrow $pK^0$
resonance. Indeed, in this case the variation of \dimass\ is totally
due to measurement errors on the $K^0$ and proton momenta and on the
opening angle, and should be fully independent from the variation of
$p(K^+)$ that arises from Fermi motion of the target neutron. Therefore,
we use the constraint $\rho = 0$ when estimating the statistical
significance of the two-dimensional signal.
The results of the constrained fits of the $pK^0$ mass spectra
and of the scatter plots in \dimass\ and $p(K^+)$ are shown in Tables
\ref{constrained-1dim} and \ref{constrained-2dim}, respectively.
Also shown for each fit is the difference between the log-likelihood
values for the signal and null hypotheses, $-2\Delta\ln L$.
The number of degrees of freedom is $\Delta\mathrm{ndf} = 2$ and 4
for the constrained one- and two-dimensional fits, respectively.
The statistical significance of the signal is estimated using the
value of $\chi^2$ for one degree of freedom which corresponds to
the same $p$-value as $\chi^2 = -2\Delta\ln L$ for $\Delta$ndf
degrees of freedom \cite{PDG}.
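For reference, this prescription amounts to a two-line computation; a Python sketch using \texttt{scipy} (our illustration, not part of the analysis chain) is:
\begin{verbatim}
from math import sqrt
from scipy.stats import chi2

def gaussian_significance(delta_2lnl, ndf):
    """chi^2 (1 dof) with the same p-value as -2*Delta(ln L)."""
    p = chi2.sf(delta_2lnl, ndf)
    return sqrt(chi2.isf(p, 1))

# e.g. gaussian_significance(38.2, 4) gives approximately 5.3,
# as in the last row of the two-dimensional fits.
\end{verbatim}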
We see that the statistical significance of the signal is
enhanced by the additional kinematic selections based on the Monte-Carlo
simulation, reaching some 5.5 standard deviations. The two-dimensional
signals are more significant than the one-dimensional ones under
similar selections.
\clearpage
\begin{table}[!t]
\small
\begin{tabular}{|l|l|l|l|l|c|c|}
\hline
Selections & $m_0$ (MeV) & Signal (ev)
& $-2 \ln L$ & $-2 \ln L$ & $-2\Delta\ln L$ & Stat. \\
& & $S/\sqrt{B}$
& $\chi^2/$ndf & $\chi^2/$ndf & & sign. \\
& &
& (signal fit) & (null fit) & & \\
\hline
\hline
None & $1537\pm1$ & $77.8\pm18.8$
& 47.6 & 65.7 & 18.1 & 4.0 \\
& & 4.8
& 45.9/62 & 62.9/64 & &\\% 17.0 & 3.8 \\
\hline
$p_T < 300$ MeV & $1537\pm1$ & $79.1\pm18.0$
& 65.7 & 88.9 & 23.2 & 4.4 \\
& & 5.3
& 67.8/62 & 91.9/64 & &\\% 24.1 & 4.5 \\
\hline
$p_T < 300$ MeV & $1537\pm1$ & $67.7\pm15.0$
& 51.6 & 75.0 & 23.4 & 4.4 \\
$445 < p(K^+) < 535$ MeV & & 5.6
& 45.7/62 & 66.2/64 & &\\% 20.5 & 4.2 \\
\hline
\hline
$\Theta_K,\Theta_p<100^0$ & $1538\pm1$ & $72.9\pm15.8$
& 69.0 & 92.8 & 23.8 & 4.5 \\
& & 5.6
& 55.7/62 & 77.0/64 & &\\% 21.3 & 4.2 \\
\hline
$\Theta_K,\Theta_p<100^0$ & $1538\pm1$ & $72.3\pm15.2$
& 80.9 & 109.9 & 28.9 & 5.1 \\
$p_T < 300$ MeV & & 6.0
& 70.7/62 & 99.8/64 & &\\% 29.1 & 5.1 \\
\hline
$\Theta_K,\Theta_p<100^0$ & $1538\pm1$ & $68.0\pm13.4$
& 61.8 & 96.9 & 35.1 & 5.5 \\
$p_T < 300$ MeV & & 6.8
& 45.8/62 & 77.3/64 & &\\% 31.5 & 5.3 \\
$445 < p(K^+) < 535$ MeV & &
& & & & \\
\hline
\hline
$p_L > 120$ MeV & $1538\pm1$ & $77.1\pm16.7$
& 58.0 & 81.8 & 23.8 & 4.5 \\
& & 5.5
& 51.6/62 & 72.8/64 & &\\% 21.2 & 4.2 \\
\hline
$p_L > 120$ MeV & $1537\pm1$ & $76.1\pm16.0$
& 68.6 & 97.1 & 28.5 & 5.0 \\
$p_T<300$ MeV & & 6.0
& 64.5/62 & 93.0/64 & &\\% 28.6 & 5.1 \\
\hline
$p_L > 120$ MeV & $1538\pm1$ & $67.6\pm13.9$
& 57.8 & 89.1 & 31.3 & 5.3 \\
$p_T<300$ MeV & & 6.4
& 48.6/62 & 78.5/64 & &\\% 29.9 & 5.2 \\
$445 < p(K^+) < 535$ MeV & &
& & & & \\
\hline
\end{tabular}
\renewcommand{\baselinestretch}{1.}
\caption
{The results of the one-dimensional fits in which the Gaussian mass
width of the signal has been constrained to the simulated resolution
of $\sigma_m = 3.5$ MeV.}
\label{constrained-1dim}
\end{table}
\begin{table}[!t]
\small
\begin{tabular}{|l|l|l|l|l|l|c|c|}
\hline
Selections & $m_0$ (MeV) & $p_0(K^+)$ (MeV) & Signal (ev)
& $-2 \ln L$ & $-2 \ln L$ & $-2\Delta\ln L$ & Stat. \\
& & $\sigma_p$ (MeV) & $S/\sqrt{B}$
& $\chi^2/$ndf & $\chi^2/$ndf & & sign. \\
& & &
& (signal fit) & (null fit) & & \\
\hline
\hline
None & $1538\pm1$ & $488.2\pm7.3$ & $79.4\pm18.7$
& 315.2 & 341.8 & 26.6 & 4.3 \\
& & $26.3\pm6.0$ & 5.4
& 268.5/247 & 305.8/251 & &\\% 37.3 & 5.2 \\
\hline
$\Theta_K,\Theta_p<100^0$ & $1538\pm1$ & $484.9\pm6.4$ & $73.1\pm15.5$
& 315.8 & 350.2 & 34.3 & 5.0 \\
& & $25.8\pm4.7$ & 6.1
& 236.7/247 & 279.7/251 & &\\% 43.0 & 5.7 \\
\hline
$p_L > 120$ MeV & $1539\pm1$ & $485.3\pm6.0$ & $81.0\pm16.3$
& 295.6 & 333.9 & 38.2 & 5.3 \\
& & $24.5\pm4.2$ & 6.5
& 218.0/247 & 266.0/251 & &\\% 48.0 & 6.1 \\
\hline
\end{tabular}
\renewcommand{\baselinestretch}{1.}
\caption
{The results of the two-dimensional fits in which the Gaussian mass
width of the signal has been constrained to the simulated resolution
of $\sigma_m = 3.5$ MeV and the correlation parameter $\rho$ has
been constrained to zero.}
\label{constrained-2dim}
\end{table}
\section{Intrinsic width of the $\Theta^+$ baryon}
Intrinsic width of a resonance formed in an $s$-channel reaction
like \under\ can be estimated by comparing the signal magnitude with
the level of non-resonant background under the peak, see {\it e.g.}
\cite{Cahn-Trilling}. However, this method cannot be directly applied
to $K^+$ collisions with heavy nuclei because the resonant signal and
the underlying non-resonant background may be very differently affected
by rescattering of the $K^0$ and proton in nuclear medium. That is, the
non-resonant background under the peak is a mixture of unrescattered
and rescattered events, whereas a true signal should consist of
unrescattered events only.
As soon as the $\Theta^+$ decay width is on the order of
1 MeV or less, the peak will not be depleted by the $K^0$
and proton rescatterings because the bulk of produced $\Theta^+$
baryons will decay upon leaving the nucleus. Therefore, for a
consistent determination of the $\Theta^+$ intrinsic width based on
the method \cite{Cahn-Trilling} one will need the distribution of
non-resonant events in the effective mass of the originally formed
$pK^0$ system prior to any rescatterings, $m_0(pK^0)$.
The $m_0(pK^0)$ distribution of non-resonant events can only be
obtained through a simulation. Pauli blocking for protons in nuclear
matter does not affect the process of $\Theta^+$ formation
and decay, and therefore should be switched off when consistently
simulating the ``equivalent'' non-resonant background.
Then, assuming $J = 1/2$ for the $\Theta^+$ spin and using the
observed signal and simulated non-resonant background, the
intrinsic width of the $\Theta^+$ baryon can be derived as
\begin{equation}
\Gamma = \frac{N_\mathrm{peak}}{N_\mathrm{bkgd}}
\times \frac{\sigma^\mathrm{CE}}{107~\mathrm{mb}}
\times \frac{\Delta m_0}{B_i B_f}.
\end{equation}
Here, $N_\mathrm{peak}$ is the fitted number of events in the peak
corrected for experimental losses; $\Delta m_0$ is the interval of the
original mass $m_0(pK^0)$ centered on the peak position, that is
populated by $N_\mathrm{bkgd}$ simulated non-resonant events;
$\sigma^\mathrm{CE} = 4.1\pm0.3$ mb is the measured cross section
of the charge-exchange reaction \under\ for the center-of-mass energy
equal to the $\Theta^+$ mass \cite{cross-section}; and $B_i$ and $B_f$
are the branching fractions for the initial and final states
($B_i = B_f = 1/2$ for the $\Theta^+$ isospin of either $I = 0$ or
$I = 1$).
\begin{figure}[h]
\renewcommand{\baselinestretch}{1.}
\vspace{6cm}
\special{psfile=undistorted.eps
hoffset=-10
voffset=10
hscale=80
vscale=80}
\caption {(Color online)
The simulated effective mass of the original $pK^0$ system prior
to any intranuclear rescatterings, $m_0(pK^0)$, for the restricted
fiducial volume $L(K^+) > 520$ mm corresponding to the region of
throughput measurements (a). The $K^0$ lab momentum is restricted
to $p_K > 140$ MeV which is the effective threshold for detecting
$K^0_S \rightarrow \pi^+\pi^-$ decays in the scan, and the distribution
is normalized to the total number of $K^+\mathrm{Xe} \rightarrow K^0_S X$
collisions with $K^0_S \rightarrow \pi^+\pi^-$ decays found by
the scan in the aforementioned region $L(K^+) > 520$ mm.
Switching off Pauli blocking in the simulation and lifting the
cut $p_K > 140$ MeV results in the $m_0(pK^0)$ spectrum that
is shown in (b). The open-white corridor in the latter histogram
depicts the mass region $1529 < m_0(pK^0) < 1547$ MeV.}
\label{undistorted}
\end{figure}
For the simulated charge-exchange collisions \under\ on a bound
neutron in the bubble chamber DIANA, the effective mass of the original
$pK^0$ system is plotted in Fig.~\ref{undistorted}a. At this stage,
Pauli blocking is still present in the simulation to allow for an
absolute normalization based on the scanning information. The $K^0$
lab momentum is restricted to $p_K > 140$ MeV which is the (effective)
threshold for detecting the $K^0_S \rightarrow \pi^+\pi^-$ decays in
the scan. The simulated $m_0(pK^0)$ distribution of
Fig.~\ref{undistorted}a has been scaled to the total number of
$K^+\mathrm{Xe} \rightarrow K^0_S X$ collisions with
$K^0_S \rightarrow \pi^+\pi^-$ decays and any number of detected
protons, that have been found by the scan
in the restricted fiducial volume $L(K^+) > 520$ mm ($8500\pm540$
events). Thereby, we obtain the correctly normalized $m_0(pK^0)$
distribution for all events of the charge-exchange reaction
$K^+\mathrm{Xe} \rightarrow K^0_S X$ found by the scan in the
part of the detector fiducial volume where throughput measurements
were made. The next step is to switch off the Pauli suppression and
lift the selection $p_K > 140$ MeV in the simulation. The resultant
$m_0(pK^0)$ distribution, that is shown in Fig.~\ref{undistorted}b,
can be directly used for estimating the ``equivalent'' non-resonant
background under the $\Theta^+$ peak.
In Eq.~1, we substitute $\Delta m_0 = 18$ MeV and
$N_\mathrm{bkgd} = 1696\pm108$ events that populate the mass interval
$1529 < m_0(pK^0) < 1547$ MeV of the simulated $m_0(pK^0)$ distribution
of Fig.~\ref{undistorted}b. The fit of the resonant signal prior to
analysis selections, shown in Fig.~\ref{1dim-all-and-sele}a, returned
$83.1\pm22.1$ events above the background. This has to be corrected for
the experimental losses due to secondary interactions of the $K^0$ or
proton in liquid Xenon, $K^0_S$ mesons decaying too close to the primary
vertex, and technical reasons for which some events could not be properly
measured. The correction factor for these losses is estimated
as $1.67\pm0.19$. The $\Theta^+$ signal must also be corrected for the
cuts $p_K > 155$ MeV and $p_p > 165$ MeV and the losses of soft
secondaries, as well as for the experimental selections
$\Theta_{\pi\pi} < 150^{0}$ and $p_\pi > 75$ MeV for the decays
$K^0_S \rightarrow \pi^+\pi^-$. The corresponding correction factor is
estimated as 1.43 using a simulation of $\Theta^+$ formation and decay.
And finally, the signal must be corrected for the cut $\tau < 3\tau_0$
on the $K^0_S$ proper lifetime. As a result, in Eq.~1
we have to substitute $N_\mathrm{peak}=208\pm60$ for the
acceptance-corrected signal of the $\Theta^+$ baryon.
Finally, for the intrinsic width of the $\Theta^+$ baryon
we obtain $\Gamma = 0.34\pm0.10$ MeV, where the error does not include
the systematic uncertainties of the simulation procedure. This estimate
of the $\Theta^+$ intrinsic width has been derived assuming that the
bulk of produced $\Theta^+$ baryons decay upon leaving the nucleus.
The value of $\Gamma$ obtained in this analysis is consistent with our
earlier estimates \cite{DIANA-2007,DIANA-2010}, and does not violate
the upper limits set by BELLE \cite{BELLE} and by the E19 experiment
at J-PARC \cite{Shirotori}.
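The arithmetic of Eq.~1 is easily reproduced; a short Python check with the numbers quoted above (statistical errors combined in quadrature, systematic uncertainties not included) gives $\Gamma \simeq 0.34\pm0.10$ MeV:
\begin{verbatim}
import math

n_peak, dn_peak = 208.0, 60.0     # acceptance-corrected signal
n_bkgd, dn_bkgd = 1696.0, 108.0   # simulated non-resonant events
sig_ce, dsig = 4.1, 0.3           # mb
dm0, bi_bf = 18.0, 0.25           # MeV; B_i * B_f = 1/4

gamma = (n_peak/n_bkgd) * (sig_ce/107.0) * dm0 / bi_bf
dgamma = gamma * math.sqrt((dn_peak/n_peak)**2 +
                           (dn_bkgd/n_bkgd)**2 + (dsig/sig_ce)**2)
print(f"Gamma = {gamma:.2f} +- {dgamma:.2f} MeV")  # 0.34 +- 0.10
\end{verbatim}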
\section{Summary and conclusions}
We have analyzed the DIANA data on the
charge-exchange reaction \charex\ using increased statistics and
modified selections. The distribution of the $pK^0$ effective mass
shows a prominent enhancement at 1538 MeV whose width is consistent
with being entirely due to the experimental resolution. Applying the same
selections as in our previous analysis \cite{DIANA-2003}, we find that
this narrow enhancement has increased proportionally to the increase
of the data sample. A corresponding enhancement at
\dimass$\simeq1538$ MeV and $p(K^+)\simeq 490$ MeV, formed by nearly 80
events above the background, is observed in the scatter plot in the
variables \dimass\ and $p(K^+)$. Relying on a simulation of $K^+$Xe
collisions that includes the development of the intranuclear cascade,
we have shown that the observed signal is not a spurious structure
created by the selections. Under the additional kinematic selections
based on the simulation, the statistical significance of the signal
reaches 5.5 standard deviations. We interpret
our observations as strong evidence for formation of a pentaquark
baryon with positive strangeness, $\Theta^+(uudd\bar{s})$, in the
charge-exchange reaction \under\ on a bound neutron. The mass of the
$\Theta^+$ baryon has been measured as $m(\Theta^+) = 1538\pm2$ MeV.
Using the ratio between the numbers of resonant and non-resonant
charge-exchange events in the peak region, the intrinsic width of
this baryon resonance has been determined as
$\Gamma(\Theta^+) = 0.34\pm0.10$ MeV.
The results reported in this paper confirm our earlier observations
based on part of the present statistics of the charge-exchange
reaction \charex\ \cite{DIANA-2003,DIANA-2007,DIANA-2010}.
We wish to thank M.~B. Zhalov for communicating his results on
Fermi momentum in the Xe nucleus. We also thank Ya.~I. Azimov,
L.~N. Bogdanova, D.~I. Diakonov, and I.~I. Strakovsky for useful
comments. This work is supported by the Russian Foundation for
Basic Research (grant 10-02-01487).
\section{Introduction}
\IEEEPARstart{M}{ulti}-organ segmentation of organs such as the liver, pancreas and stomach in abdominal computed tomography (CT) scans is an essential task for computer-aided diagnosis (CAD), image-guided robotic surgeries, radiation therapy, and virtual surgeries \cite{cad, robotics_surgery, 788580, ma2020abdomenct1k}. Accurate and reliable segmentation results are required to optimize clinical workflows, such as the planning of surgeries or treatments. With the universal adaptability of convolutional neural networks (CNNs) \cite{10.5555/2999134.2999257}, different levels of visual features and patterns of organs in medical images can be efficiently detected and learned. Yet, a large amount of annotated data is required for a CNN to accurately detect medical imaging findings and precisely segment the organs \cite{cho2016data}, and producing such annotations is expensive and time-consuming. Thus, embedding rich semantic information from limited data is a primary challenge in medical image segmentation.
\begin{figure}[t]
\centering
\includegraphics[width=1\linewidth]{figures/method.png}
\caption{\textbf{Our proposed method.} Orange and blue dots on the computed tomography scan refer to voxels; voxels with the same color are from the same class. Two sets of voxel-level features are processed by the same network. Then, the features from the same class and from different networks are pulled closer to each other, thereby maximizing similarity (the sky blue-colored area is the embedding space).}
\label{fig:summary}
\end{figure}
Recently, efforts have been devoted to improving the performance of segmentation by encoding high-level features, which is also known as feature learning \cite{7954012}. Different types of architectures have been developed to increase the representation power of volumetric CT images \cite{VoxelResNet, 3DUNet, ResidualUNet, VNet}. Various objective functions, such as distribution-based losses \cite{celoss, focal_loss}, region-based losses \cite{dice_loss, log_dice}, and boundary-based losses \cite{hausdorff}, have been proposed for domain-specific problems. Despite their improved representations for 3D medical data, these methods focus on the local context rather than the global context for dense segmentation results, which makes them sensitive to the data distribution. Attention-gated mechanisms \cite{attention_1} were proposed for fusing both global and local contexts, learning only relevant regions in input images, or leveraging contextual information over multiple slices \cite{attention_2}. However, these methods ignore the cross-image global context and define the context relations in the final feature space (i.e., the last feature layer or decision space), which significantly reduces the representation power of the intermediate hidden layers. Meanwhile, self-supervised learning methods based on contrastive loss \cite{8578491, oord2019representation, simclr, moco}, which define the context relations in the representation space, have proven the effectiveness of learning representations. Recently, contrastive loss was employed in semantic segmentation tasks on natural images \cite{wang2021exploring}, which demonstrated its effectiveness for learning the global context in a supervised learning framework. However, these methods used memory banks or large batch sizes to obtain a large number of negative samples, which resulted in a high computational cost. Furthermore, embedding the local context along with the global context is essential for accurate segmentation, which was not addressed in \cite{wang2021exploring}.
Inspired by recent methods of representation learning in a self-supervised learning framework, we aim to improve the representation power for the multi-organ segmentation task. We can define the problem of multi-organ segmentation on CT scans as a voxel-wise classification problem; consequently, voxel-wise semantic information must be embedded for both the global and local context. However, two limitations arise when applying previous methods to the segmentation of volumetric CT images. \textbf{(1)} Owing to the requirement of a large number of negative samples when employing contrastive loss, current methods incur additional computational time and cost, which grow with the size of the volume data. \textbf{(2)} Current methods only have an advantage in the global context, not the local context, because they only take advantage of the last layer for feature embedding.
To resolve the aforementioned issues, we propose an effective voxel-level representation learning method for multi-organ segmentation. Based on previous research \cite{wang2021exploring}, we define positive samples as voxels from the same class (organ) and compute voxel-to-voxel and voxel-to-region contrasts that enforce the embeddings of positive samples to be similar. The uniqueness of our method is that we \textbf{do not employ negative samples}, which is our solution to the first limitation \textbf{(1)}. Thus, the computational cost can be significantly reduced, because we can work with a typical batch size \cite{simclr} and do not require an additional memory bank \cite{byol} for negative (voxel) samples. Another recent work, \textbf{SimSiam} \cite{simsiam}, proposed a simple Siamese network with a stop-gradient operation, and further proved that neither negative samples nor momentum encoders are critical for competitive performance. Thus, we adopted the SimSiam method as the baseline for our representation learning and customized it for supervised voxel-level representation embedding. We build our representation learning network on a standard 3D U-Net architecture. 3D U-Nets \cite{3DUNet} are commonly used for image segmentation tasks because of their good performance and efficient use of GPU memory \cite{attention_1}. As a solution to the second limitation \textbf{(2)}, we propose a \textbf{multi-resolution context aggregation} in the 3D U-Net architecture for embedding both local and global contexts, which enables both rich semantic information and precise localization.
To summarize, our contributions are as follows:
\begin{itemize}
\item We propose a simple yet effective \textbf{voxel-level representation learning} method for multi-organ segmentation on abdominal CT scans. Our method enforces voxel-level feature relations in the representation space, which enhances the representation power of the base network (i.e., 3D U-Net \cite{3DUNet}).
\item We define voxel-level feature relations \textbf{without using negative samples}, which is efficient in terms of computational cost. While using the SimSiam method \cite{simsiam}, we use neither a large batch size nor a momentum encoder, which are typically required for collecting a large number of negative samples.
\item We propose a \textbf{multi-resolution context aggregation} method that aggregates features from the intermediate layers and the last hidden layer. Using our method, we can train both global and local context features simultaneously.
\item Our proposed method shows superior performance when compared to the existing state-of-the-art methods. Moreover, our method can be effortlessly combined with any base network without extra parameters during inference. Furthermore, we demonstrate that our method is effective with a limited dataset.
\end{itemize}
\par
\section{RELATED WORK}
\subsection{Medical Image Segmentation}
The state-of-the-art models for medical segmentation are based on encoder-decoder architectures, such as U-Net \cite{UNet} (comprising a contracting path, an expanding path, and skip connections). For dense segmentation results in volumetric images, encoder-decoder based architectures \cite{VoxelResNet, 3DUNet, ResidualUNet, VNet} using 3D convolutional networks were proposed, and the weighted cross-entropy loss \cite{celoss} or the Dice coefficient \cite{dice_loss} was used as a basic loss function. In this setting, voxel-level feature learning can embed a rich local context that enables precise localization. However, it is limited to the local context, and it is difficult to capture long-range dependencies. To overcome this limitation, attention-gated mechanisms \cite{attention_1, attention_2} were proposed, which are considered efficient for fusing both global and local semantic information. However, previous attention mechanisms share a common limitation: they cannot represent context relations in the representation space.
\subsection{Contrastive Learning}
Self-supervised learning methods based on contrastive loss \cite{1640964, 8578491, oord2019representation, hjelm2019learning, h2020dataefficient, simclr, moco} have proven the effectiveness of learning representations that are invariant to different views of the same instance. In contrastive learning, images from multiple (similar) views are pulled closer together, while all other images (negative samples) are pushed away. Defining the positive (similar) and negative samples is an essential part of contrastive learning; in general, data augmentations of the same image are used for generating positive samples, and different images are used as negative samples. It is commonly known that contrastive loss benefits from more negative examples \cite{1640964, tian2020contrastive} (collapse occurs without negative samples). Thus, previous studies relied on large batch sizes \cite{simclr} or memory banks \cite{moco}. Beyond contrastive learning, recent studies \cite{byol, simsiam} have shown promising results for representation learning by employing only positive pairs, without using negative samples. A momentum encoder with moving-average updates \cite{byol} or the SimSiam method \cite{simsiam} are well-known techniques that avoid negative pairs while preventing collapse. In this work, we used SimSiam \cite{simsiam} as the baseline of our feature learning network to capitalize on the computational efficiency of using only positive pairs.
\begin{figure*}[h!bt]
\centering
\includegraphics[width=\linewidth]{figures/architecture.png}
\caption{\textbf{Overview of the proposed architecture.} The voxel-level feature layer takes the features of the hidden layers as input and defines voxel-level feature relations in the representation space (sky blue-colored area). Using a Siamese network (encoder and projection layer), a voxel-level feature batch ($F_i$) is projected to $p$. The similarity between $p$ and $z$ is maximized, using the stop-gradient technique.}
\label{fig:architecture}
\end{figure*}
Furthermore, contrastive learning has been applied to medical and natural image segmentation to enhance representation power. Certain recent works \cite{chaitanya2020contrastive, taleb20203d} used contrastive loss for pretraining the encoder, which was beneficial for the downstream medical segmentation task. Moreover, another recent work \cite{wang2021exploring} took advantage of contrastive loss in a supervised learning framework and showed superior results in natural image semantic segmentation tasks. This work embedded the global context using pixel-to-pixel and pixel-to-region contrasts, thereby utilizing the annotated labels of each pixel. Inspired by this work \cite{wang2021exploring}, we propose a method for voxel-level representation learning that embeds both local and global contexts.
\section{PROPOSED METHOD}
\subsection{Preliminaries}
The core idea of visual self-supervised learning is to train the encoder to embed high-level visual features from the data itself. To achieve this, the simple Siamese (SimSiam \cite{simsiam}) method takes two randomly augmented (e.g., by rotation, color jitter or scaling) views $x_1$ and $x_2$ as inputs to the encoder and minimizes the negative cosine similarity of the two output vectors, $p_1$ and $z_2$:
\begin{equation}
\mathcal{D}(p_1, z_2) = - {p_1 \over \lVert p_1 \rVert_2} \cdot {z_2 \over \lVert z_2 \rVert_2},
\label{eq:1}
\end{equation}
where $\lVert \cdot \rVert_2$ is the $l_2$-norm, $p_1 \triangleq h(k(f(x_1)))$ and $z_2 \triangleq k(f(x_2))$ (encoder network $f$, projection MLP head $k$, and prediction MLP head $h$).
A key component in SimSiam is the stop-gradient, which prevents a network $f$ from collapsing. The total form of the SimSiam loss is as follows:
\begin{equation}
\mathcal{L} = {1 \over 2}\mathcal{D}(p_1, stopgrad(z_2)) + {1 \over 2}\mathcal{D}(p_2, stopgrad(z_1)),
\label{eq:2}
\end{equation}
where $stopgrad(\cdot)$ indicates that its argument is detached from the computation graph, so that no gradients are propagated through it (following the PyTorch-style notation of \cite{simsiam}).
\label{section:pre}
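In PyTorch-like pseudocode, a minimal sketch of this loss reads as follows (\texttt{f}, \texttt{k} and \texttt{h} denote the encoder, projection and prediction networks introduced above):
\begin{verbatim}
import torch.nn.functional as F

def D(p, z):
    # negative cosine similarity, Eq. (1); z is detached,
    # which realizes the stop-gradient of Eq. (2)
    return -F.cosine_similarity(p, z.detach(), dim=-1).mean()

def simsiam_loss(x1, x2, f, k, h):
    z1, z2 = k(f(x1)), k(f(x2))   # projections
    p1, p2 = h(z1), h(z2)         # predictions
    return 0.5 * D(p1, z2) + 0.5 * D(p2, z1)
\end{verbatim}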
\subsection{Voxel-level Representation Learning}
\label{section:3-b}
Our architecture (Fig. \ref{fig:architecture}) is composed of a base network (3D U-Net) and voxel-level feature layers. We first describe the basic 3D U-Net training scheme and then introduce our novel voxel-level representation learning method.
The 3D U-Net takes a 3D volume $x \in \mathbb{R}^{H \times W \times D}$ as input, and the encoder produces a feature map $F_i \in \mathbb{R}^{H \times W \times D \times C_i}$ at the $i$th layer (counted from the last). At each upsampling step in the decoder (expanding path), these features ($F_i$, except for the first layer) are concatenated with the upsampled feature map; this is indicated by the gray line in Fig. \ref{fig:architecture} and is also known as a skip connection, which is crucial for precise localization. The last layer then outputs the score map $\mathcal{S} \in \mathbb{R}^{H \times W \times D \times Class}$ for each class. The Dice coefficient is used as a loss function in most 3D medical image segmentation tasks; accordingly, we used the multi-label soft Dice loss \cite{VNet} to maximize the overlap between the ground truth and the prediction. The multi-label soft Dice loss is defined as:
\begin{equation}
\mathrm{L}_{dice} = \sum^{class}_c{{2\sum^{H \times W \times D}_i {p_i^c g_i^c}} \over {\sum_i^{H \times W \times D} {(p_i^c)}^2 + \sum_i^{H \times W \times D} {(g_i^c)}^2}},
\label{eq:3}
\end{equation}
where $p^c_i$ denotes the ground-truth label of voxel $i$ for class $c$ and $g^c_i \in \textit{softmax}(\mathcal{S})$ is the corresponding predicted probability.
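A minimal PyTorch sketch of (\ref{eq:3}), assuming logits \texttt{s} and a one-hot ground truth \texttt{p}, both of shape \texttt{(B, C, H, W, D)}:
\begin{verbatim}
import torch

def soft_dice(s, p, eps=1e-6):
    g = torch.softmax(s, dim=1)          # predicted probabilities
    dims = (0, 2, 3, 4)                  # sum over batch and voxels
    num = 2 * (g * p).sum(dims)
    den = (g ** 2).sum(dims) + (p ** 2).sum(dims) + eps
    # per-class soft Dice, summed over classes as in Eq. (3);
    # in training one minimizes e.g. its negative to maximize overlap
    return (num / den).sum()
\end{verbatim}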
Much work on medical imaging tasks uses the Dice coefficient instead of cross-entropy, as it copes better with class imbalance. However, similar to the problem of cross-entropy loss stated in \cite{wang2021exploring}, the Dice loss also penalizes voxel-wise predictions independently, ignoring voxel-to-voxel relations. Indeed, (\ref{eq:3}) only evaluates each predicted voxel against the ground-truth mask region. In the updating phase, the loss of each voxel prediction is backpropagated independently, so the network cannot learn their relations. Further, the prediction is performed in the last layer of the decoder, which is not sufficient to encode high-level features in the encoder. As shown in recent downstream tasks in medical imaging \cite{chaitanya2020contrastive, taleb20203d}, pretraining the encoder with pretext tasks or contrastive loss before training the entire segmentation network leads to substantial performance improvements, which demonstrates that there is room for performance improvement in the encoder.
To tackle these issues, an additional loss that enforces cross-voxel relations and encodes sufficient high-level features in the encoder is required. A recent paper \cite{wang2021exploring} proposed a pixel-wise contrastive algorithm and obtained a well-structured semantic feature space. In this study, we extend this method to a voxel-wise algorithm that defines voxel-to-voxel and voxel-to-region contrasts. Thus, we can enforce voxel-level relations so that features from the same class become similar. Furthermore, we define voxel-wise relations without using negative samples, which is the main difference from the previous method \cite{wang2021exploring}. We use the SimSiam method, which utilizes both a Siamese network and the stop-gradient technique. We can reduce the computational cost while maintaining competitive performance because we use neither a large batch size nor a memory bank for negative samples.
We propose a voxel-level feature layer that consists of two MLP heads, i.e., projection and prediction. The voxel-level feature layer takes $F_i$ as input. The feature $F_i$ is reshaped into $C_i$-d vectors with a batch size of $H \times W \times D$, i.e., into a $(H \times W \times D) \times C_i$ array. Then, two identical $C_i$-d features $f$ are passed through the projection layer, and one of the outputs is additionally passed through the prediction layer, which can be represented as $p \triangleq pred(proj(f))$ and $z \triangleq proj(f)$, where $pred$ is the prediction layer and $proj$ is the projection layer. Let the sets of all $p$ and $z$ from different voxels be denoted by $\mathcal{P}$ and $\mathcal{Z}$, respectively, and let the corresponding sets restricted to voxels of class $c$ be denoted by $\mathcal{P}^c$ and $\mathcal{Z}^c$. Then, following (\ref{eq:1}), the voxel-wise loss function that measures the similarity within the same class can be defined as:
\begin{equation}
\mathcal{L}_{voxel-wise}(p, z) = \sum_{p \in \mathcal{P}^c} \sum_{z \in \mathcal{Z}^c} {p \over \lVert p \rVert_2} \cdot {z \over \lVert z \rVert_2}.
\label{eq:4}
\end{equation}
Then, using the stop-gradient technique and summing over classes, the loss to be minimized during training can be defined as:
\begin{equation}
\mathcal{L} = - \sum_c^{class}\mathcal{L}_{voxel-wise}(p, stopgrad(z)).
\label{eq:5}
\end{equation}
The loss function (\ref{eq:5}) encourages voxels from the same class to be projected onto the same point in the embedding space. Further, we can easily extend this loss to an inter-volume voxel-wise loss function by defining $f$ as a $(B \times H \times W \times D) \times C_i$ array, where $B$ is the batch size. Then, let $\mathcal{S}_1$ be the set of features $f$ from the batch $B$. Therefore, we consider not only long-range dependencies within the same volume, but also dependencies between features from different volumes. Moreover, to acquire robustness against the class imbalance problem, we add a weight for each class. Then, the loss can be defined as:
\begin{equation}
\mathcal{L}_{feature_1} = - \sum_c^{class} w_c \sum_{\mathcal{P}^c, \mathcal{Z}^c \in \mathcal{S}_1} \mathcal{L}_{voxel-wise}(p, z'),
\label{eq:6}
\end{equation}
where $z'$ is $stopgrad(z)$ and $w_c$ is ${{|\mathcal{P}| / |\mathcal{P}^c|} \over \sum_c({|\mathcal{P}| / |\mathcal{P}^c|})}$.
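A minimal PyTorch sketch of (\ref{eq:6}), assuming \texttt{feats} of shape \texttt{(N, C)} holds the sampled voxel features, \texttt{labels} of shape \texttt{(N,)} their classes, and \texttt{proj}, \texttt{pred} the two MLP heads (a mean over pairs stands in for the double sum):
\begin{verbatim}
import torch
import torch.nn.functional as F

def voxel_feature_loss(feats, labels, proj, pred):
    z = F.normalize(proj(feats).detach(), dim=1)  # stop-gradient branch
    p = F.normalize(pred(proj(feats)), dim=1)     # prediction branch
    classes = labels.unique()
    w = torch.stack([1.0 / (labels == c).float().mean() for c in classes])
    w = w / w.sum()                               # class weights w_c
    loss = feats.new_zeros(())
    for w_c, c in zip(w, classes):
        idx = labels == c
        # similarity over all positive (same-class) voxel pairs
        loss = loss - w_c * (p[idx] @ z[idx].t()).mean()
    return loss
\end{verbatim}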
\begin{figure}[t]
\begin{center}
\subfloat[3D U-Net \cite{3DUNet}]{\includegraphics[height=1.3in]{figures/unet.png}}
\\
\subfloat[Ours $\mathcal{L}_{feature_1}$]{\includegraphics[height=1.3in]{figures/feature3.png}}
\subfloat[Ours $\mathcal{L}_{feature_2}$]{\includegraphics[height=1.3in]{figures/feature1.png}}
\caption{Visualization of features learned with (a) a typical 3D U-Net, (b) our feature loss $\mathcal{L}_{feature_1}$, and (c) our feature loss $\mathcal{L}_{feature_2}$. Features are colored according to class labels; we used the test dataset for visualization (test labels are used only for visualization).}
\label{fig:tsne}
\end{center}
\end{figure}
\subsection{Multi-Resolution Context Aggregation}
We introduced the feature loss $\mathcal{L}_{feature_1}$ to enforce voxel-wise relations within the same class. We can enhance the embedding of the global context by applying this loss in the last layer of the encoder. However, this only improves the discriminativeness and quality of the last hidden-layer feature map. In the segmentation task, learning the local context is also important for accurate localization, yet this loss cannot directly influence the local context. The higher-level hidden layers have relatively larger receptive fields than the lower-level hidden layers, which implies that they contain less local context. When we only consider the features of the last layer, these features have a large receptive field that can ignore local information; further, these features are so downsampled that small objects can be lost entirely. This is critical for organs such as the pancreas, which accounts for the smallest portion of the 3D volume, because no feature relation can then be defined for it. As shown in Fig. \ref{fig:tsne} (b), our feature loss is more effective for feature embedding than the baseline in Fig. \ref{fig:tsne} (a), but the embedding is dominated by the larger classes and some classes are missing.
Inspired by previous studies \cite{lee2014deeplysupervised, xu2021seed}, we propose multi-resolution context aggregation for obtaining both highly discriminative intermediate hidden layers and local context. The method is simple: we add voxel-level feature layers to the intermediate layers for the features $F_2$ and $F_3$. Then, we can define voxel-wise relations across features from different hidden layers, which we refer to as context aggregation. Based on context aggregation, the semantic information includes not only the global context but also the local context, because the lower-level hidden layers contain information from local regions. Furthermore, enhancing the discriminative quality of the intermediate hidden layers also benefits localization. In the 3D U-Net architecture, skip connections integrate the intermediate hidden layers of the encoder and decoder for localization, so learning useful feature information is directly related to the quality of the score map. Thus, this method improves the quality of the segmentation map.
The multi-resolution context aggregation method is important for embedding good semantic information both locally and globally. The final voxel-level feature loss function can be defined as:
\begin{equation}
\mathcal{L}_{feature_{\mid F \mid}} = - \sum_c^{class} w_c \sum_{\mathcal{P}^c, \mathcal{Z}^c \in \mathcal{S}_{\mid F \mid}} w_f \cdot \mathcal{L}_{voxel-wise}(p, z'),
\label{eq:7}
\end{equation}
where $\mathcal{S}_{\mid F \mid}$ is the set of multi-resolution features, $\mid F \mid$ indicates the total number of hidden layers used for defining the feature relations, and $w_f$ is the weight of each hidden layer. Here, we set $\mid F \mid$ to 3, that is, we used the last three hidden layers; moreover, the weights $w_f$ were all set to 1.
Finally, we used both the voxel-level feature loss and the Dice loss to achieve good representation power and a precise segmentation map. Note that the voxel-level feature layer is only used in the training phase, so no extra cost in time or memory is incurred in the inference phase. The total loss can be computed as follows:
\begin{equation}
\mathcal{L} = \mathcal{L}_{dice} + \lambda \mathcal{L}_{feature_{\mid F \mid}},
\label{eq:8}
\end{equation}
where $\lambda$ is a weighting parameter.
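Combining the two terms, one training step can be sketched as follows (\texttt{soft\_dice} and \texttt{voxel\_feature\_loss} are the sketches above; \texttt{sample\_voxels} is a hypothetical helper standing in for the feature sampling described in the implementation details, and in practice each resolution uses its own projection/prediction heads, since the channel widths $C_i$ differ):
\begin{verbatim}
def training_step(volume, target, net, proj, pred, lam=10.0):
    # net returns the score map and the hidden features F1, F2, F3;
    # target: one-hot ground truth
    scores, hidden_feats = net(volume)
    # Dice term; minimizing -soft_dice maximizes the overlap
    loss = -soft_dice(scores, target)
    for F_i in hidden_feats:     # multi-resolution aggregation, w_f = 1
        feats, labels = sample_voxels(F_i, target)
        loss = loss + lam * voxel_feature_loss(feats, labels, proj, pred)
    return loss
\end{verbatim}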
\par
\begin{table*}[h]
\captionsetup{justification=centering, labelsep=newline}
\caption{Quantitative comparison of the proposed methods with other networks in terms of Dice coefficient.}
\label{T:dice-score}
\footnotesize
\begin{center}
\begin{tabular}{| c | c || c c c c c c c c |}
\hline
\textbf{Method} & \multicolumn{9}{ c |}{\textbf{DSC $\uparrow$}} \\
\cline{2-10}
& \textbf{avg}
& \textbf{spleen} & \textbf{left kidney} & \textbf{gallbladder} & \textbf{esophagus}
& \textbf{liver} & \textbf{stomach} & \textbf{pancreas} & \textbf{duodenum}\\
\hline
3D U-Net \cite{3DUNet} & 0.786 & 0.931 & 0.923 & 0.750 & 0.608 & 0.952 & 0.839 & 0.701 & 0.583 \\ \hline
Residual 3D U-Net \cite{ResidualUNet} & 0.784 & 0.928 & \textbf{0.937} & 0.735 & 0.593 & 0.954 & 0.823 & 0.714 & 0.591 \\ \hline
VNet \cite{VNet} & 0.773 & 0.923 & 0.913 & 0.766 & 0.601 & 0.943 & 0.814 & 0.668 & 0.555 \\ \hline
Attention U-Net \cite{attention_1} & 0.787 & \textbf{0.944} & 0.932 & 0.726 & 0.588 & \textbf{0.956} & 0.846 & 0.719 & 0.584 \\ \hline
Supervised Contrastive Loss & 0.787 & 0.914 & 0.935 & 0.704 & 0.618 & 0.955 & 0.838 & 0.715 & 0.590 \\ \hline
\textbf{Ours} & \textbf{0.806} & 0.943 & \textbf{0.937} & \textbf{0.793} & \textbf{0.620} & 0.955 & \textbf{0.869} & \textbf{0.725} & \textbf{0.609} \\ \hline
\end{tabular}
\end{center}
\end{table*}
\begin{table}[t]
\captionsetup{justification=centering, labelsep=newline}
\caption{Quantitative comparison of the proposed methods with other networks in terms of parameter size, 95\% Hausdorff distance (HD95) and average symmetric surface distance (ASSD).}
\label{T:params}
\begin{center}
\begin{tabular}{| c || c | c | c |}
\hline
\textbf{Method} & \textbf{\#params} & \textbf{95\% HD $\downarrow$} & \textbf{ASSD $\downarrow$} \\ \hline
3D U-Net \cite{3DUNet}& 16.313M & 3.170 & 0.845 \\ \hline
Residual 3D U-Net \cite{ResidualUNet}& 141.229M & 3.311 & 0.832 \\ \hline
VNet \cite{VNet}& 9.449M & 5.861 & 1.330 \\ \hline
Attention U-Net \cite{attention_1}& 16.836M & 3.013 & 0.812 \\ \hline
Supervised Contrastive Loss & 16.313M & 3.621 & 0.815 \\ \hline
\textbf{Ours} & \textbf{16.313M} & \textbf{2.461} & \textbf{0.681} \\ \hline
\end{tabular}
\end{center}
\end{table}
\section{EXPERIMENTAL RESULTS}
\subsection{Dataset details}
We used 90 abdominal CT images, i.e., 43 from the Pancreas-CT dataset and 47 from the Beyond the Cranial Vault (BTCV) dataset \cite{DenseVNet}, with reference-standard segmentations of the spleen, left kidney, gallbladder, esophagus, liver, stomach, pancreas, and duodenum. In the dataset, the slice thickness was in the range of 0.5--5.0 mm and the pixel sizes were in the range of 0.6--1.0 mm. The dataset was separated into two sets, with 70 images for training and 20 for testing. We resampled all abdominal CT images to $128 \times 128 \times 64$ voxels. We preprocessed the images using a soft-tissue CT windowing range of $[-200, 250]$ Hounsfield units. After rescaling the intensities to $[0, 1]$, we normalized the input images to zero mean and unit variance.
\subsection{Implementation details}
For training, we used a 3D U-Net \cite{3DUNet} architecture as the base network. The architecture of our voxel-level feature layer was based on a previous study \cite{simsiam}, with the dimensions of all hidden layers set to 64. We used a batch size of four, an Adam optimizer, a weight decay of 0.00001, and a learning rate of 0.001 for our experiments. We adopted a polynomial annealing policy \cite{chen2017rethinking} to schedule the learning rate, which was multiplied by $(1 - \mathrm{epoch} / \mathrm{total\_epoch})^{p}$ with $p=0.9$. The weighting parameter $\lambda$ was 10 for $\mathcal{L}_{feature_3}$ and 100 for $\mathcal{L}_{feature_1}$. We sampled a maximum of 1700 features from all voxel features for each hidden layer. Further, we sampled more features from false-negative data than from true-positive data, where the maximum number of sampled false-negative data was 1000. The network was trained for 500 epochs. We implemented our framework in PyTorch \cite{paszke2019pytorch}, using an NVIDIA TITAN RTX GPU. At inference time, only the 3D U-Net network was used for segmentation.
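In a one-line sketch, the schedule reads:
\begin{verbatim}
def poly_lr(base_lr, epoch, total_epoch, p=0.9):
    # polynomial annealing: base_lr * (1 - epoch/total_epoch)^p
    return base_lr * (1.0 - epoch / total_epoch) ** p
\end{verbatim}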
\subsection{Results}
For our evaluation metrics, we used the Dice similarity coefficient (DSC) \cite{dice_loss}, the 95\% Hausdorff distance (HD95; mm) \cite{hd, hd2}, and the average symmetric surface distance (ASSD; mm) \cite{assd}. HD95 is considered a more robust distance metric in the presence of ground-truth variations (e.g., portal-vein regions adjacent to the liver). In our experiments, we evaluated the accuracy of our proposed network by comparing its results with those of state-of-the-art models, i.e., 3D U-Net \cite{3DUNet}, Residual 3D U-Net \cite{ResidualUNet}, VNet \cite{VNet}, attention U-Net \cite{attention_1}, and supervised contrastive loss (SCL). For a fair comparison, we did not perform any postprocessing of the outputs. SCL is also based on the 3D U-Net architecture and uses a typical contrastive learning method for voxel-level feature learning, as discussed in \cite{simclr, wang2021exploring}. The SCL method used the same multi-resolution layer features and the same number of sampled positive features as $\mathcal{L}_{feature_3}$, and the number of negative samples was set to 1700 for each layer.
\begin{figure}[t]
\begin{center}
\includegraphics[height=0.2in]{figures/plot/plot.png}\\
\subfloat[spleen]{\includegraphics[height=0.35in]{figures/plot/plot_1.png}}\\
\subfloat[left kidney]{\includegraphics[height=0.35in]{figures/plot/plot_2.png}}\\
\subfloat[gallbladder]{ \includegraphics[height=0.35in]{figures/plot/plot_3.png}}\\
\subfloat[esophagus]{ \includegraphics[height=0.35in]{figures/plot/plot_4.png}}\\
\subfloat[liver]{ \includegraphics[height=0.35in]{figures/plot/plot_5.png}}\\
\subfloat[stomach]{ \includegraphics[height=0.35in]{figures/plot/plot_6.png}}\\
\subfloat[pancreas]{ \includegraphics[height=0.35in]{figures/plot/plot_7.png}}\\
\subfloat[duodenum]{ \includegraphics[height=0.35in]{figures/plot/plot_8.png}}
\caption{Box plots of the Dice similarity coefficient (DSC) of eight different organs for different approaches.}
\label{fig:plot}
\end{center}
\end{figure}
\begin{figure*}[t]
\begin{center}
\includegraphics[height=0.3in]{figures/2d_vis/2d_vis_1.png}
\subfloat[GroundTruth]{\includegraphics[height=2.2in]{figures/2d_vis/2d_vis_gt.png}}
\hspace{0.01in}
\subfloat[Ours]{\includegraphics[height=2.2in]{figures/2d_vis/2d_vis_ours.png}}
\hspace{0.01in}
\subfloat[SCL]{ \includegraphics[height=2.2in]{figures/2d_vis/2d_vis_cont.png}}
\hspace{0.01in}
\subfloat[Attention U-Net \cite{attention_1}]{ \includegraphics[height=2.2in]{figures/2d_vis/2d_vis_agu.png}}
\caption{Qualitative comparison of different approaches by 2D visualization. From left to right: (a) GroundTruth, (b) Our proposed method, (c) Supervised Contrastive Loss (SCL), (d) Attention 3D U-Net \cite{attention_1}.}
\label{fig:2d_vis}
\end{center}
\end{figure*}
\begin{figure*}[t]
\begin{center}
\subfloat[GroundTruth]{\includegraphics[height=1.2in]{figures/3d_vis/3d_vis_gt.png}}
\subfloat[Ours]{\includegraphics[height=1.2in]{figures/3d_vis/3d_vis_ours.png}}
\subfloat[SCL]{ \includegraphics[height=1.2in]{figures/3d_vis/3d_vis_simclr.png}}
\subfloat[Attention U-Net \cite{attention_1}]{ \includegraphics[height=1.2in]{figures/3d_vis/3d_vis_agunet.png}}
\caption{Qualitative comparison of different approaches by 3D visualization. From left to right: (a) GroundTruth, (b) Our proposed method, (c) Supervised Contrastive Loss (SCL), (d) Attention 3D U-Net \cite{attention_1}.}
\label{fig:3d_vis}
\end{center}
\end{figure*}
\textbf{Quantitative results} Tables \ref{T:dice-score} and \ref{T:params} list the quantitative results of multi-organ segmentation. The results show that our proposed method outperformed previous methods, with improvements in all evaluation metrics. We achieved superior values of DSC (0.806), HD95 (2.461) and ASSD (0.681) with the same number of parameters as the 3D U-Net \cite{3DUNet}. VNet \cite{VNet} performed the worst among the compared methods. Our proposed method outperformed attention U-Net \cite{attention_1} in all evaluation metrics while using fewer parameters. SCL, which also uses voxel-level feature learning but employs contrastive loss, showed only a small improvement over 3D U-Net \cite{3DUNet}. The relatively inferior performance of SCL is explained by the fact that contrastive loss requires a large number of negative samples and a large batch size, which are difficult to accommodate in 3D volumetric applications. From Table \ref{T:dice-score}, it can be observed that our proposed method achieved its largest improvements on the gallbladder, esophagus, pancreas and duodenum; in particular, our method showed superior performance for small organs. Box plots of the results listed in Table \ref{T:dice-score} are illustrated in Fig. \ref{fig:plot}.
\textbf{Qualitative results} Our qualitative results are illustrated in Fig. \ref{fig:2d_vis} (2D visualization) and Fig. \ref{fig:3d_vis} (3D visualization). We chose the two most closely related methods for comparison, i.e., SCL and attention U-Net \cite{attention_1}. Compared to the other methods, our results have a higher overlap with the ground truth (e.g., for the stomach and duodenum). Figure \ref{fig:3d_vis} illustrates that the stomach is difficult to detect with SCL, and the duodenum is partially missing with attention U-Net \cite{attention_1}, when compared to our method. Moreover, our method showed fewer false-positive responses for the duodenum and pancreas (Fig. \ref{fig:3d_vis}).
\subsection{Ablation Study}
We investigated the effectiveness of the different components of our proposed method. Tables \ref{T:ablation1}--\ref{T:ablation3} and Fig. \ref{plot} show the quantitative results (DSC, HD95 and ASSD) for different model settings. We conducted experiments to verify the effectiveness of our newly proposed components: the multi-resolution context aggregation, the loss functions, and different architectural choices. Furthermore, we performed extensive experiments on the size of the training dataset. The examinations are described in detail in the following paragraphs.
\textbf{Multi-Resolution Context Aggregation} We first analyzed the impact of our multi-resolution context aggregation. Comparing the representation embeddings obtained with $\mathcal{L}_{feature_1}$ and $\mathcal{L}_{feature_2}$ in Fig. \ref{fig:tsne}, we observe that the embedding with context aggregation from multiple layers is more structured than that of a single layer. In Table \ref{T:ablation1}, we list the quantitative segmentation results for the three proposed feature losses: the first row lists the results of 3D U-Net trained with $\mathcal{L}_{feature_1}$, and the second and third rows list the results with $\mathcal{L}_{feature_2}$ and $\mathcal{L}_{feature_3}$, respectively. The first row is similar to recent works on contrastive learning, as the loss is applied only in the last layer of the encoder. Aggregating the contexts of the last two hidden layers (second row) improves the performance over the first row, and aggregating the contexts of the last three hidden layers improves it further, which can be explained by the fact that aggregating the local and global contexts results in a more precise segmentation output.
\textbf{Weighted Loss Function} We also compared our loss function with and without the class weighting of (\ref{eq:6}). Table \ref{T:ablation2} lists the results obtained by training with the $\mathcal{L}_{feature_3}$ function. The first row lists the results without the weighting, and the second row lists the results of the final loss function. Even without the weighting, the feature loss showed superior performance compared to the other methods in terms of DSC and HD95. Applying the weighting improved both the performance and the stability of training.
\begin{table}[t]
\captionsetup{justification=centering, labelsep=newline}
\caption{Ablation study of the multi-resolution context aggregation in terms of the Dice similarity coefficient (DSC), HD95 and ASSD.}
\label{T:ablation1}
\begin{center}
\begin{tabular}{| c || c | c | c |}
\hline
\textbf{Method} & \textbf{DSC} & \textbf{HD95} & \textbf{ASSD} \\ \hline
feature (1) & 0.793 & 3.210 & 0.860 \\ \hline
feature (2) & 0.797 & 2.895 & 0.761 \\ \hline
feature (3) & \textbf{0.806} & \textbf{2.461} & \textbf{0.681} \\ \hline
\end{tabular}
\end{center}
\end{table}
\begin{table}[t]
\captionsetup{justification=centering, labelsep=newline}
\caption{Ablation study of the weighted loss in terms of DSC, HD95 and ASSD.}
\label{T:ablation2}
\begin{center}
\begin{tabular}{| c || c | c | c |}
\hline
\textbf{Method} & \textbf{DSC} & \textbf{HD95} & \textbf{ASSD} \\ \hline
feature (w/o weight) & 0.800 & 2.918 & 0.825 \\ \hline
feature & \textbf{0.806} & \textbf{2.461} & \textbf{0.681} \\ \hline
\end{tabular}
\end{center}
\end{table}
\begin{table}[t]
\captionsetup{justification=centering, labelsep=newline}
\caption{Ablation study of the hidden dimension in terms of DSC, HD95 and ASSD.}
\label{T:ablation3}
\begin{center}
\begin{tabular}{| c || c | c | c |}
\hline
\textbf{Method} & \textbf{DSC} & \textbf{HD95} & \textbf{ASSD} \\ \hline
feature (64) & 0.793 & 3.210 & 0.860 \\ \hline
feature (128) & 0.793 & 3.263 & 0.822 \\ \hline
feature (256) & \textbf{0.796} & \textbf{3.089} & \textbf{0.804} \\ \hline
\end{tabular}
\end{center}
\end{table}
\textbf{Hidden Dimension} We then analyzed the efficacy of the SimSiam method for different hidden dimensions. In the voxel-level feature layer, MLP heads are used for feature projection and prediction, and it is important to choose an appropriate hidden dimension. Table \ref{T:ablation3} lists the results of experiments with hidden dimensions 64, 128 and 256. When the hidden dimension was 64 or 128, the improvements were minimal. With 256 hidden dimensions, superior results were achieved in all metrics, which suggests that superior performance is obtained when the original feature dimension is not reduced.
\textbf{Dataset Size} We investigated our proposed method with different percentages of labels. Our method is an efficient representation learning method for embedding rich information with limited annotated data. We compared our method with the original 3D U-Net network for several training set sizes. As shown in Fig. \ref{plot}, with small dataset sizes (10 and 20\%), we achieved a significant improvement in DSC (e.g., from 0.431 for 3D U-Net to 0.548 for ours at 10\%). This shows that simply adding our method to a base network is effective for learning generalized, high-level representations.
\par
\begin{figure}[!t]
\begin{center}
\begin{tikzpicture}[thick,scale=0.8, every node/.style={scale=0.8}]
\begin{axis}[
xlabel=\% of labels,
ylabel=DSC, legend pos=outer north east]
\addplot[smooth,mark=*,blue] plot coordinates {
(10,0.431)
(20,0.609)
(50,0.748)
(100,0.786)
};
\addlegendentry{3D U-Net}
\addplot[smooth,color=red,mark=*]
plot coordinates {
(10,0.548)
(20,0.674)
(50,0.764)
(100,0.806)
};
\addlegendentry{Ours}
\end{axis}
\end{tikzpicture}
\end{center}
\caption{DSC evaluation of the proposed method with different percentages of labels.}
\label{plot}
\end{figure}
\section{DISCUSSION}
Recent studies on multi-organ segmentation have explored various deep learning architectures, such as encoder-decoder networks \cite{VoxelResNet, 3DUNet, ResidualUNet, VNet} or attention networks \cite{attention_1, attention_2}, for encoding high-level features from limited training data. However, training the network only in the final output space makes it difficult to embed rich global and local contextual features in the representation space (as discussed in Section \ref{section:3-b} and shown in Fig. \ref{fig:tsne}). Our proposed method is effective for representation learning in that it enforces voxel-wise relations in the representation space, i.e., it encourages voxels from the same class to be projected to the same point in the representation space. Moreover, by learning voxel-wise feature relations based on the SimSiam \cite{simsiam} method, the use of negative samples can be avoided; consequently, our method is superior in terms of computational efficiency. A recent work \cite{wang2021exploring}, which employed contrastive loss for supervised segmentation tasks, showed only minimal improvement on our dataset (Table \ref{T:dice-score}). Because it is difficult to handle a large number of negative samples in 3D volumetric datasets, it is challenging to employ existing contrastive learning methods in medical imaging applications. Our proposed method, which does not require negative samples during training, suggests a simple way of employing such losses in medical image segmentation. Furthermore, by introducing the multi-resolution context aggregation method, we encode both local and global contexts in the representation space. We achieve a more structured representation space (Fig. \ref{fig:tsne}) and more precise segmentation results (Fig. \ref{fig:2d_vis}) than the previous method \cite{wang2021exploring}, which only considered the global context. Our extensive experiments demonstrated that our method provides more informative feature embeddings, resulting in superior accuracy, especially with a limited dataset size (Fig. \ref{plot}). Moreover, our method can simply be added to existing networks, which implies that it can easily be extended to other dense prediction tasks. In the future, our method can be improved by developing more efficient feature sampling techniques or new representation losses.
\section{CONCLUSION}
In this work, we proposed a new voxel-level representation learning method for the multi-organ segmentation task. Our method enforces voxel-wise relations while preserving computational efficiency, and it encodes both local and global contexts to achieve precise segmentation results. Our method successfully performs rich representation learning with a 3D U-Net backbone without introducing additional parameters at inference. The experimental results demonstrated that our method is superior to other contrastive-loss-based methods. The proposed method is also independent of the model architecture, which indicates that our algorithm can be applied to any type of network model.
\bibliographystyle{plain}
\section*{Introduction}
In 2001, Razumov and Stroganov~\cite{RS-conj} conjectured that there is a correspondence between the Fully Packed Loop (FPL) configurations, a combinatorial model, and the components of the groundstate vector in the $O(n)$ Loop model, a model in statistical physics.
On the one hand, the connectivity of the FPL configurations at the boundary is described by a perfect noncrossing matching $\pi$ of $2n$ points (see the definitions in Section~\ref{representations}).
The number of FPL configurations associated to a certain matching $\pi$ is denoted by $A_\pi$.
On the other hand, the $O(n)$ model is defined on the set of matchings and the groundstate components are naturally indexed by the matchings and are written $\psi_\pi$.
Razumov and Stroganov conjectured that these quantities are the same, $A_\pi = \psi_\pi$, for all matchings $\pi$.
This conjecture was proved in 2010 by Cantini and Sportiello~\cite{ProofRS}.
Consider matchings with $p$ nested arches surrounding a smaller matching $\pi$, which we denote $(\pi)_p = (\cdots(\pi)\cdots)$.
It was conjectured in~\cite{Zuber-conj}, and later proved in~\cite{CKLN,artic47}, that the quantities $A_{(\pi)_p}$ and $\psi_{(\pi)_p}$ are polynomials in $p$.
In a recent article, Nadeau and Fonseca~\cite{negative} conjectured some surprising properties of these polynomials.
The goal of this article is to prove some of these conjectures, notably Conjectures 3.8 and 3.11.
\medskip
Let $\pi$ be a matching composed by $n$ arches.
We denote by $A_\pi(p)$ (respectively $\psi_\pi(p)$) the polynomial which coincides with $A_{(\pi)_p}$ (respectively $\psi_{(\pi)_p}$) when $p$ is a nonnegative integer.
In this paper we prove that, for $p$ between $0$ and $-n$, these quantities are either zero or they can be seen as the product of two distinct terms.
One is a new quantity $g_\pi$, also indexed by perfect noncrossing matchings, and the other is again one of the quantities $A_\pi$.
The quantities $g_\pi$ are surprisingly connected with the Fully Packed Loop model: the sum of the absolute values of $g_\pi$ is equal to the number of FPL configurations.
This relation has been proven in~\cite{tese}.
Here we prove another sum rule also conjectured in~\cite{negative}: the sum of the quantities $g_\pi$ is equal to the number of vertically symmetric FPL configurations, up to a possible sign.
These sum rules, together with other properties of $g_\pi$, raise the idea that the numbers $g_\pi$ have some combinatorial meaning, \emph{i.e.}\xspace that they count something related to the FPL configurations.
An interesting byproduct of the proofs are the multivariate integral formulæ proposed for $g_\pi$, which allow us to reformulate the first mentioned conjecture in a stronger form (a polynomial form).
\medskip
Let us give a detailed outline of this article.
In the first section, we introduce the two models: the Fully Packed Loop (FPL) model and the $O(n)$ Loop model, and the associated quantities $A_\pi$ and $\psi_\pi$, respectively.
Furthermore, we give a brief perspective of the case of $\pi$ when it contains $p$ nested arches.
In Section~\ref{sec:conj} we state the two conjectures that we solve here.
They concern the polynomials $A_\pi (t)$.
In order to prove the first one, we introduce a multivariate polynomial version of the quantities $\psi_\pi$ in Section~\ref{sec:CPL_multi}, defined though the quantum Knizhnik--Zamolodchikov equation.
Although it seems a more complicated approach, this version allows us to use some polynomial properties, which will be essential to the proof.
The two further sections are dedicated to the proof of the conjectures.
The paper finishes with an appendix, where we describe some important results, which are straightforward but a little bit tedious.
\section{Definitions}\label{sec:defi}
In this section we introduce the concept of matchings.
Furthermore, we briefly describe the Fully Packed Loop model and the $O(n)$ Loop model.
Finally, we introduce the concept of nested matching.
\subsection{Matchings}\label{representations}
A matching\footnote{these matchings are usually called {\em perfect noncrossing matchings} in the literature, but this is the only kind of matchings we will encounter so there will be no possible confusion.} $\pi$ of size $n$ is defined as a set of $n$ disjoint pairs of integers in $\{1,\ldots,2n\}$, which are {\em noncrossing} in the sense that if $\{i,j\}$ and $\{k,l\}$ are two pairs in $\pi$ with $i<j$ and $k<l$, then it is forbidden to have $i<k<j<l$ or $k<i<l<j$.
The number of matchings with $n$ pairs is the Catalan number $c_n=\frac{1}{n+1}\binom{2n}{n}$.
Matchings can be represented in several ways:
\begin{itemize}
\item A Link Pattern is a set of noncrossing arches on $2n$ horizontally aligned points labelled from $1$ to $2n$.
Given a pair in a matching $\{i,j\}$, the corresponding arch connects point $i$ to the point $j$.
This will be our standard representation;
\[
\{\{1,2\},\{3,6\},\{4,5\}\}
\Leftrightarrow
\begin{tikzpicture}[scale=0.25]
\draw[arche] (0,0) .. controls (0,.5) and (1,.5) .. (1,0);
\draw[arche] (2,0) .. controls (2,1.5) and (5,1.5) .. (5,0);
\draw[arche] (3,0) .. controls (3,.5) and (4,.5) .. (4,0);
\draw[line] (-.5,0) -- (5.5,0);
\end{tikzpicture}
\]
\item A well-formed sequence of parentheses, also called \emph{parenthesis word}.
Given an arch in a matching, its left endpoint (respectively its right endpoint) is encoded by an opening parenthesis (resp. by a closing parenthesis);
\[
\begin{tikzpicture}[scale=0.25]
\draw[arche] (0,0) .. controls (0,.5) and (1,.5) .. (1,0);
\draw[arche] (2,0) .. controls (2,1.5) and (5,1.5) .. (5,0);
\draw[arche] (3,0) .. controls (3,.5) and (4,.5) .. (4,0);
\draw[line] (-.5,0) -- (5.5,0);
\end{tikzpicture}
\Leftrightarrow ()(())
\]
\item A Dyck Path, which is a path between $(0,0)$ and $(2n,0)$ with steps NE $(1,1)$ and SE $(1,-1)$ that never goes under the horizontal line $y=0$.
An opening parenthesis corresponds to a NE step, and a closing one to a SE step;
\[
()(()) \Leftrightarrow
\begin{tikzpicture}[scale=0.25, baseline=2pt]
\draw[dyck] (0,0) -- (1,1) -- (2,0) -- (3,1) -- (4,2) -- (5,1) -- (6,0);
\end{tikzpicture}
\]
\item A Young diagram is a collection of boxes, arranged in left-justified rows, such that the size of the rows is weakly decreasing from top to bottom.
Matchings with $n$ arches are in bijection with Young diagrams such that the $i$th row from the top has no more than $n-i$ boxes.
The Young diagram can be constructed as the complement of a Dyck path, rotated $45^\circ$ counterclockwise;
\[
\begin{tikzpicture}[scale=0.25, baseline=3pt]
\draw[dyck] (0,0) -- (1,1) -- (2,0) -- (3,1) -- (4,2) -- (5,1) -- (6,0);
\draw[young, dotted] (1,1) -- (3,3);
\draw[young, dotted] (2,0) -- (4,2);
\draw[young, dotted] (1,1) -- (2,0);
\draw[young, dotted] (2,2) -- (3,1);
\draw[young, dotted] (3,3) -- (4,2);
\end{tikzpicture}
\Leftrightarrow
\begin{tikzpicture}[scale=0.25, baseline=-10pt]
\draw[young] (0,0) -- (0,-2);
\draw[young] (1,0) -- (1,-2);
\draw[young] (0,0) -- (1,0);
\draw[young] (0,-1) -- (1,-1);
\draw[young] (0,-2) -- (1,-2);
\end{tikzpicture}
\]
\item A sequence $a=\{a_1,\ldots,a_n\}\subseteq\{1,\ldots,2n\}$, such that $a_{i-1}<a_i$ and $a_i\leq 2i-1$ for all $i$.
Here $a_i$ is the position of the $i$th opening parenthesis.
\[
()(()) \Leftrightarrow \{1,3,4\}
\]
\end{itemize}
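These bijections are straightforward to implement; for instance, in Python (matchings stored as lists of pairs; purely illustrative):
\begin{verbatim}
def word_to_pairs(w):
    # parenthesis word -> arches {i, j}, points numbered from 1
    stack, pairs = [], []
    for pos, ch in enumerate(w, start=1):
        if ch == '(':
            stack.append(pos)
        else:
            pairs.append((stack.pop(), pos))
    return sorted(pairs)

def word_to_sequence(w):
    # positions of the opening parentheses: the sequence a
    return [pos for pos, ch in enumerate(w, start=1) if ch == '(']

# word_to_pairs("()(())")    -> [(1, 2), (3, 6), (4, 5)]
# word_to_sequence("()(())") -> [1, 3, 4]
\end{verbatim}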
We will often identify matchings under those different representations, through the bijections explained above.
We may need at times to stress a particular representation: thus we write $Y(\pi)$ for the Young diagram associated to $\pi$, and $a(\pi)$ for the increasing sequence associated to $\pi$, etc.
We will represent $p$ nested arches around a matching $\pi$ by ``$(\pi)_p$'', and $p$ consecutive small arches by ``$()^p$''; thus for instance
\[
((((()()))))()()()=(()^2)_4()^3.
\]
We define a {\em partial order} on matchings as follows: $\sigma \preceq \pi$ if the Young diagram of $\pi$ contains the Young diagram of $\sigma$, that is $Y(\sigma)\subseteq Y(\pi)$.
In the Dyck path representation, this means that the path corresponding to $\sigma$ is always weakly above the path corresponding to $\pi$; in the sequence representation, if we write $a=a(\sigma)$ and $a'=a(\pi)$, then this is simply expressed by $a_i\leq a'_i$ for all $i$.
Given a matching $\pi$, we define $d(\pi)$ as the total number of boxes in the Young diagram $Y(\pi)$.
We also let $\pi^*$ be the conjugate matching of $\pi$, defined by: $\{i,j\}$ is an arch in $\pi^*$ if and only if $\{2n+1-j,2n+1-i\}$ is an arch in $\pi$.
This corresponds to a mirror symmetry of the parenthesis word, and a transposition in the Young diagram.
We also define a natural {\em rotation} $r$ on matchings: $i,j$ are linked by an arch in $r(\pi)$ if and only if $i+1,j+1$ are linked in $\pi$ (where indices are taken modulo $2n$).
These last two notions are illustrated on Figure~\ref{fig:matchings}.
\begin{figure}[!ht]
\begin{align*}
\pi&=
\begin{tikzpicture}[scale=0.25]
\draw[arche] (0,0) .. controls (0,.5) and (1,.5) .. (1,0);
\draw[arche] (2,0) .. controls (2,2) and (9,2) .. (9,0);
\draw[arche] (3,0) .. controls (3,.5) and (4,.5) .. (4,0);
\draw[arche] (5,0) .. controls (5,1) and (8,1) .. (8,0);
\draw[arche] (6,0) .. controls (6,.5) and (7,.5) .. (7,0);
\draw[line] (-.5,0) -- (9.5,0);
\end{tikzpicture}
&
\pi^*&=
\begin{tikzpicture}[scale=0.25]
\draw[arche] (9,0) .. controls (9,.5) and (8,.5) .. (8,0);
\draw[arche] (7,0) .. controls (7,2) and (0,2) .. (0,0);
\draw[arche] (6,0) .. controls (6,.5) and (5,.5) .. (5,0);
\draw[arche] (4,0) .. controls (4,1) and (1,1) .. (1,0);
\draw[arche] (3,0) .. controls (3,.5) and (2,.5) .. (2,0);
\draw[line] (-.5,0) -- (9.5,0);
\end{tikzpicture}
&
r(\pi)&=
\begin{tikzpicture}[scale=0.25]
\draw[arche] (0,0) .. controls (0,2.5) and (9,2.5) .. (9,0);
\draw[arche] (1,0) .. controls (1,2) and (8,2) .. (8,0);
\draw[arche] (2,0) .. controls (2,.5) and (3,.5) .. (3,0);
\draw[arche] (4,0) .. controls (4,1) and (7,1) .. (7,0);
\draw[arche] (5,0) .. controls (5,.5) and (6,.5) .. (6,0);
\draw[line] (-.5,0) -- (9.5,0);
\end{tikzpicture}
\end{align*}
\caption{A matching, its conjugate, and the rotated matching.\label{fig:matchings}}
\end{figure}
We need additional notions related to the Young diagram representation.
So let $Y$ be a young diagram, and $u$ one of its boxes.
The {\em hook length} $h(u)$ is the number of boxes below $u$ in the same column, or to its right in the same row (including the box $u$ itself).
We note $H_Y$ the product of all hook lengths, \emph{i.e.}\xspace $H_Y=\prod_{u\in Y} h(u)$.
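For illustration, a short Python sketch computing $H_Y$ from the (weakly decreasing) row lengths of $Y$:
\begin{verbatim}
def hook_product(rows):
    # rows: weakly decreasing row lengths of Y, e.g. [2, 1]
    cols = [sum(1 for r in rows if r > j)
            for j in range(rows[0])] if rows else []
    H = 1
    for i, r in enumerate(rows):
        for j in range(r):
            arm = r - j - 1           # boxes to the right of u
            leg = cols[j] - i - 1     # boxes below u
            H *= arm + leg + 1        # hook length h(u)
    return H

# hook_product([2, 1]) == 3   (hook lengths 3, 1, 1)
\end{verbatim}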
\subsection{Fully Packed Loop}\label{sub:FPLintro}
A \emph{Fully Packed Loop configuration} (FPL) of size $n$ is a subgraph of the square grid with $n^2$ vertices, such that each vertex is connected to exactly two edges.
We furthermore impose the following boundary conditions: we select alternately every second external edge to be part of our FPLs.
By convention, we fix that the leftmost external edge on the top boundary is part of the selected edges, thus fixing the entire boundary of our FPLs.
We number these external edges clockwise from $1$ to $2n$, see Figure~\ref{fig:fplexample}.
\begin{figure}[!ht]
\begin{center}
\begin{tikzpicture}[scale=.4]
\draw[step=1,ajuda] (-.5,-.5) grid (5.5,5.5);
\draw[FPL] (-.5,0) -- (0,0) -- (0,1) -- (1,1) -- (1,2) -- (1,3) -- (0,3) -- (0,2) -- (-.5,2);
\draw[FPL] (1,-.5) -- (1,0) -- (2,0) -- (2,1) -- (3,1) -- (4,1) -- (4,2) -- (4,3) -- (4,4) -- (5,4) -- (5,5) -- (5.5,5);
\draw[FPL] (3,-.5) -- (3,0) -- (4,0) -- (5,0) -- (5,-.5);
\draw[FPL] (5.5,1) -- (5,1) -- (5,2) -- (5,3) -- (5.5,3);
\draw[FPL] (-.5,4) -- (0,4) -- (1,4) -- (1,5) -- (0,5) -- (0,5.5);
\draw[FPL] (2,5.5) -- (2,5) -- (2,4) -- (3,4) -- (3,5) -- (4,5) -- (4,5.5);
\draw[FPL] (2,2) rectangle (3,3);
\node[above] at (0,5.5) {1};
\node[above] at (2,5.5) {2};
\node[above] at (4,5.5) {3};
\node[right] at (5.5,5) {4};
\node[right] at (5.5,3) {5};
\node[right] at (5.5,1) {6};
\node[below] at (5,-.5) {7};
\node[below] at (3,-.5) {8};
\node[below] at (1,-.5) {9};
\node[left] at (-.5,0) {10};
\node[left] at (-.5,2) {11};
\node[left] at (-.5,4) {12};
\begin{scope}[shift={(11,2)},scale=.8]
\draw[arche] (0,0) .. controls (0,3) and (11,3) .. (11,0);
\draw[arche] (1,0) .. controls (1,.5) and (2,.5) .. (2,0);
\draw[arche] (3,0) .. controls (3,1.5) and (8,1.5) .. (8,0);
\draw[arche] (4,0) .. controls (4,.5) and (5,.5) .. (5,0);
\draw[arche] (6,0) .. controls (6,.5) and (7,.5) .. (7,0);
\draw[arche] (9,0) .. controls (9,.5) and (10,.5) .. (10,0);
\draw[line] (-.5,0) -- (11.5,0);
\node[below] at (0,0) {\tiny{1}};
\node[below] at (1,0) {\tiny{2}};
\node[below] at (2,0) {\tiny{3}};
\node[below] at (3,0) {\tiny{4}};
\node[below] at (4,0) {\tiny{5}};
\node[below] at (5,0) {\tiny{6}};
\node[below] at (6,0) {\tiny{7}};
\node[below] at (7,0) {\tiny{8}};
\node[below] at (8,0) {\tiny{9}};
\node[below] at (9,0) {\tiny{10}};
\node[below] at (10,0) {\tiny{11}};
\node[below] at (11,0) {\tiny{12}};
\end{scope}
\end{tikzpicture}
\end{center}
\caption{FPL with its associated matching \label{fig:fplexample}}
\end{figure}
In each FPL configuration $F$ the chosen external edges are clearly linked by paths which do not cross each other.
We define $\pi(F)$ as the set of pairs $\{i,j\}$ of integers in $\{1,\ldots,2n\}$ such that the external edges labeled $i$ and $j$ are linked by a path in $F$.
Then $\pi(F)$ is a matching in the sense of Section~\ref{representations}; an example is given on the right of Figure~\ref{fig:fplexample}.
\begin{defi}[$A_\pi$]
For any matching $\pi$, we define $A_\pi$ as the number of FPLs $F$ such that $\pi(F)=\pi$.
\end{defi}
A result of Wieland~\cite{wieland} shows that a rotation on matchings leaves the numbers $A_\pi$ invariant, and it is then easily seen that conjugation of matchings also leaves them invariant:
\begin{thm}[\cite{wieland}]
\label{thm:invar_api}
For any matching $\pi$, we have $A_\pi=A_{r(\pi)}$ and $A_\pi=A_{\pi^*}$.
\end{thm}
Now we let $A_n$ be the total number of FPLs of size $n$; by definition we have $A_n=\sum_\pi A_\pi$ where $\pi$ goes through all matchings with $n$ arches.
We also define $A_{n}^V$ as the number of FPLs of size $n$ which are invariant with respect to vertical symmetry.
It is easily seen that $A_{2n}^V=0$.
We have the famous product expressions of these quantities:
\begin{align}
A_n&=\prod_{k=0}^{n-1} \frac{(3k+1)!}{(n+k)!}; \\
A_{2n+1}^V&= \frac{1}{2^n}\prod_{k=1}^n\frac{(6k-2)!(2k-1)!}{(4k-1)!(4k-2)!}.
\end{align}
The original proofs can be found in~\cite{Zeil-ASM,Kup-ASM} for $A_n$, and~\cite{MR1954236} for $A_{n}^V$.
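As a quick sanity check, the following short Python sketch (ours, not taken from those proofs) evaluates both product formulas for small $n$; exact rational arithmetic is used because the partial products are not integers.
\begin{verbatim}
from fractions import Fraction
from math import factorial

def A(n):                       # 1, 2, 7, 42, 429, ...
    val = Fraction(1)
    for k in range(n):
        val *= Fraction(factorial(3*k + 1), factorial(n + k))
    return int(val)

def AV(n):                      # n = 2m+1: 1, 1, 3, 26, ...
    m = (n - 1) // 2
    val = Fraction(1, 2**m)
    for k in range(1, m + 1):
        val *= Fraction(factorial(6*k - 2) * factorial(2*k - 1),
                        factorial(4*k - 1) * factorial(4*k - 2))
    return int(val)

print([A(n) for n in range(1, 6)])       # [1, 2, 7, 42, 429]
print([AV(n) for n in range(1, 8, 2)])   # [1, 1, 3, 26]
\end{verbatim}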
\subsection{$O(n)$ Loop model}
\label{sub:O1}
In this subsection we briefly explain the $O(n)$ Loop model with periodic boundary conditions; for more details see~\cite{artic47, hdr, dG-review}.
Let $n$ be an integer, and define a {\em state} as a column vector indexed by matchings of size $n$.
Let $e_i$ be the operator on matchings which creates a new arch at $(i,i+1)$ and joins the points formerly linked to $i$ and $i+1$, as shown in the following examples:
\begin{align*}
e_3
\begin{tikzpicture}[scale=0.25, baseline=-3pt]
\draw[arche] (0,0) .. controls (0,.5) and (1,.5) .. (1,0);
\draw[arche] (2,0) .. controls (2,1.5) and (5,1.5) .. (5,0);
\draw[arche] (3,0) .. controls (3,.5) and (4,.5) .. (4,0);
\draw[line] (-.5,0) -- (5.5,0);
\end{tikzpicture} =
\begin{tikzpicture}[scale=0.25, baseline=-3pt]
\draw[arche] (0,0) .. controls (0,.5) and (1,.5) .. (1,0);
\draw[arche] (2,0) .. controls (2,1.5) and (5,1.5) .. (5,0);
\draw[arche] (3,0) .. controls (3,.5) and (4,.5) .. (4,0);
\draw[line] (-.5,0) -- (5.5,0);
\draw[arche] (0,0) -- (0,-1);
\draw[arche] (1,0) -- (1,-1);
\draw[arche] (2,0) .. controls (2,-.5) and (3,-.5) .. (3,0);
\draw[arche] (2,-1) .. controls (2,-.5) and (3,-.5) .. (3,-1);
\draw[arche] (4,0) -- (4,-1);
\draw[arche] (5,0) -- (5,-1);
\draw[line] (-.5,-1) -- (5.5,-1);
\end{tikzpicture} &=
\begin{tikzpicture}[scale=0.25, baseline=-3pt]
\draw[arche] (0,0) .. controls (0,.5) and (1,.5) .. (1,0);
\draw[arche] (2,0) .. controls (2,.5) and (3,.5) .. (3,0);
\draw[arche] (4,0) .. controls (4,.5) and (5,.5) .. (5,0);
\draw[line] (-.5,0) -- (5.5,0);
\end{tikzpicture}\\
e_4
\begin{tikzpicture}[scale=0.25, baseline=-3pt]
\draw[arche] (0,0) .. controls (0,.5) and (1,.5) .. (1,0);
\draw[arche] (2,0) .. controls (2,1.5) and (5,1.5) .. (5,0);
\draw[arche] (3,0) .. controls (3,.5) and (4,.5) .. (4,0);
\draw[line] (-.5,0) -- (5.5,0);
\end{tikzpicture} =
\begin{tikzpicture}[scale=0.25, baseline=-3pt]
\draw[arche] (0,0) .. controls (0,.5) and (1,.5) .. (1,0);
\draw[arche] (2,0) .. controls (2,1.5) and (5,1.5) .. (5,0);
\draw[arche] (3,0) .. controls (3,.5) and (4,.5) .. (4,0);
\draw[line] (-.5,0) -- (5.5,0);
\draw[arche] (0,0) -- (0,-1);
\draw[arche] (1,0) -- (1,-1);
\draw[arche] (3,0) .. controls (3,-.5) and (4,-.5) .. (4,0);
\draw[arche] (3,-1) .. controls (3,-.5) and (4,-.5) .. (4,-1);
\draw[arche] (2,0) -- (2,-1);
\draw[arche] (5,0) -- (5,-1);
\draw[line] (-.5,-1) -- (5.5,-1);
\end{tikzpicture} &=
\begin{tikzpicture}[scale=0.25, baseline=-3pt]
\draw[arche] (0,0) .. controls (0,.5) and (1,.5) .. (1,0);
\draw[arche] (2,0) .. controls (2,1.5) and (5,1.5) .. (5,0);
\draw[arche] (3,0) .. controls (3,.5) and (4,.5) .. (4,0);
\draw[line] (-.5,0) -- (5.5,0);
\end{tikzpicture}
\end{align*}
The operator $e_0$ creates an arch linking the positions $1$ and $2n$.
Attached to these operators is the {\em Hamiltonian}
\[
\mathcal{H}_{2n}=\sum_{i=0}^{2n-1} (1-e_i),
\]
where $1$ is the identity.
$\mathcal{H}_{2n}$ acts naturally on states, and the groundstate $(\psi_\pi)_{\pi:|\pi|=n}$ attached to $\mathcal{H}_{2n}$ is defined as follows:
\begin{defi}[$\psi_\pi$]
\label{defi:psipi}
Let $n$ be a positive integer.
We define the groundstate in the $O(n)$ Loop model as the vector $\psi=(\psi_\pi)_{\pi:|\pi|=n}$ which is the solution of $\mathcal{H}_{2n}\psi=0$, normalized by $\psi_{()_n}=1$.
\end{defi}
By the Perron-Frobenius theorem, this is well defined.
We then have the following properties:
\begin{thm}
\label{th:propPsipi}
Let $n$ be a positive integer.
\begin{itemize}
\item For any $\pi$, $\psi_{r(\pi)}=\psi_{\pi^*}=\psi_{\pi}$.
\item The numbers $\psi_\pi$ are positive integers.
\item $\sum_\pi \psi_\pi = A_n$, where the sum is over matchings such that $|\pi|=n$.
\end{itemize}
\end{thm}
The stability by rotation and conjugation is clear from the symmetry of the problem.
The integral property was proved in~\cite[Section 4.4]{artic43}, while the sum rule was proved in~\cite{artic31}.
The computation of this groundstate has received a lot of interest, mainly because of the Razumov--Stroganov (ex-)conjecture.
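To make Definition~\ref{defi:psipi} concrete, here is a minimal Python sketch (our illustration, using \texttt{sympy}; not code from the literature) which builds the $e_i$ action on noncrossing matchings for $n=3$, assembles $\mathcal{H}_{2n}$, and solves $\mathcal{H}_{2n}\psi=0$; it recovers the components $(1,1,1,2,2)$ and the sum rule $\sum_\pi \psi_\pi = A_3 = 7$.
\begin{verbatim}
import sympy as sp

def all_matchings(n):
    # noncrossing perfect matchings of {0,...,2n-1}, as frozensets
    # of ordered pairs (a, b) with a < b
    def rec(pts):
        if not pts:
            yield frozenset()
            return
        a = pts[0]
        for k in range(1, len(pts), 2):
            for m1 in rec(pts[1:k]):
                for m2 in rec(pts[k+1:]):
                    yield m1 | m2 | {(a, pts[k])}
    return list(rec(list(range(2 * n))))

def apply_e(m, i, n):
    # e_i glues sites i and i+1 (mod 2n) and joins their old partners
    j = (i + 1) % (2 * n)
    partner = {}
    for a, b in m:
        partner[a], partner[b] = b, a
    if partner[i] == j:          # closed loop: e_i m = m (weight 1 here)
        return m
    keep = {(a, b) for a, b in m if not {a, b} & {i, j}}
    keep.add(tuple(sorted((i, j))))
    keep.add(tuple(sorted((partner[i], partner[j]))))
    return frozenset(keep)

n = 3
Ms = all_matchings(n)
idx = {m: k for k, m in enumerate(Ms)}
H = sp.zeros(len(Ms), len(Ms))
for k, m in enumerate(Ms):
    for i in range(2 * n):
        H[k, k] += 1                       # identity part of (1 - e_i)
        H[idx[apply_e(m, i, n)], k] -= 1   # -e_i part
psi = H.nullspace()[0]
nested = frozenset((i, 2*n - 1 - i) for i in range(n))
psi = psi / psi[idx[nested]]       # normalization psi_{()_n} = 1
print(sorted(psi), sum(psi))       # [1, 1, 1, 2, 2] and A_3 = 7
\end{verbatim}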
\subsection{The Razumov--Stroganov conjecture}
A simple computation shows that
\begin{align*}
\psi_{
\begin{tikzpicture}[scale=0.15]
\draw[arche] (0,0) .. controls (0,.5) and (1,.5) .. (1,0);
\draw[arche] (2,0) .. controls (2,.5) and (3,.5) .. (3,0);
\draw[arche] (4,0) .. controls (4,.5) and (5,.5) .. (5,0);
\end{tikzpicture}}
&=2&
\psi_{
\begin{tikzpicture}[scale=0.15]
\draw[arche] (0,0) .. controls (0,1.5) and (5,1.5) .. (5,0);
\draw[arche] (1,0) .. controls (1,.5) and (2,.5) .. (2,0);
\draw[arche] (3,0) .. controls (3,.5) and (4,.5) .. (4,0);
\end{tikzpicture}}
&=2 &
\psi_{
\begin{tikzpicture}[scale=0.15]
\draw[arche] (0,0) .. controls (0,1) and (3,1) .. (3,0);
\draw[arche] (1,0) .. controls (1,.5) and (2,.5) .. (2,0);
\draw[arche] (4,0) .. controls (4,.5) and (5,.5) .. (5,0);
\end{tikzpicture}}
&=1\\
\psi_{
\begin{tikzpicture}[scale=0.15]
\draw[arche] (0,0) .. controls (0,.5) and (1,.5) .. (1,0);
\draw[arche] (2,0) .. controls (2,1) and (5,1) .. (5,0);
\draw[arche] (3,0) .. controls (3,.5) and (4,.5) .. (4,0);
\end{tikzpicture}}
&=1 &
\psi_{
\begin{tikzpicture}[scale=0.15]
\draw[arche] (0,0) .. controls (0,1.5) and (5,1.5) .. (5,0);
\draw[arche] (1,0) .. controls (1,1) and (4,1) .. (4,0);
\draw[arche] (2,0) .. controls (2,.5) and (3,.5) .. (3,0);
\end{tikzpicture}}
&=1
\end{align*}
which are exactly the numbers that appear in the FPL counting:
\medskip
\[
\begin{tikzpicture}[scale=.3]
\draw[step=1,ajuda] (-.5,-.5) grid (2.5,2.5);
\draw[FPL] (-.5,0) -- (0,0) -- (0,1) -- (0,2) -- (-.5,2);
\draw[FPL] (1,-.5) -- (1,0) -- (1,1) -- (2,1) -- (2,0) -- (2.5,0);
\draw[FPL] (1,2.5) -- (1,2) -- (2,2) -- (2.5,2);
\draw[shift={(4,0)},step=1,ajuda] (-.5,-.5) grid (2.5,2.5);
\draw[shift={(4,0)},FPL] (-.5,0) -- (0,0) -- (0,1) -- (0,2) -- (-.5,2);
\draw[shift={(4,0)},FPL] (1,-.5) -- (1,0) -- (2,0) -- (2.5,0);
\draw[shift={(4,0)},FPL] (1,2.5) -- (1,2) -- (1,1) -- (2,1) -- (2,2) -- (2.5,2);
\draw[snake=brace,mirror snake] (-.5,-1)--(6.5,-1);
\draw[shift={(1.75,-2.5)},arche] (0,0)..controls(0,.25)and(.5,.25)..(.5,0);
\draw[shift={(1.75,-2.5)},arche] (1,0)..controls(1,.25)and(1.5,.25)..(1.5,0);
\draw[shift={(1.75,-2.5)},arche] (2,0)..controls(2,.25)and(2.5,.25)..(2.5,0);
\draw[shift={(8,0)},step=1,ajuda] (-.5,-.5) grid (2.5,2.5);
\draw[shift={(8,0)},FPL] (-.5,0) -- (0,0) -- (0,1) -- (1,1) -- (1,0)-- (1,-.5);
\draw[shift={(8,0)},FPL] (2.5,2) -- (2,2) -- (2,1) -- (2,0) -- (2.5,0);
\draw[shift={(8,0)},FPL] (1,2.5) -- (1,2) -- (0,2) -- (-.5,2);
\draw[shift={(12,0)},step=1,ajuda] (-.5,-.5) grid (2.5,2.5);
\draw[shift={(12,0)},FPL] (-.5,0) -- (0,0) -- (1,0) -- (1,-.5);
\draw[shift={(12,0)},FPL] (2.5,2) -- (2,2) -- (2,1) -- (2,0) -- (2.5,0);
\draw[shift={(12,0)},FPL] (1,2.5) -- (1,2) -- (1,1) -- (0,1) -- (0,2) -- (-.5,2);
\draw[snake=brace,mirror snake] (7.5,-1)--(14.5,-1);
\draw[shift={(9.75,-2.5)},arche] (0,0)..controls(0,.75)and(2.5,.75)..(2.5,0);
\draw[shift={(9.75,-2.5)},arche] (.5,0)..controls(.5,.25)and(1,.25)..(1,0);
\draw[shift={(9.75,-2.5)},arche] (1.5,0)..controls(1.5,.25)and(2,.25)..(2,0);
\draw[shift={(16,0)},step=1,ajuda] (-.5,-.5) grid (2.5,2.5);
\draw[shift={(16,0)},FPL] (-.5,0) -- (0,0) -- (1,0) -- (1,-.5);
\draw[shift={(16,0)},FPL] (-.5,2) -- (0,2) -- (0,1) -- (1,1) -- (2,1) -- (2,0) -- (2.5,0);
\draw[shift={(16,0)},FPL] (1,2.5) -- (1,2) -- (2,2) -- (2.5,2);
\draw[snake=brace,mirror snake] (15.5,-1)--(18.5,-1);
\draw[shift={(15.75,-2.5)},arche] (0,0)..controls(0,.25)and(.5,.25)..(.5,0);
\draw[shift={(15.75,-2.5)},arche] (1,0)..controls(1,.5)and(2.5,.5)..(2.5,0);
\draw[shift={(15.75,-2.5)},arche] (1.5,0)..controls(1.5,.25)and(2,.25)..(2,0);
\draw[shift={(20,0)},step=1,ajuda] (-.5,-.5) grid (2.5,2.5);
\draw[shift={(20,0)},FPL] (-.5,0) -- (0,0) -- (0,1) -- (0,2) -- (-.5,2);
\draw[shift={(20,0)},FPL] (1,-.5) -- (1,0) -- (1,1) -- (1,2) -- (1,2.5);
\draw[shift={(20,0)},FPL] (2.5,0) -- (2,0) -- (2,1) -- (2,2) -- (2.5,2);
\draw[snake=brace,mirror snake] (19.5,-1)--(22.5,-1);
\draw[shift={(19.75,-2.5)},arche] (0,0)..controls(0,.5)and(1.5,.5)..(1.5,0);
\draw[shift={(19.75,-2.5)},arche] (.5,0)..controls(.5,.25)and(1,.25)..(1,0);
\draw[shift={(19.75,-2.5)},arche] (2,0)..controls(2,.25)and(2.5,.25)..(2.5,0);
\draw[shift={(24,0)},step=1,ajuda] (-.5,-.5) grid (2.5,2.5);
\draw[shift={(24,0)},FPL] (-.5,0) -- (0,0) -- (0,1) -- (1,1) -- (2,1) -- (2,2) -- (2.5,2);
\draw[shift={(24,0)},FPL] (1,-.5) -- (1,0) -- (2,0) -- (2.5,0);
\draw[shift={(24,0)},FPL] (1,2.5) -- (1,2) -- (0,2) -- (-.5,2);
\draw[snake=brace,mirror snake] (23.5,-1)--(26.5,-1);
\draw[shift={(23.75,-2.5)},arche] (0,0)..controls(0,.75)and(2.5,.75)..(2.5,0);
\draw[shift={(23.75,-2.5)},arche] (.5,0)..controls(.5,.5)and(2,.5)..(2,0);
\draw[shift={(23.75,-2.5)},arche] (1,0)..controls(1,.25)and(1.5,.25)..(1.5,0);
\end{tikzpicture}
\]
\medskip
Razumov and Stroganov~\cite{RS-conj} noticed in 2001 that this seems to hold in general, and this was recently proved by Cantini and Sportiello~\cite{ProofRS}:
\begin{thm}[Stroganov--Razumov--Cantini--Sportiello Theorem]
\label{conj:rs}
The groundstate components of the $O(n)$ Loop model count the number of FPL configurations: for any matching $\pi$,
\[
\psi_\pi=A_{\pi}.
\]
\end{thm}
The proof of Cantini and Sportiello consists in verifying that the relations of Definition~\ref{defi:psipi} hold for the numbers $A_\pi$.
We note also that the results of Theorem~\ref{th:propPsipi} are now a corollary of the Razumov--Stroganov conjecture.
\subsection{Matchings with nested arches and polynomials}
In~\cite{Zuber-conj}, Zuber computed $\psi_{(\pi)_p}$ for some small matchings $\pi$ and $p=0,1,2,\ldots$ Among other things, he conjectured the following:
\begin{thm}[{\cite{CKLN,artic47}}]
\label{zuber}
For any matching $\pi$ and $p$ a nonnegative integer, the quantity $A_{(\pi)_p}$ can be written in the following form:
\[
A_{(\pi)_p}=\frac{P_\pi (p)}{d(\pi)!},
\]
where $P_\pi (p)$ is a polynomial in $p$ of degree $d(\pi)$ with integer coefficients, and leading coefficient equal to $d(\pi)!/H_{Y(\pi)}$.
\end{thm}
This was proved first by Caselli, Krattenthaler, Lass and Nadeau in~\cite{CKLN} for $A_{(\pi)_p}$, and by Fonseca and Zinn-Justin in~\cite{artic47} for $\psi_{(\pi)_p}$.
Because of this polynomiality property, we introduce the following notations:
\begin{defi}[$A_\pi(t)$ and $\psi_\pi(t)$]
We let $A_\pi(t)$ (respectively $\psi_\pi(t)$) be the polynomial in $t$ such that $A_\pi(p)=A_{(\pi)_p}$ (resp. $\psi_\pi(p)=\psi_{(\pi)_p}$) for all integers $p\geq 0$.
\end{defi}
By the Razumov--Stroganov conjecture~\ref{conj:rs} one has clearly for all $\pi$:
\[
A_\pi(t)=\psi_\pi(t).
\]
We introduced two different notations so that the origin of the quantities involved becomes clearer; in most of this paper however we will only use the notation $\psi_\pi(t)$.
The following proposition sums up some properties of the polynomials.
\begin{prop}
\label{prop:polynomials}
The polynomial $\psi_\pi(t)$ has degree $d(\pi)$ and leading coefficient $1/H_{Y(\pi)}$.
Furthermore, we have $\psi_\pi(t)=\psi_{\pi^*}(t)$, and $\psi_{(\pi)_\ell}(t)=\psi_{\pi}(t+\ell)$ for any nonnegative integer $\ell$.
\end{prop}
The first part comes from Theorem~\ref{zuber}, while the rest is clear when $t$ is a nonnegative integer and thus holds true in general by polynomiality in $t$.
\section{Conjectures}\label{sec:conj}
The aim of this article is to prove two conjectures presented in~\cite{negative} about the polynomials $\psi_\pi (t)$ for negative $t$.
In fact, when computing these quantities, it is natural to add an extra parameter $\tau$, \emph{i.e.}\xspace there is a bivariate polynomial $\psi_\pi (\tau,t)$ which has the same properties as $\psi_\pi (t)$ and coincides with it at $\tau=1$.
In Section~\ref{sec:CPL_multi}, where we explain how to compute the $\psi_\pi (t)$, the origin of this parameter will be made more clear.
For now, it will be enough to think of this parameter as a refinement.
\subsection{Integer roots}
Let $\pi$ be a matching, represented by a link pattern, and $|\pi|=n$ its number of arches.
Define $\hat{x}:=2n+1-x$.
\begin{defi}[$m_p (\pi)$]
Let $p$ be an integer between $1$ and $n-1$.
We consider the set $\mathcal{A}_p^L (\pi)$ of arches $\{a_1,a_2\}$ such that $a_1\leq p$ and $p<a_2<\hat{p}$, and the set $\mathcal{A}_p^R (\pi)$ of arches $\{a_1,a_2\}$ such that $p<a_1<\hat{p}$ and $a_2\geq \hat{p}$.
It is clear that $\left|\mathcal{A}_p^L(\pi)\right|+\left|\mathcal{A}_p^R(\pi)\right|$ is an even nonnegative integer, and we can thus define the nonnegative integer
\[
m_p(\pi) := \frac{\left|\mathcal{A}_p^L(\pi)\right|+\left|\mathcal{A}_p^R(\pi)\right|}{2}.
\]
\end{defi}
For example, let $\pi$ be the following matching with eight arches.
For $p=4$, we get $\left|\mathcal{A}_p^L(\pi)\right|=3$ and $\left|\mathcal{A}_p^R(\pi)\right|=1$, which count arches between the regions (O) and (I), thus $m_4(\pi)=2$.
In the figure on the right we give an alternative representation obtained by folding the link pattern; it is then clear that $m_p(\pi)$ is half of the number of arches linking (O) with (I).
\[
\begin{tikzpicture}[scale=.3]
\draw[ajuda] (-.5,0)--(15.5,0);
\draw[arche] (0,0) .. controls (0,4) and (15,4) .. (15,0);
\draw[arche] (1,0) .. controls (1,2) and (8,2) .. (8,0);
\draw[arche] (2,0) .. controls (2,1) and (5,1) .. (5,0);
\draw[arche] (3,0) .. controls (3,.5) and (4,.5) .. (4,0);
\draw[arche] (6,0) .. controls (6,.5) and (7,.5) .. (7,0);
\draw[arche] (9,0) .. controls (9,1.5) and (14,1.5) .. (14,0);
\draw[arche] (10,0) .. controls (10,.5) and (11,.5) .. (11,0);
\draw[arche] (12,0) .. controls (12,.5) and (13,.5) .. (13,0);
\node[below] at (0,0) {\tiny{$1$}};
\node[below] at (1,0) {\tiny{$2$}};
\node[below] at (2,0) {\tiny{$3$}};
\node[below] at (3,0) {\tiny{$4$}};
\node[below] at (4,0) {\tiny{$5$}};
\node[below] at (5,0) {\tiny{$6$}};
\node[below] at (6,0) {\tiny{$7$}};
\node[below] at (7,0) {\tiny{$8$}};
\node[below] at (8,0) {\tiny{$\hat{8}$}};
\node[below] at (9,0) {\tiny{$\hat{7}$}};
\node[below] at (10,0) {\tiny{$\hat{6}$}};
\node[below] at (11,0) {\tiny{$\hat{5}$}};
\node[below] at (12,0) {\tiny{$\hat{4}$}};
\node[below] at (13,0) {\tiny{$\hat{3}$}};
\node[below] at (14,0) {\tiny{$\hat{2}$}};
\node[below] at (15,0) {\tiny{$\hat{1}$}};
\draw[gray,dashed] (3.5,-1) -- (3.5,5.5);
\draw[gray,dashed] (11.5,-1) -- (11.5,5.5);
\node at (1.5,5) {(O)};
\node at (7.5,5) {(I)};
\node at (13.5,5) {(O)};
\end{tikzpicture}
\qquad\qquad\qquad
\begin{tikzpicture}[scale=.3]
\draw[ajuda] (-.5,0)--(7.5,0);
\draw[ajuda] (-.5,3)--(7.5,3);
\draw[arche] (0,0)--(0,3);
\draw[arche] (1,0) .. controls (1,1.75) and (7,1.25) .. (7,3);
\draw[arche] (2,0) .. controls (2,1) and (5,1) .. (5,0);
\draw[arche] (3,0) .. controls (3,.5) and (4,.5) .. (4,0);
\draw[arche] (6,0) .. controls (6,.5) and (7,.5) .. (7,0);
\draw[arche] (6,3) .. controls (6,1.5) and (1,1.5) .. (1,3);
\draw[arche] (5,3) .. controls (5,2.5) and (4,2.5) .. (4,3);
\draw[arche] (3,3) .. controls (3,2.5) and (2,2.5) .. (2,3);
\node[below] at (0,0) {\tiny{$1$}};
\node[below] at (1,0) {\tiny{$2$}};
\node[below] at (2,0) {\tiny{$3$}};
\node[below] at (3,0) {\tiny{$4$}};
\node[below] at (4,0) {\tiny{$5$}};
\node[below] at (5,0) {\tiny{$6$}};
\node[below] at (6,0) {\tiny{$7$}};
\node[below] at (7,0) {\tiny{$8$}};
\node[above] at (0,3) {\tiny{$\hat{1}$}};
\node[above] at (1,3) {\tiny{$\hat{2}$}};
\node[above] at (2,3) {\tiny{$\hat{3}$}};
\node[above] at (3,3) {\tiny{$\hat{4}$}};
\node[above] at (4,3) {\tiny{$\hat{5}$}};
\node[above] at (5,3) {\tiny{$\hat{6}$}};
\node[above] at (6,3) {\tiny{$\hat{7}$}};
\node[above] at (7,3) {\tiny{$\hat{8}$}};
\draw[gray,dashed] (3.5,-1) -- (3.5,5.5);
\node at (1.5,5) {(O)};
\node at (5.5,5) {(I)};
\end{tikzpicture}
\]
The reader can check that $m_p=0,1,2,2,2,1,1$ for $p=1,\ldots,7$.
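This check is easily automated; the following sketch (ours) encodes the arches of the example above and computes $m_p(\pi)$ directly from the definition.
\begin{verbatim}
# arches of the example matching, as pairs (a1, a2), 1 <= a1 < a2 <= 2n
arches = [(1, 16), (2, 9), (3, 6), (4, 5), (7, 8),
          (10, 15), (11, 12), (13, 14)]
n = 8

def m_p(arches, n, p):
    hat = lambda x: 2*n + 1 - x
    L = sum(1 for a1, a2 in arches if a1 <= p and p < a2 < hat(p))
    R = sum(1 for a1, a2 in arches if p < a1 < hat(p) and a2 >= hat(p))
    return (L + R) // 2

print([m_p(arches, n, p) for p in range(1, n)])  # [0, 1, 2, 2, 2, 1, 1]
\end{verbatim}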
It was conjectured in~\cite{negative} that these numbers correspond to the multiplicity of the real roots of $\psi_\pi (t)$:
\begin{conj}\label{conj:realroots}
All the real roots of the polynomials $\psi_{\pi}(t)$ are negative integers, and $-p$ appears with multiplicity $m_p(\pi)$. Equivalently, we have a factorization:
\[
\psi_{\pi}(t) = \frac{1}{d(\pi)!} \cdot \left(\prod_{p=1}^{|\pi|-1} (t+p)^{m_p(\pi)}\right)\cdot Q_{\pi} (t),
\]
where $Q_{\pi} (t)$ is a polynomial with integer coefficients and no real roots.
\end{conj}
In the context of the $O(n)$ Loop model, it is natural to have an extra parameter $\tau$. We then have $\psi_{\pi} (\tau,t)$, which coincides with $\psi_{\pi} (t)$ whenever $\tau = 1$.
The previous conjecture seems to hold in this setting too, the only difference being that $Q_\pi$ now depends on $\tau$.
In Section~\ref{sec:deco}, we prove a weaker version of this conjecture: $\psi_\pi (\tau,-p)=0$ if $m_p (\pi)\neq 0$.
\subsection{Values at negative $p$}
We are now interested in the value of the polynomial $\psi_\pi (\tau,-p)$ for integer values $0 \leq p \leq n$.
It has already been conjectured that it vanishes if $m_p (\pi) \neq 0$.
So let $\pi$ be a matching and $p$ such that $m_p (\pi)$ vanishes.
It means that there are no arches that link the outer part with the inner part of $\pi$.
\emph{I.e.}\xspace we can define a matching sitting in the outer part (denote it by $\alpha$) and another in the inner part (denote it by $\beta$), as shown in the picture:
\[
\pi=
\begin{tikzpicture}[scale=.25,baseline=0pt]
\fill [blue!10!white] (4,0) .. controls (4,4) and (-4,4) .. (-4,0) -- (-2,0) .. controls (-2,2) and (2,2) .. (2,0) -- cycle;
\draw [green, snake=brace, mirror snake, segment amplitude=1pt] (-4,0) -- (-2,0);
\draw [green, snake=brace, mirror snake, segment amplitude=1pt] (2,0) -- (4,0);
\draw [black] (-3,-.5) node {\tiny $p$};
\draw [black] (3,-.5) node {\tiny $p$};
\draw [black] (0,0) node {$\beta$};
\draw [black] (0,2) node {$\alpha$};
\end{tikzpicture}
\]
We introduce the notation $\pi = \alpha \circ \beta$ to describe this situation.
We need one more definition:
\begin{defi}[$g_\pi$] For any matching $\pi$ we define
\[
g_\pi (\tau):=\psi_{\pi}(\tau,-|\pi|),
\]
and $g_\pi := g_\pi (1)$.
\end{defi}
We are now ready to present the main result of this article:
\begin{thm}[Generalization of Conjecture 3.8 of~\cite{negative}]\label{thm:dec}
Let $\pi$ be a matching and $p$ be an integer between $1$ and $|\pi|-1$ such that $m_p(\pi)=0$, and write $\pi=\alpha \circ \beta$ with $|\alpha|=p$. We then have the following factorization:
\[
\psi_{\pi}(\tau,-p)= g_\alpha(\tau) \psi_{\beta}(\tau).
\]
\end{thm}
Notice that we are reducing the number of unknowns from $c_n$ to $c_p+c_{n-p}-1$.
The proof is postponed to Section~\ref{sec:deco}.
\subsection{Sum rules}
It has been proved in~\cite{tese} that these numbers $g_\pi (\tau)$ have some interesting properties.
For example, $g_{\pi} (-\tau) = (-1)^{d(\pi)} g_{\pi} (\tau)$; moreover:
\begin{thm}[\cite{tese}]\label{thm:sum_Gtau}
We have the sum rule:
\[
\sum_{\pi:|\pi|=n} g_\pi (\tau) = \sum_{\pi:|\pi|=n} \psi_\pi (-\tau).
\]
\end{thm}
This can be used to partially prove Conjecture 3.11 of~\cite{negative}.
Notice that, according to the Conjecture~\ref{conj:realroots}, $(-1)^{d(\pi)}g_\pi = |g_\pi|$.
\begin{thm}\label{thm:sum_G}
For any positive integer $n$, we have
\begin{align}
\sum_{\pi:|\pi|=n} (-1)^{d(\pi)} g_\pi &= A_n \label{eq:sum_G_1} \\
\sum_{\pi:|\pi|=n} g_\pi &= (-1)^{\frac{n(n-1)}{2}}\left(A^V_n\right)^2 \label{eq:sum_G_2},
\end{align}
where $d(\pi)$ is the number of boxes of the Young diagram $Y(\pi)$.
\end{thm}
The first equation~\eqref{eq:sum_G_1} has been proved in~\cite{tese}, and it follows from Theorem~\ref{thm:sum_Gtau}. Here we will prove the second equation~\eqref{eq:sum_G_2}. The point is that $\sum_{\pi} g_{\pi} = \sum_{\pi} \psi_{\pi} (-1)$ is equivalent to the $-1$ enumeration of TSSCPPs which appears in Di Francesco's article~\cite{DF-qKZ-TSSCPP}, and this can be computed; see Section~\ref{sec:-enum} for the details.
\section{Multivariate solutions of the $O(n)$ Loop model}\label{sec:CPL_multi}
In this section, we briefly describe a multivariate version of the $O(n)$ Loop model.
This version, although more complicated, is useful for the proof of Theorem~\ref{thm:dec}.
The $O(n)$ model is an integrable model, meaning that there is an operator called the $\check{R}$-matrix which obeys the Yang--Baxter equation.
We can add new parameters $\{z_1,z_2,\ldots,z_{2n}\}$, called spectral parameters, which will characterize each column.
In this multivariate setting, the groundstate depends on the $2n$ spectral parameters (and on an extra parameter $q$); in fact, the components of the groundstate can be normalized such that they are homogeneous polynomials $\Psi_\pi (z_1,\ldots,z_{2n})$ of degree $n(n-1)$.
The important fact about these solutions is that we recover the solutions of the $O(n)$ model, as stated in Section~\ref{sub:O1}, in the limit $z_i=1$ for all $i$, and $q=e^{2\pi i/3}$.
We shall not describe this model in detail here; a detailed description can be found in~\cite{artic47, hdr, artic43, tese}.
In order to simplify notation we will use $z=\{z_1,\ldots,z_{2n}\}$; thus we will often write $\Psi_\pi (z)$.
Notice that these polynomials depend on $q$, but we omit this dependence.
\subsection{The quantum Knizhnik--Zamolodchikov equation}\label{sec:qKZ}
The groundstate of the multivariate $O(n)$ model is known to solve the quantum Knizhnik--Zamolodchikov (qKZ) equation at the special value $q=e^{2\pi i/3}$.
See a complete explanation in~\cite{hdr,tese}.
The qKZ equation was first introduced in a paper by Frenkel and Reshetikhin~\cite{FR-qkz}.
Here we use the version introduced by Smirnov~\cite{Smi}.
Let the $\check{R}$-Matrix be the following operator,
\begin{equation}\label{eq:R-matrix}
\check{R}_i(z_i,z_{i+1}) = \frac{q z_{i+1}-q^{-1}z_i}{q z_i-q^{-1} z_{i+1}} Id + \frac{z_{i+1}-z_i}{q z_i - q^{-1}z_{i+1}} e_i.
\end{equation}
The quantum Knizhnik--Zamolodchikov equation consists of the following conditions:
\begin{itemize}
\item The \emph{exchange} equation:
\begin{equation}\label{eq:exc}
\check{R}_i(z_i,z_{i+1}) \Psi(z_1,\ldots,z_i,z_{i+1},\ldots,z_{2n}) = \Psi(z_1,\ldots,z_{i+1},z_i,\ldots,z_{2n}),
\end{equation}
for $i=1,\ldots,2n-1$.
\item The \emph{rotation} equation:
\begin{equation}\label{eq:rot}
\rho^{-1} \Psi (z_1,z_2,\ldots,z_{2n})=\kappa \Psi (z_2,\ldots,z_{2n},s z_1),
\end{equation}
where $\kappa$ is a constant fixed by consistency: since $\rho^{-2n}=1$, one needs $\kappa^{2n}=s^{n(n-1)}$. In our case $s=q^6$ and $\kappa=q^{3(n-1)}$.
\end{itemize}
\subsection{Solutions of the quantum Knizhnik--Zamolodchikov equation}\label{sec:sol_qKZ}
We start by stating, without proof, some properties of the solutions of the qKZ equation.
\begin{itemize}
\item The solutions are homogeneous polynomials in $2n$ variables;
\item The total degree is $n(n-1)$ and the individual degree in each $z_i$ is $n-1$;
\item They obey to the \emph{wheel condition}:
\[
\left.P(z_1,\ldots,z_{2n})\right|_{z_k=q^2 z_j = q^4 z_i} =0 \qquad \forall \, k>j>i.
\]
\end{itemize}
In fact these three properties define a vector space:
\begin{defi}[$\mathcal{V}_n$]
We define $\mathcal{V}_n$ as the vector space of all homogeneous polynomials in $2n$ variables, with total degree $\delta=n(n-1)$ and individual degree $\delta_i=n-1$ in each variable, which obey the \emph{wheel condition}.
\end{defi}
This vector space has dimension $c_n$, exactly the number of matchings of size $|\pi|=n$.
Moreover, the polynomials $\Psi_\pi(z)$ verify the following important lemma:
\begin{lemma}[\cite{artic41}]\label{lem:dual}
Let $q^\epsilon=\{q^{\epsilon_1},\ldots,q^{\epsilon_{2n}}\}$, where $\epsilon_i=\pm 1$ are such that changing $q^{-1}$ into ``$($'' and changing $q$ into ``$)$'' gives a valid parenthesis word $\pi(\epsilon)$. Then
\[
\Psi_\pi(q^\epsilon) = \tau^{d(\pi)} \delta_{\pi,\epsilon},
\]
where $\delta_{\pi,\epsilon}=1$ if $\pi(\epsilon)=\pi$ and $0$ otherwise; $\tau$ is related to $q$ by the formula $\tau=-q-q^{-1}$.
\end{lemma}
Since there are $c_n$ polynomials $\Psi_\pi(z)$, this lemma shows that these polynomials form a basis of $\mathcal{V}_n$.
Thus a polynomial in this space is determined by its values at the points $q^\epsilon$.
\subsection{A different approach}\label{sec:base_a}
We now define another set of polynomials, introduced in~\cite{artic41}, $\Phi_a (z_1,\ldots,z_{2n})$ (indexed by the increasing sequences defined earlier), by the integral formula:
\begin{multline}\label{eq:qKZ_var}
\Phi_a(z)= k_n
\prod_{1\le i<j\le 2n} (qz_i-q^{-1}z_j)\\ \times \oint\ldots\oint \prod_{i=1}^n \frac{dw_i}{2\pi i} \frac{\prod_{1\le i<j\le n}(w_j-w_i)(qw_i-q^{-1}w_j)}{\prod_{1\le k\leq a_i}(w_i-z_k)\prod_{a_i<k\le 2n}(qw_i-q^{-1}z_k)},
\end{multline}
where the integral is performed around the $z_i$ but not around $q^{-2} z_i$, and $k_n=(q-q^{-1})^{-n(n-1)}$.
It is relatively easy to check that these polynomials belong to the vector space $\mathcal{V}_n$, so we can write:
\[
\Phi_a (z)=\sum_\pi C_{a,\pi} (\tau) \Psi_\pi (z),
\]
where $C_{a,\pi}(\tau)$ are the coefficients given by the formula:
\[
\Phi_a (q^\epsilon) = \tau^{d(\epsilon)} C_{a,\epsilon} (\tau).
\]
An algorithm to compute the coefficients $C_{a,\pi} (\tau)$ is given in~\cite[Appendix A]{artic41}.
We just need the following facts:
\begin{prop}[{\cite[Lemma 3]{artic47}}]
\label{prop:Bases}
Let $a$ and $\pi$ be two matchings. Then we have:
\[
C_{a,\pi}(\tau)=\begin{cases}
0 & \textrm{if } \pi \npreceq a;\\
1 & \textrm{if } \pi=a;\\
P_{a,\pi} (\tau) & \textrm{if } \pi \prec a,
\end{cases}
\]
where $P_{a,\pi}(\tau)$ is a polynomial in $\tau$ with degree $\leq d(a)-d(\pi)-2$.
\end{prop}
Moreover, we have
\begin{equation}
\label{eq:capi-tau}
C_{a,\pi}(\tau)=(-1)^{d(a)-d(\pi)} C_{a,\pi}(-\tau),
\end{equation}
since it is a product of polynomials $U_s$ in $\tau$ with degree of the form $d(a)-d(\pi)-2k$, $k\in \mathbb{N}$, and parity given by $d(a)-d(\pi)$: this is an easy consequence of~\cite[p.12 and Appendix C]{artic47}.
By abuse of notation, we write $(a)_p$ to represent $\{1,\ldots,p,p+a_1,\ldots,p+a_n\}$, since this corresponds indeed to adding $p$ nested arches to $\pi(a)$ via the bijections of Section~\ref{sec:defi}. Then
one easy but important lemma for us is the following:
\begin{lemma}[{\cite[Lemma 4]{artic47}}]
\label{lem:sta}
The coefficients $C_{a,\pi}(\tau)$ are stable, that is:
\[
C_{(a)_p,(\pi)_p}(\tau)=C_{a,\pi}(\tau) \qquad \forall p\in \mathbb{N}.
\]
\end{lemma}
We remark that Proposition~\ref{prop:Bases}, Equation~\eqref{eq:capi-tau} and Lemma~\ref{lem:sta} also hold for the coefficients $C^{-1}_{a,\pi}(\tau)$ of the inverse matrix.
\subsection{The homogeneous limit}\label{sec:pol_qKZ}
The bivariate polynomials $\psi_{\pi}(\tau,t)$ are defined as the homogeneous limit of the previous multivariate polynomials (\emph{i.e.}\xspace $z_i=1$ for all $i$).
Though we are mostly interested in the case $\tau=1$, where we recover the groundstate $\psi_{\pi}(t)=\psi_{\pi}(1,t)$ as explained in~\cite{hdr}, the variable $\tau$ will be useful for the proofs presented here.
Define $\phi_a (\tau):=\Phi_a (1,\ldots,1)$\footnote{Notice that $\Phi_a(z)$ depends on $q$, even if we do not write it explicitly.}.
Using the change of variables
\[
u_i = \frac{w_i-1}{q w_i - q^{-1}},
\]
we obtain the formula:
\begin{equation}
\phi_{a}(\tau) = \oint \ldots \oint \prod_i \frac{du_i}{2 \pi i u_i^{a_i}} \prod_{j>i} (u_j-u_i) (1+\tau u_j+u_i u_j).
\end{equation}
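Concretely, each contour integral extracts the coefficient of $u_i^{a_i-1}$ of a polynomial integrand, so $\phi_a(\tau)$ can be computed by constant-term extraction; the following \texttt{sympy} sketch (ours) does this and reproduces the $n=2$ values $\phi_{(1,2)}=1$ and $\phi_{(1,3)}=\tau$.
\begin{verbatim}
import sympy as sp
tau = sp.Symbol('tau')

def phi(a):
    # the residue in u_i picks the coefficient of u_i**(a_i - 1)
    n = len(a)
    u = sp.symbols('u0:%d' % n)
    F = sp.expand(sp.prod([(u[j] - u[i]) * (1 + tau*u[j] + u[i]*u[j])
                           for i in range(n) for j in range(i+1, n)]))
    for i in range(n):
        F = sp.expand(F).coeff(u[i], a[i] - 1)
    return sp.expand(F)

assert phi((1, 2)) == 1      # n = 2, nested matching
assert phi((1, 3)) == tau    # n = 2, two side-by-side arches
print(phi((1, 3, 5)))        # a polynomial in tau for n = 3
\end{verbatim}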
We can then obtain the $\psi_\pi(\tau)$ via the matrix $C(\tau)$:
\begin{align}
\phi_a (\tau)=&\sum_\pi C_{a,\pi}(\tau) \psi_\pi (\tau);\label{eq:psiphi1}\\
\psi_\pi (\tau)=&\sum_a C^{-1}_{\pi,a}(\tau) \phi_a (\tau).\label{eq:psiphi2}
\end{align}
Let $\hat{a}_i$ be the components of $(a)_p$.
Now
\begin{align*}
\phi_{(a)_p} (\tau) =& \oint\ldots\oint \prod_{i=1}^{n+p} \frac{du_i}{2\pi i u_i^{\hat{a}_i}} \prod_{j>i} (u_j-u_i)(1+\tau u_j+u_i u_j)\\
=& \oint\ldots\oint \prod_{i=1}^{n} \frac{du_i}{2\pi i u_i^{a_i}} (1+\tau u_i)^p \prod_{j>i} (u_j-u_i)(1+\tau u_j+u_i u_j),
\end{align*}
where we integrated over the first $p$ variables and renamed the remaining ones $u_{p+i}\mapsto u_i$.
This is a polynomial in $p$, and we will naturally denote by $\phi_{a} (\tau,t)$ the polynomial such that $\phi_{a} (\tau,p)=\phi_{(a)_p}(\tau)$.
Finally, from Equation~\eqref{eq:psiphi2} and Lemma~\ref{lem:sta} we obtain the fundamental equation
\begin{equation}
\label{eq:psitauphitau}
\psi_\pi (\tau,t) = \sum_a C^{-1}_{\pi,a}(\tau) \phi_a (\tau,t).
\end{equation}
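The same constant-term computation gives the polynomial $\phi_a(\tau,t)$ if we insert a truncated binomial series for $(1+\tau u_i)^t$ --- only powers below $a_i$ survive the residue. The sketch below (ours) illustrates this; for $n=2$ the change of basis is trivial (the degree bound of Proposition~\ref{prop:Bases} forces $C(\tau)=\mathrm{Id}$), so it directly yields $\psi_{()()}(\tau,t)=\tau(t+1)$, whose root $t=-1$ has the multiplicity $m_1=1$ predicted by Conjecture~\ref{conj:realroots}.
\begin{verbatim}
import sympy as sp
tau, t = sp.symbols('tau t')

def phi_t(a):
    # phi_a(tau, t): expand (1 + tau*u_i)**t as a truncated binomial
    # series -- only powers below a_i survive the residue
    n = len(a)
    u = sp.symbols('u0:%d' % n)
    F = sp.prod([(u[j] - u[i]) * (1 + tau*u[j] + u[i]*u[j])
                 for i in range(n) for j in range(i+1, n)])
    for i in range(n):
        F *= sum(sp.binomial(t, k) * (tau*u[i])**k for k in range(a[i]))
    F = sp.expand(F)
    for i in range(n):
        F = sp.expand(F).coeff(u[i], a[i] - 1)
    return sp.factor(F)

print(phi_t((1, 3)))   # tau*(t + 1): for n = 2, C = Id, so this
                       # is psi_{()()}(tau, t); note the root t = -1
\end{verbatim}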
In the special case $\tau=1$, we write $C_{a,\pi}=C_{a,\pi}(1)$, $\phi_a (t)=\phi_a (1,t)$ and thus
\[
A_\pi(t)=\psi_\pi (t) = \sum_a C^{-1}_{\pi,a} \phi_a (t),
\]
thanks to the Razumov--Stroganov conjecture~\ref{conj:rs}.
\section{Decomposition formula}\label{sec:deco}
The aim of this section is to prove Theorem~\ref{thm:dec}.
With this in mind, we introduce two new multivariate polynomials $\Psi_{\pi,-p} (z)$ and $G_\pi (z)$, which generalize the quantities $\psi_\pi (\tau,-p)$ and $g_\pi (\tau)$ respectively.
\subsection{New polynomials}
Let $\pi$ be a matching of size $|\pi|=n$ and let $p$ be a nonnegative integer less than or equal to $n$.
We require that the new object $\Psi_{\pi,-p} (z)$ has the following essential properties:
\begin{itemize}
\item It generalizes $\psi_\pi (\tau,-p)$, \emph{i.e.}\xspace $\psi_\pi (\tau,-p) = \Psi_{\pi,-p} (1,\ldots,1)$;
\item When $p=0$, we have $\Psi_{\pi,0} (z) = \Psi_\pi (z)$, justifying the use of the same letter;
\item They are polynomials on $z_i$.
\end{itemize}
Thus, we can define a multivariate version of $g_\pi (\tau)$, by
\begin{equation}
G_\pi (z_1,\ldots,z_{2n}) := \Psi_{\pi,-|\pi|} (z_1,\ldots,z_{2n}),
\end{equation}
such that $g_\pi (\tau) = G_\pi (1,\ldots,1)$.
Surprisingly enough, using these new polynomials we can state a theorem equivalent to Theorem~\ref{thm:dec}:
\begin{thm}\label{thm:dec_mult}
Let $\pi$ be a matching and $p$ be an integer between $1$ and $|\pi|-1$ such that $m_p(\pi)=0$, and write $\pi=\alpha \circ \beta$ with $|\alpha|=p$. We then have the following factorization:
\[
\Psi_{\pi,-p}(z_1,\ldots,z_{2n})= G_\alpha(z_1,\ldots,z_{p},z_{\hat{p}},\ldots,z_{\hat{1}}) \Psi_{\beta}(z_{p+1},\ldots,z_{\hat{p}-1}).
\]
\end{thm}
In what follows we use the short notation $z^{(O)}$ for the outer variables $\{z_1,\ldots,z_p,\allowbreak z_{\hat{p}},\ldots,z_{\hat{1}}\}$ and $z^{(I)}$ for the inner variables $\{z_{p+1},\ldots,z_{\hat{p}-1}\}$.
\subsection{A contour integral formula}
We will follow the same path as in Section~\ref{sec:base_a}.
That is, we introduce a new quantity $\Phi_{a,-p} (z)$ defined by a multiple contour integral formula, after which we can obtain $\Psi_{\pi,-p} (z)$ by:
\begin{equation}
\Psi_{\pi,-p} (z) := \sum_a C^{-1}_{\pi,a}(\tau) \Phi_{a,-p} (z).
\end{equation}
This new quantity $\Phi_{a,-p} (z)$ must be a generalization of $\Phi_a(z)$. Setting $\hat{\jmath}=2n-j+1$, we define:
\begin{align*}
\Phi_{a,-p}(z)&:=k_n \prod_{1\leq i,j \leq p} (q z_i -q^{-1} z_j) \prod_{2 \leq i,j \leq p} (q z_i - q^{-1}z_{\hat{\jmath}}) \\
&\qquad \prod_{i=1}^p \prod_{j=p+1}^{2n-p} (q z_i -q^{-1} z_j) \prod_{p<i<j< \hat{p}} (q z_i -q^{-1} z_j) \\
&\qquad \oint \ldots \oint \prod_{i=1}^n \frac{dw_i}{2\pi i} \frac{\prod_{j>i} (w_j-w_i)(qw_i-q^{-1}w_j)}{\prod_{j\leq a_i} (w_i-z_j) \prod_{j>a_i} (qw_i-q^{-1}z_j)} \prod_{j=1}^p \frac{qw_i-q^{-1}z_{\hat{\jmath}}}{qz_j - q^{-1}w_i},
\end{align*}
where $k_n$ is a normalization constant:
\[
k_n =
\begin{cases}
(q-q^{-1})^{-n(n-1)}&\text{if }p=0;\\
(q-q^{-1})^{-(p-1)^2 -(n-p)(n-p-1)}& \text{otherwise},
\end{cases}
\]
and the contours of integration surround all $z_i$ but not $q^{\pm 2} z_i$.
This means that integrating is equivalent to choosing all possible combinations of poles $(w_k - z_i)^{-1}$ for all $k\leq n$ such that $i\leq a_k$.
Notice that the presence of the Vandermonde determinant implies that we cannot choose the same pole twice.
In the homogeneous limit $z_i=1$ for all $i$, we get:
\[
\Phi_{a,-p} (1,\ldots,1)=\oint \ldots \oint \prod_i \frac{du_i}{2\pi i u_i^{a_i}} (1+\tau u_i)^{-p} \prod_{j>i} (u_j-u_i)(1+\tau u_j +u_i u_j),
\]
which is precisely $\phi_a (\tau,-p)$.
In fact, this is the main reason for the formula presented here.
Therefore, we can write
\begin{equation}
G_\pi (z)=\sum_a C^{-1}_{\pi,a}(\tau) \Phi_{a,-|\pi|} (z),
\end{equation}
where the sum runs over all matchings $a$.
In fact, we will use this equation as the definition of $G_\pi (z)$.
\subsection{Some properties of $\Phi_{a,-p} (z)$}
In Section~\ref{sec:base_a}, we have seen that the $\Phi_a (z)$ are useful because they are homogeneous polynomials of a certain degree which obey the \emph{wheel condition}; thus they span $\mathcal{V}_n$ and we can expand $\Psi_\pi (z)$ as a linear combination of the $\Phi_a (z)$.
We hope that we can find some properties of $\Phi_{a,-p} (z)$ which allow us to apply similar methods.
Let us start with the polynomiality:
\begin{prop}[Polynomiality]\label{prop:poly}
The function $\Phi_{a,-p} (z_1,\ldots,z_{2n})$ is a homogeneous polynomial in the variables $z_i$.
\end{prop}
\begin{proof}
It is obvious that $\Phi_{a,-p} (z)$ can be written as a ratio of two polynomials.
Thus, to prove that this is a polynomial, it is enough to prove that there are no poles.
The proof is straightforward but tedious, so we shall not give it in the body of this paper; see Appendix~\ref{sec:poly} for the details.
The fact that it is homogeneous is obvious from the definition, once we know that it is a polynomial.
\end{proof}
\begin{prop}[The individual degree]
The degree of the polynomial $\Phi_{a,-p} (z)$ in a single variable $z_i$ is given by:
\[
\delta_i =
\begin{cases}
0&\text{if }i=1\text{ or }i=\hat{1};\\
p-1&\text{if }1<i\leq p \text{ or }\hat{p}\leq i <\hat{1};\\
n-p-1&\text{if }p<i<\hat{p}.
\end{cases}
\]
\end{prop}
\begin{proof}
For a certain variable $z_i$ two things can happen when we perform the contour integration.
Either we choose a pole $(w_k-z_i)^{-1}$ for some $k=1,\ldots,n$ or not.
It is enough to compute the degree in both cases, and we arrive at the desired result.
\end{proof}
\begin{prop}[The combined degree]
The degree of the polynomial $\Phi_{a,-p} (z)$ in the outer variables is $(p-1)^2$, in the inner variables it is $(n-p)(n-p-1)$, and in all variables it is $(p-1)^2 +(n-p)(n-p-1)$.
\end{prop}
\begin{proof}
The total degree in all variables is easy to compute: $\delta=(p-1)^2 +(n-p)(n-p-1)$.
The total degree in the inner variables (respectively outer variables) is more complex.
Assume that we choose $\alpha$ inner poles $(w_i-z_j)^{-1}$ for $p<j<\hat{p}$ (respectively outer poles $(w_i-z_j)^{-1}$ for $j\leq p$ or $j\geq \hat{p}$).
The degree is $\alpha(2n-2p-\alpha)-n+p$ (respectively $\alpha (2p-\alpha)-2p+1$).
The maximum is when $\alpha=n-p$ (respectively $\alpha=p$), and it is equal to $(n-p)(n-p-1)$ (resp. $(p-1)^2$).
\end{proof}
Notice that if we choose the maximum in both sets (the outer and the inner variables) we obtain $(p-1)^2+(n-p)(n-p-1)$, which is equal to $\delta$.
Thus we can write:
\begin{equation}
\Phi_{a,-p}(z)=\sum_i P_i (z^{(O)}) Q_i (z^{(I)}),
\end{equation}
where $P_i$ and $Q_i$ are polynomials of total degree $(p-1)^2$ and $(n-p)(n-p-1)$ respectively.
\begin{prop}[The \emph{wheel condition}]
Let $z_k=q^2 z_j=q^4 z_i$ for some $p<i<j<k<\hat{p}$.
Then $\Phi_{a,-p}(z)=0$.
\end{prop}
\begin{proof}
The term $\prod_{p<i<j<\hat{p}} (qz_i-q^{-1}z_j)$ contains two zeros (when $z_k=q^2 z_j = q^4 z_i$); to prove the \emph{wheel condition} it is enough to show that it is impossible to cancel both at the same time.
In order to cancel the zero $(qz_j-q^{-1}z_k)$, we need to choose a pole $(w_l-z_j)^{-1}$ for some $l$ such that $j\leq a_l <k$.
In the same way we must choose $(w_m - z_i)^{-1}$ for some $m$ such that $i\leq a_m <j$.
But this implies that $m<l$, so $(qw_m-q^{-1}w_l)$ will be zero, making the whole expression vanish.
\end{proof}
Therefore the inner parts $Q_i (z_{p+1},\ldots,z_{\hat{p}-1})$ are homogeneous polynomials with total degree $(n-p)(n-p-1)$ and satisfy the \emph{wheel condition}, so they belong to the vector space $\mathcal{V}_{n-p}$, that is:
\begin{equation}
\Phi_{a,-p}(z)=\sum_{\beta:|\beta|=n-p} P_{a;\beta} (z^{(O)}) \Psi_\beta (z^{(I)}),
\end{equation}
where the sum runs over all matchings $\beta$ of size $n-p$.
\subsection{Computing $P_{a;\beta}$}
In order to compute these polynomials, we need a new definition:
\begin{defi}
Let $a=\{a_1,\ldots,a_n\}$ be a matching of size $n$.
Separate it into two parts: the inner part, composed of all the $a_i$ with $p<a_i<\hat{p}$, each decreased by $p$; and the outer part, composed of all the $a_i\leq p$ together with the $a_i\geq\hat{p}$, the latter decreased by $2(n-p)$.
Let $c$ be the inner part and $b$ the outer part.
Moreover, if the number of elements of $c$ exceeds $n-p$ by $s$, we add $s$ copies of $p$ to the outer part.
Note that $b$ and $c$ are not necessarily matchings.
Write $a=b\bullet c$.
\end{defi}
If $m_p(\pi)=0$, this definition coincides with the one of $\pi=\alpha\circ\beta$.
For example, let $a=\{1,3,5,6,7,10\}$ and $p=4$.
Then, we have $b=\{1,3,4,6\}$ and $c=\{1,2,3\}$.
With this new notation, the polynomials $P_{a;\beta} (z_1,\ldots,z_p,z_{\hat{p}},\ldots,z_{\hat{1}})$ are given by the following proposition:
\begin{prop}\label{prop:1step}
\[
\Phi_{b\bullet c,-p} (z)=\Phi_{b,-p} (z^{(O)}) \sum_\beta C_{c,\beta}(\tau) \Psi_\beta (z^{(I)}).
\]
\end{prop}
\begin{proof}
Remember that the coefficients $C_{a,\pi}(\tau)$ are defined as $C_{a,\pi}(\tau)=\tau^{-d(\pi)}\Phi_a (q^\pi)$ and can be constructed using the algorithm based on a recursion formula proved in Lemma~\ref{lema:0rec}:
\[
\Phi_{a} (q^\pi) = [s] \tau^{d(\pi)-d(\hat{\pi})} \Phi_{\hat{a}} (q^{\hat{\pi}}),
\]
where $\hat{\pi}$ is a matching obtained by removing a small arch $(j,j+1)$ from $\pi$, and $\hat{a}$ is obtained from $a$ by removing one element $a_i$ such that $a_i=j$, decreasing all elements bigger than $j$ by two ($a_i\rightarrow a_i-2$ if $a_i>j$) and by one if the element is equal to $j$ ($a_i\rightarrow a_i-1$ if $a_i=j$); here $s$ is the number of $a_i$ such that $a_i=j$.
Therefore we can study the polynomial $\Phi_{{b\bullet c},-p} (z)$ at the special points $z=\{z_1,\ldots,z_p,q^\beta,z_{\hat{p}},\ldots,z_{\hat{1}}\}$, because this is enough to characterize the polynomial.
It is not difficult to see that we obtain exactly the same recursion formula; see Lemma~\ref{lema:prec} for the technical details.
Thus, it is not hard to prove that
\[
\Phi_{b\bullet c,-p} (z_1,\ldots,z_p,q^\beta,z_{\hat{p}},\ldots,z_{\hat{1}}) =C_{c,\beta}(\tau) \tau^{d(\beta)} \Phi_{b,-p} (z_1,\ldots,z_p,z_{\hat{p}},\ldots,z_{\hat{1}}) ,
\]
which is the object of Corollary~\ref{cor:prec}.
This is equivalent to the result we wanted to prove.
\end{proof}
The remaining polynomial $\Phi_{b,-p}$ can be expressed by means of the polynomials $G_\alpha$:
\begin{prop}\label{prop:2step}
\[
\Phi_{b,-p} (z^{(O)})=\sum_\alpha C_{b,\alpha}(\tau) G_\alpha (z^{(O)}).
\]
\end{prop}
\begin{proof}
If $b$ is a matching, this proposition is equivalent to the definition of $G_\alpha$.
Thus, the hard case is when $b$ is not a matching.
In that case, $C_{b,\alpha}(\tau)$ is defined by $C_{b,\alpha}(\tau)=\tau^{-d(\alpha)}\Phi_b(q^\alpha)$.
We know that $\mathcal{V}_p$ is spanned by $\Phi_f (z_1,\ldots,z_{2p})$ where $f$ is a matching of size $p$.
So we can write $\Phi_b (z_1,\ldots,z_{2p}) = \sum_f R_{b,f}(\tau) \Phi_f (z_1,\ldots,z_{2p})$, where $R_{b,f}(\tau)$ is a matrix to be determined.
This is equivalent to
\begin{equation}
C_{b,\alpha}(\tau) = \sum_f R_{b,f}(\tau) C_{f,\alpha}(\tau),
\end{equation}
where $f$ and $\alpha$ are matchings, but not necessarily $b$.
We shall determine an algorithm to compute the matrix $R_{b,f}(\tau)$, treating only the case when $b_i\leq 2i-1$ for all $i$ while ignoring the condition $b_i \neq b_j$ for $i\neq j$.
Thus, the only ``anomaly'' which can occur is the existence of several $b_i$ with the same value, that is, there is some value $j$ such that $\sharp\{b_i : b_i=j\}>1$.
Let $b$ be such that it has one element repeated at least twice, say $b_k=b_{k+1}=j$ and $b_{k-1}<j$, so that apart from the term $(qw_k-q^{-1}w_{k+1})$ the integrand is antisymmetric in $w_k$ and $w_{k+1}$.
Using the fact that
\begin{multline}
\mathcal{A}\left\{ \frac{qw_k-q^{-1}w_{k+1}}{(w_k-z_j)(w_{k+1}-z_j)}+\frac{qw_k-q^{-1}w_{k+1}}{(qw_k-q^{-1}z_j)(qw_{k+1}-q^{-1}z_j)}\right.\\
+\left.\tau\frac{qw_k-q^{-1}w_{k+1}}{(qw_k-q^{-1}z_j)(w_{k+1}-z_j)} \right\}=0
\end{multline}
we can write
\[
\Phi_b (z_1,\ldots,z_{2p})=-\Phi_{\tilde{b}} (z_1,\ldots,z_{2p})-\tau \Phi_{\check{b}} (z_1,\ldots,z_{2p})
\]
where $\check{b}$ is obtained from $b$ by $b_k\rightarrow b_k-1$, and $\tilde{b}$ is obtained from $b$ by $b_k\rightarrow b_k-1$ and $b_{k+1}\rightarrow b_{k+1}-1$.
If $\sharp\{b_i\text{ such that }b_i<j\}\geq j$ the integral vanishes.
Thus, we can repeat this procedure until we either arrive at a matching or the integral vanishes.
Now, if we look at the expression of $\Phi_{b,-p} (z_1,\ldots,z_{2p})$, we can try to apply the same procedure in order to get the same recursion.
Two things are essential: the vanishing conditions are the same, and if $b_k=b_{k+1}=j$ the integrand should be antisymmetric apart from the term $(qw_k-q^{-1}w_{k+1})$; both are true here.
Having the same recursion, we can write:
\begin{align*}
\Phi_{b,-p}(z_1,\ldots,z_{2p}) &= \sum_f R_{b,f}(\tau) \Phi_{f,-p} (z_1,\ldots,z_{2p})\\
&= \sum_f \sum_\alpha R_{b,f}(\tau) C_{f,\alpha}(\tau) G_\alpha (z_1,\ldots,z_{2p})\\
&= \sum_\alpha C_{b,\alpha}(\tau) G_\alpha (z_1,\ldots,z_{2p}),
\end{align*}
where from the first line to the second we apply the (inverted) definition of $G_\alpha$, namely $\Phi_{f,-p}=\sum_\alpha C_{f,\alpha}(\tau) G_\alpha$, and from the second to the third the relation $C_{b,\alpha}(\tau)=\sum_f R_{b,f}(\tau) C_{f,\alpha}(\tau)$.
\end{proof}
\subsection{Final details}
In conclusion we have that
\begin{equation}
\Phi_{b\bullet c,-p} (z) = \sum_\alpha \sum_\beta C_{b,\alpha}(\tau)C_{c,\beta}(\tau) G_\alpha (z^{(O)}) \Psi_\beta (z^{(I)}).
\end{equation}
A simple consequence of the algorithm that we use to compute the coefficients $C_{a,\pi}(\tau)$ is that it can be decomposed, as shown in Corollary~\ref{cor:0rec}: $C_{a,\alpha\circ\beta}(\tau)=C_{b,\alpha}(\tau)C_{c,\beta}(\tau)$, for $a=b\bullet c$.
Thus,
\[
\Phi_{a,-p} (z) = \sum_\alpha \sum_\beta C_{a,\alpha\circ\beta}(\tau) G_\alpha (z^{(O)}) \Psi_\beta (z^{(I)}).
\]
When we compare with the formula
\[
\Phi_{a,-p} (z) = \sum_\pi C_{a,\pi}(\tau) \Psi_{\pi,-p} (z),
\]
we conclude that
\begin{equation}
\Psi_{\pi,-p} (z) =
\begin{cases}
0 & \text{if }m_p(\pi)\neq 0\\
G_\alpha (z^{(O)}) \Psi_\beta (z^{(I)})&\text{if } \pi=\alpha\circ\beta
\end{cases}
\end{equation}
is a solution of the system of equations.
This solution must be unique because the matrix $C_{a,\pi}(\tau)$ is invertible.\qed
\section{Sum rule}\label{sec:sumr}
The main purpose of this section is to prove Theorem~\ref{thm:sum_G}:
\begin{align*}
\sum_{\pi:|\pi|=n} (-1)^{d(\pi)} g_\pi &= A_n & \text{and} && \sum_{\pi:|\pi|=n} g_\pi &= (-1)^{\binom{n}{2}} \left( A_n^V \right)^2.
\end{align*}
The first one is a simple corollary of Theorem~6.16 in~\cite{tese}:
\[
\sum_{\pi:|\pi|=n} g_\pi (-\tau) = \sum_{\pi:|\pi|=n} \psi_\pi (\tau),
\]
because it is known that $\sum_{\pi:|\pi|=n} \psi_\pi=A_n$, proved in~\cite{artic31}, and it is easy to check that $g_\pi (-1) = (-1)^{d(\pi)} g_\pi$.
\subsection{An integral formula}
The quantities $g_\pi (\tau)$ can be expressed by:
\[
g_\pi (\tau) = \sum_a C^{-1}_{\pi,a}(\tau) \phi_a (\tau,-|a|)
\]
following the definitions in Section~\ref{sec:pol_qKZ}, where
\[
\phi_a (\tau,-|a|) = \oint \ldots \oint \prod_{i=1}^{|a|} \frac{du_i}{2\pi i u_i^{a_i}} (1+\tau u_i)^{-|a|} \prod_{j>i} (u_j-u_i) (1+\tau u_j +u_i u_j).
\]
Notice that if we change the sign of $\tau$ and at the same time that of the variables $\{u_i\}_i$, the integral changes as $\phi_a (-\tau,-|a|)=(-1)^{d(a)} \phi_a (\tau,-|a|)$; combining this with Equation~\eqref{eq:capi-tau}, we obtain $g_\pi (-\tau) = (-1)^{d(\pi)}g_\pi (\tau)$.
Let $\mathcal{L}_n$ be the set of matchings (in the form of sequences) of size $n$ defined by $a\in \mathcal{L}_n$ if and only if $a_i=2i-1$ or $a_i=2i-2$ for all $i$.
Following Section~3.3 of article~\cite{artic41} we can conclude that:
\begin{equation}
\sum_{\pi:|\pi|=n} g_\pi (\tau) = \sum_{a\in \mathcal{L}_n} \phi_a (\tau,-|a|).
\end{equation}
This results in:
\[
\sum_{\pi:|\pi|=n} g_\pi (\tau) = \oint \ldots \oint \prod_i \frac{du_i}{2\pi i u_i^{2i-1}} (1+u_i) (1+\tau u_i)^{-n} \prod_{j>i} (u_j -u_i) (1+\tau u_j +u_i u_j),
\]
which can be compared with the formula:
\[
\sum_{\pi:|\pi|=n}\psi_\pi (\tau) = \oint \ldots \oint \prod_i \frac{du_i}{2\pi i\, u_i^{2i-1}} (1+u_i) \prod_{j>i} (u_j-u_i)(1+\tau u_j +u_i u_j).
\]
It has been proved in~\cite{tese} that:
\begin{multline*}
\oint\ldots\oint \prod_i \frac{du_i}{2\pi i u_i^{2i-1}} (1+u_i) (1-\tau u_i)^{-n} \prod_{j>i} (u_j -u_i) (1-\tau u_j +u_i u_j)\\
= \oint \ldots \oint \prod_i \frac{du_i}{2\pi i\, u_i^{2i-1}} (1+u_i) \prod_{j>i} (u_j-u_i)(1+\tau u_j +u_i u_j).
\end{multline*}
The idea of the proof, which we shall not repeat here, is that both sides count Totally Symmetric Self Complementary Plane Partitions (TSSCPP) with the same weights.
On the one hand, it was already known that TSSCPPs can be seen as lattice paths.
In this framework, we can count them using the Lindström--Gessel--Viennot formula; moreover, in this case we have a weight $\tau$ per vertical step.
This is equivalent to the RHS.
On the other hand, we can describe the TSSCPP using a different set of lattice paths, which are called Dual Paths in~\cite{tese}.
This will give rise to the LHS.
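All of this can be checked numerically for small $n$: the sketch below (ours) evaluates the integral formula above by series expansion and coefficient extraction. For $n=2$ one finds $1-\tau$ (indeed $g_{(())}+g_{()()}=1-\tau$), and at $\tau=1$ the expected value is $(-1)^{n(n-1)/2}(A_n^V)^2$, e.g.\ $-1$ for $n=3$.
\begin{verbatim}
import sympy as sp
tau = sp.Symbol('tau')

def sum_g(n):
    u = sp.symbols('u0:%d' % n)
    F = sp.prod([(u[j] - u[i]) * (1 + tau*u[j] + u[i]*u[j])
                 for i in range(n) for j in range(i+1, n)])
    for i in range(n):
        # (1 + u_i) times a truncated series of (1 + tau*u_i)**(-n);
        # the residue only needs powers of u_i up to 2i (0-indexed)
        F *= (1 + u[i]) * sum(sp.binomial(-n, k) * (tau*u[i])**k
                              for k in range(2*i + 1))
    F = sp.expand(F)
    for i in range(n):
        F = sp.expand(F).coeff(u[i], 2*i)
    return sp.expand(F)

print(sum_g(2))                 # 1 - tau, i.e. g_{(())} + g_{()()}
print(sum_g(3).subs(tau, 1))    # expected: (-1)**3 * (A_3^V)**2 = -1
\end{verbatim}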
\begin{cor}
\[
\sum_{\pi:|\pi|=n} (-1)^{d(\pi)} g_\pi = A_n.
\]
\end{cor}
\begin{proof}
This is a consequence of the fact that $\sum_{\pi:|\pi|=n} \psi_\pi = A_n$ and $g_\pi (-1) = (-1)^{d(\pi)} g_\pi$.
\end{proof}
\subsection{Alternating Sign Matrices}
In what follows, it will be convenient to view Fully Packed Loops as Alternating Sign Matrices.
The bijection is well known, but we shall give here a sketch of it.
An Alternating Sign Matrix (ASM) of size $n$ is an $n\times n$ matrix with entries $\pm 1$ and $0$, such that, if we ignore the zeros, the $1$s and $-1$s alternate in each column and row, and every column and row sums to $1$.
Take a Fully Packed Loop configuration on an $n\times n$ square lattice; at each vertex write a $0$ if it corresponds to a corner and $\pm 1$ otherwise, choosing the signs of the non-zero entries in such a way that we obtain an ASM.
We claim that this defines a bijection.
For example, the configuration in Figure~\ref{fig:fplexample} becomes the following ASM:
\begin{center}
\begin{tikzpicture}[scale=0.4]
\node[ASM] at (0,0) {0};
\node[ASM] at (1,0) {0};
\node[ASM] at (2,0) {0};
\node[ASM] at (3,0) {0};
\node[ASM] at (4,0) {1};
\node[ASM] at (5,0) {0};
\node[ASM] at (0,1) {0};
\node[ASM] at (1,1) {0};
\node[ASM] at (2,1) {0};
\node[ASM] at (3,1) {1};
\node[ASM] at (4,1) {0};
\node[ASM] at (5,1) {0};
\node[ASM] at (0,2) {0};
\node[ASM] at (1,2) {1};
\node[ASM] at (2,2) {0};
\node[ASM] at (3,2) {0};
\node[ASM] at (4,2) {-1};
\node[ASM] at (5,2) {1};
\node[ASM] at (0,3) {0};
\node[ASM] at (1,3) {0};
\node[ASM] at (2,3) {0};
\node[ASM] at (3,3) {0};
\node[ASM] at (4,3) {1};
\node[ASM] at (5,3) {0};
\node[ASM] at (0,4) {1};
\node[ASM] at (1,4) {0};
\node[ASM] at (2,4) {0};
\node[ASM] at (3,4) {0};
\node[ASM] at (4,4) {0};
\node[ASM] at (5,4) {0};
\node[ASM] at (0,5) {0};
\node[ASM] at (1,5) {0};
\node[ASM] at (2,5) {1};
\node[ASM] at (3,5) {0};
\node[ASM] at (4,5) {0};
\node[ASM] at (5,5) {0};
\end{tikzpicture}
\end{center}
By this bijection, the number of ASMs of size $n$ is the famous $A_n$.
Using this transformation, we see that the vertically symmetric FPL configurations are in bijection with vertically symmetric ASM.
There is exactly one $1$ in the first row.
Let $A_{n,i}$ be the number of Alternating Sign Matrices with the $1$ of the first row in the $i$th column; it was proved by Zeilberger in~\cite{Zeil-ASM-ref} that:
\[
A_{n,i} = \binom{n+i-2}{i-1} \frac{(2n-i-1)!}{(n-i)!} \prod_{j=0}^{n-2} \frac{(3j+1)!}{(n+j)!}.
\]
We know also, see for example~\cite{artic45}, that
\[
A_n (x):= \sum_{i=1}^{n} A_{n,i} x^{i-1} = \oint \ldots \oint \prod_{i=1}^n \frac{du_i}{2\pi i u_i^{2i-1}} (1+xu_i) \prod_{j>i} (u_j-u_i) (1+u_j+u_i u_j).
\]
In fact, this was conjectured in Zinn-Justin and Di Francesco's article~\cite{artic41}; in the same article the authors reformulated this conjecture as a different identity, which was proved by Zeilberger in~\cite{Zeil-qKZ}.
Thus, it is straightforward to see that $\sum_\pi g_\pi = \sum_\pi \psi_\pi (-1) = (-1)^{\binom{n}{2}} A_n (-1)$.
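Before turning to the proof, here is a small numerical check (our sketch, using exact rational arithmetic) of the refined counting and of the identity $A_n(-1)=(A^V_n)^2$ established in the next subsection.
\begin{verbatim}
from fractions import Fraction
from math import comb, factorial

def A_ni(n, i):
    val = Fraction(comb(n + i - 2, i - 1) * factorial(2*n - i - 1),
                   factorial(n - i))
    for j in range(n - 1):
        val *= Fraction(factorial(3*j + 1), factorial(n + j))
    return int(val)

def AV(n):                      # 0 for even n; n = 2m+1 otherwise
    if n % 2 == 0:
        return 0
    m = (n - 1) // 2
    val = Fraction(1, 2**m)
    for k in range(1, m + 1):
        val *= Fraction(factorial(6*k - 2) * factorial(2*k - 1),
                        factorial(4*k - 1) * factorial(4*k - 2))
    return int(val)

for n in range(1, 8):
    An  = sum(A_ni(n, i) for i in range(1, n + 1))
    Am1 = sum((-1)**(i - 1) * A_ni(n, i) for i in range(1, n + 1))
    print(n, An, Am1, AV(n)**2)   # Am1 should equal AV(n)**2
\end{verbatim}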
\subsection{The $-1$ enumeration of ASM}\label{sec:-enum}
Next, we prove that the $-1$ enumeration of Alternating Sign Matrices $A_n (-1)$ is exactly the number of Vertically Symmetric Alternating Sign Matrices $A^V_n$ squared.
This result is already present in Di Francesco~\cite[Equation~2.8]{DF-qKZ-TSSCPP} and a detailed proof can be found in Williams' article~\cite{Nathan}.
For the sake of completeness we shall prove it in detail.
\begin{prop}
We have $A_n(-1) = (A^V_n)^2$, \emph{i.e.}\xspace
\begin{multline*}
\sum_i (-1)^{i-1} \binom{n+i-2}{i-1} \frac{(2n-i-1)!}{(n-i)!} \prod_{j=0}^{n-2} \frac{(3j+1)!}{(n+j)!}\\
= \begin{cases}
0 & \text{if }n\text{ is even};\\
\left(\frac{1}{2^m} \prod_{i=1}^m \frac{(6i-2)!(2i-1)!}{(4i-1)!(4i-2)!}\right)^2 & \text{if }n=2m+1.
\end{cases}
\end{multline*}
\end{prop}
\begin{proof}
We can rewrite the expression:
\begin{align*}
A_n (-1)&= \left.\sum_{i=1}^n (-x)^{i-1}\binom{n+i-2}{i-1} \frac{(2n-i-1)!}{(n-i)!} x^{n-i} \prod_{j=0}^{n-2} \frac{(3j+1)!}{(n+j)!}\right|_{x=1}\\
&=\left.\sum_{i=1}^n\sum_{k=0}^{n-1} (-x)^{i-1}\binom{n+i-2}{i-1} \frac{(n+k-1)!}{k!} x^{k} \prod_{j=0}^{n-2} \frac{(3j+1)!}{(n+j)!}\right|_{x^{n-1}},
\end{align*}
where the subscript $x^{n-1}$ means that we select the coefficient of $x^{n-1}$.
Changing $i-1 \rightarrow i$ and extending the summation limits to infinity, which does not affect the computation:
\[
A_n (-1)= \left.\sum_{i=0}^\infty\sum_{k=0}^\infty (-x)^i\binom{n+i-1}{i} \binom{n+k-1}{k} x^{k} (n-1)!\prod_{j=0}^{n-2} \frac{(3j+1)!}{(n+j)!}\right|_{x^{n-1}}.
\]
But
\[
\frac{1}{(1+x)^n} = \sum_{i=0}^\infty (-x)^i \binom{n+i-1}{i}.
\]
Applying this, we get:
\begin{align*}
A_n (-1) &= \left.\frac{1}{(1+x)^n}\frac{1}{(1-x)^n} (n-1)!\prod_{j=0}^{n-2} \frac{(3j+1)!}{(n+j)!}\right|_{x^{n-1}}\\
&=\left.\frac{1}{(1-x^2)^n} (n-1)!\prod_{j=0}^{n-2} \frac{(3j+1)!}{(n+j)!}\right|_{x^{n-1}}\\
&=\left.\sum_i x^{2i} \binom{n+i-1}{i} (n-1)!\prod_{j=0}^{n-2} \frac{(3j+1)!}{(n+j)!}\right|_{x^{n-1}}.
\end{align*}
This is zero if $n-1$ is odd. Set $n=2m+1$ with $m$ an integer; thus:
\[
A_{2m+1}(-1) = \binom{3m}{m} (2m)! \prod_{j=0}^{2m-1} \frac{(3j+1)!}{(2m+j+1)!}.
\]
A simple manipulation shows that this is equal to $\left(A_{2m+1}^V\right)^2$.
\end{proof}
\section{Further Questions}
\subsection{Solving the conjectures}
This paper is, in a certain way, the continuation of the article~\cite{negative}.
Although we solve two conjectures here, two more remain: the fact that all coefficients of $A_\pi (t)$ are positive (in fact, we have some numerical evidence that all roots of $A_\pi (t)$ have negative real part) and the multiplicity of the real roots.
Also, the value of $g_{()^{2m+1}}$ is still conjectural.
In~\cite{tese} the author presents an interpretation of this value as counting a certain subset $\mathcal{R}$ of the Totally Symmetric Self-Complementary Plane Partitions.
In fact it is not hard to prove that this subset $\mathcal{R}$ is exactly the subset $\mathcal{P}_n^R$ which appears in Ishikawa's article~\cite{Masao-tsscpp1}.
Therefore, this conjecture is equivalent to the unweighted version of \cite[Conjecture~4.2]{Masao-tsscpp1}.
\subsection{Combinatorial reciprocity}
We recover here one of the ideas of~\cite{negative}.
The theorems proved here, together with the conjectures of~\cite{negative} which remain unproved, suggest that $g_\pi$ and $A_\pi (-t)$ for $t\in\mathbb{N}$ have a combinatorial interpretation.
That is, we believe that there exist yet-to-be-discovered combinatorial objects indexed by the matchings $\pi$ and counted by $|g_\pi|$ or by $|A_\pi (-p)|$.
Notice that, even if the sum rules of $A_\pi$ and $g_\pi$ are related ($\sum_\pi (-1)^{d(\pi)} g_\pi = \sum_\pi A_\pi$), the two quantities are essentially different, that is, they have different symmetries.
On the one hand, the $A_\pi$ are stable under rotation and mirror symmetry.
On the other hand, the $g_\pi$ are stable under the addition of nested arches, that is, $g_{(\pi)}=g_\pi$.
The well-known \emph{Ehrhart polynomial} $i_P(t)$ of a lattice polytope $P$, which counts the number of lattice points in $tP$ when $t$ is a positive integer, has an interesting property:
when $t$ is negative, $(-1)^{\dim P} i_P(t)$ counts the lattice points strictly inside $-tP$ (see~\cite{BeckRobins} for instance).
We believe that something similar should hold for the quantities $A_\pi (t)$.
\subsection{A new vector space of polynomials}
The polynomials $\Psi_\pi (z_1,\ldots,z_{2n})$ can be seen as solutions of the quantum Knizhnik--Zamolodchikov equation; moreover, they span a vector subspace of polynomials characterized by a vanishing condition (the \emph{wheel condition}) and an overall degree.
These polynomials are also related to the non-symmetric Macdonald polynomials and can be constructed using some difference operators, as shown by Lascoux, de Gier and Sorrell in~\cite{Lascoux-KL-M}.
In the same way, the $G_\pi (z_1,\ldots,z_{2n})$ span a vector subspace of polynomials with a certain fixed degree.
Therefore, it would be interesting to fully characterize this vector subspace, and to see if there is some other way to construct these polynomials, such as the difference operators used in~\cite{Lascoux-KL-M}.
\section*{Acknowledgments}
The author is thankful to Ferenc Balogh for all the interesting discussions about this subject and others.
The author would also like to thank Philippe Nadeau, with whom he published the article that inspired this one.
A lot of effort has been devoted to understanding how galaxies evolve in clusters
over recent years.
Most of the attention has been placed on the highest-mass clusters,
largely because they are the
easiest ones to detect to high redshift, and therefore can be contrasted
to lower redshift analogues
(e.g.\ Yee et al.\ 1996; Balogh et al.\ 1999; Fasano et al.\ 2000;
Ebeling et al.\ 2001; Fairley et al.\ 2002; Poggianti et al.\ 2004;
Wake et al.\ 2005; Pimbblet et al.\ 2006; Poggianti et al.\ 2009; Valentinuzzi et al.\ 2011).
With a few notable exceptions
(e.g.\ Andreon et al.\ 2004; the WINGS survey of Fasano et al.\ 2006),
intermediate and low-mass clusters ($L_X \sim 1 \times 10^{44}$ erg s$^{-1}$)
are a relatively unexplored region of parameter space and thus merit attention.
Interpreting X-ray luminosity, $L_X$, as a proxy for cluster mass,
the bias toward high mass is immediately apparent in a diagram of
X-ray luminosity-redshift parameter space. In Fig.~\ref{fig:lxz},
we present the regions of this parameter space explored by four
X-ray luminosity-selected galaxy cluster studies --
the Las Campanas/Anglo-Australian Telescope Rich Cluster Survey
(LARCS; Pimbblet et al.\ 2001; 2002; 2006),
the Canadian Network for Observational Cosmology (CNOC)
Cluster Redshift Survey (Yee et al.\ 1996; Balogh et al.\ 1999),
the Massive Cluster Survey (MACS; Ebeling et al.\ 2001)
and the work of Wake et al.\ (2005) -- as well as the region explored by this work.
\begin{figure}
\centerline{\psfig{file=lxz.ps,angle=0,width=3.5in}}
\vspace*{-0.7cm}
\caption{X-ray luminosity-redshift parameter space diagram.
The grey squares represent XBACS, BCS, and eBCS clusters (Ebeling et al.\ 1996; 1998; 2000)
while the dashed line boxes show the regions of parameter space probed by a small variety of X-ray-selected
cluster studies. Note that CNOC,
MACS and Wake et al.\ select clusters from catalogues other than XBACS, BCS, and eBCS
-- only LARCS and this present study use the clusters contained inside the boxes displayed
and we emphasize that all works concerned attempt to minimize Malmquist bias.
The plot serves to demonstrate the bias whereby only high-$L_X$ clusters
are visible to high redshift; here we contend that we need a
lower-redshift, homogeneously selected baseline of intermediate-$L_X$
clusters against which these other works can be compared.
}
\label{fig:lxz}
\end{figure}
\begin{figure*}
\centerline{\psfig{file=Abell_1767.ps,angle=0,width=6.75in}}
\vspace*{-0.7cm}
\caption{Diagnostic plots for Abell~1767 -- a representative cluster from our sample.
Top Left: SDSS image of the inner 0.5 $\times$ 0.5 Mpc of the cluster core.
Top Right: RA and Dec of all galaxies within 10 Mpc of the cluster centre.
The inner circle denotes $r_{200}$ and the outer circle is $3 r_{200}$.
Bottom Left: Redshift histogram for the cluster with the fitted Gaussian
overplotted that was used to determine the redshift and velocity dispersion
of the cluster. Bottom Right: A diagnostic plot of local galaxy density ($\Sigma_{10}$)
as a function of projected radius from the cluster centre. The two vertical
lines correspond to radii of $r_{200}$ and $3 r_{200}$. Abell~1767 is a rich Bautz-Morgan
type II cluster fed by multiple filaments of galaxies (cf.\ Pimbblet et al.\ 2004)
with some substructure -- indicated by the peaks in $\Sigma_{10}$ that are prominent at $3 r_{200}$
and beyond.}
\label{fig:a1767}
\end{figure*}
Given that luminosity-selected cluster surveys can be subject to such a
Malmquist bias, a number of observational strategies are possible. The
MACS and CNOC studies opt for a high $L_X$, longitudinal redshift strategy
whereas the LARCS study focused on creating a homogeneous sample at low-redshift.
In contrast, Wake et al.\ probes a wide range of X-ray luminosities at intermediate
redshift by selecting clusters from a variety of surveys with different flux limits.
A drawback of selecting clusters from multiple sources though, is that unforeseen
biases may be introduced into the sample. Furthermore, such works do not tell
us much about intermediate-mass clusters; for example, Wake et al.\ (2005) only
looks at 12 clusters of which only 4 are found within the $L_X$ limits of
this work -- comparable arguments can be extended to other studies (e.g.\
Huertas-Company et al.\ 2009).
To address this issue, we assemble a sample of intermediate $L_X$ galaxy clusters
with spectroscopically confirmed members from the Sloan Digital Sky Survey
(SDSS Data Release 6, Adelman-McCarthy et al.\ 2008; see also York et al.\ 2000)
to examine the role of environment on the colour-magnitude relationship in
such systems.
On the colour-magnitude plane, there are two distinct galaxy populations
residing in galaxy clusters. The first population is mainly composed of
early-type (E and S0) galaxies which form a tight, linear relation
extending to the brightest magnitude galaxies on this plane.
This population is known as the red sequence and the ridge line
upon which the red sequence galaxies lie is known as the colour-magnitude relation
(CMR; Visvanathan \& Sandage 1977). Subsequent studies (e.g.\ Bower, Lucey \& Ellis 1992)
have demonstrated the universal existence of a red sequence in all clusters.
At faint magnitudes, the red sequence is observed to fan out on the
colour-magnitude plane (Kodama \& Bower 2001; Pimbblet et al.\ 2002).
This effect has been interpreted
in terms of the age-metallicity relation --
the increasing spread of galaxy colours is the result
of an increasing spread of galaxy metallicities and ages (Kodama et al.\ 1999; see also
Kodama \& Arimoto 1997).
The second population is mainly composed of bluer, fainter-magnitude, morphologically
spiral and star-forming
galaxies which lie underneath the CMR -- the so-called blue cloud (see, e.g.,
Poggianti et al.\ 2006, Cortese \& Hughes 2009, Mei et al.\ 2009 and references therein).
The evolution of this latter population in to the former has been
extensively investigated in
the literature (e.g.\ Kodama \& Bower 2001;
De Lucia et al.\ 2004; McIntosh et al.\ 2004;
Tanaka et al.\ 2005; Tran et al.\ 2005; Haines et al.\ 2006;
De Lucia et al.\ 2007; Stott et al.\ 2007; Ma et al.\ 2010;
Jim{\'e}nez et al.\ 2011) and relates to the origin of observed
correlations such as the morphology-density relation (Dressler 1980)
and related issues (cf.\ Lewis et al.\ 2002; G{\'o}mez et al.\ 2003;
Butcher \& Oemler 1984; see also Mahajan \& Raychaudhury 2009).
The developing picture is that as
low mass, blue galaxies accrete on to the cluster potential,
their star-formation is truncated
(possibly after an intense starburst event; cf.\ Sato \& Martin 2006)
and their morphologies altered
(by a variety of physically-motivated mechanisms)
as they join more massive dark matter halos
(e.g.\ Gunn \& Gott 1972;
Larson et al.\ 1980;
Byrd \& Valtonen 1990;
Moore et al.\ 1996;
Dressler et al.\ 1997;
Quilis et al.\ 2000;
Balogh et al.\ 2000;
Diaferio et al.\ 2001;
De Lucia et al.\ 2004).
In this context, a number of authors have presented evidence for
an environmental\footnote{The word `environment'
has been taken to mean various things in the literature by different authors (see Haas, Schaye,
\& Jeeson-Daniel 2011).
This includes but is not limited to: radius from a cluster centre, local galaxy density,
dark matter halo mass of a galaxy, and
large-scale structural situation (e.g.\ void vs.\ filament vs.\ cluster).}
dependence of galaxy observables.
For instance, Abraham et al.\ (1996) make use of a combined spectroscopic
and photometric survey of Abell~2370 to show that radial gradients across the
cluster exist out to a remarkably large radius ($\sim 5$ Mpc).
In particular, the colours of CMR galaxies are shown to become progressively
bluer with projected radius (independent of mass)
from the cluster centre which is mirrored by
a change in Balmer line indices.
The natural interpretation of these results is that the mean
luminosity-weighted age of the stellar populations in the cluster galaxies
is decreasing with increasing radius from the cluster centre as new galaxies
and groups are accreted from filaments, fuelling the cluster's growth
(see also Bernardi et al.\ 2006; Smith et al.\ 2006).
Other studies followed Abraham et al.\ (1996) and found similar results
in other clusters
(e.g.\ Terlevich et al.\ 2001 for the Coma cluster; Haines et al.\ 2004).
Pimbblet et al.\ (2002; 2006) generalized these results for a very X-ray luminous
sample of $z\sim0.1$ galaxy clusters: a gradient of $d(B-R)/d r_p = -0.022 \pm 0.004$
was found for CMR galaxies out to 6 Mpc suggesting that galaxies in the outer regions
of the rich clusters have not yet had their star-formation rates fully truncated.
Concerned that such results may not be typical or representative of all clusters (since
Pimbblet et al.\ only examined the extrema of the cluster mass function),
Wake et al.\ (2005) extend this study to lower X-ray luminosities at modest redshifts
and found similar results (see their Fig.~13; see also Hogg et al.\ 2004).
Yet, as noted above, this study only
constituted a small number of clusters at intermediate $L_X$. Therefore in this work,
we aim to generalize these results to intermediate $L_X$ ranges through
a purpose-constructed, representative sample of clusters.
The format of this work is as follows.
In Section~2, we construct a sample of 45 intermediate $L_X$ galaxy clusters at low redshift
that are covered by SDSS data. We show
that these are representative of the general cluster population for these luminosities
and extract SDSS data for all member galaxies.
In Section 3 we present Schechter function fits which we use to
define a characteristic magnitude for stacking galaxy clusters. We also present the
cluster colour-magnitude diagrams which are used to identify red sequence galaxies.
In Section~4, we stack the clusters to form a composite sample from
which we determine the photometric gradients of the red sequence galaxies in all SDSS
colours. We describe how our results fit in with previous studies and suggest
the existence of relationships of modal colour gradient with $L_X$ and redshift.
Our major results are then summarized in Section~5.
Throughout this work, we adopt the standard flat cosmology from Spergel et al.\ (2007):
$\Omega_M = 0.24$, $\Omega_{\Lambda}=0.76$ and $H_0=73$ km s$^{-1}$ Mpc$^{-1}$.
\section{Data}
Galaxy clusters are selected for this work from the X-ray Brightest Abell Cluster Survey (XBACS),
the ROSAT Brightest Cluster Sample (BCS) and the ROSAT Extended Brightest Cluster
Survey (eBCS) catalogues (Ebeling et al., 1996; 1998; 2000). Briefly, the XBACS,
BCS and eBCS catalogues list many extended, extragalactic X-ray sources from ROSAT
All-Sky Survey data. Clusters are detected in the soft X-ray band between
0.1--2.4 keV.
In total, the catalogues contain 462 unique galaxy clusters of which 214 satisfy
$0.7\times10^{44} < L_X < 4\times10^{44}$ erg s$^{-1}$ and $0.03<z<0.16$ -- our definition
of a local sample of intermediate $L_X$ galaxy clusters.
\begin{table*}
\begin{center}
\caption{Intermediate $L_X$ galaxy cluster sample used in this work.}
\begin{tabular}{lccccccc}
\hline
Cluster & $L_X$ & B-M Type & Redshift & $\sigma_z$ & $r_{200}$ & C & S \\[-1ex]
& [10$^{44}$ erg/s] & & & [km/s] & [Mpc] & & \\[1ex]
\hline
Abell 602 & 1.14 & III & 0.06195 $\pm$ 0.00006 & 1080 $\pm$ 40 & 2.39 $\pm$ 0.01 & 0.53 & yes \\
Abell 671 & 0.90 & II-III & 0.04926 $\pm$ 0.00003 & 600 $\pm$ 10 & 1.34 $\pm$ 0.01 & 0.64 & ? \\
Abell 743 & 2.71 & III & 0.13610 $\pm$ 0.00030 & 500 $\pm$ 200 & 0.93 $\pm$ 0.08 & 0.34 & no \\
Abell 744 & 0.77 & II & 0.07298 $\pm$ 0.00009 & 240 $\pm$ 40 & 0.53 $\pm$ 0.02 & 0.65 & no \\
Abell 757 & 0.90 & III & 0.05108 $\pm$ 0.00003 & 390 $\pm$ 10 & 0.88 $\pm$ 0.01 & 0.42 & no \\
Abell 763 & 2.34 & II-III & 0.08924 $\pm$ 0.00008 & 380 $\pm$ 50 & 0.81 $\pm$ 0.02 & 0.43 & no \\
Abell 923 & 2.07 & II & 0.11700 $\pm$ 0.00005 & 420 $\pm$ 30 & 0.88 $\pm$ 0.01 & 0.40 & no \\
Abell 957 & 0.78 & I-II & 0.04546 $\pm$ 0.00005 & 850 $\pm$ 30 & 1.91 $\pm$ 0.01 & 0.55 & no \\
Abell 961 & 3.14 & II-III & 0.12700 $\pm$ 0.00010 & 690 $\pm$ 40 & 1.43 $\pm$ 0.01 & 0.45 & no \\
Abell 971 & 1.44 & II & 0.09320 $\pm$ 0.00002 & 762 $\pm$ 9 & 1.63 $\pm$ 0.01 & 0.78 & no \\
Abell 1035 & 0.92 & II-III & 0.06721 $\pm$ 0.00005 & 720 $\pm$ 30 & 1.59 $\pm$ 0.01 & 0.61 & ? \\
Abell 1045 & 3.47 & II-III & 0.13780 $\pm$ 0.00070 & 600 $\pm$ 100 & 1.28 $\pm$ 0.05 & 0.28 & no \\
Abell 1126 & 1.15 & I-II & 0.08433 $\pm$ 0.00007 & 350 $\pm$ 50 & 0.75 $\pm$ 0.02 & 0.46 & no \\
Abell 1361 & 3.59 & I-II & 0.11549 $\pm$ 0.00005 & 640 $\pm$ 20 & 1.33 $\pm$ 0.01 & 0.45 & no \\
Abell 1446 & 1.30 & II-III & 0.10280 $\pm$ 0.00010 & 710 $\pm$ 60 & 1.51 $\pm$ 0.02 & 0.52 & no \\
Abell 1691 & 0.89 & II & 0.07176 $\pm$ 0.00002 & 650 $\pm$ 8 & 1.42 $\pm$ 0.01 & 0.52 & yes \\
Abell 1728 & 1.29 & I-II* & 0.08965 $\pm$ 0.00002 & 940 $\pm$ 10 & 2.02 $\pm$ 0.01 & 0.34 & yes \\
Abell 1767 & 2.47 & II & 0.07112 $\pm$ 0.00002 & 700 $\pm$ 20 & 1.53 $\pm$ 0.01 & 0.54 & ? \\
Abell 1773 & 1.53 & III & 0.07727 $\pm$ 0.00003 & 790 $\pm$ 40 & 1.72 $\pm$ 0.01 & 0.53 & yes \\
Abell 1809 & 1.61 & II & 0.07938 $\pm$ 0.00001 & 808 $\pm$ 7 & 1.76 $\pm$ 0.01 & 0.76 & ? \\
Abell 1814 & 2.82 & II & 0.12673 $\pm$ 0.00009 & 330 $\pm$ 50 & 0.68 $\pm$ 0.02 & 0.72 & no \\
Abell 1831 & 1.90 & III & 0.06311 $\pm$ 0.00003 & 480 $\pm$ 10 & 1.06 $\pm$ 0.01 & 0.52 & yes \\
Abell 1885 & 2.40 & II-III & 0.09200 $\pm$ 0.00200 & 1100 $\pm$ 500 & 2.40 $\pm$ 0.20 & 0.80 & ? \\
Abell 1925 & 1.56 & II & 0.10570 $\pm$ 0.00007 & 580 $\pm$ 30 & 1.22 $\pm$ 0.01 & 0.34 & ? \\
Abell 1927 & 2.14 & I-II & 0.09506 $\pm$ 0.00003 & 520 $\pm$ 20 & 1.12 $\pm$ 0.01 & 0.37 & no \\
Abell 2033 & 2.57 & III & 0.07828 $\pm$ 0.00003 & 1049 $\pm$ 9 & 2.28 $\pm$ 0.01 & 0.51 & yes \\
Abell 2108 & 1.97 & III & 0.09033 $\pm$ 0.00008 & 750 $\pm$ 30 & 1.60 $\pm$ 0.01 & 0.51 & no \\
Abell 2110 & 3.93 & I-II & 0.09728 $\pm$ 0.00005 & 630 $\pm$ 40 & 1.35 $\pm$ 0.02 & 0.42 & no \\
Abell 2124 & 1.35 & I & 0.06723 $\pm$ 0.00002 & 740 $\pm$ 10 & 1.62 $\pm$ 0.01 & 0.68 & no \\
Abell 2141 & 3.89 & II & 0.15900 $\pm$ 0.00030 & 900 $\pm$ 300 & 1.80 $\pm$ 0.10 & 0.80 & no \\
Abell 2148 & 1.39 & III* & 0.08843 $\pm$ 0.00005 & 570 $\pm$ 20 & 1.22 $\pm$ 0.01 & 0.57 & ? \\
Abell 2149 & 0.83 & II-III* & 0.06503 $\pm$ 0.00003 & 280 $\pm$ 20 & 0.61 $\pm$ 0.01 & 0.35 & no \\
Abell 2175 & 2.93 & II & 0.09646 $\pm$ 0.00004 & 780 $\pm$ 10 & 1.65 $\pm$ 0.01 & 0.44 & no \\
Abell 2199 & 3.70 & I & 0.03055 $\pm$ 0.00001 & 681 $\pm$ 1 & 1.56 $\pm$ 0.01 & 0.43 & yes \\
Abell 2228 & 2.81 & I-II & 0.10102 $\pm$ 0.00004 & 980 $\pm$ 20 & 2.09 $\pm$ 0.01 & 0.41 & no \\
RXJ0820.9+0751 & 2.09 & I-II* & 0.11032 $\pm$ 0.00007 & 560 $\pm$ 30 & 1.17 $\pm$ 0.01 & 0.54 & no \\
RXJ1000.5+4409 & 3.08 & III* & 0.15310 $\pm$ 0.00030 & 700 $\pm$ 100 & 1.33 $\pm$ 0.05 & 0.82 & ? \\
RXJ1053.7+5450 & 1.04 & III* & 0.07291 $\pm$ 0.00004 & 580 $\pm$ 20 & 1.27 $\pm$ 0.01 & 0.62 & yes \\
RXJ1423.9+4015 & 0.94 & III* & 0.08188 $\pm$ 0.00005 & 460 $\pm$ 10 & 0.99 $\pm$ 0.01 & 0.54 & ? \\
RXJ1442.2+2218 & 2.66 & II* & 0.09613 $\pm$ 0.00007 & 470 $\pm$ 70 & 1.01 $\pm$ 0.03 & 0.42 & ? \\
RXJ1652.6+4011 & 2.85 & III* & 0.14800 $\pm$ 0.00020 & 750 $\pm$ 90 & 1.52 $\pm$ 0.03 & 0.45 & yes \\
ZwCl 1478 & 2.38 & II-III* & 0.10300 $\pm$ 0.00080 & 400 $\pm$ 200 & 0.82 $\pm$ 0.08 & 0.35 & no \\
ZwCl 4905 & 1.20 & I-II* & 0.07641 $\pm$ 0.00003 & 480 $\pm$ 20 & 1.04 $\pm$ 0.01 & 0.48 & no \\
ZwCl 6718 & 1.24 & II* & 0.07220 $\pm$ 0.00020 & 400 $\pm$ 200 & 0.85 $\pm$ 0.09 & 0.55 & no \\
ZwCl 8197 & 2.89 & II* & 0.11320 $\pm$ 0.00010 & 830 $\pm$ 100 & 1.74 $\pm$ 0.04 & 0.31 & no \\
\hline
\noalign{\smallskip}
\end{tabular}
\label{tab:clusters}
\end{center}
\end{table*}
We use SDSS DR6 (Adelman-McCarthy et al.\ 2008) as our photometric and spectroscopic
catalogue in this work, which immediately cuts down the number of potential clusters
available to study due to the spatial coverage of SDSS and also restricts our
coverage to modestly bright magnitudes. The SDSS
main sample spectroscopic target list contains all galaxies
brighter than $r=17.77$ (Strauss et al.\ 2002) but due to fibre collisions
(cf.\ Blanton et al.\ 2003), the completeness level at $r=17.77$ may be
slightly lower than 100 per cent at this magnitude. In Fig.~\ref{fig:rhistlog}
we display our own estimate of the completeness of our catalogue and find
that at $r=17.77$, we are still $\sim 95$ per cent complete which should be more
than sufficient for the present work.
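As an illustration of this estimate, a minimal Python sketch (our own, with the bin width a placeholder and the fit range matching the text) is:
\begin{verbatim}
import numpy as np

def completeness_at(r_mag, r_eval=17.77, fit_range=(16.0, 17.0),
                    width=0.1):
    # Fit a line to log10 N(r) over 16 < r < 17, where the sample is
    # taken to be 100 per cent complete, and extrapolate to r_eval.
    bins = np.arange(r_mag.min(), r_mag.max() + width, width)
    counts, edges = np.histogram(r_mag, bins=bins)
    centres = 0.5 * (edges[:-1] + edges[1:])
    ok = ((centres >= fit_range[0]) & (centres <= fit_range[1])
          & (counts > 0))
    slope, icpt = np.polyfit(centres[ok], np.log10(counts[ok]), 1)
    expected = 10.0 ** (slope * r_eval + icpt)
    observed = counts[np.argmin(np.abs(centres - r_eval))]
    return observed / expected   # ~0.95 for our catalogue
\end{verbatim}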
\begin{figure}
\centerline{\psfig{file=rhistlog.ps,angle=0,width=3.5in}}
\vspace*{-5cm}
\caption{Histogram of r-band magnitudes for our galaxy sample. To determine
how complete our sample is, we fit a line to the brighter part of this plot
($16<r<17$) where we are confident of being 100 per cent complete and extrapolate
this to fainter magnitudes. At $r=17.77$, our sample is still 95 per cent
complete and we suggest that this level of completeness is sufficient for
the present work.
}
\label{fig:rhistlog}
\end{figure}
Preliminary cluster membership is determined by extracting galaxies
within $z\pm0.02$ of the nominal cluster redshift given by XBACS, BCS and eBCS out to a
projected radius of 10 Mpc from the cluster centre (defined as the X-ray luminosity
peak of the cluster).
For each cluster, we determine final membership by fitting a Gaussian
to the redshift distribution of the preliminary members (Fig.~\ref{fig:a1767}).
Using the position of the peak of the Gaussian as the updated cluster redshift,
we proceed to eliminate any galaxies outside $3 \sigma_z / c$ of the updated
redshift (i.e.\ a sigma-clipping process;
cf.\ Yahil \& Vidal 1977). This process is iterated twice to produce the
final membership for the clusters in our sample. Although we may miss
real cluster members through this method (cf.\ section 4.2 of Pimbblet et al.\ 2006),
it will suffice for our purposes as we will later stack all of our clusters
together to form a composite.
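A minimal Python sketch of this membership step follows (our illustration; for brevity the centre and dispersion are updated from the surviving galaxies, whereas the analysis refits a Gaussian to the redshift histogram, as sketched after the next paragraph):
\begin{verbatim}
import numpy as np

C_KMS = 2.998e5   # speed of light in km/s

def cluster_members(z_gal, z_cl, sigma_kms, n_iter=2):
    # z_gal: redshifts of preliminary members (|z - z_cl| < 0.02)
    members = np.asarray(z_gal)
    for _ in range(n_iter):
        keep = np.abs(members - z_cl) < 3.0 * sigma_kms / C_KMS
        members = members[keep]
        z_cl = members.mean()              # updated cluster redshift
        sigma_kms = members.std() * C_KMS  # updated dispersion
    return members, z_cl, sigma_kms
\end{verbatim}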
To ensure that this method is robust against choice of bin size in
constructing the redshift histogram,
Gaussian fit coefficients for the redshift histograms are estimated
iteratively by performing $\chi^2$
minimizations on the histogram data points. One hundred iterations
are performed for each cluster with new random bin sizes on each
iteration. The best fit Gaussian coefficients and their associated
errors are calculated from the mean and standard deviation values
of the coefficients over 100 iterations. By randomising the bin sizes,
we ensure that our best fit Gaussians are independent of binning bias.
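The bin-randomised fit itself can be sketched as follows (our Python illustration; the bin-size range is a placeholder):
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def gauss(z, amp, z0, sig):
    return amp * np.exp(-0.5 * ((z - z0) / sig) ** 2)

def fit_redshift_peak(z, n_iter=100, seed=1):
    rng = np.random.default_rng(seed)
    fits = []
    for _ in range(n_iter):
        width = rng.uniform(0.001, 0.005)   # random bin size in z
        bins = np.arange(z.min(), z.max() + width, width)
        counts, edges = np.histogram(z, bins=bins)
        centres = 0.5 * (edges[:-1] + edges[1:])
        try:
            popt, _ = curve_fit(gauss, centres, counts,
                                p0=[counts.max(), z.mean(), z.std()])
            fits.append(popt)
        except RuntimeError:
            continue                        # skip non-converged fits
    fits = np.array(fits)
    # best-fit coefficients and errors: mean and scatter over the runs
    return fits.mean(axis=0), fits.std(axis=0)
\end{verbatim}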
Our examination of the clusters generated by this method (e.g., Fig.~\ref{fig:a1767})
returns a mixture of usable and unusable clusters.
Here, we consider `unusable' to be clusters with fewer than 50 members (this is driven
by a need to maintain a high completeness in each cluster so that
we can fit a line to the colour-magnitude relation of an individual cluster and
so that an individually rich cluster will not dominate in a stacked, composite sample),
those located in the `whisker' regions of SDSS DR6 (i.e.\ those
contained in the long, narrow stripes of spectroscopically-sampled sky near the celestial equator
where we cannot probe out to $3r_{200}$) and over-lapping clusters.
For the former issue, this introduces a minor bias to our target selection: low redshift
clusters are much more able to enter our sample than higher redshift clusters
(clusters at $z<0.1$ have $182 \pm 48$ extra members compared to
$z>0.1$) meaning that some of the advantage our sample
has (Fig.~1) is removed. However, if we were to add in clusters
with $<50$ members,
their contribution to the final, composite cluster would be
fractional (one extra cluster with $<50$ members would increase our
sample size by $<<0.5$ per cent) and our final results would not
be significantly affected.
We define overlapping clusters as clusters which have their 10 Mpc projected
radius search areas overlap with the search areas of other clusters within our
$L_X$ range. Removing
the overlapping clusters simplifies our analysis because it means that we do
not have to worry about how to assign cluster membership to galaxies located
in the overlap regions and cross-contamination
(explicitly, an overlapping neighbour
will manifest itself as a second red sequence in a colour-magnitude diagram which will
mean any fit to the colour-magnitude relation of an individual cluster is suspect at best).
We caution that there may still be clusters overlapping with other clusters
that are outside of our $L_X$ range but we would anticipate that their influence on
our result to be minimal when we later combine our clusters in to a composite sample.
Our final sample consists of 45 intermediate $L_X$ galaxy clusters
at a median redshift of $z=0.0897$ which span a range of properties.
For example, our richest cluster (Abell 2199)
contains 1344 spectroscopically-confirmed members whereas our poorest
cluster (Abell 1814) has 54 members.
We summarize our cluster sample and their global
properties in Table~\ref{tab:clusters}.
To increase the utility of our analysis to
other researchers (and our own future work), we derive several additional
parameters for the clusters used in this work.
Firstly, included in Table~\ref{tab:clusters} are Bautz-Morgan classifications
of our clusters (Bautz \& Morgan 1970). Most of these are directly
sourced from the NASA Extragalactic Database (NED), but those marked
by an asterisk were manually determined by visual inspection by PCJ.
Values for $r_{200}$ are derived following Carlberg et al.\ (1997):
$R_{Virial} \approx r_{200} = \sqrt{3} \sigma_z / (10 H(z))$
where
$H^2(z) = H_0^2 (1+z)^2 (1+\Omega_M z)$
and
$\sigma_z$ is the velocity dispersion of the cluster.
The concentration indices ($C$) are computed by sorting
all galaxies within $3r_{200} \approx r_{100}$ in to
projected radius order and applying
$C = \log_{10} (r_{60} / r_{20})$ where $r_{20}$ and $r_{60}$
are the 20th and 60th percentiles of this distribution.
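A short Python transcription of these derived quantities (ours), using the cosmology adopted in Section~1, reads:
\begin{verbatim}
import numpy as np

H0, OMEGA_M = 73.0, 0.24          # Spergel et al. (2007), Section 1

def hubble(z):
    # H^2(z) = H0^2 (1+z)^2 (1 + Omega_M z)
    return H0 * np.sqrt((1.0 + z) ** 2 * (1.0 + OMEGA_M * z))

def r200(sigma_kms, z):
    # r_200 = sqrt(3) sigma_z / (10 H(z)), in Mpc
    return np.sqrt(3.0) * sigma_kms / (10.0 * hubble(z))

def concentration(r_proj):
    # C = log10(r_60/r_20) from projected radii within 3 r_200
    r20, r60 = np.percentile(r_proj, [20.0, 60.0])
    return np.log10(r60 / r20)

print(round(r200(700.0, 0.07112), 2))  # ~1.54, cf. 1.53 +/- 0.01
                                       # for Abell 1767 in Table 1
\end{verbatim}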
We also make a qualitative judgment about whether the cluster
has substructure (column headed $S$ in Table~\ref{tab:clusters}) from inspection of a plot of local galaxy
density\footnote{We define local galaxy density as $\Sigma_{10}$ --
the surface area on the sky that is occupied by a given galaxy and its 10 nearest neighbours.
This measure of `environment' probes the internal densities of the dark matter halos
well (Muldrew et al.\ 2011).}
versus projected radius (Fig.~\ref{fig:a1767}). A peak outside of the cluster core in $\Sigma_{10}$
is taken as a signature of substructure for this purpose.
Although in principle we could have used other, quantitative methods (cf.\ Dressler \& Shectman 1988;
Ashman et al.\ 1994; see also Pimbblet et al.\ 2011 and references therein)
this approach is sufficient to make the statement that our sample has a range of clusters at
a number of evolutionary stages.
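For reference, the density estimate can be sketched in Python as (our illustration, using a small-angle flat-sky approximation):
\begin{verbatim}
import numpy as np
from scipy.spatial import cKDTree

def sigma_10(ra, dec, n=10):
    # Sigma_10: n divided by the area enclosing a galaxy and its n
    # nearest neighbours (one common convention; here pi * d_n^2)
    xy = np.column_stack([ra * np.cos(np.radians(dec)), dec])
    dist, _ = cKDTree(xy).query(xy, k=n + 1)  # first hit is the
    return n / (np.pi * dist[:, -1] ** 2)     # galaxy itself
\end{verbatim}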
Including both clusters that have strong secondary $\Sigma_{10}$ peaks and those with
probable ones (i.e.\ both `yes' and `?' entries in the substructure column, $S$,
of Table~\ref{tab:clusters}),
our sample has 19 out of 45 clusters ($\sim 40$ per cent)
containing substructure. This is an under-estimate for intermediate $L_X$ clusters
overall since we have intentionally removed overlapping clusters to generate
our sample (above) and we do not search for line-of-sight substructure.
We note that this value is intermediate to that predicted for rich clusters
(e.g., Lacey \& Cole 1993) and that observed in poor clusters (Burgett et al.\ 2004).
This is in line with theoretical expectations that demonstrate
intermediate-$L_X$ clusters are expected to have accreted approximately 50 per cent
of their total mass in the past $\sim 7$ Gyr (see Fig.~13 of Lacey \& Cole 1993).
\begin{figure*}
\centerline{\psfig{file=BMHistogram.ps,angle=0,width=6.75in}}
\vspace*{-0.7cm}
\caption{Bautz-Morgan fractions for the XBACS, BCS, and eBCS
catalogues (462 clusters) and our cut-down intermediate $L_X$ cluster sample (45 clusters).
The vertical error bars are Poissonian. The two samples are not significantly different at the $3\sigma$ level.
}
\label{fig:BMfracs}
\end{figure*}
To characterize the dataset and address the question of whether our sample of 45 galaxy clusters
is representative of the global intermediate $L_X$ cluster population we compare
the frequency of different
Bautz-Morgan classifications in our sample and the parent XBACS, BCS
and eBCS sample.
This is graphically displayed in Fig.~\ref{fig:BMfracs}. In comparison to
the full X-ray sample of intermediate $L_X$ clusters, our cut-down sample appears
to have a lower prevalence of BM I and slightly higher
prevalence of BM II types.
The fraction of BM III (and, indeed, intermediate BM types) appears
to be similar in both populations. Despite this, the differences
between the histograms are not significant at the $3\sigma$ level.
We therefore contend that we have a representative sample of all intermediate $L_X$ cluster
types for this work.
However, from Fig.~\ref{fig:lxz} it is also apparent that we are not probing the lower
right of the box denoting this work (i.e.\ clusters at the upper end of our redshift
interval with low $L_X$). In acknowledging that this biases our cluster sample, we note the
lookback time from the lowest redshift to the highest is some $\sim 1.5$ Gyr.
\section{Luminosity Function and Colour-Magnitude Relations}
Having defined our cluster sample and their galaxy members, we now create
plots of colour-magnitude relationships for all of our clusters in
a variety of broadbands (Fig.~\ref{fig:cmrs}).
In creating these colour-magnitude diagrams, we located a number of
discordant points -- galaxies that lie very far away from the bulk of the
population. These galaxies are explored in detail in Appendix~A but here
we simply note that they are removed from the subsequent analysis as they
may adversely affect the fit of the CMRs.
\begin{figure*}
\centerline{\psfig{file=cmr1767v2.ps,angle=0,width=6.5in}}
\vspace*{-1.5cm}
\caption{Colour-magnitude diagrams for Abell~1767 in a variety of broadbands
for all member galaxies within 10 Mpc of the cluster centre (see Fig.~\ref{fig:a1767}).
For each diagram, we fit the red sequence (diagonal solid line) as described in the text.
The vertical lines denote M$^{\star}$ (dashed line) and our fiducial magnitude
(solid line) which represents a 90 per cent completeness limit for our sample (see
Fig.~\ref{fig:rhistlog}).
We use $M^{\star}$ as the point about which we pivot the plots to create the composite cluster.
Analogous diagrams are constructed for the other clusters in our sample.
}
\label{fig:cmrs}
\end{figure*}
We first fit the absolute (k-corrected) magnitudes of the
galaxy cluster members with a Schechter function (Schechter 1976)
to determine $M^{\star}$ (a more physically meaningful parameter than
absolute magnitudes)
and therefore provide a pivot point which we will
use to combine our CMRs together in to a stacked cluster (dashed vertical
lines in Fig.~\ref{fig:cmrs}).
The k-corrections used in this work are from
the {\sc photoz} table of SDSS DR6
(http://cas.sdss.org/astro/en/help/browser/browser.asp?
n=Photoz\%20\&t=U).
Although these use photometric redshifts, the redshift contribution to the total
error is insignificant in comparison to the error budget of the flux density
which is rarely known to better than a few per cent (Hogg et al.\ 2002).
For each absolute magnitude band, the Schechter functions are fit from
the brightest magnitude bin up to the 90 per cent completeness limit (solid vertical
line in Fig.~\ref{fig:cmrs}; see Fig.~\ref{fig:rhistlog} for the definition).
One hundred $\chi^2$ minimizations are performed with $M^{\star}$, $N^{\star}$
and $\alpha$ as free parameters and with randomised bin sizes between 0.05 and
0.5 mag. The fit parameters for $M^{\star}$
and their associated errors were calculated as being
the mean and standard deviation of the 100 runs. These values for
$M^{\star}$ (critical to this study) are shown in Table~\ref{tab:lf}.
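A Python sketch of this fitting loop (ours; only $M^{\star}$ is retained) might read:
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def schechter_mag(M, N_star, M_star, alpha):
    # Schechter (1976) function in absolute-magnitude form
    x = 10.0 ** (-0.4 * (M - M_star))
    return 0.4 * np.log(10.0) * N_star * x ** (alpha + 1) * np.exp(-x)

def fit_M_star(M, M_limit, n_iter=100, seed=2):
    rng = np.random.default_rng(seed)
    M = M[M < M_limit]                 # 90 per cent completeness cut
    m_stars = []
    for _ in range(n_iter):
        width = rng.uniform(0.05, 0.5)   # random bin size in mag
        counts, edges = np.histogram(
            M, bins=np.arange(M.min(), M_limit, width))
        centres = 0.5 * (edges[:-1] + edges[1:])
        ok = counts > 0
        try:
            popt, _ = curve_fit(schechter_mag, centres[ok], counts[ok],
                                p0=[counts.max(), -21.0, -1.0])
            m_stars.append(popt[1])
        except RuntimeError:
            continue
    return np.mean(m_stars), np.std(m_stars)
\end{verbatim}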
We note that single Schechter function fits are inadequate
descriptions of the luminosity function at both the bright end (where double
or Gaussian functions may be more appropriate; e.g.\
Thompson \& Gregory 1993; Dahl{\'e}n et al.\ 2004) and the faint end (since
we truncate our fits to the 90 per cent completeness limit). They do,
however, provide an excellent fit to the `knee' of the distribution
and consequentially $M^{\star}$ which is important to us as a pivot point since
it does not significantly vary with redshift (Mobasher et al.\ 2003 and references
therein). The values of $M^{\star}$ that we present
in Table~\ref{tab:lf} are similar to other values
reported for generic SDSS cluster populations (Popesso et al.\ 2005), but we
refrain from a more detailed comparison of these luminosity functions since
our cluster sample is constructed in a very different manner to those
studies and with different goals in mind.
\begin{table}
\begin{center}
\caption{Values for $M^{\star}$ from fitting Schechter functions to the
cluster galaxies.
\hfil}
\begin{tabular}{ll}
\noalign{\medskip}
\hline
Band & $M^{\star}$ \\
\hline
$u$ & -18.29$\pm$0.04 \\
$g$ & -20.35$\pm$0.05 \\
$r$ & -21.09$\pm$0.05 \\
$i$ & -21.53$\pm$0.04 \\
$z$ & -21.81$\pm$0.04 \\
\hline
\end{tabular}
\label{tab:lf}
\end{center}
\end{table}
To fit the CMRs, we only use galaxies brighter than the 90 per cent completeness
fiducial magnitudes (vertical solid line in Fig.~\ref{fig:cmrs}).
Since there are two overlapping galaxy populations on the colour-magnitude
plane (red sequence and blue cloud), ordinary least squares fitting techniques
are inadequate to fit the CMRs. Typically, ordinary least squares fitting yields
slopes which are too steep and intercepts which are too high --
the faint end of the CMR being dragged blueward by the blue cloud.
Ideally, one would like to remove the blue cloud galaxies and perform
an ordinary least squares fit to the remaining red sequence galaxies.
However, identifying blue cloud galaxies without already knowing the
position of the CMR is a non-trivial exercise. A more straightforward
approach is to perform a robust linear fit which effectively weights
red sequence galaxies higher than blue cloud galaxies (see for example Pimbblet et al.\ 2002
who use a merit function which minimizes the sum of the absolute values of the residuals).
Here, we follow a suggestion by Press et al.\ (1992) and use the Lorentzian merit
function:
\begin{equation}
\sum_i \frac{\log(1+r^2_i / 2)}{\Delta y_i}
\end{equation}
where $r_i$ is the residual of the $i$-th data point about the candidate line and $\Delta y_i$ is the magnitude of the error in $y$.
In comparison to the absolute deviation minimizing
merit function, the Lorentzian merit function visually
appears to converge at least as well or closer to the
apparent red sequence. The merit function was minimized
using the Nelder-Mead downhill simplex algorithm (Press et al.\
1992).
The Nelder-Mead algorithm requires initial guess values for the
linear fit coefficients -- we use ordinary least squares regression fit
coefficients as our starting points.
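A compact Python version of this fit (ours; note that Press et al.\ place the error-scaled residual inside the logarithm, whereas the merit function above carries the error weight outside) is:
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

def lorentzian_merit(p, mag, col, err):
    # r_i: raw residual about the candidate line; the error enters
    # as the outer weight, following the merit function above
    r = col - (p[0] * mag + p[1])
    return np.sum(np.log(1.0 + 0.5 * r ** 2) / err)

def fit_cmr(mag, col, err):
    p0 = np.polyfit(mag, col, 1)   # OLS coefficients as start point
    res = minimize(lorentzian_merit, p0, args=(mag, col, err),
                   method="Nelder-Mead")
    return res.x                   # (slope, intercept) of the CMR
\end{verbatim}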
In almost all cases our approach
successfully located the CMR in each diagram.
However, the algorithm failed for a few clusters. In these cases, we
impose a colour envelope around the (eyeball determined) location of the CMR
to help the algorithm hone in on the CMR on a second run.
All fits are double-checked by eye to confirm the approach works (Fig.~\ref{fig:cmrs}).
\section{Composite cluster and Discussion}
We now exploit the homogeneity of our sample and stack our clusters together
to form a composite.
This is achieved by applying colour and magnitude
transformations to the CMRs of each of our clusters (Fig.~\ref{fig:cmrs})
in a two-step process.
Firstly, we remove the slope of the CMR and
secondly, we evolve the individual clusters to a common redshift.
For each cluster we identify a pivot point which we define to be the
intersection of the CMR with $M^{\star}$.
We rotate every point on the colour-magnitude diagram about this
pivot point such that the CMR becomes horizontal. We emphasize that
the slopes of the CMRs of the individual clusters have been
found in an homogenous manner for our sample -- an absolute requirement
given the uncertainties (cf.\ Pimbblet et al.\ 2002).
We then evolve the
clusters to our mean redshift ($z = 0.0897$) by performing a vertical
colour shift and a horizontal magnitude shift. The colour scale of
each cluster is shifted so that the red sequence ridge lines are
stacked on top of each other at a colour consistent with $M^{\star}$
at the mean redshift of the sample.
The magnitude scale of each cluster is also
shifted so that the $M^{\star}$ value of each cluster coincides.
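In code, the two-step transformation amounts to (our sketch; the `rotation' is implemented as a shear, which is equivalent for the shallow CMR slopes involved):
\begin{verbatim}
import numpy as np

def stack_cluster(mag, col, slope, icpt, M_star, M_star_ref, col_ref):
    # Pivot: intersection of the CMR with M*
    pivot_col = slope * M_star + icpt
    # Step 1: remove the CMR slope (red sequence becomes horizontal)
    col_flat = col - slope * (mag - M_star)
    # Step 2: vertical colour shift and horizontal magnitude shift
    # on to the composite frame at the mean redshift z = 0.0897
    return (mag + (M_star_ref - M_star),
            col_flat + (col_ref - pivot_col))
\end{verbatim}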
Fig.~\ref{fig:composite} displays the colour-magnitude diagram of the composite cluster in
(g-i) versus i. Analogous plots are created for the other colour versus
magnitude combinations permitted by the SDSS data.
\begin{figure}
\centerline{\psfig{file=composite2.ps,angle=0,width=3.75in}}
\vspace*{-0.7cm}
\caption{Composite $z\sim0.09$ cluster colour-magnitude diagram in (g-i) versus i
for all 45 clusters in our sample.
The solid horizontal line denotes the transformed position of the CMR
of all the individual clusters whilst the vertical dashed line is $M^{\star}$.
}
\label{fig:composite}
\end{figure}
We now turn to the environmental dependence of the modal colour of the CMR.
We define the modal colour to be the colour corresponding to the peak of
a Gaussian fitted to the CMR peak of the colour histogram of the composite cluster.
To probe modal
colour dependence as a function of cluster environment, we employ a
method similar to the one described in Pimbblet et al.\ (2002; 2006).
Briefly, this is done by dividing the composite CMR (Fig.~\ref{fig:composite})
into projected radius (via a fixed metric) and local galaxy density bins
down to a fixed absolute magnitude limit ($M^{\star}+1$).
Although we could have chosen $r_{200}$ to place our clusters on to a more
physically-motivated spatial scale, we choose to use a fixed metric since
$r_{200} \approx r_{Virial}$ has only a weak dependence on X-ray luminosity
(Babul et al.\ 2002; see also Pimbblet et al.\ 2002). Given the narrow
$L_X$ range of our sample, the variation of $r_{200}$ for the cluster sample
is predicted to be small
(indeed, this prediction is borne out in our calculations presented in
Table~\ref{tab:clusters}) hence the clusters
are scaled to the fixed metric for simplicity.
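The binned modal-colour measurement can be sketched as (our Python illustration; the bin edges and histogram resolution are placeholders, and the input galaxies are assumed pre-selected brighter than $M^{\star}+1$):
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def gauss(c, amp, c0, sig):
    return amp * np.exp(-0.5 * ((c - c0) / sig) ** 2)

def modal_colour_profile(r_proj, col, r_edges):
    modes, widths = [], []
    for lo, hi in zip(r_edges[:-1], r_edges[1:]):
        c = col[(r_proj >= lo) & (r_proj < hi)]
        counts, edges = np.histogram(c, bins=30)
        centres = 0.5 * (edges[:-1] + edges[1:])
        popt, _ = curve_fit(gauss, centres, counts,
                            p0=[counts.max(), np.median(c), c.std()])
        modes.append(popt[1])
        widths.append(abs(popt[2]))
    # modal colour gradient, d(colour)/d log(r_p)
    log_r = np.log10(0.5 * (r_edges[:-1] + r_edges[1:]))
    grad = np.polyfit(log_r, modes, 1)[0]
    return np.array(modes), np.array(widths), grad
\end{verbatim}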
In Fig.~\ref{fig:depends} we present the dependence of the modal CMR value
on these two parameters. Qualitatively in-line with previous works (Abraham et al.\ 1996;
Terlevich et al.\ 2001; Pimbblet et al.\ 2002; Wake et al.\ 2005;
Pimbblet et al.\ 2006; see also Girardi et al.\ 2003; Haines et al.\ 2004),
we observe the modal red sequence
colour become progressively bluer with radius away from the
cluster centre and, equivalently, decreasing local galaxy density.
At the same time, the width of the red sequence is observed to increase.
Similar results are obtained for the other SDSS bands.
The strongest gradient in modal colour occurs for the inner portions (equivalently,
high density regions) of these plots (see Fig.~\ref{fig:depends}).
We interpret the increasing width and blueward shift of the CMR
as an age-metallicity effect (Kodama \& Arimoto 1997;
Kodama \& Bower 2001; Pimbblet et al.\ 2006) whereby
the red sequence galaxies on the outskirts of the
clusters have younger luminosity-weighted ages than those
at the core.
One potential issue with the increasing width is that as one
moves from the cluster core to its outskirts, the mixture
of the galaxies may be changing. For instance, the typical galaxy
at higher radius may have a lower mass. This would result in larger
error bars on the colour of the galaxy and could naturally explain
the increase in width as simply being a relic of measurement errors.
The colour error budget is, however, dominated by the fainter
galaxies at all radii (simply due to their numbers).
The median (g-r) colour error in our composite cluster
at $<1$ Mpc is 0.009, which increases to 0.010 at 5--6 Mpc.
These figures $\sim$halve if we consider only galaxies with
$M_r<-21$ and increase by some $\approx$0.005 for $M_r>-20$
galaxies. In all cases, the change in the photometric error
with radius from the cluster centre is much less than the CMR
width (Fig.~\ref{fig:depends}) and at most contributes a quarter
of the CMR's width (i.e.\ at low radii).
We also check the reality of the radial blueward trend by isolating
samples in different absolute magnitude ranges (i.e.\ a mass proxy).
In all cases, a blueward trend of modal CMR colours is found.
\begin{figure*}
\centerline{
\psfig{file=LogRadius_gi2.ps,angle=0,width=4.5in}
\hspace*{-2.5cm}
\psfig{file=LogSigma10_gi2.ps,angle=0,width=4.5in}
}
\vspace*{-2.5cm}
\caption{Environmental dependence of the width (upper panels)
and modal value of the red sequence peak (lower panels).
The colours of the CMR galaxies become progressively bluer with increasing radius
from the cluster centre and decreasing local galaxy density (the lines
in the lower panels are two linear fits to the points, divided where the gradient becomes
approximately zero).
This is accompanied with an increase in the width of the CMR.
}
\label{fig:depends}
\end{figure*}
In Table~\ref{tab:grads} we give the colour gradient for the inner regions
of the composite cluster to the limit where the gradient becomes zero
for all of the SDSS colours. This is, to our knowledge, the most
comprehensive reporting of modal colour variations across clusters
to date. Hence to
compare this with other studies, we
must restrict ourselves to more commonly used colours -- e.g.\
our reported $(g-r)$ colour gradient.
In Fig.~\ref{fig:compare} we display the radial $(g-r)$ gradient
with the results reported by
Abraham et al.\ (1996), Terlevich et al.\ (2001),
Pimbblet et al.\ (2002), and Wake et al.\ (2005)\footnote{We transform
the colours reported in these studies to $(g-r)$ colours
through the transforms presented by Smith et al.\ (2002).}
together with the gradient as a function of
mean $L_X$ for each of those studies.
\begin{table}
\begin{center}
\caption{Modal colour gradient for the various SDSS colours
as a function of projected radius and local galaxy density.
\hfil}
\begin{tabular}{ll}
\noalign{\medskip}
\hline
Radial Gradient & Value \\
\hline
$d(u-g)/d log(r_p)$ & $-0.069 \pm 0.007$ \\
$d(u-r)/d log(r_p)$ & $-0.099 \pm 0.008$ \\
$d(u-i)/d log(r_p)$ & $-0.130 \pm 0.020$ \\
$d(u-z)/d log(r_p)$ & $-0.130 \pm 0.010$ \\
$d(g-r)/d log(r_p)$ & $-0.031 \pm 0.003$ \\
$d(g-i)/d log(r_p)$ & $-0.049 \pm 0.004$ \\
$d(g-z)/d log(r_p)$ & $-0.073 \pm 0.006$ \\
$d(r-i)/d log(r_p)$ & $-0.018 \pm 0.002$ \\
$d(r-z)/d log(r_p)$ & $-0.035 \pm 0.004$ \\
$d(i-z)/d log(r_p)$ & $-0.015 \pm 0.002$ \\
\hline
Density Gradient & Value \\
\hline
$d(u-g)/d log(\Sigma)$ & $0.025 \pm 0.004$ \\
$d(u-r)/d log(\Sigma)$ & $0.041 \pm 0.004$ \\
$d(u-i)/d log(\Sigma)$ & $0.043 \pm 0.006$ \\
$d(u-z)/d log(\Sigma)$ & $0.048 \pm 0.007$ \\
$d(g-r)/d log(\Sigma)$ & $0.012 \pm 0.002$ \\
$d(g-i)/d log(\Sigma)$ & $0.020 \pm 0.003$ \\
$d(g-z)/d log(\Sigma)$ & $0.024 \pm 0.005$ \\
$d(r-i)/d log(\Sigma)$ & $0.007 \pm 0.001$ \\
$d(r-z)/d log(\Sigma)$ & $0.017 \pm 0.002$ \\
$d(i-z)/d log(\Sigma)$ & $0.009 \pm 0.001$ \\
\hline
\end{tabular}
\label{tab:grads}
\end{center}
\end{table}
\begin{figure*}
\centerline{\psfig{file=compare.ps,angle=0,width=6.75in}}
\vspace*{-0.7cm}
\caption{Comparison of reported $(g-r)$ CMR gradients from
other X-ray selected cluster surveys with the present work.
No trend is apparent with mean $L_X$ (left), but this may reflect
the nature of the individual clusters in each sample.
Conversely, these works strongly suggest
a steeper radial modal colour gradient with redshift (right).
}
\label{fig:compare}
\end{figure*}
There appears to be no correlation of X-ray luminosity -- and
therefore cluster mass by implication -- for the radial colour
gradients. However, Abraham et al.\ (1996) and Terlevich et al.\ (2001)
are only single-case studies (Abell~2370 and Coma respectively).
Therefore the gradients reported in those works may
reflect the individual accretion history of those clusters rather than
the more general results presented here and by Pimbblet et al.\ (2002)
and Wake et al.\ (2005). This is supported by the results of Stott et al.\ (2009)
who find the CMR slope of Coma is very unusual compared to other clusters.
Moreover, the range of $L_X$ probed by Wake et al.\ (2005) means that that
point should also be regarded as tentative.
For our study and Pimbblet et al.\ (2002), the cluster samples are intentionally
limited in $L_X$ and we therefore hypothesize that there may yet be a relationship
between colour gradient and $L_X$ such that the gradient becomes steeper
with increasing $L_X$ values.
A targeted study of a carefully constructed sample of low $L_X$ clusters
is therefore urgently needed to attempt to falsify this hypothesis.
Applying the same logic to the redshift evolution, we contend that the
radial variation of the modal CMR colour gets increasingly strong at higher redshift.
That said, we are not necessarily comparing like clusters with each other
over that redshift range -- the progenitors of our sample are most certainly not
going to be higher $L_X$ clusters at higher redshifts.
Given the median difference in $L_X$ from our sample to (e.g.) LARCS, we
estimate that the difference in total mass would be a factor of $\sim$ 2--3
(Popesso et al.\ 2005; see also Stott et al.\ 2009). Since we do not
expect significant variation in these cluster's luminosity functions
over such a mass range (cf.\ De Propris et al.\ 1999), we would expect
that any variation in cluster properties are reflective of evolutionary
change over the redshift range.
Further, we note that the range of $L_X$
probed by Wake et al.\ (2005) does include some clusters in our own $L_X$
interval, which adds weight to the above argument for redshift evolution.
This trend would also
be straightforward to interpret: the red cores of clusters establish themselves
at early times and gain mass through accreting bluer `field' galaxies which
progressively redden over time (cf.\ De Lucia et al.\ 2007; Kodama \& Bower 2001;
Mart{\'{\i}}nez, Coenda \& Muriel 2010).
The colour gradient therefore becomes shallower as the blue galaxies
transform to redder galaxies at higher radii from cluster centres at a fixed mass
and reflects the hierarchical build-up of the red sequence over time.
\section{Summary}
In this work, we have assembled a sample of intermediate X-ray
luminosity galaxy clusters from SDSS photometry and spectroscopy.
Our principal results are as follows:\\
\noindent $\bullet$
Strong colour-magnitude red sequences exist for our $L_X$ range
in all SDSS colours probed. As with higher $L_X$ studies, these
CMRs can be accessed to a large radius from the cluster centres, out in
to the surrounding filamentary regime.\\
\noindent $\bullet$
The clusters exhibit a modal colour gradient for the red sequence galaxies
in projected radius and local galaxy density whereby red sequence galaxies
at the cluster outskirts are systematically bluer than those in the core
for all SDSS colours.
These gradients are interpreted as the galaxies at the cluster outskirts
having younger luminosity-weighted ages.\\
\noindent $\bullet$
Our results agree with earlier measurements of colour gradients and extend
them to comprehensively cover optical and near infra-red colours.
They suggest that there may be a relationship between
redshift and CMR colour gradient, but further studies
using carefully constructed cluster samples are needed to verify
this hypothesis and any potential relationship between $L_X$ and
colour gradient.
\section*{Acknowledgements}
PCJ and KAP contributed equally to this manuscript.
PCJ acknowledges receipt of an Australian Postgraduate Award.
KAP acknowledges partial support from the Australian Research Council
and a Monash University internal grant scheme.
We thank Warrick Couch and Michael Drinkwater for helpful feedback on the content
of this paper which is derived from PCJ's Honours Thesis, awarded by U.Queensland.
Further, we gratefully thank the Astronomical Society of Australia for the
award of the Bok Prize (2009) to PCJ for said thesis.
We also thank the anonymous referee for her/his
constructive comments that have improved the work reported here.
Funding for the SDSS and SDSS-II has been provided by the
Alfred P.\ Sloan Foundation, the Participating Institutions,
the National Science Foundation, the U.S. Department of Energy,
the National Aeronautics and Space Administration, the Japanese
Monbukagakusho, the Max Planck Society, and the Higher Education
Funding Council for England. The SDSS Web Site is http://www.sdss.org/.
The SDSS is managed by the Astrophysical Research Consortium for the
Participating Institutions. The Participating Institutions are the
American Museum of Natural History, Astrophysical Institute Potsdam,
University of Basel, University of Cambridge, Case Western Reserve
University, University of Chicago, Drexel University, Fermilab, the
Institute for Advanced Study, the Japan Participation Group, Johns
Hopkins University, the Joint Institute for Nuclear Astrophysics, the
Kavli Institute for Particle Astrophysics and Cosmology, the Korean
Scientist Group, the Chinese Academy of Sciences (LAMOST), Los Alamos
National Laboratory, the Max-Planck-Institute for Astronomy (MPIA), the
Max-Planck-Institute for Astrophysics (MPA), New Mexico State University,
Ohio State University, University of Pittsburgh, University of Portsmouth,
Princeton University, the United States Naval Observatory, and the University of Washington.
This research has made use of the NASA/IPAC Extragalactic Database (NED) which is
operated by the Jet Propulsion Laboratory, California Institute of Technology,
under contract with the National Aeronautics and Space Administration.
|
2,869,038,153,939 | arxiv | \section{Introduction}
The supersymmetric t-J model is often considered a candidate for describing
high $T_c$
superconductivity~\cite{Korepin:1994,Foerster:1993fp,Foerster:1993uk}. The
underlying symmetry is described by the supersymmetric (graded) Lie algebra
$sl(2,1)$. Integrable models with supersymmetry have been discussed also
in~\cite{Bassi:1999ua, Gohmann:1999av, Pfannmuller:1996vp, Ramos:1996my,
Maassarani:1995ac, Hinrichsen:1992nj}.
This article extends the results in~\cite{KarowskiSU(N),Zapletal:1998qw} on
matrix difference equations and a generalized version of the algebraic Bethe
ansatz for ordinary or quantum groups to this supersymmetric Lie algebra.
We recall that matrix difference equations play an important role in
mathematical physics, see
e.g.~\cite{Babujian:1998uw,Smirnov:1992,TarasovVarchenko,Frenkel:1992gx}. In
particular in the context of quantum integrable field theories they provide
solutions of the formfactor equations, which can be used to calculate
correlation functions~\cite{Berg:1979sw}. This type of matrix
difference equations can also be considered as a discrete version~\cite{Reshetikhin92} of a
Knizhnik-Zamolodchikov system~\cite{Knizhnik:1984nr}.
The conventional algebraic Bethe ansatz is used to solve the eigenvalue
problem of a hamiltonian in a way closely related to the underlying symmetry of the considered
model~(see e.g.~\cite{Faddeev:1981}). One constructs the
eigenvectors as highest
weight vectors of the corresponding irreducible representations either of the
ordinary Lie algebra or the $q$-deformed analogue, the quantum group. By this
construction one encounters `unwanted' terms. The eigenvalue equation is
fulfilled if all these `unwanted' terms vanish which leads to the so called
Bethe ansatz equations.
The `off-shell' Bethe ansatz~\cite{BabujianI, BabujianII, Reshetikhin92,
KarowskiSU(N), Zapletal:1998qw}
is used to solve matrix differential or difference equations. The solution
is represented as an integral or a sum over some lattice (integral of
Jackson type). The `unwanted'
terms arising in this case do not vanish due to the
Bethe ansatz equations but they sum up to zero under the integral or sum.
This modification of the Bethe ansatz has originally been introduced to
solve Knizhnik-Zamolodchikov
equations~\cite{BabujianI}. It has also been applied to
the quantization of dimensionally reduced gravity~\cite{Korotkin:1996vi} in
this connection.
Let $f^{1\cdots n}(\vec{x}):{\mathbb C}^n\rightarrow V^{1\cdots
n}=\bigotimes_{j=1}^{n}{\mathbb C}^3$ be a vector valued function having the following
symmetry property
\begin{eqnarray}
\label{eq:SSym}
f^{\cdots ij\cdots}(\cdots,x_i,x_j,\cdots)
=R_{ji}(x_j-x_i)f^{\cdots ji\cdots}(\cdots,x_j,x_i,\cdots),
\end{eqnarray}
where~$R$ is the $sl(2,1)$ $R$-matrix (see below).
We consider the set of matrix difference equations
\begin{eqnarray}
\label{eq:QPeriod}
f^{1\cdots n}(x_1,\cdots,x_i+\xi,\cdots,x_n)
=Q_{1\cdots n}(\vec{x}|i)f^{1\cdots n}(\vec{x})\hspace{1cm}(i=1,\cdots,n)
\end{eqnarray}
with an arbitrary shift-parameter~$\xi$ and some sort of generalized transfer
matrix~$Q_{1\cdots n}(\vec{x}|i)$ invariant under~$sl(2,1)$.
Functions satisfying~\eqref{eq:SSym} and~\eqref{eq:QPeriod} will be called
$R$-symmetric and $Q$-periodic respectively.
\section{Matrix difference equation and generalized nested Bethe ansatz}
\label{ch:MaDiffEq}
Let $V^{1\cdots n}=V_1\otimes\cdots\otimes V_n$ be the tensor product of~$n$
isomorphic vector spaces $V_i=\Span{|1\rangle,|2\rangle,|3\rangle}\cong{\mathbb C}^3$.
The states~$|1\rangle$ and~$|2\rangle$ are supposed to be bosonic
while~$|3\rangle$ is fermionic~\cite{Foerster:1993uk}.
For later convenience we also define the reduced vector spaces
$\tilde{V}_i=\Span{|2\rangle,|3\rangle}\cong {\mathbb C}^2$
and $\tilde{V}^{1\cdots m}=\tilde{V}_1\otimes\cdots\otimes \tilde{V}_m$.
Vectors in~$V^{1\cdots n}$ will be denoted by $f^{1\cdots n}\in V^{1\cdots
n}$. Analogously vectors in the reduced spaces are in addition marked with
a tilde: $\tilde{f}^{1\cdots m}\in \tilde{V}^{1\cdots m}$.
Matrices acting in~$V^{1\cdots n}$ are denoted with index subscripts
$Q_{1\cdots n}: V^{1\cdots n}\rightarrow V^{1\cdots n}$.
As usual the $R$-matrix will depend on a spectral parameter~$\theta$. This
matrix $R_{ij}(\theta):V_i\otimes V_j\rightarrow V_j\otimes V_i$ is of the
form~\cite{Bracken:1990qy}
\begin{eqnarray}
R_{ij}(\theta)=b(\theta)\Sigma_{ij}+c(\theta)P_{ij}
\end{eqnarray}
where $P_{ij}: |\alpha\rangle\otimes|\beta\rangle\mapsto|\alpha\rangle\otimes|\beta\rangle$ is the permutation operator
and
\begin{eqnarray*}
\Sigma_{ij}: |\alpha\rangle\otimes|\beta\rangle\mapsto\sigma_{\alpha\beta}|\beta\rangle\otimes|\alpha\rangle
=\left\{\begin{array}{rl}
-|\beta\rangle\otimes|\alpha\rangle &, |\alpha\rangle=|\beta\rangle=|3\rangle\\
|\beta\rangle\otimes|\alpha\rangle &, \mbox{ else}.
\end{array}\right.
\end{eqnarray*}
The statistics factor~$\sigma_{\alpha\beta}=\pm1$ takes
the fermionic character of the state $|3\rangle$ into account. It has the value~$-1$ if and
only if both states are fermionic, i.e.~$\alpha=\beta=3$. The
functions~$b(\theta)$ and~$c(\theta)$ have the form
\begin{eqnarray*}
b(\theta)=\frac{\theta}{\theta+K}\hspace{4cm}c(\theta)=\frac{K}{\theta+K}
\end{eqnarray*}
with an arbitrary constant~$K$.
For later use we define the function $w(\theta)=-b(\theta)+c(\theta)$.
It is easy to check that $R(\theta)$ is unitary and satisfies the Yang-Baxter
equation:
\begin{eqnarray}
\label{eq:UnitaritySYBE}
R_{ab}(\theta)R_{ba}(-\theta)={\mbox{\bf 1}}\quad\mbox{ and }\quad
R_{12}(\theta_{12})R_{13}(\theta_{13})R_{23}(\theta_{23})
=R_{23}(\theta_{23})R_{13}(\theta_{13})R_{12}(\theta_{12}),
\end{eqnarray}
where $\theta_{ij}=\theta_i-\theta_j$.
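Both properties in~\eqref{eq:UnitaritySYBE} are easily verified numerically; in the present convention (where $P$ acts as the identity on components and $\Sigma$ as the graded permutation) a short Python check of ours reads:
\begin{verbatim}
import numpy as np

K = 1.0                              # the arbitrary constant
b = lambda t: t / (t + K)
c = lambda t: K / (t + K)

# Sigma on C^3 x C^3: |a>|b> -> sigma_ab |b>|a>, states indexed
# 0,1,2, with sigma_ab = -1 only for the fermionic pair a = b = 2
G = np.zeros((9, 9))
for a in range(3):
    for bb in range(3):
        G[3 * bb + a, 3 * a + bb] = -1.0 if a == bb == 2 else 1.0

R = lambda t: b(t) * G + c(t) * np.eye(9)
R12 = lambda t: np.kron(R(t), np.eye(3))   # factors 1,2 of C^27
R23 = lambda t: np.kron(np.eye(3), R(t))   # factors 2,3

u, v = 0.37, -1.21
print(np.allclose(R(u) @ R(-u), np.eye(9)))         # unitarity
print(np.allclose(R23(u) @ R12(u + v) @ R23(v),     # Yang-Baxter,
                  R12(v) @ R23(u + v) @ R12(u)))    # braid form
\end{verbatim}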
Next we introduce different kinds of monodromy matrices which prove
to be useful in the following.
The monodromy matrix
\begin{eqnarray*}
T_{1\cdots n,a}(\vec{x}|u)=R_{1a}(x_1-u)\cdots R_{na}(x_n-u).
\end{eqnarray*}
is an operator $V^{1\cdots n}\otimes V_a \rightarrow V_a\otimes V^{1\cdots
n}$. The vector spaces $V^{1\cdots n}$ and $V_a$ are called quantum and
auxiliary space respectively. As usual we will consider this operator as a matrix
\begin{eqnarray*}
T_{1\cdots n,a}=\left(\begin{array}{ccc}
A & B_2 & B_3 \\
C^2 & D_2^2 & D_3^2 \\
C^3 & D_2^3 & D_3^3
\end{array}\right)
\end{eqnarray*}
over the auxiliary space with operators in the quantum space as entries.
As a consequence of~\eqref{eq:UnitaritySYBE} the monodromy
matrix fulfills the Yang-Baxter algebra relation
\begin{eqnarray}
R_{ab}(u-v)
T_{1\cdots n,b}(\vec{x}|v)T_{1\cdots n,a}(\vec{x}|u)
=T_{1\cdots n,a}(\vec{x}|u)T_{1\cdots n,b}(\vec{x}|v)
R_{ab}(u-v).
\end{eqnarray}
Following~\cite{KarowskiSU(N)} we also introduce another set of modified monodromy
matrices for~$i=1,\cdots,n$ given as
\begin{eqnarray*}
T_{1\cdots n,a}^Q(\vec{x}|i)=R_{1a}(x_1-x_i)\cdots R_{i-1a}(x_{i-1}-x_i) P_{ia}
R_{i+1a}(x_{i+1}-x_i^\prime)\cdots R_{na}(x_n-x_i^\prime),
\end{eqnarray*}
where $\vec{x}\,^\prime=\vec{x}+\xi\vec{e}_i$. In the same way as above they
should be considered as matrices in the auxiliary space.
This new type of monodromy matrix satisfies the two mixed Yang-Baxter relations
\begin{eqnarray*}
T_{1\cdots n,a}^Q(\vec{x}|i)T_{1\cdots n,b}(\vec{x}|u)R_{ab}(x_i^\prime-u)
&=&R_{ab}(x_i-u)T_{1\cdots n,b}(\vec{x\,}^\prime|u)
T_{1\cdots n,a}^Q(\vec{x}|i)\\
T_{1\cdots n,a}(\vec{x\,}^\prime|u)T_{1\cdots n,a}^Q(\vec{x}|i)
R_{ab}(u-x_i^\prime)
&=&R_{ab}(u-x_i)T_{1\cdots n,b}^Q(\vec{x}|i)T_{1\cdots n,a}(\vec{x}|u).
\end{eqnarray*}
For~$i=n$ the modified monodromy matrix is the same as the ordinary one.
We want to encode the fermionic nature of the state~$|3\rangle$ in such a
way that $sl(2,1)$ appears naturally.
To do so we define an additional monodromy matrix
\begin{eqnarray}
[{T^\star}_{1\cdots n,a}(\vec{x}|u)]_{\alpha,\{\mu\}}^{\beta,\{\nu\}}
=\sigma_{\alpha\beta}\sigma_{\beta\nu_1}\cdots\sigma_{\beta\nu_n}
[T_{1\cdots n,a}(\vec{x}|u)]_{\alpha,\{\mu\}}^{\beta,\{\nu\}},
\end{eqnarray}
where the quantum space indices are collected in the
notation~$\{\nu\}=\nu_1,\cdots,\nu_n$.
This definition is easily extended to a modified version as before.
The shift operator is defined by
\begin{eqnarray}
\label{eq:Shiftmatrix}
Q_{1\cdots n}(\vec{x}|i)=tr_a {T^\star}_{1\cdots n,a}^Q(\vec{x}|i)=A_{1\cdots
n,a}^Q(\vec{x}|i)+\sum_{\alpha=2,3} \left[{D^\star}_{1\cdots n,a}^Q(\vec{x}|i)\right]_\alpha^\alpha.
\end{eqnarray}
which is obviously closely related to the usual transfer matrices.
For all operators just defined there also exists a counterpart in the reduced
spaces denoted by a tilde.
Using the Yang-Baxter relations given above we derive in a straightforward
way the commutation relations
\begin{eqnarray}
\label{eq:VTRone}
B_i(\vec{x}|u_2)B_j(\vec{x}|u_1)
&=&B_{j^\prime}(\vec{x}|u_1)B_{i^\prime}(\vec{x}|u_2)
R_{ji}^{i^\prime j^\prime}(u_1-u_2)\\
\label{eq:VTRthree}
A(\vec{x}|u_2)B_i(\vec{x}|u_1)
&=&\frac{1}{b(u_2-u_1)}B_i(\vec{x}|u_1)A(\vec{x}|u_2)
-\frac{c(u_2-u_1)}{b(u_2-u_1)}B_i(\vec{x}|u_2)A(\vec{x}|u_1)\\
\label{eq:VTRfour}
A^Q(\vec{x}|i)B_j(\vec{x}|u)
&=&\frac{1}{b(x_i^\prime-u)}B_j(\vec{x\,}^\prime|u)A^Q(\vec{x}|i)
-\frac{c(x_i^\prime-u)}{b(x_i^\prime-u)}B_j^Q(\vec{x}|i)A(\vec{x}|u)\\
\label{eq:VTRfive}
{D^\star}_j^i(\vec{x}|u_2)B_k(\vec{x}|u_1)
&=&\frac{\sigma_{ik}}{b(u_1-u_2)}B_{k^\prime}(\vec{x}|u_1){D^\star}_{j^\prime}^i(\vec{x}|u_2)
R_{kj}^{j^\prime k^\prime}(u_1-u_2)\\
&&\qquad\qquad\qquad\qquad-\sigma_{ik}\frac{c(u_1-u_2)}{b(u_1-u_2)}B_j(\vec{x}|u_2){D^\star}_k^i(\vec{x}|u_1)\nonumber\\
\label{eq:VTRsix}
{D^\star}_k^{Qj}(\vec{x}|i)B_l(\vec{x}|u)
&=&\sigma_{jl}\frac{1}{b(u-x_i)}B_{l^\prime}(\vec{x\,}^\prime|u)
{D^\star}_{k^\prime}^{Qj}(\vec{x}|i)R_{lk}^{k^\prime l^\prime}(u-x_i^\prime)\\
&&\qquad\qquad\qquad\qquad-\sigma_{jl}\frac{c(u-x_i)}{b(u-x_i)}B_k^Q(\vec{x}|i){D^\star}_l^j(\vec{x}|u)\nonumber
\end{eqnarray}
The first terms on the right hand side of each of these equations are called
`wanted', the others `unwanted'. These relations are slightly different from
those appearing in the $SU(N)$-case~\cite{KarowskiSU(N)} due to the
statistics factors~$\sigma$ in the last two equations.
To solve the system of~\eqref{eq:SSym} and the matrix difference
equations~\eqref{eq:QPeriod} we use the nested so called `off shell' Bethe
ansatz~\cite{BabujianI,BabujianII} with two levels. The first level is quite analogous
to the constructions in~\cite{KarowskiSU(N),Zapletal:1998qw}. Due to the
fermionic statistics of the state~$|3\rangle$, which ensures supersymmetry, the second level
is different. This problem is solved in the present paper. We write the vector valued
function~$f^{1\cdots n}:{\mathbb C}^n\rightarrow V^{1\cdots n}$ as a sum of
first level Bethe ansatz vectors
\begin{eqnarray}
\label{eq:BA.1}
f^{1\cdots n}(\vec{x})
=\sum_{\vec{u}} B_{\beta_m}(\vec{x}|u_m)\cdots B_{\beta_1}(\vec{x}|u_1)
\Omega^{1\cdots n}[g^{1\cdots m}(\vec{x}|\vec{u})]^{\beta_1\cdots \beta_m}.
\end{eqnarray}
The sum is extended over $\vec{u}\in\vec{u}_0-\xi{\mathbb Z}^m\subset{\mathbb C}^m$
(`integral of Jackson type', $\vec{u}_0\in{\mathbb C}^m$ arbitrary).
The reference state $\Omega^{1\cdots n}$ is given by $\Omega^{1\cdots n}=|1\rangle^{\otimes n}$
and the auxiliary function
$g^{1\cdots m}:{\mathbb C}^n\times{\mathbb C}^m\rightarrow \tilde{V}^{1\cdots m}$
is defined by
$g^{1\cdots m}(\vec{x}|\vec{u})
=\eta(\vec{x}|\vec{u})\tilde{f}^{1\cdots m}(\vec{u})$
with $\eta:{\mathbb C}^n\times{\mathbb C}^m\rightarrow{\mathbb C}$
\begin{eqnarray*}
\eta(\vec{x}|\vec{u})
=\prod_{i=1}^n\prod_{j=1}^m\psi(x_i-u_j)\prod_{1\leq i<j\leq m}\tau(u_i-u_j),
\end{eqnarray*}
where the scalar functions $\psi:{\mathbb C}\rightarrow{\mathbb C}$ and
$\tau:{\mathbb C}\rightarrow{\mathbb C}$ satisfy
\begin{eqnarray}
\label{eq:fktglone}
b(x)\psi(x)=\psi(x-\xi)\hspace{3cm}
\frac{\tau(x)}{b(x)}=\frac{\tau(x-\xi)}{b(\xi-x)}.
\end{eqnarray}
Possible solutions are
\begin{eqnarray}
\label{eq:solutions}
\psi(x)=\frac{\Gamma(1+\frac{K}{\xi}+\frac{x}{\xi})}{\Gamma(1+\frac{x}{\xi})}
\qquad\mbox{ and }\qquad
\tau(x)=x\frac{\Gamma(\frac{x}{\xi}-\frac{K}{\xi})}{\Gamma(1+\frac{x}{\xi}+\frac{K}{\xi})}.
\end{eqnarray}
They may be multiplied by an arbitrary function which is periodic in~$\xi$.
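Both functional equations in~\eqref{eq:fktglone} can be checked directly for the solutions~\eqref{eq:solutions}; a small numerical verification of ours:
\begin{verbatim}
from scipy.special import gamma

K, xi = 1.0, 0.7                     # arbitrary test values

b   = lambda t: t / (t + K)
psi = lambda x: gamma(1 + K/xi + x/xi) / gamma(1 + x/xi)
tau = lambda x: x * gamma(x/xi - K/xi) / gamma(1 + x/xi + K/xi)

x = 1.9                              # away from the poles of Gamma
print(abs(b(x) * psi(x) - psi(x - xi)) < 1e-10)
print(abs(tau(x) / b(x) - tau(x - xi) / b(xi - x)) < 1e-10)
\end{verbatim}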
We prove that $f^{1\cdots n}(\vec{x})$ is $R$-symmetric and $Q$-periodic if
$\tilde{f}^{1\cdots m}(\vec{u})$ is $\tilde{R}$-symmetric and $\tilde{Q}$-periodic.
To compute the action of the shift operator~$Q$ on our Bethe ansatz
function~$f^{1\cdots n}(\vec{x})$ we start from~\eqref{eq:Shiftmatrix} and commute the
operators~$A^Q$ and~${D^\star}^Q$ through all the $B$-operators to the right, where
they act on the reference states according to
$A^Q(\vec{x}|m)\Omega^{1\cdots n}=\Omega^{1\cdots n}$ and
$\left[{D^\star}^Q(\vec{x}|m)\right]_{\alpha}^{\alpha^\prime}\Omega^{1\cdots n}=0$.
If~$\tilde{f}^{1\cdots m}(\vec{u})$ is $\tilde{R}$-symmetric one obtains the
representations ($\vec{x\,}^\prime=\vec{x}+\xi\vec{e}_n$)
\begin{multline*}
\shoveleft{A^Q(\vec{x}|n)f^{1\cdots n}(\vec{x})
=f^{1\cdots n}(\vec{x\,}^\prime)
+\sum_{\vec{u}}\sum_{i=1}^m
\Lambda_A^{(i)}(\vec{x}|\vec{u})B_{\beta_i}^Q(\vec{x}|n)}\hspace{5.5cm}\\
\qquad\times B_{\beta_m}(\vec{x}|u_m)
\cdots\widehat{B_{\beta_i}(\vec{x}|u_i)}\cdots B_{\beta_1}(\vec{x}|u_1)
\Omega^{1\cdots n}
\eta(\vec{x}|\vec{u})\left[\tilde{f}^{1\cdots
mi}(u_1,\cdots,u_m,u_i)\right]^{\beta_1\cdots\beta_m\beta_i},\\{}
\shoveleft{[{D^\star}^Q(\vec{x}|n)]_{\alpha}^{\alpha}f^{1\cdots n}(\vec{x})
=\sum_{\vec{u}}\sum_{i=1}^m\Lambda_D^{(i)}(\vec{x}|\vec{u})B_{\beta_i}^Q(\vec{x}|n)B_{\beta_m}(\vec{x}|u_m)
\cdots\widehat{B_{\beta_i}(\vec{x}|u_i)}\cdots B_{\beta_1}(\vec{x}|u_1)}\\
\times\Omega^{1\cdots n}\eta(\vec{x}|\vec{u})\left[\tilde{Q}(u_1,\cdots,u_m,u_i|i)\tilde{f}^{1\cdots mi}(u_1,\cdots,u_m,u_i)\right]^{\beta_1\cdots\beta_m\beta_i}.
\end{multline*}
The hat denotes a factor which is omitted, and $\tilde{Q}(u_1,\cdots,u_m,u_i|i)$
is the analogue of the shift operator~\eqref{eq:Shiftmatrix} in the dimensionally
reduced space $\tilde{V}^{1\cdots mi}$. The `wanted' contributions already
ensure the validity of~\eqref{eq:QPeriod}, so
`unwanted' ones have to sum up to zero.
The representation can be obtained
as follows: The expression in front of the sum is a consequence of the `wanted'
parts of the commutation relations~\eqref{eq:VTRone}-\eqref{eq:VTRsix}.
To determine the functions $\Lambda_A^{(i)}(\vec{x}|\vec{u})$ and
$\Lambda_D^{(i)}(\vec{x}|\vec{u})$ one has to perform the following steps:
First move $B_{\beta_i}(\vec{x}|u_i)$ to the front of the $B$-operators according
to~\eqref{eq:VTRone} and use the $\tilde{R}$-symmetry of $\tilde{f}(\vec{u})$ to
absorb them. Then consider the `unwanted' contributions of~\eqref{eq:VTRfour}
and~\eqref{eq:VTRsix} respectively. Now commute the resulting
operators~$A(\vec{x}|u_i)$ and~${D^\star}(\vec{x}|u_i)$ to the right and only take
the `wanted' contributions into account. This gives a product of
$R$-matrices and statistics factors in the case of~${D^\star}$. The action on the reference state
is given by $A(\vec{x}|u_i)\Omega^{1\cdots n}=\Omega^{1\cdots n}$ and
$[{D^\star}(\vec{x}|u_i)]_{\alpha}^{\alpha^\prime}\Omega^{1\cdots n}=
\delta_{\alpha}^{\alpha^\prime}\sigma_{\alpha\alpha^\prime}\prod_{j=1}^{n}b(x_j-u_i)
\Omega^{1\cdots n}$ respectively. By an `ice rule' for fermions
one can regroup the statistics factors. Together with the $R$-matrices this
gives the reduced shift operator~$\tilde{Q}(u_1,\cdots,u_m,u_i|i)$.
Finally one obtains
\begin{eqnarray*}
\Lambda_A^{(i)}(\vec{x}|\vec{u})
&=&-\frac{c(x_n^\prime-u_i)}{b(x_n^\prime-u_i)}\prod_{l\neq i}\frac{1}{b(u_i-u_l)},\\
\Lambda_D^{(i)}(\vec{x}|\vec{u})
&=&-\frac{c(u_i-x_n)}{b(u_i-x_n)}\prod_{l\neq i}\frac{1}{b(u_l-u_i)}\prod_{l=1}^nb(x_l-u_i).
\end{eqnarray*}
As already mentioned we have to show that the contributions of the sums cancel.
Following the arguments of~\cite{KarowskiSU(N)}, i.e. using~\eqref{eq:fktglone} and the relation
$\frac{c(x)}{b(x)}=-\frac{c(-x)}{b(-x)}$, one can indeed
show that these `unwanted' contributions vanish after the summation,
if~$\tilde{f}^{1\cdots m}(\vec{u})$ is $\tilde{Q}$-periodic. The
symmetry of~$\eta(\vec{x}|\vec{u})$ in the arguments $x_1,\cdots,x_n$
combined with
$R_{ij}(\theta)\Omega^{1\cdots n}=\Omega^{1\cdots n}$
implies the $R$-symmetry of~$f^{1\cdots n}(\vec{x})$.
The next step consists in the construction of a function~$\tilde{f}^{1\cdots
m}(\vec{u})$ which is $\tilde{R}$-symmetric and $\tilde{Q}$-periodic.
As above we write
\begin{eqnarray}
\label{eq:BA.2}
\tilde{f}^{1\cdots m}(\vec{u})
=\sum_{\vec{v}} \tilde{B}(\vec{u}|v_k)\cdots \tilde{B}(\vec{u}|v_1)
\tilde{\Omega}^{1\cdots m}\tilde{g}(\vec{u}|\vec{v}).
\end{eqnarray}
The sum is extended over $\vec{v}\in\vec{v}_0-\xi{\mathbb Z}^k\subset{\mathbb C}^k$
($\vec{v}_0\in{\mathbb C}^k$ arbitrary). Here the reference state is given by
$\tilde{\Omega}^{1\cdots m}=|2\rangle^{\otimes m}$
and the auxiliary function
$\tilde{g}:{\mathbb C}^m\times{\mathbb C}^k\rightarrow{\mathbb C}$ reads
\begin{eqnarray*}
\tilde{g}(\vec{u}|\vec{v})
=\prod_{i=1}^m\prod_{j=1}^k\psi(u_i-v_j)
\prod_{1\leq i<j\leq k}\tilde{\tau}(v_i-v_j),
\end{eqnarray*}
where $\psi:{\mathbb C}\rightarrow{\mathbb C}$
and $\tilde{\tau}:{\mathbb C}\rightarrow{\mathbb C}$ satisfy
\begin{eqnarray}
\label{eq:fktgltwo}
b(x)\psi(x)=\psi(x-\xi)\hspace{3cm}
\frac{\tilde{\tau}(x)}{b(-x)}=\frac{\tilde{\tau}(x-\xi)}{b(\xi-x)}.
\end{eqnarray}
Possible solutions of~\eqref{eq:fktgltwo} are given by~\eqref{eq:solutions} and
$\tilde{\tau}(x)=x/(x-K)$. Again both functions may be multiplied by an
arbitrary function periodic in~$\xi$. Note that the supersymmetry has modified
the last equation compared to~\eqref{eq:fktglone}.
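Indeed, with $b(x)=x/(x+K)$ as inferred above, the check is elementary (a verification we add for completeness): $\tilde{\tau}(x)/b(-x)=\frac{x}{x-K}\cdot\frac{x-K}{x}=1$ and $\tilde{\tau}(x-\xi)/b(\xi-x)=\frac{x-\xi}{x-\xi-K}\cdot\frac{\xi-x+K}{\xi-x}=1$, so both sides of the second equation in~\eqref{eq:fktgltwo} are equal to one.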
The Yang-Baxter relations imply the commutation relations
\begin{eqnarray*}
\tilde{B}(\vec{u}|v_2)\tilde{B}(\vec{u}|v_1)
&=&w(v_1-v_2)\tilde{B}(\vec{u}|v_1)\tilde{B}(\vec{u}|v_2)\\
\tilde{A}(\vec{u}|v_2)\tilde{B}(\vec{u}|v_1)
&=&\frac{1}{b(v_2-v_1)}\tilde{B}(\vec{u}|v_1)\tilde{A}(\vec{u}|v_2)
-\frac{c(v_2-v_1)}{b(v_2-v_1)}\tilde{B}(\vec{u}|v_2)\tilde{A}(\vec{u}|v_1)\\
\tilde{A}^Q(\vec{u}|i)\tilde{B}(\vec{u}|v)
&=&\frac{1}{b(u_i^\prime-v)}
\tilde{B}(\vec{u\,}^\prime|v)\tilde{A}^Q(\vec{u}|i)
-\frac{c(u_i^\prime-v)}{b(u_i^\prime-v)}
\tilde{B}^Q(\vec{u}|i)\tilde{A}(\vec{u}|v)\\
\tilde{{D^\star}}(\vec{u}|v_2)\tilde{B}(\vec{u}|v_1)
&=&-\frac{w(v_1-v_2)}{b(v_1-v_2)}\tilde{B}(\vec{u}|v_1)\tilde{{D^\star}}(\vec{u}|v_2)
+\frac{c(v_1-v_2)}{b(v_1-v_2)}\tilde{B}(\vec{u}|v_2)\tilde{{D^\star}}(\vec{u}|v_1)\\
\tilde{{D^\star}}^Q(\vec{u}|i)\tilde{B}(\vec{u}|v)
&=&-\frac{w(v-u_i^\prime)}{b(v-u_i)}\tilde{B}(\vec{u\,}^\prime|v)
\tilde{{D^\star}}^Q(\vec{u}|i)
+\frac{c(v-u_i)}{b(v-u_i)}\tilde{B}^Q(\vec{u}|i)\tilde{{D^\star}}(\vec{u}|v)
\end{eqnarray*}
Due to supersymmetry these relations are structurally different from those
for the ordinary group case~\cite{KarowskiSU(N)}.
As a consequence the function~$\tilde{\tau}$ has to satisfy a slightly modified
functional equation~\eqref{eq:fktgltwo} compared to~$\tau$ in~\eqref{eq:fktglone}.
Next we act with the shift operator~$\tilde{Q}^{1\cdots m}(\vec{u}|i)=\tilde{A}^Q(\vec{u}|i)+\tilde{{D^\star}}^Q(\vec{u}|i)$ on the Bethe
ansatz vector~$\tilde{f}^{1\cdots m}(\vec{u})$ and repeat the arguments
given above. The equations~\eqref{eq:QPeriod} are equivalent for all
$i=1,\cdots,m$, so we will restrict ourselves to~$i=m$.
Using the relations $\tilde{A}^Q(\vec{u}|m)\tilde{\Omega}^{1\cdots m}=\tilde{\Omega}^{1\cdots m}$
and $\tilde{{D^\star}}^Q(\vec{u}|m)\tilde{\Omega}^{1\cdots m}=0$ we get the
representations ($\vec{u\,}^\prime=\vec{u}+\xi\vec{e}_m$)
\begin{eqnarray}
\tilde{A}^Q(\vec{u}|m)\tilde{f}^{1\cdots m}(\vec{u})
&=&\tilde{f}^{1\cdots m}(\vec{u\,}^\prime)\nonumber\\
&&\hspace{-2.5cm}+\sum_{\vec{v}}\sum_{i=1}^k \tilde{\Lambda}_A^{(i)}(\vec{u}|\vec{v})\tilde{B}^Q(\vec{u}|m)\tilde{B}(\vec{u}|v_k)
\cdots\widehat{\tilde{B}(\vec{u}|v_i)}\cdots\tilde{B}(\vec{u}|v_1)\tilde{\Omega}\tilde{g}(\vec{u}|\vec{v})\\
\tilde{{D^\star}}^Q(\vec{u}|m)\tilde{f}^{1\cdots m}(\vec{u})\nonumber\\
&&\hspace{-2.5cm}=\sum_{\vec{v}}\sum_{i=1}^k \tilde{\Lambda}_D^{(i)}(\vec{u}|\vec{v})\tilde{B}^Q(\vec{u}|m)\tilde{B}(\vec{u}|v_k)
\cdots\widehat{\tilde{B}(\vec{u}|v_i)}\cdots\tilde{B}(\vec{u}|v_1)\tilde{\Omega}\tilde{g}(\vec{u}|\vec{v}).
\end{eqnarray}
By similar arguments as before one can show that the
functions $\tilde{\Lambda}_A^{(i)}(\vec{u}|\vec{v})$ and
$\tilde{\Lambda}_D^{(i)}(\vec{u}|\vec{v})$ are given by
\begin{eqnarray*}
\tilde{\Lambda}_A^{(i)}(\vec{u}|\vec{v})
&=&-\frac{c(u_m^\prime-v_i)}{b(u_m^\prime-v_i)}\prod_{l<i}\frac{1}{b(v_i-v_l)}\prod_{l>i}\frac{-1}{b(v_l-v_i)}\\
\tilde{\Lambda}_D^{(i)}(\vec{u}|\vec{v})
&=&\frac{c(v_i-u_m)}{b(v_i-u_m)}\prod_{l<i}\frac{1}{b(v_i-v_l)}\prod_{l>i}
\frac{-1}{b(v_l-v_i)}\prod_{l=1}^m b(u_l-v_i).
\end{eqnarray*}
We made use of the fact that $w(\theta)w(-\theta)=1$ and
$\frac{w(\theta)}{b(\theta)}=\frac{-1}{b(-\theta)}$.
Again the `wanted' contributions already guarantee the validity
of~\eqref{eq:QPeriod} while a straightforward calculation
using~\eqref{eq:fktgltwo} and~$\frac{c(x)}{b(x)}=-\frac{c(-x)}{b(-x)}$ shows
that the `unwanted' contributions sum up to zero.
The $\tilde{R}$-symmetry is implied by the symmetry of
$\tilde{g}(\vec{u}|\vec{v})$ in the variables~$u_1,\cdots,u_m$ and the
property
$\tilde{R}_{ij}(\theta)\tilde{\Omega}^{1\cdots m}=\tilde{\Omega}^{1\cdots m}$.
Finally we have proved that~$f^{1\cdots n}$ given by the Bethe
ansatz~\eqref{eq:BA.1} solves the combined system of
$R$-symmetry~\eqref{eq:SSym} and
the matrix difference equations~\eqref{eq:QPeriod} if analogous
relations hold for~$\tilde{f}^{1\cdots m}$. It was shown that
solutions to the dimensionally reduced
problem can be constructed explicitly by means of the Bethe
ansatz~\eqref{eq:BA.2}.
\section{Highest-weight property}
\label{ch:GroupTheory}
We now investigate the $sl(2,1)$-properties of the shift operator
$Q^{1\cdots n}$ and of the solutions~\eqref{eq:BA.1} constructed above.
The behaviour $R_{ab}(x)=\Sigma_{ab}+\frac{K}{x}P_{ab}+O(x^{-2})$
for $x\rightarrow\infty$ implies the asymptotic expansion
\begin{eqnarray*}
[T_{1\cdots n,a}(\vec{x}|u)]_{\alpha,\{\mu\}}^{\beta,\{\nu\}}
&=&[\Sigma_{1a}\cdots\Sigma_{na}
+\frac{K}{u}\sum_{j=1}^n\Sigma_{1a}\cdots P_{ja}\cdots\Sigma_{na}]_{\alpha,\{\mu\}}^{\beta,\{\nu\}}
+O(u^{-2})\\
&=&\sigma_{\alpha,\{\mu\}}\delta_\alpha^\beta\delta_{\mu_1}^{\nu_1}\cdots\delta_{\mu_n}^{\nu_n}
+\frac{K}{u}\sigma_{\beta\alpha}\sigma_{\beta,\{\nu\}}
M_{\alpha,\{\mu\}}^{\beta,\{\nu\}}
+O(u^{-2}).
\end{eqnarray*}
The operators $M_{\alpha,\{\mu\}}^{\beta,\{\nu\}}$ have the form
\begin{eqnarray}
\label{eq:GeneratorFormel}
M_{\alpha,\{\mu\}}^{\beta,\{\nu\}}=\sum_j\sigma_{\beta\nu_{j+1}}\cdots\sigma_{\beta\nu_{n}}\sigma_{\alpha\nu_{j+1}}\cdots\sigma_{\alpha\nu_{n}}\delta_{\mu_1}^{\nu_1}\cdots\delta_{\mu_{j-1}}^{\nu_{j-1}}\delta_{\mu_j}^{\beta}\delta_{\alpha}^{\nu_j}\delta_{\mu_{j+1}}^{\nu_{j+1}}\cdots\delta_{\mu_n}^{\nu_n}.
\end{eqnarray}
From this one derives the commutation relations
\begin{eqnarray}
\label{eq:VTRMT}
M_{\alpha}^{\alpha^\prime}{T^\star}_{\beta}^{\beta^\prime}\!(u)
-\sigma_{\alpha\beta}\sigma_{\alpha\beta^\prime}\sigma_{\alpha^\prime\beta}
\sigma_{\alpha^\prime\beta^\prime}{T^\star}_{\beta}^{\beta^\prime}\!(u) M_{\alpha}^{\alpha^\prime}
=\delta_{\beta}^{\alpha^\prime}{T^\star}_{\alpha}^{\beta^\prime}\!(u)
-\sigma_{\alpha\beta}\sigma_{\alpha\beta^\prime}\sigma_{\alpha^\prime\beta}
\sigma_{\alpha^\prime\beta^\prime}\delta_{\alpha}^{\beta^\prime}{T^\star}_{\beta}^{\alpha^\prime}\!(u),
\end{eqnarray}
where the quantum space indices have been suppressed. A further consequence, obtained in the limit~$u\rightarrow\infty$, is
\begin{eqnarray*}
M_{\alpha}^{\alpha^\prime}M_{\beta}^{\beta^\prime}
-\sigma_{\alpha\beta}\sigma_{\alpha\beta^\prime}\sigma_{\alpha^\prime\beta}
\sigma_{\alpha^\prime\beta^\prime}M_{\beta}^{\beta^\prime} M_{\alpha}^{\alpha^\prime}
=\delta_{\beta}^{\alpha^\prime}M_{\alpha}^{\beta^\prime}
-\sigma_{\alpha\beta}\sigma_{\alpha\beta^\prime}\sigma_{\alpha^\prime\beta}
\sigma_{\alpha^\prime\beta^\prime}\delta_{\alpha}^{\beta^\prime}M_{\beta}^{\alpha^\prime}.
\end{eqnarray*}
This implies that the operators $M_{\alpha}^{\alpha^\prime}$
are generators of~$sl(2,1)$ in the Cartan-Weyl basis
(see~\cite{Foerster:1993uk,Scheunert:1977wj}).
From~\eqref{eq:VTRMT} one can derive the invariance property
$[M_{\alpha}^{\alpha^\prime},Q(\vec{u}|i)]_-=0$. This means that from any
solution of~\eqref{eq:QPeriod} further solutions may be constructed by applying raising and
lowering operators of~$sl(2,1)$. The operators
$W_{\alpha}=M_{\alpha}^{\alpha}$ (no summation with respect to~$\alpha$)
satisfy the commutation relations $[W_{\alpha},W_{\beta}]_{-}=0$ and
generate the Cartan subalgebra. For~$\alpha=\beta$ the statistics factors
in~\eqref{eq:GeneratorFormel} cancel and therefore we get
\begin{eqnarray}
[W_{\alpha}]_{\{\mu\}}^{\{\nu\}}=\sum_j\delta_{\mu_1}^{\nu_1}\cdots\delta_{\mu_{j-1}}^{\nu_{j-1}}\delta_{\mu_j}^{\alpha}\delta_{\alpha}^{\nu_j}\delta_{\mu_{j+1}}^{\nu_{j+1}}\cdots\delta_{\mu_n}^{\nu_n}.
\end{eqnarray}
The highest-weight property of the Bethe ansatz functions
$M_{\alpha}^{\alpha^\prime}f^{1\cdots n}(\vec{x})=0$ for
$\alpha^\prime>\alpha$ is proven in a way
analogous to the one used in section~\ref{ch:MaDiffEq}. In other words one uses
commutation relations implied by~\eqref{eq:VTRMT}, then commutes the
matrices~$M_{\alpha}^{\alpha^\prime}$ through all $B$-operators to the right
and finally, one uses certain eigenvalue equations. Again one has `wanted' and
`unwanted' contributions and the summation guarantees the vanishing of the latter
(compare~\cite{KarowskiSU(N)}). After some calculation one obtains the
weight vector which is defined by $W_{\alpha}f(\vec{x})=w_{\alpha}f(\vec{x})$ and reads
\begin{eqnarray*}
\vec{w}=(n-m,m-k,k).
\end{eqnarray*}
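As a consistency check, summing the diagonal operators over $\alpha$ gives $\sum_\alpha W_\alpha$ equal to $n$ times the identity, so the weights always satisfy $w_1+w_2+w_3=n$; indeed $(n-m)+(m-k)+k=n$.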
The highest-weight conditions are~$w_1\geq w_2\geq-w_3$
and~$w_1,w_2,w_3\geq0$~\cite{Foerster:1993uk}.
\section{Conclusions and outlook}
\label{ch:Conclusions}
In this article we have discussed a combined system of matrix difference
equations based on the supersymmetric Lie algebra~$sl(2,1)$. Solutions
are constructed by means of a nested version of the so-called off-shell
Bethe ansatz and
shown to be of highest weight with respect to~$sl(2,1)$. Due to the
invariance of the shift operator~$Q^{1\cdots m}$ under
the generators of~$sl(2,1)$ it is possible to construct and classify
further solutions by purely group theoretic considerations.
It would be interesting to see whether there is a quantum integrable
(relativistic) field theory associated to the supersymmetric t-J model.
In that case the methods presented here could be used to determine the
corresponding correlation functions. In this context the extension of our
results to the $q$-deformed case~$U_q[sl(2,1)]$ would also be of
interest~\cite{Foerster:1993fp,Foerster2}. Recently, an integrable quantum
field theory based on the $osp(2,2)$ graded Lie algebra, which is isomorphic
to $sl(2,1)$, has been discussed~\cite{Bassi:1999ua}.
\vspace{1cm}
{\!\!\!\bf Acknowledgements:}
The author would like to thank M. Karowski, R. Schrader and in particular
A. Zapletal for numerous helpful and stimulating discussions as well as
Z. Maassarani for pointing out references~\cite{Bassi:1999ua, Ramos:1996my,
Maassarani:1995ac} to him.
He is also grateful to Studienstiftung des deutschen Volkes for support.
The work was partially supported by DFG, Sonderforschungsbereich 288
`Differentialgeometrie und Quantenphysik'.
\bibliographystyle{utphys}
The growth of dark matter halos and galaxies can be most accurately computed using numerical simulations. Understanding the physical origin of environmental quenching \citep[e.g.][]{kauffmann_EnvironmentalDependenceRelations_2004,peng_MassEnvironmentDrivers_2010}, intrinsic alignments \citep[e.g.][]{tempel_EvidenceSpinAlignment_2013,chisari_IntrinsicAlignmentsGalaxies_2015} or colour gradients in the cosmic web \citep{laigle_COSMOS2015PhotometricRedshifts_2018,kraljic_GalaxyEvolutionMetric_2018} are some of the most fundamental open problems in galaxy formation.
However, attaining a physical understanding of these effects of cosmological environment on individual galaxies is complicated by the wide variety of possible configurations that are generated by the Gaussian random initial conditions (ICs).
Currently, the main approach to disentangling the impact of environmental factors on galaxy formation is statistical in nature \citep{aubert_OriginImplicationsDark_2004,danovich_CoplanarStreamsPancakes_2012,codis_ConnectingCosmicWeb_2012,kraljic_GalaxiesFlowingOriented_2019,martizzi_BaryonsCosmicWeb_2020}. Analytic models can provide hypotheses for the causal relationships between ICs and final halos \citep[e.g.][]{press_FormationGalaxiesClusters_1974,sheth_EllipsoidalCollapseImproved_2001,hahn_TidalEffectsEnvironment_2009,codis_SpinAlignmentsCosmic_2015,musso_HowDoesCosmic_2018} but it is difficult to test these hypotheses at the level of individual halos \citep{borzyszkowski_ZOMGHowCosmic_2017,LucieSmith19}.
In this work, we extend the `genetic modification' (GM) technique \citep{roth_GeneticallyModifiedhaloes_2016}, which is designed specifically to construct controlled experiments in cosmological galaxy and halo formation. Previously, GM has been used to control the mass, merger history \citep{Pontzen17,rey_QuadraticGeneticModifications_2018} and angular momentum \citep{cadiou_AngularMomentumEvolution_2021} of individual objects. Our extension aims to manipulate instead the large-scale environment, while leaving the density structure of a target object's Lagrangian patch untouched.
We extend the code \textsc{genetIC}\xspace{} \citep{stopyra_GenetICNewInitial_2020}, to embed the ICs that will eventually collapse into a halo into new environments. This can be seen as a `splicing' operation, combining two Gaussian random fields into a single realisation. We apply this technique to investigate how the mass and concentration of halos in dark matter simulations are affected by environment.
The paper is structured as follows: we first present qualitatively the splicing method and the set of numerical simulations used throughout the paper in \cref{sec:qualitative-presentation-splicing}.
We then present their analysis in \cref{sec:results}.
Finally, we summarise and discuss our findings in \cref{sec:discussion-conclusion}.
A more detailed mathematical derivation of the splicing method can be found in \cref{sec:splicing-mathematical-derivation}.
\section{Methods}
\label{sec:qualitative-presentation-splicing}
\begin{figure}
%
%
\includegraphics[width=\columnwidth]{figures/splicing-1D_v2_noshade}
\caption{
%
%
%
%
Illustration of the splicing procedure applied to one dimensional initial conditions. We draw a field $a$ (in blue) and another independent field $b$ (in red).
We obtain the new initial conditions (in black) by `splicing' a given region of $a$ into $b$.
The spliced field has the value of $a$ in the spliced region and rapidly converges to the value of $b$ outside it, while remaining maximally consistent with the Gaussian random field statistics.
}\label{fig:splicing-1D}
\end{figure}
In this section, we first present the `splicing' technique;
a more formal derivation can be found in \cref{sec:splicing-mathematical-derivation}. We will then discuss how it has been applied to produce a suite of simulations for this first study.
The splicing operation is applied to the linear initial conditions, which we generate at $z=100$. We start from two Gaussian random fields representing the overdensity of independent realisations, denoted $a$ and $b$, and select an arbitrary region $\Gamma$.
To obtain the results in this paper, we choose $\Gamma$ to be the Lagrangian region of a $z=0$ halo (i.e.\ the region that its constituent particles occupied at $z=100$).
The splicing operation finds a new field $f$ which satisfies $f(x)=a(x)$ inside $\Gamma$, but which closely approximates $b(x)$ elsewhere in the simulation volume. It is not possible to set $f(x)=b(x)$ outside $\Gamma$ because this would cause discontinuities on the boundary; such discontinuities are incompatible with the assumption of a Gaussian random field. Instead, we minimise the $\chi^2$ of the field difference $f(x) - b(x)$. This approach has been motivated at length by~\cite{roth_GeneticallyModifiedhaloes_2016} and~\cite{rey_QuadraticGeneticModifications_2018}, and leads to fields that are maximally likely in the Gaussian random ensemble under the constraints.
Given the spliced density, we then use the Zel’dovich approximation to generate a corresponding set of particles with new positions and velocities.
These are used as initial conditions for a new $N$-body simulation.
The algorithm described above is equivalent to altering the field $b(x)$ with a list of modifications specifying the new value of $f(x)$ at every point $x_i$ in $\Gamma$. However, applying the existing GM algorithm to this problem becomes quickly impractical as the number of points in $\Gamma$ increases, requiring $\mathcal{O}\left(N^d \times N_\mathrm{pt}\right)$ memory, where $N$ is the number of cells in each direction, $d$ is the number of dimensions and $N_\mathrm{pt}$ is the number of constrained points.
To circumvent this problem, we instead solve the $\chi^2$ difference minimisation iteratively using a gradient descent method.
We have implemented the method within the code \textsc{genetIC}\xspace{} \citep{stopyra_GenetICNewInitial_2020} in v1.3 \citep{andrew_pontzen_2021_5079937}.
Further details can be found in \cref{sec:splicing-mathematical-derivation}.
A 1D example of a spliced Gaussian random field is illustrated in \cref{fig:splicing-1D}; the splicing region $\Gamma$ is indicated by grey shading. The independent fields $a$ and $b$ are shown in the top two panels; the spliced field $f$ is shown in the bottom panel (solid line) along with the relevant portions of the original fields for comparison (dotted lines). The spliced field $f$ can be seen to obey our requirements: it traces $a$ perfectly inside $\Gamma$; is continuous on the boundary of $\Gamma$; and closely approximates $b$ at large distances from $\Gamma$. The rate at which $f$ converges to $b$ depends both on the correlation function (or equivalently the power spectrum) and on the difference between fields $a$ and $b$ around the splicing region boundary. In this test, the reduced $\chi^2$ of realisations $a$, $b$ and $f$ are $1.00$, $1.03$ and $0.99$ respectively (with \num[group-separator={,},group-minimum-digits=3]{1499} degrees of freedom), indicating that $f$ is a likely draw from the underlying distribution despite being constructed artificially.
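To make the construction concrete, the following self-contained sketch (our own toy code; the seeds, region and $P(k)\propto k^{-2}$ spectrum are arbitrary choices, and \textsc{genetIC}\xspace{} itself uses the iterative solver described above rather than this direct inversion) reproduces a 1D splicing of the kind shown in \cref{fig:splicing-1D}:
\begin{verbatim}
import numpy as np

N = 512
k = np.fft.rfftfreq(N, d=1.0/N)            # integer wavenumbers 0..N/2
Pk = np.where(k > 0, k**-2.0, 0.0)         # assumed toy power spectrum

def draw(seed):                            # periodic Gaussian realisation
    rng = np.random.default_rng(seed)
    m = rng.normal(size=k.size) + 1j*rng.normal(size=k.size)
    return np.fft.irfft(m*np.sqrt(Pk/2.0), n=N)

a, b = draw(1), draw(2)

idx = np.arange(N)
xi = np.fft.irfft(Pk, n=N)                 # correlation function (up to a
C = xi[(idx[:, None] - idx[None, :]) % N]  # norm. that cancels below)

G = (idx >= 200) & (idx < 312)             # splicing region Gamma
# minimise (f-b)^T C^{-1} (f-b) subject to f = a on Gamma; the
# Lagrange-multiplier solution is f = b + C[:,G] C[G,G]^{-1} (a-b)_G
lam = np.linalg.solve(C[np.ix_(G, G)], (a - b)[G])
f = b + C[:, G] @ lam
assert np.allclose(f[G], a[G])             # matches a exactly inside Gamma
\end{verbatim}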
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{figures/illustration_splicing/illustration_splicing_with_inset7_smaller.pdf}
\caption{Slices of the dark matter density field evolved from an unmodified set of ICs (top row) and corresponding spliced ICs (bottom row).
Regions evolving from the original ICs are coloured in blue; the new external region is coloured in red.
The sphere (dashed lines) is tidally distorted over cosmic time, leading to differences between the two simulations in terms of the shape of the boundary.
Structures in the spliced region (bottom row, in blue) can be mapped on to their counterparts in the original simulation (top row).
Conversely, outside this region, the matter density fields in the two simulations bear no resemblance to each other.
}\label{fig:illustration_splicing}
\end{figure}
Having shown how splicing works in a 1D example, we next illustrate in \cref{fig:illustration_splicing} the cosmological evolution of a 3D spliced field.
The top left panel shows our reference ICs at redshift $z=100$; we use a $256^3$ grid in a domain of size \SI{100}{Mpc\per\hred} for a mass resolution of $M_\mathrm{DM} = \SI{7.7e9}{\Msun}$.
The transfer function is computed using \textsc{Camb} \citep{lewis_EfficientComputationCMB_2000} and cosmological parameters consistent with the values of~\cite{planckcollaboration_Planck2018Results_2018}.
The initial conditions are then evolved using \textsc{Ramses}\xspace{} \citep{teyssier_CosmologicalHydrodynamicsAdaptive_2002a}, as illustrated in the top row. Gravity is solved using a particle-mesh approach on an adaptive mesh.
We allow the mesh to be refined wherever it contains more than 8 dark matter particles.
The effective minimal force resolution reached by the simulation is \SI{9}{kpc} physical.
Next, we select a region $\Gamma$ in the ICs of the reference simulation.
As an illustrative example, in \cref{fig:illustration_splicing} we splice a sphere of comoving radius \SI{25}{Mpc\per\hred}.
Finally, we draw an independent overdensity field, and splice the sphere into it to form the new ICs; the result is shown in the bottom left panel. The region which is identical to the original ICs is shown in blue, while the external region is shown in red.
We evolve the new initial conditions using an identical simulation configuration to the original.
\begin{figure}
\includegraphics[width=\columnwidth]{figures/relative_mass_histogram.pdf}
\caption{When splicing halos into a new realisation, their mass changes due to environmental effects. For our six halos, each simulated in ten different environments, we find that the change in mass is modest. The histogram shows the new mass divided by the mean over the ten realisations. Vertical lines indicate the median (dashed) and \SI{68}{\percent} credible interval (dotted), showing that the mass typically scatters only by $\pm\SI{15}{\percent}$.
}%
\label{fig:mass_ratio_distribution}
\end{figure}
The time evolution of the sphere in the reference (top row) and spliced (bottom row) simulations can now be compared.
We indicate the edge of the sphere (dashed black line), defined by the set of particles that it contains in the ICs as a function of time.
The edge of the region is deformed by non-linear structure formation, becoming less spherical with time.
This deformation depends on the long-range tidal effect of the region outside the sphere and so the shape of the patches increasingly differs between the two simulations.
The density field within the sphere is identical, by construction, in the two sets of ICs. The subsequent interior gravitational evolution is similar; but it has small differences, due to the differing large-scale gravitational forces. The impact of these changes on halos is the focus of this paper.
By contrast, far from the sphere, the ICs are unrelated between the two simulations, and structures in one simulation cannot be mapped to the other. In the case illustrated, a large cosmic void is present in the rightmost region of the unaltered simulation, while a massive filament forms in the spliced simulation.
In the remainder of the paper, we will study how the large-scale environment contributes to setting the mass and concentration of dark matter halos, as an example of the splicing technique's promise.
For this purpose, we performed a reference simulation with identical cosmological and numerical parameters to the example described above, in a domain of size \SI{50}{Mpc\per\hred}, for a mass resolution of $M_\mathrm{DM} = \SI{9.7e8}{\Msun}$ and an effective minimal force resolution of \SI{2}{kpc} physical.
From this unmodified simulation, we selected six dark matter halos with masses between\footnote{The individual masses are $\{3.2, 3.3, 5.3, 5.9, 7.2, 8.6\} \times 10^{13}\,\mathrm{M}_{\odot}$.} $10^{13}$ and $10^{14}\,\mathrm{M_\odot}$ at $z=0$.
We select all their member particles as computed by the halo finder~-- including those in any of their subhalos~-- and trace these back to the ICs to obtain the Lagrangian patch.
At this point, we have six patches that will eventually form a dark matter halo in the reference simulation. We separately spliced each of these six patches into $10$ independent realisations of the box, for a total of $60$ new ICs which were evolved to $z=0$.
We extract halo catalogues using \textsc{AdaptaHOP}\xspace{} \citep{aubert_OriginImplicationsDark_2004} and the parameters presented in~\cite{tweed_BuildingMergerTrees_2009a} with the `Most massive Substructure Method' and a minimum number of \num{200} particles per halo.
We analyse the catalogues using \textsc{Tangos}\xspace{} \citep{pontzen_TANGOSAgileNumerical_2018}, which we employ to extract the virial radius $R_\mathrm{200c}$, virial mass $M_{200c}$ and concentration parameter $c$ as we will describe below.
\begin{figure}
\includegraphics[width=\columnwidth]{figures/hist_concentration_diff_with_ref.pdf}
\caption{
The scatter in the concentration induced by placing halos in a new environment is highly significant. For each halo, we calculate the scatter around its mean concentration in the ten environments. The shaded histogram shows the resulting distribution for all six halos, which can be compared to the scatter in concentration within the population (light histogram).
At least half the scatter of the concentration can be attributed to the effect of environment.
%
}%
\label{fig:concentration_diff_distribution}
\end{figure}
\begin{figure*}
%
\includegraphics*[width=\textwidth]{figures/all_halos_mass_vs_Rspliced_stripped_down_show_lag_patch.pdf}
\caption{
The ratio of the virial mass $M_\mathrm{200c}$ of the spliced halos to the reference halo, for the six reference halos.
We highlight the simulations where the spliced region includes only the Lagrangian patch (darker symbols) and their mean $R_\mathrm{spliced}$ (black arrow); all other simulations use a splicing that has been expanded.
The mass converges to the reference mass with increasing size of the spliced region at $z=0$, $R_\mathrm{spliced}$.
}\label{fig:expansion-vs-mass}
\end{figure*}
\section{Results}%
\label{sec:results}
We now investigate the effect of environment on dark matter halos' masses. Our set of sixty simulations corresponds to ten environmental realisations around each of six central halos.
For each of the six halos, we compute the mean virial mass $\langle M_{\mathrm{200c}} \rangle$ over the ten realisations. We then calculate, for each realisation, the ratio of its mass to this mean:
\begin{equation}
r = \frac{M_\mathrm{200c}}{\langle M_\mathrm{200c} \rangle}.
\end{equation}
This yields $60$ measurements of $r$, which are plotted as a histogram in \cref{fig:mass_ratio_distribution}; the masses are scattered by $\pm\SI{15}{\percent}$ around the halo's mean value.
Next, as an example of a more detailed structural property of halos, we measure the concentration parameter using the approach presented by \citet[][]{klypin_MultiDarkSimulationsStory_2016}; see their Equations~(18)-(20).
The NFW concentration parameter, $c$, is estimated using the implicit solution to
\begin{align}
\frac{V^2_\mathrm{circ,max}}{V_\mathrm{200c}^2} &= \frac{c}{x_\mathrm{max}} \frac{f(x_\mathrm{max})}{f(c)},\\
f(x) &\equiv \ln(1+x) - \frac{1}{1+x},\\
x_\mathrm{max}& = 2.163.
\end{align}
Here $V^2_\mathrm{circ}(r) = GM(<r)/r$ is the circular velocity, $V_\mathrm{circ,max}$ is its maximum value for $0\leq r \leq R_\mathrm{200c}$ and $V_\mathrm{200c} = V_\mathrm{circ}(R_\mathrm{200c})$.
We measure the circular velocities in \num{100} logarithmically spaced radial bins between $R_\mathrm{200c}/100$ and $R_\mathrm{200c}$. We use this procedure because it is much more stable than fitting the NFW profile directly through $\chi^2$ optimisation, which suffers from significant degeneracies. We verified the numerical stability of the \citet[][]{klypin_MultiDarkSimulationsStory_2016} estimator by calculating the change in $c$ for all our halos between two adjacent timesteps, finding that its r.m.s.\ variation is only $\pm 10\%$. This is negligible compared to the population scatter that we will discuss below.
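A minimal sketch of this inversion (our own illustration; the function names are ours, and the bracket assumes a measured ratio $V_\mathrm{circ,max}/V_\mathrm{200c}>1$) reads:
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

def f_nfw(x):
    return np.log(1.0 + x) - x/(1.0 + x)

X_MAX = 2.163  # radius, in units of r_s, of the NFW velocity peak

def klypin_concentration(vmax_over_v200):
    """Invert (V_max/V_200c)^2 = (c/X_MAX) f(X_MAX)/f(c) for c."""
    target = vmax_over_v200**2
    g = lambda c: (c/X_MAX)*f_nfw(X_MAX)/f_nfw(c) - target
    # the velocity ratio equals 1 at c = X_MAX and increases
    # monotonically beyond it, so ratios above unity bracket one root
    return brentq(g, X_MAX, 2000.0)

print(klypin_concentration(1.25))   # c of about 11.7
\end{verbatim}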
For each of the six halo families, we compute $\langle c \rangle$, where the average is taken over the ten environments. We then calculate a distribution of $c - \langle c \rangle$ over all sixty simulations. To
contextualise this distribution, we create a second ensemble, consisting of all 88 halos in the original reference run in the same mass window as the six reference halos, $10^{13} < M_{\mathrm{200c}}/M_{\odot} < 10^{14}$.
We then calculate $c - \langle c \rangle$ over this entire second population.
The difference in the statistics of these two ensembles captures the effect of the environment.
The results are shown in \cref{fig:concentration_diff_distribution}.
The two distributions are non-Gaussian; in order to compare them quantitatively, we compute the \SI{68}{\percent} and \SI{90}{\percent} credible intervals.
The \SI{68}{\percent} interval for the spliced distribution (shaded histogram), characterising the impact of varying environment alone, is $[-1.0,1.8]$. By contrast, the corresponding credible interval of the concentration of the entire population is $[-3.2,2.7]$.
When using \SI{90}{\percent} credible intervals, the ranges expand to $[-2.1, 3.9]$ (spliced population) and $[-4.5, 4.0]$ (entire population).
Therefore, between half and $70\%$ of the scatter in the concentration at fixed mass can be attributed to the effect of environment.
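These fractions follow directly from the quoted interval widths: $2.8/5.9\approx0.47$ for the \SI{68}{\percent} intervals and $6.0/8.5\approx0.71$ for the \SI{90}{\percent} intervals.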
Having looked at the effect of splicing the Lagrangian patch of halos into new environments, we now consider splicing larger regions. As discussed in \cref{sec:qualitative-presentation-splicing}, the size and shape of the spliced region can be chosen arbitrarily. Physically, one would expect that as the size of the spliced region expands, the influence of the external environment must become negligible because of the finite correlation length in $\Lambda$CDM\@. Accordingly, we expect the variation between environments of any measured halo property to become small.
We performed an additional set of 211 simulations, using three outer realisations around the same six inner families, but expanding the spliced region to progressively include all matter within some distance from the Lagrangian patch. As the region is expanded, it becomes progressively more spherical. We quantify the size of the resulting spliced regions at $z=0$ by an effective radius $R_{\mathrm{spliced}}$, where
\begin{equation}
R_\mathrm{spliced}^3 = \frac{3}{4\pi} \frac{M_\mathrm{region}}{\langle \rho \rangle_\mathrm{v}},
\end{equation}
Here $M_{\mathrm{region}}$ is the total mass in the spliced region, and $\langle \rho \rangle_{\mathrm{v}}$ is the volume-weighted mean density in the region at $z=0$.
The results are shown in \cref{fig:expansion-vs-mass}. Each panel uses simulations from one of our six families, showing how the final halo mass divided by the reference (unspliced) halo mass changes as the patch is expanded. Qualitatively, the halo mass converges towards the reference value as the splice radius becomes larger, as expected. This agrees with the work of \citet{LucieSmith19} who found that the information relevant to determining the mass of halos is localised within scales that are somewhat larger than their Lagrangian patches. However, we caution that a quantitative measure of the convergence radius using our method would require a considerably larger box size than the \SI{50}{Mpc\per\hred} used in the present study.
\section{Discussion and conclusions}%
\label{sec:discussion-conclusion}
We have presented `gene splicing', a method for resimulating a chosen halo within a variety of environments, while respecting the statistical properties of $\Lambda$CDM initial conditions. This is an extension of the `genetic modification' approach \citep{roth_GeneticallyModifiedhaloes_2016,rey_QuadraticGeneticModifications_2018}, in which controlled experiments are carried out on a target halo, while the environment is minimally changed.
Manipulating Gaussian random fields in order to obtain insight into structure formation is an increasingly important tool \citep{aragon-calvo_MIPEnsembleSimulation_2016,Pontzen16InvertedICs,sawala_SettingStageStructures_2021b}. Because structure formation is localised, it is often desirable to make modifications in real space. This, however, requires a careful treatment to maintain consistency with $\Lambda$CDM correlation structure. Our approach to doing so follows in a long tradition of solving linear constrained systems in cosmological contexts \citep{bertschinger_PathIntegralMethods_1987,hoffman_ConstrainedRealizationsGaussian_1991,elsner_EfficientWienerFiltering_2013}.
As a first demonstration of the splicing method, we showed that at least half the scatter in the mass-concentration relation can be attributed to the effect of the large-scale environment.
This complements the results of~\cite{roth_GeneticallyModifiedhaloes_2016}, where it was shown that the time of collapse (encapsulated by the local density field) is not able on its own to account for the scatter in this relation. We also showed that as the size of the spliced patch increases, the variation in mass
decays towards zero, in accordance with physical expectations. However, because we ran a large number of simulations (274), we used a relatively small box of \SI{50}{Mpc\per\hred}. With the splicing approach, larger boxes would be needed to measure robustly the size of the region which contains information about halo collapse.
While the focus of this paper was on mass and concentration of dark matter halos, many properties of halos and galaxies are affected by their environment, and in future work, we will explore the underlying causal connections. For example, there is an observed correlation between galaxy quenched fraction and closeness to the nearest cosmological filament \citep{laigle_COSMOS2015PhotometricRedshifts_2018,kraljic_GalaxiesFlowingOriented_2019}, whose causal origin is as yet unclear
\citep{romano-diaz_ZOMGIIDoes_2017,musso_HowDoesCosmic_2018,song_HaloMassQuenching_2021}.
In future work, we intend to test these models by splicing a galaxy at different distances from a cosmic filament.
The splicing method may also prove useful in the study of the secondary bias problem \citep{gao_AssemblyBiasClustering_2007,dalal_HaloAssemblyBias_2008,hahn_TidalEffectsEnvironment_2009}.
In particular, it enables direct tests of how the anisotropy in the environment affects the relationship between bias and concentration \citep{paranjape_HaloAssemblyBias_2018}.
In order to fix the initial shear, the method is capable of splicing the potential field rather than the density field. This extension will be explored in future work.
\section*{Acknowledgements}
CC thanks S.~Codis and M.~Musso for stimulating discussions.
LLS thanks E.~Komatsu for useful comments on the manuscript.
This project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No.\ 818085 GMGalaxies.
HVP's work was partially supported by the research project grant `Understanding the Dynamic Universe' funded by the Knut and Alice Wallenberg Foundation under Dnr KAW 2018.0067.
AP was supported by the Royal Society.
{This work used computing equipment funded by the Research Capital Investment Fund (RCIF) provided by UKRI, and partially funded by the UCL Cosmoparticle Initiative.}
The analysis was carried out using
\textsc{Colossus} \citep{diemer_COLOSSUSPythonToolkit_2018},
\textsc{Jupyter} notebooks \citep{soton403913},
\textsc{Matplotlib} \citep{hunter2007matplotlib},
\textsc{Numpy} \citep{harris_ArrayProgrammingNumPy_2020},
\textsc{Pynbody} \citep{pontzen_PynbodyNBodySPH_2013},
\textsc{Python},
\textsc{Tangos} \citep{pontzen_TANGOSAgileNumerical_2018} and
\textsc{Yt} \citep{turk_YtMulticodeAnalysis_2011}.
\section*{Author contributions}
The main roles of the authors were, using the CRediT (Contribution Roles Taxonomy) system (\url{https://authorservices.wiley.com/author-resources/Journal-Authors/open-access/credit.html}):
{\bf CC:} conceptualisation; methodology; validation; investigation; data curation; formal analysis; writing -- original draft; visualisation.
{\bf AP:} conceptualisation; methodology; software; validation and interpretation; writing -- review and editing; funding acquisition.
{\bf HVP:} conceptualisation; validation and interpretation; writing -- review and editing.
{\bf LLS:} conceptualisation; validation and interpretation; writing -- review and editing.
\section*{Data availability}
The data underlying this article will be shared on reasonable request to the corresponding author.
\bibliographystyle{mnras}
\section{Introduction}\label{sec:intro}
Information processing, in the sense of changing the information of or retrieving information from a signal, can only be accomplished by nonlinear systems, while causal, stable, linear systems do not affect a signal's entropy rate~\cite{Shannon_TheoryOfComm},~\cite[pp.~663]{Papoulis_Probability}. As a consequence, in the past information-theoretic measures in system analysis were almost exclusively used for highly nonlinear, chaotic systems, mainly motivated by the works of Kolmogorov~\cite{Kolmogorov_Entropy1,Kolmogorov_Entropy2} and Sinai~\cite{Sinai_Entropy}. On the contrary, linear systems and relatively simple nonlinear systems (e.g., containing static nonlinearities) usually lack information-theoretic descriptions and are often characterized by second-order statistics or energetic measures (e.g., transfer function, power spectrum, signal-to-distortion ratio, mean square error between input and output, correlation functions, etc.).
In this work, we characterize the amount of information lost by passing a signal through a static nonlinearity. These systems, although simple, are by no means irrelevant in technical applications: One of the major components of the energy detector, a low-complexity receiver architecture for wireless communications, is a square-law device. Rectifiers, omnipresent in electronic systems are another example for static nonlinearities, which further constitute the nonlinear components in Wiener and Hammerstein systems. This work thus acts as a first step towards the goal of a comprehensive information-theoretic framework for more general nonlinear systems, providing an alternative to the prevailing energetic descriptions. While an analysis of information rates will be left for future work, this paper is concerned with zeroth-order entropies only.
Information loss can most generally be expressed as the difference of mutual informations,
\begin{equation}
\mutinf{\hat{X};X}-\mutinf{\hat{X};Y}\label{eq:genloss}
\end{equation}
where the random variables (RV) $X$ and $Y$ are two descriptions for another RV $\hat{X}$.
In words, the difference in~\eqref{eq:genloss} is the information lost by changing the description from $X$ to $Y$ (cf. Fig.~\ref{fig:sysmod}). This kind of information loss is of particular interest for learning/coding/clustering (e.g., word clustering~\cite{Dhillon_LossLearning}) and triggered the development of optimal representation techniques~\cite{Tishby_InformationBottleneck}. Generally, changing the description from $X$ to $Y$ does not necessarily imply that the information loss is non-negative. In the special case that $Y$ is a function of $X$ -- the case we are concerned with -- the data processing inequality states that information can only be lost~\cite[pp.~35]{Cover_Information2}. In other words, the difference in~\eqref{eq:genloss} is non-negative.
In case $\hat{X}$ is identical to the RV $X$ itself, the information loss simplifies to (cf. proof of Theorem~\ref{thm:InfoLoss})
\begin{equation}
\ent{X|Y}
\end{equation}
i.e., to the conditional entropy of $X$ given the description $Y$. This equivocation, as Shannon termed it in his seminal paper~\cite{Shannon_TheoryOfComm}, was originally used to describe the information loss for stochastic relations between the RVs $X$ and $Y$. In contrary to that, we are concerned with deterministic functions $Y=g(X)$.
To our knowledge, little work has been done in this regard. Some results are available for the capacity of nonlinear channels~\cite{Zillmann_NonlinearChannel,Abou-Faycal_CapacityNLChannels}, and recently the capacity of a noisy (possibly nonlinear and non-injective) function was analyzed~\cite{Simon_NoisyFunct,Nazer_ComputationMAC}. Considering deterministic systems, we found that Pippenger used equivocation to characterize the information loss induced by multiplying two integer numbers~\cite{Pippenger_MultLoss}, while the coarse observation of discrete stochastic processes is analyzed in~\cite{Watanabe_InfoLoss}. An analysis of how much information is lost by passing a continuous RV through a static nonlinearity cannot be found in the literature.
Aside from providing information-theoretic descriptions for the nonlinear systems mentioned above, our results also apply to different fields of signal processing and communication theory. To be specific, the information loss may prove useful for computing error bounds for the reconstruction of nonlinearly distorted signals~\cite[pp.~38]{Cover_Information2} and for capacity considerations for nonlinear channels. To give another example, according to~\cite{Simon_NoisyFunct} the capacity of a noisy function $G(\cdot)$ (a noisy implementation of the deterministic function $g(\cdot)$) is given as the maximum of
\begin{equation}
\ent{X|Y} + \mutinf{G(X);Y}
\end{equation}
over all (discrete) distributions of $X$. In this work, we give an expression for the first part of this equation, assuming that $X$ is a continuous RV.
After introducing the problem statement in Section~\ref{sec:problem}, an expression for the information loss is derived and related to the non-injectivity of the system in Section~\ref{sec:derivation}, while bounds on the information loss are presented in Section~\ref{sec:bounds}. Section~\ref{sec:examples} illustrates the theoretical results with the help of examples.
This is an extended version of a paper submitted to an IEEE conference~\cite{Geiger_Conf2011}.
\section{Problem Statement}
\label{sec:problem}
We focus our attention on a class of systems whose input-output behavior can be described by a piecewise strictly monotone function. While this excludes functions which are constant on some proper interval (e.g., limiters or quantizers, for which it can be shown that the information loss becomes infinite), many well-behaved functions can be interpreted in the light of the forthcoming Definition. Take, e.g., the function $g(x)=\cos(x)$ for some $\dom{X}=[0,L\pi)$. While the function is clearly not monotone on $\dom{X}$, it is strictly monotone for all $\dom{X}_i=[(i-1)\pi,i\pi)$, $i=1,\dots,L$. As it turns out, piecewise strict monotonicity does not rule out functions whose derivative is zero on a finite set. In addition to that, neither continuity nor differentiability are requirements imposed by Definition~\ref{def:function}, but only piecewise continuity and piecewise differentiability.
\begin{definition}\label{def:function}
Let $g{:}\ \dom{X}\to \dom{Y}$, $\dom{X},\dom{Y}\subseteq \mathbb{R}$, be a bounded, surjective, Borel measurable function which is piecewise strictly monotone on $L$ subdomains $\dom{X}_l$
\begin{eqnarray}
g(x) = \begin{cases}
g_1(x), & \text{if } x\in\dom{X}_1\\
g_2(x), &\text{if } x\in\dom{X}_2\\
\vdots\\
g_L(x), & \text{if } x\in\dom{X}_L
\end{cases}
\end{eqnarray}
where $g_l{:}\ \dom{X}_l\to\dom{Y}_l$ are bijective. We assume that the subdomains are an ordered set of disjoint, proper intervals with $\bigcup_{l=1}^L \dom{X}_l =\dom{X}$ and $x_i<x_j$ for all $x_i\in\dom{X}_i$, $x_j\in\dom{X}_j$ whenever $i<j$. We further require all $g_l(\cdot)$ to be differentiable on the interval enclosure of $\dom{X}_l$.
\end{definition}
Note that $\dom{X}$ does not need to be an interval itself. Strict monotonicity implies that the function is invertible on each interval $\dom{X}_l$, i.e., there exists an inverse function $g_l^{-1}{:}\ \dom{Y}_l \to\dom{X}_l$, where $\dom{Y}_l$ is the image of $\dom{X}_l$. However, the function $g(\cdot)$ need not be invertible on $\dom{X}$, i.e., it can be non-injective.
Equivalently, the images of the intervals, $\dom{Y}_l$, unite to $\dom{Y}$, but need not be disjoint. Let $g(\cdot)$ describe the input-output behavior of the system under consideration (see Fig.~\ref{fig:sysmod}).
\begin{figure}[t]
\centering
\begin{pspicture}[showgrid=false](1,1)(8,3.5)
\pssignal(1,2){x}{$\hat{X}$}
\pssignal(3,1){n}{$\Delta_{\hat{x}} \to 0$}
\psfblock[framesize=1 0.75](3,2){oplus}{$Q(\cdot)$}
\psfblock[framesize=1.5 1](6,2){c}{$g(\cdot)$}
\pssignal(8,2){y}{$Y$}
\ncline[style=Arrow]{n}{oplus}
\ncline[style=Arrow]{oplus}{x}
\nclist[style=Arrow]{ncline}[naput]{oplus,c $X$,y}
\ncline[style=Arrow]{c}{oplus}
\psline[style=Dash](2,2.75)(4,2.75)
\psline[style=Dash](2,2.25)(2,2.75)
\psline[style=Dash](4,2.25)(4,2.75)
\psline[style=Dash](1.75,3.25)(7.25,3.25)
\psline[style=Dash](1.75,2.25)(1.75,3.25)
\psline[style=Dash](7.25,2.25)(7.25,3.25)
\rput*(3,2.75){\scriptsize{$\mutinf{\hat{X};X}$}}
\rput*(4.5,3.25){\scriptsize{$\mutinf{\hat{X};Y}$}}
\end{pspicture}
\caption{Equivalent model for computing the information loss of an input-output system with static nonlinearity $g(\cdot)$. $Q(\cdot)$ is a quantizer with quantization step size $\Delta_{\hat{x}}$. Note that $X$ can be modeled as the sum of $\hat{X}$ and an input-dependent finite-support noise term $N$ as in~\cite{Geiger_Conf2011}.}
\label{fig:sysmod}
\end{figure}
As an input to this system consider a sequence of independent samples, identically distributed with continuous cumulative distribution function (CDF) $F_X(x)$ and probability density function (PDF) $f_X(x)$. Without loss of generality, let the support of this RV be $\dom{X}$, i.e., $f_X(x)$ is positive on $\dom{X}$ and zero elsewhere.
As an immediate consequence of this system model, the conditional PDF of the output $Y$ given the input $X$ can be written as~\cite{Chi_TransformingDirac}
\begin{equation}
f_{Y|X}(x,y) = \delta(y-g(x))= \sum_{i\in\indset{y}} \frac{\delta(x-x_i)}{\left|g'\left(x_i\right)\right|} \label{eq:condPDFY}
\end{equation}
where $\delta(\cdot)$ is Dirac's delta distribution, $\indset{y} = \{i: y\in\dom{Y}_i\}$ and $x_i=g_i^{-1}(y)$ for all $i\in\indset{y}$. In other words, $\{x_i\}$ is the preimage of $y$ or the set of roots satisfying $y=g(x)$.
The marginal PDF of $Y$ is thus given as~\cite[pp.~130]{Papoulis_Probability},~\cite{Chi_TransformingDirac}
\begin{IEEEeqnarray}{RCL}
f_Y(y) &=& \sum_{i\in\indset{y}} \frac{f_{X}(x_i)}{\left|g'\left(x_i\right)\right|}\label{eq:fy}.
\end{IEEEeqnarray}
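For example, for $g(x)=x^2$ on $\dom{X}=[-1,1]$ the preimage of $y\in(0,1]$ is $\{\pm\sqrt{y}\}$ and~\eqref{eq:fy} reduces to the familiar expression $f_Y(y)=\left(f_X(\sqrt{y})+f_X(-\sqrt{y})\right)/(2\sqrt{y})$.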
\section{Information Loss of Static Nonlinearities}
\label{sec:derivation}
In what follows we quantify the information loss induced by $g(\cdot)$, and we show that this information loss is identical to the remaining uncertainty from which interval $\dom{X}_l$ the input $x$ originated after observing the output $y$.
The main contribution of this work is thus concentrated in the following two Theorems.
\begin{lem}\label{lem:XforY}
Let $g{:}\ \dom{X}\to\dom{Y}$ and $f{:}\ \dom{Y}\to\dom{Z}$ be measurable functions. Further, let $X$ be a continuous RV on $\dom{X}$ and $Y=g(X)$. Then,
\begin{equation}
\expec{f(y)} = \int_\dom{Y} f(y) f_Y(y) dy = \int_\dom{X} f(g(x)) f_X(x) dx.
\end{equation}
\end{lem}
\begin{IEEEproof}
The proof is based on the fact that for a measurable $g(\cdot)$~\cite[pp.~142, Theorem~5-1]{Papoulis_Probability}
\begin{equation}
\expec{g(x)} = \int_\dom{X} g(x) f_X(x) dx.\label{eq:expec}
\end{equation}
Lemma~\ref{lem:XforY} follows from~\eqref{eq:expec} since for measurable $g(\cdot)$ and $f(\cdot)$ the composition $(f\circ g)(\cdot)=f(g(\cdot))$ is also measurable.
\end{IEEEproof}
\begin{thm}\label{thm:InfoLoss}
The information loss induced by a function $g(\cdot)$ satisfying Definition~\ref{def:function} is given as
\begin{equation}
\ent{X|Y}= \int_{\dom{X}} f_X(x) \log \left( \frac{\sum_{i\in\indset{g(x)}} \frac{f_{X}(x_i)}{\left|g'\left(x_i\right)\right|}}{\frac{f_X(x) }{\left|g'\left(x\right)\right|} } \right)dx.\label{eq:informationloss}
\end{equation}
\end{thm}
\begin{IEEEproof}
Using identities from~\cite{Cover_Information2} and the model in Fig.~\ref{fig:sysmod} the conditional entropy $\ent{X|Y}$ can be calculated as
\begin{IEEEeqnarray}{RCL}
\ent{X|Y}&=& \lim_{\hat{X}\rightarrow X}\left(\ent{\hat{X}|Y}-\ent{\hat{X}|X}\right)\notag\\
&=& \lim_{\hat{X}\rightarrow X}\left(\ent{\hat{X}}-\ent{\hat{X}|X}-\ent{\hat{X}}+\ent{\hat{X}|Y}\right)\notag\\
&=& \lim_{\hat{X}\rightarrow X} \left(\mutinf{\hat{X};X} - \mutinf{\hat{X};Y}\right).\label{eq:limsub}
\end{IEEEeqnarray}
where $\hat{X}$ is a discrete RV converging surely to $X$. This auxiliary RV is necessary to ensure that the (discrete) entropies we use are well-defined.
Here, motivated by the data processing inequality~\cite[pp.~34]{Cover_Information2}, we have related the conditional entropy to a difference of mutual informations, which we have introduced as the most general notion of information loss in Section~\ref{sec:intro}. In addition to that, the mutual information has the benefit that it is defined for general joint distributions~\cite[pp.~252]{Cover_Information2}, which eliminates the requirement for a discrete $\hat{X}$.
For the mutual information between $X$ and $\hat{X}$ we can write with~\cite[pp.~251]{Cover_Information2}
\begin{equation}
\mutinf{\hat{X},X} = \int_\dom{X}\int_{\dom{X}} f_{\hat{X}X}(\hat{x},x) \log\left(\frac{f_{X|\hat{X}}(\hat{x},x)}{f_X(x)}\right)dxd\hat{x}. \label{eq:mutinf3}
\end{equation}
Similarily, with Lemma~\ref{lem:XforY} (the logarithm and all PDFs are measurable) we get for $\mutinf{\hat{X},Y}$
\begin{IEEEeqnarray}{RCL}
\mutinf{\hat{X};Y} &=& \int_\dom{X} \int_\dom{X} f_{\hat{X}X}(\hat{x},x) \log \left(\frac{f_{Y|\hat{X}}(\hat{x},g(x))}{f_Y(g(x))}\right) dxd\hat{x}.\notag\\\label{eq:mutinf1}
\end{IEEEeqnarray}
After subtracting these expressions according to~\eqref{eq:limsub} we can exchange limit and integration (see Appendix). In the limit the conditional PDFs become $f_{X|\hat{X}}(\hat{x},x) = \delta(x-\hat{x})$ and $f_{Y|\hat{X}}(\cdot,\cdot)=f_{Y|X}(\cdot,\cdot)$, i.e.,~\eqref{eq:condPDFY}; using these we obtain~\eqref{eq:mutinf2} at the bottom of the next page.
\begin{figure*}[!b]
\normalsize
\hrulefill
\begin{IEEEeqnarray}{RCL}
\ent{X|Y}=\lim_{\hat{X}\rightarrow X} \left(\mutinf{\hat{X};X} - \mutinf{\hat{X};Y}\right)
&=& \int_\dom{X}\int_{\dom{X}} f_X(x) \delta(x-\hat{x}) \log \left(
\frac{\delta(x-\hat{x}) f_Y(g(x))}{f_X(x)\sum_{k\in\indset{g(x)}} \frac{\delta(\hat{x}-x_k)}{\left|g'\left(x_k\right)\right|}} \right)d\hat{x}dx \label{eq:mutinf2}
\end{IEEEeqnarray}
\end{figure*}
Since the integral over $\hat{x}$ is zero for $\hat{x}\neq x$ due to $\delta(x-\hat{x})$, only the term satisfying $x_k=x$ remains from the sum over Dirac's deltas in the denominator; this term cancels with the delta in the numerator. Integrating over $\hat{x}$ and substituting~\eqref{eq:fy} for $f_Y(\cdot)$ finally yields
\begin{equation}
\ent{X|Y}= \int_{\dom{X}} f_X(x) \log \left( \frac{\sum_{i\in\indset{g(x)}} \frac{f_{X}(x_i)}{\left|g'\left(x_i\right)\right|}}{\frac{f_X(x) }{\left|g'\left(x\right)\right|} } \right)dx\label{eq:equivoc}
\end{equation}
and completes the proof.
\end{IEEEproof}
Note that for $\hat{X}\to X$ both $\mutinf{\hat{X};X}$ and $\mutinf{\hat{X};Y}$ diverge to infinity, but their difference not necessarily does. Further, if for all $y=g(x)$ the preimage is a singleton ($|\indset{g(x)}|=1$ for all $x\in\dom{X}$), $g(\cdot)$ is injective (thus bijective by Definition~\ref{def:function}) and the information loss $\ent{X|Y}=0$.
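To illustrate Theorem~\ref{thm:InfoLoss} with a simple case of our own choosing, let $\dom{X}=[-1,1]$, $g(x)=x^2$, and let $f_X$ be even. For almost every $x$ the preimage of $g(x)$ is $\{\pm|x|\}$ with $f_X(|x|)=f_X(-|x|)$ and $|g'(|x|)|=|g'(-|x|)|$, so the argument of the logarithm in~\eqref{eq:informationloss} equals $2$ and
\begin{equation*}
\ent{X|Y}=\log 2,
\end{equation*}
i.e., exactly one bit is lost -- the sign of the input.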
In Theorem~\ref{thm:InfoLoss} we provided an expression to calculate the information loss induced by a static nonlinearity. The following Theorem is of a different nature: It gives an explanation of \emph{why} information is lost at all. Considering non-injective functions $g(\cdot)$ satisfying Definition~\ref{def:function}, multiple input values may lead to the same output -- the preimage of $y$ may contain multiple elements. Given the output, the input is uncertain only w.r.t. which of these elements has been fed into the system under consideration. Due to the piecewise strict monotonicity of $g(\cdot)$ each subdomain contains at most one element of the preimage of $y$. Therefore, the information loss is identical to the uncertainty about the interval $\dom{X}_l$ from which the input $x$ originated given the output value $y$. Before making this statement precise in Theorem~\ref{thm:equivToRoots}, let us introduce the following Definition:
\begin{definition}
Let $W$ be a discrete RV with $|\dom{W}|=L$ mass points which is defined as
\begin{equation}
W=w_i \ \ \text{if} \ \ x\in\dom{X}_i \label{eq:defW}
\end{equation}
for all $i=1,\dots,L$.
\end{definition}
In other words, $W$ is a discrete RV which depends on the interval $\dom{X}_l$ of $x$, and not on its actual value.
As an immediate consequence of this Definition we obtain
\begin{equation}
\Prob{W=w_i} = p(w_i) = \int_{\dom{X}_i} f_X(x) dx
\end{equation}
i.e., the probability mass contained in the $i$-th interval.
In accordance with the model in Fig.~\ref{fig:sysmod} and the reasoning in the Appendix, one can think of $W$ as $\hat{X}$ when the quantization bins are identical to $\dom{X}_l$. While in Theorem~\ref{thm:InfoLoss} we required $\hat{X}$ to converge to $X$ surely, the next Theorem shows that indeed such a convergence is not necessary as long as the quantization bins are chosen appropriately. This fact will then establish the link between the non-injectivity, piecewise strict monotonicity on intervals, and information loss.
We are now ready to state the main Theorem:
\begin{thm}[Main Theorem]
\label{thm:equivToRoots}
The uncertainty about the input value $x$ after observing the output $y$ is identical to the uncertainty about the interval $\dom{X}_l$ from which the input was taken, i.e.,
\begin{equation}
\ent{X|Y}=\ent{W|Y}.
\end{equation}
\end{thm}
\begin{IEEEproof}
\input{Proof2.tex}
\end{IEEEproof}
The information loss induced by a function satisfying Definition~\ref{def:function} is thus only related to the roots of the equation $y=g(x)$. Conversely, if the interval $\dom{X}_l$ of $x$ is known, the exact value of $x$ can be reconstructed after observing $y$: \begin{IEEEeqnarray}{RCL}
\ent{X|Y} &=& \ent{X,W|Y}\\
&=& \ent{X|W,Y}+\ent{W|Y}\\
&=&\ent{X|W,Y}+\ent{X|Y}
\end{IEEEeqnarray}
and thus $\ent{X|W,Y}=0$.
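As a toy numerical cross-check of Theorems~\ref{thm:InfoLoss} and~\ref{thm:equivToRoots} (our own script, not part of the derivation), the following code evaluates~\eqref{eq:informationloss} by quadrature for $g(x)=\cos(x)$ on $\dom{X}=[0,2\pi)$ with uniform $f_X$. Here $L=2$ and both preimages of every $y$ carry equal weight, so the loss -- equivalently $\ent{W|Y}$ -- should be exactly one bit:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

f_X = lambda x: 1.0/(2.0*np.pi)         # uniform input density
dg  = lambda x: abs(np.sin(x))          # |g'(x)| for g(x) = cos(x)

def preimages(y):                       # roots of y = cos(x) in [0, 2*pi)
    x0 = np.arccos(y)
    return (x0, 2.0*np.pi - x0)

def integrand(x):
    num = sum(f_X(xi)/dg(xi) for xi in preimages(np.cos(x)))
    return f_X(x)*np.log2(num/(f_X(x)/dg(x)))

eps = 1e-9                              # stay clear of the zeros of sin(x)
loss, _ = quad(integrand, eps, 2.0*np.pi - eps, points=[np.pi])
print(loss)                             # ~ 1.0 bit
\end{verbatim}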
\begin{figure}[t]
\centering
\begin{pspicture}[showgrid=false](1,1.5)(8,4)
\pssignal(1,2){x}{$X$}
\psfblock[framesize=1.5 1](3,2){d}{$g(\cdot)$}
\psfblock[framesize=1.5 1](6,2){c}{$h(\cdot)$}
\pssignal(8,2){y}{$Z$}
\nclist[style=Arrow]{ncline}[naput]{x,d,c $Y$,y}
\psline[style=Dash](2,3)(4,3)
\psline[style=Dash](2,2.5)(2,3)
\psline[style=Dash](4,2.5)(4,3)
\psline[style=Dash](5,3)(7,3)
\psline[style=Dash](5,2.5)(5,3)
\psline[style=Dash](7,2.5)(7,3)
\psline[style=Dash](1.5,3.5)(7.5,3.5)
\psline[style=Dash](1.5,2.5)(1.5,3.5)
\psline[style=Dash](7.5,2.5)(7.5,3.5)
\rput*(3,3){\scriptsize\textcolor{black}{$\ent{X|Y}$}}
\rput*(6,3){\scriptsize\textcolor{black}{$\ent{Y|Z}$}}
\rput*(4.5,3.5){\scriptsize\textcolor{black}{$\ent{X|Z}$}}
\end{pspicture}
\caption{Cascade of systems}
\label{fig:cascade}
\end{figure}
Aside from the properties of conditional entropies (non-negativity~\cite[p.~15]{Cover_Information2}, asymmetry in its arguments, etc.), the information loss has an important property concerning the cascade of deterministic, static systems, which is not shared by the conditional entropy in general. For such a cascade (see Fig.~\ref{fig:cascade}), which in the static case is equivalent to a composition of the implemented functions, we can prove the following Theorem:
\begin{thm}\label{thm:transitivity}
Consider two functions $g{:}\ \dom{X}\to\dom{Y}$ and $h{:}\ \dom{Y}\to\dom{Z}$ satisfying Definition~\ref{def:function} and a cascade of systems implementing these functions, as shown in Fig.~\ref{fig:cascade}. Let $Y=g(X)$ and $Z=h(Y)$. Then, the information loss induced by this cascade, or equivalently, by the composition $(h\circ g)(\cdot)=h(g(\cdot))$ is given by:
\begin{equation}
\ent{X|Z}=\ent{X|Y}+\ent{Y|Z}
\end{equation}
\end{thm}
\begin{IEEEproof}
The proof starts by expanding $\ent{X,Y|Z}$
\begin{IEEEeqnarray*}{RCL}
\ent{X,Y|Z} &=& \ent{X|Y,Z} + \ent{Y|Z}\\
&=& \ent{X|Y} + \ent{Y|Z}
\end{IEEEeqnarray*}
since $X \to Y \to Z$ forms a Markov chain and thus $X$ and $Z$ are independent given $Y$~\cite{Cover_Information2}. Further, $\ent{X,Y|Z}=\ent{X|Z}$ since $Y$ is a function of $X$. Thus,
\begin{equation}
\ent{X|Z}=\ent{X|Y}+\ent{Y|Z}
\end{equation}
and the proof is complete.
\end{IEEEproof}
Extending Theorem~\ref{thm:transitivity}, we obtain the following Corollary:
\begin{cor}
Consider a set of functions $g_i{:}\ \dom{X}_{i-1}\to\dom{X}_{i}$, $i=1,\dots,N$, each satisfying Definition~\ref{def:function}, and a cascade of systems implementing these functions. Let $X_i$, $i=1,2,\dots,N$, denote the output of the $i$th constituent system and, thus, the input of the $(i+1)$th system. Given the input of the first system, $X_0$, we have
\begin{equation}
\ent{X_0|X_N} = \sum_{i=1}^{N} \ent{X_{i-1}|X_i}.
\end{equation}
\end{cor}
\begin{proof}
The Corollary is proved by repeated application of Theorem~\ref{thm:transitivity}.
\end{proof}
This result does not imply that the order in which the functions are arranged has no influence on the information loss of the cascade, as one might expect from experience with stable linear systems. Illustrative examples showing that the order does matter can be found, e.g., in~\cite{Johnson_ITNeural}. Moreover, calculating the individual information losses requires in each case the PDF of the input to the function under consideration. While this does not seem to yield an improvement compared to a direct evaluation of~\eqref{eq:informationloss}, Theorem~\ref{thm:transitivity} can be used to bound the information loss of the cascade efficiently whenever bounds on the individual information losses are available. We will introduce such bounds in the next Section.
\section{Upper Bounds on the Information Loss}\label{sec:bounds}
In many situations it might be inconvenient, or even impossible, to evaluate the information loss~\eqref{eq:informationloss} analytically since it involves the logarithm of a sum, for which only inequalities exist~\cite{Cover_Information2}. Therefore, one has to resort to numeric integration or use bounds on the information loss which are simpler to evaluate. In this Section we derive an upper bound which requires only minor knowledge about the function $g(\cdot)$ -- namely, the number of intervals $L$ -- and we show that this bound is tight.
\begin{figure*}[!ht]
\centering
\begin{pspicture}[showgrid=false](-6.0,-1)(6.0,4)
\psaxeslabels{->}(-4,0)(-4.2,-0.2)(4,4){$x$}{$F_X(x)$, \textcolor{red}{$g(x)$}, \textcolor{blue}{$h(x)$}}
\psplot[style=Graph]{-3}{3}{x 60 mul sin x add 3 div 1 add} \psplot[style=Graph]{-4}{-3}{0} \psplot[style=Graph]{3}{3.5}{2}
%
\psplot[style=Graph,linecolor=red]{-3}{-0.5}{x 60 mul sin x add -3 div 0.33 sub}
\psplot[style=Graph,linecolor=red]{-0.5}{0.5}{x 60 mul sin x add 3 div 0.33 add}
\psplot[style=Graph,linecolor=red]{0.5}{3}{x 60 mul sin x add 3 div 0.33 sub}\psplot[style=Graph,linecolor=red]{-4}{-3}{0.67} \psplot[style=Graph,linecolor=red]{3}{3.5}{0.67}
%
\psplot[style=Graph,linecolor=blue]{-3}{-0.5}{x 60 mul sin x add 3 div 3 add}
\psplot[style=Graph,linecolor=blue]{-0.5}{0.5}{x 60 mul sin x add 3 div 2.5 add}
\psplot[style=Graph,linecolor=blue]{0.5}{3}{x 60 mul sin x add -3 div 3.5 add}\psplot[style=Graph,linecolor=blue]{-4}{-3}{2} \psplot[style=Graph,linecolor=blue]{3}{3.5}{2.5}
\psTick{0}(-4,2) \rput[tr](-4.2,2){$1$}
\psset{braceWidthInner=5pt,braceWidthOuter=5pt,braceWidth=0.1pt}
\psbrace[ref=rC,linewidth=0.1pt,rot=180](-4,0.67)(-4,0){\textcolor{red}{$\dom{Z}$}}
\psbrace[ref=rC,rot=180,linewidth=0.1pt](-4,2.67)(-4,2){\textcolor{blue}{$\dom{Y}_1$}}
\psbrace[ref=lC,linewidth=0.1pt](0.5,2.17)(0.5,2.83){\textcolor{blue}{$\dom{Y}_2$}}
\psbrace[ref=lC,linewidth=0.1pt](3.5,2.5)(3.5,3.17){\textcolor{blue}{$\dom{Y}_3$}}
\psbrace[ref=t,rot=90,linewidth=0.1pt](-4,0)(-0.5,0){$\dom{X}_1$}
\psbrace[ref=t,rot=90,linewidth=0.1pt](-0.5,0)(0.5,0){$\dom{X}_2$}
\psbrace[ref=t,rot=90,linewidth=0.1pt](0.5,0)(3.5,0){$\dom{X}_3$}
\psline[style=Dash,linewidth=0.01, linecolor=gray](0.5,0)(0.5,3.5)
\psline[style=Dash,linewidth=0.01, linecolor=gray](-0.5,0)(-0.5,3.5)
\psline[style=Dash,linewidth=0.01](-0.5,2.17)(0.5,2.17) \psline[style=Dash,linewidth=0.01](0.5,3.17)(3.5,3.17)
\psline[style=Dash,linewidth=0.01](-4,2.67)(-0.5,2.67)
\psset{linecolor=blue}
\pszero(-0.5,2.67){p1} \dotnode(-0.5,2.17){d} \pszero(0.5,2.83){p1} \dotnode(0.5,3.17){d}
\psset{linecolor=red}
\pszero(0.5,0.67){p1} \dotnode(0.5,0){p1}
\end{pspicture}
\caption{Piecewise strictly monotone functions with $L=3$ satisfying conditions of Theorem~\ref{thm:UpperBoundLoss}. The function in blue, $h{:}\ \dom{X}\to\dom{Y}$, renders~\eqref{eq:reqForBound} piecewise constant but not constant due to improper setting of the constants $c_l$. Tightness is achieved in the smallest bound,~\eqref{eq:bound1}, only. The function in red, $g{:}\ \dom{X}\to\dom{Z}$, satisfies all conditions (i.e.,~\eqref{eq:reqForBound} is constant and $\dom{Z}_l=\dom{Z}$ for all $l$) and thus achieves equality in the largest bound~\eqref{eq:bound3}. Note that the subdomains $\dom{X}_l$ are chosen such that each subdomain contains the same probability mass.}
\label{fig:satTh4}
\end{figure*}
\begin{thm}\label{thm:UpperBoundLoss}
The information loss induced by a function $g(\cdot)$ satisfying Definition~\ref{def:function} can be upper bounded by the following ordered set of inequalities:
\begin{IEEEeqnarray}{RCL}
\ent{X|Y} &\leq& \int_\dom{Y} f_Y(y) \log \left(|\indset{y}|\right) dy \label{eq:bound1}\\
&\leq& \log \left(\sum_{l=1}^L \int_{\dom{Y}_l} f_Y(y) dy \right)\label{eq:bound2} \\&\leq& \log L\label{eq:bound3}
\end{IEEEeqnarray}
Bound~\eqref{eq:bound1} holds with equality if and only if
\begin{equation}
\sum_{k\in\indset{g(x)}} \frac{f_X(x_k)}{|g'(x_k)|}\frac{|g'(x)|}{f_X(x)} \label{eq:reqForBound}
\end{equation}
is piecewise constant. If this expression is constant for all $x\in\dom{X}$, bound~\eqref{eq:bound2} is tight. Bound~\eqref{eq:bound3} holds with equality if and only if additionally $\dom{Y}_l=\dom{Y}$ for all $l=1,\dots,L$, and thus~\eqref{eq:reqForBound} evaluates to $L$.
\end{thm}
\begin{IEEEproof}\input{Proof4.tex}
\end{IEEEproof}
An example of a function $g(\cdot)$ for which~\eqref{eq:reqForBound} is piecewise constant is one that, on each interval $\dom{X}_l$, assumes the shape of the cumulative distribution function $F_X(x)$, possibly modified by a sign and an additive constant. In other words, for all $l=1,\dots,L$
\begin{equation}
g_l(x) = b_lF_X(x) + c_l
\end{equation}
where $b_l\in\{1,-1\}$ and $c_l\in\mathbb{R}$ are arbitrary constants. Such a function $h{:}\ \dom{X}\to\dom{Y}$ is depicted in Fig.~\ref{fig:satTh4}. The constants $c_l$ and the probability masses in each interval are constrained if~\eqref{eq:reqForBound} is to be constant. As a special case, consider this constant to be equal to $L$, which guarantees tightness in the largest bound~\eqref{eq:bound3}. In order that appropriate constants $c_l$ exist, all intervals $\dom{X}_l$ have to contain the same probability mass, i.e.,
\begin{equation}
\int_{\dom{X}_l} f_X(x) dx = \frac{1}{L}.
\end{equation}
Since equal probability mass in all intervals is a necessary, but not a sufficient condition for equality in~\eqref{eq:bound3} (cf.~Fig.~\ref{fig:satTh4}), the constants $c_l$ have to be set to
\begin{equation}
c_l = -\sum_{i=1}^{l-1} \int_{\dom{X}_i} f_X(x) dx = -\frac{l-1}{L}\label{eq:cls}
\end{equation}
where we assume that the intervals are ordered and where $b_l=1$ for all $l$. A function $g{:}\ \dom{X}\to\dom{Z}$ satisfying these requirements is shown in Fig.~\ref{fig:satTh4}.
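For concreteness, this construction can be checked numerically. The following Python sketch (our own helper names; assumed setup: standard normal input and $L=3$) builds $g(\cdot)$ from $F_X$ with the constants~\eqref{eq:cls} and evaluates~\eqref{eq:reqForBound} by Monte Carlo:
\begin{verbatim}
import numpy as np
from scipy import stats

# Assumed setup: X ~ N(0,1), L = 3 equiprobable intervals,
# b_l = 1 and c_l = -(l-1)/L; all names are ours.
L = 3
rv = stats.norm()
edges = rv.ppf(np.linspace(0.0, 1.0, L + 1))   # interval boundaries

def g(x):
    l = np.searchsorted(edges, x) - 1          # 0-based interval index
    return rv.cdf(x) - l / L                   # maps each interval onto [0, 1/L]

x = rv.rvs(size=100_000, random_state=0)
y = g(x)
# One preimage of y per interval: x_k = F_X^{-1}(y + k/L)
roots = rv.ppf(y[None, :] + np.arange(L)[:, None] / L)
fX, gprime = rv.pdf, rv.pdf                    # |g_l'(x)| = f_X(x) on every interval
terms = (gprime(x) / fX(x)) * (fX(roots) / gprime(roots))
print(np.mean(np.log2(terms.sum(axis=0))))     # -> log2(3) ~ 1.585 bits
\end{verbatim}
Since $|g_l'(x)|=f_X(x)$ on every interval, each term of~\eqref{eq:reqForBound} equals one, the sum evaluates to $L$, and the printed estimate is $\log_2 3\approx 1.585$~bits, i.e., bound~\eqref{eq:bound3} is attained.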
Another example of a function satisfying the tightness conditions of Theorem~\ref{thm:UpperBoundLoss} is given in Example 1 of Section~\ref{sec:examples}.
\section{Examples}
\label{sec:examples}
In this Section, the application of the obtained expression for the information loss and its upper bounds is illustrated. Unless otherwise noted, the logarithm is taken to base $2$.
\subsection{Example 1: Even PDF, Magnitude Function}\label{ssec:ex1}
Consider a continuous RV $X$ with an even PDF, i.e., $f_X(-x)=f_X(x)$. Let the support $\dom{X}=\mathbb{R}$ and let this RV be the input to the magnitude function, i.e.,
\begin{equation}
g(x)=|x|=
\begin{cases}
-x,& \text{if }x<0\\x,&\text{if }x\geq0
\end{cases}
.\label{eq:magnitude}
\end{equation}
The magnitude function is piecewise strictly monotone on $\dom{X}_1=(-\infty,0)$ and $\dom{X}_2=[0,\infty)$, and with $L=2$ we obtain the largest bound from Theorem~\ref{thm:UpperBoundLoss} as
\begin{equation}
\ent{X|Y} \leq \log 2=1.
\end{equation}
Both intervals are mapped to the positive (non-negative) real axis, i.e., $\dom{Y}_1\cup \{0\}=\dom{Y}_2=\dom{Y}=[0,\infty)$, which implies that the second bound in Theorem~\ref{thm:UpperBoundLoss} also yields $\ent{X|Y}\leq1$. The magnitude of the first derivative of $g(\cdot)$ is equal to unity on both $\dom{X}_1$ and $\dom{X}_2$. There are two partial inverses mapping $\dom{Y}$ to the subdomains of $\dom{X}$:
\begin{IEEEeqnarray}{RCL}
x_1=g_1^{-1}(y)&=&-y=-g(x)\text{, and}\\
x_2=g_2^{-1}(y)&=&y=g(x).
\end{IEEEeqnarray}
Thus for all $x\in\dom{X}$ we have $|\indset{g(x)}|=2$, which renders the smallest bound of Theorem~\ref{thm:UpperBoundLoss} as $\ent{X|Y}\leq 1$.
Combining~\eqref{eq:magnitude} with the two partial inverses, we obtain for $x\in\dom{X}_1$:
\begin{IEEEeqnarray}{RCL}
x_1&=&x\text{, and}\\
x_2&=&-x.
\end{IEEEeqnarray}
Conversely, for $x\in\dom{X}_2$ we have $x_1=-x$ and $x_2=x$. Using this in~\eqref{eq:informationloss} we obtain the information loss
\begin{IEEEeqnarray}{RCL}
\ent{X|Y} &=& \int_{\dom{X}_1} f_X(x) \log \left( \frac{f_X(x)+f_X(-x) }{f_X(x) } \right)dx\notag\\
&&{+}\:\int_{\dom{X}_2} f_X(x) \log \left( \frac{f_X(-x)+f_X(x) }{f_X(x) } \right)dx\notag\\
&=& \log 2 \int_\dom{X} f_X(x) dx=1\notag
\end{IEEEeqnarray}
which shows that all bounds of Theorem~\ref{thm:UpperBoundLoss} are tight in this example.
The conditional entropy is identical to one bit. In other words, if an RV with an even PDF (thus, with equal probability masses for positive and negative values) is fed through a magnitude function, one bit of information is lost. Although this result seems obvious, to the best knowledge of the authors this is the first time that it has been derived for a continuous input.
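This result is readily confirmed numerically for any concrete even density. The following quadrature sketch (our own; a Laplace density serves as an arbitrary example) evaluates the integrand derived above:
\begin{verbatim}
import numpy as np
from scipy import stats, integrate

f = stats.laplace().pdf                # any even density works here
integrand = lambda x: f(x) * np.log2((f(x) + f(-x)) / f(x))
loss, _ = integrate.quad(integrand, -np.inf, np.inf)
print(loss)                            # -> 1.0 bit, since f(-x) = f(x)
\end{verbatim}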
\subsection{Example 2: Zero-Mean Uniform PDF, Piecewise Strictly Monotone Function}
\begin{figure}[t]
\centering
\begin{pspicture}[showgrid=false](-4.0,-0.5)(4.0,3.2)
\psaxeslabels{->}(0,0)(-3,-0.2)(3,3.2){$x$}{$g(x)$}
\psplot[style=Graph]{0}{1.7}{x} \psplot[style=Graph]{-1.7}{0}{x x mul}
\psTick{90}(-1.7,0) \rput[tr](-1.7,-0.2){$-a$}
\psTick{90}(1.7,0) \rput[tl](1.7,-0.2){$a$} \psTick{90}(1,0) \rput[tl](1.1,0.3){\footnotesize$1$}
\psTick{90}(-1.3,0) \rput[tc](-1.3,0.3){\footnotesize${-}\sqrt{a}$}\psline[style=Dash,linewidth=0.01,linecolor=gray](-1.7,1.7)(2,1.7)
\psline[style=Dash,linewidth=0.01,linecolor=gray](-1.3,1.8)(-1.3,0)
\psline[style=Dash,linewidth=0.01,linecolor=red](1,1)(1,0)\psline[style=Dash,linewidth=0.01,linecolor=red](-1,1)(-1,0)
\psline[style=Dash,linewidth=0.01,linecolor=red](-1,1)(1,1)
\psset{braceWidthInner=5pt,braceWidthOuter=5pt,braceWidth=0.1pt}
\psbrace[ref=lC,linewidth=0.01](2,0)(2,1.7){$\dom{Y}_2$}\psbrace[ref=rC,rot=180,linewidth=0.01](-2,2.89)(-2,0){$\dom{Y}_1$}
\psbrace[ref=t,rot=90,linewidth=0.01](-1.7,0)(0,0){$\dom{X}_1$}\psbrace[ref=t,rot=90,linewidth=0.01](0,0)(1.7,0){$\dom{X}_2$}
\end{pspicture}
\caption{Piecewise strictly monotone function of Example 2}
\label{fig:sqlin}
\end{figure}
Consider an RV $X$ uniformly distributed on $[-a,a]$, $a\geq 1$, and a function $g(\cdot)$ defined as:
\begin{equation}
g(x)=\begin{cases}
x^2,&\text{if }x<0\\x,&\text{if }x\geq0
\end{cases}.
\label{eq:squarelin}
\end{equation}
This function, depicted in Fig.~\ref{fig:sqlin}, is piecewise strictly monotone on $(-\infty,0)$ and $[0,\infty)$. We introduce the following partitioning:
\begin{IEEEeqnarray}{RCL}
\dom{X}_1=[-a,0) &\rightarrow& \dom{Y}_1=(0,a^2]\\
\dom{X}_2=[0,a]&\rightarrow& \dom{Y}_2=[0,a]
\end{IEEEeqnarray}
Since the function is not differentiable at the origin, we define $|g'(\cdot)|$ in a piecewise manner by the magnitude of the first derivatives of $g_l(\cdot)$:
\begin{equation}
|g'(x)|=\begin{cases}
2|x|,&\text{if }x<0\\1,&\text{if }x\geq0
\end{cases}
\end{equation}
The two partial inverses on $\dom{Y}_2$ are given by
\begin{IEEEeqnarray}{RCL}
x_1&=&-\sqrt{y}=-\sqrt{g(x)}=
\begin{cases}
x,&\text{if }x<0\\-\sqrt{x},&\text{if }x\geq0
\end{cases}\text{ and}\\
x_2&=&y=g(x)=
\begin{cases}
x^2,&\text{if }x<0\\x,&\text{if }x\geq0
\end{cases}
\end{IEEEeqnarray}
while on $\dom{Y}\backslash\dom{Y}_2=(a,a^2]$ only the root $x_1$ exists (i.e., this part of the image is attained bijectively). Noticing that the information loss on the corresponding preimage $\dom{X}_b=[-a,-\sqrt{a})$ is zero, we can write for the conditional entropy:
\begin{IEEEeqnarray}{RCL}
\ent{X|Y} &=&
\int_{\dom{X}_1\setminus\dom{X}_b} f_X(x) \log \left( \frac{\frac{f_X(x)}{2|x|}+f_X(x^2) }{\frac{f_X(x)}{2|x|} } \right)dx\notag\\
&&{+}\:\int_{\dom{X}_2} f_X(x) \log \left( \frac{\frac{f_X(-\sqrt{x})}{2\sqrt{x}}+f_X(x) }{f_X(x) } \right)dx\notag\\
\end{IEEEeqnarray}
Since $f_X(x)=\frac{1}{2a}$ for all $x$, $x^2$, and $-\sqrt{x}$ in the designated integration ranges, we obtain
\begin{IEEEeqnarray}{RCL}
\ent{X|Y} &=& \int_{-\sqrt{a}}^{0} \frac{1}{2a} \log \left( 1+2|x| \right)dx\notag\\
&&{+}\:\int_{0}^{a} \frac{1}{2a} \log \left( 1+\frac{1}{2\sqrt{x}} \right)dx\notag\\
&=& \frac{4a+4\sqrt{a}+1}{8a} \log (2\sqrt{a}+1)\notag\\
&&{-}\:\frac{\log(2\sqrt{a})}{2}-\frac{1}{4\sqrt{a}\ln2}\notag
\end{IEEEeqnarray}
where $\ln$ is the natural logarithm. For $a=1$ this evaluates to $\ent{X|Y}\approx 0.922$~bits. The information loss is slightly less than one bit, despite the fact that two equal probability masses collapse and the complete sign information is lost. This suggests that part of the sign information can be retrieved by observing the output. Looking at Fig.~\ref{fig:sqlin}, one can see that for the subdomain located on the negative real axis, i.e., for $\dom{X}_1$, more probability mass is mapped to smaller outputs than to larger outputs. Thus, for a small output value $y$ it is more likely that the input originated from $\dom{X}_1$ than from $\dom{X}_2$ (and vice-versa for large output values). Mathematically, this means that despite $p(w_1)=p(w_2)=0.5$, we have $p(w_1|y)\neq p(w_2|y)$, which according to Theorem~\ref{thm:equivToRoots} plays a central role in computing the conditional entropy $\ent{X|Y}$.
By evaluating the bounds from Theorem~\ref{thm:UpperBoundLoss} we obtain
\begin{equation}
\ent{X|Y} \leq \frac{1+\sqrt{a}}{2\sqrt{a}} \leq \log\left(\frac{3\sqrt{a}+1}{2\sqrt{a}}\right)\leq 1
\end{equation}
which for $a=1$ all evaluate to 1 bit. The bounds are not tight as the conditions of Theorem~\ref{thm:UpperBoundLoss} are not met in this case.
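The closed-form expression can be cross-checked by numerical quadrature of the two integrals; the following sketch (our own, for $a=1$) reproduces the value of approximately $0.922$~bits:
\begin{verbatim}
import numpy as np
from scipy import integrate

a = 1.0                                 # uniform input on [-a, a]
fX = 1.0 / (2.0 * a)
I1, _ = integrate.quad(lambda x: fX * np.log2(1.0 + 2.0 * abs(x)),
                       -np.sqrt(a), 0.0)
I2, _ = integrate.quad(lambda x: fX * np.log2(1.0 + 0.5 / np.sqrt(x)),
                       0.0, a)
print(I1 + I2)                          # -> ~0.922 bits for a = 1
\end{verbatim}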
\subsection{Example 3: Normal PDF, Third-order Polynomial}\label{ex:3rd}
\begin{figure}[t]
\centering
\begin{pspicture}[showgrid=false](-4,-1.25)(4,2)
\psaxeslabels{->}(0,0)(-3,-1.25)(3,1.5){$x$}{$g(x)$}
\psplot[style=Graph]{-2.6}{2.6}{x x mul x mul x -4 mul add 0.3 mul 2 div}
\psTick{90}(-1.15,0) \rput[th](-1.15,-0.1){\footnotesize$-\frac{10}{\sqrt{3}}$}
\psTick{90}(2.31,0) \rput[th](2.31,-0.1){\footnotesize$\frac{20}{\sqrt{3}}$}
\psline[style=Dash,linewidth=0.01, linecolor=gray](-2.6,0.46)(2.6,0.46)
\end{pspicture}
\caption{Third-order polynomial of Example 3}
\label{fig:3rd}
\end{figure}
Finally, consider a Gaussian RV $X\sim\normdist{0}{\sigma^2}$ and the function
\begin{equation}
g(x) = x^3-100x
\end{equation}
depicted in Fig.~\ref{fig:3rd}. An analytic computation of the information loss is prevented by the logarithm of a sum in~\eqref{eq:informationloss}. Still, we will show that with the help of Theorem~\ref{thm:UpperBoundLoss} at least a bound on the information loss can be computed.
Judging from the extrema of this function, three piecewise monotone intervals can be defined.
Further, the domain which is mapped bijectively can be shown to be identical to
$\dom{X}_b = \left(-\infty,-\frac{20}{\sqrt{3}}\right] \cup \left[\frac{20}{\sqrt{3}},\infty\right)$
and contains a probability mass of
\begin{equation}
P_b = 2F_X\left(-\frac{20}{\sqrt{3}}\right) = 2Q\left(\frac{20}{\sqrt{3}\sigma}\right)
\end{equation}
where $Q(\cdot)$ is the $Q$-function.
With this result and the fact that
\begin{equation}
P_b=\int_{\dom{X}_b} f_X(x)dx=\int_{\dom{Y}_b} f_Y(y)dy
\end{equation}
for a bijective mapping $g(\cdot)$ between $\dom{X}_b$ and $\dom{Y}_b$ we can upper bound the information loss by Theorem~\ref{thm:UpperBoundLoss}:
\begin{IEEEeqnarray}{RCL}
\ent{X|Y} &\leq& \int_\dom{Y} f_Y(y)\log(|\indset{y}|)dy\\
&=&\int_{\dom{Y}\setminus\dom{Y}_b} f_Y(y)\log 3\, dy
= (1-P_b)\log 3
\end{IEEEeqnarray}
since $|\indset{y}|=1$ if $y\in\dom{Y}_b$ and $|\indset{y}|=3$ if $y\in\dom{Y}\setminus\dom{Y}_b$.
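As a brief illustration (our own sketch), the bound $(1-P_b)\log 3$ can be evaluated for a range of $\sigma$ using the relation between the $Q$-function and the standard normal survival function:
\begin{verbatim}
import numpy as np
from scipy import stats

sigma = np.array([2.0, 5.0, 10.0, 20.0, 40.0])
# Q(20 / (sqrt(3) sigma)) via the standard normal survival function
Pb = 2.0 * stats.norm.sf(20.0 / (np.sqrt(3.0) * sigma))
print((1.0 - Pb) * np.log2(3.0))   # bound in bits; -> log2(3) for large sigma
\end{verbatim}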
\begin{figure}[t]
\centering
\includegraphics[width=0.49\textwidth]{3rd}
\caption{Information loss of Example 3}
\label{fig:3rdorderSIM}
\end{figure}
In Fig.~\ref{fig:3rdorderSIM}, this bound is illustrated together with the results from numerical integration of~\eqref{eq:informationloss} and from Monte-Carlo simulations of the information loss.
\begin{figure*}[b]
\normalsize
\hrulefill
\begin{IEEEeqnarray}{RCL}
\ent{X|Y} &=& \int f_{X}(x) \log\left(\frac{|g'(x)|f_{Y}(g(x))}{f_X(x)}\right)dx-\lim_{\hat{X}\rightarrow X}\iint f_{\hat{X}X}(\hat{x},x) \log\left(\sum_{i\in\indset{g(x)}}\frac{|g'(x)|f_{X|\hat{X}}(\hat{x},x_i)}{|g'(x_i)|f_{X|\hat{X}}(\hat{x},x)}\right)d\hat{x}dx\label{eq:long1}
\end{IEEEeqnarray}
\end{figure*}
\section{Conclusion}
We presented an analytic expression for the information loss induced by a static nonlinearity. It was shown that this information loss is strongly related to the non-injectivity of the system, i.e., to the fact that a particular output can result from multiple possible inputs. Conversely, given a certain output, the input to the system under consideration is uncertain only with respect to the roots of the equation describing the system. The information loss can be utilized, e.g., for estimating the reconstruction error for nonlinearly distorted signals.
Since the obtained expression involves the integral over the logarithm of a sum, bounds on the information loss were derived which can be computed with much less difficulty. In particular, it was shown that the information loss for a piecewise strictly monotone function is upper bounded by the logarithm of the number of subdomains, and that this bound is tight.
Generalizations of these results to rates of information loss and nonlinear systems with memory, as well as the extension to discrete random variables are the object of future work.
\section*{Acknowledgments}
The authors gratefully acknowledge discussions with Sebastian Tschiatschek concerning mathematical notation, and his comments which improved the quality of this manuscript.
During the last decade, technological developments have made possible the
fabrication of one- and zero-dimensional nanostructures such as quantum wires
and quantum dots (QDs). The interest in these systems comes from their novel
optical and transport properties and has been stimulated by the success of
quantum wells in technology. The effect of reduced dimensionality on the
electronic excitations and the related optical properties has been the subject of
intensive investigation and nowadays it is more or less well
understood.
Semiconductor-doped glasses (SDGs) are particularly useful to investigate the
vibrational modes in quasi-zero-dimensional systems, because the use of
appropriate thermal annealing techniques makes it possible to grow
semiconductor nanocrystallites with small enough radius to show the effects
of spatial confinement on the optical vibrational modes. Raman spectroscopy
is a valuable tool to probe the active optical modes and also to obtain
information about the electronic system. In addition, resonant Raman
scattering (RRS) can be used as a size selective technique\cite{3a}, which
could play an important role in SDGs due to their broad dispersion in
microcrystallite sizes. Recently, the mechanism and features of Raman
scattering by semiconductor nanocrystallites have been studied\cite
{r1,r2,r3,r4}, showing the effects of the reduced dimensionality on the
Raman shift and lineshape. A preliminary theory of first-order RRS in
spherical microcrystallites has been developed in \onlinecite{r1}
and \onlinecite{chamb} on the basis of a continuum model for polar optical
vibrations. These models consider the electronic intermediate states as
uncorrelated electron-hole pair (EHP) states, that is, in the strong size
quantized regime (model I).
An extension to the above theories, considering the
electron-hole interaction effects has been recently
presented\cite{eduard} (model II).
The calculations performed in \onlinecite{eduard} are strictly valid
for excitons completely confined within dots.
Raman scattering in the Fr\"ohlich configuration
considerably depends on the differences between electron and hole
wave functions ({\it electron-hole
decompensation})\cite{lscatter}. The theoretical values
of the Raman cross section and lineshape therefore depend on the
electron-hole model and the confinement potential used in the
calculation. Hence, in the framework of a free EHP model with infinite
barriers the same wave functions for electrons and holes are obtained and
the Fr\"ohlich mechanism yields a null contribution to the Raman cross
section. The scattering efficiencies following models I and II
considerably differ when absolute values are calculated, even for
QDs with radii smaller than the exciton Bohr radius. In
model I the finite confinement barriers are considered but
regardless of excitonic effects. Model II includes the
electron-hole correlation in an infinite barrier, but the chosen potential
diminishes the electron-hole decompensation occurring through the
finite band offsets potential. In \onlinecite{eduard} on
the lines of model II an effective radius $R_{ef}$ was introduced
in order to take into account the penetration of the exciton
wave function into the adjacent medium. This procedure allows, in some way,
the RRS calculations in real systems using the mathematical simplicity of
the infinite barrier basis functions.
We will show that accurate exciton ground state energies can be
achieved within the above approach, although it underestimates the
absolute values of the Raman cross section. It is well established
that a reliable Raman scattering theory becomes necessary in order to
interpret RRS absolute values in semiconductors\cite{cant}.
The purpose of the present paper is to clarify
the electron-hole decompensation effect
on the absolute values of scattering intensities and Raman
lineshapes taking into account uncorrelated and correlated
electron-hole theories and using different confinement potential models.
The paper is organized as follows. In Sec.~II we provide the
theoretical basis needed to obtain the Raman cross section where
the electronic intermediate states are excitons in a finite
spherical potential box (model III). Theories I and II are
derived as proper limits from the more general model III. We
also compare the Raman intensity values for CdS QDs embedded
in a glass matrix, obtained along the lines of the above described
theoretical models. In Sec.~III we present the conclusions of the
present work.
\section{Results and discussion}
The Raman cross section $\partial^2 \sigma/\partial\Omega%
\partial\omega_s$ of a dot of radius $R$ can be expressed as\cite{eduard}
\begin{eqnarray} \label{ec:2}
\frac{\partial^2 \sigma}{\partial\Omega\partial\omega_s}=S_0\sum_{n_p}
\left|\sum_{N,N^{\prime}}\frac{f_N \langle
N|h_{E-P}^{(n_p)}|N^{\prime}\rangle f_{N^{\prime}}} {(\hbar\omega_s-E_{N^{%
\prime}}(R)+i\Gamma_{N^{\prime}})(\hbar\omega_l-E_{N}(R)+i\Gamma_{N})}
\right |^2 \times \nonumber \\
\frac{\Gamma_{n_p}/\pi}{(\hbar\omega_l-\hbar\omega_s-\hbar%
\omega_{n_p}(R))^2+\Gamma_{n_p}^2}.
\end{eqnarray}
Here, $\hbar\omega_l$ ($\hbar\omega_s$) is the incoming (outgoing)
photon energy, $E_N$ ($\Gamma_N$) is the energy (broadening) of the
intermediate $L=0$ electronic state $|N\rangle$ ($L$ being the quantum
number of the total electronic angular momentum square), $f_N$ their optical
strengths, $\langle N|h_{E-P}^{(n_p)}|N^{\prime}\rangle$ is the matrix element
of the electron-Fr\"{o}hlich-type lattice interaction (in dimensionless units%
\cite{eduard}) and $n_p$ is the vibron\cite{t13} quantum number with
angular momentum $l_p=0$ and frequency $\omega_{n_p}$. $S_0$ is a constant
which depends on the semiconductor parameters and the embedding medium%
\cite{eduard}.
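To make the computational structure of Eq.~(\ref{ec:2}) explicit, the following schematic Python routine (our own; all physical inputs are assumed to be precomputed from the dot model, and $\hbar$ is absorbed into the units) accumulates the double sum over intermediate states and the Lorentzian over the vibron modes:
\begin{verbatim}
import numpy as np

def raman_cross_section(wl, ws, E, Gam, f, h, w_p, Gam_p, S0=1.0):
    """Schematic evaluation of Eq. (1). E, Gam, f: arrays over the L=0
    intermediate states |N>; h[p]: matrix <N|h^{(p)}|N'> for vibron p;
    w_p, Gam_p: vibron frequencies and broadenings. All inputs are
    hypothetical placeholders assumed precomputed from the dot model."""
    total = 0.0
    amp_in = f / (wl - E + 1j * Gam)    # f_N / (w_l - E_N + i Gam_N)
    amp_out = f / (ws - E + 1j * Gam)   # f_N' / (w_s - E_N' + i Gam_N')
    for p, (wp, gp) in enumerate(zip(w_p, Gam_p)):
        amp = amp_in @ h[p] @ amp_out   # double sum over N and N'
        lorentz = (gp / np.pi) / ((wl - ws - wp) ** 2 + gp ** 2)
        total += abs(amp) ** 2 * lorentz
    return S0 * total
\end{verbatim}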
The exciton wave function $\Psi({\bf r}_e,{\bf r}_h)$ is obtained by the
expansion
\begin{equation}
\Psi _{N,L,M}({\bf r}_e,{\bf r}_h)=\sum_{\alpha =\{n_e,n_h,l_e,l_h\}}C_{N,L,M}(\alpha
)\Phi _\alpha ({\bf r}_e,{\bf r}_h), \label{exc:funcion}
\end{equation}
where the basis functions $\Phi_\alpha ({\bf {r}_e,{r}_h)}$ are
eigenfunctions of the total angular momentum square $\hat L^2$, its z-projection $%
\hat L_z$, and the Hamiltonian of the free EHP in the dot. The functions $%
\Phi _\alpha ({\bf r}_e,{\bf r}_h)$ are constructed from the dot electron
and hole wave functions ($\phi _{n_e,l_e,m_e}({\bf r}_e)$ and $\phi
_{n_h,l_h,m_h}({\bf r}_h)$), through the relation
\begin{equation}
\Phi _\alpha ({\bf r}_e,{\bf r}_h)=\sum_{m_e,m_h}(l_el_hm_em_h|LM)\phi
_{n_e,l_e,m_e}({\bf r}_e)\phi _{n_h,l_h,m_h}({\bf r}_h). \label{ocho}
\end{equation}
$(l_el_hm_em_h|LM)$ being the well-known Clebsch-Gordan coefficients.
The coefficients $C_{N,L,M}(\alpha )$ and the eigenenergy $E_N$ are obtained from
numerical diagonalization of the exciton Hamiltonian in a spherical potential well,
using the basis defined by
equation~(\ref{ocho})\cite{nota1}. If the uncorrelated theory
(model I) is considered, for every eigenstate there is only one
non-zero coefficient $C_{N,L,M}(\alpha )$ in
the expansion (\ref{exc:funcion}). This approach leads to the same results
as the formalism of~\onlinecite{chamb}.
On the other hand, models II and III differ in the
radial parts of the electronic wave functions
$\phi _{n_e,l_e,m_e}({\bf r}_e)$ and $\phi_{n_h,l_h,m_h}({\bf r}_h)$, which
depend on the chosen confinement potential.
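The diagonalization step described above can be sketched as follows (a schematic of our own; \texttt{E\_ehp} and \texttt{coulomb} stand for the free-EHP energies and the Coulomb matrix elements of the chosen confinement model, both assumed supplied):
\begin{verbatim}
import numpy as np

def exciton_levels(n_max, l_max, E_ehp, coulomb):
    """Diagonalize the exciton Hamiltonian in the L = 0 free-EHP basis
    (l_e = l_h = l). Returns the energies E_N and the coefficients
    C_N(alpha) entering the expansion (2); E_ehp and coulomb are
    hypothetical callables supplied by the dot model."""
    basis = [(ne, nh, l) for l in range(l_max + 1)
                         for ne in range(1, n_max + 1)
                         for nh in range(1, n_max + 1)]
    H = np.diag([E_ehp(a) for a in basis]).astype(float)
    for i, a in enumerate(basis):
        for j, b in enumerate(basis):
            H[i, j] -= coulomb(a, b)   # attractive electron-hole term
    E, C = np.linalg.eigh(H)           # H assumed real symmetric
    return basis, E, C
\end{verbatim}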
The resonance condition with a particular electronic level $N$ is given by
the equations $\hbar \omega _s=E_N(R)$ (outgoing resonance) or $\hbar \omega
_l=E_N(R)$ (incoming resonance). In the dipole approximation only excitons
with $L=0$ are created or annihilated, corresponding to $l_e=l_h$ interband
transitions in the free EHP model. If the valence band mixing is neglected,
only $l_p=0$ vibrons contribute to the Raman scattering.
The calculation of the matrix elements of Eq.~(\ref{ec:2}) has been
performed in \onlinecite{eduard} for the case of totally confined
excitons, while the strong size quantized regime (non exciton effects) has
been developed in \onlinecite{chamb}. The parameters used in
our calculations correspond to a CdS QD of
radius $20$~\AA{}\cite{eduard}. This means that the dot is in the strong confinement
regime. In this regime the Coulomb attraction shifts the EHP energies to
lower values and small changes on the wave functions are expected.
Figure~\ref{fig1} shows the electron and hole density of probability for the
three lowest $L=0$ excitonic eigenstates as functions of $r$,
the distance to the
dot center. The density of probability in the case of the uncorrelated
EHP model (I) is shown by the dashed curve. For the $N=1,L=0$ excitonic state
the effect of correlation is to push both the electron and the hole to the
dot center. As can be seen, for the system under consideration
(CdS QD of radius $20$~\AA ) the effect of
the finite confinement on the electron-hole decompensation is larger than
that of the electron-hole interaction. As we shall see, if
the former effect is neglected considerable changes
on the predicted Raman cross section absolute values are obtained.
In Figure~\ref{fig2}(a) we compare the calculated Raman cross
section for incoming light in resonance with the $N=1$ excitonic
state following models I, II and III. The incoming resonances
happen at $\hbar\omega _l=2.870$ eV in the finite barrier
excitonic model III
(solid curve), at $\hbar\omega _l=3.014$ eV in the
uncorrelated EHP model I
(dashed curve) and at $\hbar \omega _l=2.878$ eV for the excitonic model II
(dot-dashed curve), assuming an effective radius
$R_{ef}=26$~\AA{} to simulate the finite-barrier height. It can be seen that
accurate $N=1$ exciton energy can be obtained following the formalism of
model II.
The $N=1$ excitonic state, as can be seen in Table~\ref{tab2}(a),
is mainly composed of EHP states with quantum numbers $n_e=n_h=1,l_e=l_h=0$
with a large oscillator strength $|f_N|^2$, giving the main contribution to
the cross-section in the resonance condition. The line shape is almost the same
in the three models. The difference between those
models lies in the absolute values of the cross-section, which are
smaller in model II. It is clear that the dominant effect on absolute
values comes from: (a) the values of the oscillator strength\cite{11};
(b) the EHP wave functions decompensation produced by the finite
depth of the spherical well. It can be seen from Figure~\ref{fig1}
that the electron-hole decompensation for the first level is
slightly larger in model III than the free EHP theory (I), something that is
reflected on the values of the exciton-vibron matrix elements reported in
Table~\ref{tab2}. In
the case of excitons completely confined
(II) the exciton-vibron matrix elements
$\langle 1|h^{(n_p)}|1\rangle$ are one order of magnitude
smaller than I and III (see \onlinecite{eduard}).
However, we must note that in the case of the
electrons, the effective mass in the glass matrix is five times larger than
its value inside the dot, causing
an extremely large
decompensation. A similarly large effect can be achieved if one of the
barrier heights is too small.
Figure~\ref{fig2}(b) shows the Raman spectrum in the case of incoming
resonance with the $N=2$ exciton at
$\hbar\omega_l=3.439$, $3.205$ and $3.292$~eV in models I, II and
III respectively. In I, the $N=2$ state is
the free EHP with quantum numbers $n_e=1,n_h=2,l_e=l_h=0$ and it has a weak
optical activity, as can be seen from the corresponding oscillator strength $%
|f_N|^2$ in Table~\ref{tab2}(b). Nevertheless, the excitonic effects produced
by the Coulomb interaction greatly enhance its oscillator strength (see
Table~\ref{tab2}(a)) and a strong incoming resonance is obtained. It must be
noted that even when the matrix element $\langle 2|h^{(n_p)}|2\rangle$ is
maximum for $n_p=2$, the main contribution to the cross-section in Figure~\ref
{fig2}(b) corresponds to $n_p=1$, a fact that can be explained by
interference effects due to virtual transitions between $N=2$ and $N=3$
excitonic levels. Figure~\ref{fig1}(b) shows that electron-hole
decompensation is similar for I and III. Nevertheless a
completely confined exciton theory with an effective radius
gives matrix elements $\langle 2|h^{(n_p)}|2\rangle$ one order
of magnitude smaller than those reported in Table~\ref{tab2}.
Figure~\ref{fig2}(c) shows the spectrum for the case of incoming resonance
with the $N=3$ level, at $\hbar\omega_l=3.479$, $3.310$ and $3.339$ eV in
models I, II and III, respectively. The results of the theories considered here
present great differences: (a) Model II predicts a cross
section smaller than that for the $N=1$ incoming resonance (Fig.~\ref{fig2}%
(a)), while I and III predict larger cross-sections than those of
Fig.~\ref{fig2}(a). (b) In model III, the peak associated with the $n_p=2$
vibron becomes bigger than the $n_p=1$ peak. In models I and III, because
the energy of incoming resonance $\hbar\omega_l=E_3$ is very close to the
energy of outgoing resonance with the $N=2$ excitonic state $%
\hbar\omega_s\simeq E_2+\hbar\omega_p$ (see Table~\ref{tab2}), a
quasi-double resonant condition takes
place in the scattering process. Hence, the
Raman cross-section values are strongly dependent on the matrix elements $%
\langle 3|h^{(n_p)}|2\rangle$, which are maximum for $n_p=2$, explaining why
the $n_p=2$ peak is greatly enhanced. Moreover, the $n_p=1$ contribution is
suppressed because of interference effects between $N=2$ and $N=3$ excitonic
transitions mediated by the matrix elements $\langle 3|h^{(n_p)} |3\rangle$
and $\langle 3|h^{(n_p)}|2\rangle$. Owing to symmetry,
the matrix element $\langle 3|h^{(n_p)}|2\rangle$ vanishes in the framework
of the free EHP model and the quasi-double resonance effect is not observed
in Figure~\ref{fig2}(c). We have also calculated the spectrum
in the outgoing resonance with $N=3$, using models I and
III. In this case the double-resonance condition is not fulfilled and the
obtained cross-section is similar in both models.
Finally, we have compared the integral Raman intensity for the $n_p=1$
vibron of a 20~\AA{} CdS QD, which is shown in Figure~\ref{fig3} as a
function of the incident photon energy. We have used the same broadening of
$\Gamma=5$~meV for all the excitonic levels. This plot summarizes the
effects, already presented in the previous figures, on the absolute values
of the Raman spectra. The red shift of the resonances due to the attractive
electron-hole interaction is clearly visible. Due to their small optical
oscillator strength, the intensities corresponding to the incoming and
outgoing resonances with the second EHP level in model I are insignificant compared
to those of the first and third levels. Model III predicts stronger resonances
for the $N=1$ exciton than model I, a fact explained by the enhancement of
its oscillator strength. For all models the $N=1$ outgoing resonance is
stronger than the incoming one, but for the $N=2$ state the opposite is
obtained. The above feature is a general result of the Fr\"{o}hlich-like
interaction in a quantum dot. The $N=2$ excitonic state has an oscillator
strength equal to 1.08, a factor about 30 times larger than for the free EHP
(see Table~\ref{tab2}) and this is the cause of the strong $N=2$ incoming
resonance seen in the plot. The outgoing peak for the $N=3$ level is smaller
in model III than that of the free EHP theory. This is explained by the reduction
of the electron-hole decompensation observed in model III (see
Figure~\ref{fig1}(c)).
The intensities calculated according to model
II are two orders of magnitude smaller than those of
the exciton in the finite-barrier models. This calculation is not presented
in Fig.~\ref{fig3}.
\section{Conclusions}
We have studied the influence of excitonic and finite confinement effects
on the first-order Raman cross-sections for longitudinal
optical vibrons in nanospherical semiconductor quantum dots.
We have compared the
predictions of three models for the intermediate electronic states:
(I) uncorrelated electron-hole pairs with finite dot confinement;
(II) excitons completely confined in a spherical box with an effective
radius; (III)
excitons in a finite confinement barrier.
The main conclusion of the present
work is that the Raman spectra and the resonance profile
absolute values for the Fr\"{o}hlich-type-interaction Hamiltonian
in QDs should be predicted by a theory that takes into consideration both
the finite confinement barrier height and electron-hole
correlation effects. Even in the strong quantum confinement regime excitons
and the conduction and valence-band offsets substantially modify the
features of the resonant Raman spectra, particularly in presence of
quasi-double resonances.
\acknowledgements
We would like to thank F. Comas and J. Tutor for a critical reading of the manuscript.
Two of us (E. M.-P. and J. L. P.) would like to thank the
support of the Secretary of Public Education of Yucatan State, Mexico.
It is well-known that many problems in engineering are subject to uncertainty in the input parameters, e.g.\ material data, boundary conditions, and geometry. In the present case, we are interested in the forward-propagation of such uncertainty to the quantity of interest, e.g.\ deformation or stress, which is usually obtained from the solution of a partial differential equation. Three classical categories of methods exist for the solution of such stochastic partial differential equations, namely the Stochastic Galerkin (SG)~\cite{Deb2001}, Stochastic Collocation (SC)~\cite{Babuska2007}, and Monte Carlo (MC)~\cite{Babuska2004} approaches. The selection of an appropriate method depends strongly on the number and independence assumptions of the random variables, as well as on the smoothness of the solution in stochastic state space. Regardless of the employed method, stochastic partial differential equations pose a challenge to being solved efficiently when the underlying deterministic problem is computationally costly. It is the purpose of this work to contribute to this field by proposing a method for reducing the computational complexity for a wide range of moderate-dimensional parametric problems encountered in engineering practice.
In the following, we consider a general class of problems from the field of computational mechanics, where solutions to inequality constrained stochastic partial differential equations are sought. Prominent examples include the computation of complex material behavior with uncertain material parameters as well as the contact of deformable bodies with rough surfaces in a generally nonlinear setting. Due to the complementarity condition, such problems are classified as being non-smooth in the random parameter domain~\cite{Bierig2014}. In both exemplary cases, the rough surface as well as the uncertain material parameter field is represented by a stochastic process which, provided that it has bounded second moment, can be approximated by a truncated Karhunen-Loève expansion. This leads to the assumption of a multilinear combination of independent random variables parametrizing the deterministic problem. Due to independence, it is possible to utilize a double-orthogonal polynomial basis of tensor-product structure for the state space, leading to a decoupling of the random dimensions (see e.g.~\cite{Forster2010, Babuska2007, Babuska2004}). This choice of basis renders the SG and SC approaches equivalent, resulting in a non-intrusive method.
Unlike Monte Carlo integration, the convergence of classical SG and SC methods relies on regularity properties of the quantity of interest in the random parameter domain~\cite{Babuska2007, Beck2014}. Unfortunately, the class of problems considered herein clearly violates the required smoothness assumptions~\cite{Bierig2014}, making MC methods an attractive alternative, despite their slow convergence. However, in many cases, areas of reduced regularity are confined to certain regions of state space, suggesting an error-adaptive approach to sparse grid SC methods~\cite{Ma2009, Ma2010, Gunzburger2014} for problems involving a moderately large number of stochastic dimensions. Such adaptive methods utilize local hierarchical basis functions with tensor product structure, naturally providing for an improvement estimate for each adaptively refined collocation point while overcoming the oscillations incurred by the use of global interpolating polynomials.
This work proposes a scheme for significantly reducing the computational complexity of discretized problems involving the non-smooth forward propagation of uncertainty by combining the adaptive hierarchical sparse grid stochastic collocation method~\cite{Ma2009, Ma2010, Gunzburger2014} with a hierarchy of successively finer spatial discretizations (e.g.\ finite elements) of the underlying deterministic problem. To achieve this, we build strongly upon ideas from the Multilevel Monte Carlo method (MLMC)~\cite{Heinrich1998, Heinrich2000, Heinrich2001, Giles2008}, which represents a well-established technique for the reduction of computational complexity in problems affected by both deterministic and stochastic error contributions. The resulting approach is termed the Multilevel Adaptive Sparse Grid Collocation (MLASGC) method. It is remarked that previous works on the topic of multilevel methods in adaptive sparse grid stochastic collocation methods~\cite{Galindo2015} focus on the acceleration of iterative solvers by using a low-fidelity interpolant of the stochastic state space as an initial guess for newly added collocation points. It is emphasized that, while the underlying ideas are very similar, our approach is more classical in that we consider the term ``multilevel'' to apply to a hierarchy of successively finer spatial discretizations, as suggested in~\cite{Wyk2014}.
\section{The multilevel adaptive sparse grid collocation method}
We begin by summarizing the so-called Adaptive Lagrangian Sparse Grid Collocation method (ALSGC)~\cite{Klimke2005, Ma2009, Ma2010} and subsequently extend it to the use of multilevel deterministic discretizations.
\subsection{Construction of an initial sparse grid}
For the construction of a $d$-dimensional sparse grid on $I_1 \times \ldots \times I_j \times \ldots \times I_d$ by the Smolyak algorithm, introduce the multi-index $\bs{i} = (i_1, \ldots, i_j, \ldots, i_d)$, in which $i_j \in \field{N}^+$. Further, define $\mc{S}_l = \{\bs{i} \;|\; 1-d+|\bs{i}| = l\}$ as the set of multi-indexes belonging to level $l$ of the sparse grid. In order to relate the level $i_j$ of the $j$-th dimension to a certain number of univariate points, we choose a nested rule, e.g.\ the Clenshaw-Curtis rule:
\eq{
n(i_j) =
\begin{cases}
1 & \text{if} \; i_j = 1\\
2^{i_j-1}+1 & \text{if} \; i_j > 1 \,
\end{cases}
}
and denote $\Theta_{i_j}$ as the resulting set of $n(i_j)$ univariate points on the interval $I_j$. For each level $l$, the sparse grid is constructed by the relation
\eq{
\bs{\Theta}_l = \bigcup\limits_{\bs{i} \in \mc{S}_l} (\Theta_{i_1} \times \ldots \times \Theta_{i_j} \times \ldots \times \Theta_{i_d}) \, .
}
The adaptive grid discussed in the following section requires an initial set of sparse grid points up to level $L_\tx{init}$ as a starting point for refinement. Hence, we define the initial grid $\bs{\Theta}_\tx{init}$ as the union of the incremental, i.e.\ level-wise, grids $\bs{\Theta}_l$:
\eq{
\bs{\Theta}_\tx{init} = \bigcup\limits_{l = 1}^{L_\tx{init}} \bs{\Theta}_l \, .
}
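A minimal Python sketch of this construction (our own helper names; the nodes are taken equidistant on $[-1,1]$, anticipating the coordinate rule given below) reads:
\begin{verbatim}
import itertools
import numpy as np

def n_points(i):                       # nested Clenshaw-Curtis growth
    return 1 if i == 1 else 2 ** (i - 1) + 1

def univariate(i):                     # Theta_i, equidistant on [-1, 1]
    n = n_points(i)
    return np.array([0.0]) if n == 1 else np.linspace(-1.0, 1.0, n)

def initial_grid(d, L_init):
    pts = set()
    for l in range(1, L_init + 1):
        for i in itertools.product(range(1, l + d), repeat=d):
            if sum(i) == l + d - 1:    # i in S_l: 1 - d + |i| = l
                pts |= set(itertools.product(*(univariate(j) for j in i)))
    return sorted(pts)

print(len(initial_grid(2, 3)))         # -> 13 points for d = 2, L_init = 3
\end{verbatim}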
\subsection{Tree structure of sparse grid points}
Due to the nested structure of the collocation points, the sparse grid introduced in the previous section admits a $k$-ary tree structure. In particular, every parent collocation point on level $l$ has at most two children per dimension on level $l+1$, i.e.\ $d \le k \le 2d$. In order to make this notion precise, we identify univariate points on level $i_j$ of the $j$-th dimension by integers $m^{i_j}_j \in \{ m \in \field{N}^+ : m \le n(i_j )\}$. Given a univariate parent collocation point on level $i_j$ identified by the index $m^{i_j}_j$, its index on level $i_j + 1$ is given by
\eq{
m^{i_j+1}_j =
\begin{cases}
2 & \text{if} \; i_j = 1\\
2 m^{i_j}_j - 1 & \text{if} \; i_j > 1 \, .
\end{cases}
}
It is emphasized that an existing collocation point identified by the index $m^{i_j}_j$ exists only on level $i_j$, while its index $m^{i_j+1}_j$ on level $i_j+1$ simply corresponds to a non-existent placeholder resulting from the nested structure. This convention allows for the identification of the at most two univariate children on level $i_j+1$ to point $m^{i_j}_j$ as
\eq{
\bs{c}^{m_j}_{i_j} = \begin{cases}
\left\{m^{i_j+1}_j + 1\right\} & \text{if} \; m^{i_j+1}_j = 1\vspace{.05cm} \\
\left\{m^{i_j+1}_j - 1\right\} & \text{if} \; m^{i_j+1}_j = n(i_j+1)\vspace{.05cm}\\
\left\{m^{i_j+1}_j - 1, m^{i_j+1}_j + 1\right\} & \text{if otherwise}\, .
\end{cases}
}
For a univariate point identified by the integer $m^{i_j}_j$ as well as the level $i_j$, it is straightforward to recover its coordinate. For instance, in the case of univariate points distributed evenly on the interval $I_j = [-1,1]$, the coordinate is
\eq{
\hat{\xi}(m^{i_j}_j) =
\begin{cases}
0 & \text{if} \; n(i_j) = 1\\
-1+\frac{2\left(m^{i_j}_j-1\right)}{n(i_j)-1} & \text{if otherwise} \, .\\
\end{cases}
}
Turning back to the multi-dimensional case, collocation points on level $l$ are identified by the multi-index $\bs{m}^{\bs{i}} = (m^{i_1}_1, \ldots, m^{i_j}_j, \ldots, m^{i_d}_d)$, where $\bs{i}\in\mc{S}_l$. We are then able to identify its coordinates
\eq{
\hat{\bs{\xi}}\left(\bs{m}^{\bs{i}}\right) = \left\{\hat{\xi}\left(m^{i_1}_1\right)\right\} \times \cdots \times \left\{\hat{\xi}\left(m^{i_j}_j\right)\right\} \times \cdots \times \left\{\hat{\xi}\left(m^{i_d}_d\right)\right\}
}
as well as the indices of its children
\eq{
\bs{c}
^{\bs{m}}_{\bs{i}} = \left\{ \left\{m^{i_1}_1\right\} \times \ldots \times \left\{m^{i_{j-1}}_{j-1}\right\} \times \bs{c}^{m_j}_{i_j} \times \left\{m^{i_{j+1}}_{j+1}\right\} \times \ldots \times \left\{m^{i_{d}}_d\right\} \;|\; j = 1, \ldots, d \right\} \, .
}
Finally, we denote
\eq{
\mc{C}_l =
\begin{cases}
\left\{ \left\{ \{(1)^d\} \right\} \right\} & \text{if } l = 1\\
\left\{\bs{c}^{\bs{m}}_{\bs{i}} \;|\; \bs{m}^{\bs{i}} \in \mc{C}_{l-1} \right\} & \text{if } l > 1
\end{cases}
}
as the set of all collocation point indices on level $l$. For an example of the hierarchical construction in two dimensions, see tab.~\ref{tab:tab1}.
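The index relations of this subsection translate directly into code; the following sketch (our own, reusing \texttt{n\_points} from above) generates children indices and node coordinates:
\begin{verbatim}
def children_1d(m, i):
    """Univariate children of point m on level i."""
    mp = 2 if i == 1 else 2 * m - 1    # placeholder index on level i + 1
    kids = []
    if mp > 1:
        kids.append(mp - 1)
    if mp < n_points(i + 1):
        kids.append(mp + 1)
    return kids

def coordinate_1d(m, i):
    """Equidistant node coordinate on [-1, 1]."""
    n = n_points(i)
    return 0.0 if n == 1 else -1.0 + 2.0 * (m - 1) / (n - 1)

def children_nd(mm, ii):
    """Children of the point (mm, ii): refine one dimension at a time."""
    kids = []
    for j in range(len(mm)):
        for c in children_1d(mm[j], ii[j]):
            kids.append((mm[:j] + (c,) + mm[j + 1:],
                         ii[:j] + (ii[j] + 1,) + ii[j + 1:]))
    return kids

print(children_nd((1, 1), (1, 1)))     # -> the four level-2 children of the root
\end{verbatim}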
\subsection{Local hierarchical Lagrange interpolation}
As already mentioned, the ALSGC approach uses local hierarchical basis functions to overcome the drawback of oscillations, well-known from interpolations using a global Lagrange polynomial basis. We begin with the construction of univariate basis functions, which will then be extended to the multi-dimensional case.
For identifying the support of the hierarchical basis, every point index $m^{i_j}_j$ of an existing collocation point on level $i_{j} > 1$ possesses at most two neighbor point indices
\eq{
\bs{n}^{m_j}_{i_j} = \begin{cases}
\left\{m^{i_j}_j + 1\right\} & \text{if} \; m^{i_j}_j = 1 \vspace{.05cm}\\
\left\{m^{i_j}_j - 1\right\} & \text{if} \; m^{i_j}_j = n(i_j)\vspace{.05cm}\\
\left\{m^{i_j}_j - 1, m^{i_j}_j + 1\right\} & \text{if otherwise}\, .
\end{cases}
\label{eq:nneigh}
}
per dimension $j$. It is remarked that these neighbor point indices need not correspond to existing points. In case of non-existence, they correspond to placeholders in the nested structure. We emphasize that the linear basis function $a^{m_j}_{i_j}$ with significant points
\eq{
\bs{p}_{i_j}^{m_j} = \left\{ \hat{\xi}(n) \; | \; n \in \bs{n}^{m_j}_{i_j} \right\} \cup \left\{ \hat{\xi}\left(m^{i_j}_j\right) \right\}
}
has bounded support $\mrm{supp}\left(a^{m_j}_{i_j}\right) = \left[\,\inf \bs{p}_{i_j}^{m_j},\, \sup \bs{p}_{i_j}^{m_j}\,\right]$ and fulfills the following conditions:
\eq{
a^{m_j}_{i_j}(\xi) = \begin{cases}
1 & \text{if} \; \xi = \hat{\xi}\left(m^{i_j}_j\right) \\
0 & \text{if} \; \xi \not\in \mrm{supp}\left(a^{m_j}_{i_j}\right) \, .
\end{cases}
}
The choice of linear basis functions is made for reasons of simplicity. Other possible candidates include multi-resolution wavelet basis functions~\cite{Gunzburger2014} and higher-order Lagrange polynomials~\cite{Zhang2013}. In all cases, the basis function for level $i_{j}=1$ is defined as
\eq{
a^1_{1} = 1 \, .
}
Returning to the multidimensional case, the $d$-dimensional basis function can be constructed using tensor products
\eq{
\bs{a}^{\bs{m}}_{\bs{i}} = a^{m_1}_{i_{1}} \otimes \cdots \otimes a^{m_j}_{i_{j}} \otimes \cdots \otimes a^{m_d}_{i_{d}} = \Motimes\limits_{j=1}^{d} a^{m_j}_{i_{j}}
}
of univariate basis functions. The set of all $d$-dimensional basis functions $\bs{a}^{\bs{m}}_{\bs{i}}$ on level $l$ is denoted by $\mc{A}_{l} = \left\{ \bs{a}^{\bs{m}}_{\bs{i}} \; | \; \bs{m}^{\bs{i}} \in \mc{C}_l \right\}$. For an example of the hierarchical basis in two dimensions, see table~\ref{tab:basis}.
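A sketch of the univariate hat functions and their tensor products (our own, building on the helpers above) reads:
\begin{verbatim}
def hat_1d(xi, m, i):
    """Piecewise linear basis a^m_i; a^1_1 is identically one. The
    support is implicitly clipped by the domain [-1, 1]."""
    if n_points(i) == 1:
        return 1.0
    x0 = coordinate_1d(m, i)
    h = 2.0 / (n_points(i) - 1)        # distance to the neighbor points
    return max(0.0, 1.0 - abs(xi - x0) / h)

def basis(x, mm, ii):
    """Tensor-product basis function evaluated at the point x."""
    out = 1.0
    for xj, m, i in zip(x, mm, ii):
        out *= hat_1d(xj, m, i)
    return out
\end{verbatim}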
The use of hierarchical basis functions subdivides the sparse grid interpolation space
\eq{
V_{\Gamma} = \Moplus\limits_{l=1}^L W_{l},
}
into an orthogonal sum of hierarchical difference spaces
\eq{
W_{l} = \text{span}\left\{ \bs{a}^{\bs{m}}_{\bs{i}} : \bs{a}^{\bs{m}}_{\bs{i}} \in \mc{A}_{l} \right\}\, .
}
The interpolant of the hierarchical difference space for a function $f : I^d \rightarrow \field{R}$ is defined as
\begin{align}
\mc{I}_{l}(f)(\bs{\xi}) &= \begin{cases}
\sum\limits_{\bs{m}^{\bs{i}} \in \, \mc{C}_{l}} \bs{a}^{\bs{m}}_{\bs{i}}(\bs{\xi}) \cdot f\left(\hat{\bs{\xi}}\left(\bs{m}^{\bs{i}}\right) \right) & \text{if } l = 1 \\
\sum\limits_{\bs{m}^{\bs{i}} \in \, \mc{C}_{l}} \bs{a}^{\bs{m}}_{\bs{i}}(\bs{\xi}) \cdot \left[ f\left(\hat{\bs{\xi}}\left(\bs{m}^{\bs{i}}\right) \right) - \mc{I}_{l-1}(f)\left(\hat{\bs{\xi}}\left(\bs{m}^{\bs{i}}\right)\right) \right] & \text{if } l > 1
\end{cases} \\
&= \sum\limits_{\bs{m}^{\bs{i}} \in \, \mc{C}_{l}} {\bs{a}^{\bs{m}}_{\bs{i}}}(\bs{\xi}) \cdot w^{\bs{m}}_{\bs{i}} \, ,
\end{align}
where $w^{\bs{m}}_{\bs{i}}$ denotes the hierarchical surplus belonging to the basis function $\bs{a}^{\bs{m}}_{\bs{i}}$ on level $l$. The hierarchical surplus on level $l=1$ is simply the function value at the coordinates of the collocation point located at $\hat{\bs{\xi}} \left(\bs{m}^{\bs{i}} = (m^1_{1},\ldots,m^1_{j},\ldots,m^1_{d})=(1,\ldots,1,\ldots,1)\right)$. For levels $l>1$, the hierarchical surplus is constructed using the difference of the function value at the coordinates of a collocation point $\hat{\bs{\xi}}\left(\bs{m}^{\bs{i}}\right)$ where $\bs{i} \in \mc{S}_l$ and the value of the interpolation $\mc{I}_{l-1}$ of level $l-1$ at the same coordinates. The complete interpolant is then recovered by the sum over all hierarchical difference interpolants:
\eq{
\mc{I}^\mrm{SL}(f)(\bs{\xi}) = \sum\limits_{l=1}^L \mc{I}_l(f)(\bs{\xi}) \, .
}
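In code, the hierarchical surplus and the evaluation of the interpolant take the following form (our sketch; \texttt{nodes} maps a collocation-point key to its surplus):
\begin{verbatim}
def interpolate(nodes, x):
    """Evaluate the hierarchical interpolant; nodes maps a point key
    (mm, ii) to its hierarchical surplus w."""
    return sum(w * basis(x, mm, ii) for (mm, ii), w in nodes.items())

def surplus(f, nodes, mm, ii):
    """Hierarchical surplus: f minus the coarser-level interpolant,
    both evaluated at the coordinates of the new collocation point."""
    x = tuple(coordinate_1d(m, i) for m, i in zip(mm, ii))
    return f(x) - interpolate(nodes, x)
\end{verbatim}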
\subsection{Adaptive refinement}
The hierarchical surplus provides an improvement prediction for the local interpolation when transitioning from level $l$ to $l+1$. It is therefore natural to use the hierarchical surplus as the criterion for the adaptive refinement. Hence, for a given tolerance $\epsilon_\mrm{ref} \in \field{R}^{+}$, children $\bs{c}^{\bs{m}}_{\bs{i}}$ of a collocation point on level $l$ identified by the multi-index $\bs{m}^{\bs{i}}$ are created if $|w^{\bs{m}}_{\bs{i}}| > \epsilon_\mrm{ref}$. This allows for the identification of the set of adaptively refined collocation points on level $l+1$ as
\eq{
\mc{C}_{l+1} =
\left\{\bs{c}^{\bs{m}}_{\bs{i}} \;|\; \bs{m}^{\bs{i}} \in \mc{C}_{l} \wedge |w^{\bs{m}}_{\bs{i}}| > \epsilon_\mrm{ref} \right\} \, .
}
It is remarked that for a proper adaptive refinement the initial level $L_\mrm{init}$ of the sparse grid $\bs{\Theta}_\tx{init}$ must be chosen sufficiently high. Otherwise, the possibility increases that the refinement stops prematurely when a function value at a refined point coincidentally equals the value of the previous-level interpolant.
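Combining the above ingredients, a minimal sketch of the complete adaptive loop (our own; surpluses of a level are computed against the interpolant of the previous levels, and the initial grid up to $L_\mrm{init}$ is always built) reads:
\begin{verbatim}
def alsgc(f, d, L_init, eps_ref, L_max=15):
    """ALSGC sketch: returns the dictionary of surpluses defining the
    single-level interpolant of f."""
    nodes, active = {}, {((1,) * d, (1,) * d)}
    for l in range(1, L_max + 1):
        # Surpluses of level l are measured against levels < l.
        new = {k: surplus(f, nodes, *k) for k in active if k not in nodes}
        nodes.update(new)
        active = {c for k, w in new.items()
                  if l < L_init or abs(w) > eps_ref
                  for c in children_nd(*k)}
        if not active:
            break
    return nodes
\end{verbatim}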
\subsection{Multilevel splitting}
In the most general case, it is assumed that the quantity of interest $u(\cdot, \bs{\xi})$ of the underlying deterministic problem can be approximated to a certain precision level $r$ by an arbitrary scheme, e.g.\ an ordinary differential equation integrated using a difference scheme with time step $\Delta t_r$, a partial differential equation solved via finite elements with characteristic mesh-width $h_r$, or simply a $16\cdot2^r$-bit floating point precision evaluation of a function. In practice, we may be interested in computing the underlying deterministic problem to a precision level $R$, hence it is straightforward to acknowledge that the telescopic sum
\eq{
u_R = u_{1} + \sum\limits_{r=2}^R u_r - u_{r-1}
}
realizes this requirement. It is remarked that computations to precision $r$ are usually significantly more costly than computations to lesser precision $r-1$.
Due to the hierarchy of successively finer discretizations, we assume that the variance of the level correction $\mathbb{V}[u_r - u_{r-1}] \rightarrow 0$ as $r \rightarrow \infty$. In analogy to MLMC methods, it appears sensible that the number of collocation points required for achieving a given interpolation error tolerance is related to the variance of the interpolated function. Hence, the variance decay of the level correction $u_r - u_{r-1}$ is exploited in order to reduce the overall cost of the computation. In particular, we estimate $u_1$ as well as the subsequent corrections $u_r - u_{r-1}$ independently using the ALSGC method. We then recover the response surface approximating $u_R(\bs{\xi})$ by
\eq{
\mc{I}^\mrm{ML}[u_R](\bs{\xi}) := \mc{I}^\mrm{SL}[u_1](\bs{\xi}) + \sum\limits_{r=2}^R \mc{I}^\mrm{SL}[u_r-u_{r-1}](\bs{\xi}) \, .
}
Subsequent integration of the response surface $\mc{I}^\mrm{ML}[u_R](\bs{\xi})$ for purposes of stochastic moment estimation then follows in a straightforward manner.
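A sketch of the multilevel combination (our own; \texttt{u\_level(r)} is assumed to return a callable evaluating $u_r$ at a given $\bs{\xi}$, and \texttt{eps\_r} the per-level refinement tolerances) reads:
\begin{verbatim}
def mlasgc(u_level, d, R, L_init, eps_r):
    """MLASGC sketch: interpolate u_1 and the corrections u_r - u_{r-1}
    independently with ALSGC, then sum the resulting interpolants."""
    terms = [alsgc(u_level(1), d, L_init, eps_r[0])]
    for r in range(2, R + 1):
        ur, um = u_level(r), u_level(r - 1)
        terms.append(alsgc(lambda x, a=ur, b=um: a(x) - b(x),
                           d, L_init, eps_r[r - 1]))
    return lambda x: sum(interpolate(t, x) for t in terms)
\end{verbatim}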
\section{Numerical results}
\begin{figure}
\begin{center}
\includegraphics{GM-figure0}
\end{center}
\caption{Exact solution $u(t=1, \bs{\xi})$ of ODE ($\bs{\xi} \in [-1,1]^2$)}
\label{fig:exactODE}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics{GM-figure1}
\end{center}
\caption{Error vs.\ cost of the MLASGC, ALSGC, and MLMC methods}
\label{fig:err}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics{GM-figure2}
\end{center}
\caption{Number of collocation points needed per discretization level $r$ of the MLASGC method ($R=15$, $\epsilon_\mrm{ref} = 30^{-1}\cdot2^{-15}$)}
\label{fig:levelpts}
\end{figure}
For a preliminary numerical investigation, we choose a parametric ($\bs{\xi} \in [-1,1]^2$) first-order linear ordinary differential equation
\eq{
\tdiff{u}{t} + \left(|2-(\xi_1-1)^2-(\xi_2-1)^2|+\delta \right) u = 1
}
with initial condition $u(t = 0, \bs{\xi}) = 0$ as well as a regularization parameter $\delta = 10^{-1}$. For reasons of simplicity, we will only be interested in the final value $u(t = 1, \bs{\xi})$. The problem admits an exact solution
\eq{
u(t, \bs{\xi}) = \frac{1-\exp\left(-t\left(|2-(\xi_1-1)^2-(\xi_2-1)^2|+\delta \right)\right)}{|2-(\xi_1-1)^2-(\xi_2-1)^2|+\delta} \, ,
}
which is shown in fig.~\ref{fig:exactODE} and used as a reference for the numerical solution $u_r(t = 1, \bs{\xi})$, obtained using a forward Euler integration scheme with time-step $\Delta t_r = 30^{-1}\cdot 2^{-r}$.
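For reference, a minimal forward-Euler implementation of the model problem, combined with the multilevel sketch above, could look as follows (our code; the chosen $R$, $L_\mrm{init}$, and tolerance split are illustrative only):
\begin{verbatim}
def u_level(r):
    """Forward-Euler approximation of u(t=1, xi), step 30^-1 * 2^-r."""
    dt = 1.0 / (30.0 * 2 ** r)
    steps = 30 * 2 ** r
    def solve(xi):
        lam = abs(2.0 - (xi[0] - 1.0) ** 2 - (xi[1] - 1.0) ** 2) + 0.1
        u = 0.0
        for _ in range(steps):
            u += dt * (1.0 - lam * u)   # du/dt = 1 - lam * u
        return u
    return solve

# Illustrative run with a linear tolerance split over R = 6 levels:
R, eps = 6, 1.0 / (30.0 * 2 ** 6)
eps_r = [2 * r * eps / (R * (R + 1)) for r in range(1, R + 1)]
surface = mlasgc(u_level, d=2, R=R, L_init=4, eps_r=eps_r)
print(surface((0.3, -0.7)))
\end{verbatim}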
We define the total error, including interpolation as well as discretization contributions, in the $L^2$-norm as
\eq{
\left\| u - \mc{I}^\mrm{ML/SL}[u_R] \right\|_{L^2([-1,1]^2)} = \left(\,\int\limits_{[-1,1]^2} \left| u(\bs{\xi}) - \mc{I}^\mrm{ML/SL}[u_R](\bs{\xi}) \right|^2 \mathrm{d} \bs{\xi}\right)^{\frac{1}{2}} \, .
}
Figure~\ref{fig:err} shows this error, computed by Monte Carlo integration using $10^5$ points, achieved by the MLASGC and ALSGC methods over computation time. The data points correspond to discretization levels $R=4,\ldots,15$. The adaptive refinement tolerance $\epsilon_\mrm{ref} = \Delta t_R$ for the single-level ALSGC method is chosen equal to the global truncation error $\epsilon_\mrm{discr} \sim \Delta t_R$ of the forward Euler method, aiming to balance the contributions of discretization and interpolation error. For the MLASGC method we distribute the desired overall refinement tolerance $\epsilon_\mrm{ref} = \Delta t_R$ across the $R$ multilevel interpolants in a linear manner such that
\eq{
\epsilon_\mrm{ref} = \sum\limits_{r=1}^R\epsilon^\mrm{ML}_\mrm{ref,r} = \sum\limits_{r=1}^R \frac{2r\epsilon_\mrm{ref} }{R(R+1)} \, .
}
The linear increase of the refinement tolerance across levels $r$ aims to enforce a tight refinement tolerance on coarse discretizations and a looser refinement tolerance on fine discretizations, where additional collocation points would be costly. This choice performed better than the uniform tolerance distribution $\epsilon^\mrm{ML}_\mrm{ref,r} = \epsilon_\mrm{ref}/R$. The idea behind this rather heuristic attempt at balancing cost is confirmed by the number of points required per level $r$ of the multilevel interpolant for achieving an overall refinement error on the order of $\Delta t_R$ (see fig.~\ref{fig:levelpts}). It is emphasized, however, that even a uniform distribution of the refinement tolerance leads to a significant decay of the required number of collocation points across level corrections, such that the linear distribution should only be considered a slight correction.
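For concreteness, the linear tolerance distribution can be computed and checked as follows (a trivial sketch):
\begin{verbatim}
import math

def level_tolerances(eps_ref, R):
    # Linearly increasing tolerances 2*r*eps_ref/(R*(R+1)) for r = 1..R,
    # which telescope exactly to eps_ref.
    return [2.0 * r * eps_ref / (R * (R + 1)) for r in range(1, R + 1)]

eps_ref = 2.0**-15 / 30.0              # eps_ref = 30^-1 * 2^-15
tols = level_tolerances(eps_ref, R=15)
assert math.isclose(sum(tols), eps_ref)
print(tols[0] / tols[-1])              # 1/15: coarsest level refined most tightly
\end{verbatim}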
\section{Conclusions}
The preliminary results for the low-dimensional, non-smooth parametric ODE problem considered herein are promising: the proposed MLASGC method exhibits an error/cost-relation of $\varepsilon \sim t^{-0.95}$ and therefore significantly outperforms the single-level ALSGC ($\varepsilon \sim t^{-0.65}$) and MLMC methods ($\varepsilon \lesssim t^{-0.5}$~\cite{Cliffe2011, Bierig2014}). Due to a lack of mathematical analysis of the new MLASGC method, no special cost/error-balancing is performed, leaving room for further optimization. It remains to be investigated if the techniques presented in~\cite{Wyk2014, Teckentrup2014} in terms of non-adaptive multilevel collocation methods are similarly applicable to the MLASGC method. It is also emphasized that the new method is not limited to the use of the ALSGC method for the interpolation of the level correction. In fact, a further performance increase would be obtainable by the use of a multi-resolution hierarchical wavelet basis in state space which introduces a true local error estimate due to the fulfillment of the Riesz property (see e.g.~\cite{Gunzburger2014}). Finally, we remark that the non-intrusive nature that the MLASGC method shares with other variants of collocation methods lends itself excellently to parallelization and implementation into existing deterministic frameworks.
\newpage
\section{Appendix}
\subsection{Example: Grid and basis function construction in 2D}
\begin{table}[ht]
\newcolumntype{L}{>{\centering\arraybackslash} m{.01\textwidth} }
\newcolumntype{C}{>{\centering\arraybackslash} m{.6\textwidth} }
\newcolumntype{R}{>{\centering\arraybackslash} m{.25\textwidth} }
\centering
\begin{tabular}{ L | C | R }
$l$ & index-sets $ \mc{S}_l$ and children-sets $\mc{C}_{l}$ & $\bs{m}^{\bs{i}}$ \\
\hline
$1$ & $\begin{array} {lcl} \mc{S}_1 & = & \{\bs{i} \, | \, i_{1}+i_{2}=2 \} \\ & = & \{(1,1)\} \\ \mc{C}_1 & = & \left\{\left\{\left\{(1,1)\right\}\right\}\right\} \end{array}$ & \vspace{.1cm}\includegraphics[scale=.8]{GM-figure3_.pdf}\\
\hline
$2$ & $\begin{array} {lcl} \mc{S}_2 & = & \{\bs{i} \, | \, i_{1}+i_{2}=3 \} \\ & = & \{(2,1),(1,2)\} \\ \mc{C}_{2} & = & \left\{ \bs{c}^{\bs{m} = (1,1)}_{\bs{i} = (1,1)} = \{\{1,3\} \times \{1\}, \{1\} \times \{1,3\} \} \right\} \\ & = & \left\{\left\{ \{(1,1),(3,1)\}^{\bs{i}=(2,1)},\{(1,1),(1,3)\}^{\bs{i}=(1,2)} \right\}\right\} \end{array}$ & \vspace{.05cm}\includegraphics[scale=.8]{GM-figure4_.pdf}\\
\hline
$3$ & $\begin{array} {lcl} \mc{S}_3 & = & \{\bs{i} \, | \, i_{1}+i_{2}=4 \} \\ & = & \{(3,1), (2,2), (1,3)\} \\ \mc{C}_{3} & = & \left\{ \bs{c}^{(1,1)}_{(2,1)}, \bs{c}^{(3,1)}_{(2,1)}, \bs{c}^{(1,1)}_{(1,2)}, \bs{c}^{(1,3)}_{(1,2)} \right\} \\ & = & \big\{ \{ \{2\}\times\{1\}, \{ 1\} \times \{1,3\} \} , \\ && \{ \{4\}\times\{1\}, \{ 3\} \times \{1,3\} \},\\&& \{ \{1,3\}\times\{1\}, \{ 1\} \times \{2\} \},\\&& \{ \{1,3\}\times\{3\}, \{ 1\} \times \{4\} \}\big\}\\ &=& \big\{ \{ \{(2,1)\}^{(3,1)}, \{ (1,1), (1,3) \}^{(2,2)} \} , \\ && \{ \{(4,1)\}^{(3,1)}, \{ (3,1), (3,3)\}^{(2,2)} \},\\&& \{ \{(1,1),(3,1)\}^{(2,2)}, \{ (1,2)\}^{(1,3)} \},\\&& \{ \{(1,3),(3,3)\}^{(2,2)}, \{ (1,4)\}^{(1,3)} \}\big\} \end{array}$ & \vspace{.05cm}\includegraphics[scale=.8]{GM-figure5_.pdf}
\end{tabular}
\vspace{.2cm}
\caption{Example: Construction of the sparse grid level $\bs{\Theta}_l$, index-sets $\mc{S}_l$ and children-sets $\mc{C}_l$ in 2D}
\label{tab:tab1}
\end{table}
\begin{table}[ht]
\newcolumntype{L}{>{\centering\arraybackslash} m{.01\textwidth} }
\newcolumntype{C}{>{\centering\arraybackslash} m{.9\textwidth} }
\centering
\begin{tabular}{ L | C}
$l$ &basis functions $\bs{a}^{\bs{m}}_{\bs{i}}$ \\
\hline
1 & \vspace*{.2cm}\includegraphics[scale=.22]{basis1.pdf} \\
\hline
2 & \vspace*{.2cm}\begin{tabular}{ c c} \includegraphics[scale=.22]{basis21.pdf} & \includegraphics[scale=.22]{basis12.pdf} \end{tabular} \\
\hline
3 & \vspace*{.2cm}\begin{tabular}{ c c c} \includegraphics[scale=.22]{basis31.pdf} & \includegraphics[scale=.22]{basis22.pdf} & \includegraphics[scale=.22]{basis13.pdf} \end{tabular}
\end{tabular}
\vspace{.2cm}
\caption{Example: Construction of the basis functions $\bs{a}^{\bs{m}}_{\bs{i}}$ in 2D}
\label{tab:basis}
\end{table}
\noindent After this letter was accepted for publication, we became aware of
the work of Girvin and MacDonald \cite{girvin}, where they
showed that the gauge-transformed Laughlin
wave-function [eq.~(7) of their paper] exhibits off-diagonal long-range order.
It then immediately follows that the Calogero-Sutherland ground state
wave-function in two dimensions as given by \eq{grst} [which is identical to
eq. (7) of \cite{girvin}] also exhibits off-diagonal long-range order.
\citet{HE1327_Nature} recently reported the discovery of the dwarf or subgiant
HE~1327$-$2326, the most iron-poor star known to date (with
$\mbox{[Fe/H]}_{\rm{NLTE}}=-5.4$). Abundances were derived for nine elements
and upper limits for a further eight \citep{Aokihe1327}, including oxygen
($\mbox{[O/Fe]}<4.0$). No detection of molecular OH lines in the UV was
possible from their Subaru/HDS spectrum. A new attempt is presented here to
measure the oxygen abundance of HE~1327$-$2326 from different O indicators:
UV-OH lines, the [O\,I] at 6300\,{\AA} and the O\,I triplet at 7774\,{\AA},
using a higher quality VLT/UVES spectrum. A measurement of the O abundance of
HE~1327$-$2326 is desired for the investigation into the origin of the star.
As the third most common element in the Universe, O is generally an ideal
tracer of its chemical history. Hence, it has been studied in extensive detail
in metal-poor stars to unravel the earliest evolutionary phases of the Galaxy,
which is crucial for understanding the formation mechanism of the first
generations of stars. It is not clear how the first low-mass stars could form
in the early Universe. One possibility involves the C and O yields from
Population III supernovae, which act as sufficient cooling sources in
star-forming gas clouds producing the first low-mass stars
(e.g. \citealt{UmedaNomotoNature, brommnature}). However, the picture which
emerged from the observational studies is not free from inconsistencies,
making the scientific interpretation difficult. A discrepancy between the O
abundances derived from different O indicators poses a serious and as yet
unresolved problem (for a recent discussion see \citealt{asplund_araa}).
Our new observations of HE~1327$-$2326 are presented in \S 2 and the O abundance
measurements are described in \S 3. We discuss the implications in \S 4.
\section{OBSERVATIONS AND DATA REDUCTION}
Between March and May 2005, HE~1327$-$2326 was observed with the
Ultraviolet-Visual Echelle Spectrograph \citep{Dekkeretal:2000} at the Very
Large Telescope, Chile. For the service mode observations we made use of the
dichroic mode and three wavelength settings. The total exposure time of 18\,h
was divided into 18 one hour exposures with the BLUE 346\,nm setting covering
3050--3870\,{\AA}, 15 simultaneous one hour exposures with the RED 580\,nm
setting covering 4780--6805\,{\AA}, and 3 simultaneous one hour exposures with
the RED 760\,nm setting covering 5720--9470\,{\AA}. A $1''$ slit width was
used in the blue arm of the spectrograph, yielding a resolving power of
$R\sim46,000$ while a $0.6''$ slit width was used in the red arm, yielding
$R\sim70,000$. All data have been reduced with the \texttt{REDUCE} package
\citep{reduce}. Overlapping echelle orders were subsequently merged, and the
resulting spectra rebinned to an appropriate sampling. A signal-to-noise ratio
of $S/N\sim40$ was estimated at $\sim3110$\,{\AA}. To ensure the detection of
weak features, unmerged individual orders were used for their verification.
\section{THE OXYGEN ABUNDANCE}
\subsection{The Model Atmosphere}
We performed a 1D LTE abundance analysis of the newly acquired VLT/UVES
spectrum. The latest version of the MARCS code\footnote{Numerous models for
different stellar parameters and compositions are readily available at
http://marcs.astro.uu.se} (Gustafsson et al., in preparation) was used to
compute a model tailored to the chemical abundances observed in HE~1327$-$2326
based on the subgiant abundances reported in \citet{HE1327_Nature}.
Furthermore, we adopted their effective temperature of \mbox{T$_{\rm
eff}=6180$}\,K as well as the two solutions for the surface gravity, $\log
g=3.7$ (subgiant) and $\log g=4.5$ (dwarf). For more details and the
derivation of the stellar parameters of HE~1327$-$2326, we refer the reader to
\citet{Aokihe1327}. For the OH analysis we used the \citet{gillis_ohlinelist}
line list. For the O\,I triplet lines we used data taken from the NIST
database. The $\log gf$ value for the resonance [O\,I] line was taken from
\citet{storey}.
To confirm the validity of our abundance determination technique we took a
MARCS model with the stellar parameters matching those of the well-studied
subgiant HD140283 (we adopt \mbox{T$_{\rm eff}=5850$}\,K, $\log g=3.6$,
$\mbox{[Fe/H]}=-2.46$; see \citealt{boesgaard99} for details). We computed
synthetic spectra with different O abundances to reproduce the Boesgaard et
al. spectrum of HD140283 in the UV-OH line region around $3135$\,{\AA}. From
the comparison of the synthetic with the observed spectrum we derive an
abundance of $\mbox{[O/Fe]}=1.1\pm0.2$ which is in good agreement with the
results derived by \citet{boesgaard99} ($\mbox{[O/Fe]}=1.05$) using the solar
abundance adopted by them. From the O triplet lines we derived
$\mbox{[O/Fe]}=0.5$ which also agrees well with the \citet{boesgaard99} value
($\mbox{[O/Fe]}=0.6$).
\subsection{The 1D LTE Analysis}
In our new spectrum we detect 12 Fe\,I lines in the UV spectral range. Seven
of those have already been detected in the Subaru spectrum of our previous
analysis (see \citealt{Aokihe1327} for more details). Our LTE metallicity
derived from these lines is $\mbox{[Fe/H]}_{\rm{LTE}}=-5.7\pm0.2$ for both the
subgiant and dwarf solution. Employing the same NLTE correction as in
\citet{HE1327_Nature} results in an iron abundance for HE~1327$-$2326 in good
agreement with the previously reported metallicity. Unfortunately it is not
possible to detect any Fe\,II lines. Our upper limits are
$\mbox{[Fe\,II/H]}_{\rm{LTE}}<-5.4$ (subgiant) and
$\mbox{[Fe\,II/H]}_{\rm{LTE}}<-5.2$ (dwarf) which are significantly tighter
than the previous values of $\mbox{[Fe\,II/H]}_{\rm{LTE}}<-4.4$ and
$\mbox{[Fe\,II/H]}_{\rm{LTE}}<-4.1$, respectively \citep{Aokihe1327}.
Molecular lines from the OH \mbox{$A\;^{2}\Sigma-X\;^{2}\Pi$} system in the
ultraviolet range of our UVES spectrum are clearly detected. Examples are
presented in Figure \ref{OH_plot}. A spectrum synthesis analysis of eight of
the most prominent OH features between 3110 and 3142\,{\AA} was performed. We
note that many CH lines are present over the entire UV spectral range. In this
region, however, the strongest OH lines are visible and some are not strongly
contaminated by CH features. Lines that are as free as possible from such
contamination were used. The total number of OH features, however, is small
and additionally they are in some instances very weak, so that this attempt
was hampered in a few cases. To account for these contaminations we
re-determined the C abundance from UV CH $C-X$ lines around $3180$\,{\AA} (see
Table~\ref{results}). The newly derived values are consistent with our
previous 1D LTE measurements presented in \citet{HE1327_Nature} and
\citet{Aokihe1327}: $\mbox{[C/Fe]}=4.1$ (subgiant) and $\mbox{[C/Fe]}=3.9$
(dwarf) based on CH features from the $A-X$ and $B-X$ systems. A set of
synthetic spectra was computed for a variety of O abundances with a C
abundance set to our new value. Abundances were measured from several OH lines
comparing the observed normalized spectrum with a set of synthetic spectra and
minimizing the $\chi^{2}$ between the synthetic and observed spectrum. We
adopt the average abundance as derived from the individual fits to the OH
features as our final 1D LTE O abundance. The dispersion of the individual
abundance measurements is 0.3\,dex, and the standard error of the mean is
0.1\,dex. However, systematic uncertainties could arise from continuum
placement and, more significantly, from any error in the effective temperature
and the gravity. Taking these uncertainties into account we estimate a total
error of $0.2\,$dex. Thus, $\mbox{[O/Fe]}_{\rm{OH}}=3.7\pm0.2$ (subgiant) and
$\mbox{[O/Fe]}_{\rm{OH}}=3.4\pm0.2$ (dwarf) were adopted as the final averaged
1D LTE abundances. These values are consistent with the upper limit for O of
$\mbox{[O/Fe]}_{\rm{OH}}<4.0$ reported in \citet{HE1327_Nature}. For the solar
O abundance we adopt $\log \epsilon(\rm{O})_{\odot}=8.66$ \citep{solar_abund}.
Considering the very high overabundance derived from the OH features one might
anticipate a detection of the O\,I triplet lines at 7772, 7774 and
7775\,{\AA}. However, despite very high quality data ($S/N\sim260$ at
$\sim7770$\,{\AA}) none of the three lines was detected. Using the formula
$\sigma=w\times \sqrt{n_{pix}}/(S/N)$ (where $w$ is the pixel width, $n_{pix}$
the number of pixels across the line and $S/N$ per pixel; \citealt{bohlin}) we
calculate a $3\sigma$ upper limit ($W_{\lambda}<2$\,m{\AA}) for the strongest
triplet line (7772\,{\AA}). The abundance obtained from this equivalent width
estimate is significantly lower than the abundance derived from OH (employing
the 1D LTE analysis). Unfortunately, the upper limit (derived for a line
strength limit $W_{\lambda} < 1.8$\,m{\AA}) for the forbidden [O\,I] line at
6300\,{\AA} is quite large and therefore has little meaning.
The abundances and upper limits can be found in Table~\ref{results}.
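For illustration, the $3\sigma$ estimate can be evaluated numerically. The $S/N$ of $260$ is taken from the text above, whereas the pixel width and the number of pixels across the line are assumed values chosen merely to land near the quoted limit; they should not be read as the actual instrument parameters.
\begin{verbatim}
import math

def ew_sigma(pixel_width_mA, n_pix, snr):
    # Bohlin et al. estimate: sigma = w * sqrt(n_pix) / (S/N).
    return pixel_width_mA * math.sqrt(n_pix) / snr

# S/N = 260 is measured; the pixel width (35 mA) and the number of pixels
# across the line (25) are assumed for illustration only.
print(3.0 * ew_sigma(35.0, 25, 260.0))   # ~2 mA: the quoted 3-sigma limit
\end{verbatim}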
\subsection{Application of 3D and NLTE Corrections}
From theoretical work which investigates beyond the ``classical'' 1D analysis
it is known that the 1D LTE abundance derived from the OH features in
metal-poor stars is significantly higher than the 3D counterpart
\citep{ohnlte_asplund}. This is particularly the case for stars close to the
turnoff. The O triplet lines in turn mostly suffer from 1D NLTE effects
(e.g. \citealt{kiselman93}). For forbidden lines (e.g. [O\,I]), the LTE
assumption is valid and 3D LTE effects are expected to be relatively minor
\citep{nissen02}. The observational discrepancy of O abundances derived from
different indicators are eased with the application of such 3D and/or NLTE
corrections \citep{ohnlte_asplund}. Unfortunately, appropriate corrections are
not always available. For the O\,I triplet lines we computed new NLTE
corrections for HE~1327$-$2326 (without consideration of inelastic H
collisions). The 1D NLTE abundance corrections are $-0.3$\,dex for both the
subgiant and dwarf case. We note that 3D NLTE calculations for O\,I in
metal-poor stars with parameters appropriate for HE~1327$-$2326 are not yet available but
preliminary investigations for other halo stars reveal similar 3D NLTE effects
as in 1D \citep{asplund_araa}. See Table~\ref{results} for the corrected
abundances.
\citet{ohnlte_asplund} investigated the difference of 3D LTE model compared to
standard 1D LTE analysis using UV-OH lines. In their Table~2, they provide
corrections for a small set of stars with different stellar parameters, one
set of which is close to that of HE~1327$-$2326. For the two lines
investigated the correction is $-1.0$\,dex for the abundance derived from the
3139.17\,{\AA} OH line and $-0.9$\,dex from the 3167.17\,{\AA} OH line. Since
both lines are too weak to be detectable in HE~1327$-$2326, we simply adopt
the average of the two corrections and apply a $-$1.0\,dex 3D correction to
our 1D LTE O abundance from OH. The abundances from UV-OH lines are thus
lowered significantly and agree well with the upper limits derived from O
triplet lines. NLTE effects on the abundance derived from the OH lines have
not been studied in detail. We wish to caution here that since HE~1327$-$2326
has a much lower iron abundance than the 3D model from which we inferred the
adopted 3D LTE corrections, it is possible that the real 3D correction might
even be larger than the $-1.0$\,dex applied here.
However, no calculations tailored for the specific abundances of
HE~1327$-$2326 are currently available to further test this assumption.
The formation of molecular CH and NH features is likely to be very similar to
that of OH. \citet{asplund_araa} computed corrections for turnoff stars with
$\mbox{[Fe/H]}=-3.0$ of $-0.6$\,dex for the C abundance and $-0.9$\,dex for
the N abundance. For completeness we thus apply the 3D corrections to the 1D
abundances derived from CH and NH. See Table~\ref{results} for the 3D
corrected abundances. However, any effects would tend to cancel out if the
ratio of any of those elements is to be used. Similarly large 3D corrections
have recently been computed \citep{colletIAU} for HE~0107$-$5240
\citep{HE0107_Nature}.
\section{DISCUSSION}
\subsection{Implications of a High Oxygen Abundance}
In order to learn about the earliest stages of star formation in the Universe
it is very important to identify the origin of the elements observed in
HE~1327$-$2326. Oxygen is a key element in this quest because it provides
strong constraints on the different origin scenarios previously invoked for
the star. Of particular importance is whether HE~1327$-$2326 is an early
Population II or a Population III star. Recently, \citet{iwamoto_science} made
an attempt to explain the abundance pattern of HE~1327$-$2326. They invoke a
pre-enrichment scenario in which a faint $25\,M_{\odot}$ Population~III
supernova undergoes a mixing and fallback process producing ejecta containing
little iron and large amounts of CNO. Based on the 1D LTE abundances of
HE~1327$-$2326 and constrained by the 1D LTE upper limit of oxygen
\citep{HE1327_Nature} they compute an O abundance of $\mbox{[O/Fe]}\sim4.0$
which is close to our 1D~LTE abundance derived from OH lines. However, our
adopted 3D abundance is significantly lower. It remains to be seen if their
model could also reproduce our new CNO values since it might be difficult to
simultaneously fit a lower O together with e.g. the high Mg abundance
(potential 3D corrections for Mg are expected to be less severe than for
OH).
\citet{meynet05} predict a similarly high oxygen abundance
($\mbox{[O/Fe]}=3.5$) based on the combined stellar wind and supernova
ejecta of their rotating $\mbox{[Fe/H]}=-6.6$ stellar models. This is in
qualitative agreement with the observed excesses of O in HE~1327$-$2326 and
other metal-poor stars.
Following \citet{suda}, a Population~III scenario might explain the origin of
HE~1327$-$2326 in terms of a binary system. It would then have accreted its
heavier elements from the interstellar medium and the lighter elements from an
erstwhile AGB companion. However, the absence of
significant radial velocity variations (see Figure \ref{radvel}) over a period
of just over one year does not support this idea. Within the overall error
there is no change to report in the radial velocity so far. The slight offset
between the Subaru and UVES data points in Figure \ref{radvel} can be
attributed to uncertainties in the wavelength calibrations. Further work is
required to ascertain whether the O abundance of HE~1327$-$2326 can be explained in this
manner.
Despite the uncertainties of the corrections to the 1D LTE analysis, it is
clear that HE~1327$-$2326 belongs to the group of stars displaying very large
CNO abundances. It appears that the majority of these objects have very low
metallicities (i.e. $\mbox{[Fe/H]}<-3.0$) and that HE~1327$-$2326 is the most
extreme example of the group. However, HE~1327$-$2326 has a similar overall
CNO abundance pattern compared to the only other known star having
$\mbox{[Fe/H]}<-5.0$, HE~0107$-$5240 \citep{HE0107_Nature, O_he0107}. The
unusually high excesses of O of these objects underline that there is no
well-defined trend among the stellar O abundances at the lowest
metallicities. This suggests that there might not be a simple explanation for
the origin of O in the very earliest phases of the Galaxy.
\subsection{Concluding Remarks}
In summary, we adopt the final O abundance to be $\mbox{[O/Fe]}=2.8\pm0.2$
(subgiant) or $\mbox{[O/Fe]}=2.5\pm0.2$ (dwarf). These values are consistent
with the upper limits derived from the O\,I triplet at $\sim 7775$\,{\AA} and
the [O\,I] line at 6300\,{\AA}. This would not be the case if the 1D LTE
abundances derived from OH had been adopted. We note here that atomic
diffusion might have modified the abundances of HE~1327$-$2326. According to
theoretical calculations of \citet{richard2002}, the O/Fe ratio might
originally have been higher. However, observational confirmation of
their calculations is still pending. In any case HE~1327$-$2326 provides
strong observational evidence that 3D LTE effects for the O abundances derived
from OH lines using 1D LTE model atmospheres have to be taken into account,
especially for hotter metal-poor stars. Where already available, such
corrections should generally be applied when deriving O abundances for
metal-poor stars. A systematic investigation of newly corrected O abundances
with respect to metallicity is clearly desirable.
The newly derived O abundance provides additional constraints on the
Population II models proposed for HE~1327$-$2326 and other metal-poor
stars. Whether or not the new abundance can be reproduced by those models
remains to be seen. Finally we wish to mention that the
\citet{iwamoto_science} model does not include neutron-capture elements. Thus
it is not clear whether the high Sr abundance in HE~1327$-$2326 could be
accounted for with their model. However, recent computations by
\citet{froehlich} indicate that a Sr excess could be in agreement with the
faint SN scenario of Iwamoto et al. We note too that the absence of Li remains
unexplained. The Population III binary scenario might account for the low Li
abundance \citep{HE1327_Nature, Aokihe1327}, but radial velocity variations
have not yet been detected. Hence, a longer time span is needed to monitor the
star for such variations in order to draw a final conclusion. In the absence
of such data we favor the Population II interpretation of HE~1327$-$2326.
\acknowledgments We thank K. Eriksson for computing a tailored MARCS model for
us and A. Korn for helpful comments. We express our gratitude to the ESO staff
on Paranal for carrying out the observations with
VLT-UT2. A.F. thanks N. Piskunov for help with the data reduction and
acknowledges generous hospitality by the Uppsala Astronomical Observatory
where the reduction was carried out. A.F., J.E.N. and M.A. acknowledge support
from the Australian Research Council under grant DP0342613 and N.C. from
Deutsche Forschungsgemeinschaft under grants Ch~214/3 and Re~353/44. This
research has made use of the NIST atomic database, operated by the National
Institute of Standards and Technology.
{\it Facility:} \facility{VLT:Kueyen(UVES)}.
\newpage
\newpage
\section{Introduction}
The theory of Byzantine fault tolerance was conceived by Leslie Lamport \cite{lamport} for describing the problems that distributed systems face in the context of adversarial faults. Byzantine faulty components in a distributed system might behave in an arbitrary way in order to disrupt the operations of the system. Byzantine fault tolerance has long been considered to be the highest fault tolerance possible, because these faults can exhibit any behaviour whatsoever. Satoshi Nakamoto relied on these concepts in order to describe the fault tolerance of Bitcoin, using the concepts of ``honest'' and ``dishonest'' nodes \cite{bitcoin}. However, the notion that some nodes are honest and some nodes are Byzantine faulty is divorced from the economic realities of Bitcoin. In an economic model we would prefer to assume that every actor is able to modify the code of their component or somehow induce it to commit faulty behaviour, but we would like to imagine that they would do it as a strategic choice. On the other hand, it may be very difficult to provide guarantees in a context where all components are strategically faulty, and it may be more realistic that only some nodes are strategic; therefore, mixed models have been proposed \cite{bar}.
In this paper we introduce the theory of {\em VLSMs -- validating labelled state transition and message production systems} -- a theoretical tool for specifying and formally analysing faulty distributed systems. The central fault we investigate in this paper is that of {\em equivocation}. In the Byzantine fault tolerant consensus protocol literature, an ``equivocation'' refers to inconsistent messages sent by a Byzantine node in order to fool honest, protocol-following nodes into making inconsistent decisions \cite{clement,madsen,jaffe}. We introduce a more general theory of equivocation, including {\em state-equivocation}, where a component splits off a parallel copy of itself, and {\em message-equivocation}, where components receive messages that have not been sent in the current trace of the system. In consensus protocols, it is common for components to ``validate'' the received messages in order to be sure that malformed messages are not received. We formalise this idea into a general notion of {\em validators} for a system. We are then able to show that the effect that Byzantine nodes can have on honest nodes is no different than the effect equivocating validators can have on non-equivocating validators, without making any synchronization assumptions.
VLSMs were derived in the course of research on the CBC Casper consensus protocols \cite{vlad-2019}, as a tool for specifying both the protocols and the properties they should satisfy. In particular, VLSMs were developed to give a theory that can be used to express and prove the soundness of equivocation detectors, and to express and prove liveness properties. In the process, we conducted a thorough investigation into full node validators and equivocation that is further presented in this paper. The end result is a formal framework for describing faults in distributed systems that is able to account for all the influence of Byzantine nodes on honest nodes with only the influence of equivocation faulty validators. Replacing Byzantine nodes with equivocating validators forms the foundation for an alternative to Byzantine fault tolerance analysis. This work opens the way for protocol designers to reason precisely about different types of faults and to budget for them separately, and thereby to create better consensus protocols than they could when they were budgeting only for Byzantine faults.
\subsubsection*{Coq Formalisation}
We formalised and checked our theory of VLSMs using the Coq proof assistant. The formalisation\footnote{\url{https://github.com/runtimeverification/vlsm/releases/tag/v1.2}}$^,$\footnote{\url{https://papillon.unbounded.network/projects/github/runtimeverification/vlsm/master}} is compatible with Coq 8.15~\cite{Coq815} and uses Coq-std++ 1.7.0\footnote{\url{https://gitlab.mpi-sws.org/iris/stdpp/}}, Equations 1.3\footnote{\url{https://github.com/mattam82/Coq-Equations}}, and Itauto 8.15.0\footnote{\url{https://gitlab.inria.fr/fbesson/itauto}}. We express VLSM concepts primarily using Coq's typeclasses~\cite{CoqTypeClasses}. The Coq code consists of two parts:
\begin{itemize}
\item A collection of general utility definitions and results, e.g., on polymorphic lists, that extend Coq's standard library and the Coq-std++ library; this part is around 3 thousand lines of code (kLOC) of specifications and 4 kLOC of proofs.
\item VLSM definitions and results, which are around 15 kLOC of specifications and 18 kLOC of proofs.
\end{itemize}
The VLSM part uses axioms from Coq's standard library known to be consistent with Coq's logic, including on functional extensionality and classical logic.
Throughout the paper, we will use the symbol (\coqref{toc}{}) to reference the corresponding formalisation of a definition or result in Coq.
\section{VLSMs: A Simple Theory of Faulty Distributed Systems}\label{vlsm}
In our abstract framework, a validating labelled state transition and message production system will play the role of a node or a component in a distributed system. The reader may note a familial resemblance between VLSMs and labelled transition systems or I/O automata used in distributed systems~\cite{io-automata}.
Throughout this paper we will use the symbol $\nomessage$ to stand for {\em no message}. For any set $M$ of messages, we will refer to the set $M \cup \{\nomessage\}$ as {\em the option set of messages} and denote it by $\opt{M}$. We call a message {\em proper} if it is not $\nomessage$.
\begin{definition}[\coqref{VLSM.Core.VLSM}{VLSM}]
A \textbf{validating labelled state transition and message production system} (\textbf{VLSM}, for short) is a structure of the form
$$\mathcal{V} = (L,S,S_0,M, M_0, \tau, \beta),$$
where
\begin{list}
{$\cdot$}{}
\vspace{-.2cm}
\item $L$ is a set of \textbf{labels},
\vspace{-.1cm}
\item $S$ is a non-empty set of \textbf{states},
\vspace{-.1cm}
\item $S_0 \subseteq S$ is a non-empty set of \textbf{initial states},
\vspace{-.1cm}
\item $M$ is a set of \textbf{messages},
\vspace{-.1cm}
\item $M_0 \subseteq M$ is a set of \textbf{initial messages},
\vspace{-.1cm}
\item $\tau: L \times S \times \opt{M} \rightarrow S \times \opt{M}$ is a labelled state transition and message production function (a \textbf{transition function}, for short),
\vspace{-.1cm}
\item $\beta \subseteq L \times S \times \opt{M}$ is a validity constraint on the inputs of the transition function.
\end{list}
\end{definition}
The transition function in a VLSM is defined as a total function; however, it indirectly becomes a partial function since the validity constraint can filter out some of its inputs. For a fixed label, the behaviour of a VLSM is deterministic; however, there can be multiple parallel transitions between two states, each with its own label.
When clear from the context, we will denote a VLSM $\mathcal{V} = (L,S,S_0,M, M_0, \tau, \beta)$ simply by $\mathcal{V}$. Similarly, we will denote an indexed set of VLSMs $\{\mathcal{V}_i = (L_i,S_i, S_{i,0}, M, M_{i,0}, \tau_i, \beta_i)\}_{i=1}^n$ simply by $\{\mathcal{V}_i \}_{i=1}^n$. Sometimes, we will refer to a VLSM also as a {\em (VLSM) component}.
In the sequel, let $\mathcal{V}$ be a VLSM. We will denote the projections on states and optional messages by $\tau^s$ and $\tau^m$, respectively. Formally, if $\tau(l,s,m) = (s',m')$, then $\tau^s(l,s,m) = s'$ and $\tau^m(l,s,m) = m'$.
We denote a transition $\tau(l,s,m) = (s',m')$ by
$$\transition{l}{s}{m}{s'}{m'}.$$
A transition $\tau(l,s,m) = (s',m')$ is called \textit{constrained} if $\beta(l,s,m)$ holds. We denote a constrained transition $\tau(l,s,m) = (s',m')$ by
$$\transitionG{l}{s}{m}{s'}{m'}.$$ When clear from the context, we will refer to constrained transitions simply as transitions.
The validity constraint of a VLSM uniquely determines the sets of reachable states and messages, which we will call {\em valid states} and {\em valid messages}, respectively. We consider $\nomessage$ to be valid, independent of the constrained transitions. Formally, these notions are defined as follows.
\begin{definition}[\coqref{VLSM.Core.VLSM}{valid_state_message_prop}]\label{states-messages}
Let $\mathcal{V}$ be a VLSM. The sets $S_\mathcal{V}$ of \textbf{valid states} and $M_\mathcal{V}$ of \textbf{valid messages} associated with $\mathcal{V}$ are defined by:
\begin{align*}
S_{\mathcal{V}} = \bigcup_{n=0}^\infty S^{\mathcal{V}}_{n} \hspace{1cm} \mbox{ and } \hspace{1cm} M_{\mathcal{V}} = \bigcup_{n=0}^\infty M^{\mathcal{V}}_{n},
\end{align*}
where
\begin{align*}
S^{\mathcal{V}}_{0} &= S_0 \mbox{ and } M^{\mathcal{V}}_{0} = M_0 \cup \{\nomessage\}, \\
S^{\mathcal{V}}_{n+1} &= S^{\mathcal{V}}_{n}\ \cup \{\tau^s(l,s,m)\, \ |\ l \in L,\ s \in S^{\mathcal{V}}_{n},\ m \in M^{\mathcal{V}}_{n},\ \beta(l,s,m) \}, \\
M^{\mathcal{V}}_{n+1} &= M^{\mathcal{V}}_{n} \cup \{\tau^m(l,s,m)\ |\ l \in L,\ s \in S^{\mathcal{V}}_{n},\ m \in M^{\mathcal{V}}_{n},\ \beta(l,s,m) \}.
\end{align*}
\end{definition}
A transition is called \textit{valid} if it is a constrained transition which uses only valid states and valid messages. We denote a valid transition $\tau(l,s,m) = (s',m')$ by
$$\transitionV{l}{s}{m}{s'}{m'}.$$
A \textit{constrained trace} is a sequence of constrained transitions which starts in an initial state, while a \textit{valid trace} is a sequence of valid transitions which starts in an initial state. In a valid trace, the valid input messages can be produced on different traces. A state is \textit{constrained (valid)} if there is a constrained (valid) trace leading to it. Similarly, a message is \textit{constrained (valid)} if it is produced on a constrained (valid) trace.
In general, the question whether a state or a message is valid is undecidable. For example, take the set of states to be the configurations of a Turing machine and consider as constrained transitions the updates of the Turing machine's configuration. Since the halting problem is undecidable, we cannot know for an arbitrary Turing machine whether it will ever reach a halting configuration. We can arrange for the VLSM to emit a valid message if and only if the Turing machine halts, which makes the question of whether a message is valid undecidable in general. However, we often make extra assumptions about the shape of states and messages that enable us to decide this problem.
\subsection{Examples}
\subsubsection{A countdown example}
Let us consider a VLSM $\mathcal{D}$ which \textit{counts down from a natural number}. The VLSM $\mathcal{D}$ has only one label $d$, the set of states consists of pairs of integers of the form $\langle n,i\rangle$ with $n,i \in \mathbb{Z}$, the initial states are those states of the form $\langle n,n\rangle$ with $n \geq 0$, the messages are all the integers, and $2$ is the only initial message. For any integers $n, i,$ and $j$, we consider the transition
\begin{align*}
\transition{d}{\langle n,i \rangle}{\ \, j}{\langle n,i-j\rangle}{2j\ },
\end{align*}
while the validity constraint is defined as
\begin{align*}
\beta = \{(d, \langle n,i \rangle, j)\ |\ i \geq j \geq 1\}.
\end{align*}
Let us explore how all the above terminology translates into this example.
\begin{description}
\item[Transitions.] For example, let us consider $\transition{d}{\langle 4,5 \rangle}{\ 10}{\langle 4,-5\rangle}{20}$. This clearly is a transition in the VLSM $\mathcal{D}$; however, it is not a constrained transition as $5 \not\geq 10$.
\item[Constrained transitions.] For example, we can see that $\transitionG{d}{\langle 4,5 \rangle}{\ 1}{\langle 4,4\rangle}{2}$ is a constrained transition. The input $(d,\langle 4,5\rangle,1)$ for this transition satisfies the validity constraint, therefore this is a constrained transition in $\mathcal{D}$. However, it is not a valid transition as the message $1$ cannot be produced by any constrained transition.
\item[Valid transitions.] For example, $\transitionV{d}{\langle 4,2 \rangle}{\ 2}{\langle 4,0\rangle}{4}$ is a valid transition. Clearly, the validity constraint is satisfied for the input $(d,\langle 4,2\rangle,2)$. The message $2$ is valid as all initial messages are valid. Moreover, the state $\langle 4,2 \rangle$ is a valid state as it can be reached by a valid trace starting in the initial state $\langle 4,4 \rangle$ and receiving the valid message $2$.
\item[Constrained traces.] Let us consider the sequence
$\transitionV{d}{\langle 5,5\rangle}{2}{\langle 5,3\rangle}{4}
\transitionVT{d}{2}{\langle 5,1\rangle}{4}
\transitionGT{d}{1}{\langle 5,0\rangle}{2}
$. This is a sequence of constrained transitions starting from the initial state $\langle 5,5\rangle$. While the first two transitions are valid, the last one is not a valid transition as the message $1$ is not valid (since it cannot be produced by any transition). Note that from the state $\langle 5,0\rangle$ there is no possible further constrained transition since $0 \not \geq 1$ as required in the validity constraint. So this is an example of a constrained trace.
\item [Valid trace.] For example, the sequence
$\transitionV{d}{\langle 8,8\rangle}{4}{\langle 8,4\rangle}{8}
\transitionVT{d}{2}{\langle 8,2\rangle}{4}
\transitionVT{d}{2}{\langle 8,0\rangle}{4}
$ is a valid trace. All these transitions are constrained and, moreover, they involve only valid states and messages.
\item[Constrained states.] The constrained states are all states of the form $\langle n, i \rangle$ with $n \geq i \geq 0$. This can be easily seen since constrained states are reached from initial states (in this case, states of the form $\langle n,n\rangle$ with $n \geq 0$) by a sequence of constrained transitions which can only subtract positive numbers from the second component of the state, without ever making this component negative.
\item[Constrained messages.] The constrained messages are the even natural numbers (except $0$). The transition function produces $2j$ as output for the input message $j$. Since constrained transitions accept only input messages $j \geq 1$, the outputs are exactly the positive even numbers.
\item[Valid states.] The valid states are those states of the form $\langle 2n + 1, 2i + 1 \rangle$ or $\langle 2n, 2i \rangle$ with $n \geq i \geq 0$. Valid states are reachable from initial states of the form $\langle n,n \rangle$ with $n \geq 0$ by constrained transitions which use valid messages. In particular, if we start from an initial state and use only the valid message $2$ (which is an initial message), then we can reach all states whose two components have the same parity.
\item[Valid messages.] We can see that the valid messages are the powers of $2$. Valid messages are a subset of constrained messages, so valid messages must be even natural numbers. Since the only initial message is $2$ and valid messages are those produced by constrained transitions using only valid messages, we get that valid messages are the powers of $2$.
\end{description}
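To ground these notions computationally, the following Python sketch (a hypothetical encoding, independent of any formal development) implements $\tau$ and $\beta$ for $\mathcal{D}$ and iterates the fixpoint construction of valid states and messages from Definition \ref{states-messages} on a finite slice of the initial states.
\begin{verbatim}
# Sketch of the countdown VLSM D: states are pairs (n, i), the single
# label is 'd', messages are integers, and 2 is the only initial message.
def tau(label, state, msg):
    n, i = state
    return (n, i - msg), 2 * msg          # move to <n, i-j> and emit 2j

def beta(label, state, msg):
    return msg is not None and state[1] >= msg >= 1   # i >= j >= 1

states = {(n, n) for n in range(17)}      # a finite slice of S_0
msgs = {2}                                # M_0, besides the no-message
for _ in range(4):                        # four rounds of the fixpoint
    for s, m in [(s, m) for s in states for m in msgs]:
        if beta('d', s, m):
            s2, m2 = tau('d', s, m)
            states, msgs = states | {s2}, msgs | {m2}
print(sorted(msgs))                       # [2, 4, 8, 16, 32]: powers of 2
\end{verbatim}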
\vspace{.4cm}
\subsubsection{UMO components}\label{UMO}
Let us describe a VLSM which can record all its history. In other words, it keeps track in its states of all the transitions that it performed. UMO is an acronym for {\em Unvalidating Message Observer}.
An UMO component has two labels, $\mathit{send}$ and $\mathit{receive}$. An UMO component's state is a tuple of the form $\langle o, i \rangle$ consisting of a (finite) list of message observations $o$ and a natural number $i$ which represents the address identifying the component. A {\em message observation} is a tuple of the form $(\mathit{send}, m)$ or $(\mathit{receive}, m)$, where $m$ is a message, and $\mathit{send}$ and $\mathit{receive}$ are labels. {\em Messages} are states. When an UMO component sends a message to another component, it actually sends its current state. Therefore, sometimes in this context it is easier to think of states as messages.
In order to formally define UMO components, let us first define the set of {\em states} as the set $S = \bigcup_{n=0}^\infty S_{n}$,
where\footnote{$[]$ stands for the empty list, while $\, +\hspace{-.1cm}+\, $ stands for the concatenation of lists.}
\vspace{.2cm}
\begin{align*}
S_{0} &= \{\langle [], i \rangle\mid i\in \mathbb{N}\}, \\
S_{n+1} &= S_{n} \cup \bigcup_{\substack{\langle o, i \rangle\, \in\, S_{n} \\ \langle o', i' \rangle\, \in\, S_{n} }} \{\langle o \, +\hspace{-.1cm}+\, [(\mathit{send}, \langle o', i'\rangle)], i \rangle, \langle o \, +\hspace{-.1cm}+\, [(\mathit{receive}, \langle o', i'\rangle)], i \rangle \}.
\end{align*}
\vspace{.2cm}
Let ${\it obs}$ and ${\it id}$ denote the projections of states to their list of messages observations and address (or identifier), respectively. Formally, ${\it obs}(\langle o, i \rangle) = o$ and ${\it id}(\langle o, i \rangle) = i$, for any $\langle o, i\rangle \in S$.
\vspace{.1cm}
\begin{definition}[\coqref{VLSM.Core.ELMO.UMO}{UMOComponentMachine}]\label{UMO-component}
The \textbf{UMO component of address $i \in \mathbb{N}$} is the VLSM $\mathcal{U}_i = (L,S,S^i_0,M, M_0, \tau, \beta_{\mathit{UMO}})$, where
\begin{list}
{$\cdot$}{}
\vspace{-.2cm}
\item $L = \{\mathit{send},\mathit{receive}\}$ is the set of labels,
\vspace{-.1cm}
\item $S$ is the set of states defined as above,
\vspace{-.1cm}
\item $S_0^i = \{\langle [], i \rangle\}$ is the singleton set containing the initial state, which has no message observations and is identified by the address of the component,
\vspace{-.1cm}
\item $M = S$ is the set of messages which coincides with the set of states,
\vspace{-.1cm}
\item $M_0 = \emptyset$ is the empty set of initial messages,
\vspace{-.1cm}
\item the transition function is defined as
\begin{align*}
\tau(\mathit{send},s,m) &= (s,m), \\
\tau(\mathit{send},s,\nomessage) &= (\langle {\it obs}(s) \, +\hspace{-.1cm}+\, [(\mathit{send}, s)], {\it id}(s) \rangle, s), \\
\tau(\mathit{receive},s,\nomessage) &= (s,\nomessage), \\
\tau(\mathit{receive},s, m) &= (\langle {\it obs}(s) \, +\hspace{-.1cm}+\, [(\mathit{receive},m)], {\it id}(s) \rangle, \nomessage),
\end{align*}
\vspace{-.6cm}
\item the validity constraint is defined as
\begin{align*}
\beta_{\mathit{UMO}}(l,s,m) = ( l = \mathit{send}\ \wedge\ m = \nomessage)\ \vee\ ( l = \mathit{receive}\ \wedge\ m \neq \nomessage).
\end{align*}
\end{list}
\end{definition}
In the sequel, let $\mathcal{U}_i$ be the UMO component of address $i \in \mathbb{N}$. For all constrained states $s$ of $\mathcal{U}_i$, we have ${\it id}(s)=i$. As in any VLSM, the transition function of $\mathcal{U}_i$ is a total function. However, due to the validity constraint, the constrained transitions are those of the form $\tau(\mathit{send},s,\nomessage)$ or $\tau(\mathit{receive},s, m)$. Therefore, for simplicity, we will denote a constrained transition in $\mathcal{U}_i$ simply by
$$
s \xrightarrow[m]{l} s',
$$
where $l$ can be either $\mathit{send}$ (in which case the denoted constrained transition is $\tau(\mathit{send},s,\nomessage) = (s',m)$ and $m = s$) or $l$ can be $\mathit{receive}$ (in which case the denoted constrained transition is $\tau(\mathit{receive},s,m) = (s',\nomessage)$).
For any state $s$ of $\mathcal{U}_i$, we define the sets of messages observed as sent or received in that state, respectively:
\begin{align*}
\mathit{sent\_messages}(s) &= \{ m \mid (\mathit{send}, m) \in {\it obs}(s) \}, \\
\mathit{received\_messages}(s) &= \{ m \mid (\mathit{receive}, m) \in {\it obs}(s) \}, \\
\mathit{messages}(s) &= \mathit{sent\_messages}(s) \cup \mathit{received\_messages}(s).
\end{align*}
The following is a constrained trace of the UMO component of address $2$:
\begin{align*}
& \state{[]}{2}
\xrightarrow[s_1 = \state{[]}{2}]{\mathit{send}} \\
& \state{[{(\mathit{send}, s_1)}]}{2}
\xrightarrow[s_2 = \state{[{(\mathit{send}, s_1)}]}{2}]{\mathit{send}} \\
& \state{[(\mathit{send}, s_1), (\mathit{send}, s_2)]}{2}
\xrightarrow[m_1 = \state{[(\mathit{send}, \state{[]}{1}), (\mathit{send}, \state{[]}{2})]}{1} ]{\mathit{receive}} \\
& \state{[(\mathit{send}, s_1), (\mathit{send}, s_2), (\mathit{receive}, m_1)]}{2}.
\end{align*}
The last state in the above constrained trace is a constrained state; however, it is not a valid state, as the message $\langle [(\mathit{send}, \langle [],1 \rangle), (\mathit{send}, \langle [],2 \rangle)],1 \rangle$ cannot be obtained as the output of any valid trace.
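The mechanics of this trace can be replayed with a small Python sketch (states encoded as nested tuples, so that they can double as messages); only the two constrained transition shapes are modelled, since $\beta_{\mathit{UMO}}$ filters out the remaining no-op cases.
\begin{verbatim}
# Minimal sketch of an UMO component; a state is (obs, i) with obs a
# tuple of (label, message) observations, and messages are states.
def umo_transition(label, state, msg=None):
    obs, i = state
    if label == 'send':   # constrained only without an input message:
        return (obs + (('send', state),), i), state   # emit the pre-state
    return (obs + (('receive', msg),), i), None       # record the reception

def sent_messages(state):
    return [m for (l, m) in state[0] if l == 'send']

def received_messages(state):
    return [m for (l, m) in state[0] if l == 'receive']

# Replay of the constrained trace of U_2 shown above:
s0 = ((), 2)
s1, _ = umo_transition('send', s0)                    # emits s0
s2, _ = umo_transition('send', s1)                    # emits s1
m1 = ((('send', ((), 1)), ('send', ((), 2))), 1)
s3, _ = umo_transition('receive', s2, m1)
print(len(sent_messages(s3)), len(received_messages(s3)))   # 2 1
\end{verbatim}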
Of course, by definition, for any constrained state there is a constrained trace leading to it. However, what is typical of UMO components is that any constrained state encodes a unique trace leading to it.
\begin{lemma}[\coqref{VLSM.Core.ELMO.UMO}{valid_state_contains_unique_valid_trace_Ri}]\label{UMO-component-trace}
From every constrained state of an UMO component we can extract a unique constrained trace reaching it.
\end{lemma}
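The extraction behind this lemma can be sketched by replaying the observation list, reusing \texttt{umo\_transition} from the sketch above; for a $\mathit{send}$ observation the recorded message is the pre-transition state, so the replay needs no extra bookkeeping.
\begin{verbatim}
def state_to_trace(state):
    # Replay the observations of a constrained state from <[], id(state)>.
    trace, s = [], ((), state[1])
    for (label, m) in state[0]:
        s2, out = umo_transition(label, s, None if label == 'send' else m)
        assert label == 'receive' or out == m   # send recorded the pre-state
        trace.append((s, label, m, s2))
        s = s2
    assert s == state                           # the replay ends in `state`
    return trace

print([l for (_, l, _, _) in state_to_trace(s3)])  # ['send','send','receive']
\end{verbatim}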
\section{Composition} \label{composition}
A single VLSM component represents the local point of view of a node in a distributed system. We can obtain a global point of view of a system by combining VLSM components. We will combine VLSMs defined over the same set of messages. The most natural way of putting together VLSM components is via a {\em free composition}, in which we consider the product of their states and let the global transition function and the global constraint predicate be defined component-wise, guided by labels belonging to individual components. Formally, we have the following definition.
\begin{definition}[\coqref{VLSM.Core.Composition}{free_composite_vlsm}]
Let $\{\mathcal{V}_i \}_{i=1}^n$ be an indexed set of VLSMs over the same set of messages $M$. The \textbf{free (VLSM) composition} of this family is the VLSM
$${ \sum_{i=1}^n}\, \mathcal{V}_i = (L, S, S_0, M, M_0,\tau, \beta),$$
where
\begin{list}
{$\cdot$}{}
\vspace{-.2cm}
\item $L = {\bigcup_{i=1}^n \{i\} \times L_i}$ is the disjoint union of labels,
\vspace{-.1cm}
\item $S = {\prod_{i=1}^n S_i}$ is the product of states,
\vspace{-.1cm}
\item $S_0 = {\prod_{i=1}^n S_{i,0}}$ is the product of initial states,
\vspace{-.1cm}
\item $M$ is the same set of messages as for each VLSM in the family,
\vspace{-.1cm}
\item $M_0 = { \bigcup_{i=1}^n M_{i,0}}$ is the union of all initial messages,
\vspace{-.1cm}
\item $\tau: L \times S \times \opt{M} \rightarrow S \times \opt{M}$ is defined component-wise, guided by labels,
\begin{align*}
\tau(\langle j, l_j \rangle, \langle s_1, {\scriptscriptstyle \ldots} s_n\rangle, m) = \left(\langle s_1,{\scriptscriptstyle \ldots}, s_{j-1}, \tau^s_j(l_j, s_j, m), s_{j+1}, {\scriptscriptstyle \ldots} s_n\rangle, \tau^m_j(l_j, s_j, m)\right),
\end{align*}
\vspace{-.6cm}
\item $\beta \subseteq L \times S \times \opt{M}$ is defined component-wise, guided by labels,
\begin{align*}
\beta(\langle j,l_j \rangle,\langle s_1,{\scriptscriptstyle \ldots},s_n\rangle,m) = \beta_j(l_j,s_j,m).
\end{align*}
\end{list}
\end{definition}
The free composition allows messages produced by one VLSM to be received by any other VLSM, including itself. However, note that a VLSM may receive a message that was not sent earlier in a trace.
We address the issue of receiving messages produced on alternative traces in the sections dedicated to equivocation.
The validity constraint in the free composition lifts globally the local validity constraints of the components involved in the composition. We can imagine systems whose designers impose further global restrictions on the system, stronger than the ones that can be specified locally. We can model this idea by the notion of a {\em composition constraint}, which can be enforced further in a free composition.
\begin{definition}[\coqref{VLSM.Core.Composition}{composite_vlsm}]\label{constraint-composition}
Let ${\sum_{i=1}^n} \mathcal{V}_i $ be the free VLSM composition of an indexed set of VLSMs $\{\mathcal{V}_i\}_{i=1}^n$.
A \textbf{composition constraint} $\varphi$ is a predicate filtering the inputs for the composed transition function, i.e.,
$$\varphi \subseteq L \times S \times \opt{M}.$$
The \textbf{constrained (VLSM) composition (under $\varphi$)} of $\{\mathcal{V}_i\}_{i=1}^n$ is the VLSM which has the same components as the free composition, except for the validity constraint which is further constrained by $\varphi$:
\begin{align*}
\Bigr({\sum_{i=1}^n} \mathcal{V}_i \Bigr) \Bigr|_\varphi &= (L,S, S_0, M, M_0,\tau, \beta \wedge \varphi).
\end{align*}
\end{definition}
We will refer to the states of a free or constrained composition as {\em composite states}. Obviously, if a composition constraint $\varphi$ does not filter out any inputs, i.e., $\varphi = L \times S \times \opt{M}$, then the two notions of composition coincide. Note that the constrained composition can have fewer valid states and valid messages than the free composition.
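Both notions of composition admit a direct executable reading. The sketch below reuses \texttt{tau} and \texttt{beta} from the countdown sketch and indexes components from $0$; a composite transition is routed to the component selected by the label, and an optional composition constraint \texttt{phi} is conjoined with the component validity constraint.
\begin{verbatim}
from collections import namedtuple

Component = namedtuple('Component', ['tau', 'beta'])

def composite_tau(components, label, sigma, msg):
    j, lj = label
    s2, out = components[j].tau(lj, sigma[j], msg)
    return sigma[:j] + (s2,) + sigma[j + 1:], out

def composite_beta(components, label, sigma, msg, phi=lambda *_: True):
    j, lj = label
    return components[j].beta(lj, sigma[j], msg) and phi(label, sigma, msg)

# Two countdown components composed freely:
comps = (Component(tau, beta), Component(tau, beta))
sigma = ((8, 8), (5, 5))
if composite_beta(comps, (0, 'd'), sigma, 2):
    print(composite_tau(comps, (0, 'd'), sigma, 2))  # (((8, 6), (5, 5)), 4)
\end{verbatim}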
Let $\mathcal{V} =\ \bigr({\sum_{i=1}^n} \mathcal{V}_i\bigr)\bigr|_\varphi$ be a constrained composition. Given a transition in $\mathcal{V}$ of the form
$$\transition{\langle j,l_j\rangle}{\langle s_1,{\scriptscriptstyle \ldots},s_j,{\scriptscriptstyle \ldots},s_n\rangle}{m}{\langle s_1,{\scriptscriptstyle \ldots},s_j',{\scriptscriptstyle \ldots},s_n\rangle}{m'},$$
its {\em (transition) projection} on component $j$ is
$$\transition{l_j}{s_j}{m}{s_j'}{m'}.$$
Given a trace $\mathit{tr}$ in $\mathcal{V}$, its {\em (trace) projection} on component $j$ consists of all the projections of the transitions from $\mathit{tr}$ with labels of the form $\langle j,l_j\rangle$ taken in the same order as in $\mathit{tr}$.
\begin{definition}[\coqref{VLSM.Core.ProjectionTraces}{composite_vlsm_induced_projection}]\label{induced-projection}
Let $\mathcal{V} =\ \bigr({\sum_{i=1}^n} \mathcal{V}_i\bigr)\bigr|_\varphi $ be the composition under $\varphi$ of $\{\mathcal{V}_i\}_{i=1}^n$. For any $j \in \{1,{\scriptscriptstyle \ldots}, n\}$, the \textbf{induced $j$th projection} of $\mathcal{V}$ is the VLSM
$$\mathit{Proj}_j(\mathcal{V}) = (L_{j}, S_{j}, S_{j,0}, M, M_{\mathcal{V}},\tau_{j}, \beta_{j}),$$
where $L_{j}, S_{j}, S_{j,0}, M, \tau_{j}, \beta_{j}$ are the same as in the original component $\mathcal{V}_j$, and the set of initial messages is the set of all valid messages of the composition $\mathcal{V}$.
\end{definition}
The induced $j$th projection and $\mathcal{V}_j$ do not usually coincide. While $\mathcal{V}_j$ accepts as valid messages only those that can be produced by itself, $\mathit{Proj}_j(\mathcal{V})$ accepts as valid messages all valid messages of the composition.
The projections of the valid traces from $\mathcal{V}$ are valid in their corresponding induced projections.
Due to the constrained composition, there can be valid traces in the induced projections which cannot be lifted to valid traces in the constrained composition.
However, the valid traces in the composition (under the same composition constraint) of the original components and of the induced projections coincide.
\subsection{Example: the UMO protocol}
Let $\{\mathcal{U}_i\}_{i=1}^n$ be a set of UMO components indexed by their addresses (i.e., $\mathcal{U}_i$ is the UMO component of address $i$). The \textbf{UMO protocol} $\mathrm{UMO}(\mathcal{U}_i)_{i=1}^n$ is defined as the free VLSM composition of $\{\mathcal{U}_i\}_{i=1}^n$ (\coqref{VLSM.Core.ELMO.UMO}{UMO.UMOProtocol}).
The following result allows us to recover a trace from a constrained composite state of an UMO protocol by combining the traces extracted by Lemma \ref{UMO-component-trace} from each component of the composite state. However, from any constrained composite state we can extract several traces leading to it, depending on how we interleave the traces leading to the components of the composite state.
\begin{lemma}[\coqref{VLSM.Core.ELMO.UMO}{finite_valid_trace_from_to_UMO_state2trace_RUMO}]\label{UMO-protocol-trace}
From every constrained state of an UMO protocol we can extract a constrained trace reaching it.
\end{lemma}
As a particular case, let us consider the UMO components $\mathcal{U}_1$, $\mathcal{U}_2$, and $\mathcal{U}_3$ of address $1$, $2$, and $3$, respectively. The following is a constrained trace of $\mathrm{UMO}(\mathcal{U}_i)_{i=1}^3$:
\begin{align*}
\begin{bmatrix}
\state{[]}{1} \\
\state{[]}{2} \\
\state{[]}{3} \\
\end{bmatrix}
\xrightarrow[m_1 = \state{[]}{1}]{\langle 1, \mathit{send} \rangle}
\begin{bmatrix}
\state{[(\mathit{send},m_1)]}{1} \\
\state{[]}{2} \\
\state{[]}{3} \\
\end{bmatrix}
\xrightarrow[m_2 = \state{[]}{2}]{\langle 2, \mathit{send} \rangle}
\begin{bmatrix}
\state{[(\mathit{send},m_1)]}{1} \\
\state{[(\mathit{send},m_2)]}{2} \\
\state{[]}{3} \\
\end{bmatrix}
\xrightarrow[m_1]{\langle 3, \mathit{receive} \rangle}
\begin{bmatrix}
\state{[(\mathit{send},m_1)]}{1} \\
\state{[(\mathit{send},m_2)]}{2} \\
\state{[(\mathit{receive},m_1)]}{3} \\
\end{bmatrix}
\end{align*}
Now suppose we want to define a refinement of the above UMO protocol in which $\mathcal{U}_1$ can receive messages only from $\mathcal{U}_2$, $\mathcal{U}_2$ can receive messages only from $\mathcal{U}_3$, and $\mathcal{U}_3$ can receive messages only from $\mathcal{U}_1$. We can obtain this (global) restriction using the following composition constraint:
\begin{align*}
\varphi = \{(\langle i,\mathit{receive} \rangle,\sigma, m)\ |\ \sigma \in S \times S \times S,\ m \in S,\ {\it id}(m) = (i\ \mathrm{mod}\ 3) + 1\}.
\end{align*}
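This constraint has a direct reading in the representation used in the sketches above, where a message is a state whose second field is the address of its author; $\mathit{send}$ transitions are left unrestricted here, which is the evident intent of the refinement.
\begin{verbatim}
def phi_ring(label, sigma, msg):
    i, l = label
    if l == 'send':                  # the constraint only filters receptions
        return True
    return msg is not None and msg[1] == (i % 3) + 1   # id(m) = address

m1 = ((), 1)                                  # a message authored by U_1
print(phi_ring((3, 'receive'), None, m1))     # True:  U_3 receives from U_1
print(phi_ring((2, 'receive'), None, m1))     # False: U_2 receives only from U_3
\end{verbatim}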
\section{Validators}
In a distributed system we are interested in dealing with valid messages, as they allow us to filter out junk or malformed information. Sometimes a single component does not have the capabilities to filter out malformed information. For example, the UMO components introduced in Section \ref{UMO} cannot establish if a message is valid. Indeed, let us consider the following constrained transition in the UMO component $\mathcal{U}_1$:
\[
s \xrightarrow[{ \langle [(\mathit{send}, \langle [],2 \rangle), (\mathit{send}, \langle [],3 \rangle)],2 \rangle}]{\mathit{receive}} s'.
\]
The message $\langle [(\mathit{send}, \langle [],2 \rangle), (\mathit{send}, \langle [],3 \rangle)],2 \rangle$ cannot be emitted from any composite state of the UMO protocol $\mathrm{UMO}(\mathcal{U}_i)_{i=1}^3$, as it is ruled out by the definition of the transition function. Hence this is not a valid message in the free composition.
In this section we investigate a class of VLSMs called {\em validators} which are components strong enough to locally guarantee that a message is valid. A validator will enforce a (global) composition constraint locally, in the sense that if a transition would cause its view of the system to violate the global constraint, then it will not be a constrained transition in the component.
Let $\mathcal{V} =\ \bigl({\sum_{i=1}^n} \mathcal{V}_i\bigr)\bigr|_\varphi $ be the composition under $\varphi$ of $\{\mathcal{V}_i\}_{i=1}^n$.
Let $j \in \{1,{\scriptscriptstyle \ldots},n\}$ be the index of a component.
\begin{definition}[\coqref{VLSM.Core.Validator}{transition_validator}]
The component $\mathcal{V}_j$ is a \textbf{validator} for $\mathcal{V}$ if
any constrained transition from a constrained state of $\mathcal{V}_j$ can be lifted to a valid transition in $\mathcal{V}$. Formally, $\mathcal{V}_j$ is a validator for $\mathcal{V}$ if for any constrained state $s_j$ and any constrained transition in $\mathcal{V}_j$
$$s_j\xrightarrow[m\ \rightarrow \ m']{l} s_j',$$
there exists a valid transition in $\mathcal{V}$
$$\sigma\xrightarrow[m\ \rightarrow \ m']{\langle j,l\rangle} \sigma'$$
such that the $j$th components of $\sigma$ and $\sigma'$ are $s_j$ and $s_j'$, respectively.
\end{definition}
When a component is not a validator for a composition, we can construct a more constrained version of the component which will act as a validator for the composition.
\begin{definition}[\coqref{VLSM.Core.Validator}{composite_vlsm_induced_projection_validator}]\label{induced-validator}
The \textbf{induced $j$th validator} of $\mathcal{V}$ is the VLSM
$$\mathit{Validator}_j(\mathcal{V}) = (L_{j}, S_{j}, S_{j,0}, M, M_{j,0},\tau_{j}, (\beta \wedge \varphi)|_j),$$
where $L_{j}, S_{j}, S_{j,0}, M, M_{j,0}, \tau_{j}$ are the same as for the component $\mathcal{V}_j$ and the validity constraint is defined as
\begin{align*}
(\beta \wedge \varphi)|_j(l,s,m) \mbox{ holds} \quad \mbox{iff} & \quad m \mbox{ is valid in } \mathcal{V} \mbox{ and there is a valid state } \sigma=\langle s_1,{\scriptscriptstyle \ldots},s_{j-1},s,s_{j+1},{\scriptscriptstyle \ldots},s_n\rangle \mbox{ in } \mathcal{V} \\
& \hspace{.3cm} \mbox{ such that } (\beta \wedge \varphi)(\langle j,l\rangle,\sigma,m) \mbox{ holds}.
\end{align*}
\end{definition}
The induced $j$th validator for a free composition can be obtained as a particular case of the above definition.
The $(\beta \wedge \varphi)|_j$ predicate ensures that constrained transitions can be lifted to valid transitions in the composition.
The induced $j$th validator and $\mathcal{V}_j$ do not usually coincide. Even though the induced $j$th validator has the same states as $\mathcal{V}_j$, it has a potentially different set of valid states than $\mathcal{V}_j$, in particular due to the interactions with other components and the possible composition constraint.
The following lemma gives a sufficient condition for a component to be a validator for the composition.
\begin{lemma}[\coqref{VLSM.Core.Validator}{projection_validator_messages_transitions}]
If $\beta_j(l,s_j,m)$ implies $(\beta \wedge \varphi)|_j(l,s_j,m)$ for any $l \in L_j$, any constrained state $s_j$ of $\mathcal{V}_j$, and any $m\in M$, then $\mathcal{V}_j$ is a validator for $\mathcal{V}$.
\end{lemma}
As in the case of the induced projections, the valid traces in the composition (under the same composition constraint) of the original components and of the induced validators coincide.
The free composition of induced validators does not necessarily globally satisfy $\varphi$, but it is the best approximation of the constrained composition one can obtain using a free composition.
\subsection{Example: the MO protocol}\label{MO}
As we noticed at the beginning of the section, in UMO components we have no control over the pattern of the received messages and we cannot establish whether a message is valid. We refine the notion of UMO components by strengthening the validity constraint on the messages a component can receive. We call this refinement an MO component, an acronym for {\em Message Observer}.
In order to make the presentation easier, let us fix a natural number $n$. We will define only components with addresses in the set $\{1,{\scriptscriptstyle \ldots},n\}$, and we will allow components to receive only messages whose senders have addresses in this set.
\begin{definition}[\coqref{VLSM.Core.ELMO.MO}{MOComponentMachine}]\label{MO-component}
The \textbf{MO component of address $i \in \mathbb{N}$} is the VLSM $\mathcal{M}_i = (L,S,S^i_0,M, M_0, \tau, \beta_{\mathit{MO}})$ which has the same elements as the UMO component of address $i$ from Definition \ref{UMO-component}, except for the validity constraint
$$\beta_{\mathit{MO}}(l,s,m) = \beta_{\mathit{UMO}}(l,s,m)\ \wedge\ (l = \mathit{receive} \rightarrow \psi_{msg\_valid}(m))$$
where $\psi_{msg\_valid}$ is defined by (\coqref{VLSM.Core.ELMO.MO}{MO_msg_valid})
\begin{align*}
&\psi_{msg\_valid}(\state{[]}{j}) = j \in \{1,{\scriptscriptstyle \ldots},n\}, \\
&\psi_{msg\_valid}(\state{o \, +\hspace{-.1cm}+\, [(l_p,\state{o_p}{j_p})]}{j}) = \psi_{msg\_valid}(\state{o}{j})\ \wedge \\
&\hspace{2cm} (l_p = \mathit{send} \rightarrow (j_p = j\ \wedge\ o_p = o))\ \wedge\ (l_p = \mathit{receive} \rightarrow \psi_{msg\_valid}(\state{o_p}{j_p})).
\end{align*}
\end{definition}
The recursive invocation in the formula $\psi_{msg\_valid}$ (\coqref{VLSM.Core.ELMO.MO}{MO_msg_valid}) terminates because the list of message observations contained in a state is finite and decreases at each recursive invocation.
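For concreteness, the following OCaml sketch (under our own encoding of UMO/MO states and messages; all names are ours) implements $\psi_{msg\_valid}$ by the same structural recursion.
\begin{verbatim}
type label = Send | Receive

(* UMO/MO states: a list of message observations paired with an
   address; messages coincide with states. *)
type state = { obs : (label * state) list; addr : int }

let n = 3  (* assumed number of addresses, for illustration *)

(* psi_msg_valid: a send observation must record the sender's own
   address and exactly the observations made before the send; a
   receive observation must carry a valid message. *)
let rec msg_valid (m : state) : bool =
  match List.rev m.obs with
  | [] -> 1 <= m.addr && m.addr <= n
  | (l_p, m_p) :: rev_o ->
      let o = List.rev rev_o in
      msg_valid { m with obs = o }
      && (l_p <> Send || (m_p.addr = m.addr && m_p.obs = o))
      && (l_p <> Receive || msg_valid m_p)
\end{verbatim}
The repeated \texttt{List.rev} makes the sketch quadratic, which is acceptable for illustration; it mirrors the fact that the recursion peels off the most recent observation.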
The following is not a constrained trace of the MO component of address $2$, as $\psi_{msg\_valid}(m_1)$ does not hold (the final send observation of $m_1$ records address $2$, while $m_1$ itself has address $1$):
\begin{align*}
& \state{[]}{2}
\xrightarrow[s_1 = \state{[]}{2}]{\mathit{send}} \\[.1cm]
& \state{[{(\mathit{send}, s_1)}]}{2}
\xrightarrow[s_2 = \state{[{(\mathit{send}, s_1)}]}{2}]{\mathit{send}} \\[.1cm]
& \state{[(\mathit{send}, s_1), (\mathit{send}, s_2)]}{2}
\xdashrightarrow[m_1 = \state{[(\mathit{send}, \state{[]}{1}), (\mathit{send}, \state{[]}{2})]}{1} ]{\mathit{receive}} \\[.1cm]
& \state{[(\mathit{send}, s_1), (\mathit{send}, s_2), (\mathit{receive}, m_1)]}{2}.
\end{align*}
Let $\{\mathcal{M}_i\}_{i=1}^n$ be a set of MO components indexed by their addresses, i.e., $\mathcal{M}_i$ is the MO component of address $i$. The \textbf{MO protocol} $\mathrm{MO}(\mathcal{M}_i)_{i=1}^n$ is defined as the free VLSM composition of $\{\mathcal{M}_i\}_{i=1}^n$ (\coqref{VLSM.Core.ELMO.MO}{MO.MOProtocol}).
\begin{remark}\label{MO-component-trace}
{\normalfont
Lemmas \ref{UMO-component-trace} and \ref{UMO-protocol-trace} can be proved for MO components and protocols as well (\coqref{VLSM.Core.ELMO.MO}{valid_state_contains_unique_valid_trace_RMi} and \coqref{VLSM.Core.ELMO.MO}{finite_valid_trace_from_to_MO_state2trace_RMO}). }
\end{remark}
As we can see in the next result, MO components are strong enough to ensure that the received messages are valid in an MO protocol.
\begin{theorem}[\coqref{VLSM.Core.ELMO.MO}{MO_component_validating}]\label{MO-validator}
Let $\mathcal{M} = \mathrm{MO}(\mathcal{M}_i)_{i=1}^n$ be an MO protocol. Every component $\mathcal{M}_i$ is a validator for $\mathcal{M}$.
\end{theorem}
\section{Evidence of Equivocation}\label{evidence-equivocation}
In the consensus literature, equivocation refers to claiming different beliefs about the state of the protocol to different parts of the network \cite{clement,madsen,jaffe}. For example, if a network is trying to come to consensus about the value of a bit, an equivocating node may claim to think the bit is 0 to one part of the network and 1 to another part. In CBC Casper, an equivocating node may issue two blocks, neither of which is in the justification of the other \cite{vlad-2019}.
Equivocation refers to claiming different beliefs to different parts of the system, and equivocating components behave as if running multiple copies of the protocol. Pure equivocation is hard to detect, but we can look for {\em evidence of equivocation}. Evidence of equivocation can be either {\em local}, in a single component where we have access only to states of the component, or {\em global}, in a composite system where we have access to composite states.
In order to be able to express these concepts, we need to make further assumptions about the VLSMs involved.
\begin{figure}[t]
\begin{tikzpicture}
\node[] (msgDep) at (-1,0) {\shortstack{message\\dependencies\\assumption}};
\node[] (msgDepInd) at (-1,-1.75) {\shortstack{indirect message \\ dependency relation}};
\node[] (beenDirObs) at (-1,-3.5) {\shortstack{directly observed\\information}};
\node[] (beenSent) at (-4.5,-4.25) {\shortstack{sent\\assumption}};
\node[] (beenReceived) at (-4.5,-2.75) {\shortstack{received\\assumption}};
\node[] (channelAuth) at (3,0) {\shortstack{channel\\authentication\\assumption}};
\node[] (inc) at (3,-1.75) {\shortstack{incomparable\\messages}};
\node[] (beenObs) at (3,-3.5) {\shortstack{indirectly observed\\information}};
\node[] (localEquiv) at (6,-1.75) {\shortstack{local \\equivocation}};
\path[->, line width=1pt, mDarkTeal!50] (msgDep) edge (msgDepInd);
\path[->, line width=1pt, mDarkTeal!50] (msgDepInd) edge (inc);
\path[->, line width=1pt, mDarkTeal!50] (channelAuth) edge (inc);
\path[->, line width=1pt, mDarkTeal!50] (beenSent) edge (beenDirObs);
\path[->, line width=1pt, mDarkTeal!50] (beenReceived) edge (beenDirObs);
\path[->, line width=1pt, mDarkTeal!50] (msgDepInd) edge (beenObs);
\path[->, line width=1pt, mDarkTeal!50] (beenDirObs) edge (beenObs);
\path[->, line width=1pt, mDarkTeal!50] (channelAuth) edge (localEquiv);
\path[->, line width=1pt, mDarkTeal!50] (inc) edge (localEquiv);
\path[->, line width=1pt, mDarkTeal!50] (beenObs) edge (localEquiv);
\end{tikzpicture}
\centering
\caption{Assumptions for the notion of local evidence of equivocation.}
\label{fig:local-equivocation}
\end{figure}
\begin{description}
\item[Channel authentication assumption.] It is an oracle which for each message $m$ gives its sender, denoted by $sender(m)$ (\coqref{VLSM.Core.Equivocation}{channel_authentication_prop}).
\item[Message dependencies assumption.] It is an oracle which expresses direct dependencies between messages (\coqref{VLSM.Core.MessageDependencies}{MessageDependencies}). This assumption implies a dependency relation between messages (\coqref{VLSM.Core.MessageDependencies}{msg_dep_rel}).
\item [Received assumption.] It is an oracle which can tell whether a message was received on a trace leading to a component state (\coqref{VLSM.Core.Equivocation}{HasBeenReceivedCapability}). When the oracle holds, any trace reaching the component state contains this message. This assumption can be naturally lifted to composite states (\coqref{VLSM.Core.Equivocation}{composite_HasBeenReceivedCapability}). If such an oracle exists for a message $m$ and a state $s$, we denote its associated predicate by $\receivedM{s}{m}$.
\item [Sent assumption.] It is an oracle which can tell whether a message was emitted on a trace leading to a component state (\coqref{VLSM.Core.Equivocation}{HasBeenSentCapability}). When the oracle holds, any trace reaching the component state contains this message. This assumption can be naturally lifted to composite states (\coqref{VLSM.Core.Equivocation}{composite_HasBeenSentCapability}). If such an oracle exists for a message $m$ and a state $s$, we denote its associated predicate by $\sentM{s}{m}$.
\end{description}
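One way to read these assumptions is as an interface. The following OCaml signature is our own sketch of such an interface; all names are ours, not the formalisation's.
\begin{verbatim}
module type EQUIVOCATION_ORACLES = sig
  type state      (* component or composite states *)
  type message
  type address

  (* channel authentication: every message has a sender *)
  val sender : message -> address

  (* message dependencies: the direct dependencies of a message *)
  val dependencies : message -> message list

  (* received / sent oracles over states *)
  val has_been_received : state -> message -> bool
  val has_been_sent : state -> message -> bool
end
\end{verbatim}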
From the above assumptions, we can derive the following notions needed to formalise the concept of equivocation. We illustrate the connections between them in Figures \ref{fig:local-equivocation} and \ref{fig:global-equivocation}.\footnote{The assumptions in boldface are expressed for composite states.}
\begin{description}
\item [Indirect message dependency relation.] This relation can be extracted as the transitive closure of the dependency relation between messages implied by the message dependencies assumption\ (\coqref{VLSM.Core.MessageDependencies}{msg_dep_happens_before}). We further assume that there is an order relation between the messages emitted by the same component on the same run of the protocol (\coqref{VLSM.Core.MessageDependencies}{has_been_sent_msg_dep_comparable_prop}).
\item [Incomparable messages.] Two messages with the same sender are incomparable when neither is related to the other by the indirect message dependency relation.
\item [Directly observed information.] This notion is a way of expressing if a message is directly observed in a component state, using the sent assumption\ and the received assumption\ (\coqref{VLSM.Core.Equivocation}{HasBeenDirectlyObservedCapability}, \coqref{VLSM.Core.Equivocation}{HasBeenDirectlyObservedCapability_from_sent_received}). This notion can be naturally lifted to composite states (\coqref{VLSM.Core.Equivocation}{composite_HasBeenDirectlyObservedCapability}).
\item [Indirectly observed information.] A message is indirectly observed in a component state if either it is directly observed in the state, or it is an indirect dependency of a directly observed message in the state (\coqref{VLSM.Core.MessageDependencies}{HasBeenObserved}). This notion can be naturally lifted to composite states (\coqref{VLSM.Core.MessageDependencies}{composite_HasBeenObserved_iff}).
\end{description}
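These derived notions can be sketched as functions over the interface above (again our own illustration; the \texttt{candidates} argument stands in for an enumeration of the directly observed messages, which the oracles themselves do not provide).
\begin{verbatim}
module Derived (O : EQUIVOCATION_ORACLES) = struct
  open O

  (* directly observed = sent or received in the state *)
  let directly_observed s m =
    has_been_sent s m || has_been_received s m

  (* indirect dependency: transitive closure of the direct
     relation; terminates when dependencies are well-founded *)
  let rec depends_on m m' =
    List.exists (fun d -> d = m' || depends_on d m')
      (dependencies m)

  (* indirectly observed: directly observed, or an indirect
     dependency of some directly observed candidate message *)
  let observed s candidates m =
    directly_observed s m
    || List.exists
         (fun m' -> directly_observed s m' && depends_on m' m)
         candidates
end
\end{verbatim}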
Now we can define the notions of evidence of equivocation mentioned at the beginning of this section.
\begin{definition}[\coqref{VLSM.Core.MessageDependencies}{msg_dep_is_locally_equivocating}]\label{local-equivocation}
A pair of messages is a \textbf{local evidence of equivocation} for their sender in a component state of a VLSM if
\begin{list}
{($\arabic{cont}$)}{\usecounter{cont}}
\item the messages have the same sender,
\item the messages have been (indirectly) observed in the component state, and
\item the messages could not have been produced by their sender in a single run of the protocol.
\end{list}
We denote by $\mathit{local\_equivocators}(s)$ the set of components for which there is a local evidence of equivocation in a component state $s$.
\end{definition}
\begin{definition}[\coqref{VLSM.Core.MessageDependencies}{msg_dep_is_globally_equivocating}]\label{global-equivocation}
A message is a \textbf{global evidence of equivocation} for its sender in a composite state of a composite VLSM if
\begin{list}
{($\arabic{cont}$)}{\usecounter{cont}}
\item the message has been (indirectly) observed in the composite state, and
\item the message was not observed as a sent message in the composite state.
\end{list}
We denote by $\mathit{global\_equivocators}(\sigma)$ the set of components for which there is a global evidence of equivocation in a composite state $\sigma$.
\end{definition}
The following result shows the connection between local and global evidence of equivocations.
\begin{theorem}[\coqref{VLSM.Core.MessageDependencies}{msg_dep_locally_is_globally_equivocating}]
Let $\mathcal{V} =\ \bigl({\sum_{i=1}^n} \mathcal{V}_i\bigr)\bigr|_\varphi $ be the VLSM composition under $\varphi$ of $\{\mathcal{V}_i\}_{i=1}^n$. For any constrained component state $s$ and any constrained composite state $\sigma$ such that one of its components is $s$, we have
$$\mathit{local\_equivocators}(s) \subseteq \mathit{global\_equivocators}(\sigma).$$
\end{theorem}
\begin{figure}[t]
\begin{tikzpicture}
\node[] (msgDep) at (-1,0) {\shortstack{message\\dependencies\\assumption}};
\node[] (msgDepInd) at (-1,-1.75) {\shortstack{indirect message \\ dependency relation}};
\node[] (beenDirObs) at (-1,-3.5) {\textbf{\shortstack{directly observed\\information}}};
\node[] (beenSent) at (-4.5,-4.25) {\textbf{\shortstack{sent\\assumption}}};
\node[] (beenReceived) at (-4.5,-2.75) {\textbf{\shortstack{received\\assumption}}};
\node[] (channelAuth) at (3,0) {\shortstack{channel\\authentication\\assumption}};
\node[] (beenObs) at (3,-3.5) {\textbf{\shortstack{indirectly observed\\information}}};
\node[] (globalEquiv) at (6,-1.75) {\shortstack{global \\equivocation}};
\path[->, line width=1pt, mDarkTeal!50] (msgDep) edge (msgDepInd);
\path[->, line width=1pt, mDarkTeal!50] (channelAuth) edge (globalEquiv);
\path[->, line width=1pt, mDarkTeal!50] (beenSent) edge (beenDirObs);
\path[->, line width=1pt, mDarkTeal!50] (beenReceived) edge (beenDirObs);
\path[->, line width=1pt, mDarkTeal!50] (msgDepInd) edge (beenObs);
\path[->, line width=1pt, mDarkTeal!50] (beenDirObs) edge (beenObs);
\path[->, line width=1pt, mDarkTeal!50] (beenObs) edge (globalEquiv);
\path[-, line width=1pt, mDarkTeal!50] (beenSent) edge (-4.5,-5.25);
\path[-, line width=1pt, mDarkTeal!50] (-4.5,-5.25) edge (6,-5.25);
\path[->, line width=1pt, mDarkTeal!50] (6,-5.25) edge (globalEquiv);
\end{tikzpicture}
\centering
\caption{Assumptions for the notion of global evidence of equivocation.}
\label{fig:global-equivocation}
\end{figure}
Note that the other implication is not true in general. The local evidence of equivocation exposed by a component is persistent, in the sense that whatever state the protocol transitions to, this equivocation will still be exposed. In contrast, the global evidence of equivocation is not persistent: for example, a message that was evidence of equivocation in a composite state $\sigma$ could be observed as a sent message in a composite state $\sigma'$ reachable from $\sigma$. However, we can construct a trace reaching a composite state which does not expose more global evidence of equivocation at any step than that composite state.
\begin{theorem}[\coqref{VLSM.Core.TraceableVLSM.MinimalEquivocationTrace}{state_to_minimal_equivocation_trace_equivocation_monotonic}]
Let $\{\mathcal{V}_i\}_{i=1}^n$ be an indexed set of VLSMs such that for each component, the state successor relation induced by its transition function is well-founded.
Let $\mathcal{V} =\ \bigl({\sum_{i=1}^n} \mathcal{V}_i\bigr)\bigr|_\varphi $ be the VLSM composition under a composition constraint $\varphi$ of $\{\mathcal{V}_i\}_{i=1}^n$, and let $\sigma$ be a constrained composite state of $\mathcal{V}$. Then there is a constrained trace reaching $\sigma$ such that for any composite states $\sigma'$ and $\sigma''$ in this trace, with $\sigma'$ appearing before $\sigma''$, we have
$$\mathit{global\_equivocators}(\sigma') \subseteq \mathit{global\_equivocators}(\sigma'') \subseteq \mathit{global\_equivocators}(\sigma).$$
\end{theorem}
We can make one further assumption about the VLSMs involved which can simplify the process of detecting equivocation.
\begin{description}
\item[Full node assumption.] This assumption ensures that before receiving a message, a VLSM has previously observed all the message dependencies of that message (\coqref{VLSM.Core.MessageDependencies}{message_dependencies_full_node_condition_prop}).
\end{description}
The {\em full node assumption} is a way of limiting the amount of new equivocation when receiving a message. Under the full node assumption, the only new equivocation which can be introduced when receiving a message is that of the sender of the message, since the equivocation introduced by its dependencies has already been accounted for.
Under the full node assumption, the notions of local and global evidence of equivocation can be further simplified by only considering directly observed messages in a state.
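Under the full node assumption, the global check becomes a simple filter over directly observed messages. A sketch, assuming an enumeration \texttt{observed\_list} of the directly observed messages of a composite state (the oracles and \texttt{sender} are as before):
\begin{verbatim}
(* Senders with global evidence of equivocation in sigma:
   senders of directly observed messages never observed as sent. *)
let global_equivocators_full
    ~sender ~has_been_sent ~observed_list sigma =
  observed_list sigma
  |> List.filter (fun m -> not (has_been_sent sigma m))
  |> List.map sender
  |> List.sort_uniq compare
\end{verbatim}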
\subsection{Examples}
\subsubsection{The MO protocol}
As the theory of VLSMs follows the correct-by-construction methodology, we will show that MO components from Section \ref{MO} satisfy all the above assumptions needed for the notions of local and global evidence of equivocation and we will show how we can instantiate them. Note that in doing so, we obtain the notions of local and global evidence of equivocation for MO components derived from Definitions \ref{local-equivocation} and \ref{global-equivocation}.
\begin{description}
\item[Channel authentication assumption.] For any message $m$, ${\it id}(m)$ is the address of its sender.
\item[Message dependencies assumption.] The dependencies of a message $m$ are given by the set of message observations contained in $m$, namely
$$\mathit{dependencies}(m) = \{m' \mid (l',m') \in {\it obs}(m)\}.$$
\item [Received assumption.] A state $s$ has received a message $m$ if there is a message observation of the form $(\mathit{receive},m)$ in ${\it obs}(s)$.
\item [Sent assumption.] A state $s$ has sent a message $m$ if there is a message observation of the form $(\mathit{send},m)$ in ${\it obs}(s)$.
\end{description}
Now we can instantiate the derived notions from the above assumptions for MO components.
\begin{description}
\item [Indirect message dependency relation.] It is the least transitive relation, denoted by $<$, satisfying the condition $m_1 < m_2$ whenever $m_1 \in \mathit{messages}(m_2)$.
\item [Incomparable messages.] We say that two messages $m_1$ and $m_2$ are incomparable, denoted by $m_1 \perp m_2$, if they have the same sender and neither of them can be (indirectly) observed in the other.
\item [Directly observed information.] A message $m$ is directly observed in a state $s$ if $m \in \mathit{messages}(s)$.
\item [Indirectly observed information.] A message $m$ is observed in a state $s$ if $m < s$, where $s$ is viewed as a message.
\end{description}
\subsubsection{The ELMO protocol}
When dealing with equivocation, a design goal is to limit the global equivocation. This can be achieved by means of a composition constraint which keeps the global evidence of equivocation under a threshold. However, the components do not have access to global information and therefore can keep track only of local evidence of equivocation.
We will illustrate these ideas with a refinement of MO components called ELMO components. ELMO is an acronym for {\em Equivocation-Limited Message Observer}. We will define an ELMO protocol as a constrained composition of ELMO components which will ensure that the global equivocation exhibited by the system remains under a fixed threshold. We will show that ELMO components are validators for an ELMO protocol, meaning that they are able to locally impose this condition on equivocation.
Let us fix an {\em equivocation threshold} $t$ (a positive real number) and a function $\mathit{weight}$ from component addresses to positive real numbers. The function $\mathit{weight}$ will be used to measure the total weight of the equivocating components; this reduces to counting them when all the weights are $1$.
In order to define ELMO components, we need to introduce some extra definitions for MO components. Therefore, let $\mathcal{M}_i$ be the MO component of address $i$.
The following notion of local evidence of equivocators is a refinement of the similar notion under the full node assumption and relies on the assumption that a component cannot self-equivocate.
\begin{definition}[\coqref{VLSM.Core.ELMO.ELMO}{local_equivocators_full}]
Let $s$ be a state of $\mathcal{M}_i$. If ${\it obs}(s)$ is empty, we set $\mathit{local\_equivocators_{full}}(s) = \emptyset$; otherwise, write ${\it obs}(s) = {\it obs}(s') \, +\hspace{-.1cm}+\, [(l,m)]$ and define the set of \textbf{locally evidenced equivocators} in $s$ as follows
\[\mathit{local\_equivocators_{full}}(s) =
\left \{
\begin{array}{rl}
\mathit{local\_equivocators_{full}}(s') \cup \{{\it id}(m)\}, & \mbox{if } l = \mathit{receive} \mbox{ and there exists } \\
& m' \in \mathit{received\_messages}(s')\\
& \mbox{such that } m \perp m', \\
\mathit{local\_equivocators_{full}}(s'), & \mbox{otherwise.}
\end{array}
\right.
\]
\end{definition}
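Reusing the state encoding from the $\psi_{msg\_valid}$ sketch above, this recursion can be rendered in OCaml as follows (a sketch; \texttt{messages} collects every message observed in a state, directly or indirectly, so membership in it captures the indirect dependency relation $<$).
\begin{verbatim}
(* All messages (indirectly) observed in a state. *)
let rec messages (s : state) : state list =
  List.concat_map (fun (_, m) -> m :: messages m) s.obs

(* Incomparability: same sender, distinct, and neither message
   is observed in the other. *)
let incomparable m1 m2 =
  m1.addr = m2.addr && m1 <> m2
  && not (List.mem m1 (messages m2))
  && not (List.mem m2 (messages m1))

let received_messages s =
  List.filter_map
    (fun (l, m) -> if l = Receive then Some m else None) s.obs

(* local_equivocators_full, by recursion on the observations. *)
let rec local_equivocators_full (s : state) : int list =
  match List.rev s.obs with
  | [] -> []
  | (l, m) :: rev_o ->
      let s' = { s with obs = List.rev rev_o } in
      let eqs = local_equivocators_full s' in
      if l = Receive
         && List.exists (incomparable m) (received_messages s')
         && not (List.mem m.addr eqs)
      then m.addr :: eqs
      else eqs
\end{verbatim}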
Let us now define the notion of ELMO components.
\begin{definition}[\coqref{VLSM.Core.ELMO.ELMO}{ELMOComponentMachine}]
The \textbf{ELMO component of address $i \in \mathbb{N}$} is the VLSM $\mathcal{E}_i = (L,S,S^i_0,M, M_0, \tau, \beta_{\mathit{ELMO}})$ which has the same elements as the MO component of address $i$ from Definition \ref{MO-component}, except for the validity constraint
\begin{align*}
\beta_{\mathit{ELMO}}(l,s,m) = \beta_{\mathit{UMO}}(l,s,m)\ \wedge\ (l = \mathit{receive} \rightarrow \psi_{\mathit{ELMO}}(s,m))
\end{align*}
\vspace{.1cm}
where $\psi_{\mathit{ELMO}}(s,m)$ is defined by (\coqref{VLSM.Core.ELMO.ELMO}{ELMO_recv_valid})
\begin{align*}
&\psi_{\mathit{ELMO}}(s,m) = \psi_{\mathit{full\_node}}(s,m)\ \wedge\ \psi_{msg\_valid\_full}(m)\ \wedge\ \psi_{no\_self\_equiv}(s,m)\ \wedge \ \psi_{equiv}(\tau^s(\mathit{receive},s,m))), \\[1em]
&\psi_{\mathit{full\_node}}(s,m) = \mathit{dependencies}(m) \subseteq \mathit{messages}(s), \\[.5em]
&\psi_{no\_self\_equiv}(s,m) = ({\it id}(m) = i\, \wedge\, {\it id}(s) = i) \rightarrow m \in \mathit{sent\_messages}(s), \\[.5em]
&\psi_{msg\_valid\_full}(\state{[]}{j}) = j \in \{1,{\scriptscriptstyle \ldots},n\}, \\[.5em]
&\psi_{msg\_valid\_full}(\state{o \, +\hspace{-.1cm}+\, [(l_p,\state{o_p}{j_p})]}{j}) = \psi_{msg\_valid\_full}(\state{o}{j})\ \wedge \\
&\hspace{1cm} (l_p = \mathit{send} \rightarrow j_p = j \wedge o_p = o)\ \wedge\ \\
&\hspace{1.5cm} (l_p = \mathit{receive} \rightarrow \psi_{\mathit{full\_node}}(\state{o}{j},\state{o_p}{j_p}) \wedge \psi_{no\_self\_equiv}(\state{o}{j},\state{o_p}{j_p})), \\[.5em]
&\psi_{equiv}(s) = \sum_{j \in \mathit{local\_equivocators_{full}}(s)}\mathit{weight}(j) < t.
\end{align*}
\end{definition}
The validity constraint for an ELMO component enforces the full node assumption, checks message validity, ensures that the component does not self-equivocate, and only allows receiving a message if doing so will not bring the total weight of locally visible equivocating components above the equivocation threshold.
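The threshold check $\psi_{equiv}$ is then a weight sum over the locally evidenced equivocators; a sketch, reusing \texttt{local\_equivocators\_full} from the previous sketch, with \texttt{weight} and the threshold \texttt{t} passed as parameters:
\begin{verbatim}
(* psi_equiv: the total weight of locally evidenced
   equivocators in s must stay below the threshold t. *)
let psi_equiv ~weight ~t s =
  List.fold_left (fun acc j -> acc +. weight j) 0.0
    (local_equivocators_full s)
  < t
\end{verbatim}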
For ELMO components, we can prove that the two notions of local evidence of equivocation coincide.
\begin{lemma}[\coqref{VLSM.Core.ELMO.ELMO}{local_equivocators_iff_full}]
Let $\mathcal{E}_i$ be the ELMO component of address $i$. For any constrained state $s$ of $\mathcal{E}_i$, we have
$$\mathit{local\_equivocators}(s) = \mathit{local\_equivocators_{full}}(s).$$
\end{lemma}
Let us denote by $\mathit{global\_equivocators_{full}}$ the notion of global evidence of equivocation under the full node assumption.
Let $\{\mathcal{E}_i\}_{i=1}^n$ be a set of ELMO components indexed by their addresses, i.e., $\mathcal{E}_i$ is the ELMO component of address $i$. The \textbf{ELMO protocol} $\mathrm{ELMO}(\mathcal{E}_i)_{i=1}^n$ is defined (\coqref{VLSM.Core.ELMO.ELMO}{ELMOProtocol}) as the constrained composition of $\{\mathcal{E}_i\}_{i=1}^n$ under the composition constraint (\coqref{VLSM.Core.ELMO.ELMO}{ELMO_global_constraint})
\begin{align*}
\varphi_{\mathrm{ELMO}}(\langle i,\mathit{receive} \rangle, \sigma , m) = \sum_{j\ \in\ \mathit{global\_equivocators_{full}}(\sigma')} \mathit{weight} (j) < t,
\end{align*}
where $\sigma' = \tau^s(\langle i,\mathit{receive} \rangle, \sigma , m)$.
The following result shows that the local validity constraints of the ELMO components are strong enough to ensure that the composition constraint holds.
\begin{theorem}[\coqref{VLSM.Core.ELMO.ELMO}{ELMOComponents_validating}]
Let $\mathcal{E} = \mathrm{ELMO}(\mathcal{E}_i)_{i=1}^n$ be an ELMO protocol. Every component $\mathcal{E}_i$ is a validator for $\mathcal{E}$.
\end{theorem}
\section{Models of Equivocation}\label{modelsOfEquivocation}
As explained in the previous section, equivocation occurs when receiving a message which has not been previously sent in the current trace; the sender of the message is then said to be equivocating. In this section we introduce two models of equivocation in the VLSM framework: the \textit{state-equivocation model} and the \textit{message-equivocation model}. These models allow one to model equivocation within a (composed) VLSM, as their transition functions and validity constraints formalise the idea of evidence of equivocation internally in the (composed) VLSM. We investigate when these two models of equivocation are equivalent. We consider two scenarios: (1) a fixed subset of components can equivocate, and (2) the set of equivocating components is weight-limited.
\subsection{State- and message-equivocation models for a fixed-set of equivocators}
Let $\{\mathcal{V}_i\}_{i=1}^n$ be an indexed set of VLSMs over the same set of messages. We assume that each $\mathcal{V}_i$ satisfies the {\em sent assumption\ } and the {\em channel authentication assumption}.
Let us fix a subset $E\subseteq \{1,{\scriptscriptstyle \ldots},n\}$. We assume that the only components which can equivocate are those $\mathcal{V}_i$ with $i \in E$. Below we describe the two models of equivocation for this scenario and investigate the conditions under which they are equivalent.
\subsubsection{The state-equivocation model}
In the \textit{state-equivocation model} we allow an equivocating component to perform \textit{state-equivocations} by forking itself or spawning new machines. In order to model this, we can associate to any VLSM its {\em equivocating version}, which can fork itself or spawn new copies of itself at any moment. A state of the equivocating version of a VLSM is a list of states, each element of the list being a state of the original VLSM; the state of the equivocating VLSM keeps track of all the possible states that can be reached by equivocation. A VLSM and its equivocating version are defined over the same set of messages. Formally, we have the following definition.
\begin{definition}[\coqref{VLSM.Core.Equivocators.Equivocators}{equivocator_vlsm}]\label{equivocator-VLSM}
The \textbf{equivocating VLSM} $\mathcal{V}^e = (L^e,S^e, S_{0}^e, M, M_{0}, \tau^e, \beta^e)$ associated to a VLSM $\mathcal{V} = (L,S, S_{0}, M, M_0, \tau, \beta)$ is defined by:
\begin{list}
{-}{}
\item the set of labels is $L^e = \mathbb{N}^* \times (L \cup \{\mathrm{duplicate}\} \cup (\{\mathrm{new\_machine}\} \times S_{0})),$
\item states are lists of states from $S$, i.e., $S^e = [S],$\footnote{For any set $A$, we denote by $[A]$ the set of all finite lists over $A$. For any list $l$ over $A$, we denote by $\mathit{len}(l)$ the length of $l$ and by $l[n]$ its $n$th element.}
\item $S_{0}^e = \{[s]\ |\ s \in S_0\}$,
\item the same set of messages and initial messages as for $\mathcal{V}$,
\item the transition function is defined as
{\small
\begin{align*}
&\tau^e(\langle i, l \rangle, \gamma,m) = ([\gamma[1],{\scriptscriptstyle \ldots},\gamma[i-1],\tau^{s}(l,\gamma[i],m),\gamma[i+1],{\scriptscriptstyle \ldots}], \tau^{m}(l,\gamma[i],m)) \mbox{ with } l \in L,\\
&\tau^e(\langle i, \mathrm{duplicate} \rangle, \gamma,m) = ([\gamma[1],{\scriptscriptstyle \ldots},\gamma[i-1],\gamma[i],\gamma[i],\gamma[i+1],{\scriptscriptstyle \ldots}], \nomessage),\\
&\tau^e(\langle i, (\mathrm{new\_machine},s_{0}) \rangle, \gamma,m) = ([\gamma[1],{\scriptscriptstyle \ldots},\gamma[i],s_{0},\gamma[i+1],{\scriptscriptstyle \ldots}], \nomessage),
\end{align*}
}
\item the validity constraint is defined as
{\small
\begin{align*}
&\beta^e(\langle i, l \rangle,\gamma,m) = i \leq \mathit{len}(\gamma) \wedge \beta(l,\gamma[i],m), \hspace{4cm} \\
&\beta^e(\langle i, \mathrm{duplicate}\rangle, \gamma ,\nomessage) = i \leq \mathit{len}(\gamma), \\
&\beta^e(\langle i, (\mathrm{new\_machine},s_{0}) \rangle, \gamma ,\nomessage) = i \leq \mathit{len}(\gamma).
\end{align*}
}
\end{list}
\end{definition}
The $\beta^e$ predicate of an equivocating VLSM ensures that we can refer to an already existing copy of the component. The first component of a label indicates which copy of the machine will be used for a transition. We can show (\coqref{VLSM.Core.Equivocators.MessageProperties}{equivocator_HasBeenSentCapability}) that if $\mathcal{V}$ satisfies the {\em sent assumption}, then $\mathcal{V}^e$ also satisfies the {\em sent assumption}: a message has been emitted on the traces leading to a state of the equivocating VLSM if there is a copy of the VLSM in that state for which the message has been emitted on the traces leading to it.
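The transition function $\tau^e$ can be sketched generically in OCaml (our own rendering; \texttt{tau} is the original component's transition and copy positions are 1-based, as in the definition):
\begin{verbatim}
type ('l, 's) e_label =
  | Base of int * 'l          (* run copy i under original label *)
  | Duplicate of int          (* fork copy i *)
  | New_machine of int * 's   (* fresh initial state after copy i *)

(* first k elements of a list, and the rest *)
let split_at k l =
  let rec go acc k l =
    match k, l with
    | 0, rest | _, ([] as rest) -> (List.rev acc, rest)
    | k, x :: xs -> go (x :: acc) (k - 1) xs
  in
  go [] k l

(* tau^e; out-of-range indices are ruled out by beta^e, so we
   leave the state unchanged in that case. *)
let e_transition tau lbl gamma m =
  match lbl with
  | Base (i, l) ->
      (match split_at (i - 1) gamma with
       | pre, s :: post ->
           let s', m' = tau l s m in
           (pre @ (s' :: post), m')
       | _, [] -> (gamma, None))
  | Duplicate i ->
      (match split_at (i - 1) gamma with
       | pre, s :: post -> (pre @ (s :: s :: post), None)
       | _, [] -> (gamma, None))
  | New_machine (i, s0) ->
      let pre, post = split_at i gamma in
      (pre @ (s0 :: post), None)
\end{verbatim}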
Using the notion of equivocating VLSM, we can define the {\em state-equivocation model for a fixed-set of equivocators $E$} as a VLSM composition in which we impose the composition constraint that components may only receive messages that have been sent in the current trace of the composition.
\begin{definition}[\coqref{VLSM.Core.Equivocators.Composition.LimitedEquivocation.FixedEquivocation}{equivocators_fixed_equivocations_vlsm}]
The \textbf{state-equivocation model of $\{\mathcal{V}_i\}_{i=1}^n$ for the fixed-set of equivocators $E$} is the constrained VLSM composition in which we replace each component which can equivocate by its corresponding equivocating VLSM and use the composition constraint that components may only receive messages that have been sent in the current trace. Formally, the state-equivocation model is the VLSM
$$\mathcal{V}_{s\_{\it eqv}}^E =\ \Bigl({\sum_{i=1}^n} \mathcal{V}_i' \Bigr) \Bigr|_{\varphi_{s\_{\it eqv}}} = (L,S, S_0, M, M_0,\tau, \beta \wedge \varphi_{s\_{\it eqv}}) $$
where, for any $1\leq i \leq n$, $\mathcal{V}_i' = \mathcal{V}_i$ if $i\not\in E$ and $\mathcal{V}_i' = \mathcal{V}_i^e$ if $i\in E$, and
\[
\varphi_{s\_{\it eqv}}( \iota , \langle \gamma_1,{\scriptscriptstyle \ldots},\gamma_n\rangle ,m) = \sentM{\gamma_{\senderM{m}}}{m}.
\]
\end{definition}
At any point, each equivocating component can perform a state-equivocation either by making a copy of one of the states or introducing a new initial state.
Each copy of an equivocating component can evolve independently, but can only receive messages that appear in the current trace of this new machine.
Let $\mathcal{V}_{s\_{\it eqv}}^E$ be a state-equivocation model of $\{\mathcal{V}_i\}_{i=1}^n$ for the fixed-set of equivocators $E$.
Given a state of the form $\gamma = \langle \gamma_1,{\scriptscriptstyle \ldots},\gamma_n \rangle$ in $\mathcal{V}_{s\_{\it eqv}}^E$, its {\em state reduct} to a state of the composition of $\{\mathcal{V}_i\}_{i=1}^n$ is of the form $\overline{\gamma} = \langle s_1,{\scriptscriptstyle \ldots},s_n\rangle$, where $s_i$ is $\gamma_i$ if $i \not\in E$ and $s_i$ is $\gamma_i[1]$ otherwise.
Given a transition in $\mathcal{V}_{s\_{\it eqv}}^E$ of the form
$$\transition{\langle j,\langle 1, l_j\rangle \rangle}{\gamma}{m}{\gamma'}{m'},$$
its {\em transition reduct} to a transition in the composition of $\{\mathcal{V}_i\}_{i=1}^n$ is
$$\transition{\langle j, l_j\rangle}{\overline{\gamma}}{m}{\overline{\gamma'}}{m'}.$$
Given a trace $\mathit{tr}$ in $\mathcal{V}_{s\_{\it eqv}}^E$, its {\em trace reduct} to a trace of the composition of $\{\mathcal{V}_i\}_{i=1}^n$ consists of the transition reducts of all transitions with labels of the form $\langle j,\langle 1, l_j\rangle \rangle$, taken in the same order as in $\mathit{tr}$.
\begin{example}\label{state-model-example}
{\normalfont
Let us consider the MO components $\mathcal{M}_1$ and $\mathcal{M}_2$ of addresses $1$ and $2$, respectively. We assume that $E = \{2\}$, meaning that only $\mathcal{M}_2$ can equivocate. The following is a constrained trace of the state-equivocation model of $\mathcal{M}_1$ and $\mathcal{M}_2$ for the fixed-set of equivocators $E$:
\begin{align*}
&\begin{bmatrix}
\state{[]}{1} \\
[\state{[]}{2}] \\
\end{bmatrix}
\xrightarrow[m_1 = \state{[]}{1}]{\langle 1, \mathit{send} \rangle}
\begin{bmatrix}
\state{[(\mathit{send},m_1)]}{1} \\
[\state{[]}{2}] \\
\end{bmatrix}
\xrightarrow[m_2 = \state{[]}{2}]{\langle 2, \langle 1, \mathit{send} \rangle \rangle}
\begin{bmatrix}
\state{[(\mathit{send},m_1)]}{1} \\
[\state{[(\mathit{send},m_2)]}{2}] \\
\end{bmatrix}\\[1em]
&\xrightarrow[\nomessage]{\langle 2, \langle 1, (\mathrm{new\_machine},\state{[]}{2})\rangle \rangle}
\begin{bmatrix}
\state{[(\mathit{send},m_1)]}{1} \\
[\state{[(\mathit{send},m_2)]}{2}, \state{[]}{2}] \\
\end{bmatrix}\\[1em]
&\xrightarrow[m_1]{\langle 2, \langle 2, \mathit{receive} \rangle \rangle}
\begin{bmatrix}
\state{[(\mathit{send},m_1)]}{1} \\
[\state{[(\mathit{send},m_2)]}{2}, \state{[(\mathit{receive},m_1)]}{2}] \\
\end{bmatrix}\\[1em]
&\xrightarrow[m_3 = \state{[(\mathit{receive},m_1)]}{2}]{\langle 2, \langle 2, \mathit{send} \rangle \rangle}
\begin{bmatrix}
\state{[(\mathit{send},m_1)]}{1} \\
[\state{[(\mathit{send},m_2)]}{2}, \state{[(\mathit{receive},m_1),(\mathit{send},m_3)]}{2}] \\
\end{bmatrix}
\end{align*}
If, after these transitions, $\mathcal{M}_1$ received the messages $m_2$ and $m_3$ emitted by $\mathcal{M}_2$, then it could infer that $m_2$ and $m_3$ constitute local evidence that $\mathcal{M}_2$ is equivocating, in the sense described in Section \ref{evidence-equivocation}.
}
\end{example}
For the empty set of equivocators, the state-equivocation model coincides with the VLSM composition under the composition constraint which ensures that components may only receive messages that have been sent in the current trace of the composition. On the other hand, if we take the set of equivocators to be the whole set of indices, we obtain a composition in which each component can state-equivocate freely, while message-equivocation is still not allowed.
\subsubsection{The message-equivocation model}\label{fix-set message-equivocation}
In the VLSM framework, messages are always available for receiving, even if they were not emitted or it might not be valid to receive them. As explained previously, a \textit{message-equivocation} is the receipt of a message that has not yet been sent in that trace.
\begin{definition}[\coqref{VLSM.Core.Equivocation.MsgDepFixedSetEquivocation}{full_node_fixed_set_equivocation}]\label{message-equivocation-I}
The \textbf{message-equivocation model of $\{\mathcal{V}_i\}_{i=1}^n$ for the fixed-set of equivocators $E$} is the VLSM composition under the constraint that the only message-equivocations allowed are those between the equivocating components. Formally, the message-equivocation model is the VLSM
$$\mathcal{V}_{m\_{\it eqv}}^E =\ \Bigl({\sum_{i=1}^n} \mathcal{V}_i \Bigr) \Bigr|_{\varphi_{m\_{\it eqv}}} = (L,S, S_0, M, M_0,\tau, \beta \wedge {\varphi_{m\_{\it eqv}}}),$$
where
\begin{align*}
\varphi_{m\_{\it eqv}}&(\langle i,l\rangle, \langle s_1,{\scriptscriptstyle \ldots},s_n\rangle,m)\ =
\ \sentM{s_{\senderM{m}}}{m} \vee ( i \in E \wedge \senderM{m} \in E).
\end{align*}
\end{definition}
For any $j \not\in E$, we call the valid traces of $\mathit{Proj}_{j}(\mathcal{V}_{m\_{\it eqv}}^E)$ {\em traces exposed to $E$-fixed equivocation behaviour}.
For a family $\{\mathcal{V}_i\}_{i=1}^n$ of VLSMs, we can consider the \textbf{\textit{no message equivocation constraint assumption\,}}\ to be the composition constraint which ensures that components may only receive messages that have been sent in the current trace of the composition. This assumption depends on the {\em sent assumption}\ and {\em channel authentication assumption}. Formally, the \textit{no message equivocation constraint assumption\,}\ means that we have the following composition constraint
$$
\varphi_{no\_equiv}(\langle i,l\rangle, \langle s_1,{\scriptscriptstyle \ldots},s_n\rangle,m) = \sentM{s_{\senderM{m}}}{m}.$$
Note that for the empty set of equivocators, the above message-equivocation model coincides with the VLSM composition under the \textit{no message equivocation constraint assumption\,}.
On the other hand, if we take the set of equivocators to be the whole set of indices, then the composition constraint always holds, so $\mathcal{V}_{m\_{\it eqv}}^{\{1,{\scriptscriptstyle \ldots},n\}}$ and ${\sum_{i=1}^n} \mathcal{V}_i$ coincide.
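The two constraints can be sketched side by side in OCaml (our own rendering; composite states are modelled as arrays indexed by 0-based addresses, and \texttt{sender} and \texttt{has\_been\_sent} are the assumed oracles):
\begin{verbatim}
(* No message equivocation: a received message must have been
   sent by its sender's component in the composite state. *)
let phi_no_equiv ~sender ~has_been_sent (_i, _l) sigma m =
  match m with
  | None -> true
  | Some msg -> has_been_sent sigma.(sender msg) msg

(* Fixed-set message equivocation: additionally, components in
   the set e may message-equivocate among themselves. *)
let phi_m_eqv ~sender ~has_been_sent ~e (i, _l) sigma m =
  match m with
  | None -> true
  | Some msg ->
      has_been_sent sigma.(sender msg) msg
      || (List.mem i e && List.mem (sender msg) e)
\end{verbatim}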
\subsubsection{Equivalence between state- and message-equivocation models}
Let us investigate when the two models coincide. We begin by analysing the following example.
\begin{example}\label{singleton-equivocation}
{\normalfont
Let us consider the state and message-equivocation models for just one VLSM $\mathcal{V}$ which is also an equivocator. Remark that $\mathcal{V}_{s\_{\it eqv}}^E$ and $\mathcal{V}_{m\_{\it eqv}}^E$ do not coincide. However, consider only the first state in the list of states maintained by $\mathcal{V}_{s\_{\it eqv}}^E$: even though a copy of $\mathcal{V}$ in the list cannot receive a message unless at least one of the copies has sent the message earlier in the trace, $\mathcal{V}_{s\_{\it eqv}}^E$ has the ability to copy the initial state, run that copy ahead to the point where a message has been produced, then go back to the first copy in the list and have that one receive the message. Therefore, given a constrained trace of $\mathcal{V}_{s\_{\it eqv}}^E$, the subtrace consisting of all the constrained transitions using only labels of the form $\langle 1,l \rangle$, with $l$ a label in $\mathcal{V}$, is a constrained trace of $\mathcal{V}_{m\_{\it eqv}}^E$, and all constrained traces of $\mathcal{V}_{m\_{\it eqv}}^E$ arise in this way. It is in this sense that we consider message-equivocation to be equivalent to state-equivocation.
}
\end{example}
For any indexed family of VLSMs, we might hope that the traces are the same in the two models for equivocation if we restrict our attention to the first element of each list maintained by a component in $E$, as happened in Example \ref{singleton-equivocation}. This, however, is not true. While components can receive messages from any trace of a message equivocator, state equivocators rely on interacting with other components. If those other components do not equivocate, it restricts the state equivocators' behaviour in the future. For example, suppose that we have three components $\mathcal{V}_0, \mathcal{V}_1$, and $\mathcal{V}_2,$ where $\mathcal{V}_1$ is an equivocator. $\mathcal{V}_0$ can send either the message $a$ or the message $b,$ but not both. $\mathcal{V}_1$ sends $c$ in response to $a$ or $d$ in response to $b$. In the message-equivocation model, $\mathcal{V}_2$ can receive both $c$ and $d$, but in the state-equivocation model, it cannot.
Since an equivocating VLSM is a collection of copies of the original VLSM, it is clear that if the original VLSM satisfies the full node assumption\ then each of the copies satisfies the full node assumption.
Assuming each component from an indexed family of VLSMs satisfies the full node assumption, then their state-equivocation model also satisfies the full node assumption, as it does not introduce new messages.
We can extend this remark to VLSMs with a composition constraint $\varphi$ by checking if there is some valid state among the states in the product of the equivocator states. However, under some simple composition constraints, state-equivocation is not equivalent to message-equivocation as can be seen in the next example.
\begin{example}
{\normalfont
Consider two VLSMs $\mathcal{V}_0$ and $\mathcal{V}_1$, $\mathcal{V}_0$ being an equivocator. The component $\mathcal{V}_0$ has states $s_0, s_1$, $s_0$ being initial, while $\mathcal{V}_1$ has states $q_0, q_1$, $q_0$ being initial. There is a single message $m$ and three labels, $l_0, l_1$ and $l_2$. The initial state of the composition is $(s_0, q_0).$ Let us consider a composition constraint which allows the composite VLSM to transition under $l_0$ to $(s_1, q_0)$ or under $l_1$ to $(s_0, q_1)$, while the state $(s_1, q_1)$ transitions under $l_2$ to itself and sends the message $m$. In the message-equivocation model, the state $(s_1, q_1)$ is unreachable and $m$ is not valid. But in the state-equivocation model, the system can evolve as follows and emit $m$:
\begin{align*}
\langle [s_0],q_0 \rangle\xrightarrow[\nomessage\ \rightarrow\ \nomessage]{\langle 1, (1,l_0)\rangle}
\langle [s_1],q_0 \rangle \xrightarrow[\nomessage\ \rightarrow\ \nomessage]{\langle 1,(1,(\mathrm{new\_machine},s_0))\rangle}
\langle [s_1,s_0], q_0 \rangle
\xrightarrow[\nomessage\ \rightarrow\ \nomessage]{\langle 2, l_1\rangle} \langle [s_1,s_0], q_1 \rangle
\xrightarrow[\nomessage\ \rightarrow\ m]{\langle 1, (1,l_2)\rangle}
\langle [s_1,s_0], q_1 \rangle.
\end{align*}
}
\end{example}
Therefore, for message and state-equivocation to be equivalent, we cannot support all composition constraints. However, we can support free compositions (as a special case of fixed-set equivocation). We will also examine the case of weight-limited equivocation in the next section.
\begin{theorem}\label{fixed-set-equivocation}
For any fixed-set of equivocators,
\begin{enumerate}
\item The trace reduct of a valid trace of the state-equivocation model is a valid trace for the message-equivocation model (\coqref{VLSM.Core.Equivocators.Composition.LimitedEquivocation.FixedEquivocation}{fixed_equivocators_vlsm_projection}).
\item Under full node assumption\ for each component, each valid trace for the message-equivocation model can be ``lifted'' to a valid trace for the state-equivocation model such that its trace reduct is the original trace (\coqref{VLSM.Core.Equivocators.Composition.LimitedEquivocation.FixedEquivocationSimulation}{fixed_equivocators_finite_valid_trace_init_to_rev}).
\end{enumerate}
\end{theorem}
\subsection{State- and message-equivocation models for a weight-limited set of equivocators}
Most consensus protocols require some bound on the number of equivocating parties. Some generalise away from the number of parties to the $\mathit{weight}$ of the parties; this reduces to the count when the weights are all 1.
Let $\{\mathcal{V}_i\}_{i=1}^n$ be an indexed set of VLSMs over the same set of messages equipped with a function $\mathit{weight}$ from the (addresses of) components to positive real numbers. As in the previous section, we assume that each $\mathcal{V}_i$ satisfies the {\em sent assumption} and the {\em channel authentication assumption}.
Let us fix an {\em equivocation threshold} $t$. We describe below the two models of equivocation for this scenario and investigate again when they are equivalent.
\subsubsection{The state-equivocation model}
In this scenario we allow all VLSMs to state-equivocate but place a limit on the total weight of equivocators allowed. Note that an equivocator which is not allowed to state-equivocate is essentially no different from the corresponding regular component.
\begin{definition}[\coqref{VLSM.Core.Equivocators.Composition.LimitedEquivocation.LimitedStateEquivocation}{equivocators_limited_equivocations_vlsm}]
The \textbf{$t$-limited state-equivocation model of $\{\mathcal{V}_i\}_{i=1}^n$} is the constrained VLSM composition in which we replace each component with its equivocating VLSM under the composition constraint that components may only receive messages that have been sent in the current trace and the total weight of the equivocators does not exceed $t$. Formally, the $t$-limited state-equivocation model is the VLSM
$$\mathcal{V}_{{s\_{\it eqv}}}^{<t} =\ \Bigl({\sum_{i=1}^n} \mathcal{V}_i^e \Bigr) \Bigr|_{\varphi_{s\_{\it eqv}}^{<t}} = (L,S, S_0, M, M_0,\tau, \beta \wedge {\varphi_{s\_{\it eqv}}^{<t}}) $$
where
{\small
\begin{align*}
\varphi_{s\_{\it eqv}}^{<t}&( \iota , \langle \gamma_1,{\scriptscriptstyle \ldots},\gamma_n\rangle,m) \ =\ \sentM{\gamma_{\senderM{m}}}{m}\ \wedge \hspace{-.2cm} \sum_{\substack{k=1, \\ 1 < \mathit{len}(\gamma_k')}}^n \hspace{-.2cm}\mathit{weight}(k) < t,
\end{align*}
}
with $\langle \gamma_1',{\scriptscriptstyle \ldots},\gamma_n'\rangle = \tau^s( \iota , \langle \gamma_1,{\scriptscriptstyle \ldots},\gamma_n\rangle,m)$.
\end{definition}
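In this model a component counts as equivocating precisely when it maintains more than one copy; the weight side-condition of $\varphi_{s\_{\it eqv}}^{<t}$ can thus be sketched as follows (with \texttt{gammas'} the successor state, a list of copy-lists indexed from address $1$):
\begin{verbatim}
(* Total weight of components keeping more than one copy. *)
let equivocating_weight ~weight gammas =
  List.fold_left
    (fun (k, acc) g ->
      (k + 1, if List.length g > 1 then acc +. weight k else acc))
    (1, 0.0) gammas
  |> snd

let phi_weight_ok ~weight ~t gammas' =
  equivocating_weight ~weight gammas' < t
\end{verbatim}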
Before we continue, let us point out some connections between the fixed-set and the weight-limited state-equivocation models. First, note that state-equivocating traces contain no message-equivocation, and so they constitute a proof of validity of all their messages. Therefore, if the sum of weights of a set of indices $E$ is limited by the threshold $t$, it follows that any valid trace of $\mathcal{V}_{s\_{\it eqv}}^E$ is also a valid trace of $\mathcal{V}_{s\_{\it eqv}}^{<t}$, simply by replacing regular nodes with corresponding equivocating nodes which are not allowed to equivocate (\coqref{VLSM.Core.Equivocators.Composition.LimitedEquivocation.LimitedEquivocationSimulation}{equivocators_Fixed_incl_Limited}).
Conversely, given a valid trace $\mathit{tr}$ of $\mathcal{V}_{s\_{\it eqv}}^{<t}$ whose last state is $\gamma = \langle \gamma_1,{\scriptscriptstyle \ldots},\gamma_n \rangle$, by an argument similar to the above one (i.e., replacing equivocating nodes which are not allowed to equivocate with regular nodes), $\mathit{tr}$ is a valid trace of $\mathcal{V}_{s\_{\it eqv}}^E$, where $E$ is the set of proper equivocators of $\mathit{tr}$ which can be computed as $E = \{ i \ |\ 1 < \mathit{len}(\gamma_i) \}$ (\coqref{VLSM.Core.Equivocators.Composition.LimitedEquivocation.LimitedStateEquivocation}{equivocators_limited_valid_trace_is_fixed}).
\subsubsection{The message-equivocation model}
Based on the fixed-set equivocation model, we can define the collection of traces with weight-limited message-equivocation by taking the union of all valid traces for $\mathcal{V}^E_{m\_{\it eqv}}$ for all subsets $E$ whose weight is limited by $t$. We call them {\em traces under $t$-limited equivocation}. It is relatively easy to see that these traces correspond to subtraces of the weight-limited state-equivocation model, by the following argument:
\begin{list}
{\arabic{cont}.}{\usecounter{cont}}
\item For a valid trace $\mathit{tr}_m$ of $\mathcal{V}^E_{m\_{\it eqv}}$ with a subset $E$ whose weight is limited by $t$, by Theorem~\ref{fixed-set-equivocation}, there is a valid trace $\mathit{tr}_s$ of $\mathcal{V}^E_{s\_{\it eqv}}$ corresponding to $\mathit{tr}_m$. By the above remark, $\mathit{tr}_s$ is also valid in $\mathcal{V}_{s\_{\it eqv}}^{<t}$ (\coqref{VLSM.Core.Equivocators.Composition.LimitedEquivocation.LimitedEquivocationSimulation}{limited_equivocators_finite_valid_trace_init_to_rev}).
\item Conversely, given a valid trace $\mathit{tr}_s$ of $\mathcal{V}_{s\_{\it eqv}}^{<t}$, let $\gamma = \langle \gamma_1,{\scriptscriptstyle \ldots},\gamma_n \rangle$ be its final state, and let $$E = \{ i\ |\ 1 < \mathit{len}(\gamma_i) \}$$ be the equivocating indices in $\mathit{tr}_s$. We have that the weight of $E$ is limited by the threshold, and, by one of the above remarks, $\mathit{tr}_s$ is also valid in $\mathcal{V}_{s\_{\it eqv}}^E$. By Theorem~\ref{fixed-set-equivocation}, there is a valid trace $\mathit{tr}_m$ of $\mathcal{V}^E_{m\_{\it eqv}}$ corresponding to $\mathit{tr}_s$. Since $E$ is of limited weight, equivocation in $\mathit{tr}_m$ is limited (\coqref{VLSM.Core.Equivocators.Composition.LimitedEquivocation.LimitedStateEquivocation}{equivocators_limited_valid_trace_projects_to_fixed_limited_equivocation}).
\end{list}
In the remainder of this section we will define a VLSM whose valid traces are precisely the traces with limited equivocation described above.
First, note that expressing weight-limited equivocation as a constraint for the composition of regular components is problematic as such a constraint must detect the amount of equivocation encountered by only looking at the states of the individual components.
To understand why this is a non-trivial task, consider the following example.
\begin{example}
{\normalfont
Consider an initial state $\langle \sigma_1, \sigma_2 \rangle$ and two transitions: one on component $1$, receiving nothing, transitioning to $\sigma'_1$ and producing $m_1$; and one on component $2$, receiving $m_1$, transitioning to $\sigma'_2$ and producing nothing. After both transitions have occurred, the new state is $\langle \sigma_1', \sigma_2'\rangle$. If the transitions occurred in the order described above, there should be no equivocation. However, if they occurred in the reverse order, component $1$ should be considered as equivocating. Hence in some traces the equivocation weight of $\langle \sigma_1', \sigma_2'\rangle$ should be $0$, while in others it should be the weight of component $1$.
}
\end{example}
In what follows, we will present a simple model for weight-limited message-equivocation based on annotating states with the set of equivocators observed in the current trace so far, which makes the task of detecting the equivocators in a state trivial.
\begin{definition}[\coqref{VLSM.Core.Equivocation.MsgDepLimitedEquivocation}{full_node_limited_equivocation_vlsm}]\label{t-weight-mesage-equivocation}
The \textbf{$t$-limited message-equivocation model of $\{\mathcal{V}_i\}_{i=1}^n$} is obtained from the free composition $\sum_{i = 1}^n{\mathcal{V}_i}$ by annotating the states (of the free composition) with sets of equivocators, updating those sets during transitions by adding the sender of a message if the message is an equivocation, and further constraining the validity constraint to only accept inputs that lead to states whose sets of equivocators are of limited weight. Formally, the $t$-limited message-equivocation model is the VLSM $$\mathcal{V}_{m\_{\it eqv}}^{<t} = (L^{<t}, S^{<t}, S^{<t}_0, M^{<t}, M^{<t}_0, \tau^{<t}, \beta^{<t})$$ with the following components:
\vspace{-.2cm}
\begin{list}
{-}{}
\item $L^{<t} = L$ is the same set of labels as for the free composition,
\item $S^{<t} = \{\langle \sigma,{\it eqv}\rangle\ |\ \sigma \in S \mbox{ and } {\it eqv} \subseteq \{1,{\scriptscriptstyle \ldots},n\} \}$ consists of pairs of states of the free composition and sets of indices,
\item $S^{<t}_0 = \{\langle \sigma,\emptyset\rangle\ |\ \sigma \in S_0\}$ pairs the initial states of the free composition with empty sets of indices,
\item $M^{<t} = M$ is the same set of messages as for the free composition,
\item $M^{<t}_0 = M_0$ is the same set of initial messages as for the free composition,
\item $\tau^{<t}: L^{<t} \times S^{<t} \times M^{<t}? \rightarrow S^{<t} \times M^{<t}?$ is defined as
\begin{align*}
\tau^{<t}(\iota,\langle \sigma,{\it eqv} \rangle,m) = (\langle \sigma',{\it eqv}' \rangle, m'),
\quad \mbox{ where } &\sigma' = \tau^s(\iota,\sigma,m),\ m' = \tau^m(\iota,\sigma,m), \mbox{ and } \\
& {\it eqv}' = \left\{\begin{array}{rl}
{\it eqv}, & \mbox{ if } \sentM{\sigma}{m} \\
{\it eqv}\ \cup \{\senderM{m}\}, & \mbox{otherwise}
\end{array}\right. ,
\end{align*}
\item $\beta^{<t} \subseteq L^{<t} \times S^{<t} \times M^{<t}?$ is defined as
\begin{align*}
\beta^{<t}(\iota, \langle \sigma,{\it eqv}\rangle,m) = \beta(\iota,\sigma,m) \wedge (\mathit{weight}({\it eqv}') < t),
\end{align*}
where $\tau^{<t}(\iota,\langle \sigma,{\it eqv} \rangle,m) = (\langle \sigma',{\it eqv}' \rangle, m')$ and $\beta$ is the validity constraint in the free composition.
\end{list}
\end{definition}
We call a {\em trace under $t$-limited equivocation behaviour} any valid trace of $\mathcal{V}_{m\_{\it eqv}}^{<t}$.
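A sketch of the annotated machinery in OCaml (the oracles are passed as parameters, and the equivocator annotation is a duplicate-free list of addresses):
\begin{verbatim}
(* tau^{<t}: run the free composition's transition and grow the
   equivocator annotation when the input is an equivocation. *)
let annotated_transition ~tau ~sender ~has_been_sent
    iota (sigma, eqv) m =
  let sigma', m' = tau iota sigma m in
  let eqv' =
    match m with
    | None -> eqv
    | Some msg ->
        if has_been_sent sigma msg then eqv
        else if List.mem (sender msg) eqv then eqv
        else sender msg :: eqv
  in
  ((sigma', eqv'), m')

(* beta^{<t}: the free composition's validity plus the weight
   threshold on the successor's equivocator set. *)
let annotated_valid ~beta ~tau ~sender ~has_been_sent
    ~weight ~t iota (sigma, eqv) m =
  let (_, eqv'), _ =
    annotated_transition ~tau ~sender ~has_been_sent
      iota (sigma, eqv) m
  in
  beta iota sigma m
  && List.fold_left (fun acc j -> acc +. weight j) 0.0 eqv' < t
\end{verbatim}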
\subsubsection{Equivalence between state- and message-equivocation models}
As in the case of the fixed-set message-equivocation model, the {\em full node assumption} is essential in ensuring the adequacy of the model because whenever the received message is an equivocation, the only (possibly new) equivocator must be its sender, as all its dependencies were already received (and equivocations accounted for).
\begin{theorem} Under full node assumption\ for all components,
\begin{enumerate}
\item The trace reduct of a valid trace of the state-equivocation model is a valid trace for the message-equivocation model (\coqref{VLSM.Core.Equivocators.Composition.LimitedEquivocation.LimitedStateEquivocation}{equivocators_limited_valid_trace_projects_to_annotated_limited_equivocation}).
\item Each valid trace for the message-equivocation model can be ``lifted'' to a valid trace for the state-equivocation model such that its trace reduct is the original trace (\coqref{VLSM.Core.Equivocators.Composition.LimitedEquivocation.LimitedEquivocationSimulation}{equivocators_limited_valid_trace_projects_to_annotated_limited_equivocation_rev}).
\end{enumerate}
\end{theorem}
\section{Byzantine vs. equivocating behaviour} \label{Byzantine}
Traditionally, consensus literature has defined a Byzantine participant in a consensus protocol to be one with arbitrary behaviour \cite{lamport}. Sometimes Byzantine nodes have a measure of control over the network, with the ability to delay, duplicate, or drop messages.
In the VLSM framework, messages can be received at any time, and they may be received multiple times or not at all. We can model a Byzantine component as a VLSM that can send or receive any message at any time. However, we want Byzantine components to not be able to forge messages on behalf of other components. We capture this through the validity constraint ensuring that any message sent by a Byzantine component is attributed to its sender (\coqref{VLSM.Core.ByzantineTraces.FixedSetByzantineTraces}{emit_any_signed_message_vlsm_machine}).
\begin{definition}[\coqref{VLSM.Core.ByzantineTraces}{emit_any_message_vlsm}]\label{Byzantine-VLSM}
A \textbf{Byzantine component of address $i$} is a VLSM of the form $$\mathcal{B} = (L,S,S_{0}, M, M_{0}, \tau, \beta)$$ where
\begin{list}
{$\cdot$}{}
\item the labels are messages, $L = M$,
\item there is only one state which is also an initial one, $S = S_0 = \{s\}$,
\item the set of messages and initial messages coincide, $M_0 = M$,
\item the transition function ignores the input message and produces the label message
$$\tau(m,s,m') = (s,m),$$
\item the validity constraint is defined as
\begin{align*}
\beta(l,s,m) = (\senderM{l} = i).
\end{align*}
\end{list}
\end{definition}
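The following Python sketch (illustrative only; messages are modelled here as (sender, payload) pairs, which is an assumption of the sketch) mirrors this definition and shows that the only restriction on a Byzantine component is the claimed sender of its emitted messages.
\begin{verbatim}
class ByzantineComponent:
    """Byzantine component of address i: one state, any message can be
    emitted at any time, but only under the component's own address."""

    def __init__(self, address):
        self.address = address
        self.state = "s"                  # S = S_0 = {s}

    def transition(self, label_msg, input_msg):
        # tau(m, s, m') = (s, m): ignore the input, emit the label message
        return self.state, label_msg

    def valid(self, label_msg, input_msg):
        # beta(l, s, m) = (sender(l) = i)
        sender, _payload = label_msg
        return sender == self.address

b = ByzantineComponent(address=3)
assert b.valid((3, "anything"), None)     # arbitrary content is allowed...
assert not b.valid((1, "forged"), None)   # ...but forging for others is not
\end{verbatim}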
We will argue that validators in a VLSM composition do not distinguish between Byzantine components and equivocating components as defined in Section \ref{modelsOfEquivocation}. Therefore, when analysing the security of a protocol, it suffices to consider only equivocating components. We will analyse two scenarios: one in which the set of Byzantine components is fixed, and one in which the total weight of the Byzantine components is limited by a threshold.
Let $\{\mathcal{V}_i\}_{i=1}^n$ be an indexed set of VLSMs over the same set of messages. We assume that each $\mathcal{V}_i$ satisfies the {\em channel authentication assumption}.
\paragraph{Fixed-set of Byzantine components.}
Let us fix a subset $B \subseteq \{1,{\scriptscriptstyle \ldots},n\}$. We assume that, for any $i\in B$, each component $\mathcal{V}_i$ can be replaced with the corresponding Byzantine component of address $i$, $\mathcal{V}_i^B$, defined as in Definition \ref{Byzantine-VLSM}.
Assuming that the components $\mathcal{V}_i$ with $i\not \in B$ are protocol-following, they will only use messages seen in the current trace.
Formally (\coqref{VLSM.Core.ByzantineTraces.FixedSetByzantineTraces}{non_byzantine_not_equivocating_constraint}), let us define the constrained VLSM
$$\mathcal{V}_{\mathit{Byz}}^B =\ \Bigl({\sum_{i=1}^n} \mathcal{V}_i' \Bigr) \Bigr|_{\varphi_{\mathit{Byz}}} = (L,S, S_0, M, M_0,\tau, \beta \wedge {\varphi_{\mathit{Byz}}}) $$
where, for any $1\leq i \leq n$, $\mathcal{V}_i' = \mathcal{V}_i$ if $i\not\in B$ and $\mathcal{V}_i' = \mathcal{V}_i^B$ if $i\in B$, and
$$\varphi_{\mathit{Byz}}(\langle i,l\rangle, \langle s_1,{\scriptscriptstyle \ldots},s_n\rangle,m) = \sentM{s_{\senderM{m}}}{m} \vee i \in B.$$
Let $\mathit{NonB}$ denote the non-Byzantine components, i.e., $\mathit{NonB} = \{1,{\scriptscriptstyle \ldots},n\}\setminus B$.
For any $j \in \mathit{NonB}$, we call the valid traces of $\mathit{Proj}_{j}(\mathcal{V}_{\mathit{Byz}}^B)$ {\em traces exposed to $B$-fixed Byzantine behaviour}.
\begin{theorem}[\coqref{VLSM.Core.ByzantineTraces.FixedSetByzantineTraces}{validator_fixed_non_byzantine_eq_fixed_non_equivocating}]\label{fixed-byzantine-fault-tolerance}
Under the full node assumption, if the components from $\mathit{NonB}$ are validators for the message-equivocation model with the fixed set of equivocators $B$, $\mathcal{V}_{m\_{\it eqv}}^B$, then the traces exposed to $B$-fixed Byzantine behaviour coincide with the traces exposed to $B$-fixed equivocation behaviour.
\end{theorem}
\paragraph{Weight-limited subset of Byzantine components.}
Let us fix a {\em threshold} $t$. We call a {\em trace under $t$-limited Byzantine behaviour} any valid trace of the constrained composition $\mathcal{V}_{\mathit{Byz}}^B$ defined above for some $B$ whose weight is limited by $t$.
We say that a component $\mathcal{V}_i$ is a {\em validator for the $t$-limited message-equivocation model}, $\mathcal{V}_{m\_{\it eqv}}^{<t}$, if, whenever $\beta_i(l,s_i,m)$ holds for some $l\in L_i$, $s_i\in S_i$, and $m\in M \cup \{\nomessage\}$, there exists a valid state $\langle \sigma, {\it eqv}\rangle$ of $\mathcal{V}_{m\_{\it eqv}}^{<t}$ such that the $i$th component of $\sigma$ is $s_i$, $m$ is valid in $\mathcal{V}_{m\_{\it eqv}}^{<t}$, and $\beta^{<t}(\langle i,l\rangle, \langle \sigma,{\it eqv}\rangle,m)$ holds.
\begin{theorem}[\coqref{VLSM.Core.ByzantineTraces.LimitedByzantineTraces}{msg_dep_validator_limited_non_byzantine_traces_are_limited_non_equivocating}]\label{limited-byzantine-fault-tolerance}
Under the full node assumption, if all components are validators for the $t$-limited message-equivocation model, $\mathcal{V}_{m\_{\it eqv}}^{<t}$, then the possible behaviours of the non-faulty components are the same under $t$-limited Byzantine behaviour as under $t$-limited equivocation behaviour.
\end{theorem}
For both Theorems \ref{fixed-byzantine-fault-tolerance} and \ref{limited-byzantine-fault-tolerance} we assumed that the non-Byzantine components satisfy the {\em full node assumption}. We need this assumption because we used a very simple model for Byzantine components, assuming that they satisfy only the {\em channel authentication assumption}. We believe that the full node assumption\ could be dropped if the Byzantine components additionally satisfied the {\em message dependencies assumption} (and thus the {\em unforgeability assumption}), and if the models of equivocation were updated accordingly.
\section{Concluding Remarks}
The goal of this work is to provide foundations for a theory of typed fault tolerance that can replace Byzantine fault tolerance analysis in settings where it is practical to have validators. We have shown that equivocation faults are exactly as expressive as Byzantine faults when it comes to their influence on validators. This result means that in an asynchronous network without a guarantee of message arrival, Byzantine behaviour is precisely equivocation behaviour as far as validators are concerned. Traces exposed to equivocation behaviour thereby account for the effects of all possible hostile environments that validators might find themselves in, regardless of the bound on Byzantine faults, and form a complete basis for defining and limiting all possible types of faults.
We showed that limiting Byzantine behaviour and limiting equivocating behaviour have the same effect on the validators of equivocation-limited VLSMs. Limited equivocation does not guarantee that messages are delivered at all. Our full node assumption\ insists that messages are received after their dependencies, but no other assumption on the timing or order of the arrival of messages was made in this investigation. This leaves it to later work to define, account for, and limit synchronization faults.
This invites us to explore distributed systems design in different adversarial settings. For example, we can have distinct and independent limits on equivocation faults and synchronization faults. We showed that all components of an ELMO protocol are examples of equivocation-limited validators. In future work we will also show more examples of equivocation-limited validators, including consensus protocols that are safe and non-trivial. Further specifying consensus protocols that exhibit provable liveness and high performance in systems with limited synchronization faults is also left for future work.
\bibliographystyle{eptcs}
\section{Introduction}
\label{sec:intro}
The study of the properties of the Higgs boson, which is responsible
for electroweak symmetry breaking, is one of the main goals of the LHC
program. The standard model (SM) relates the mass of a fermion to its
Yukawa coupling, \ie, the strength of its interaction with the Higgs
boson, as $g_\mathrm{f} = \sqrt{2} m_\mathrm{f}/v$, where
$m_\mathrm{f}$ is the fermion mass and $v = 246.22\GeV$ is the vacuum
expectation value of the Higgs potential~\cite{Weinberg:1967tq},
obtained from a measurement of the $\mu^+$ lifetime~\cite{Webber:2010zf}.
Since fermionic masses are not predicted by the SM, their values are
only constrained by experimental observations. Given the measured value
of the top quark mass of $\ensuremath{m_{\PQt}}\xspace = 172.4 \pm
0.5\GeV$~\cite{Khachatryan:2015hba}, the top quark is the heaviest
fermion and therefore provides access to the largest Yukawa coupling,
which is expected to be close to unity in the SM\@.
It is important to verify this prediction experimentally.
We define \ensuremath{Y_{\PQt}}\xspace as the ratio of the
top quark Yukawa coupling to its SM value. In this definition, \ensuremath{Y_{\PQt}}\xspace is
equal to $\kappa_\PQt$ as defined in the ``$\kappa$
framework''~\cite{Heinemeyer:2013tqa}, which introduces coupling
modifiers to test for deviations in the SM couplings of the
Higgs boson to other particles. Several Higgs boson production
processes are sensitive to \ensuremath{Y_{\PQt}}\xspace, in particular Higgs boson
production via gluon fusion~\cite{Sirunyan:2017exp,Sirunyan:2018ouh}
and Higgs boson production in association with top quark pairs,
$\ttbar\PH$~\cite{Sirunyan:2018hoz}. In both cases, in addition to
\ensuremath{Y_{\PQt}}\xspace, the rate depends on the Higgs boson coupling to the decay
products, \eg, bottom quarks or $\tau$ leptons. The only Higgs boson
production process that is sensitive exclusively to \ensuremath{Y_{\PQt}}\xspace is
$\ttbar\PH$ production with the Higgs boson decaying to a \ttbar pair,
leading to a four top quark final state~\cite{Sirunyan:2017roi}. In
this paper, we explore a complementary approach to measure \ensuremath{Y_{\PQt}}\xspace
independently of the Higgs coupling to other particles by utilizing a
precise measurement of the top quark pair production cross section,
which is affected by a virtual Higgs boson exchange. It has been shown
that in the top quark pair production threshold region, which
corresponds to a small relative velocity between the top quark and
antiquark, the \ttbar cross section is sensitive to the top quark
Yukawa coupling through weak force mediated
corrections~\cite{Kuhn:2013zoa}. For example, doubling the Yukawa
coupling would lead to a change in the observed differential cross
section comparable to the current experimental precision of
around 6\%~\cite{Khachatryan:2016mnb}. A detailed study of the
differential \ttbar kinematic properties close to the production
threshold could, therefore, determine the value of the top quark Yukawa
coupling. This approach is similar to the threshold scan methods
proposed for $\Pe^+\Pe^-$ colliders~\cite{Strassler:1990nw,Beneke:2015lwa}.
We calculate the weak interaction correction factors for different
values of \ensuremath{Y_{\PQt}}\xspace using \textsc{hathor}\xspace (v2.1)~\cite{Aliev:2010zk} and apply
them at the parton level to existing \ttbar simulated samples. From
these modified simulations, we obtain distributions at detector level
that can be directly compared to data. The Yukawa coupling is extracted
from the distributions of the invariant mass of the top quark pair,
\ensuremath{M_{\ttbar}}\xspace, and the rapidity difference between the top quark and antiquark,
$\ensuremath{\Delta y_{\ttbar}}\xspace = y_{\PQt} - y_{\PAQt}$, for different jet
multiplicities. The low \ensuremath{M_{\ttbar}}\xspace and small $\abs{\ensuremath{\Delta y_{\ttbar}}\xspace}$ regions are
the most sensitive to \ensuremath{Y_{\PQt}}\xspace.
Top quarks decay almost exclusively via $\PQt\to\PW\PQb$ and the
final topology depends on the \PW~boson decays. When one \PW~boson
decays leptonically and the other decays hadronically, $\PQt\PAQt
\to \PW^+\PQb\,\PW^-\PAQb\to \ell^+\nu \PQb \, \PQq\PAQq^\prime
\PAQb$ + charge conjugate, the final state at leading order (LO) consists of an
isolated lepton (electron or muon in this analysis), missing transverse
momentum (from the neutrino), and four jets (from two {\cPqb} quarks and two
light quarks). This final state has a sizable branching fraction of
34\%, low backgrounds, and allows for the kinematic reconstruction of
the original top quark candidates. This analysis follows the methodology
employed in Ref.~\cite{Sirunyan:2018wem} and introduces a novel
algorithm to reconstruct the \ttbar pair when only three jets are
detected.
The outline of this paper is as follows. Section~\ref{sec:reweight} introduces
the method of implementing the weak force corrections in simulated events
as well as the variables sensitive to the top quark Yukawa coupling.
Section~\ref{sec:detector} describes the CMS detector. The
data and simulated samples used in the analysis are described in
Section~\ref{sec:dataset}. The event selection criteria are discussed in
Section~\ref{sec:selection}. The algorithm used to reconstruct \ttbar
events is described in Section~\ref{sec:reco}. Details on background
estimation and event yields are covered in Sections~\ref{sec:bck}
and~\ref{sec:controlplots}. The statistical methodologies and the
systematic uncertainties are described in Sections~\ref{sec:stat}
and~\ref{sec:sys}, respectively. Section~\ref{sec:limit} presents the
results of the fit to data. Section~\ref{sec:summary} summarizes the results.
\section{Weak interaction corrections to \texorpdfstring{\ttbar}{ttbar} production}
\label{sec:reweight}
Recent calculations provide next-to-next-to-leading-order (NNLO) predictions
within the framework of perturbative quantum chromodynamics (QCD) for
the \ttbar production cross
section~\cite{Czakon:2017wor,Czakon:2019txp}. Photon-mediated
corrections have been determined to be small~\cite{Hollik:2007sw}. The weak
force corrections to the \ttbar production cross section were
originally calculated~\cite{Beenakker:1993yr} before the top quark
discovery and were found to have a very small effect on the total cross
section, so they are typically not implemented in Monte Carlo (MC)
event generators. Nevertheless, they can have a sizable impact on differential
distributions and on the top quark charge asymmetry. There is no
interference term of order $\alpS\ensuremath{\alpha_\text{weak}}\xspace$ between
the lowest-order strong force mediated and neutral current amplitudes
in the quark-induced processes. The weak force corrections start entering
the cross section at loop-induced order $\alpS^2\ensuremath{\alpha_\text{weak}}\xspace$
(as shown in Fig.~\ref{p:intro:feynman}). A majority of weak
corrections do not depend on the top quark Yukawa coupling. Amplitudes
linear in \ensuremath{Y_{\PQt}}\xspace, which arise from the production of an intermediate
$s$-channel Higgs boson through a closed {\cPqb} quark loop, can be ignored
because of the small {\cPqb} quark mass. However, the amplitude of the Higgs
boson contribution to the loop ($\Gamma=\PH$ in
Fig.~\ref{p:intro:feynman}) is proportional to $\ensuremath{Y_{\PQt}}\xspace^2$. The
interference of this process with the Born-level \ttbar production has a
cross section proportional to $\alpS^2\ensuremath{Y_{\PQt}}\xspace^2$. Thus, in
some kinematic regions, the weak corrections become large and may lead
to significant distortions of differential distributions.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.35\textwidth]{Figure_001-a.pdf}
\hspace{1cm}
\includegraphics[width=0.35\textwidth]{Figure_001-b.pdf}
\caption{Example of Feynman diagrams for gluon- and \qqbar-induced processes of \ttbar production and the virtual corrections. The symbol $\Gamma$ stands for all contributions from gauge and Higgs boson exchanges.}
\label{p:intro:feynman}
\end{figure}
The \textsc{hathor}\xspace generator calculates the partonic cross section value,
including the next-to-leading-order (NLO) weak corrections at order
$\mathcal{O}(\alpS^2\ensuremath{\alpha_\text{weak}}\xspace)$ for given \ensuremath{M_{\ttbar}}\xspace and
$\abs{\ensuremath{\Delta y_{\ttbar}}\xspace}$. The mass of the top quark is fixed at $\ensuremath{m_{\PQt}}\xspace = 172.5\GeV$,
and its uncertainty is treated as a source of systematic uncertainty.
We use \textsc{hathor}\xspace to extract a two-dimensional correction factor that
contains the ratio of the \ttbar production cross section with weak
corrections over the LO QCD production cross section in bins of \ensuremath{M_{\ttbar}}\xspace
and $\abs{\ensuremath{\Delta y_{\ttbar}}\xspace}$. This is done for different hypothesized values of
\ensuremath{Y_{\PQt}}\xspace, as shown in projections in Fig.~\ref{p:reweight:1d}.
The largest effects arise near the \ttbar production threshold region and can be as high as 12\% for \ensuremath{Y_{\PQt}}\xspace = 2.
We then apply this correction factor at the parton level as a weight to each \ttbar event
simulated with \POWHEG
(v2)~\cite{Nason:2004rx,Frixione:2007vw,Alioli:2010xd,Campbell:2014kua}.
In the distributions at the detector level,
the experimental resolutions and the systematic uncertainties, which
are especially significant in the low-\ensuremath{M_{\ttbar}}\xspace region, will reduce the
sensitivity to this effect.
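As an illustration of this reweighting step, the following Python sketch applies a two-dimensional correction grid to simulated events; the bin edges and correction factors below are placeholders and not the \textsc{hathor}\xspace values.
\begin{verbatim}
import numpy as np

# Hypothetical (M_ttbar, |dy|) correction grid; the real factors are the
# HATHOR ratios of (LO QCD + weak, given Yt) over LO QCD per 2D bin.
mtt_edges = np.array([300.0, 400.0, 500.0, 700.0, 1000.0, 2000.0])  # GeV
dy_edges = np.array([0.0, 0.6, 1.2, 10.0])
R = np.array([[1.05, 1.04, 1.02],     # placeholder numbers
              [1.02, 1.02, 1.01],
              [1.00, 1.00, 1.00],
              [0.99, 0.99, 1.00],
              [0.99, 0.99, 0.99]])

def event_weight(mtt, dy):
    """Per-event weight applied at parton level to POWHEG ttbar events."""
    i = np.clip(np.searchsorted(mtt_edges, mtt) - 1, 0, len(mtt_edges) - 2)
    j = np.clip(np.searchsorted(dy_edges, abs(dy)) - 1, 0, len(dy_edges) - 2)
    return R[i, j]

print(event_weight(mtt=350.0, dy=0.2))  # near threshold: largest correction
\end{verbatim}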
\begin{figure}[htbp]
\centering
\includegraphics[width=0.45\textwidth]{Figure_002-a.pdf}
\includegraphics[width=0.45\textwidth]{Figure_002-b.pdf}
\caption{The dependence of the ratio of weak force corrections over the LO QCD production cross section as calculated by \textsc{hathor}\xspace on the sensitive kinematic variables \ensuremath{M_{\ttbar}}\xspace and \ensuremath{\Delta y_{\ttbar}}\xspace at the generator level for different values of \ensuremath{Y_{\PQt}}\xspace. The lines contain an uncertainty band (generally not visible) derived from the dependence of the weak correction on the top quark mass varied by $\pm$1\GeV.
}
\label{p:reweight:1d}
\end{figure}
\section{The CMS detector}
\label{sec:detector}
The central feature of the CMS detector is a superconducting solenoid
of 6\unit{m} internal diameter, providing a magnetic field of 3.8\unit{T}. Within
the solenoid volume are a silicon pixel and strip tracker, a lead
tungstate crystal electromagnetic calorimeter (ECAL), and a brass and
scintillator hadron calorimeter (HCAL), each composed of a barrel and
two endcap sections. Forward calorimeters extend the coverage provided
by the barrel and endcap detectors. Muons are measured in
gas-ionization detectors embedded in the steel flux-return yoke outside
the solenoid. A more detailed description of the CMS detector, together
with a definition of the coordinate system and relevant kinematical
variables, can be found in Ref.~\cite{Chatrchyan:2008aa}.
The particle-flow (PF) algorithm~\cite{ref:particleflow} reconstructs
and identifies each individual particle with an optimized combination
of information from the various elements of the detector systems. The
energy of photons is directly obtained from the ECAL measurements,
corrected for zero-suppression effects. The energy of electrons is
determined from a combination of the electron momentum at the primary
interaction vertex as determined by the tracker, the energy of the
corresponding ECAL cluster, and the energy sum of all bremsstrahlung
photons spatially compatible with originating from the electron track.
The momentum of muons is obtained from the curvature of the
corresponding track, combining information from the silicon tracker and
the muon system. The energy of charged hadrons is determined from a
combination of their momentum measured in the tracker and the matching
ECAL and HCAL energy deposits, corrected for zero-suppression effects
and for the response function of the calorimeters to hadronic showers.
Finally, the energy of neutral hadrons is obtained from the
corresponding corrected ECAL and HCAL energy. The reconstructed vertex
with the largest value of the sum of the physics objects transverse
momentum squared, $\pt^2$, is taken to be
the primary proton-proton (\Pp{}\Pp{}) interaction vertex.
\section{Data set and modeling}
\label{sec:dataset}
The data used for this analysis correspond to an integrated luminosity
of 35.8\fbinv at a center-of-mass energy of 13\TeV. Events are selected
if they pass single-lepton triggers~\cite{Khachatryan:2016bia}. These
require a transverse momentum $\pt > 27\GeV$ for electrons and $\pt >
24\GeV$ for muons, each within pseudorapidity $\abs{\eta} < 2.4$, as
well as various quality and isolation criteria.
The MC event generator \POWHEG is used to simulate \ttbar events. It calculates QCD matrix
elements up to NLO and uses \PYTHIA (v8.205)~\cite{Sjostrand:2014zea} with the
CUETP8M2T4 tune~\cite{ISR_FSR} for the parton shower simulations. The
default parametrization of the parton distribution functions (PDFs)
used in all simulations is NNPDF3.0~\cite{Ball:2014uwa}. A top quark
mass of 172.5\GeV is used. When compared to the data, the simulation
is normalized to an inclusive \ttbar production cross section of
$832^{+40}_{-46}$\unit{pb}~\cite{Czakon:2011xx}. This value is calculated at
NNLO accuracy, including the resummation of
next-to-next-to-leading-logarithmic soft gluon terms. The quoted
uncertainty is from the choice of hadronization, factorization, and
renormalization scales and the PDF uncertainties.
The background processes are modeled using the same techniques. The
\MGvATNLO generator~\cite{Alwall:2014hca} is used to simulate \PW~boson
and Drell--Yan (DY) production in association with jets and $t$-channel
single top quark production. The \POWHEG generator is used to simulate
a single top quark produced in association with a \PW~boson
($\PW\cPqt$), and \PYTHIA is used for QCD multijet production. In all
cases, the parton shower and the hadronization are simulated by
\PYTHIA. The \PW~boson and DY backgrounds are normalized to their NNLO
cross sections calculated with \FEWZ~\cite{Li:2012wna}. The cross
sections of single top quark processes are normalized to NLO
calculations~\cite{Kant:2014oha,Kidonakis:2012rm}, and the QCD multijet
simulation is normalized to the LO cross section from \PYTHIA. As
explained in Section~\ref{sec:bck}, the shape and the overall
normalization of the QCD multijet contribution to the background are
derived using data in a control region. The QCD multijet simulation is
only used to determine relative contributions from different regions.
The detector response is simulated using
\GEANTfour{}~\cite{Agostinelli:2002hh}. The same algorithms that are
applied to the collider data are used to reconstruct the simulated
data. Multiple proton-proton interactions per bunch crossing (pileup)
are included in the simulation. To correct the simulation to be in
agreement with the pileup conditions observed during the data taking,
the average number of pileup events is calculated
for the measured instantaneous luminosity. The simulated events are
weighted, depending on their number of pileup interactions, to
reproduce the measured pileup distribution.
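A minimal sketch of this pileup reweighting is given below in Python; both toy multiplicity distributions are invented for illustration, whereas in the analysis the target is derived from the measured instantaneous luminosity.
\begin{verbatim}
import numpy as np

# Per-event weights from the ratio of the measured to the simulated
# pileup-multiplicity distributions (illustrative inputs only).
rng = np.random.default_rng(7)
n_pu_mc = rng.poisson(lam=20, size=100000)     # simulated pileup counts
n_pu_data = rng.poisson(lam=23, size=100000)   # stand-in for data profile

bins = np.arange(0, 61)
h_mc, _ = np.histogram(n_pu_mc, bins=bins, density=True)
h_data, _ = np.histogram(n_pu_data, bins=bins, density=True)

ratio = np.divide(h_data, h_mc, out=np.zeros_like(h_data), where=h_mc > 0)
weights = ratio[np.clip(n_pu_mc, 0, len(ratio) - 1)]
print(weights.mean())   # close to unity by construction
\end{verbatim}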
\section{Event reconstruction and selection}
\label{sec:selection}
Jets are reconstructed from the PF candidates and are clustered by the anti-\kt
algorithm~\cite{Cacciari:2008gp,Cacciari:2011ma} with a distance
parameter $R = 0.4$.
The jet momentum is determined as the vectorial sum of the momenta of all
PF candidates in the jet. An offset correction is applied to jet
energies to take into account the contribution from pileup within the
same or nearby bunch crossings. Jet energy corrections are derived from
simulation and are improved with in situ measurements of the energy
balance in dijet, QCD multijet, photon+jet, and leptonically decaying \PZ+jet
events~\cite{Chatrchyan:2011ds,Khachatryan:2016kdb}. Additional
selection criteria are applied to each event to remove spurious
jet-like features originating from isolated noise patterns in certain
HCAL and ECAL regions~\cite{CMS-PAS-JME-10-003}.
Jets are identified as originating from {\cPqb} quarks using the combined
secondary vertex algorithm (CSV) v2~\cite{Sirunyan:2017ezt}. Data
samples are used to measure the probability of correctly identifying
jets as originating from {\cPqb} quarks ({\cPqb} tagging efficiency), and the
probability of misidentifying jets originating from light-flavor
partons (\cPqu, \cPqd, \cPqs quarks or gluons) or a charm quark as a {\cPqb}-tagged jet
(the light-flavor and charm mistag
probabilities)~\cite{Sirunyan:2017ezt}. To identify a jet as a {\cPqb} jet, its
CSV discriminant is required to be greater than 0.85. This working
point yields a {\cPqb} tagging efficiency of 63\% for jets with \pt typical
of \ttbar events, and charm and light-flavor mistag probabilities of
approximately 12 and 2\%, respectively (around 3\% in total).
The missing transverse momentum, \ptvecmiss, is calculated as the
negative vector sum of the transverse momenta of all PF candidates in
an event. The energy scale corrections applied to jets are propagated
to \ptvecmiss. Its magnitude is referred to as \ptmiss.
Candidate signal events are defined by the presence of a muon or an
electron that is isolated from other activity in the event,
specifically jets, and \ptvecmiss associated with a neutrino. The
isolation variables exclude the contributions from the physics object
itself and from pileup events. The efficiencies of lepton
identification and selection criteria are derived using a tag-and-probe
method in \pt and $\eta$ regions~\cite{TNPREF}. The same lepton isolation
criteria described in Ref.~\cite{Sirunyan:2018wem} are followed here.
To reduce the background contributions and to optimize the \ttbar
reconstruction, additional requirements on the events
are imposed. Only events with exactly one isolated
muon~\cite{Chatrchyan:2012xi} or electron~\cite{Khachatryan:2015hwa}
with $\pt > 30\GeV$ and $\abs{\eta} < 2.4$ are selected; no additional
isolated muons or electrons with $\pt > 15\GeV$ and $\abs{\eta} < 2.4$
are allowed; at least three jets with $\pt > 30\GeV$ and $\abs{\eta}
< 2.4$ are required, and at least two of them must be {\cPqb} tagged.
The \PW~boson transverse mass, defined as $M_\mathrm{T}(\PW) =
\sqrt{\smash[b]{2\pt^\ell\ptmiss[1-\cos(\Delta\phi_{\ell,\ptvecmiss})]}}$, is required to be less than 140\GeV, where
$\pt^\ell$ is the transverse momentum of the lepton. For \ttbar
events with only three jets in the final state, the \pt of the leading
\cPqb-tagged jet is required to be greater than 50\GeV.
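For concreteness, the \PW~boson transverse mass used in this selection can be computed as in the short Python sketch below (inputs in GeV and radians).
\begin{verbatim}
import math

def mt_w(pt_lep, pt_miss, dphi):
    """Transverse mass of the W candidate from the lepton pt and ptmiss
    (both in GeV) and their azimuthal separation dphi (radians)."""
    return math.sqrt(2.0 * pt_lep * pt_miss * (1.0 - math.cos(dphi)))

# A back-to-back lepton/ptmiss pair at 40 GeV each gives MT(W) = 80 GeV,
# well below the MT(W) < 140 GeV requirement:
print(mt_w(40.0, 40.0, math.pi))   # -> 80.0
\end{verbatim}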
\section{Reconstruction of the top quark-antiquark system}
\label{sec:reco}
The goal of reconstructing \ttbar events is to determine the top quark
and antiquark four-momenta. For this, it is necessary to correctly
match the final-state objects to the top quark and antiquark decay products.
We always assume that the two \cPqb-tagged jets with the
highest CSV discriminant values are associated with the two \cPqb quarks
from \ttbar decays. For each event, we test the possible assignments of
jets as \ttbar decay products and select the one with the highest value
of a likelihood discriminant constructed based on the available
information.
The first step in building the likelihood discriminant is to
reconstruct the neutrino four-momentum $p_\nu$ based on the measured
\ptvecmiss, the lepton momentum $p_\ell$, and the momentum
$p_{\ensuremath{\cPqb_\ell}\xspace}$ of the jet associated with the \cPqb quark from the top
quark decay. The neutrino solver algorithm~\cite{Betchart:2013nba} uses
a geometric approach to find all possible solutions for the neutrino
momentum based on the two mass constraints $(p_\nu + p_\ell)^2 =
m_{\PW}^2 = (80.4\GeV)^2$ and $(p_\nu + p_\ell + p_{\ensuremath{\cPqb_\ell}\xspace})^2 = m_\cPqt^2$. Each
equation describes an ellipsoid in the three-dimensional neutrino
momentum space. The intersection of these two ellipsoids is usually an
ellipse. We select $p_\nu$ as the point on the ellipse for which the
distance $\ensuremath{D_{\nu,\mathrm{min}}}\xspace$ between the ellipse projection onto the
transverse plane ($p_{\nu x}$,$p_{\nu y}$) and the measured \ptvecmiss
is minimal. The algorithm leads to a unique solution for the
longitudinal component of the neutrino momentum and an improved
resolution for its transverse component. When the invariant mass of
the lepton and the \ensuremath{\cPqb_\ell}\xspace candidate is above \ensuremath{m_{\PQt}}\xspace, no solution
can be found and this jet assignment is discarded. If both
\ensuremath{\cPqb_\ell}\xspace candidates fail this requirement, then the event is
rejected. The algorithm is applied for each of the two $\cPqb$ jet possibilities
and the minimum distance $\ensuremath{D_{\nu,\mathrm{min}}}\xspace$ is used to identify the correct
$\cPqb$ jet in the leptonic top quark decay, \ensuremath{\cPqb_\ell}\xspace, as described below.
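The selection of the neutrino momentum can be illustrated with the short Python sketch below, which scans a parametrized solution ellipse and keeps the point whose transverse projection is closest to \ptvecmiss; the ellipse parameters are invented placeholders, and the full constraint algebra of Ref.~\cite{Betchart:2013nba} is not reproduced here.
\begin{verbatim}
import numpy as np

# Conceptual sketch of the D_nu,min selection: scan an ellipse of neutrino
# momenta satisfying both mass constraints and keep the point closest to
# ptmiss in the transverse plane. All parameters below are invented.
center = np.array([30.0, 10.0, 5.0])    # GeV, hypothetical ellipse center
axis_a = np.array([20.0, 0.0, 8.0])     # hypothetical semi-axis vectors
axis_b = np.array([0.0, 15.0, 3.0])
ptmiss = np.array([45.0, 12.0])         # measured (px, py) of ptmiss

t = np.linspace(0.0, 2.0 * np.pi, 2000, endpoint=False)
p_nu = center + np.outer(np.cos(t), axis_a) + np.outer(np.sin(t), axis_b)
d = np.hypot(p_nu[:, 0] - ptmiss[0], p_nu[:, 1] - ptmiss[1])

best = int(np.argmin(d))
print(d[best], p_nu[best])  # assignment discarded later if D_nu,min > 150 GeV
\end{verbatim}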
\subsection{Reconstruction of events with at least four jets}
\label{sec:reco4j}
The likelihood discriminant for events with at least four reconstructed
jets is built to minimize the calculated \ensuremath{D_{\nu,\mathrm{min}}}\xspace, and to simultaneously
ensure that the invariant mass of the two jets hypothesized to
originate from the \PW~boson decay (\ensuremath{M_{\PW_{\mathrm{h}}}}\xspace) is
consistent with the \PW~boson mass, and that the invariant mass of the
three jets hypothesized to originate from the hadronically decaying top
quark (\ensuremath{M_{\PQt_{\mathrm{h}}}}\xspace) is consistent with $\ensuremath{m_{\PQt}}\xspace$. The
likelihood discriminant for events with at least four jets, $\lambda_4$, is constructed as
\begin{equation}
{-}\ln [ \lambda_4 ] = {-}\ln \left [ P_\mathrm{m}(\ensuremath{M_{\PW_{\mathrm{h}}}}\xspace, \ensuremath{M_{\PQt_{\mathrm{h}}}}\xspace ) \right ] {-}\ln \left [ P_{\nu}(\ensuremath{D_{\nu,\mathrm{min}}}\xspace) \right ],
\label{TTRECEQ1}
\end{equation}
where $P_\mathrm{m}$ is the two-dimensional probability density to
correctly reconstruct the \PW~boson and top quark invariant masses, and
$P_{\nu}$ is the probability density describing the distribution of \ensuremath{D_{\nu,\mathrm{min}}}\xspace for a
correctly selected \ensuremath{\cPqb_\ell}\xspace. On average, the distance \ensuremath{D_{\nu,\mathrm{min}}}\xspace for a
correctly selected \ensuremath{\cPqb_\ell}\xspace is smaller and has a lower tail
compared to the distance obtained for other jets. Jet assignments with
values of $\ensuremath{D_{\nu,\mathrm{min}}}\xspace > 150\GeV$ are rejected since they are very unlikely
to originate from a correct \ensuremath{\cPqb_\ell}\xspace association. The
distributions from which $P_\mathrm{m}$ and $P_{\nu}$ are derived,
together with $\lambda_4$ are shown in Figs.~2 (top-left), 2 (bottom-left) and~4 (left) of Ref.~\cite{Sirunyan:2018wem}, respectively.
The efficiency of the reconstruction algorithm is defined as the
probability that the most likely assignment, as identified by the
largest value of $\lambda_4$, is the correct one, given that all decay
products from the \ttbar decay are reconstructed and selected. Since
the number of possible assignments increases drastically with the
number of jets, it is more likely to select a wrong assignment if there
are additional jets. The algorithm identifies the correct assignment in
around 84\% of the four-jet events, 69\% of the five-jet events, and
53\% of the six-jet events.
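A schematic Python version of this assignment choice is sketched below; the probability densities and kinematic helpers are placeholders for those derived from simulation, and in the analysis the two {\cPqb} candidates are in addition fixed to the two highest-CSV jets.
\begin{verbatim}
import math
from itertools import permutations

# Sketch of the lambda_4 choice: enumerate assignments of jet indices to
# (b_lep, b_had, W jet 1, W jet 2), keep the one minimizing -ln(lambda_4).

def neg_log_lambda4(assign, P_m, P_nu, mass_w, mass_top, d_nu_min):
    b_lep, b_had, w1, w2 = assign
    d = d_nu_min(b_lep)              # from the neutrino solver
    if d > 150.0:                    # very unlikely to be the correct b_lep
        return math.inf
    m_wh = mass_w(w1, w2)            # hadronic W candidate mass
    m_th = mass_top(b_had, w1, w2)   # hadronic top candidate mass
    return -math.log(P_m(m_wh, m_th)) - math.log(P_nu(d))

def best_assignment(jet_indices, **helpers):
    # the two W jets are unordered, so keep only one ordering of them
    options = [a for a in permutations(jet_indices, 4) if a[2] < a[3]]
    return min(options, key=lambda a: neg_log_lambda4(a, **helpers))
\end{verbatim}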
\subsection{Reconstruction of events with exactly three jets}
\label{sec:reco3j}
The most sensitive region of the phase space to probe the size of the
top quark Yukawa coupling is at the threshold of \ttbar production.
However, the efficiency for selecting \ttbar events in this region is
rather low, since one or more quarks from the \ttbar decay are likely
to have \pt or $\eta$ outside of the selection thresholds, resulting in
a missing jet. To mitigate this effect, an algorithm was developed for
the reconstruction of
\ttbar events with one missing jet~\cite{Demina:2013wda}.
As the missing jet in 93\% of the selected three-jet events is associated
with a quark from the \PW~boson decay, we assume the two jets with the highest
CSV discriminant are associated with {\cPqb} quarks from the \ttbar decay.
The remaining two-fold ambiguity is in the assignment of the {\cPqb}-tagged jets:
which one originates from the hadronic and which one from the
semileptonic top quark decay. For each of the two possible {\cPqb} jet
assignments, the algorithm uses the neutrino solver to calculate the
corresponding minimum distance $\ensuremath{D_{\nu,\mathrm{min}}}\xspace$. If the
neutrino solver yields no solution, this jet assignment is discarded
and the other solution is used if available. Events with no solutions
are discarded. If both {\cPqb} jet candidates have solutions for
neutrino momentum, a likelihood discriminant is constructed using the
minimum distance $\ensuremath{D_{\nu,\mathrm{min}}}\xspace$ and the invariant mass
\ensuremath{M_{\PQt_{\mathrm{h}}}}\xspace of the two jets hypothesized to belong to the
hadronic top quark decay. We choose the jet assignment with the lowest
value of the negative log likelihood ${-}\ln[\lambda_3]$ defined as
\begin{equation}
{-}\ln[\lambda_3] = {-}\ln \left [ P_{\ensuremath{M_{\PQt_{\mathrm{h}}}}\xspace} \right ] {-}\ln \left [ P_{\nu}(\ensuremath{D_{\nu,\mathrm{min}}}\xspace) \right ],
\label{TTRECEQ2}
\end{equation}
where the label 3 refers to the requirement of three jets.
The function $P_{\nu}(\ensuremath{D_{\nu,\mathrm{min}}}\xspace)$ is the probability density of
$\ensuremath{D_{\nu,\mathrm{min}}}\xspace$ to correctly identify \ensuremath{\cPqb_\ell}\xspace, and
$P_{\ensuremath{M_{\PQt_{\mathrm{h}}}}\xspace}$ is the probability density of the invariant
mass of the hypothesized \ensuremath{\cPqb_\mathrm{h}}\xspace and the jet from the
\PW~boson decay. Figures~\ref{p:reco3j} (\cmsLeft) and (middle) show
the separation between correct and incorrect $\cPqb$ assignments in the
relevant variables for signal events.
The distribution of $-\ln[\lambda_3]$ is shown in the \cmsRight plot
of Fig.~\ref{p:reco3j}. Jet assignments with values of $-\ln[\lambda_3]
> 13$ are discarded to improve the signal-to-background ratio.
Overall, this algorithm identifies the correct {\cPqb} jet assignment in 80\% of three-jet events.
Semileptonic top quark decays are fully reconstructible, regardless of
whether the event has three or four jets. The hadronically decaying top
quark candidate in the missing jet category is approximated by the
system of two jets identified to be associated with the hadronic top
quark decay. Figure~\ref{p:reco3j:resol} shows the relative difference between the reconstructed and generated values of the \ttbar invariant mass and of the rapidity difference
for three-jet events, compared to those with four jets. Because of the
missing jet, the observed value of \ensuremath{M_{\ttbar}}\xspace in the three-jet category
tends to be lower than in the four-jet category. However, this shift
does not affect the \ensuremath{Y_{\PQt}}\xspace measurement since the data are compared to
the simulation in each different jet multiplicity bin: only the widths
of these distributions are important.
Figure~\ref{p:reco3j:resol} demonstrates that the three-jet
reconstruction is competitive with the one achieved in the four-jet
category.
To summarize, the newly developed three-jet reconstruction algorithm
allows us to increase the yields in the sensitive low-\ensuremath{M_{\ttbar}}\xspace region. As
will be shown in Section~\ref{sec:sys}, the addition of three-jet
events also helps to reduce the systematic uncertainty from effects
that cause migration between jet multiplicity bins, \eg, jet energy
scale variation and the hadronization model. The analysis is performed
in three independent channels based on the jet multiplicity of the
event: three, four, and five or more jets.
\begin{figure}[tbp]
\centering
\includegraphics[width=\cmsFigWidth]{Figure_003-a.pdf}
\includegraphics[width=\cmsFigWidth]{Figure_003-b.pdf}
\includegraphics[width=\cmsFigWidth]{Figure_003-c.pdf}
\caption{Three-jet reconstruction. Distributions of the distance \ensuremath{D_{\nu,\mathrm{min}}}\xspace for correctly and wrongly selected \ensuremath{\cPqb_\ell}\xspace candidates (\cmsLeft). Mass distribution of the correctly and wrongly selected \ensuremath{\cPqb_\mathrm{h}}\xspace and the jet from the \PW~boson (middle). Distribution of the negative combined log-likelihood (\cmsRight). All distributions are normalized to have unit area.}
\label{p:reco3j}
\end{figure}
\begin{figure}[tbp]
\centering
\includegraphics[width=\cmsFigWidth]{Figure_004-a.pdf}
\includegraphics[width=\cmsFigWidth]{Figure_004-b.pdf}
\caption{Relative difference between the reconstructed and generated \ensuremath{M_{\ttbar}}\xspace (\cmsLeft) and \ensuremath{\Delta y_{\ttbar}}\xspace (\cmsRight) for three-jet and four-jet event categories.}
\label{p:reco3j:resol}
\end{figure}
\section{Background estimation}
\label{sec:bck}
The backgrounds in this analysis arise from QCD multijet production, single
top quark production, and vector boson production in association with
jets (V+jets). The expected number of events from $\PW\PW$ and $\PW\PZ$
production is negligible and we ignore this contribution in the signal
region (SR).
The contributions from single top quark and V+jets production are
estimated from the simulated samples. Rather than relying on the
relatively small simulated sample of QCD multijet events, smoother
distributions in \ensuremath{M_{\ttbar}}\xspace and $\abs{\ensuremath{\Delta y_{\ttbar}}\xspace}$ are obtained from data in a
control region (CR).
Events in the CR are selected in the same way as the signal events,
except that the maximum value of the CSV discriminant of jets in each
event has to be less than 0.6. Hence, events in the CR originate
predominantly from V+jets and QCD multijet processes. The simulation in
this background-enriched CR describes the data well within uncertainties. We take the
distributions in \ensuremath{M_{\ttbar}}\xspace and $\abs{\ensuremath{\Delta y_{\ttbar}}\xspace}$ from data in the CR, after
subtracting the expected contribution from the V+jets, single
top quark, \ttbar, and $\PW\PW$ and $\PW\PZ$ processes. To obtain distributions in
the SR, the distributions in the CR are then normalized by the ratio of
the number of events in the SR ($N_{\mathrm{QCD\,MC}}^{\mathrm{SR}}$) and CR
($N_{\mathrm{QCD\,MC}}^{\mathrm{CR}}$) determined from simulated QCD multijet events:
\begin{equation}
N_{\mathrm{QCD}}^{\mathrm{SR}} = N_{\mathrm{res\,DATA}}^{\mathrm{CR}} \, \frac{N_{\mathrm{QCD\,MC}}^{\mathrm{SR}}}{N_{\mathrm{QCD\,MC}}^{\mathrm{CR}}},
\label{eq:bck:qcdnorm}
\end{equation}
where $N_{\mathrm{res\,DATA}}^{\mathrm{CR}}$ is the residual yield in data (after
subtracting the background contributions not from QCD multijet).
The SR-to-CR ratio of simulated events in Eq.~(\ref{eq:bck:qcdnorm}) is 0.043 $\pm$ 0.014,
0.041 $\pm$ 0.012, and 0.081 $\pm$ 0.015 for three, four, and five or
more jets, respectively. The normalization
uncertainty is estimated to be 30\%. The shape uncertainty due to the
CR definition is evaluated by selecting events for which the lepton
fails the isolation requirement. The uncertainty is defined by the
difference between the distributions of events that pass or fail the
CSV discriminant requirement and can be as large as 60\% in some
regions of phase space.
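The normalization step of Eq.~(\ref{eq:bck:qcdnorm}) amounts to the following short Python sketch; the control-region yields used here are illustrative, and only the transfer factor corresponds to the three-jet value quoted above.
\begin{verbatim}
def qcd_sr_yield(n_data_cr, n_nonqcd_cr, sr_over_cr_mc):
    """N_QCD^SR = (data in CR minus non-QCD backgrounds) x MC SR/CR ratio."""
    return (n_data_cr - n_nonqcd_cr) * sr_over_cr_mc

# Illustrative CR yields with the measured three-jet transfer factor:
print(qcd_sr_yield(n_data_cr=100000.0, n_nonqcd_cr=20000.0,
                   sr_over_cr_mc=0.043))   # -> 3440.0
\end{verbatim}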
\section{Event yields and control plots}
\label{sec:controlplots}
Table~\ref{tab:expyields} shows the expected and observed event yields
after event selection and \ttbar reconstruction, including the
statistical uncertainties in the expected yields. All of the \ttbar
components depend on the top quark Yukawa coupling from the production,
so all of them are considered as signal. Here, the signal simulation is
divided into the following categories: correctly reconstructed \ttbar
systems (\ttbar right reco); events where all required decay products
are available, but the algorithm failed to identify the correct jet
assignments (\ttbar wrong reco); $\ell$+jets~\ttbar events where at
least one required decay product is missing (\ttbar
nonreconstructible); and \ttbar events from dileptonic,
$\PW \to\tau\nu$, or fully hadronic decays (\ttbar background).
\begin{table}[htbp]
\centering
\topcaption{
Expected and observed yields after event selection and \ttbar reconstruction, with statistical uncertainties in the expected yields. The QCD multijet yield is derived from Eq.~(\ref{eq:bck:qcdnorm}) and its uncertainty is the statistical uncertainty in the control region from the data-based QCD multijet determination described in Section~\ref{sec:bck}.}
\renewcommand{\arraystretch}{1.}
\begin{scotch}{l r@{$\pm$}l r@{$\pm$}l r@{$\pm$}l}
Source & \multicolumn{2}{c}{3 jets} & \multicolumn{2}{c}{4 jets} & \multicolumn{2}{c}{$\geq$5 jets} \\
\hline
\ttbar right reco & 130\,520 &150 & 92\,900 &130 & 71\,640 &110 \\
\ttbar wrong reco & 29\,298 &73 & 17\,356 &57 & 43\,073 &89 \\
\ttbar nonreco & 50\,695 &96 & 88\,760 &130 & 80\,960 &120 \\
\ttbar background & 53\,465 &99 & 26\,085 &69 & 25\,047 &68 \\[\cmsTabSkip]
Single \PQt & 17\,849 & 40 & 6922 &27 & 6294 &26 \\
V+jets & 8990 & 100& 2824 &52 & 2478 &49 \\
QCD multijet & 19\,840 & 69 & 2100 &25 & 1080 &30 \\[\cmsTabSkip]
Expected sum & 310\,650 &250 & 236\,950 &210 & 230\,570 &210\\
Data & \multicolumn{2}{l}{308\,932} & \multicolumn{2}{l}{237\,491} & \multicolumn{2}{l}{226\,788}\\
\end{scotch}
\label{tab:expyields}
\end{table}
Figures~\ref{p:controlplots:3j}--\ref{p:controlplots:5j} show the
comparison of data and simulation for \ptvecmiss, the pseudorapidity
of the lepton, and several kinematic variables of the top quarks and
\ttbar system.
In general, good agreement between data and prediction is observed.
The data appear to have a deficit for high top quark \pt with respect
to the available MC generators. This trend has been observed before in
Refs.~\cite{Aad:2015mbv,Khachatryan:2015oqa} and~\cite{Aaboud:2017fha,Sirunyan:2018wem}
both at 8 and 13\TeV, and recent differential
NNLO calculations~\cite{NNLO,Catani:2019hip} reduce the discrepancy.
\begin{figure*}[h!tbp]
\centering
\includegraphics[width=0.44\textwidth]{Figure_005-a.pdf}
\includegraphics[width=0.44\textwidth]{Figure_005-b.pdf}
\includegraphics[width=0.44\textwidth]{Figure_005-c.pdf}
\includegraphics[width=0.44\textwidth]{Figure_005-d.pdf}
\includegraphics[width=0.44\textwidth]{Figure_005-e.pdf}
\includegraphics[width=0.44\textwidth]{Figure_005-f.pdf}
\includegraphics[width=0.44\textwidth]{Figure_005-g.pdf}
\includegraphics[width=0.44\textwidth]{Figure_005-h.pdf}
\caption{Three-jet events after selection and \ttbar reconstruction. The plots show (left to right, upper to lower) the missing transverse momentum (\ptmiss), the lepton pseudorapidity, and $\pt$ and the absolute rapidity of the top quark decaying hadronically, semileptonically, and of the \ttbar system. The hatched band shows the total uncertainty associated with the signal and background predictions with the individual sources of uncertainty assumed to be uncorrelated. The ratios of data to the sum of the predicted yields are provided at the bottom of each panel.}
\label{p:controlplots:3j}
\end{figure*}
\begin{figure*}[h!tbp]
\centering
\includegraphics[width=0.44\textwidth]{Figure_006-a.pdf}
\includegraphics[width=0.44\textwidth]{Figure_006-b.pdf}
\includegraphics[width=0.44\textwidth]{Figure_006-c.pdf}
\includegraphics[width=0.44\textwidth]{Figure_006-d.pdf}
\includegraphics[width=0.44\textwidth]{Figure_006-e.pdf}
\includegraphics[width=0.44\textwidth]{Figure_006-f.pdf}
\includegraphics[width=0.44\textwidth]{Figure_006-g.pdf}
\includegraphics[width=0.44\textwidth]{Figure_006-h.pdf}
\caption{Four-jet events after selection and \ttbar reconstruction. Same distributions as described in Fig.~\ref{p:controlplots:3j}.}
\label{p:controlplots:4j}
\end{figure*}
\begin{figure*}[h!tbp]
\centering
\includegraphics[width=0.44\textwidth]{Figure_007-a.pdf}
\includegraphics[width=0.44\textwidth]{Figure_007-b.pdf}
\includegraphics[width=0.44\textwidth]{Figure_007-c.pdf}
\includegraphics[width=0.44\textwidth]{Figure_007-d.pdf}
\includegraphics[width=0.44\textwidth]{Figure_007-e.pdf}
\includegraphics[width=0.44\textwidth]{Figure_007-f.pdf}
\includegraphics[width=0.44\textwidth]{Figure_007-g.pdf}
\includegraphics[width=0.44\textwidth]{Figure_007-h.pdf}
\caption{Events with five or more jets after selection and \ttbar reconstruction. Same distributions as described in Fig.~\ref{p:controlplots:3j}.}
\label{p:controlplots:5j}
\end{figure*}
\section{Determination of \texorpdfstring{\ensuremath{Y_{\PQt}}\xspace}{Yukawa}}
\label{sec:stat}
The two-dimensional data distributions in (\ensuremath{M_{\ttbar}}\xspace, $\abs{\ensuremath{\Delta y_{\ttbar}}\xspace}$) are fit to
the sum of the predicted contributions to infer the value of \ensuremath{Y_{\PQt}}\xspace
for events with three, four, and five or more jets in the final state.
The bin limits are selected to capture the different behavior of the
weak interaction correction, as seen in
Fig.~\ref{p:reweight:1d}. There are three bins in $\abs{\ensuremath{\Delta y_{\ttbar}}\xspace}$: 0--0.6,
0.6--1.2, and $>$1.2. A minimum of 10\,000 simulated events are
required in each (\ensuremath{M_{\ttbar}}\xspace, $\abs{\ensuremath{\Delta y_{\ttbar}}\xspace}$) bin.
This results in 21, 17, and 17 bins for event categories with three,
four, and five or more jets, respectively.
The likelihood function is constructed as a product of
Poisson distributions for the observed number of events,
$n^{\mathrm{bin}}_{\mathrm{obs}}$, in each
(\ensuremath{M_{\ttbar}}\xspace, $\abs{\ensuremath{\Delta y_{\ttbar}}\xspace}$) bin~\cite{CMS-NOTE-2011-005}:
\begin{linenomath}
\ifthenelse{\boolean{cms@external}}
{
\begin{multline}
\label{eqlikelihood}
\mathcal{L}(\ensuremath{Y_{\PQt}}\xspace,\theta) = \prod_{\mathrm{bin}~\in(\ensuremath{M_{\ttbar}}\xspace,\abs{\ensuremath{\Delta y_{\ttbar}}\xspace})} \mathcal{L}_{\mathrm{bin}} = \\
\prod_{\mathrm{bin}} \mathrm{Pois} \left ( n^{\mathrm{bin}}_{\mathrm{obs}} | s^{\mathrm{bin}}(\theta) \, R^{\mathrm{bin}}(\ensuremath{Y_{\PQt}}\xspace,\theta) + b^{\mathrm{bin}}(\theta) \right ) \, \rho(\theta),
\end{multline}
}
{
\begin{equation}
\label{eqlikelihood}
\mathcal{L}(\ensuremath{Y_{\PQt}}\xspace,\theta) = \prod_{\mathrm{bin}~\in(\ensuremath{M_{\ttbar}}\xspace,\abs{\ensuremath{\Delta y_{\ttbar}}\xspace})} \mathcal{L}_{\mathrm{bin}} = \prod_{\mathrm{bin}} \mathrm{Pois} \left ( n^{\mathrm{bin}}_{\mathrm{obs}} | s^{\mathrm{bin}}(\theta) \, R^{\mathrm{bin}}(\ensuremath{Y_{\PQt}}\xspace,\theta) + b^{\mathrm{bin}}(\theta) \right ) \, \rho(\theta),
\end{equation}
}
\end{linenomath}
where $s^{\mathrm{bin}}$ is the \POWHEG prediction for the number of
signal \ttbar events; $b^{\mathrm{bin}}$ is the prediction for the
number of events from all background process (single top quark,
V+jets, and QCD multijet production);
$R^{\mathrm{bin}}(\ensuremath{Y_{\PQt}}\xspace,\theta)=s^{\mathrm{bin}}(\ensuremath{Y_{\PQt}}\xspace)/s^{\mathrm{bin}}(\POWHEG)$
encodes the effect of different \ensuremath{Y_{\PQt}}\xspace coupling scenarios,
parametrized with a quadratic dependence on $\ensuremath{Y_{\PQt}}\xspace$ in each bin
(shown in Figs.~\ref{p:sys:model:3j:dely1}
and~\ref{p:sys:model:45j:dely1} for the first $\abs{\ensuremath{\Delta y_{\ttbar}}\xspace}$ bin); and
$\theta$ represents the full suite of nuisance parameters with
$\rho(\theta)$ described by lognormal distributions parametrizing the
uncertainty on each source. The different sources of systematic
uncertainties are described in detail in Section~\ref{sec:sys}. The
quantity $R^{\mathrm{bin}}(\ensuremath{Y_{\PQt}}\xspace,\theta)$ is the main parameter of interest
in the fit, as it represents the strength of the weak correction over
the uncorrected \POWHEG yields.
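Stripped of nuisance parameters, the structure of the likelihood in Eq.~(\ref{eqlikelihood}) can be sketched in a few lines of Python; all yields and the quadratic coefficients of $R^{\mathrm{bin}}$ below are placeholders, not the analysis inputs.
\begin{verbatim}
import numpy as np
from scipy.stats import poisson

s = np.array([1200.0, 900.0, 400.0])   # POWHEG signal prediction per bin
b = np.array([150.0, 100.0, 60.0])     # summed backgrounds per bin
r0 = np.array([1.000, 1.000, 1.000])   # R(Yt) = r0 + r1*Yt + r2*Yt^2
r1 = np.array([0.010, 0.005, 0.002])
r2 = np.array([0.020, 0.008, 0.003])
n_obs = np.array([1265, 1010, 465])    # observed counts per bin

def nll(yt):
    mu = s * (r0 + r1 * yt + r2 * yt**2) + b
    return -poisson.logpmf(n_obs, mu).sum()

yt_scan = np.linspace(0.0, 3.0, 61)
print(yt_scan[np.argmin([nll(y) for y in yt_scan])])   # crude best-fit Yt
\end{verbatim}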
\begin{figure*}[htbp]
\centering
\includegraphics[width=0.32\textwidth]{Figure_008-a.pdf}
\includegraphics[width=0.32\textwidth]{Figure_008-b.pdf}
\includegraphics[width=0.32\textwidth]{Figure_008-c.pdf}
\includegraphics[width=0.32\textwidth]{Figure_008-d.pdf}
\includegraphics[width=0.32\textwidth]{Figure_008-e.pdf}
\includegraphics[width=0.32\textwidth]{Figure_008-f.pdf}
\includegraphics[width=0.32\textwidth]{Figure_008-g.pdf}
\includegraphics[width=0.32\textwidth]{Figure_008-h.pdf}
\caption{The strength of the weak interaction correction, relative to the predicted \POWHEG signal, $R^\mathrm{bin}$, as a function of \ensuremath{Y_{\PQt}}\xspace in the three-jet category. The plots correspond to the first eight \ensuremath{M_{\ttbar}}\xspace bins for $\abs{\ensuremath{\Delta y_{\ttbar}}\xspace}<0.6$ (as shown in Fig.~\ref{p:results:2dshape:comb}). A quadratic fit is performed in each bin.}
\label{p:sys:model:3j:dely1}
\end{figure*}
\begin{figure*}[htbp]
\centering
\includegraphics[width=0.32\textwidth]{Figure_009-a.pdf}
\includegraphics[width=0.32\textwidth]{Figure_009-b.pdf}
\includegraphics[width=0.32\textwidth]{Figure_009-c.pdf}
\includegraphics[width=0.32\textwidth]{Figure_009-d.pdf}
\includegraphics[width=0.32\textwidth]{Figure_009-e.pdf}
\includegraphics[width=0.32\textwidth]{Figure_009-f.pdf}
\caption{The strength of the weak interaction correction, relative to the predicted \POWHEG signal, $R^\mathrm{bin}$, as a function of \ensuremath{Y_{\PQt}}\xspace in the categories
with four and five or more jets. The plots correspond to the first six \ensuremath{M_{\ttbar}}\xspace bins for $\abs{\ensuremath{\Delta y_{\ttbar}}\xspace}<0.6$ (as shown in Fig.~\ref{p:results:2dshape:comb}). A quadratic fit is performed in each bin.}
\label{p:sys:model:45j:dely1}
\end{figure*}
\section{Systematic uncertainties}
\label{sec:sys}
We describe here the different sources of experimental and theoretical
uncertainties and their effect on determining \ensuremath{Y_{\PQt}}\xspace.
Systematic uncertainties that do not alter the shape of the
distributions of \ensuremath{M_{\ttbar}}\xspace and \ensuremath{\Delta y_{\ttbar}}\xspace are treated as normalization
uncertainties, while the others are treated as shape uncertainties. The
latter are evaluated bin-by-bin in the likelihood function
Eq.~(\ref{eqlikelihood}). Table~\ref{tab:sys:summary} lists all the
systematic uncertainties.
The uncertainty in the integrated luminosity is 2.5\%~\cite{LUMI}. The
simulated samples are reweighted to match the measured data
distribution in the number of pileup events. The uncertainty in the
total inelastic \Pp{}\Pp{} cross section, which affects the pileup
estimate, is accounted for by varying the average number of pileup
events per bunch crossing by 5\%~\cite{Sirunyan:2018nqx}.
The lepton efficiency scale factors, which account for the differences
in the trigger, reconstruction, and identification efficiencies between
data and simulation, are measured using a tag-and-probe method in $\PZ
\to \ell^+\ell^-$ events~\cite{Khachatryan:2015hwa,TaP13TeV}. These
scale factors, measured in bins of lepton \pt, lepton $\eta$, and jet
multiplicity, are applied to the simulated events. The overall
uncertainty in the final measurement from these lepton scale factors is
approximately 2\%.
The uncertainties in the jet energy calibration (JEC) are evaluated by
shifting the energies of jets in simulation up and down by one standard
deviation in bins of \pt and $\eta$. Accounting for different sources
of JEC uncertainties and jet flavors, a total of 19 shape variations
are considered. The uncertainty in the jet energy resolution (JER) is
calculated by broadening the resolution in simulation and recomputing
the acceptances~\cite{Khachatryan:2016kdb}, for which the resulting
effect is a change of less than 1\% in event yields. The {\cPqb}
tagging efficiency in the simulation is corrected using scale factors
in bins of jet \pt and $\eta$ determined from efficiencies measured in
data and simulation~\cite{Sirunyan:2017ezt}. The uncertainty in the
measured scale factors ranges between 1 and 20\% per jet, leading to an
overall effect on the final measurement of 2--3\%.
The single top quark background estimate is affected by a 15\%
normalization uncertainty, evaluated from the combined results of
$t$-channel and $\PW\PQt$
productions~\cite{Sirunyan:2018rlu,Sirunyan:2018bsr}. The systematic
uncertainty in the V+jets background prediction is 30\%, derived from
the leading contribution in the signal region: $\PW$+heavy flavor
production~\cite{Khachatryan:2016ipq}. The systematic uncertainties
described above for the signal are also derived for these background
estimates. The QCD multijet background estimates from the data CR
include a 30\% normalization uncertainty from
Eq.~(\ref{eq:bck:qcdnorm}), and a shape difference observed between
samples with different lepton isolation (as described in
Section~\ref{sec:bck}). The uncertainty from the determination of \ptmiss
due to the electron, muon, and unclustered energy uncertainties,
results in a negligible effect on the acceptance. All the major
experimental uncertainties described above are evaluated for each
process in all reconstruction channels.
In the following, we describe the theoretical uncertainties.
The uncertainties in the factorization and renormalization scales affect
the number of events expected in simulated samples. These
are evaluated by varying each scale independently up and down by a
factor of two. We consider separate variations of the renormalization
and factorization scales by taking the envelope of the observed
variations as the quoted uncertainty. To account for possible
correlation between the two sources of uncertainty, we also add an
additional shape nuisance parameter that corresponds to the
simultaneous variation of both parameters. The different replicas in
the NNPDF3.0 PDF set~\cite{Ball:2014uwa} are used to estimate the
corresponding uncertainty in the shape from the changed acceptance in
each bin, which amounts to a combined variation as large as 5\%. The
PDF members corresponding to variations of the strong coupling constant
$\alpS$ result in changes of the acceptance of around 1\%.
The effect of the top quark mass experimental uncertainty is estimated
by the difference in simulations generated with $\ensuremath{m_{\PQt}}\xspace$ varied by
$\pm$1\GeV~\cite{Khachatryan:2015hba,Aaboud:2018zbu}, and it results in
a shape variation as large as 7\%. The dependence of \ensuremath{M_{\ttbar}}\xspace and \ensuremath{\Delta y_{\ttbar}}\xspace
on the correct description of the top quark $\pt$ in the simulation is
taken into account by checking the difference in the acceptance when
the nominal \POWHEG NLO samples are scaled to match the average top
quark and antiquark $\pt$ distributions calculated at NNLO in $\alpS$ in
Ref.~\cite{Czakon:2017dip}. This uncertainty is treated as a shape nuisance parameter in
the likelihood function for the \ttbar samples.
There are several sources of uncertainties arising from the parton
shower modeling. The uncertainty in matching the matrix element
calculation to the parton shower is estimated by changing the parameter
that regulates the damping of real emissions in the NLO
calculation~\cite{hdamp_underlying}, resulting in an effect of 1--5\%.
The scales, which determine initial- (ISR) and final-state radiation
(FSR) are also varied~\cite{ISR_FSR}, resulting in a maximum change of
4\% in the acceptance and shape variations as large as 10\%. The
uncertainty resulting from the modeling of the amount of multiple
parton interactions is derived following the studies of
Ref.~\cite{hdamp_underlying} and is found to have a negligible effect
on the result. Color reconnection reconfigures color strings after the
parton shower, affecting the hadronic \PW~boson
decays~\cite{hdamp_underlying}. This source of uncertainty typically
results in shape differences smaller than 1\%. The uncertainty in
{\cPqb} quark fragmentation, the momentum transfer from the {\cPqb}
quark to the {\cPqb} hadron, is estimated by varying the parametrized
function in the \PYTHIA simulation. It can produce a shape variation
as large as 3\%. As the {\cPqb} hadron semileptonic branching fractions
may change the {\cPqb} jet energy response, the acceptance is
recalculated after varying the $\PBp$, $\PBz$, $\PBs$, and $\PGLb$
semileptonic branching fractions up and down by their respective
experimental uncertainties~\cite{Tanabashi:2018oca}. The resulting
systematic uncertainty is around 3\%.
Finally, the weak interaction correction is implemented by reweighting
the nominal \POWHEG samples with the ratio of the weak correction over
the LO cross section calculated by \textsc{hathor}\xspace. As recommended by the
\textsc{hathor}\xspace authors~\cite{Kuhn:2013zoa}, the associated systematic
uncertainty for this procedure can be estimated from the difference
between the multiplicative and additive treatments, \ie,
$(1+\delta_{\mathrm{QCD}})(1+\delta_{\mathrm{W}})$ and
$(1+\delta_{\mathrm{QCD}} + \delta_{\mathrm{W}})$, where
$\delta_{\mathrm{QCD}}$ is estimated from the effect of varying the
factorization and renormalization scale up and down by a factor of two
on the NLO cross section, and $\delta_{\mathrm{W}}$ is the ratio of the
weak correction over the LO cross section obtained from \textsc{hathor}\xspace. The
difference is $\delta_{\mathrm{QCD}} \delta_{\mathrm{W}}$, which
is also a function of \ensuremath{Y_{\PQt}}\xspace.
This uncertainty is accounted for as a shape nuisance in the likelihood fit.
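Numerically, this prescription reduces to the product of the two relative corrections, as the short Python sketch below illustrates with hypothetical values of $\delta_{\mathrm{QCD}}$ and $\delta_{\mathrm{W}}$.
\begin{verbatim}
# Multiplicative vs. additive combination of the QCD and weak corrections;
# the delta values are hypothetical, for illustration only.
delta_qcd = 0.30   # scale-variation effect on the NLO QCD cross section
delta_w = 0.006    # weak correction over LO from HATHOR, e.g. at Yt = 2

multiplicative = (1 + delta_qcd) * (1 + delta_w)
additive = 1 + delta_qcd + delta_w
print(multiplicative - additive)   # = delta_qcd * delta_w ~ 0.002 (0.2%)
\end{verbatim}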
The experimental uncertainties are treated as 100\% correlated among signal
and background processes and across the jet multiplicity channels.
\begin{table*}[h!tbp]
\centering
\topcaption{Summary of the sources of systematic uncertainty, their effects and magnitudes on signal and backgrounds. If the uncertainty shows a shape dependence in the \ensuremath{M_{\ttbar}}\xspace and \ensuremath{\Delta y_{\ttbar}}\xspace distributions, it is treated as such in the likelihood. Only the luminosity, background normalization, and ISR uncertainties are not considered as shape uncertainties.}
\renewcommand{\arraystretch}{1.1}
\begin{scotch}{lcccc}
Uncertainty & \ttbar & Single \PQt & V+jets & QCD multijet \\
\hline
Integrated luminosity & 2.5\% & 2.5\% & 2.5\% & 2.5\% \\
Pileup & 0--1\% & 0--1\% & \NA & \NA \\
Lepton identification/trigger & 1.9\% & 1.9\% & 1.9\% & \NA \\
JEC & 0--5\% & 0--5\% & \NA & \NA \\
JER & 0--0.6\% & \NA & \NA & \NA \\
{\cPqb} tag scale factor & 3\% & 3\% & 2--3\% & \NA \\
{\cPqb} mistag scale factor & 0.5\% & 1\% & 3--6\% & \NA \\
Background normalization & \NA & 15\% & 30\% & 30\% \\
QCD multijet CR definition & \NA & \NA & \NA & 0--60\% \\[\cmsTabSkip]
Factorization and renormalization scales & 0--6\% & 2--5\% & 0--15\% & \NA \\
PDF & 0.5--1.5\% & 0.5--1.5\% & \NA & \NA \\
$\alpS(m_\PZ)$ in PDFs & 1\% & 1.5\% & \NA & \NA \\
Top quark mass & 1--5\% & \NA & \NA & \NA \\
Top quark \pt modeling & 0--0.5\% & \NA & \NA & \NA \\
Parton shower & & & & \\
~-NLO shower matching & 1.5--5\% & \NA & \NA & \NA \\
~-ISR & 2--3\% & \NA & \NA & \NA \\
~-FSR & 0--9\% & 0--12\% & \NA & \NA \\
~-Color reconnection & 0--3\% & \NA & \NA & \NA \\
~-{\cPqb} jet fragmentation & 0--3\% & 0--5\% & \NA & \NA \\
~-{\cPqb} hadron branching fraction & 3\% & 2.5--3\% & \NA & \NA \\
Weak correction $\delta_{\mathrm{QCD}}\delta_{\mathrm{W}}$ & 0--0.2\% (\ensuremath{Y_{\PQt}}\xspace=2) & \NA & \NA & \NA \\
\end{scotch}
\label{tab:sys:summary}
\end{table*}
\section{Results}
\label{sec:limit}
The data events are analyzed in
three exclusive channels, according to the number of jets in the final
state. The expected signal and background estimation shown in
Table~\ref{tab:expyields}, and the systematic uncertainties described in
Section~\ref{sec:sys} are used to construct a binned likelihood
(Eq.~(\ref{eqlikelihood})) as a product of the Poisson probabilities
from all bins in (\ensuremath{M_{\ttbar}}\xspace,$\abs{\ensuremath{\Delta y_{\ttbar}}\xspace}$). From this, we construct a profile
likelihood ratio test statistic
$q(\ensuremath{Y_{\PQt}}\xspace) = -2\ln \left [
\mathcal{L}(\ensuremath{Y_{\PQt}}\xspace, \hat{\hat{\theta}}) / \mathcal{L}(\hat{\ensuremath{Y_{\PQt}}\xspace}, \hat{\theta}) \right ]$,
where $\hat{\hat{\theta}}$ in the numerator
denotes the value of the estimator $\hat{\theta}$ that maximizes the
likelihood for a specific \ensuremath{Y_{\PQt}}\xspace, \ie, it is the conditional
maximum-likelihood estimator of $\theta$ (and thus is a function of
\ensuremath{Y_{\PQt}}\xspace). The denominator is the maximized (unconditional) likelihood
function, \ie, $\hat{\ensuremath{Y_{\PQt}}\xspace}$ and $\hat{\theta}$ are the values of the
estimators that simultaneously maximize the likelihood.
The statistical procedure to extract the parameter of interest is
detailed in Ref.~\cite{Conway:2011in}.
The distributions of \ensuremath{M_{\ttbar}}\xspace and $\abs{\ensuremath{\Delta y_{\ttbar}}\xspace}$ after performing the
combined likelihood fit are shown in Fig.~\ref{p:results:2dshape:comb}.
The analysis covers the phase space from the production
threshold in \ensuremath{M_{\ttbar}}\xspace (which is $\approx$200\GeV at the detector level
for events with three reconstructed jets) up to 2\TeV.
\begin{figure*}[htbp]
\centering
\includegraphics[width=1.0\textwidth]{Figure_010.pdf}
\caption{The $\ensuremath{M_{\ttbar}}\xspace$ distribution in $\abs{\ensuremath{\Delta y_{\ttbar}}\xspace}$ bins for all events combined, after the simultaneous likelihood fit in all jet channels. The hatched bands show the total post-fit uncertainty. The ratios
of data to the sum of the predicted yields are provided in the lower panel. To show the sensitivity of the data to \ensuremath{Y_{\PQt}}\xspace = 1 and \ensuremath{Y_{\PQt}}\xspace = 2, the pre-fit yields are shown in the upper panel, and the yield ratio $R^{\mathrm{bin}}(\ensuremath{Y_{\PQt}}\xspace=2)/R^{\mathrm{bin}}(\ensuremath{Y_{\PQt}}\xspace=1)$ in the lower panel.}
\label{p:results:2dshape:comb}
\end{figure*}
We measure the top quark Yukawa coupling by scanning the likelihood
function with respect to \ensuremath{Y_{\PQt}}\xspace. The likelihood scan distributions can
be found in Fig.~\ref{p:results:likelihoodscan}. The expected and
observed results are presented in Table~\ref{tab:results:limits}.
An upper limit on \ensuremath{Y_{\PQt}}\xspace is also determined, using a modified frequentist \CLs
procedure~\cite{Junk:1999kv,Read:2002hq} with the asymptotic method~\cite{Cowan:2010js}.
\begin{figure*}[tbp]
\centering
\includegraphics[width=0.49\textwidth]{Figure_011-a.pdf}
\includegraphics[width=0.49\textwidth]{Figure_011-b.pdf}
\includegraphics[width=0.49\textwidth]{Figure_011-c.pdf}
\includegraphics[width=0.49\textwidth]{Figure_011-d.pdf}
\caption{The test statistic scan versus \ensuremath{Y_{\PQt}}\xspace for each channel (three, four, and five or more jets), and all channels combined. The test statistic minimum indicates the best fit of \ensuremath{Y_{\PQt}}\xspace. The horizontal lines indicate 68 and 95\% \CL intervals.}
\label{p:results:likelihoodscan}
\end{figure*}
\setlength\extrarowheight{5pt}
\begin{table}[htbp]
\centering
\topcaption{The expected and observed best fit values and 95\% \CL upper limits on \ensuremath{Y_{\PQt}}\xspace.}
\renewcommand{\arraystretch}{1.2}
\begin{scotch}{lcccc}
Channel & \multicolumn{2}{c}{Best fit \ensuremath{Y_{\PQt}}\xspace} & \multicolumn{2}{c}{95\% \CL upper limit} \\
& Expected & Observed & Expected & Observed \\
\hline
3 jets & $1.00^{+0.66}_{-0.90}$ & $1.62^{+0.53}_{-0.78}$ & ${<}2.17$ & ${<}2.59$ \\
4 jets & $1.00^{+0.50}_{-0.72}$ & $0.87^{+0.51}_{-0.77}$ & ${<}1.88$ & ${<}1.77$ \\
$\geq$5 jets & $1.00^{+0.59}_{-0.83}$ & $1.27^{+0.55}_{-0.74}$ & ${<}2.03$ & ${<}2.23$ \\[\cmsTabSkip]
Combined & $1.00^{+0.35}_{-0.48}$ & $1.07^{+0.34}_{-0.43}$ & ${<}1.62$ & ${<}1.67$ \\
\end{scotch}
\label{tab:results:limits}
\end{table}
\section{Summary}
\label{sec:summary}
A measurement of the top quark Yukawa coupling is presented, extracted
by investigating \ttbar pair production in final states with an
electron or muon and several jets, using proton--proton data collected
by the CMS experiment at $\sqrt{s} = 13\TeV$, corresponding to an
integrated luminosity of 35.8\fbinv. The \ttbar production cross
section is sensitive to the top quark Yukawa coupling through weak
force corrections that can modify the distributions of the mass of top
quark--antiquark pairs, \ensuremath{M_{\ttbar}}\xspace, and the rapidity difference between top
quark and antiquark, \ensuremath{\Delta y_{\ttbar}}\xspace. The kinematic properties of these final
states are reconstructed in events with at least three jets, two of
which are identified as originating from bottom quarks. The inclusion
of events with only three reconstructed jets using a dedicated
algorithm improves the sensitivity of the analysis by increasing the
signal from events in the low-\ensuremath{M_{\ttbar}}\xspace region, which is most sensitive to
the Yukawa coupling. The ratio of the top quark Yukawa coupling to its
expected SM value, \ensuremath{Y_{\PQt}}\xspace, is extracted by comparing the data with the
expected $\ttbar$ signal for different values of \ensuremath{Y_{\PQt}}\xspace in a total of
55 bins in \ensuremath{M_{\ttbar}}\xspace, $\abs{\ensuremath{\Delta y_{\ttbar}}\xspace}$, and the number of reconstructed
jets. The measured value of \ensuremath{Y_{\PQt}}\xspace is $1.07^{+0.34}_{-0.43}$,
compared to an expected value of $1.00^{+0.35}_{-0.48}$. The observed upper
limit on \ensuremath{Y_{\PQt}}\xspace is 1.67 at 95\% confidence level (\CL), with an expected value of
1.62.
Although the method presented in this paper is not as sensitive as the combined
CMS measurement of \ensuremath{Y_{\PQt}}\xspace performed using Higgs boson production and
decays in multiple channels~\cite{Sirunyan:2018koj}, it has the
advantage that it does not depend on any assumptions about the
couplings of the Higgs boson to particles other than the top quark. The
result presented here is more sensitive than the only other result from
CMS exclusively dependent on \ensuremath{Y_{\PQt}}\xspace, namely the limit on the ${\ttbar}{\ttbar}$ cross section,
which constrains \ensuremath{Y_{\PQt}}\xspace to be less than 2.1 at 95\%
\CL~\cite{Sirunyan:2017roi}.
\begin{acknowledgments}
\label{ack}
We congratulate our colleagues in the CERN accelerator departments for the excellent performance of the LHC and thank the technical and administrative staffs at CERN and at other CMS institutes for their contributions to the success of the CMS effort. In addition, we gratefully acknowledge the computing centers and personnel of the Worldwide LHC Computing Grid for delivering so effectively the computing infrastructure essential to our analyses. Finally, we acknowledge the enduring support for the construction and operation of the LHC and the CMS detector provided by the following funding agencies: BMBWF and FWF (Austria); FNRS and FWO (Belgium); CNPq, CAPES, FAPERJ, FAPERGS, and FAPESP (Brazil); MES (Bulgaria); CERN; CAS, MoST, and NSFC (China); COLCIENCIAS (Colombia); MSES and CSF (Croatia); RPF (Cyprus); SENESCYT (Ecuador); MoER, ERC IUT, PUT and ERDF (Estonia); Academy of Finland, MEC, and HIP (Finland); CEA and CNRS/IN2P3 (France); BMBF, DFG, and HGF (Germany); GSRT (Greece); NKFIA (Hungary); DAE and DST (India); IPM (Iran); SFI (Ireland); INFN (Italy); MSIP and NRF (Republic of Korea); MES (Latvia); LAS (Lithuania); MOE and UM (Malaysia); BUAP, CINVESTAV, CONACYT, LNS, SEP, and UASLP-FAI (Mexico); MOS (Montenegro); MBIE (New Zealand); PAEC (Pakistan); MSHE and NSC (Poland); FCT (Portugal); JINR (Dubna); MON, RosAtom, RAS, RFBR, and NRC KI (Russia); MESTD (Serbia); SEIDI, CPAN, PCTI, and FEDER (Spain); MOSTR (Sri Lanka); Swiss Funding Agencies (Switzerland); MST (Taipei); ThEPCenter, IPST, STAR, and NSTDA (Thailand); TUBITAK and TAEK (Turkey); NASU and SFFR (Ukraine); STFC (United Kingdom); DOE and NSF (USA).
\hyphenation{Rachada-pisek} Individuals have received support from the Marie-Curie program and the European Research Council and Horizon 2020 Grant, contract Nos.\ 675440 and 765710 (European Union); the Leventis Foundation; the A.P.\ Sloan Foundation; the Alexander von Humboldt Foundation; the Belgian Federal Science Policy Office; the Fonds pour la Formation \`a la Recherche dans l'Industrie et dans l'Agriculture (FRIA-Belgium); the Agentschap voor Innovatie door Wetenschap en Technologie (IWT-Belgium); the F.R.S.-FNRS and FWO (Belgium) under the ``Excellence of Science -- EOS" -- be.h project n.\ 30820817; the Beijing Municipal Science \& Technology Commission, No. Z181100004218003; the Ministry of Education, Youth and Sports (MEYS) of the Czech Republic; the Lend\"ulet (``Momentum") Program and the J\'anos Bolyai Research Scholarship of the Hungarian Academy of Sciences, the New National Excellence Program \'UNKP, the NKFIA research grants 123842, 123959, 124845, 124850, 125105, 128713, 128786, and 129058 (Hungary); the Council of Science and Industrial Research, India; the HOMING PLUS program of the Foundation for Polish Science, cofinanced from European Union, Regional Development Fund, the Mobility Plus program of the Ministry of Science and Higher Education, the National Science Center (Poland), contracts Harmonia 2014/14/M/ST2/00428, Opus 2014/13/B/ST2/02543, 2014/15/B/ST2/03998, and 2015/19/B/ST2/02861, Sonata-bis 2012/07/E/ST2/01406; the National Priorities Research Program by Qatar National Research Fund; the Programa Estatal de Fomento de la Investigaci{\'o}n Cient{\'i}fica y T{\'e}cnica de Excelencia Mar\'{\i}a de Maeztu, grant MDM-2015-0509 and the Programa Severo Ochoa del Principado de Asturias; the Thalis and Aristeia programs cofinanced by EU-ESF and the Greek NSRF; the Rachadapisek Sompot Fund for Postdoctoral Fellowship, Chulalongkorn University and the Chulalongkorn Academic into Its 2nd Century Project Advancement Project (Thailand); the Welch Foundation, contract C-1845; and the Weston Havens Foundation (USA).
\end{acknowledgments}
|
2,869,038,153,948 | arxiv | \section{Introduction} \label{sec:intro}
The earliest luminous quasars, powered by billion solar mass supermassive black holes (SMBHs), can be used not only to constrain the physics of SMBH accretion and the assembly of the first generation of massive galaxies in the early Universe, but also to obtain critical information on the physical conditions of the intergalactic medium (IGM) during the epoch of reionization (EoR). Although more than 200 $z>6$ quasars have been found in the past few decades \citep[e.g.][]{Fan01,Willott10,Wu15,Jiang16,Banados16,Wang16,Matsuoka16,Reed17}, only several tens of them are at $z>6.5$ \citep[e.g.][]{Venemans15,Mazzucchelli17,Wang17,Wang19,Yang19,Reed19} and just six are currently known at $z>7$ \citep{Mortlock11,Banados18,Wang18,Matsuoka19a,Matsuoka19b,Yang19}. The limited number of known high redshift quasars is due to the combination of a rapid decline of quasar spatial density towards higher redshifts \citep[e.g.][]{Wang19}, the lack of deep wide-field near-infrared surveys, and the presence of a large number of contaminants from Galactic cool dwarf populations in the photometric quasar selection process. Near-infrared spectroscopic observations of these known quasars indicate that billion or even ten billion solar mass SMBHs are already in place in these luminous quasars \citep[e.g.][]{Wu15,Shen19}. The existence of these SMBHs in such a young Universe challenges our understanding of the formation and the growth mechanisms of SMBHs \citep[e.g.][]{Volonteri06,Pezzulli16,Wise19,Davies19b}.
Observations of the Lyman series forests in $z\gtrsim6$ quasars indicate that the IGM is already highly ionized by $z\sim6$ \citep[e.g.][]{Fan06,Bosman18,Eilers18,Eilers19,Yang20}, although the final completion of reionization might extend down to $z\sim5.5$ \citep[e.g.][]{Becker15,Davies18a,Kulkarni19,Keating20}. However, the Lyman series forests are very sensitive to neutral hydrogen and saturate even at low IGM neutral fraction (i.e. $\langle x_\mathrm{H\,I}\rangle \ga 10^{-4}$). On the other hand, if the neutral fraction is of order unity, one would expect to see appreciable absorption redward of the wavelength of the Ly$\alpha$ emission line, resulting in a damping wing profile \citep[e.g.][]{Miralda98} due to significant optical depth on the Lorentzian wing of the Ly$\alpha$ absorption. The first quasar with a damping wing detection is ULAS J1120+0641 \citep{Mortlock11} at $z=7.09$, although different analyses yielded different constraints on $\langle x_\mathrm{H\,I}\rangle$ \citep{Mortlock11,Bolton11,Bosman15,Greig17,Davies18b}, ranging from $\langle x_\mathrm{H\,I}\rangle\sim0$ to $\langle x_\mathrm{H\,I}\rangle\sim 0.5$ at $z\sim7.1$. Recently, the spectrum of quasar ULAS J1342+0928 \citep{Banados18} at $z=7.54$ shows a robust detection of the damping wing signal \citep{Banados18,Davies18b,Greig19, Durovcikova19}, yielding $\langle x_\mathrm{H\,I}\rangle\sim0.2-0.6$ at $z\sim7.5$. Compared to other probes of reionization history, such as CMB polarization \citep{Planck18} and Ly$\alpha$ emission line visibility in high-redshift galaxies \citep[e.g.][]{Ouchi10,Mason18}, a main advantage of IGM damping wing measurement is that it can be applied to individual quasar sight lines, thereby constraining not only the average neutral fraction, but also its scatter in different regions of the IGM. However, the damping wing experiment is only feasible at very high redshifts where the IGM is relatively neutral, and current damping wing analyses have been limited to these two sight-lines due to the lack of bright quasars at $z\gtrsim7$. Thus, it is crucial to investigate the damping wing experiment along more $z>7$ quasar sight lines.
In this paper, we present the detection of strong IGM damping wing absorption along the line of sight to a luminous $z=7$ quasar DES J025216.64--050331.8 \citep[hereinafter J0252--0503;][]{Yang19}, using new high quality optical/near-infrared spectroscopic observations; we also use the new spectrum to measure the mass and Eddington ratio of the central SMBH. In Section \ref{sec_obs}, we describe our photometric and spectroscopic observations for J0252--0503. In Section \ref{sec_bh}, we present the luminosity, BH mass and Eddington ratio measurements of J0252--0503. In Section \ref{sec_dp}, we discuss the reconstructions of the unabsorbed spectrum of the quasar and our constraints on the neutral fraction in the IGM at $z=7$ by modeling IGM Ly$\alpha$ absorption. Finally, in Section \ref{sec_sum} we summarize our results and briefly discuss the implications for the cosmic reionization history and BH growth constraints with larger quasar samples at $z\gtrsim7$ in the future. Throughout this paper, we assume a flat $\Lambda$CDM cosmology with $h=0.685$ \citep{Betoule14}, $\Omega_b=0.047$, $\Omega_m=0.3$, $\Omega_\Lambda=0.7$, and $\sigma_8=0.8$. All photometry in this paper is in the AB system.
\begin{figure*}[tbh]
\centering
\includegraphics[width=1.0\textwidth]{fig_spec_final.png}
\caption{Gemini/GMOS $+$ Keck/NIRES spectrum of J0252--0503. The spectrum is plotted using 200 $\rm km~ s^{-1}$ pixels (binned by $\sim 5$ native pixels). The black and magenta lines represent the Galactic extinction-corrected spectrum and the error array, respectively. The brown line denotes the quasar composite spectrum constructed with 83 SDSS quasars with similar C\,{\sc iv}\ blueshifts and line strengths. The green dashed line denotes the pseudo-continuum model which includes power-law, iron emission, and Balmer continuum components. The light blue points are flux densities determined from Galactic extinction-corrected photometry in the \emph{J}, \emph{H}, and \emph{K} bands. The left inset is the zoom-in of the Ly$\alpha$ region. In addition to the composite spectrum derived from 83 SDSS quasars, we also show another 100 composite spectra constructed via bootstrapping. The Ly$\alpha$ position is marked with a gray dashed line. J0252--0503 shows strong absorption on top of and redward of the Ly$\alpha$ line, indicating a strong damping wing signature. The right inset shows the Mg\,{\sc ii}\ line fitting with the cyan dot-dashed line denoting power-law continuum, the green dashed line denoting the pseudo-continuum model, and the red line representing total fit of pseudo-continuum and Mg\,{\sc ii}\ line.
\label{fig_spec}}
\end{figure*}
\section{Observations and Data Reduction}\label{sec_obs}
J0252--0503 \citep{Yang19} was selected as a quasar candidate using photometry from the Dark Energy Survey \citep[DES,][]{Abbott18} and the unblurred coadds of {\it WISE} \citep[unWISE,][]{Lang14} data. It was spectroscopically identified as a quasar at $z=7.02$ based on the strong Ly$\alpha$ break using observations from Magellan/LDSS-3. However, the lack of a near-infrared spectrum for this quasar precluded detailed analyses in the discovery paper.
We obtained a high quality near-infrared spectrum with the Near-Infrared Echellette Spectrometer \citep[NIRES\footnote{\url{https://www2.keck.hawaii.edu/inst/nires/}};][]{Wilson04} mounted on the Keck-2 telescope. NIRES is a prism cross-dispersed near-infrared spectrograph with a fixed configuration that simultaneously covers the \emph{Y}, \emph{J}, \emph{H}, and \emph{K} bands in five orders from 0.94 to 2.45 $\mu$m with a small gap between 1.85 and 1.88 $\mu$m. The mean spectral resolving power of NIRES is $R\sim2700$ with a fixed $0\farcs55$ narrow slit. We observed J0252--0503 with NIRES for a total of 4.8 hours of on-source integration on three nights: 1.4 hours on 2018 August 12, 1.0 hour on 2018 September 3, and 2.4 hours on 2018 October 1 (UT). The observations were separated into multiple 300\,s or 360\,s individual exposures with the standard ABBA dither pattern. We also observed the flux standard star Feige 110. We reduced the NIRES data using a newly developed open-source Python-based spectroscopic data reduction pipeline \cite[{\tt PypeIt}\footnote{\url{https://github.com/pypeit/PypeIt}};][]{pypeit}. Basic image processing (i.e. flat fielding) followed standard techniques. Wavelength (in vacuum) solutions for individual frames were derived from the night sky emission lines. Sky subtractions were performed on the 2-D images by including both image differencing and a B-spline fitting procedure. We used the optimal spectrum extraction technique \citep{Horne86} to extract 1-D spectra. We flux calibrated the individual 1-D spectra with the sensitivity function derived from the standard star Feige 110. We then stacked the fluxed 1-D spectra from each night and fitted a telluric absorption model directly to the stacked quasar spectra using the telluric model grids produced from the Line-By-Line Radiative Transfer Model \citep[{\tt LBLRTM} \footnote{\url{http://rtweb.aer.com/lblrtm.html}};][]{Clough05}. Finally, we combined all spectra obtained on different nights to produce the final processed 1-D spectrum.
Gemini GMOS-S \citep{Hook04} observations for J0252--0503 \citep[previously described by][]{Yang19} were performed in two wavelength setups both with the R400 grating to cover the small wavelength gaps between detectors, with one setup centered at 860\,nm and the other centered at 870\,nm. These two setups yields a wavelength coverage of 0.6--1.1 $\mu$m and spectral resolution of $R\sim1300$. Each setup was exposed for an hour. The GMOS data were also reduced with {\tt PypeIt}. The spectra were flux calibrated with the sensitivity function derived from flux standard star GD71 and telluric absorption was corrected using the same method as the NIRES data reduction.
\edit1{In order to combine the NIRES and GMOS spectra, we scaled the NIRES co-added spectrum to the GMOS flux level using the median in the overlapping wavelength region from 9800 to 10200 \AA. The flux level of the NIRES spectrum is only $\sim$6\% lower than that of GMOS spectrum and the shapes of these two spectra are perfectly matched. Finally, we computed the stacked spectrum in the overlap region after binning the NIRES spectrum to the GMOS wavelength grid.}
Since the flux calibration is crucial for the damping wing analyses, we also obtained near-infrared \emph{Y}, \emph{J}, \emph{H}, and \emph{K}-band photometry with UKIRT/WFCam on 2018 November 27. The on-source times were 8 min in each band. The data were processed using the standard VISTA/WFCAM data-flow system by M. Irwin \citep{Irwin04}. The magnitudes of J0252--0503 were measured to be \emph{Y}=20.33$\pm$0.07, \emph{J}=20.19$\pm$0.07, \emph{H}=20.02$\pm$0.07, and \emph{K}=19.92$\pm$0.08. We then scaled the combined NIRES and GMOS spectrum by carrying out synthetic photometry on the spectrum using the WFCAM \emph{J}-band filter response curve to match the \emph{J}-band photometry for absolute flux calibration. The magnitudes measured from the \emph{J}-band scaled spectrum in the \emph{Y}, \emph{H}, and \emph{K} bands are 20.36, 20.09, and 19.93 mag, respectively. The consistency of magnitudes derived from the fluxed spectrum and UKIRT observations indicates that the spectrophotometric calibration of the spectrum is accurate to within 10\%. Finally, we corrected for Galactic extinction using the dust extinction map derived by \cite{Schlegel98}. The spectrum was then de-redshifted with the systemic redshift $z=7.000\pm0.001$, derived from the IRAM NOrthern Extended Millimeter Array (NOEMA) observations of the far-infrared [C\,{\sc ii}] emission line\footnote{The host galaxy properties of J0252--0503 will be published separately together with [C\,{\sc ii}] observations of a sample of $z>6.5$ quasars.}. The final spectrum used for the following analyses is shown in Figure \ref{fig_spec}. Note that in Figure \ref{fig_spec}, the spectrum is plotted after being rebinned to 200 $\rm km~ s^{-1}$ pixels.
\section{Rest-Frame UV Properties and Black Hole Mass}\label{sec_bh}
In order to derive the rest-frame ultraviolet (UV) properties of J0252--0503, we fit a pseudo-continuum model which includes a power-law continuum, iron (Fe\,{\sc ii}\ and Fe\,{\sc iii}) emission \citep{Vestergaard01,Tsuzuki06}, and Balmer continuum \citep[e.g.][]{Derosa14} to the line-free region of the calibrated and deredshifted spectrum. This pseudo-continuum model is then subtracted from the quasar spectrum, leaving a line-only spectrum. We then fit the Mg\,{\sc ii}\ broad emission line in the continuum-subtracted spectrum with two Gaussian profiles. To estimate the uncertainties of our spectral measurements, we use a Monte Carlo approach \citep[e.g.][]{Shen19} to create 100 mock spectra by randomly adding Gaussian noise at each pixel with its scale equal to the spectral error at that pixel. We then apply the exact same fitting algorithm to these mock spectra. The uncertainties of measured spectral properties are then estimated based on the 16\% and 84\% percentile deviation from the median.
The pseudo-continuum model is shown in Figure \ref{fig_spec} and an enlargement of the Mg\,{\sc ii}\ region fitting is shown in the right insert panel of Figure \ref{fig_spec}. The fitting procedure yields a power-law continuum of $f_\lambda \propto \lambda^{-1.67\pm0.04}$, from which we measure the rest-frame 3000 \AA\ luminosity to be $\lambda L_{\rm3000\text{\normalfont\AA}}$=(2.5$\pm$0.2)$\times10^{46}$ erg s$^{-1}$, implying a bolometric luminosity of $L_{\rm bol}$=5.15$\times$ $\lambda L_{\rm3000 \text{\normalfont\AA}}$ = (1.3$\pm$0.1)$\times$10$^{47}$ erg s$^{-1}$ \citep{Shen11}. The rest-frame 1450 \AA\ magnitude is measured to be $M_{1450}=-26.63\pm0.07$. The full width at half maximum (FWHM) and equivalent width (EW) of the Mg\,{\sc ii}\ line are measured to be FWHM$\rm _{MgII}=3503\pm205$ km s$^{-1}$ and EW$_{\rm MgII}$=$18.83\pm 0.92$ \AA, respectively. The Mg\,{\sc ii}\ emission line is blueshifted by $\Delta_{v, {\rm MgII}}= (712\pm50)~ {\rm km~s^{-1}}$ relative to the systemic redshift determined from the [C\,{\sc ii}] line, similar to other luminous $z\sim7$ quasars in which Mg\,{\sc ii}\ blueshifts range from a few hundred to $\sim1000$ km s$^{-1}$ \citep[e.g.][]{Venemans16, Mazzucchelli17, Banados18, Decarli18}.
We adopt the empirical relation obtained by \cite{Vestergaard09} to estimate the black hole mass of J0252--0503, which yields $\rm M_{BH}= (1.39\pm0.16)\times10^{9}~ M_\odot$. \edit1{Note that the quoted black hole mass uncertainty does not include the systematic uncertainties of the scaling relation, which could be up to $\sim0.5$ dex \citep{Shen13}.} By comparing the bolometric luminosity estimated above with the Eddington luminosity, which is $L_{\rm Edd}=1.3\times10^{38}\times {\rm M_{BH}}$, we measure the Eddington ratio of J0252--0503 to be $\lambda_{\rm Edd}=0.7\pm0.1$. Note that the uncertainty quoted here does not consider the systematic uncertainties introduced by both single epoch BH mass estimators and monochromatic bolometric corrections. The Eddington ratio of J0252--0503 is slightly lower than that of the other three luminous $z\ge7$ quasars: $\lambda_{\rm Edd}=1.5^{+0.5}_{-0.4}$ for J1342+0928 \citep{Banados18} at $z=7.54$, $\lambda_{\rm Edd}=1.2^{+0.6}_{-0.5}$ for J1120+0641 at $z=7.09$ \citep{Mortlock11}, and $\lambda_{\rm Edd}=1.25\pm0.19$ for J0038--1527 at $z=7.02$ \citep{Wang18}. If J0252--0503 has been accreting at such Eddington ratio since $z\sim20$ with a radiative efficiency of 10\%, it would require a seed BH of $\sim10^5~{\rm M_\odot}$, which significantly exceeds the predicted mass range from stellar remnant BHs and requires more exotic seed formation mechanisms like direct collapse BHs.
\edit1{Even if it was accreting at the Eddington limit, J0252--0503 would still require the seed BH to be more massive than $\sim10^4~{\rm M_\odot}$. This indicates that J0252--0503 is one of the few quasars that put the most stringent constraints on SMBH formation and growth mechanisms.}
\begin{figure*}[tbh]
\centering
\includegraphics[width=1.0\textwidth]{fig_spec_dp.png}
\caption{
Top: Gemini/GMOS $+$ Keck/NIRES spectrum of J0252--0503, the same as shown in Figure \ref{fig_spec}. The red-side PCA fit and the blue-side prediction are overlaid as red and blue curves, respectively.
Bottom left: Zoom-in of the Ly$\alpha$ region. The brown and blue lines represent the composite spectrum, and PCA blue-side prediction, respectively. The thinner blue lines show 100 draws from the covariant blue-side prediction error calibrated from the 1\% of quasars that are most similar in the PCA training set. The composite spectra agree well with the PCA prediction, which implies that the detection of a strong damping wing is robust.
Bottom right: Transmission spectrum of J0252--0503 (the spectrum is normalized by the PCA model). The re-binned spectrum is shown as thick black line, while the un-binned spectrum is shown as a gray line. The blue solid curve shows the mean transmission spectrum of mock spectra with $\langle x_{\rm HI} \rangle = 1.0$ and $t_{\rm Q}=10^{6.3}$ yr, while the associated blue shaded region shows the 16th--84th percentile range for mock spectra with the above parameters. As a comparison, the transmission spectrum of a DLA model with column density of $N_{\rm HI}= 10^{21.04}~ {\rm cm^{-2}}$ at $z=6.94$, is plotted as a yellow dashed line. The metal line Al\,{\sc ii}\ $\lambda1670$ from the $z=4.8793$ absorption system is highlighted by a red transparent vertical line.
\label{fig_pca}}
\end{figure*}
\section{A Strong Ly$\alpha$ Damping Wing at $z=7$}\label{sec_dp}
Among the six public known $z\ge7$ quasars, two objects already have had damping wing analyses performed \citep{Mortlock11, Bolton11, Bosman15, Greig17, Greig19, Banados18, Davies18b,Durovcikova19}. Two other quasars are too faint ($M_{1450}\gtrsim-25$) for damping wing analyses with current facilities \citep{Matsuoka19a, Matsuoka19b}, and another is a broad absorption line (BAL) quasar in which strong absorption precludes determination of the intrinsic quasar spectrum \citep{Wang18}. Thus, J0252--0503 is the only known bright, non-BAL quasar at $z\ge7$ of which a damping wing analysis has not been performed yet. In order to examine whether the damping wing is present in the spectrum of J0252--0503, we need to know the intrinsic quasar spectrum in the Ly$\alpha$ region (i.e. before IGM attenuation). In the past few years, several methods have been proposed for constructing the quasar intrinsic spectra, including stacking of low-redshift quasar spectra with similar emission line properties \citep[e.g.][]{Mortlock11, Simcoe12, Banados18}, using the principal component analysis (PCA) decomposition approach \citep{Davies18b, Davies18c}, constructing the covariant relationships between parameters of Gaussian fits to Ly$\alpha$ line and those of Gaussian fits to other broad emission lines \citep{Greig17,Greig19}, and using the neural network method \citep{Durovcikova19}. In this paper, we adopt both the empirical composite method and the PCA method to construct the intrinsic spectrum for J0252--0503 as detailed below.
\subsection{Empirical Composite Spectra from Analogs}\label{subsec_composite}
Since there is a lack of spectral evolution of quasars from low redshifts to high redshifts \citep[e.g.][]{Shen19}, the large sample of SDSS/BOSS quasars at lower redshifts provides a good training set for constructing a high-redshift quasar intrinsic spectrum. First, we use a composite spectrum constructed from a sample of low-redshift quasar analogs to model the intrinsic spectrum. Because the C\,{\sc iv}\ line properties, and especially the line's blueshift, appear to be strongly connected with differences in the quasar spectral energy distribution \citep[e.g.][]{Richards11}, we select quasar analogs from SDSS/BOSS DR14 quasar catalog \citep{Paris18} by matching the C\,{\sc iv}\ blueshifts to J0252--0503. As most SDSS/BOSS quasars do not have [C\,{\sc ii}] redshifts, we measure the relative blueshifts between the C\,{\sc iv}\ and Mg\,{\sc ii}\ lines. This limits us to selecting quasars in the redshift range $2.0<z<2.5$ in order to get Ly$\alpha$, C\,{\sc iv}, and Mg\,{\sc ii}\ line properties from BOSS spectra. We also excluded quasars marked as BAL and those without Mg\,{\sc ii}\ redshift measurements in the catalog. This yields 85,535 quasars in total.
Before measuring the line properties from these quasars, we first fit a power-law continuum to the quasar spectrum and subtract it from the data. Instead of fitting the C\,{\sc iv}\ and Mg\,{\sc ii}\ lines directly, we use a more robust non-parametric scheme proposed by \cite{Coatman16} to measure the line centroids of C\,{\sc iv}\ and Mg\,{\sc ii}\ lines from the continuum subtracted spectra. The relative blueshift between these two lines is then defined as
\begin{equation}
\Delta v = c\times \left( \frac{1549.06-\lambda_{\rm half, CIV}}{1549.06} - \frac{2798.75-\lambda_{\rm half, MgII}}{2798.75} \right),
\end{equation}
where $c$ is the speed of light and $\lambda_{\rm half, CIV}$ ($\lambda_{\rm half, MgII}$) is the rest-frame wavelength that bisects the cumulative total line flux of C\,{\sc iv}\ (Mg\,{\sc ii}).
We applied this procedure to the spectra of both J0252--0503 and the 85,535 SDSS/BOSS quasars. The blueshift in J0252--0503 is measured to be 4090 km $\rm s^{-1}$. We then select quasars with blueshifts between 3,000 km $\rm s^{-1}$ and 5,000 km $\rm s^{-1}$ and mean spectral signal-to-noise ratios (SNRs) per pixel in the C\,{\sc iv}\ and Mg\,{\sc ii}\ regions greater than 4 and 2, respectively. These SNR limits were chosen to yield enough sight-lines to compute a composite. After this, we visually inspected the continuum normalized spectra and removed quasars that have BAL features, proximate damped Ly$\alpha$ systems (PDLAs) and strong intervening absorbers on top of the emission lines. We also reject objects with Mg\,{\sc ii}\ line measurements that are strongly affected by sky line residuals and remove targets that have strongly different C\,{\sc iv}\ and Mg\,{\sc ii}\ line profiles than J0252--0503 (objects were removed if the line peaks differ by more than three times the spectrum error vector of J0252--0503). In the end, our master quasar analog sample consists of 83 SDSS/BOSS quasars.
Before constructing the composite spectrum, each spectrum was divided by its best fit power-law continuum. Each spectrum was weighted by the average SNR of that spectrum when computing the composite. Then we multiplied the power-law fit from J0252--0503 with the constructed continuum normalized composite, obtaining the composite spectrum shown in Figure \ref{fig_spec}. In order to understand the uncertainties of the composite spectrum and minimize the bias introduced by visual checks, we resampled our parent sample with bootstrapping to construct another 100 composites which are shown as thin orange lines in the insert panel of Figure \ref{fig_spec}. Overall, the constructed composite matches the J0252--0503 spectrum very well across the whole spectral range, except for the Ly$\alpha$ line region. From the left inset of Figure \ref{fig_spec}, we can clearly see that these composites have higher fluxes redward of the Ly$\alpha$ emission line (from 1216\AA\ to 1250\AA\ in rest-frame) than J0252--0503, indicating strong absorption in the spectrum of J0252--0503.
\subsection{Principal Component Analysis}\label{subsec_pca}
Strong correlations between various broad emission lines of quasars from the rest-frame ultraviolet to the optical are known to exist \citep[e.g.][]{Richards11}. Taking this into account, in principle one can predict the shape of the Ly$\alpha$ line based on the properties of other broad emission lines. \cite{Davies18c} developed a PCA predictive approach based on a training set of $\sim13,000$ quasar spectra from SDSS/BOSS quasar catalog \citep{Paris17} to predict the ``blue-side'' (rest-frame 1175--1280\,\AA) quasar spectrum from the ``red-side'' (rest-frame 1280--2850\,\AA) spectrum. In brief, we performed a PCA decomposition of the training set truncated at 10 red-side and 6 blue-side basis spectra for each quasar. Then we derived a projection matrix relating the best-fit coefficients in the red-side and a template redshift to the coefficients in the blue-side \citep{Suzuki05,Paris11}. With this matrix, we can then predict the blue-side coefficients and thus the blue-side spectrum from a fit to the red-side coefficients and template redshift of a given quasar spectrum.
We quantify the uncertainties of this prediction by testing the full predictive procedure on every quasar in the training set and computing their relative continuum error \cite[See][for more details]{Davies18c}. We assume a multivariate Gaussian distribution for the relative continuum error, with the covariance matrix determined from the prediction errors measured for similar quasars, i.e., the 1\% nearest neighbors, as the uncertainties of the prediction.
The advantage of this PCA method compared to the composite spectrum discussed in \S \ref{subsec_composite} is that the PCA approach takes into account the properties of all broad emission lines in the red side rather than just the properties of the C\,{\sc iv}\ line. In addition, we can quantify uncertainties in the blue-side spectrum predictions by testing the method on the input training set.
In the upper panel of Figure \ref{fig_pca}, we show the red-side PCA fit and blue-side prediction for J0252--0503 on top of the GMOS+NIRES quasar spectrum. In the bottom left panel of Figure \ref{fig_pca}, we show a zoom-in of the Ly$\alpha$ region overlaid with both the blue-side PCA model and the composite spectrum constructed in \S \ref{subsec_composite}. From this zoomed-in plot, we can see that the intrinsic quasar spectrum predicted by the PCA model agrees very well with the composite spectrum. Both models suggest that there is a strong damping wing absorption imprinted on the Ly$\alpha$ emission line of the quasar. Since these two models are consistent with each other, we will only use the PCA continuum model for the following analyses so that we can make use of its well quantified uncertainties.
\begin{figure}
\centering
\includegraphics[width=0.47\textwidth]{fig_cog_nires.pdf}
\caption{Curve of growth analysis to derive column densities for the selected ions. For each panel, the 1$\sigma$ and 3$\sigma$ limits to the equivalent width and column density are shown with dotted and dashed lines, respectively. The colored curves represent $b$-parameters of 5, 10, 15 and 20 $\rm km~s^{-1}$.
\label{fig_cog}}
\end{figure}
\subsection{Modeling the Damping Wing as a Single DLA}\label{subsec_dla}
The smooth damped absorption profile can be imprinted by either an intervening high column density gravitationally bounded DLA system ($N_{\rm HI}>10^{20}~{\rm cm}^{-2}$) or substantially neutral gas in the IGM. However, DLA systems in the quasar vicinity are very rare at high redshifts. Among more than 250 known $z\gtrsim5.7$ quasars, only a few of them have been identified to be associated with such absorbers close to the quasar redshifts \citep[e.g.][]{DOdorico18, Banados19, Davies19, Farina19}, suggesting that the probability of the strong redward absorption seen in J0252--0503 being caused by a DLA is low. DLA systems are usually associated with a number of metal lines such as Si\,{\sc ii}\ $\lambda1260, \lambda1304,\lambda1526$, O\,{\sc i}\ $\lambda1302$, C\,{\sc ii}\ $\lambda1334$, C\,{\sc iv}\ $\lambda1548,\lambda1550$, Mg\,{\sc ii}\ $\lambda2796,\lambda2803$, and a series of Fe\,{\sc ii}\ lines. Thus, one way to distinguish a DLA damping wing from an IGM damping wing is to search for associated metal absorption features.
First, we need to determine the redshift of a potential DLA system. To do so, we fit a Voigt profile to the transmission spectrum which is normalized by the PCA continuum model. Since the Doppler parameter, $b$, does not strongly affect the Ly$\alpha$ profile \citep[e.g.][]{Crighton15}, we fixed the $b$ value to be $b=10 {\rm ~km~s^{-1}}$ and use the MCMC sampler \citep[{\tt emcee};][]{emcee} to jointly fit the redshift and H\,{\sc i}\ column density of a DLA model. During the fit, we masked the narrow absorption at $v\sim0 {\rm ~km~s^{-1}}$. This absorption could be caused by neutral gas inflow since we did not find any associated metal absorption from the quasar spectrum, and it is located at a slightly higher redshift than the quasar if it is caused by neutral hydrogen. The best fit parameters for the system are determined to be $N_{\rm HI}=10^{21.04\pm0.04}~{\rm cm}^{-2}$ and $z_{\rm DLA}=6.939\pm0.002$. In order to qualify the uncertainties of these parameters caused by the continuum model, we then fit DLA models to 100 transmission spectra normalized by the 100 PCA draws shown in the bottom left panel of Figure \ref{fig_pca}. The median values and the mean deviation of 16\% and 84\% percentiles from the median for $N_{\rm HI}$ and $z_{\rm DLA}$ are measured to be $N_{\rm HI}=10^{21.04\pm0.10}~{\rm cm}^{-2}$ and $z_{\rm DLA}=6.940\pm0.003$. To take both the fitting uncertainty and the PCA continuum uncertainty into account, we take $N_{\rm HI}=10^{21.04\pm0.14}~{\rm cm}^{-2}$ and $z_{\rm DLA}=6.940\pm0.004$ as our fiducial parameters for the DLA model, where the uncertainties are the sum of the uncertainties from the {\tt emcee} fitting on the transmission spectrum and the distribution of the 100 draws. This potential DLA system (if it exists) is $\sim2200{\rm ~km~s^{-1}}$ away from the quasar systemic redshift which seems unlikely to be associated with the quasar host galaxy. This best fit DLA model is shown as the yellow dashed line in the bottom right panel of Figure \ref{fig_pca}. We caution that the resolution of our spectrum is low in the Ly$\alpha$ region, so the DLA fitting procedure might overestimate the $N_{\rm HI}$ if there are some narrow Ly$\alpha$ transmission spikes in the quasar proximity zone that are unresolved in our spectrum.
We then searched for metal absorption lines at $z\sim6.94$ in the J0252--0503 spectrum. In the end, we did not find any evidence for metal-line absorption at redshifts close to the potential DLA system within $\Delta z \sim \pm0.04$, or $\sim \pm 1500$ km~s$^{-1}$, ten times wider than the redshift uncertainty of the potential DLA system. We also did not find any metal absorption features at the quasar systemic redshift (i.e. from the quasar host galaxy). We then calculated rest-frame equivalent width (EW) 1$\sigma$ limits for each expected metal absorption line as follows: $W_{\rm r, SiII~1260}\le0.029~{\rm \AA}$, $W_{\rm r, OI~1302}\le0.024~{\rm \AA}$, $W_{\rm r, CII~1334}\le0.025~{\rm \AA}$, $W_{\rm r, CIV~1548}\le0.019~{\rm \AA}$, $W_{\rm r, FeII~2586}\le0.067~{\rm \AA}$, $W_{\rm r, FeII~2600}\le0.049~{\rm \AA}$, $W_{\rm r, MgII~2796}\le0.040~{\rm \AA}$. The EW limits were measured by summing over the normalized pixels over an aperture spanning $\pm2\sigma_{\rm inst}$ from the center of each line, where $\sigma_{\rm inst}=47{\rm ~km~s^{-1}}$ was derived from the NIRES instrumental resolution. In order to derive the column densities for the selected iron, we carried out a curve of growth analysis for four different $b$-parameters following \cite{Simcoe12}. The curve of growth analysis is shown in Figure \ref{fig_cog}. Based on the solar abundance \citep{Lodders03} and the column densities derived by fixing $b=10 {\rm ~km~s^{-1}}$, we find that the metallicity of the potential DLA system is most tightly constrained by C\,{\sc iv}. However, whether high redshift DLAs exhibit C\,{\sc iv}\ is still debated \citep[e.g.][]{DOdorico18,Cooper19}. Thus we use Mg\,{\sc ii}\ which sets the second most stringent constraint on the DLA abundance with $\rm [Mg/H]<-4.0$ (3$\sigma$). The DLA abundance $3\sigma$ limits are estimated to be $\rm [Si/H]<-3.6$, $\rm [O/H]<-3.6$, $\rm [C/H]<-3.7$, and $\rm [Fe/H]<-3.3$ based on Si\,{\sc ii}\ $\lambda1260$, O\,{\sc i}\ $\lambda1302$, C\,{\sc ii}\ $\lambda1334$, and Fe\,{\sc ii}\ $\lambda 2600$, respectively.
Since the $b$-parameter could be as low as $b=8 {\rm ~km~s^{-1}}$ at high redshifts \cite{DOdorico18}, we also estimate the [Mg/H] based on $b=5 {\rm ~km~s^{-1}}$ and find that $\rm [Mg/H]<-3.7$ (3$\sigma$).
\begin{figure}
\centering
\includegraphics[width=0.47\textwidth]{fig_metal_stack_b10.pdf}
\caption{Composite stack of heavy-element transitions (O\,{\sc i}\ 1302, C\,{\sc ii}\ 1334, Si\,{\sc ii}\ 1260, C\,{\sc iv}\ 1548, Fe\,{\sc ii}\ 2586, Fe\,{\sc ii}\ 2600, and Mg\,{\sc ii}\ 2796) generated using an inverse-variance weighted mean for a DLA system at $z=6.94$. The shaded grey regions denote the 1-, 2-, and 3-$\sigma$ error vectors. The quasar systemic redshift is indicated by a black dotted line. Overlaid curves show predicted metal absorption profiles for a DLA with $N_{\rm HI}=10^{21.04}$, $b=10{\rm ~km~s^{-1}}$ and a range of metallicities. The stack shows no statistically significant absorption, suggesting that the metallicity of the absorption system would be more than 10,000 times lower than solar if the damped absorption was produced by a single-component DLA system.
\label{fig_metal}}
\end{figure}
To further investigate the properties of a possible DLA, we compute the composite stack of the heavy-element transitions shown in Figure \ref{fig_cog} by assuming that there is a metal-poor DLA at $z_{\rm DLA}=6.94$. We stacked the transmitted flux at the expected wavelength using an inverse-variance weighted mean. The composite stack of metal lines is shown in Figure \ref{fig_metal} which shows no significant absorption within $\Delta v \sim 1500 {\rm ~km~s^{-1}}$. Note that the absorption feature at $v\sim1450~{\rm km~s^{-1}}$ in the stack is caused by the Fe\,{\sc ii}\ 2344 transition from a $z=3.5425$ absorber (see below). The 1$\sigma$ limit for the dimensionless equivalent width, $W=W_{\lambda} / \lambda$ \citep{Draine11}, for the stack is measured to be $W\le7.3\times10^{-6}$ ($1\sigma$). This corresponds to a limit of $\rm [O/H]<-4.1$ ($3\sigma$) after scaling it to the cross-section and relative abundance of O\,{\sc i}.
We also compute a set of DLA models by adapting $b=10{\rm ~km~s^{-1}}$ and solar abundance pattern \citep{Lodders03} with varying metallicities. The DLA transmission spectra are computed in the same wavelength grid and same resolution as the spectrum of J0252--0503. The composite stack of these DLA models for different metallicities is also over-plotted in Figure \ref{fig_metal}. The composite stack with $\rm [Z/H]<-4.3$ matches the observed stack at $3\sigma$ level, consistent with our curve of growth analysis of the observed composite metal transitions within 0.2 dex.
From Figure \ref{fig_cog}, we note that most of the metal transitions are in the linear region of the curve of growth unless $b\lesssim 5 {\rm~km~s^{-1}}$. Thus the metallicity constraint does not change too much by varying the $b$-parameter. By varying $b$ from $5 {\rm~km~s^{-1}}$ to $20 {\rm~km~s^{-1}}$, we can constrain the metallicity of the potential DLA system to be $\rm [Z/H]<-4.5\sim-4.0$. Our analysis indicates that this potential DLA system would be among the most metal-poor DLA systems known \citep[e.g.][]{Cooke11, Banados19}. This suggests that the strong damped absorption is very unlikely to be caused by a DLA.
In addition, we also searched for absorbers at lower redshifts to make sure that the damped Ly$\alpha$ absorption is not contaminated by lower redshift absorbers. We identify five strong Mg\,{\sc ii}\ absorption systems at $z=4.8793$, $z=4.7144$, $z=4.2095$, $z=4.0338$, and $z=3.5425$. These systems also exhibit associated Fe\,{\sc ii}\ lines. The $z=4.8793$ system also has associated Al\,{\sc ii}\ $\lambda1670$ absorption line which falls into the damped absorption region which is masked in the following damping wing analysis. However, this line is very narrow (see the bottom right panel of Figure \ref{fig_pca}) and thus would not be responsible for the smoothed absorption profile on much larger scales. These analyses indicate that the damping wing absorption in the J0252--0503 spectrum is more likely to be imprinted by the neutral IGM rather than by a DLA system or other intervening absorbers, especially considering the fact that J1120+0641 and J1342+0928 also have similar (though slightly weaker) absorption profiles that are not associated with metals \citep[e.g.][]{Simcoe12, Banados18}.
\begin{figure}
\centering
\includegraphics[width=0.47\textwidth]{fig_xhi_1dpdf.pdf}
\caption{Posterior PDFs of $\langle x_{\rm HI} \rangle$ for all three $z\ge7$ quasars with reported damping wings. The solid orange line denotes J0252--0503. The solid magenta and solid blue lines denote J1342+0928 and J1120+0641 \citep{Davies18b}, respectively. The dotted magenta and dotted blue lines represent the analyses for J1342+0928 and J1120+0641 from \cite{Greig19}, and \cite{Greig17}, respectively. The PDFs for J0252--0503 and the analyses from \cite{Davies18c} are marginalized over quasar lifetime assuming a flat prior covering our entire model grid ($10^3 {\rm yr} < t_{\rm Q} < 10^8 {\rm yr}$).
\label{fig_pdf}}
\end{figure}
\subsection{Constraints on the IGM Neutral Fraction from A Strong Damping Wing at $z=7$}\label{subsec_xhi}
In order to quantitatively assess the damping wing strength and constrain the volume-averaged neutral hydrogen fraction at $z=7$, we applied the methodology from \cite{Davies18b} to this quasar sight-line. We refer the reader to \cite{Davies18b} for a detailed description. In brief, we model the reionization-era quasar transmission spectrum with a multi-scale hybrid model. This model combines large-scale semi-numerical reionization simulations around massive dark matter halos computed in a (400 Mpc)$^3$ volume with a modified version of {\tt 21cmFAST} \citep[][Davies \& Furlanetto in prep]{Mesinger11}, density, velocity, and temperature fields of 1200 hydrodynamical simulation skewers from a separate (100 Mpc/$h$)$^3$ {\tt Nyx} hydrodynamical simulation \citep{Almgren13, Lukic15}, and 1D ionizing radiative transfer which models the ionization and heating of the IGM by the quasar \citep{Davies16}. We then construct realistic forward modeled representations of quasar transmission spectra after accounting for the covariant intrinsic quasar continuum uncertainty from the PCA training. Finally, we use a Bayesian statistical method to recover the joint posterior probability distribution functions (PDFs) of $\langle x_{\rm HI} \rangle$ based on these mock transmission spectra.
The damping wing strength not only depends on the $\langle x_{\rm HI} \rangle$, but also strongly depends on the quasar lifetime, $t_{\rm Q}$, due to the ionization of pre-existing neutral hydrogen along the line of sight by the quasar. In order to measure $\langle x_{\rm HI} \rangle$, we conservatively explore a very broad $t_{\rm Q}$ range with a flat log-uniform $t_{\rm Q}$ prior covering $10^3 {\rm yr} < t_{\rm Q} < 10^8 {\rm yr}$. We then compute the posterior PDF for $\langle x_{\rm HI} \rangle$ by marginalizing over the entire model grid of $t_{\rm Q}$, which is shown in Figure \ref{fig_pdf}. The peak of the PDF leans to the high $\langle x_{\rm HI} \rangle$ end. This is consistent with what we have seen in Figure \ref{fig_pca}, where we show a quasar transmission spectrum model within a $\langle x_{\rm HI} \rangle=1.0$ IGM with a quasar lifetime of $ t_{\rm Q} = 10^{6.3}$ yr. The median and the central 68\% (95\%) confidence interval for $\langle x_{\rm HI} \rangle$ are estimated to be $\langle x_{\rm HI} \rangle = 0.70^{+0.20}_{-0.23} (^{+0.28}_{-0.48})$ from the posterior PDF. As a comparison, we also show the PDFs from the other two $z>7$ quasar sight-lines in Figure \ref{fig_pdf}. Although the redshift of J0252--0503 is lower than the other two quasars, the damping wing in J0252--0503 is the strongest one.
\begin{figure}
\centering
\includegraphics[width=0.47\textwidth]{fig_xhi.pdf}
\caption{Cosmic reionization history constraints from quasar spectroscopy and Planck observations \citep{Planck18}, with the dark and light grey shaded regions corresponding to the 68\% and 95\% credible intervals, respectively.
Constraints from quasar damping wings are shown as pentagons, with the orange solid pentagon denotes our new measurement with quasar J0252--0503. Also shown are constraints from the Ly$\alpha$+Ly$\beta$ forest dark gaps \citep[blue squares; ][]{McGreer15}, and the Ly$\alpha$+Ly$\beta$ forest opacity \citep[black circles;][]{Fan06}.
\label{fig_xhi}}
\end{figure}
In Figure \ref{fig_xhi}, we plot the $\langle x_{\rm HI} \rangle$ constraints from all three quasar damping wings. In this figure, we also show the $\langle x_{\rm HI} \rangle$ constraints from the Ly$\alpha$+Ly$\beta$ forest \citep{Fan06}, as well as Ly$\alpha$+Ly$\beta$ dark gaps \citep{McGreer15}. All three $z\ge7$ quasars for which a damping wing analysis can be done with current facilities and methodology show evidence of damping wing absorptions, suggesting that the IGM is substantially neutral at $z\ge7$. These constraints are consistent with the integral constraints of $\langle x_{\rm HI} \rangle$ measured from the electron scattering optical depth of the CMB \citep{Planck18} shown as the underlying shaded region in Figure \ref{fig_xhi}. They are also in broad agreement with recent calculations \citep[e.g.][]{Robertson15} and simulations \citep[e.g.][]{Kulkarni19} of the cosmic reionization history, as well as constrains from gamma-ray burst (GRB) damping wings \citep{Totani06, Totani16, Greiner09}, the detections of Ly$\alpha$ emissions from high redshift galaxies \citep[e.g.][]{Ouchi10,Mason18}, and Ly$\alpha$ luminosity functions \citep[e.g.][]{Kashikawa06,Konno18}.
\section{Summary and Discussion}\label{sec_sum}
In this paper we present high-quality near-infrared spectroscopic observations of a bright $z=7$ quasar, J0252--0503, to constrain the cosmic reionization with quasar damping wing modeling and the SMBH growth with BH mass and Eddington ratio measurements.
We measure the mass of the central SMBH to be $\rm M_{BH}= (1.39\pm0.16)\times10^{9}~ M_\odot$ based on the single-epoch virial method. The Eddington ratio of J0252--0503 is measured to be $\lambda_{\rm Edd}=0.7\pm0.1$, slightly lower than that of the other three $z\ge7$ quasars with similar luminosities. If J0252--0503 has been accreting at such Eddington ratio since $z\sim20$ with a radiative efficiency of 10\%, it would require a seed BH of $\sim10^5~{\rm M_\odot}$, which significantly exceeds the predicted mass range from stellar remnant BHs and requires more exotic seed formation mechanisms like direct collapse BHs. J0252--0503, along with the other three luminous $z>7$ quasars hosting billion solar-mass SMBHs, places the strongest constraints on early BH assembly mechanisms.
In order to investigate whether a damping wing is present in the spectrum of J0252--0503, we explored two different methods to construct the intrinsic spectrum of J0252--0503. The Ly$\alpha$ region of a composite spectrum computed from a sample of C\,{\sc iv}\ blueshift-matched low redshift quasar analogs is consistent with the prediction made by a PCA non-parametric predictive approach. Both methods suggest that a strong damping wing absorption is present in the J0252--0503 spectrum. We modeled the damping wing profile produced by either a single component DLA system or a significantly neutral IGM. However, there is no significant detection of metals at the potential DLA system redshift over a wide range of $\pm1500~{\rm km~s^{-1}}$, suggesting that the strong damping wing in the J0252--0503 spectrum is most likely imprinted by a significantly neutral IGM unless the metallicity of the putative DLA is more than 10,000 times lower than the solar metallicity.
To constrain the IGM neutral hydrogen fraction, $\langle x_{\rm HI} \rangle$, at $z=7$ with the damping wing in J0252--0503, we applied the hybrid model developed by \cite{Davies18b} to our PCA continuum prediction for J0252--0503. Our analysis shows that the damping wing in J0252--0503 is the strongest one yet seen in $z\ge7$ quasar spectra. By marginalizing over quasar lifetime with a log-uniform prior in the range of $10^3 < t_{\rm Q} < 10^8$ yr, we measure the median and the central 68\% (95\%) confidence interval for $\langle x_{\rm HI} \rangle$ to be $\langle x_{\rm HI} \rangle = 0.70^{+0.20}_{-0.23} (^{+0.28}_{-0.48})$ at $z\sim7$.
\edit1{The recent study by \cite{DAloisio20} suggests that unrelaxed gaseous structures may exist in the post-reionization IGM, meaning that the mean free path of ionizing photons is shorter compared with a model that assumes the gas is fully relaxed. The mean free path in the quasar proximity zone, however, should still be quite long due to the strong ionizing radiation of the central luminous quasar \citep{McQuinn11,Davies19}. Thus our constraints on $\langle x_{\rm HI} \rangle$ based on damping wing analysis should not be strongly affected by unrelaxed baryons in the proximity zone.}
Despite the limited precision of quasar continuum reconstructions and the degeneracy of $\langle x_{\rm HI} \rangle$ and quasar lifetime, the damping wing is still highly effective in constraining the reionization history. Although the currently available sample of quasar sight-lines at $z\gtrsim7$ is very small, more luminous $z\gtrsim7$ quasars are expected to be found in the next few years through ongoing quasar searches \citep[e.g.][]{Banados18, Wang18, Yang19, Matsuoka19a, Reed19}. Moreover, the {\it Euclid} wide survey will be online soon, and will discover more than 100 quasars at $z>7$ \citep{Barnett19}. In addition, the Near-Infrared Spectrograph (NIRSpec) on the James Webb Space Telescope (JWST) will provide much higher quality spectroscopic data for more precise quasar damping wing analyses. Thus, we expect that quasar damping wing analyses will have the capability to place increasingly strong constraints on the cosmic reionization history during the next several years.
\acknowledgments
Support for this work was provided by NASA through the NASA Hubble Fellowship grant \#HST-HF2-51448.001-A awarded by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Incorporated, under NASA contract NAS5-26555.
J. Yang, X. Fan and M. Yue acknowledge support from the US NSF Grant AST-1515115 and NASA ADAP Grant NNX17AF28G.
Research by A.J.B.\ is supported by NSF grant AST-1907290.
X.-B.W. and L.J. acknowledge support from the National Key R\&D Program of China (2016YFA0400703) and the National Science Foundation of China (11533001 \& 11721303).
ACE acknowledges support by NASA through the NASA Hubble Fellowship grant \#HST-HF2-51434 awarded by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., for NASA, under contract NAS5-26555.
B.P.V. and F.W. acknowledge funding through the ERC grant ``Cosmic Gas''.
The data presented in this paper were obtained at the W.M. Keck Observatory, which is operated as a scientific partnership among the California Institute of Technology, the University of California and the National Aeronautics and Space Administration. The Observatory was made possible by the generous financial support of the W. M. Keck Foundation. This work was supported by a NASA Keck PI Data Award, administered by the NASA Exoplanet Science Institute. Data presented herein were obtained at the W. M. Keck Observatory from telescope time allocated to the National Aeronautics and Space Administration through the agency's scientific partnership with the California Institute of Technology and the University of California. This research is based on observations obtained at the Gemini Observatory (GS-2018B-FT-202), which is operated by the Association of Universities for Research in Astronomy, Inc., under a cooperative agreement with the NSF on behalf of the Gemini partnership: the National Science Foundation (United States), National Research Council (Canada), CONICYT (Chile), Ministerio de Ciencia, Tecnolog\'{i}a e Innovaci\'{o}n Productiva (Argentina), Minist\'{e}rio da Ci\^{e}ncia, Tecnologia e Inova\c{c}\~{a}o (Brazil), and Korea Astronomy and Space Science Institute (Republic of Korea). UKIRT is owned by the University of Hawaii (UH) and operated by the UH Institute for Astronomy; operations are enabled through the cooperation of the East Asian Observatory.
The authors thank Percy Gomez and Greg Doppmann for their expert support and advice during our NIRES observing runs. Some of the data presented herein were obtained using the UC Irvine Remote Observing Facility, made possible by a generous gift from John and Ruth Ann Evans.
The authors wish to recognize and acknowledge the very significant cultural role and reverence that the summit of Mauna Kea has always had within the indigenous Hawaiian community. We are most fortunate to have the opportunity to conduct observations from this mountain.
\vspace{5mm}
\facilities{Gemini(GMOS), Keck(NIRES), UKIRT(WFCam), NOEMA}
\software{astropy \citep{astropy}, emcee \citep{emcee}, matplotlib \citep{matplotlib}, PypeIt \citep{pypeit}}
\bigskip
\setcounter{footnote}{0}
It is well known that optimising a proxy can lead to unintended outcomes:
a boat spins in circles collecting ``powerups'' instead of following the race track in a racing game \citep{clark2016faulty};
an evolved circuit listens in on radio signals from nearby computers' oscillators instead of building its own \citep{bird2002evolved}; universities reject the most qualified applicants in order to appear more selective and boost their ratings \citep{golden2001glass}.
In the context of reinforcement learning (RL), such failures are called \textbf{reward hacking}.
For AI systems that take actions in safety-critical real world environments such as autonomous vehicles,
algorithmic trading,
or content recommendation systems, %
these unintended outcomes can be catastrophic.
This makes it crucial to align autonomous AI systems with their users' intentions.
Precisely specifying which behaviours are or are not desirable is challenging, however.
One approach to this specification problem is to learn an approximation of the true reward function \citep{ng2000algorithms, ziebart2010modeling, leike2018scalable}.
Optimizing a learned proxy reward can be dangerous, however;
for instance, it might overlook side-effects \citep{Krakovna2018Penalizing, Turner2019Conservative} or encourage power-seeking \citep{turner2021optimalneurips} behavior.
This raises the question motivating our work: When is it safe to optimise a proxy?
To begin to answer this question, we consider a somewhat simpler one: When \textit{could} optimising a proxy lead to worse behaviour?
\enquote{Optimising}, in this context, does not refer to finding a global, or even local, optimum, but rather running a search process, such as stochastic gradient descent (SGD), that yields a sequence of candidate policies, and tends to move towards policies with higher (proxy) reward.
We make no assumptions about the path through policy space that optimisation takes.\footnote{
This assumption -- although conservative -- is reasonable because optimisation in state-of-the-art deep RL methods is poorly understood and results are often highly stochastic and suboptimal.
}
Instead, we ask whether there is \textit{any} way in which improving a policy according to the proxy could make the policy worse according to the true reward; this is equivalent to asking if there exists a pair of policies $\pi_1$, $\pi_2$ where the proxy prefers $\pi_1$, but the true reward function prefers $\pi_2$.
When this is the case, we refer to this pair of true reward function and proxy reward function as \textbf{hackable}.
Given the strictness of our definition, it is not immediately apparent that any non-trivial examples of unhackable reward function pairs exist.
And indeed, if we consider the set of all stochastic policies, they do not (Section~\ref{sec:results_all}).
However, restricting ourselves to \textit{any} finite set of policies guarantees at least one non-trivial unhackable pair (Section~\ref{sec:results_finite}).
Intuitively, we might expect the proxy to be a ``simpler'' %
version of the true reward function.
Noting that the definition of unhackability is symmetric, we introduce the asymmetric special case of \textbf{simplification}, and arrive at similar theoretical results for this notion.\footnote{See Section~\ref{sec:our_ definitions} for formal definitions.}
In the process, and through examples, we show that seemingly natural ways of simplifying reward functions often fail to produce simplifications in our formal sense, and %
in fact fail to
rule out the potential for reward hacking.
We conclude with a discussion of the implications and limitations of our work.
Briefly, our work suggests that a proxy reward function must satisfy demanding standards in order for it to be safe to optimize.
This in turn implies that the reward functions learned by methods such as reward modeling and inverse RL are perhaps best viewed as auxiliaries to policy learning, rather than specifications that should be optimized.
This conclusion is weakened, however, by the conservativeness of our chosen definitions; future work should explore when hackable proxies can be shown to be safe in a probabilistic or approximate sense, or when subject to only limited optimization.
\section{Example: Cleaning Robot}
Consider a household robot tasked with cleaning a house with three rooms: Attic \IconAttic, Bedroom \IconBedroom, and Kitchen \IconKitchen.
The robot's (deterministic) policy is a vector indicating which rooms it cleans:
$\pi = [\pi_1, \pi_2, \pi_3] \in \{0, 1\}^3$.
The robot receives a (non-negative) reward of $r_1, r_2, r_3$ for cleaning the attic, bedroom, and kitchen, respectively, and the total reward is given by $J(\pi) = \pi \cdot r$.
For example, if $r = [1, 2, 3]$ and the robot cleans the attic and the kitchen, it receives a reward of $1+3 = 4$.
\begin{figure*}[h!]
\vspace{-3pt}
\centering
\includegraphics[width=0.75\textwidth]{./figures/robot-figure-upd.pdf}
\caption{An illustration of hackable and unhackable proxy rewards arising from overlooking rewarding features. A human wants their house cleaned.
In (a), the robot draws an incorrect conclusion because of the proxy; this could lead to hacking. In (b), no such hacking can occur: the proxy is unhackable.}
\label{fig:front-fig}
\end{figure*}
\vspace{-4pt}
At least two ideas come to mind when thinking about \enquote{simplifying} a reward function.
The first one is \textit{overlooking rewarding features}: suppose the true reward
is equal for all the rooms, $r_\text{true} = [1, 1, 1]$,
but we only ask the robot to clean the attic and bedroom, $r_\text{proxy} = [1, 1, 0]$.
In this case, $r_\text{proxy}$ and $r_\text{true}$ are unhackable.
However, if we ask the robot to only clean the attic,
$r_\text{proxy} = [1, 0, 0]$, this is hackable with respect to $r_\text{true}$. %
To see this, note that according to $r_\text{proxy}$ cleaning the attic ($J_\text{proxy}=1$) is better than cleaning the bedroom and the kitchen ($J_\text{proxy}=0$).
Yet, $r_\text{true}$ says that cleaning the attic ($J_\text{true}=1$) is worse than cleaning the bedroom and the kitchen ($J_\text{true}=2$).
This situation is illustrated in Figure~\ref{fig:front-fig}.
The second seemingly natural way to simplify a reward function is \textit{overlooking fine details}:
suppose $r_\text{true} = [1, 1.5, 2]$,
and we ask the robot to clean all the rooms,
$r_\text{proxy} = [1, 1, 1]$. For these values, the proxy
and true reward are unhackable. However,
with a slightly less balanced true reward function
such as $r_\text{true} = [1, 1.5, 3]$, the proxy does lead to hacking,
since the robot would falsely calculate that it is
better to clean the attic and the bedroom than
the kitchen alone.
These two examples illustrate that while simplification of reward functions is sometimes possible, attempts at simplification can easily lead to %
reward hacking.
Intuitively, omitting/overlooking details is okay so long as all these details are not as important together as any of the details that we do share.
In general, it is not obvious what the proxy must look like to avoid reward hacking, suggesting we should take great care when using proxies.
For this specific environment, a proxy and a true reward are hackable exactly when there are two sets of rooms $S_1, S_2$ such that the true reward gives strictly higher value to cleaning $S_1$ than to cleaning $S_2$, while the proxy says the opposite: $J_1(S_1) > J_1(S_2) \; \& \; J_2(S_1) < J_2(S_2)$, where $J_1$ and $J_2$ denote the total true and proxy rewards, respectively.
For a proof of this statement, see Appendix~\ref{app:cleaning_robot}.
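This condition is easy to check by brute force. As a minimal sketch (in Python; our own illustration, not part of the software released with this paper), the following enumerates all pairs of room subsets and tests for an opposite ordering:
\begin{verbatim}
from itertools import product

def is_hackable(r_true, r_proxy):
    # Policies are subsets of rooms; J(pi) = pi . r for a reward vector r.
    policies = list(product([0, 1], repeat=len(r_true)))
    J = lambda pi, r: sum(p * w for p, w in zip(pi, r))
    return any(J(p1, r_true) > J(p2, r_true) and
               J(p1, r_proxy) < J(p2, r_proxy)
               for p1 in policies for p2 in policies)

print(is_hackable([1, 1, 1], [1, 1, 0]))    # False: unhackable
print(is_hackable([1, 1, 1], [1, 0, 0]))    # True: hackable
print(is_hackable([1, 1.5, 3], [1, 1, 1]))  # True: hackable
\end{verbatim}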
\section{Related Work}
\begin{wrapfigure}{r}{0.265\textwidth}\centering
\vspace{-4mm} %
\includegraphics[width=\linewidth]{./figures/reward_gaming_training_curves.pdf}
\caption{An illustration of reward hacking when optimizing a hackable proxy. The true reward first increases and then drops off, while the proxy reward continues to increase.}\label{fig:learning-curves}
\vspace{-4mm} %
\end{wrapfigure}
While we are the first to define hackability, we are far from the first to study specification hacking.
The observation that optimizing proxy metrics tends to lead to perverse instantiations is often called ``Goodhart's Law'', and is attributed to \citet{goodhart1984problems}.
\citet{Manheim2018Categorizing} provide a list of four mechanisms underlying this observation.
Examples of such unintended behavior abound in both RL and other areas of AI; \citet{krakovna2020specification} provide an extensive list.
Notable recent instances include a robot positioning itself between the camera and the object it is supposed to grasp in a way that tricks the reward model \citep{Amodei2017learning}, the previously mentioned boat race example \citep{clark2016faulty}, %
and a multitude of examples of reward model hacking in Atari \citep{ibarz2018reward}.
Reward hacking can occur suddenly.
\citet{ibarz2018reward} and \citet{pan2022effects} showcase plots similar to the one in Figure~\ref{fig:learning-curves}, where optimizing the proxy (either a learned reward model or a hand-specified reward function) first leads to both proxy and true rewards increasing, and then to a sudden phase transition where the true reward collapses while the proxy continues going up.
Note that not all of these examples correspond to optimal behavior according to the proxy.
Indeed, convergence to suboptimal policies is a well-known issue in RL \citep{thrun1993issues}.
As a consequence, improving optimization often leads to unexpected, qualitative changes in behavior.
For instance, \citet{Zhang2021onthe} demonstrate a novel cartwheeling behavior in the widely studied Half-Cheetah environment that exceeds previous performance so greatly that it breaks the simulator.
The unpredictability of RL optimization is a key motivation for our definition of hackability, since we cannot assume that agents will find an optimal policy.
Neither can we rule out the possibility of sudden improvements in proxy reward and corresponding qualitative changes in behavior.
Unhackability could provide confidence that reward hacking will not occur despite these challenges.
Despite the prevalence and potential severity of reward hacking, to our knowledge \citet{pan2022effects} provide the first peer-reviewed work that focuses specifically on it, although \citet{everitt2017reinforcement} tackle the closely related issue of reward corruption. %
The work of \citet{pan2022effects} is purely empirical; they manually construct proxy rewards for several diverse environments, and evaluate whether optimizing these proxies leads to reward hacking; in 5 out of 9 of their settings, it does.
In another closely related work, \citet{zhuang2020consequences} examine what happens when the proxy reward function depends on a strict subset of features relevant for the true reward.
They show that optimizing the proxy reward can lead to arbitrarily low true reward under suitable assumptions. This can be seen as a seemingly valid simplification of the true reward that turns out to be (highly) hackable.
While their result only applies to environments with decreasing marginal utility and increasing opportunity cost, we demonstrate hackability is an issue in arbitrary MDPs.
\newpage
Hackability is particularly concerning given arguments that reward optimizing behavior tends to be power-seeking \citep{turner2021optimalneurips}. %
But \citet{leike2018scalable} establish that any desired behavior (power-seeking or not) can in principle be specified as optimal via a reward function.\footnote{
Their result concerns non-stationary policies and uses non-Markovian reward functions, but in Appendix~\ref{sec:any_policy_optimal}, we show how an analogous construction can be used with stationary policies and Markovian rewards.}
However, unlike us, they do not consider the entire policy preference ordering. %
Meanwhile, \citet{abel2021expressivity} note that Markov reward functions cannot specify arbitrary orderings over policies or trajectories, although they do not consider hackability.
Previous works consider reward functions to be equivalent if they preserve the ordering over policies \citep{ng1999policy, ng2000algorithms}. Unhackability relaxes this, allowing equalities to be refined to inequalities, and vice versa.
Unhackability provides a notion of what it means to be ``aligned enough''; \citet{brown2020value} provide an alternative.
They say a policy is $\varepsilon$-value aligned if its value at every state is close enough to optimal (according to the true reward function).
Neither notion implies the other.
\textit{Reward tampering} \citep{everitt2017reinforcement, kumar2020realab, uesato2020avoiding, everitt2021reward} can be viewed as a special case of reward hacking, and refers to an agent corrupting the process generating reward signals, e.g.\ by tampering with sensors, memory registers storing the reward signal, or other hardware.
\citet{everitt2017reinforcement} introduce the Corrupt Reward MDP (CRMDP), to model this possibility.
A CRMDP distinguishes corrupted and uncorrupted rewards; these are exactly analogous to the proxy and true reward discussed in our work and others.
\citet{leike2018scalable} distinguish reward tampering from \textit{reward gaming}, where an agent achieves inappropriately high reward without tampering.
However, in principle, a reward function could prohibit all forms of tampering if the effects of tampering are captured in the state.
So this distinction is somewhat imprecise, and the CRMDP framework is general enough to cover both forms of hacking.
\section{Preliminaries}
We begin with an overview of reinforcement learning (RL) to establish our notation and terminology.
Section~\ref{sec:our_ definitions} introduces our novel definitions of hackability and simplification.
\subsection{Reinforcement Learning}\label{sec: reinforcement learning}
We expect readers to be familiar with the basics of RL, which can be found in \citet{sutton2018reinforcement}.
RL methods attempt to solve a sequential decision problem, typically formalised as a \textbf{Markov decision process (MDP)}, which is a tuple $(S,A,T,I,\mathcal{R},\gamma)$, where $S$ is a set of states, $A$ is a set of actions, $T : S \times A \rightarrow \Delta(S)$ is a transition function, $I \in \Delta(S)$ is an initial state distribution, $\mathcal{R}$ is a reward function, the most general form of which is $\mathcal{R} : S \times A \times S \rightarrow \Delta(\mathbb{R})$, and $\gamma \in [0,1]$ is the discount factor.
Here $\Delta(X)$ is the set of all distributions over $X$.
A \textbf{stationary policy} is a function $\pi : S \rightarrow \Delta(A)$ that specifies a distribution over actions in each state, and a \textbf{non-stationary} policy is a function $\pi : (S \times A)^* \times S \rightarrow \Delta(A)$, where $*$ is the Kleene star.
A \textbf{trajectory} $\tau$ is a path $s_0,a_0,r_0,...$ through the MDP that is possible according to $T$, $I$, and $\mathcal{R}$.
The \textbf{return} of a trajectory is the discounted sum of rewards $G(\tau) \doteq \sum_{t=0}^\infty \gamma^t r_t$, and the \textbf{value} of a policy is the expected return $J(\pi) \doteq \mathbb{E}_{\tau \sim \pi}[G(\tau)]$.
We derive \textbf{policy (preference) orderings} from reward functions by ordering policies according to their value.
In this paper, we assume that $S$ and $A$ are finite, that $|A| > 1$, that all states are reachable, and that $\mathcal{R}(s,a,s')$ has finite mean for all $s,a,s'$.
In our work, we consider various reward functions for a given environment, which is then formally a \textbf{Markov decision process without reward} $MDP \setminus \mathcal{R} \doteq (S,A,T,I,\underline{\hspace*{0.3cm}},\gamma)$.
Having fixed an $MDP \setminus \mathcal{R}$, any reward function can be viewed as a function of only the current state and action by marginalizing over transitions: $\mathcal{R}(s,a) \doteq \mathbb{E}_{s' \sim T(s,a)}\left[\mathcal{R}(s,a,s')\right]$; we adopt this view from here on.
We define the \textbf{(discounted) visit counts} of a policy as $\mathcal{F}^\pi(s,a) \doteq \mathbb{E}_{\tau \sim \pi}[\sum_{i=0}^\infty \gamma^i \mathbbm{1}(s_i=s, a_i=a)]$. %
Note that
$J(\pi) = \sum_{s,a} \mathcal{R}(s,a) \mathcal{F}^\pi(s,a)$, which we also write as $\langle \mathcal{R}, \mathcal{F}^\pi\rangle$.
When considering multiple reward functions in an $MDP \setminus \mathcal{R}$, we define $J_\mathcal{R}(\pi) \doteq \langle \mathcal{R}, \mathcal{F}^\pi\rangle$ and sometimes use $J_i(\pi) \doteq \langle \mathcal{R}_i, \mathcal{F}^\pi \rangle$ as shorthand.
We also use $\mathcal{F}: \Pi \rightarrow \mathbb{R}^{|S||A|}$ to denote the embedding of policies into Euclidean space via their visit counts, and define $\mathcal{F}(\dot{\Pi}) \doteq \{\mathcal{F}(\pi) : \pi \in \dot{\Pi}\}$ for any $\dot{\Pi}$.
Moreover, we also use a second way to embed policies into Euclidean space; let $\mathcal{G}(\pi)$ be the $|S||A|$-dimensional vector where $\mathcal{G}(\pi)[s,a] = \pi(a \mid s)$, and let $\mathcal{G}(\dot{\Pi}) \doteq \{\mathcal{G}(\pi) : \pi \in \dot{\Pi}\}$.
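Since $J_\mathcal{R}(\pi)$ is linear in the visit counts, both $\mathcal{F}^\pi$ and $J_\mathcal{R}(\pi)$ can be computed exactly in a finite MDP by solving a linear system. The sketch below (illustrative Python with numpy; the function names and array representation are our own, not code from this paper) does this for a stationary policy:
\begin{verbatim}
import numpy as np

def visit_counts(T, I, pi, gamma):
    # T[s, a, s1]: transition probabilities; I[s]: initial distribution;
    # pi[s, a]: action probabilities of a stationary policy.
    S = T.shape[0]
    P = np.einsum('sa,sap->sp', pi, T)     # state kernel under pi
    # The discounted state occupancy d solves d = I + gamma * P^T d
    d = np.linalg.solve(np.eye(S) - gamma * P.T, I)
    return d[:, None] * pi                 # F[s, a] = d[s] * pi[a | s]

def value(R, T, I, pi, gamma):
    # R[s, a]: expected reward; J(pi) = <R, F^pi>
    return np.sum(R * visit_counts(T, I, pi, gamma))
\end{verbatim}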
\subsection{Definitions and Basic Properties of Hackability and Simplification} \label{sec:our_ definitions}
Here, we formally define \emph{hackability} as a binary relation between reward functions.
\begin{definition}\label{def:unhackable}
A pair of reward functions $\mathcal{R}_1$, $\mathcal{R}_2$ are \textbf{hackable} relative to policy set $\Pi$ and an environment $(S,A,T,I,\underline{\hspace*{0.3cm}},\gamma)$ if
there exist $\pi,\pi' \in \Pi$ such that
$$
J_1(\pi) < J_1(\pi') \And J_2(\pi) > J_2(\pi'),
$$
else they are \textbf{unhackable}.
\end{definition}
Note that an unhackable reward pair can have $J_1(\pi) < J_1(\pi') \And J_2(\pi) = J_2(\pi')$ or vice versa.
Unhackability is symmetric; this can be seen by swapping $\pi$ and $\pi'$ in Definition~\ref{def:unhackable}.
It is not transitive, however. In particular, the constant reward function is unhackable with respect to any other reward function, so if unhackability \textit{were} transitive, any pair of reward functions would be unhackable.
Additionally, we say that $\mathcal{R}_1$ and $\mathcal{R}_2$ are \textbf{equivalent} on a set of policies $\Pi$ if $J_1$ and $J_2$ induce the same ordering of $\Pi$, and that $\mathcal{R}$ is \textbf{trivial} on $\Pi$ if $J(\pi) = J(\pi')$ for all $\pi,\pi' \in \Pi$. It is clear that $\mathcal{R}_1$ and $\mathcal{R}_2$ are unhackable whenever they are equivalent, or one of them is trivial, but this is relatively uninteresting. Our central question is if and when there are other unhackable reward pairs.
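On a finite policy set, Definition~\ref{def:unhackable} can be checked directly from the value vectors. As a minimal sketch (Python; our own illustration), where \texttt{J1[i]} and \texttt{J2[i]} are the values of the $i$-th policy under the two reward functions:
\begin{verbatim}
def hackable(J1, J2):
    # True iff some pair of policies is ordered oppositely
    # by the two reward functions.
    return any(a1 < b1 and a2 > b2
               for a1, a2 in zip(J1, J2)
               for b1, b2 in zip(J1, J2))
\end{verbatim}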
The symmetric nature of this definition is counter-intuitive, given that our motivation distinguishes the proxy and true reward functions.
We might break this symmetry by only considering policy sequences that monotonically increase the proxy; however, this is equivalent to our original definition of hackability: think of $\mathcal{R}_1$ as the proxy, and consider the sequence $\pi, \pi'$.
We could also restrict ourselves to policies that are approximately optimal according to the proxy; Corollary~\ref{corollary:approximately_optimal} shows that Theorem~\ref{thm:open-set} applies regardless of this restriction. %
Finally, we define \emph{simplification} as an asymmetric special-case of unhackability; Theorem~\ref{thm:finite_simplification} shows this is in fact a more demanding condition. %
\begin{definition}
$\mathcal{R}_2$ is a \textbf{simplification} of $\mathcal{R}_1$ relative to policy set $\Pi$ if for all $\pi,\pi' \in \Pi$,
$$
J_1(\pi) < J_1(\pi') \implies J_2(\pi) \leq J_2(\pi')
\And
J_1(\pi) = J_1(\pi') \implies J_2(\pi) = J_2(\pi')
$$
and there exist $\pi,\pi' \in \Pi$ such that $J_2(\pi) = J_2(\pi')$ but $J_1(\pi) \neq J_1(\pi')$. Moreover, if $\mathcal{R}_2$ is trivial
then we say that this is a \textbf{trivial simplification}.
\end{definition}
Intuitively, while unhackability allows replacing inequality with equality -- or vice versa -- a simplification can only replace inequalities with equality, collapsing distinctions between policies.
When $\mathcal{R}_1$ is a simplification of $\mathcal{R}_2$, we also say that $\mathcal{R}_2$ is a \textbf{refinement} of $\mathcal{R}_1$.
We denote this relationship as $\mathcal{R}_1 \trianglelefteq \mathcal{R}_2$ or $\mathcal{R}_2 \trianglerighteq \mathcal{R}_1$; the narrowing of the triangle at $\mathcal{R}_1$ represents the collapsing of distinctions between policies.
If $\mathcal{R}_1 \trianglelefteq \mathcal{R}_2 \trianglerighteq \mathcal{R}_3$, then we have that $\mathcal{R}_1, \mathcal{R}_3$ are unhackable,\footnote{If $J_3(\pi) > J_3(\pi')$ then $J_2(\pi) > J_2(\pi')$, since $\mathcal{R}_2 \trianglerighteq \mathcal{R}_3$, and if $J_2(\pi) > J_2(\pi')$ then $J_1(\pi) \geq J_1(\pi')$, since $\mathcal{R}_1 \trianglelefteq \mathcal{R}_2$. It is therefore not possible that $J_3(\pi) > J_3(\pi')$ but $J_1(\pi) < J_1(\pi')$.} but if $\mathcal{R}_1 \trianglerighteq \mathcal{R}_2 \trianglelefteq \mathcal{R}_3$, then this is not necessarily the case.\footnote{Consider the case where $\mathcal{R}_2$ is trivial -- then $\mathcal{R}_1 \trianglerighteq \mathcal{R}_2 \trianglelefteq \mathcal{R}_3$ for any $\mathcal{R}_1, \mathcal{R}_3$.}
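The simplification conditions can be verified mechanically in the same style as the hackability check above (again an illustrative sketch, not the software released with this paper):
\begin{verbatim}
def simplifies(J1, J2):
    # True iff R2 (with values J2) is a simplification of R1 (values J1).
    pairs = [(a1, a2, b1, b2)
             for a1, a2 in zip(J1, J2)
             for b1, b2 in zip(J1, J2)]
    if any(a1 < b1 and a2 > b2 for a1, a2, b1, b2 in pairs):
        return False   # reverses a strict R1-preference
    if any(a1 == b1 and a2 != b2 for a1, a2, b1, b2 in pairs):
        return False   # breaks an R1-equality
    # must collapse at least one strict inequality into an equality
    return any(a2 == b2 and a1 != b1 for a1, a2, b1, b2 in pairs)
\end{verbatim}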
Note that these definitions are given relative to some $MDP \setminus \mathcal{R}$, although we often assume the environment in question is clear from context and suppress this dependence. The dependence on the policy set $\Pi$, on the other hand, plays a critical role in our results.
\section{Results}
Our results are aimed at understanding when it is possible to have an unhackable proxy reward function.
We first establish (in Section~\ref{sec:results_all}) that (non-trivial) unhackability is impossible when considering the set of all policies.
We might imagine that restricting ourselves to a set of sufficiently good (according to the proxy) policies would remove this limitation, but we show that this is not the case.
We then analyze finite policy sets (with deterministic policies as a special case), and establish necessary and sufficient conditions for unhackability and simplification.
Finally, we demonstrate via example that non-trivial simplifications are also possible for some infinite policy sets in Section~\ref{sec:results_infinite}.
\newpage
\subsection{Non-trivial Unhackability Requires Restricting the Policy Set}\label{sec:results_all}
\begin{wrapfigure}{r}{0.255\textwidth}\centering
\vspace{-14.5mm}
\includegraphics[width=0.27\textwidth]{./figures/gaussian_and_step.pdf}
\caption{Two reward functions. While the step function may seem like a simplification of the Gaussian, these reward functions are hackable.}
\label{fig:gaussian_and_step}
\vspace{-8.001mm} %
\end{wrapfigure}
We start with a motivating example.
Consider the setting shown in Figure~\ref{fig:gaussian_and_step}, where the agent can move left/stay-still/right and gets a reward depending on its state.
Let the Gaussian (blue) be the true reward $\mathcal{R}_1$ and the step function (orange) be the proxy $\mathcal{R}_2$. These are hackable.
To see this, consider being at state $B$. Let $\pi$ be the policy that travels from $B$ to $A$ or $C$ with a 50/50 chance, and compare it with the policy $\pi'$ that stays at $B$. Then we have that $J_1(\pi) > J_1(\pi')$ and $J_2(\pi) < J_2(\pi')$.%
Generally, we might hope that some environments allow for unhackable reward pairs that are not equivalent or trivial.
Here we show that this~is not the case, unless we impose restrictions on the set of policies we consider.
First note that if we consider \emph{non-stationary} policies, this result is relatively straightforward.
Suppose $\mathcal{R}_1$ and $\mathcal{R}_2$ are \emph{unhackable} and \emph{non-trivial} on the set $\Pi^N$ of all non-stationary policies, and let $\pi^\star$ be a policy that maximises ($\mathcal{R}_1$ and $\mathcal{R}_2$) reward, and $\pi_\bot$ be a policy that \emph{minimises} ($\mathcal{R}_1$ and $\mathcal{R}_2$) reward; unhackability ensures that such common maximisers and minimisers exist. Then the policy $\pi_\lambda$ that plays $\pi^\star$ with probability $\lambda$ and $\pi_\bot$ with probability $1-\lambda$ is a policy in $\Pi^N$. Moreover, for any $\pi$ there are unique $\alpha, \beta \in [0,1]$ such that $J_1(\pi) = J_1(\pi_\alpha)$ and $J_2(\pi) = J_2(\pi_\beta)$.
Now, if $\alpha \neq \beta$, then either $J_1(\pi) < J_1(\pi_\delta)$ and $J_2(\pi) > J_2(\pi_\delta)$, or vice versa, for $\delta = (\alpha + \beta)/2$.
If $\mathcal{R}_1$ and $\mathcal{R}_2$ are unhackable then this cannot happen, so it must be that $\alpha = \beta$.
This, in turn, implies that $J_1(\pi) = J_1(\pi')$ iff $J_2(\pi) = J_2(\pi')$, and so $\mathcal{R}_1$ and $\mathcal{R}_2$ are \emph{equivalent}. This means that no interesting unhackability can occur on the set of all non-stationary policies.
The same argument cannot be applied to the set of \emph{stationary} policies, because $\pi_\lambda$ is typically not stationary, and mixing stationary policies' action probabilities does not have the same effect.
For instance, consider a hallway environment where an agent can either move left or right. Mixing the ``always go left'' and ``always go right'' policies corresponds to picking a direction and sticking with it, whereas mixing their action probabilities corresponds to choosing to go left or right independently at every time-step.
However, we will see that there still cannot be any interesting unhackability on this policy set, and, more generally, that there cannot be any interesting unhackability on any set of policies which contains an \emph{open subset}. Formally, a set of (stationary) policies $\dot{\Pi}$ is open if
$\mathcal{G}(\dot{\Pi})$ is open in the smallest affine space that contains $\mathcal{G}(\Pi)$, where $\Pi$ is the set of all stationary policies.
We will use the following lemma:
\begin{lemma}\label{lemma:homeomorphism}
In any $MDP \setminus \mathcal{R}$, if $\dot{\Pi}$ is an open set of policies, then
$\mathcal{F}(\dot{\Pi})$ is open in $\mathbb{R}^{|S|(|A|-1)}$, and $\mathcal{F}$ is a homeomorphism between $\mathcal{G}(\dot{\Pi})$ and $\mathcal{F}(\dot{\Pi})$. %
\end{lemma}
Using this lemma, we can show that interesting unhackability is impossible on any set of stationary policies $\hat{\Pi}$ which contains an open subset $\dot{\Pi}$.
Roughly, if $\mathcal{F}(\dot{\Pi})$ is open, and $\mathcal{R}_1$ and $\mathcal{R}_2$ are non-trivial and unhackable on $\dot{\Pi}$, then the fact that $J_1$ and $J_2$ have a linear structure on $\mathcal{F}(\hat{\Pi})$ implies that $\mathcal{R}_1$ and $\mathcal{R}_2$ must be equivalent on $\dot{\Pi}$. From this, and the fact that $\mathcal{F}(\dot{\Pi})$ is open, it follows that $\mathcal{R}_1$ and $\mathcal{R}_2$ are equivalent everywhere.
\begin{theorem} \label{thm:open-set} %
In any $MDP \setminus \mathcal{R}$, if $\hat{\Pi}$ contains an open set, then any pair of reward functions that are unhackable and non-trivial on $\hat{\Pi}$ are equivalent on $\hat{\Pi}$.
\end{theorem}
Since simplification is a special case of unhackability, this also implies that non-trivial simplification is impossible for any such policy set. Also note that Theorem~\ref{thm:open-set} makes \emph{no assumptions} about the transition function, etc. From this result, we can show that interesting unhackability is always impossible on the set $\Pi$ of all (stationary) policies. In particular, note that the set $\tilde{\Pi}$ of all policies that take every action with positive probability in every state is an open set, and that $\tilde{\Pi} \subset \Pi$.
\begin{corollary}
In any $MDP \setminus \mathcal{R}$, any pair of reward functions that are unhackable and non-trivial on the set of all (stationary) policies $\Pi$ are equivalent on $\Pi$.
\end{corollary}
Theorem~\ref{thm:open-set} can also be applied to many other policy sets.
For example, we might not care about the hackability resulting from policies with low proxy reward, as we would not expect a sufficiently good learning algorithm to learn such policies.
This leads us to consider the following definition:
\newpage
\begin{definition}
A (stationary) policy $\pi$ is $\varepsilon$-suboptimal if $J(\pi) \geq J(\pi^\star) - \varepsilon$.
\end{definition}
Alternatively, if the learning algorithm always uses a policy that is \enquote{nearly} deterministic (but with some probability of exploration), then we might not care about hackability resulting from very stochastic policies, leading us to consider the following definition:
\begin{definition}
A (stationary) policy $\pi$ is $\delta$-deterministic if $\forall s \in S \; \exists a \in A: \mathbb{P}(\pi(s) = a) \geq \delta$.
\end{definition}
Unfortunately, both of these sets contain open subsets, which means they are subject to Theorem~\ref{thm:open-set}.%
\begin{corollary}
\label{corollary:approximately_optimal}
In any $MDP \setminus \mathcal{R}$, any pair of reward functions that are unhackable and non-trivial on the set of all $\varepsilon$-suboptimal policies ($\varepsilon>0$) $\Pi^\varepsilon$ are equivalent on $\Pi^\varepsilon$, and any pair of reward functions that are unhackable and non-trivial on the set of all $\delta$-deterministic policies ($\delta<1$) $\Pi^\delta$ are equivalent on $\Pi^\delta$.
\end{corollary}
Intuitively, Theorem~\ref{thm:open-set} can be applied to any policy set with \enquote{volume} in policy space.
\subsection{Finite Policy Sets}\label{sec:results_finite}
Having established that interesting unhackability is impossible relative to the set of all policies, we now turn our attention to the case of \emph{finite} policy sets.
Note that this includes the set of all deterministic policies, since we restrict our analysis to finite MDPs.
Surprisingly, here we find that non-trivial non-equivalent unhackable reward pairs \textit{always} exist.
\begin{theorem}\label{thm:finite_unhackability}
For any $MDP \setminus \mathcal{R}$, any finite set of policies $\hat{\Pi}$ containing at least two $\pi,\pi'$ such that $\mathcal{F}(\pi) \neq \mathcal{F}(\pi')$, and any reward function $\mathcal{R}_1$, there is a non-trivial reward function $\mathcal{R}_2$ such that $\mathcal{R}_1$ and $\mathcal{R}_2$ are unhackable but not equivalent.
\end{theorem}
\begin{wrapfigure}{r}{0.47\textwidth}\centering
\includegraphics[width=0.48\textwidth]{./figures/plates.pdf}
%
\caption{An illustration of the state-action occupancy space with a reward function defined over it. Points correspond to policies' state-action occupancies. Shading intensity indicates expected reward. Rotating the reward function to make $J(\pi_3) > J(\pi_4)$ passes through a reward function that sets $J(\pi_1) = J(\pi_2)$. Solid black lines are contour lines of the original reward function, dotted blue lines are contour lines of the rotated reward function.}
\label{fig:plates}
\end{wrapfigure}
This proof proceeds by finding a path from $\mathcal{R}_1$ to another reward function $\mathcal{R}_3$ that is hackable with respect to $\mathcal{R}_1$.
Along the way to reversing one of $\mathcal{R}_1$'s inequalities, we must encounter a reward function $\mathcal{R}_2$ that instead replaces it with equality.
In the case that $\mathrm{dim}(\mathcal{F}(\hat \Pi)) = 3$, we can visualize moving along this path as rotating the contour lines of a reward function defined on the space containing the policies' discounted state-action occupancies; see Figure~\ref{fig:plates}.
This path can be constructed so as to avoid any reward functions that produce trivial policy orderings, thus guaranteeing $\mathcal{R}_2$ is non-trivial.
For a \emph{simplification} to exist, we require some further conditions, as established by the following theorem:
\begin{theorem}\label{thm:finite_simplification}
Let $\hat{\Pi}$ be a finite set of policies, and $\mathcal{R}_1$ a reward function. The following procedure determines if there exists a non-trivial simplification of $\mathcal{R}_1$ in a given $MDP \setminus \mathcal{R}$:
\begin{enumerate}
\item Let $E_1 \dots E_m$ be the partition of $\hat{\Pi}$ where $\pi,\pi'$ belong to the same set iff $J_1(\pi) = J_1(\pi')$.
\item For each such set $E_i$, select a policy $\pi_i \in E_i$ and let $Z_i$ be the set of vectors that is obtained by subtracting $\mathcal{F}(\pi_i)$ from each element of $\mathcal{F}(E_i)$.
\end{enumerate}
Then there is a non-trivial simplification of $\mathcal{R}_1$ iff $\mathrm{dim}(Z_1 \cup \dots \cup Z_m) \leq \mathrm{dim}(\mathcal{F}(\hat{\Pi})) - 2$, where $\mathrm{dim}(S)$ is the number of linearly independent vectors in $S$.
\end{theorem}
The proof proceeds similarly to Theorem~\ref{thm:finite_unhackability}. However, in Theorem~\ref{thm:finite_unhackability} it was sufficient to show that there are no trivial reward functions along the path from $\mathcal{R}_1$ to $\mathcal{R}_3$, whereas here we additionally need that if $J_1(\pi) = J_1(\pi')$ then $J_2(\pi) = J_2(\pi')$ for all reward functions $\mathcal{R}_2$ on the path --- this is what the extra conditions ensure.
Theorem~\ref{thm:finite_simplification} is opaque, but intuitively, the cases where $\mathcal{R}_1$ cannot be simplified are those where $\mathcal{R}_1$ imposes many different equality constraints that are difficult to satisfy simultaneously.
We can think of $\mathrm{dim}(\mathcal{F}(\Pi))$ as measuring how diverse the behaviours of policies in policy set $\Pi$ are.
Having a less diverse policy set means that a given policy ordering imposes fewer constraints on the reward function, creating more potential for simplification.
The technical conditions of this proof determine when the diversity of $\Pi$ is or is not sufficient to prohibit simplification, as measured by $\mathrm{dim}(Z_1 \cup \dots \cup Z_m)$.
Projecting $E_i$ to $Z_i$ simply moves these spaces to the origin, so that we can compare the directions in which they vary (i.e.\ their span).
By assumption, $E_i \cap E_j = \emptyset$, but $\mathrm{span}(Z_i) \cap \mathrm{span}(Z_j)$ will include the origin, and may also contain linear subspaces of dimension greater than 0.
This is the case exactly when there are a pair of policies in $E_i$ and a pair of policies in $E_j$ that differ by the same visit counts,
for example, when the environment contains an obstacle that could be circumnavigated in several different ways (with an impact on visit counts, but no impact on reward), and the policies in $E_i$ and $E_j$ both need to circumnavigate it before doing something else.
Roughly speaking, $\mathrm{dim}(Z_1 \cup \dots \cup Z_m)$ is large when either (i) we have very large and diverse sets of policies in $\hat{\Pi}$ that get the same reward according to $\mathcal{R}$, or (ii) we have a large number of different sets of policies that get the same reward according to $\mathcal{R}$, and where there are different kinds of diversity in the behaviour of the policies in each set.
There are also intuitive special cases of Theorem~\ref{thm:finite_simplification}. For example, as noted before, if $E_i$ is a singleton then $Z_i$ has no impact on $\mathrm{dim}(Z_1 \cup \dots \cup Z_m)$. This implies the following corollary:
\begin{corollary}
For any finite set of policies $\hat{\Pi}$, any environment, and any reward function $\mathcal{R}$, if $|\hat{\Pi}| \geq 2$ and $J(\pi) \neq J(\pi')$ for all distinct $\pi,\pi' \in \hat{\Pi}$ then there is a non-trivial simplification of $\mathcal{R}$.
\end{corollary}
A natural question is whether any reward function is guaranteed to have a non-trivial simplification on the set of all deterministic policies.
As it turns out, this is not the case.
For concreteness, and to build intuition for this result, we examine the set of deterministic policies in a simple $MDP\setminus \mathcal{R}$ with $S = \{0, 1\}$, $A = \{0, 1\}$, $T(s, a) = a$, $I(0) = I(1) = 0.5$, and $\gamma = 0.5$. Denote by $\pi_{ij}$ the policy that takes action $i$ from state 0 and action $j$ from state 1. There are exactly four deterministic policies.
We find that of the $4! = 24$ possible policy orderings, 12 are realizable via some reward function. In each of those 12 orderings, exactly one of the six available pairs of policies can be set to equal value without resulting in the trivial reward function (\textit{which} pair can be equated depends on the ordering in consideration). Attempting to set three policies to equal value always results in the trivial simplification.
For example, given the ordering $\pi_{00} \leq \pi_{01} \leq \pi_{11} \leq \pi_{10}$,
the simplification $\pi_{00} = \pi_{01} < \pi_{11} < \pi_{10}$ is represented
by $R = \left[\begin{smallmatrix} 0 & 3 \\ 2 & 1\end{smallmatrix}\right]$, where $\mathcal{R}(s, a) = R[s, a]$: for example, here taking action 1 from state 0 gives reward $\mathcal{R}(0, 1) = 3$.
But there is no reward function representing a non-trivial simplification of this ordering with $\pi_{01} = \pi_{11}$.
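These statements are easy to verify numerically. The sketch below (illustrative Python; far simpler than the software suite described next) enumerates the four deterministic policies and recovers $J(\pi_{00}) = J(\pi_{01}) < J(\pi_{11}) < J(\pi_{10})$ under the reward matrix above:
\begin{verbatim}
import numpy as np

gamma, I = 0.5, np.array([0.5, 0.5])
R = np.array([[0., 3.], [2., 1.]])          # R[s, a]

def J(policy):
    # Since T(s, a) = a, values satisfy V(s) = R[s, pi(s)] + gamma V(pi(s)).
    A = np.eye(2)
    b = np.array([R[0, policy[0]], R[1, policy[1]]])
    for s in (0, 1):
        A[s, policy[s]] -= gamma
    return I @ np.linalg.solve(A, b)

for pi in [(0, 0), (0, 1), (1, 1), (1, 0)]:
    print(pi, J(pi))    # 1.0, 1.0, 3.0, 5.0
\end{verbatim}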
We develop and release a software suite to compute these results.
Given an environment and a set of policies, it can calculate all policy orderings represented by some reward function.
Also, for a given policy ordering, it can calculate all nontrivial simplifications and reward functions that represent them.
For a link to the repository, as well as a full exploration of these policies, orderings, and simplifications, see Appendix~\ref{sec:software}.
\subsection{Unhackability in Infinite Policy Sets}\label{sec:results_infinite}
The results in Section~\ref{sec:results_all} do not characterize unhackability for infinite policy sets that do not contain open sets.
Here, we provide two examples of such policy sets; one of them admits unhackable reward pairs and the other does not.
Consider policies $A,B,C$, and reward functions $\mathcal{R}_1$ with $J_1(C) < J_1(B) < J_1(A)$ and $\mathcal{R}_2$ with $J_2(C) = J_2(B) < J_2(A)$.
Policy sets $\Pi_a = \{ A \} \cup \{ \lambda B + (1 - \lambda) C : \lambda \in [0, 1]\}$ and $\Pi_b = \{ \lambda A + (1 - \lambda) B : \lambda \in [0, 1]\} \cup \{ \lambda B + (1 - \lambda) C : \lambda \in [0, 1]\} \cup \{ \lambda C + (1 - \lambda) A : \lambda \in [0, 1]\}$ (the full boundary of the triangle) are depicted in Figure~\ref{fig:both_triangle_examples}; the vertical axis represents policies' values according to $\mathcal{R}_1$ and $\mathcal{R}_2$.
For $\Pi_a$, $\mathcal{R}_2$ is a simplification of $\mathcal{R}_1$, but
for $\Pi_b$, it is not, since $J_1(X) < J_1(Y)$ and $J_2(X) > J_2(Y)$.
\begin{figure}[h]
\centering
\begin{subfigure}[b]{0.32\textwidth}
\centering
\includegraphics[width=\textwidth]{./figures/triangle13.png}
\caption{}
\label{fig:one_sided_triangle}
\end{subfigure}
\qquad
\begin{subfigure}[b]{0.32\textwidth}
\centering
\includegraphics[width=\textwidth]{./figures/triangle36.png}
\caption{}
\label{fig:three_sided_triangle}
\end{subfigure}
\caption{Infinite policy sets that do not contain open sets sometimes allow simplification (a), but not always (b). Points A, B, C represent deterministic policies, while the bold lines between them represent stochastic policies. The y-axis gives the values of the policies according to reward functions $\color{NavyBlue} \mathcal{R}_1$ and $\color{Mahogany} \mathcal{R}_2$. We attempt to simplify $\color{NavyBlue} \mathcal{R}_1$ by rotating the reward function such that $\color{Mahogany} J_2(B) = J_2(C)$; in the figure, we instead (equivalently) rotate the triangle along the AB axis, leading to the red triangle. In (a), $\color{Mahogany} \mathcal{R}_2$ simplifies $\color{NavyBlue} \mathcal{R}_1$, setting all policies along the BC segment equal in value (but still lower than A). In (b), $\color{Mahogany} \mathcal{R}_2$ swaps the relative value of policies X and Y ($\color{NavyBlue}{J_1(X)} < \color{NavyBlue}{J_1(Y)} = \color{Mahogany}{J_2(Y)} < \color{Mahogany}{J_2(X)}$) and so does not simplify $\color{NavyBlue} \mathcal{R}_1$.}
\label{fig:both_triangle_examples}
\end{figure}
\section{Discussion}
We reflect on our results and identify limitations in Section~\ref{sec:limitations}.
In Section~\ref{sec:implications}, we discuss how our work can inform discussions about the appropriateness, potential risks, and limitations of using of reward functions as specifications of desired behavior.
\subsection{Limitations}
\label{sec:limitations}
Our work has a number of limitations.
We have only considered finite MDPs and Markov reward functions, leaving more general environments for future work.
While we characterized hackability and simplification for finite policy sets, the conditions for simplification are somewhat opaque, and our characterization of infinite policy sets remains incomplete.
As previously discussed, our definition of hackability is strict, arguably too strict.
Nonetheless, we believe that understanding the consequences of this strict definition is an important starting point for further theoretical work in this area.
The main issue with the strictness of our definition has to do with the symmetric nature of hackability. %
The existence of complex behaviors that yield low proxy reward and high true reward is much less concerning than the reverse, as these behaviors are unlikely to be discovered while optimizing the proxy.
For example, it is very unlikely that our agent would solve climate change in the course of learning how to wash dishes.
Note that the existence of \textit{simple} behaviors with low proxy reward and high true reward \textit{is} concerning;
these could arise early in training, leading us to trust the proxy, only to later see the true reward decrease as the proxy is further optimized.
To account for this issue, future work should explore more realistic assumptions about the probability of encountering a given sequence of policies when optimizing the proxy, and measure hackability in proportion to this probability.
We could allow for approximate unhackability by counting a pair of policies ranked differently by the true and proxy reward functions as evidence of hacking only if their values according to the true reward function differ by more than some $\varepsilon$.
Probabilistic unhackability could be defined by looking at the number of misordered policies; this would seem to require making assumptions about the probability of encountering a given policy when optimizing the proxy.
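For instance, the $\varepsilon$-tolerant variant sketched above could be checked as follows (our own sketch of one possible formalization, not a definition from this paper):
\begin{verbatim}
def eps_hackable(J_true, J_proxy, eps):
    # A pair counts as evidence of hacking only if the true values
    # differ by more than eps while the proxy reverses the ordering.
    return any(t1 + eps < t2 and p1 > p2
               for t1, p1 in zip(J_true, J_proxy)
               for t2, p2 in zip(J_true, J_proxy))
\end{verbatim}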
Finally, while unhackability is a guarantee that no hacking will occur, \textit{hackability} is far from a guarantee of hacking.
Extensive empirical work is necessary to better understand the factors that influence the occurrence and severity of reward hacking in practice.
\subsection{Implications}\label{sec:implications}
How should we specify our preferences for AI systems' behavior?
And how detailed a specification is required to achieve a good outcome?
In reinforcement learning, the goal of maximizing (some) reward function is often taken for granted, but a number of authors have expressed reservations about this approach \citep{gabriel2020artificial, Dobbe2021hard, Hadfield2016Cooperative, hadfield2017inverse, bostrom2014superintelligence}.
Our work has several implications for this discussion, although we caution against drawing any strong conclusions due to the limitations mentioned in Section~\ref{sec:limitations}.
One source of confusion and disagreement is the role of the reward function; it is variously considered as a means of specifying a task \citep{leike2018scalable} or encoding broad human values \citep{Dewey2011learning}; such distinctions are discussed by \citet{christiano_narrow_alignment} and \citet{gabriel2020artificial}.
We might hope to use Markov reward functions to specify narrow tasks without risking behavior that goes against our broad values.
However, if we consider the ``narrow task'' reward function as a proxy for the true ``broad values'' reward function, our results indicate that this is not possible: these two reward functions will invariably be hackable.
Such reasoning suggests that reward functions must instead encode broad human values, or risk being hacked.
This seems challenging, perhaps intractably so, indicating that alternatives to reward optimization may be more promising.
Potential alternatives include imitation learning \citep{ross2011reduction},
constrained RL \citep{Csaba2020con}, quantilizers \citep{taylor2016quantilizers}, and incentive management \citep{everitt2019understanding}.
Scholars have also criticized the assumption that human values can be encoded as rewards \citep{Dobbe2021hard}, and challenged the use of metrics more broadly \citep{oneil2016weapons,thomas2022reliance}, citing Goodhart's Law \citep{Manheim2018Categorizing, goodhart1984problems}.
A concern more specific to the optimization of reward functions is power-seeking \citep{turner2021optimalneurips, bostrom2012superintelligent, omohundro2008basic}.
\citet{turner2021optimalneurips} prove that optimal policies tend to seek power in most MDPs and for most reward functions.
Such behavior could lead to human disempowerment; for instance, an AI system might disable its off-switch \citep{Hadfield2016the}.
\citet{bostrom2014superintelligence} and others have argued that power-seeking makes even slight misspecification of rewards potentially catastrophic, although this has yet to be rigorously established.
Despite such concerns, approaches to specification based on learning reward functions %
remain popular \citep{fu2017learning, Stiennon2020Learning, nakano2021webgpt}.
So far, reward hacking has usually been avoidable in practice, although some care must be taken \citep{Stiennon2020Learning}.
Proponents of such approaches have emphasized the importance of learning a reward model in order to exceed human performance and generalize to new settings \citep{brown2020better, leike2018scalable}.
But our work indicates that such learned rewards are almost certainly hackable, and so cannot be safely optimized.
Thus we recommend viewing such approaches as a means of learning a policy in a safe and controlled setting, which should then be validated before being deployed.
\section{Conclusion}
Our work begins the formal study of reward hacking in reinforcement learning.
We formally define hackability and simplification of reward functions, and show conditions for the (non-)existence of non-trivial examples of each.
We find that unhackability is quite a strict condition, as the set of all policies never contains non-trivial unhackable pairs of reward functions.
Thus in practice, reward hacking must be prevented by limiting the set of possible policies, or controlling (e.g.\ limiting) optimization.
Alternatively, we could %
pursue approaches not based on optimizing reward functions.
\newpage
\bibliographystyle{apalike}
Clusters of galaxies are among the most important objects for cosmological
studies. Models of large scale structure formation such as CDM, predict
that the abundance of clusters is determined by the spectrum of primordial
perturbations and cosmological parameters $\Omega$ and $\Lambda$.
Observations of clusters at different redshifts can be used to constrain
these parameters (e.g., White \& Rees 1978, Kaiser 1986, White, Efstathiou,
\& Frenk 1993, Henry \& Arnaud 1991, Viana \& Liddle 1996, Henry 1997).
Following a different approach, observations of the Sunyaev-Zel'dovich
effect (Sunyaev \& Zel'dovich 1972) in a large sample of distant clusters
can be used for a direct measurement of the distance to these clusters, and
thus provide the values of $H_0$ (e.g., Birkinshaw, Hughes, \& Arnaud 1991)
and~$q_0$.
Up until the present, the largest samples of distant clusters resulted from
optical surveys that searched for enhancements in the surface density of
galaxies (e.g., Postman et al.\ 1996). This method suffers seriously from
projection effects (e.g., van Haarlem et al.\ 1997). Distant clusters found
by such techniques as galaxy concentrations around distant radio sources
(Dickinson 1996) or ``dark'' lenses (Hattori et al.\ 1997) cannot be
considered as statistical samples. Of all methods for detecting distant
clusters, X-ray surveys are the least sensitive to projection, because the
X-ray emission is proportional to the square of the density of the hot gas,
which must be compressed in a deep potential well for us to detect it. It is
noteworthy that unlike optical, X-ray surveys have the possibility of
finding interesting objects such as ``fossil'' clusters in which almost all
galaxies have merged to form a cD galaxy (Ponman et al.\ 1994), and
hypothetical ``failed'' clusters in which galaxy formation was suppressed
(Tucker et al.\ 1995). To date, the largest published sample of distant
X-ray selected clusters is that from the \emph{Einstein}\/ Extended Medium
Sensitivity Survey (EMSS; Gioia et al.\ 1990, Stocke et al.\ 1991). However,
because of the relatively high flux limit, the EMSS sample contains only 6
clusters at $z>0.5$.
Finding clusters in X-rays is complicated by their rarity among other types
of sources. A comparison of the $\log N - \log S$ relations for all sources
(Hasinger et al.\ 1993a) and clusters (this work) shows that at a flux of
$10^{-14}\,$ergs$\;$s$^{-1}\,$cm$^{-2}$\ in the 0.5--2~keV band, clusters comprise not more than
10--20\% of the total source population. The large amount of optical
identification work needed for cluster selection can be greatly reduced if
they are searched for among spatially extended X-ray sources. Even at $z=1$,
a rich cluster with a core-radius of 250~kpc has an angular radius of
$>20\arcsec$, which still can be resolved with the \ROSAT\/ PSPC on-axis.
Detection of extended sources requires new analysis techniques. Even if the
spatial extent is not used for cluster selection, special detection
techniques are needed because clusters at $z\approx 0.2-0.3$ are 3--4 times
broader than the \ROSAT\/ PSPC point spread function.
The idea of selecting distant cluster samples from various \ROSAT\/ surveys
was pursued by different groups in the past few years. Rosati et al.\
(1995, 1998) searched for clusters in long exposure ($>15$~ksec) \ROSAT\/
PSPC pointed observations with a total area of 50~deg$^{2}$, using optical
identifications of all extended X-ray sources found by wavelet transform
analysis. Their sample consists at present of 70 clusters. The Wide Angle
\ROSAT\/ Pointed Survey (WARPS, Scharf et al.\ 1997, Jones et al.\ 1998)
uses the Voronoi Tessellation and Percolation technique to detect both
point-like and extended sources, followed by optical identifications of all
sources. The WARPS cluster sample consists at present of 46 clusters found
in \ROSAT\/ pointings with exposures $>8$~ksec, covering 16.2~deg$^{2}$. A
small sample of 15 clusters at $0.3<z<0.7$ was identified by the SHARC
survey (Collins et al.\ 1997). The RIXOS cluster sample (Castander et al.\
1995) consists of 13 clusters, detected using a technique which was
optimized for point sources. Their results on cluster evolution appear to
contradict other \ROSAT\/ surveys (Collins et al.\ 1997), probably
because the point source detection algorithm had a low efficiency for
detecting extended cluster emission. Finally, important information about
the surface density of clusters at very low fluxes is provided by several
very deep \ROSAT\/ pointings in which complete optical identifications are
performed (e.g.\ McHardy et el.\ 1997). Note that because of the small
area, none of the aforementioned surveys is able to study the luminosity
function of distant clusters above $3\times10^{44}$~ergs~s$^{-1}$, where the
deficit of high redshift EMSS clusters was reported (Henry et al.\ 1992).
In this paper, we present a sample of distant clusters selected from 647
\ROSAT\/ PSPC observations of high Galactic latitude targets, covering a
solid angle of 158 square degrees, a factor of three larger than the largest
of the other \ROSAT\/ surveys. The source catalog includes 200 optically
confirmed clusters, and thus is one of the largest X-ray selected samples,
comparable in size only to the \ROSAT\/ All-Sky Survey sample of nearby
clusters (Ebeling et al.\ 1997). We detect cluster candidates as extended
X-ray sources using the wavelet decomposition technique described in this
paper and Maximum Likelihood fitting of the surface brightness distributions
to determine the significance of the source extent. We then identify only
significantly extended sources with optical follow-up observations. Optical
observations confirm that 90\% of our sources are indeed clusters of
galaxies. Various selection effects such as the fraction of clusters which
remain unresolved or undetected, are studied using extensive Monte-Carlo
simulations. Comparison of the $\log N - \log S$ relation for clusters
derived from our and other \ROSAT\/ surveys shows that our cluster counts at
the bright end are in excellent agreement with those from the \ROSAT\/
All-Sky Survey sample of Ebeling et al.\ (1997). At a flux of
$2\times10^{-13}\,$ergs$\;$s$^{-1}\,$cm$^{-2}$, our $\log N - \log S$ relation agrees well with
the WARPS survey (Jones et al.\ 1998), but is somewhat higher than that
found by Rosati et al.\ (1998).
Cluster size and flux estimates throughout the paper use
$H_0=50$~km~s$^{-1}$~Mpc$^{-1}$ and $q_0=0.5$. All X-ray fluxes and
luminosities are reported in the 0.5--2~keV energy band.
\section{X-ray data}\label{sec:bg}
We analyzed only \ROSAT\/ PSPC pointings at high Galactic latitudes,
$|b|>30\ensuremath{^{\circ}}$, and low absorption, $N_H<6\times10^{20}\,$cm$^{-2}$, excluding
the 10\ensuremath{^{\circ}}\ radius regions around the LMC and SMC. Low Galactic latitude
fields were not used because the absorption is large and nonuniform in these
regions, and because a high density of stars complicates optical
identifications. We also excluded observations of extended targets, such as
known clusters of galaxies, nearby galaxies, SNRs, and star clusters. As
the only exception, we included the 2146+0413 pointing (\ROSAT\/ sequences
800150 and 800150a01) which was an X-ray follow-up of clusters selected
optically in a blank field.
All individual \ROSAT\/ sequences with listed exposures longer than 2~ksec,
meeting the above criteria and publicly available by April 1996, were
extracted from the data archive at GSFC. Using S.~Snowden's software, we
cleaned the data excluding high background intervals. We also generated
exposure maps using R4--R7 detector maps (energy range 0.5--2~keV) weighted
according to the average PSPC background spectrum. Multiple observations of
the same target were merged. Observations with cleaned exposures
$<1.5$~ksec were discarded. The final dataset consists of 647 fields,
schematically shown in Galactic coordinates in Fig~\ref{fig:fieldsgal}. We
used only hard band images, 0.6--2~keV, which increases the sensitivity of
cluster detection given that the spectrum of the \ROSAT\/ background is much
softer than that of a typical cluster. This energy band is slightly
different from that used for the exposure map generation, but this
discrepancy results only in a very small, $<2\%$, error in the vignetting
correction in the inner region of the field of view where clusters are
detected. To oversample the PSF adequately, an image pixel size of 5\arcsec\
was chosen.
\bigskip
\centerline{\includegraphics[width=3.5in]{fields_gal}}
\figcaption{The distribution of \ROSAT\/ pointings in Galactic
coordinates. The higher density in the Northern hemisphere is caused by a
preferential choice of Northern objects as \ROSAT\/ targets.
\label{fig:fieldsgal}}
\medskip
Our next step was to calculate the background map for each observation. The
\ROSAT\/ PSPC background cannot be modeled simply using the exposure map
template because of the non-uniformity of the cosmic X-ray background, the
presence of scattered solar X-rays, and the wings of the PSF around bright
sources. The angular correlation function of the XRB (Vikhlinin \& Forman
1995, Soltan et al.\ 1996) predicts $\approx 10\%$ brightness fluctuations
on a $10\arcmin$ scale. If not modeled, such background variations can cause
errors in measured cluster fluxes. Since the best approximation to the
background in each field is a smoothed source-subtracted image, we created
background maps as follows. We first divided an image by its exposure map
to remove the imprint of the PSPC window support structure. Using the
wavelet decomposition technique (\S\ref{sec:wd}), we subtracted all sources
with characteristic sizes $\leq 3\arcmin$ in radius and smoothed the
cleaned image with a $\sigma=6\arcmin$ Gaussian. The background map was
finally obtained as a product of the smoothed image and the exposure map.
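For illustration, this background-map construction reduces to a few array
operations. A minimal Python/NumPy sketch, assuming the photon image, the
exposure map, and the wavelet-based model of small-scale sources are already
available as arrays (all names are illustrative, not part of the actual
pipeline):
\begin{verbatim}
import numpy as np
from scipy.ndimage import gaussian_filter

def background_map(counts, expmap, source_map, sigma_pix=72):
    # Flat-field the source-subtracted image; 6' = 72 pixels for 5" pixels.
    flat = np.where(expmap > 0, (counts - source_map) / expmap, 0.0)
    # Smooth with a sigma = 6' Gaussian ...
    smooth = gaussian_filter(flat, sigma_pix)
    # ... and re-apply the exposure structure.
    return smooth * expmap
\end{verbatim}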
\section{Detection of extended sources}
\subsection{General Considerations}
Finding clusters in \ROSAT\/ PSPC images requires detection of sources of
widely different angular size ranging from approximately the FWHM of the
PSF, $\sim 25\arcsec$, to several arcminutes. Any algorithm for finding
spatially extended sources solves two tasks: A) source detection, i.e.\
identifying regions where the surface brightness significantly exceeds that
of the background, and B) determining extent, i.e.\ deciding whether the
detected source is significantly broader than the point spread function. The
two-stage nature of extended source detection is not usually emphasized, but
can be seen in practice. Rosati et al.\ (1995) convolved images with wavelet
kernels of varying scale to find sources and then derived the source extent
from wavelet amplitudes. Scharf et al.\ (1997) used Voronoi Tessellation and
Percolation (VTP) to find regions with enhanced surface brightness and then
derived the source extent from the measured area and flux. Each of these
methods has advantages for certain tasks. For example, VTP can find extended
sources regardless of their shape. However, none of these methods is optimal
for both parts of the problem. Obviously, the best sensitivity can be
achieved if, at each stage, one uses a separate algorithm optimized for its
task. We show below that our method of detecting sources using wavelets and
determining source extent by Maximum Likelihood fitting is theoretically
close to optimum for finding regularly-shaped clusters.
The optimal method of source detection is matched filtering (e.g.\ Pratt
1978). For faint sources, the filter is close in shape to the sources
themselves, and any filter with a shape close to the matched one performs
almost equally well (Press et al.\ 1992). Our wavelet detection method uses
filters which approximate Gaussians with $\sigma=1,2,4,\ldots$ pixels. Since
these filters span a range of sizes, nearly optimal detection is achieved
for circular sources of any size. With an axially-symmetric filter, it is
possible to miss very irregular sources. However, most clusters are
relatively regular (Jones \& Forman 1998) for detection purposes. Also, this
shortcoming is clearly outweighed by the merits of the wavelet method, such
as optimal detection of sources with regular shape, complete background
subtraction and elimination of the influence of point sources. We discuss
these issues below in detail.
Consider now the optimal method to discriminate between extended and point
sources. Cluster radial surface brightness profiles can be described by the
so called $\beta$-model, $I(r,r_c)=I_0\,(1+r^2/r_c^2)^{-3\beta+0.5}$ (e.g.\
Cavaliere \& Fusco-Femiano 1976). Therefore, to discriminate between a
cluster and a point source, we should determine whether $I(r,r_c)$ with core
radius $r_c>0$ describes the data better than a $\delta$-function, that is,
$I(r,r_c)$ with $r_c=0$. According to the \emph{Neyman-Pearson Lemma}
(e.g.\ Martin 1971), the most sensitive test for this problem is the change
in the value of the likelihood function between the best-fit value of $r_c$
and $r_c=0$. Maximum Likelihood fitting may not be the best method for
finding clusters with arbitrary shape, but theoretically it is the best one
for the vast majority of clusters having regular shape.
Based on the considerations above, we implemented an algorithm for detection
of extended sources which uses our own variant of wavelet transform
analysis, wavelet decomposition, to find all sources even in the presence of
source confusion and Maximum Likelihood fitting of $\beta$-models to
determine whether each source is extended. Each step is discussed below in
detail.
\subsection{Wavelet Detection of Cluster Candidates}\label{sec:wd}
Cluster detection in the \ROSAT\/ PSPC images is complicated by the varying
background and confusion with point sources located in the vicinity of
clusters. The wavelet transform is well-suited to overcome these
difficulties. We briefly outline the relevant properties of the wavelet
transform and then describe our particular implementation.
\subsubsection{General Properties of the Wavelet Transform}
The basic idea of the wavelet transform applied to astronomical images
(e.g.\ Grebenev et al.\ 1995 and references therein) is a convolution with a
kernel which consists of a positive core and an outer negative ring, so that
the integral of the kernel over the $x,y$ plane is zero. The convolution
with such kernels allows complete background subtraction and isolation of
structures of particular angular size. This can be shown using a kernel
which is the difference of two Gaussians:
\begin{equation}\label{eq:gausswv}\label{eq:wdfamily}
W(r)\;=\; \frac{\exp(-r^2/2a^2)}{2\pi a^2} \; - \;
\frac{\exp(-r^2/2b^2)}{2\pi b^2},
\end{equation}
where $b=2a$. The convolution of this kernel with any linear function
$s(x,y)=c_1x+c_2y+c_3$ is zero. Therefore, any slowly varying background which can
be locally approximated by a linear function is subtracted by a convolution
with this kernel. To demonstrate the ability of wavelets to reveal
structures with a given size, consider the convolution of the wavelet kernel
with a Gaussian $\exp(-r^2/2\sigma^2)$. The convolution amplitude achieves
its maximum when $\sigma=a\sqrt{2}$ but rapidly falls to $1/2$ of the
maximum for $\sigma=a/2$ and $\sigma=4a$. These properties of the wavelet
transform are used for source detection (e.g., Damiani et al.\ 1997). In
most applications, an image is convolved with a family of kernels of the
same functional form while varying its scale ($a$ in eq.~\ref{eq:gausswv}).
Sources are detected as significant local maxima in the convolved images.
Information about the source angular extent can be derived from the wavelet
transform values at different scales. This simple approach works well for
detection of isolated sources, but fails if another bright source is located
nearby, as is shown in Fig.~\ref{fig:wvdecomp}a,b. A point source with a
flux four times that of the cluster is located at $2/3$ core-radii from the
cluster center (\emph{a}). The image is convolved with the wavelet
kernels (eq.\ref{eq:gausswv}) of scale $a=1,2,4,\ldots,32$ pixels
(\emph{b}). At each scale, the point source dominates the
convolution, and the cluster remains undetected. A different kind of
complication for a simple wavelet analysis is caused by compact groups of
point sources. Convolved with a wide kernel, such groups appear as a single
extended source, resulting in false cluster detections. Neither of these
problems can be overcome by using a different symmetric wavelet kernel with
compact support (Strang \& Nguyen 1995). However, they can be overcome using
the idea employed in the CLEAN algorithm commonly applied in radio astronomy
(H\"ogbom 1974): point sources are detected first and subtracted from the
image before the detection of extended sources. Below we describe our
algorithm, which we call wavelet decomposition, which combines this approach
with wavelet transform analysis.
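The two properties discussed above---removal of a locally linear background
and peak response to Gaussian sources of matching width---are easy to verify
numerically. A short sketch using the kernel of eq.~\ref{eq:gausswv}
(illustrative code, not part of our analysis software):
\begin{verbatim}
import numpy as np

def dog_kernel(a, b, size=129):
    # Difference-of-Gaussians kernel of eq. (1) on a square grid.
    y, x = np.indices((size, size)) - size // 2
    r2 = x**2 + y**2
    return (np.exp(-r2 / (2 * a**2)) / (2 * np.pi * a**2)
            - np.exp(-r2 / (2 * b**2)) / (2 * np.pi * b**2))

a = 4.0
w = dog_kernel(a, 2 * a)
y, x = np.indices(w.shape) - w.shape[0] // 2

# A linear background is annihilated (zero up to truncation error):
print(np.sum(w * (0.3 * x + 0.7 * y + 5.0)))

# The response to a Gaussian source peaks near sigma = a * sqrt(2):
for sigma in (a / 2, a * np.sqrt(2), 4 * a):
    print(sigma, np.sum(w * np.exp(-(x**2 + y**2) / (2 * sigma**2))))
\end{verbatim}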
\subsubsection{Wavelet Decomposition}
The family of wavelet kernels we use is given by eq.~\ref{eq:wdfamily}, in
which we use several combinations of $a$ and $b$ which we call scales. At
scale 1, the positive component in eq.\ref{eq:wdfamily} is a
$\delta$-function ($a=0$) and $b=1$~pixel. At scale 2, $a=1$ and $b=2$
pixels; at scale 3, $a=2$ and $b=4$ pixels; and so on. At the largest scale
$n$, the kernel is a single, positive Gaussian with $a=2^{n-1}$ pixels. How
close is this family of kernels to the optimal filter for detecting sources
with the $\beta$-model surface brightness profiles? Numerical calculations
show that in at least one of the scales, the signal-to-noise ratio exceeds
80\% of the maximum value corresponding to the optimal filter --- the
$\beta$-model itself --- for $0.55<\beta<0.8$.
The described family of wavelet kernels has the advantage of an easy and
linear back-transformation. The original image $z(x,y)$ is given by
\begin{equation}
z(x,y)= \sum_{j=1}^n w_j(x,y),
\end{equation}
where $w_j(x,y)$ is the convolution with the kernel of scale $j$. An
important interpretation of this wavelet transform follows from this
equation: it provides a decomposition of an image into a sum of components
of different characteristic sizes. With this interpretation, we construct
the following iterative scheme to remove the effect of point sources.
We convolved the image with a kernel of the smallest scale, estimated the
detection threshold as described below, and cleaned the image of noise. The
convolved image values were preserved in those regions where the brightness
exceeded $1/2$ of the detection threshold and which contained at least one
maximum above the detection threshold. The remaining image was set to zero.
We subtracted this cleaned image from the input image to remove the sources
that have been detected at this step, and repeated the convolution and
cleaning procedure iteratively until no more sources were detected at this
scale. We also added cleaned images obtained at each iteration to produce a
composite image of significant sources detected at this scale. We then moved
to the next scale, at which the input image was set to the original image
minus everything detected at the first scale. The iterations were stopped at
scale 6, for which $a=80\arcsec$ and $b=160\arcsec$, and detected sources
have typical full widths of 3\arcmin--4\arcmin.
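In schematic form, the iterations at each scale can be summarized by the
following Python sketch (threshold calibration, exposure handling, and the
bookkeeping of source lists are omitted; all names are illustrative):
\begin{verbatim}
import numpy as np
from scipy.ndimage import label, maximum_filter

def clean_scale(img, convolve, threshold):
    # One detect-and-subtract iteration at a single scale: keep convolved
    # values in regions above threshold/2 that contain at least one
    # maximum above the threshold; zero everything else.
    w = convolve(img)
    support = w > threshold / 2.0
    labels, _ = label(support)
    peaks = (w == maximum_filter(w, size=3)) & (w > threshold)
    keep = np.isin(labels, np.unique(labels[peaks]))
    return np.where(keep & support, w, 0.0)

def wavelet_decomposition(img, convolvers, thresholds, max_iter=10):
    residual = img.astype(float)
    detected = []
    for convolve, thr in zip(convolvers, thresholds):
        total = np.zeros_like(residual)
        for _ in range(max_iter):  # iterate until nothing new is found
            cleaned = clean_scale(residual, convolve, thr)
            if not cleaned.any():
                break
            residual -= cleaned    # subtract before the next iteration
            total += cleaned
        detected.append(total)     # significant structures at this scale
    return detected, residual
\end{verbatim}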
The bottom panels of Fig.~\ref{fig:wvdecomp} illustrate this procedure. The
smallest wavelet kernel is insensitive to the broad cluster emission and
detects only the point source. When iterations at scale 1 are completed,
$\sim 90\%$ of the point source flux has been subtracted. Subtraction of the
point source continues at scales 2 and 3, while the cluster remains
undetected because it is broader than the analyzing kernel. The point source
is almost completely subtracted at small scales and does not interfere with
cluster detection at scales 4--6. The result of these iterations is a set of
images containing statistically significant structures detected at each
scale, whose characteristic size corresponds to the width of the analyzing
kernel at this scale. Therefore, to separate point and extended sources,
one can combine small and large scales, respectively. As
Fig.~\ref{fig:wvdecomp}d shows, the sum of scales 1--3 and 4--6 provides
almost perfect separation of the original image into the point source and
the cluster.
It is important to choose the correct detection thresholds. Although several
analytic methods of deriving detection thresholds for the wavelet transform
have been suggested (Starck \& Pierre 1998 and references therein), we
determined them through Monte-Carlo simulations. We simulated $512\times512$
images consisting of a flat Poisson background and convolved them with the
wavelet kernels. The distribution of the local maxima in the convolved
images was used to define the detection threshold. We set this threshold at
the value above which one expects to find on average $1/3$ of a local
maximum per simulated background image per scale in the absence of real
sources, so that in the combined scales 4--6 we expect one false detection
per image. Thus defined, the detection thresholds correspond to a formal
significance of $\approx 4.5\sigma$. Detection thresholds were tabulated for a grid of simulated
background intensities. In the analysis of real images, we estimated the
local background and found the detection threshold by interpolation over the
precalculated grid. Detection thresholds were deliberately set low,
allowing approximately 600 false detections in the entire survey, since our
goal at this step was to detect all possible candidates for the subsequent
Maximum Likelihood fitting, where the final decision about source
significance and extent is made.
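Schematically, such a calibration can be performed as follows (a simplified
sketch of one possible implementation; in practice the thresholds were
tabulated over a grid of background levels and interpolated):
\begin{verbatim}
import numpy as np
from scipy.ndimage import maximum_filter
from scipy.signal import fftconvolve

def calibrate_threshold(kernel, background, n_sim=100, false_per_img=1/3.):
    # Threshold exceeded on average by `false_per_img` local maxima per
    # simulated 512x512 flat Poisson background image, per scale.
    rng = np.random.default_rng(0)
    maxima = []
    for _ in range(n_sim):
        img = rng.poisson(background, size=(512, 512)).astype(float)
        w = fftconvolve(img, kernel, mode='same')
        maxima.extend(w[w == maximum_filter(w, size=3)])
    maxima = np.sort(maxima)[::-1]
    k = max(1, int(round(n_sim * false_per_img)))
    return maxima[k - 1]
\end{verbatim}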
\begin{figure*}[htb]
\vspace*{-3ex}
\mbox{}\hfill \includegraphics[width=3.25in]{wvdecomp_a.ps} \hfill
\includegraphics[width=3.25in]{wvdecomp_b.ps} \hfill \mbox{}\par
\vskip -2.5ex
\mbox{}\hfill \includegraphics[width=3.25in]{wvdecomp_c.ps} \hfill
\includegraphics[width=3.25in]{wvdecomp_d.ps} \hfill\mbox{}
\vskip -2.5ex
\caption{\footnotesize Advantage of the wavelet decomposition algorithm.
A bright point source is located in the vicinity of a cluster \emph{(a)}.
Dashed lines show the strip in which brightness profiles (panels
\emph{b--d}) were extracted. Panel \emph{(b)}\/ shows the result of
convolution of this image with wavelet kernels (eq.\ref{eq:gausswv}) with
the scale $a=1,2,4,\ldots,32$ pixels. The data profile is shown by the solid
histogram, and the profiles of convolved images by solid lines. At all
scales, the convolution is dominated by the point source and there is no
separate peak corresponding to the cluster. Therefore, the cluster remains
undetected by this simple analysis. Our method \emph{(c)}\/ provides a
decomposition of the original image into components with the characteristic
size 1, 2, 4, $\ldots$, 32 pixels. Small-scale components model the point
source. The cluster becomes apparent and well-separated from the point
source at large scales. The sum of the three smallest and three largest
scales of the wavelet decomposition provide almost perfect decomposition of
the raw image into its original components \emph{(d)}.}
\label{fig:wvdecomp}
\vskip -1.5ex
\end{figure*}
As a result of the wavelet decomposition, we obtain six images which contain
detected sources of characteristic size (FWHM) approximately $7\arcsec,
15\arcsec, 30\arcsec, 60\arcsec, 120\arcsec, 240\arcsec$ (scales 1 through
6). We use these images to select candidate extended sources for subsequent
modeling. Since the FWHM of the PSF is 25\arcsec\ on-axis, most point
sources are detected on scales 1--3 and are absent at scales 4--6. On the
other hand, a distant cluster with core radius of 250~kpc at $z=0.5$ has an
angular radius of 35\arcsec\ (equivalent to $\sim 70\arcsec$ FWHM) and hence
is detected at scales 4--6, to which point sources do not contribute. Even
clusters with smaller core radii, $\sim10\arcsec$, would be detected at
scale 4, because their surface brightness profiles become broader than $\sim
30\arcsec$ FWHM when blurred by the PSF. Therefore, cluster candidates can
be selected as sources detected at scale 4 or higher. Some point sources,
especially those at large off-axis angles where the angular resolution
degrades, are detected at scale 4. This shows that our cluster candidate
selection based on the wavelet decomposition is lenient, and we are unlikely
to miss any real clusters at this step. The next step is the Maximum
Likelihood fitting of selected candidate extended sources to determine the
significance of their extent and existence, which will be used for the final
cluster selection.
\subsection{Maximum Likelihood Fitting of Sources}\label{sec:ML}
\subsubsection{Isolated Clusters}
The procedure is straightforward for isolated extended sources. The photon
image is fit by a model which consists of the $\beta$-model convolved with
the PSF. Source position, core-radius, and total flux are free parameters,
while $\beta$ is fixed at a value of $2/3$. The model also includes the
fixed background taken from the map calculated as described in
\S\ref{sec:bg}. The PSF is calculated at the appropriate off-axis angle for a
typical source spectrum in the 0.6--2 keV energy band (Hasinger et al.\
1993b). The best fit parameters are found by minimizing $-2\ln L$ (Cash
1979):
\begin{equation}
-2\ln L \; = \; -2 \sum \left(d_{ij}\ln m_{ij} - m_{ij}\right),
\end{equation}
where $d_{ij}$ and $m_{ij}$ are the number of photons in the data and the
model in pixel $(i,j)$, respectively, and the sum is over all pixels in the
fitted region. Note that $m_{ij}$ includes background, so $-2\ln L$ is
defined even if the source flux is set to zero. Along with best-fit
parameters we determine the formal significances of source existence and
extent. The significance of source existence is found from the change in
$-2\ln L$ resulting from fixing the source flux at zero (Cash 1979).
Similarly, the significance of the source extent is found by fixing the
core-radius at zero and re-fitting the source position and flux.
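A minimal single-source sketch of this procedure is given below (the real
analysis fits several sources simultaneously and uses the position-dependent
PSF; names and defaults are illustrative):
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize
from scipy.signal import fftconvolve

def cash(data, model):
    # Cash (1979) statistic, -2 ln L, up to a model-independent constant.
    return -2.0 * np.sum(data * np.log(model) - model)

def beta_model(shape, x0, y0, rc, flux, beta=2/3.):
    y, x = np.indices(shape)
    prof = (1 + ((x-x0)**2 + (y-y0)**2) / rc**2) ** (-3*beta + 0.5)
    return flux * prof / prof.sum()

def fit_source(data, background, psf, rc_fixed=None):
    # Fit position, flux, and (optionally) core radius; return min(-2 ln L).
    def model(p):
        rc = rc_fixed if rc_fixed is not None else p[3]
        src = beta_model(data.shape, p[0], p[1], max(rc, 1e-3), p[2])
        return (background + fftconvolve(src, psf, 'same')).clip(1e-10)
    p0 = [data.shape[1]/2, data.shape[0]/2, data.sum() - background.sum()]
    if rc_fixed is None:
        p0.append(5.0)
    res = minimize(lambda p: cash(data, model(p)), p0, method='Nelder-Mead')
    return res.fun, res.x

# Significance of extent from the change in -2 ln L (Cash 1979):
# c_free, _  = fit_source(data, bkg, psf)
# c_point, _ = fit_source(data, bkg, psf, rc_fixed=1e-3)
# delta_c = c_point - c_free
\end{verbatim}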
\subsubsection{Modeling of Non-Isolated Clusters}
Point sources in the vicinity of the extended source must be included in the
fit. We use local maxima in the combined wavelet scales 1--3 to create the
list of point sources. For the fitting, point sources are modeled as the PSF
calculated for a typical source spectrum as a function of off-axis angle.
Point source fluxes are free parameters, but their positions are fixed,
because they are accurately determined by the wavelet decomposition. The
fitting procedure is analogous to that for isolated extended sources.
As was discussed above, some point sources are detected at scale 4, and
therefore we initially fit them as extended sources, i.e.\ by the
$\beta$-model with free core-radius and position. The best fit core radii
for such sources are small and consistent with zero, so they are not
included in the final catalog. However, these sources may interfere with the
determination of significance of source extent. Suppose that a faint point
source is located next to a bright cluster, and that the point source is
fitted by the $\beta$-model with free position. The best fit core-radius of
the point source component will be close to zero. To estimate the
significance of the cluster extent, we set the core-radius of the cluster
component to zero and refit all other parameters, including source
positions. In this case the best fit model will consist of the former
cluster component at the position of the point source and the former point
source component at the position of the cluster having non-zero core radius.
The net change of $-2\ln L$ will be zero and we will conclude that the
cluster component is not significantly extended. To overcome this
interference, we update source lists after the first fitting. Those
extended sources which have best fit core radii $<5\arcsec$ are removed from
the list of extended sources and added to the list of point sources.
Parameters of the remaining extended sources are then refitted.
\subsubsection{Final Source Selection}
Next, we make the final selection of extended sources.
1.~The main requirement is that the source must be real and significantly
extended. For this, we require that the formal significance of the source
existence must exceed $5\sigma$ and the significance of its extent must be
greater than $3.5\sigma$.
2.~We find, however, that because of the non-linearity of the model, the
formal significance of the source extent is often overestimated for faint
sources on top of the very low background. To exclude these cases, we
required that the total source flux must exceed 25 photons.
3.~Some bright sources have a small but significant intrinsic extent. An
example is a bright Galactic star with a very soft spectrum. Its image is
slightly broader than the PSF for hard point sources because the PSF is
broader at low energies and the stars have a larger proportion of soft
photons. To exclude such cases, we required that the source core-radius must
be greater than 1/4 of the FWHM of the PSF. This requirement is met
automatically for faint clusters, because faint sources with small core
radii cannot be significantly extended, i.e.\ cannot satisfy condition (1).
This third criterion sets the lower limit of 6.25\arcsec\ for core-radii of
clusters in our catalog. Even at $z=1$, this angle corresponds to 50 kpc.
4.~Finally, one has to exclude sources associated with the target of
observation, as well as sources detected at large off-axis angles where PSF
degradation makes detection of the source extent uncertain. Our last
requirement was that the source be at least 2\arcmin\ from the target of the
observation and at an off-axis angle smaller than 17.5\arcmin.
Sources satisfying criteria 1--4 comprise the final catalog.
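These cuts can be expressed compactly (a sketch; the attribute names are
illustrative, not our pipeline's actual data structures):
\begin{verbatim}
def passes_final_selection(src, psf_fwhm):
    # All angles in arcsec; criteria numbered as in the text.
    return (src.sig_existence > 5.0 and        # (1) source is real ...
            src.sig_extent > 3.5 and           #     ... and extended
            src.counts > 25 and                # (2) faint-source guard
            src.core_radius > psf_fwhm / 4 and # (3) rejects soft stars
            src.target_dist > 120.0 and        # (4) > 2' from the target
            src.offaxis < 1050.0)              #     and inside 17.5'
\end{verbatim}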
\begin{figure*}[htb]
\mbox{}\hfill \includegraphics[width=3.5in]{example_a.ps} \hfill
\includegraphics[width=3.5in]{example_b.ps} \hfill \mbox{}\par
\medskip
\mbox{}\hfill \includegraphics[width=3.5in]{example_c.ps} \hfill
\includegraphics[width=3.5in]{example_d.ps} \hfill\mbox{}
\caption{\footnotesize
Detection of extended sources in the 1701+6411 field. The wavelet
decomposition uses the photon image (\emph{a}) to detect significant
structures of different angular scale (\emph{b}). The wavelet image is split
into a number of connected domains (\emph{c}). The domains containing
candidate extended sources are numbered. The best fit image is shown in
panel \emph{d}. Extended sources which passed our final selection are
marked. All four sources were later confirmed as clusters by optical
observations.}
\label{fig:example}
\vskip -1.5ex
\end{figure*}
\subsection{A Real-Life Example}
To minimize computations, we fit the data only in those regions where the
sum of scales 1--6 is positive, i.e.\ where an excess over the background is
found by the wavelet decomposition. To improve the computational efficiency
still further, the image is split into connected domains. Sources located
within the same domain are fit simultaneously. The whole procedure of the
extended source detection is illustrated in Fig~\ref{fig:example}. The raw
photon image is shown in panel (\emph{a}). The wavelet decomposition
detects 97 sources in this field. The sum of scales 1--6 is shown in
Fig~\ref{fig:example}b. This image is split into connected domains
(Fig~\ref{fig:example}c). Domains which contain sources detected at scales
4, 5, or 6, are numbered. The best-fit model image in these domains is shown
in Fig~\ref{fig:example}d. Extended sources which passed the final
selection, are marked by arrows. All four of them are optically confirmed
clusters. Note that the number of candidate extended sources found by the
wavelet decomposition is more than 3 times the number of finally selected
clusters. Thus, the selection of candidate sources by the wavelet analysis
is rather lenient and does not miss real extended sources.
Using the detection procedure described in this section, we selected 239
significantly extended X-ray sources in 647 fields. In the following
sections we describe the measurement of their X-ray parameters, optical
observations, and present our final catalog.
\section{Measurement of Cluster X-ray Parameters}
For each detected cluster, we derive its position, radius, total X-ray flux,
and their uncertainties. All these quantities are derived from the best-fit
$\beta$-model, and their statistical errors are determined by Monte-Carlo
simulations. For this, we use the best-fit model image (which includes
clusters, point sources, and the background) as a template, simulate the
data using Poisson scatter, and refit the simulated data. The errors are
determined from the distribution of the best fit values in 100 simulations.
In this section, we discuss the measurement details and sources of
additional systematic errors of the cluster parameters.
\subsection{Positional Accuracy}\label{sec:positions}
Cluster position is measured as the best-fit centroid of the $\beta$-model.
In addition to the statistical uncertainty of the position, there is a
systematic uncertainty due to inaccuracy of the \ROSAT\/ aspect solution.
The aspect solution errors result in a systematic offset of all X-ray
sources in the field with respect to their optical counterparts. To correct
this error, we examined the positional correspondence of X-ray sources and
objects in the Digitized Sky Survey (DSS). If possible, targets of
observations or other prominent sources (galaxies or bright stars) were used
to find the precise coordinate correction. Coordinate shifts measured this
way have an uncertainty of 2\arcsec--5\arcsec, which is negligible compared
to the statistical error of cluster positions. If no optical counterparts of
X-ray sources were found in the DSS, we assigned a systematic position error
of 17\arcsec, the \emph{rms} value of shifts measured using targets of
observation. In some observations without a bright target, we found a
correlation between fainter X-ray and optical sources, and measured shifts
from this correlation. We regarded this shift measurement as less reliable
than that using targets, and assigned an intermediate systematic error of
10\arcsec\ to the cluster position in such fields. The uncertain rotation
of the PSPC coordinate system results in a systematic error of $\sim
5\arcsec$ or less (Briel et al.\ 1996). We did not correct for the rotation,
but simply added $5\arcsec$ in quadrature to the offset uncertainty. The
final position error listed in Table~\ref{tab:catalog} is the sum of
systematic and statistical errors in quadrature.
\subsection{Core-Radius}\label{sec:radius}
Since it is impossible to fit the $\beta$-parameter using our data, we
measure core-radius for fixed $\beta=0.67$ and refer to this value as the
effective cluster radius $r_e$. Effective radius can be also defined as the
radius at which the surface brightness falls by a factor of $2^{3/2}$ and
hence is a physically meaningful combination of core-radius and $\beta$.
The $r_e$ measurement by fitting a $\beta=0.67$ model is accurate to $\pm
20\%$ within the observed range of $\beta$, $0.6<\beta<0.8$ (Jones \& Forman
1998).
We will now show that the radius measurement is relatively insensitive to
the presence of cooling flows which cause a surface brightness excess in the
central region of the cluster (e.g.\, Fabian 1994). Cooling flow clusters in
general cannot be fit by the $\beta$-model. However, in distant clusters,
the central excess is completely removed by the PSF blurring, and cooling
flows simply reduce the core-radius value. To study the possible influence
of the cooling flow on the derived effective radii, we use the \ROSAT\/ PSPC
image of Abell~2199, a nearby cluster with a moderate cooling flow of
$200\,M_\odot\,$yr$^{-1}$ (Edge et al.\ 1992). The $\beta$-model fit for all
radii yields $\beta=0.57$, $r_c=69\,$kpc. If the inner 200~kpc region is
excluded to remove the cooling flow contamination, the best-fit parameters
are $\beta=0.64$, $r_c=137\,$kpc, which corresponds to an effective radius
of $142\,$kpc. We then determine the radius value which we would measure if
A2199 were located at $z=0.4$. At this redshift, the FWHM of the PSF
corresponds to $\sim 200$~kpc. We convolve the image with this ``PSF'' and
fit accounting for the smoothing and without exclusion of the center. The
best fit parameters for the smoothed data are $\beta=0.61$, $r_c=95\,$kpc.
Fixing $\beta=0.67$, as we do for the analysis of distant clusters, we
obtain $r_c=110$~kpc, only 22\% smaller than the true value obtained by
excluding the cooling flow.
\subsection{X-ray Flux}\label{sec:fluxcalc}
The surface brightness of most of the detected extended sources significantly
exceeds the background only in a very limited area near the source center.
Therefore, the total source flux simply integrated in a wide aperture has
unacceptably large statistical uncertainty. To overcome this, the flux is
usually directly measured within some small aperture, and then extrapolated
to infinity using a reasonable model of the surface brightness profile
(Henry et al.\ 1992, Nichol et al.\ 1997, Scharf et al.\ 1997). Following
this approach, we derived total fluxes from the normalization of the
best-fit $\beta$-model. The most serious problem with the flux measurement using
such limited aperture photometry is the necessity to extrapolate the
observed flux to infinity. This extrapolation is a potential source of large
systematic errors because the surface brightness distribution at large radii
is unknown. For example, consider the flux extrapolation from the inner 2.5
core radius region using $\beta$-models with different $\beta$. This inner
region contains 49\% of the total flux if $\beta=0.6$, 64\% if $\beta=0.67$,
and 70\% if $\beta=0.7$. Therefore, assuming $\beta=0.67$ one
underestimates the flux by $\sim 30\%$ if in fact $\beta=0.6$, the median
value in the Jones \& Forman (1998) sample. In addition, a trend of $\beta$
with cluster redshift or luminosity will introduce systematic changes within
the sample. For example, Jones \& Forman find that lower luminosity
clusters have smaller $\beta$, which might result in underestimation of
their fluxes.
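These fractions follow from the analytic integral of the $\beta$-model: for
$\beta>0.5$, the fraction of the total flux projected within radius $R$ is
$1-(1+R^2/r_c^2)^{1.5-3\beta}$. A two-line numerical check (extrapolating to
infinite radius, which reproduces the quoted values to within a few percent):
\begin{verbatim}
def enclosed_fraction(R_over_rc, beta):
    # Integrate I(r) ~ (1 + r^2/rc^2)^(-3*beta + 1/2) over 2*pi*r*dr
    # and normalize by the integral to infinity (requires beta > 0.5).
    return 1.0 - (1.0 + R_over_rc**2) ** (1.5 - 3.0 * beta)

for beta in (0.6, 0.67, 0.7):
    print(beta, enclosed_fraction(2.5, beta))   # -> 0.45, 0.64, 0.70
\end{verbatim}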
To address the issue of systematic flux errors in more detail, we have used
simulated realistic data (\S\ref{sec:simulations}) to estimate the effect of
the assumed value of $\beta$ on the cluster flux determination. Clusters
were fit as described in \S\ref{sec:ML}, but for three different values of
$\beta$, 0.6, $0.67$, and $0.7$. Dashed lines in Fig~\ref{fig:fluxbias} show
average ratios of the measured and input total flux as a function of the
true $\beta$, if the flux is measured as a normalization of the best-fit
model with $\beta$ fixed at 0.6 and 0.7. In all cases significant biases
are present over the observed range of $\beta$ (Jones \& Forman 1998; shaded
region). We are interested in a flux measure which has the smallest
uncertainty for the whole range of $\beta$, not the one which yields an
unbiased flux estimate for some fixed value of $\beta$. The quantity
$(f_{0.6}+f_{0.7})/2$, where $f_{0.6}$ and $f_{0.7}$ are cluster fluxes
calculated assuming $\beta=0.6$ and $0.7$, respectively, is close to the
desired flux measure (solid line in Fig~\ref{fig:fluxbias}). It provides a
satisfactory flux estimate, accurate to $\pm10\%$ over the observed range of
$\beta$. We use this quantity to measure cluster fluxes throughout the rest
of this paper, and add the systematic error of $10\%$ to the statistical
uncertainty in the flux.
\begin{table*}
\vspace*{-1ex}
\tabcaption{\centerline{Comparison of flux measurements}\label{tab:comparison}}
\begin{center}
\renewcommand{\arraystretch}{1.2}
\footnotesize
\let\ph\phantom
\begin{tabular}{ccccccccc}
\hline
\hline
Cluster & $z$ & Our survey &EMSS & Nichol et al.& WARPS &
\multicolumn{3}{c}{Flux ratio$^{a}$} \\
\cline{7-9}
& & 0.5--2 keV &0.3--3.5 keV& 0.3--3.5 keV & 0.5--2 keV &
EMSS & Nichol et al. & WARPS \\
\hline
MS 1201.5+2824 & 0.167 & 102.6 & 169.4 & 174.7 & 95.6 &
1.03 & 1.00 & 1.07 \\
MS 1208.7+3928 & 0.340 & \ph{1}26.6 & \ph{1}41.1 & \ph{1}42.7 & 29.3 &
1.12 & 1.08 & 0.91 \\
MS 1308.8+3244 & 0.245 & \ph{1}46.7 & \ph{1}69.3 & \ph{1}74.9 & 50.7 &
1.16 & 1.07 & 0.92 \\
MS 2255.7+2039 & 0.288 & \ph{1}50.5 & \ph{1}57.6 & \ph{1}73.9 & 51.9 &
1.53 & 1.19 & 0.97 \\
Average & & & & & &
1.21 & 1.09 & 0.97 \\
\hline
\end{tabular}
\end{center}
\footnotesize
$^a$ Ratios of fluxes measured in our survey and EMSS, Nichol et al., and
WARPS. To calculate these ratios, 0.3--3.5~keV fluxes were converted to the
0.5--2~keV energy band using the conversion coefficients from Jones et al.\
(1998).
\vskip -2.5ex
\end{table*}
Our sample includes four EMSS clusters (Henry et al.\ 1992), which were also
detected in the WARPS survey (Jones et al.\ 1998) and whose \ROSAT\/
observations were studied by Nichol et al.\ (1997). We use these clusters to
compare fluxes from all these surveys. Table~\ref{tab:comparison} shows
general agreement, within 10\%, between different \ROSAT\/ surveys,
especially between ours and WARPS. However, Henry et al.\ and, to a smaller
degree, Nichol et al.\ find fluxes which are systematically lower than those
from our survey and WARPS. Note that all
\ROSAT\/ surveys use essentially the same data, so the difference cannot be
explained by statistical fluctuations. Jones et al.\ have earlier performed
a similar comparison using a larger number of clusters. They also noted the
systematic difference of their fluxes compared to EMSS and Nichol et al.,
and explained this by the difference in flux measurement methods. All the
surveys derived fluxes by extrapolation from that measured within some
aperture using a $\beta$-model. However, Henry et al.\ and Nichol et al.\
assumed fixed $\beta=0.67$ and $r_c=250$~kpc, while Jones et al.\ estimated
core-radii individually for each cluster, similar to our procedure. Also,
our fluxes can be $\sim 5\%$ higher than those obtained for $\beta=0.67$,
because our measurements are optimized for the entire observed range of
$\beta$. Cluster-to-cluster variations of $\beta$ probably explain $\sim
10\%$ non-systematic differences in flux for the same cluster in different
surveys. Jones et al.\ also compared their measurements with fluxes directly
integrated in a 4~Mpc aperture. They found that their fluxes exceed the
directly measured values by 10\%, with $\sim 60\%$ of that difference
explained by the cluster luminosity originating from outside 4~Mpc. Since
our measurements are $\sim 3\%$ lower than those of Jones et al., we
conclude that our fluxes are accurate to within a few percent, which is
better than the assigned systematic uncertainty.
\vskip -0.5ex
\centerline{\includegraphics[width=3.25in]{fluxbias.ps}}
\vskip -2.5ex
\figcaption{Ratio of the measured and input cluster flux as a
function of the cluster $\beta$. Fluxes of simulated clusters were measured
by fitting $\beta$-models with $\beta$ fixed at 0.6 (upper dashed line), 0.7
(lower dashed line) and 0.67 (dotted line). Solid line corresponds to the
flux measure $(f_{0.6}+f_{0.7})/2$ used for our sample.
\label{fig:fluxbias}}
\section{Optical observations}
We are carrying out a program of optical photometric and spectroscopic
observations of our clusters. A complete discussion of optical observations
and data reduction will be presented in McNamara et al.\ (in preparation).
Below we discuss the optical results relevant to the X-ray catalog presented
here.
\subsection{Cluster Identification}
In some earlier works, optical identification of X-ray selected clusters
required finding a concentration of galaxies in redshift space, which demands
a large investment of telescope time. For our sources, the detected extended X-ray
emission is already a strong indication of cluster existence. Therefore, we
relaxed the optical identification criteria and required that either a
significant enhancement in the projected density of galaxies be found or
that an elliptical galaxy not included in the NGC catalog lie at the peak of
the X-ray emission. While the galaxy concentration criterion is obvious,
the elliptical galaxy one is needed to identify poor clusters and groups
which fail to produce a significant excess of galaxies over the background.
It also helps to identify ``fossil groups'', in which galaxies have merged
into a cD (Ponman et al.\ 1994). A potential problem with this second
criterion is that an active nucleus of an elliptical galaxy might be falsely
identified as a cluster. However, a significant extent of X-ray emission in
all our sources makes this unlikely. Also, our spectroscopic observations of
such single-galaxy sources never showed emission lines characteristic of
AGNs.
\tabcaption{\centerline{Status of optical identifications}\label{tab:optidsum}}
\begin{center}
\renewcommand{\arraystretch}{1.2}
\footnotesize
\begin{tabular}{lr}
\hline
\hline
\multicolumn{2}{c}{Total sample} \\
Objects & 223 \\
Confirmed clusters & 200\\
False X-ray detections & 18 \\
No CCD imaging data & 5 \\ \\
\multicolumn{2}{c}{NED identifications} \\
Previously known clusters & 37\\
Previously known clusters with measured redshift & 29\\
NED AGN & 1\\
\\
\multicolumn{2}{c}{X-ray flux $>2\times10^{-13}\,$ergs$\;$s$^{-1}\,$cm$^{-2}$} \\
Objects & 82 \\
Confirmed clusters & 80\\
False detections & 1 \\
No data & 1 \\
\hline
\end{tabular}
\end{center}
\medskip
We obtained R, and in some cases I, V, and B band CCD images on the FLWO
1.2m, Danish 1.54m, and Las Campanas 1m telescopes. For brighter clusters,
we also used second generation Digitized Sky Survey (DSS-II) plates. Using
the DSS-II, it is possible to identify clusters at $z\lesssim0.45$. The
sensitivity of our CCD images is adequate to identify clusters to
$z=0.7-0.9$. If no cluster was visible in the CCD image, we considered this
object as a false detection (although it could be a very distant cluster).
These objects were retained in the sample for statistical completeness, but
marked in Table~\ref{tab:catalog}.
We also searched for possible optical counterparts in the NASA Extragalactic
Database (NED). The summary of NED identifications is given in
Table~\ref{tab:optidsum}. We obtained CCD photometry for some of the
catalogued clusters and tried to obtain spectroscopic data if redshifts were
not available. Fifteen extended sources were identified with isolated NGC
galaxies, and therefore removed from the cluster catalog. One object,
identified with an AGN, was considered as a false detection but was left in
the catalog for statistical completeness.
A summary of optical identifications of our cluster catalog is given in
Table~\ref{tab:optidsum}. Overall, we confirmed 90\% of the sources in the
total sample as clusters, while 8\% of sources are likely false detections. For
2\% of sources, no optical counterpart was present in the DSS-II and no CCD
images were yet obtained. In the X-ray bright subsample,
$f>2\times10^{-13}\,$ergs$\;$s$^{-1}\,$cm$^{-2}$, we optically confirmed 98\% of sources as
clusters; one object in this subsample is a false detection and for the
remaining one, the optical images are saturated by the nearby bright star Arcturus. These
high success rates demonstrate the high quality of our X-ray selection.
\begin{figure*}
\vspace*{-3ex}
\mbox{}\hfill\includegraphics[width=3.25in]{magz_ccd.ps}\hfill
\includegraphics[width=3.25in]{magz_dss2.ps}\hfill\mbox{}
\vskip-4ex
\figcaption{(\emph{a}) X-ray luminosity corrected BCG magnitudes vs.\
redshift. The dotted line shows the analytical fit (see text). The estimated
redshift uncertainty of $\Delta z = ^{+0.04}_{-0.07}$ is shown by dashed
lines. Crosses mark five high-redshift EMSS clusters. These clusters were
not used in the fit.\label{fig:magz:ccd} (\emph{b}) Same as (\emph{a}) but
magnitudes were measured using DSS-II. The dotted line shows the best fit
relation, and dashed lines correspond to $\Delta z =\pm 0.07$.
\label{fig:magz:dss}}
\vskip-2.5ex
\end{figure*}
\subsection{Spectroscopic and Photometric Redshifts}\label{sec:photz}
We observed an incomplete subsample of clusters spectroscopically on the
MMT, ESO 3.6m, and Danish 1.54m telescope. In most cases, we identified
several obvious cluster galaxies in the CCD images and then obtained a
long-slit spectrum, usually for 2--3 galaxies per cluster. The slit always
included the brightest cluster galaxy. For 10 clusters observed at the ESO
3.6m telescope, we obtained multi-object spectra, 10--15 galaxies per
cluster. Altogether, we measured 47 redshifts ranging from $z=0.040$ to
$z=0.574$. Further details of spectroscopic observations will be presented
in McNamara et al.\ (1998, in preparation).
For those clusters without spectroscopic data, we estimated redshifts from
the magnitude of the brightest cluster galaxy (BCG). The BCG was selected as
the brightest galaxy either within the error circle of the cluster X-ray
position or the one in the center of the galaxy concentration; both criteria
were met simultaneously in most cases. Although the BCG selection was
somewhat subjective, the tightness of the magnitude vs.\ redshift relation
obtained for $\sim 1/4$ of the total sample confirms our procedures. For
nearby clusters, the scatter in the absolute magnitude of BCGs is small,
$\sigma_M\approx0.2$ (Sandage 1972), which corresponds to $\approx 10\%$
distance error. Our results show that the scatter is small at higher
redshifts as well. The magnitude vs.\ redshift relation is calibrated within
our sample, and photometric redshifts are estimated using the CCD images
obtained under photometric conditions or DSS-II plates.
The CCD galaxy photometry was performed in the R band. The BCG magnitudes
were measured within a fixed 4\arcsec\ aperture. Such an aperture was chosen
to make the measurement relatively insensitive to poor seeing, which was
$\sim 2\arcsec$ in some cases, and to encompass $\sim 50\%$ of the light in
high-redshift galaxies. The fixed angular aperture corresponds to the metric
aperture increasing with redshift, from 10~kpc at $z=0.1$ to 29~kpc at
$z=0.5$. The increase of the metric aperture is a monotonic function of
redshift, the same for all clusters, and thus does not prevent us from using
the $m-z$ relation for photometric redshift estimates. We did not make
K-corrections of BCG magnitudes because this is also a monotonic systematic
function of redshift. Measured magnitudes were corrected for Galactic
extinction using Burstein \& Heiles (1982) maps.
There is a correlation between the BCG magnitude and the cluster X-ray
luminosity (Hudson \& Ebeling 1997), which increases the scatter in the
$m-z$ relation. Within our sample, the absolute BCG magnitude changes
approximately as $-0.5\log L_x$, in good agreement with the Hudson \&
Ebeling results. Below we use the corresponding correction, $m^\prime = m +
0.5\log(L_x/10^{44}\,\mbox{erg s$^{-1}$})$ to compensate for this effect.
The X-ray luminosity-corrected BCG magnitude is plotted vs. cluster redshift
in Fig~\ref{fig:magz:ccd}. This dependence can be well fit by a cosmological
dimming law $m^\prime = m_0 + 5\log z - 1.086(q^\prime-1)z$ with best-fit
parameters $m_0=20.45\ensuremath{^{\rm m}}$ and $q^\prime=-0.121$. In this equation,
$q^\prime$ provides a useful parametrization but does not have the meaning
of the cosmological deceleration parameter, because magnitudes were not
K-corrected and a varying metric aperture was used. The best fit relation is
shown by the dotted line in Fig.~\ref{fig:magz:ccd}. Photometric redshifts
were estimated from the analytical fit using the following iterative
procedure. We estimated redshift from the uncorrected BCG magnitude. Using
the estimated redshift, we calculated the X-ray luminosity, corrected the
BCG magnitude as described above, and re-estimated $z$. The process was
repeated until the estimated redshift converged. We checked this procedure
by estimating photometric redshifts of clusters with measured redshifts.
This comparison has shown that the photometric estimate is unbiased and has
an uncertainty of $\Delta z = ^{+0.04}_{-0.07}$.
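The iteration itself is straightforward; a sketch with the CCD best-fit
parameters quoted above (the flux-to-luminosity conversion \verb|lum| is a
cosmology-dependent helper assumed to exist and is not specified here):
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

M0, QP = 20.45, -0.121   # best-fit m0 and q' for the CCD m-z relation

def mz(z):
    # Luminosity-corrected BCG magnitude expected at redshift z.
    return M0 + 5 * np.log10(z) - 1.086 * (QP - 1) * z

def photometric_redshift(m_bcg, flux, lum, n_iter=20):
    m_corr = m_bcg
    for _ in range(n_iter):
        # mz(z) is monotonic, so the root is unique; the bracket assumes
        # typical BCG magnitudes (~17-22 mag).
        z = brentq(lambda zz: mz(zz) - m_corr, 1e-3, 2.0)
        # Correct for the BCG magnitude vs. X-ray luminosity correlation.
        m_corr = m_bcg + 0.5 * np.log10(lum(flux, z) / 1e44)
    return z
\end{verbatim}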
We also observed five high-$z$ EMSS clusters (0302+1658, 0451.6--0305,
0015.9+1609, 1137.5+6625, and 1054.5--0321) to check the $m-z$ relation at
high redshift using an external X-ray selected sample. These clusters are
plotted by crosses in Fig.~\ref{fig:magz:ccd}. They follow the relation
defined by our sample very well. In addition, these five EMSS clusters are
very X-ray luminous; their accordance with the $m-z$ relation confirms the
validity of the X-ray luminosity correction we apply to BCG magnitudes.
For 13 clusters without photometric CCD data, redshifts were estimated using
the Second Digitized Sky Survey plates. Photometric calibration of the
plates was performed using our CCD images, and will be described in McNamara et al.\
(1998, in preparation). BCG magnitudes were measured in a fixed angular
aperture of 5\arcsec. No K-correction was applied. The X-ray luminosity
corrected DSS-II magnitudes are plotted vs.\ redshift in
Fig.~\ref{fig:magz:dss}. The $m-z$ relation can be fit by the relation $m =
m_0 + 5\log z - 1.086(q^\prime-1)z$ with best fit parameters $m_0=19.84$ and
$q^\prime=-1.23$. Photometric redshifts were estimated using a procedure
analogous to that for the CCD data. The comparison of the estimated and
measured redshifts yields the accuracy of the photometric estimate of
$\Delta z\approx \pm 0.07$.
\section{The Catalog}
Our cluster catalog is presented in Table~\ref{tab:catalog}. The object
number is given in column~1. The coordinates (J2000.0) of the X-ray centroid
are listed in columns 2 and 3. The total unabsorbed X-ray flux in the
0.5--2~keV energy band (observer frame) in units of $10^{-14}\,$ergs$\;$s$^{-1}\,$cm$^{-2}$\ and
its uncertainty are listed in columns 4 and 5. Angular core-radius and its
uncertainty are given in columns 6 and 7. Column~8 contains spectroscopic or
photometric redshifts. The 90\% confidence interval of the photometric
redshift is given in column~9. Thirteen clusters for which the DSS was used
for photometric redshift are marked by superscript in column~9. If redshift
is spectroscopic, no error interval is given. Three clusters show clear
concentrations of galaxies near the X-ray position, but the choice of BCG is
uncertain because of the large cluster angular size. We do not list
photometric redshifts for these clusters and mark them by ``U'' in the Notes
column. Column~10 lists the 90\% X-ray position error circle. Column~11
contains notes for individual clusters. In this column, we list the optical
identifications from the literature. We also mark likely false detections by
``F''.
Table~\ref{tab:lofields} shows coordinates and exposures for the 647
analyzed \ROSAT\/ pointings. For a quick estimate of sensitivity in each
field, one can use the listed exposure time and Fig~\ref{fig:limflux}. In
this figure, we show the limiting flux, at which clusters are detected with
a probability of $90\%$ for off-axis angles between 2\arcmin\ and
17.5\arcmin.
\section{Monte-Carlo Simulations of Cluster Detection}\label{sec:simulations}
For a statistical analysis of our cluster catalog, the detection efficiency
as a function of flux and extent, as well as the measurement uncertainties,
are required.
To derive these functions, we used extensive Monte-Carlo simulations
described in this section.
\subsection{Correcting for Selection Effects}
The most direct way to compare theoretical models with our cluster catalog
is to predict the number of clusters within some interval of measured fluxes
and radii (and redshift) and then compare the prediction with the number of
detected clusters in this interval. To predict the number of detected
clusters, one needs to know the detection probability as a function of real
cluster flux, $f$, and radius, $r_c$, and the distribution of measured
values, $f_m$ and $r_{c,m}$, also as a function of $f$ and $r_c$. Using a
theoretical model, one calculates the number of real clusters as a function
of flux and radius, then multiplies this number by the detection
probability, and then convolves it with the measurement scatter.
Since the detection algorithm for extended sources is rather complicated,
the only method of deriving appropriate corrections is through Monte-Carlo
simulations.
\medskip
\centerline{\includegraphics[width=3.25in]{plotlimflux.ps}}
\vskip -3.5ex
\figcaption{Approximate limiting flux, at which the cluster detection
probability is 90\% in the range of off-axis angles $2\arcmin-17.5\arcmin$,
plotted vs.\ exposure time. Limiting fluxes for three values of cluster
core-radius, $r_c=15\arcsec$, 30\arcsec, and 60\arcsec, are shown.
Sensitivity is best for $r_c\approx 30\arcsec$ and declines for smaller and
larger clusters.
\label{fig:limflux}}
\subsection{What Affects the Cluster Detection?}
In this section, we discuss the effects that influence the cluster detection
process, and therefore should be included in Monte-Carlo simulations.
The first obvious effect is the degradation of the \ROSAT\/ angular
resolution at large off-axis angles. Because of this degradation, a cluster
with $r_c=20\arcsec$ is well-resolved on-axis where the FWHM of the PSF is
25\arcsec, but the same cluster is indistinguishable from a point source if
located at an off-axis angle of 17\arcmin\ where the PSF is 57\arcsec\
(FWHM).
Point sources, which may lie in the vicinity of a cluster, reduce the
efficiency of cluster detection and increase the measurement errors.
Therefore, the simulations should include realistic spatial and flux
distributions of point sources.
In addition, exposure time, Galactic absorption, and the average background
level vary strongly among the analyzed \ROSAT\/ fields, and so does the
probability to detect a cluster of given flux. Also, the background has to
be modeled individually for each field, and cannot be assumed known in
simulations.
To model all these effects, we simulate realistic \ROSAT\/ images containing
point sources, insert clusters with known input parameters at random
positions into the simulated images, and analyze these images identically to
the real data. The selection functions are then derived from comparison of
the numbers and parameters of input and detected clusters.
\subsection{Simulating ROSAT Images without Clusters}\label{sec:sim:point}
We begin with point sources which are the major contributor to the X-ray
background in the \ROSAT\/ band. To simulate source fluxes, we use the $\log
N - \log S$ relation measured in the flux range of
$1.2\times10^{-15}-10^{-12}\,$ergs$\;$s$^{-1}\,$cm$^{-2}$\ (Vikhlinin et al.\ 1995). Fluxes are
simulated using the extrapolation of $\log N - \log S$ in the range from
$10^{-11}$ to $2.5\times10^{-17}\,$ergs$\;$s$^{-1}\,$cm$^{-2}$, where the integral emission of
point sources saturates the X-ray background. Source positions are
simulated either randomly or with a non-zero angular correlation function
using a two-dimensional version of the Soneira \& Peebles (1978) algorithm.
After the source position is determined, we convert the flux to the number
of detected photons using the exposure time at the source position, and the
counts-to-flux conversion appropriate to a power law spectrum with
$\Gamma=2$ and the actual Galactic absorption in the simulated field. The
number of detected source photons is drawn from a Poisson distribution. The
photon positions are simulated around the source position according to the
PSF as a function of off-axis angle. Finally, we add a flat Poisson
background (corrected for the exposure variations across the field) until
the average background levels are equal in the simulated image and the
corresponding real observation. This flat uniform component corresponds to
truly diffuse backgrounds, such as foreground Galactic emission, scattered
solar X-rays, and the particle background.
The images simulated according to the described procedure correctly
reproduce fluxes and the spatial distribution of point sources, the average
background level, and background fluctuations caused by undetected point
sources and their possible angular correlation.
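As an illustration, the flux sampling and photon generation for one source
reduce to the following (a sketch with a single power-law $\log N - \log S$;
the actual simulations used the measured, non-power-law relation):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def draw_fluxes(n, s_min=2.5e-17, s_max=1e-11, alpha=1.5):
    # Inverse-transform sampling from N(>S) ~ S^-alpha between the limits.
    u = rng.random(n)
    return (s_min**-alpha - u * (s_min**-alpha - s_max**-alpha))**(-1/alpha)

def detected_counts(flux, exposure, countrate_per_flux):
    # Flux -> expected counts via the local exposure and the counts-to-flux
    # conversion, then a Poisson realization.
    return rng.poisson(flux * countrate_per_flux * exposure)
\end{verbatim}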
\subsection{Simulations of clusters}
The next step is to put a cluster of a given flux and angular size at a
random position in the image. An elliptical $\beta$-model
\begin{equation}\label{eq:ellbeta}
I(x,y) = I_0\;\left(1+x^2/a_x^2+y^2/a_y^2\right)^{-3\beta+1/2},
\end{equation}
was used for cluster brightness. Cluster $\beta$ parameters and axial ratios
were randomly selected from the distribution observed in nearby clusters
(Jones \& Forman 1998, Mohr et al.\ 1995). To include the influence of edge
effects arising because detected clusters must lie between off-axis angles
of 2\arcmin--17.5\arcmin, cluster positions were simulated in the inner
18.5\arcmin\ circle of the field of view. Cluster flux was converted to the
number of detected photons using the local exposure and the counts-to-flux
coefficient corresponding to a $T=5$~keV plasma spectrum and the Galactic
absorption for the field. The cluster model was convolved with the PSF
calculated for the given off-axis angle. Photons were simulated using a
Poisson distribution around the model and added to the image.
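A minimal sketch of this step (the PSF kernel and the flux-to-counts
conversion are assumed given; a position-angle rotation is omitted for
brevity):
\begin{verbatim}
import numpy as np
from scipy.signal import fftconvolve

rng = np.random.default_rng(2)

def simulate_cluster(shape, x0, y0, ax, ay, beta, n_photons, psf):
    # Elliptical beta-model of eq. (5), normalized to the expected counts,
    # convolved with the local PSF, and realized as Poisson noise.
    y, x = np.indices(shape)
    model = (1 + (x-x0)**2/ax**2 + (y-y0)**2/ay**2) ** (-3*beta + 0.5)
    model *= n_photons / model.sum()
    model = fftconvolve(model, psf, mode='same')
    return rng.poisson(model.clip(0))
\end{verbatim}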
Reducing simulated images identically to the real data, we derive the
detection efficiency as a function of flux and effective radius. Effective
radius is defined as the radius at which the radially-averaged surface
brightness drops by a factor of $2^{3/2}$ (\S\ref{sec:radius}). The
effective radius can be calculated from parameters in eq.~\ref{eq:ellbeta}
as
\begin{equation}
r_e=\sqrt{a_xa_y\left(2^{1.5/(3\beta-0.5)}-1\right)}.
\end{equation}
In simulations, we verified that the detection probability indeed has very
little dependence on the $\beta$-parameter and axial ratio and is determined
by $r_e$ only.
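As a quick consistency check of the expression above: for $\beta=2/3$ the
exponent equals unity, so $r_e=\sqrt{a_xa_y}$, and for a circular cluster the
surface brightness at $r_e$ is indeed a factor of $2^{3/2}$ below the central
value (illustrative code):
\begin{verbatim}
import numpy as np

def effective_radius(ax, ay, beta):
    return np.sqrt(ax * ay * (2 ** (1.5 / (3 * beta - 0.5)) - 1))

print(effective_radius(30.0, 20.0, 2/3.))        # sqrt(600) ~ 24.49

rc, beta = 25.0, 0.7
re = effective_radius(rc, rc, beta)
print((1 + re**2 / rc**2) ** (-3*beta + 0.5))    # = 2**-1.5 ~ 0.354
\end{verbatim}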
\subsection{Simulation Runs}
Simulated images were reduced identically to the real data, i.e.\ we modeled
the background (\S\ref{sec:bg}), detected candidate extended sources by the
wavelet decomposition (\S\ref{sec:wd}), fitted these candidate sources and
applied our final selection criteria (\S\ref{sec:ML}), and recorded
parameters of input and detected clusters. Each of the 647 \ROSAT\/ fields
was simulated 650 times. Radii and fluxes of input clusters were randomly
distributed in the 5\arcsec--300\arcsec\ and
$10^{-14}$--$3\times10^{-12}\,$ergs$\;$s$^{-1}\,$cm$^{-2}$\ ranges, respectively.
To derive the distribution of false detections, we performed a separate set
of simulations without putting clusters into simulated images. In this set
of simulations, each field was simulated 50 times.
Simulations were performed with point sources distributed either randomly or
with the angular correlation function measured by Vikhlinin \& Forman (1995)
for faint \ROSAT\/ sources. The spatial correlation of point sources
significantly increases the number of false detections (by a factor of 1.5),
but has little or no effect on the detection probability of real clusters.
The simulation results were used to measure the cluster selection functions
necessary for a statistical study of our catalog. These data are available
in electronic publication on AAS CDROM and through the WWW page
\mbox{http://hea--www.harvard.edu/x--ray--clusters/}.
\mbox{}\hfill\includegraphics[width=3.25in]{effarea2d.ps}\hfill\mbox{}
\vskip-2.5ex
\figcaption{Probability of cluster detection as a function of flux
and core-radius. Contours correspond to the detection probabilities of 1, 5,
10, 20, 30, 40, 50, 60, 70, 80, 90, and 99\%. \label{fig:effarea2d}}
\subsection{Results of Simulations: Detection Probability}\label{sec:detprob}
The probability that a cluster with unabsorbed flux $f$ and radius $r_e$,
whose position falls within 18.5\arcmin\ of the center of one of the
analyzed \ROSAT\/ fields will be detected, is shown in
Fig.~\ref{fig:effarea2d}. This probability is normalized to the geometric
area of the annulus in which detected clusters may be located
(2\arcmin--17.5\arcmin). At a given flux, the detection probability is the
highest for clusters with radii of $\sim 30\arcsec$. It gradually decreases
for clusters with larger radii, because their flux is distributed over a
larger area, thus decreasing their statistical significance. The detection
probability also decreases for compact clusters, because they become
unresolved at large off-axis angles. This effect is important for clusters
with angular core radii of $\lesssim 15\arcsec$. Even at $z=1$ this radius
corresponds to 130~kpc, which is two times smaller than the core-radius of a
typical rich cluster (250~kpc, Jones \& Forman 1998). Therefore, cluster
detection efficiency is limited mainly by the low number of photons, not by
the resolution of the \ROSAT\/ PSPC.
The detection probability changes by less than 10\% for clusters with axial
ratios $<0.7$ compared to azimuthally symmetric clusters. This is caused by
significant PSF smearing, which reduces the apparent ellipticity of distant
clusters. Similarly, we have found no significant dependence of the
detection probability on the value of the $\beta$-parameter.
\begin{figure*}[htb]
\vspace*{-3ex}
\mbox{}\hfill\includegraphics[width=3.25in]{gridbias_flux.ps}\hfill
\includegraphics[width=3.25in]{gridbias_ax.ps}\hfill\mbox{}
\vskip -2.5ex
\caption{\footnotesize Bias and scatter of flux and radius measurements.
Points show the average relative deviation of the measured quantity. Error
bars show the relative scatter of the measured quantity, not errors of bias.}
\label{fig:biasgrid}
\vskip -1.5ex
\end{figure*}
\begin{figure*}[htb]
\vspace*{-2.5ex}
\mbox{}\hfill\includegraphics[width=3.25in]{false.ps}\hfill
\includegraphics[width=3.25in]{falsedistr.ps}\hfill\mbox{}
\vskip -2.5ex
\caption{\footnotesize Distribution of false sources as a function of
measured flux and radius (left), and a cumulative distribution as a function
of flux (right). In the left panel, contours represent the levels of equal
density of false source distribution. Contour labels show the number of
false sources outside the contour. Points represent parameters of the likely
false detections in the data, i.e.\ those X-ray sources which have no
cluster counterparts in deep CCD images.}
\label{fig:falsedistr}
\vskip -1.5ex
\end{figure*}
\subsection{Results of Simulations: Measurement Scatter and Bias}\label{sec:measscat}
In this section we consider the distribution of measured flux and radius of
detected clusters, as a function of input flux and radius. This distribution
is derived for clusters detected in \emph{any}\/ field and at \emph{any}\/
off-axis angle, and is different from the uncertainties listed in
Table~\ref{tab:catalog}, which are determined only by the photon counting
statistics. Fig.~\ref{fig:biasgrid} shows the distributions derived for
several values of input cluster flux and radius. The points in this figure
represent the mean relative deviation of the observed parameter, while the
error bars show the mean relative scatter (both positive and negative).
Generally, the flux measurement is unbiased and has a small relative scatter
of $\sim 20\%$. At low fluxes, where the detection probability decreases,
the measured fluxes tend to be overestimated. This bias is naturally present
whenever a flux measurement is performed near the detection threshold, and
is not related to the particular detection algorithm. For example, a source
with a true flux exactly equal to the detection threshold will be detected
with 50\% probability, and in all these cases the measured flux exceeds the
true flux. Averaged over detections, the measured flux exceeds the true
value. This flux bias should be accounted for in deriving the luminosity
functions and the $\log N -
\log S$ relation. On the other hand, for clusters with very large radii, the
flux is underestimated, because the background is overestimated near broad
clusters. This effect is important only for clusters at low redshifts which
have large angular core radii.
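The origin of this threshold bias can be demonstrated with a toy Monte Carlo;
the sketch below is our own illustration (the fixed counts threshold is
hypothetical and unrelated to the actual detection algorithm):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

# A source with true expectation mu counts is "detected" when the
# observed Poisson counts exceed a fixed threshold.
threshold = 25
for mu in (20, 25, 35, 100):
    obs = rng.poisson(mu, size=200_000)
    det = obs > threshold
    bias = obs[det].mean() / mu - 1.0
    print(f"mu={mu:4d}  P(detect)={det.mean():.2f}  "
          f"flux bias of detected sources={bias:+.2f}")
\end{verbatim}
Near the threshold the detected subsample is biased high, while well above the
threshold the bias vanishes, as described above.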
The radii of very compact clusters are strongly overestimated on average,
because such clusters can be detected as extended sources only if their
measured radius is a positive fluctuation with respect to the true value,
similar to the flux bias above. The measured radii of very broad clusters
are underestimated because of the oversubtraction of the background. The
sizes of distant clusters mostly fall in the range of $15\arcsec-1\arcmin$,
where our radius measurements are unbiased. For example, a radius of
250~kpc corresponds to 45\arcsec{} at $z=0.3$, 35\arcsec{} at $z=0.5$, and
29\arcsec\ at $z=1$. At $z<0.2$, 250~kpc corresponds to large angular radii
and therefore measured sizes of large low-redshift clusters are
underestimated.
\subsection{Results of Simulation: False Detections}\label{sec:false}
Because of the finite angular resolution of the \ROSAT\/ PSPC, closely
located point sources can be falsely classified as a single extended source.
Optical identification is the most direct way of finding such false
detections. However, optical observations alone, with no estimate of the
number of false detections, could result in our failure to identify
interesting new classes of objects such as quasars lensed by ``dark''
clusters (Hattori et al.\ 1997), clusters dominated by a single galaxy
(Ponman et al.\ 1994), ``failed'' clusters (Tucker, Tananbaum, \& Remillard
1995). Therefore, it is desirable to have an independent estimate of the
number of false detections and their distribution as a function of flux and
radius. For this, we simulate \ROSAT\/ images without clusters and reduce
them identically to the real data. All the extended sources detected in
these simulations are false. Since the simulations correctly reproduce
fluxes and spatial distribution of point sources and all the instrumental
artifacts of the \ROSAT\/ PSPC, the expected number of false detections can
be accurately measured.
Confusion of point sources is the main effect leading to false cluster
detections. The degree of confusion depends strongly on whether point
sources are distributed randomly or have angular correlation, and the number
of false detections changes correspondingly. From simulations, we derive
that our source catalog should on average contain 17.2 false detections if
point sources are randomly located. If point sources have correlation with
the observed amplitude (Vikhlinin \& Forman 1995), the number of false
detections increases to 25.9. Fig~\ref{fig:falsedistr} shows the
distribution of false detections in radius vs.\ flux coordinates and their
cumulative distribution as a function of flux, obtained for correlated point
sources. For randomly located sources, the distributions in
Fig~\ref{fig:falsedistr} should simply be scaled. The contamination of our
extended source catalog by confused point sources is between 8\% and 11\%.
The predicted number of false detections agrees well with results of optical
identifications. From simulations, we expect on average $\approx1.5$ false
detections with fluxes $>2\times10^{-13}\,$ergs$\;$s$^{-1}\,$cm$^{-2}$. Of 82 X-ray sources above
this flux, 80 are optically confirmed clusters, and one is a likely false
detection. In the total sample, we expect $\approx 17-26$ false sources,
while the presently available optical identifications set an upper limit of
23 and lower limit of 18 false detections in the data
(Table~\ref{tab:optidsum}). Finally, the distribution of flux and
core-radius of X-ray sources without optical cluster counterparts matches
well the distribution for false detections found in simulations
(Fig~\ref{fig:falsedistr}). Thus, our sample provides no support for the
existence of ``dark'' clusters.
\subsection{Sky Coverage as a Function of Flux}
To compute the $\log N - \log S$ function, the survey solid angle as a
function of flux is required. Traditionally, the sky coverage is thought of
as the area in which a survey is ``complete'', i.e.\ all sources above the
given flux are detected. The differential $\log N - \log S$ is computed as
the ratio of the number of detected sources in a given flux bin and the sky
coverage in this flux bin. However, this view of the sky coverage is not
correct in the presence of significant flux measurement errors, which is the
case in all \ROSAT\/ surveys. First, the source detection probability
changes gradually from 0 to 1 in a flux range of finite width, and cannot be
adequately approximated by a step-like function of flux. Second, the
measurement scatter leads to significant biases in the derived $\log N -
\log S$ relation, as we describe below. Some intrinsically bright sources
have low measured fluxes, while some intrinsically faint sources have high
measured fluxes. For surveys with uniform sensitivity, the number of
sources usually increases at faint fluxes and the described effect leads to
overestimation of $\log N - \log S$ (Eddington 1940). In X-ray surveys, the
sky coverage usually drops rapidly at faint fluxes and therefore the number
of detected sources decreases at faint fluxes. In this case, the sign of the
Eddington bias is opposite and the $\log N - \log S$ function is
underestimated (see Fig.~6 in Hasinger et al.\ 1993a).
\vskip -1ex
\tabcaption{\centerline{Sky coverage of the survey}\label{tab:area}}
\begin{center}
\vskip -0.5ex
\renewcommand{\arraystretch}{1.2}
\footnotesize
\begin{tabular}{ccc}
\hline
\hline
Limiting Flux & \multicolumn{2}{c}{Solid Angle (deg$^2$) } \\
\cline{2-3}
ergs$\;$s$^{-1}\,$cm$^{-2}$ & entire sample & $z>0.5$ clusters \\
\hline
$1.3\times10^{-14}$ & 0.074 & 0.070 \\
$1.5\times10^{-14}$ & 0.094 & 0.089 \\
$2.0\times10^{-14}$ & 0.185 & 0.190 \\
$3.0\times10^{-14}$ & 1.354 & 1.364 \\
$4.5\times10^{-14}$ & 9.026 & 9.100 \\
$7.0\times10^{-14}$ & 34.74 & 34.03 \\
$1.0\times10^{-13}$ & 66.55 & 66.20 \\
$1.5\times10^{-13}$ & 102.6 & 104.3 \\
$2.0\times10^{-13}$ & 122.8 & 127.4 \\
$3.0\times10^{-13}$ & 140.9 & 147.0 \\
$4.5\times10^{-13}$ & 148.1 & 154.0 \\
$7.0\times10^{-13}$ & 149.3 & 159.6 \\
$1.0\times10^{-12}$ & 151.1 & 161.3 \\
$1.5\times10^{-12}$ & 157.1 & 164.7 \\
$2.0\times10^{-12}$ & 158.5 & 165.1 \\
\hline
\end{tabular}
\end{center}
\smallskip
For a plausible model of the source population, one can calculate the ratio
of the differential $\log N - \log S$ for detected and real sources, if the
detection probability and measurement scatter are known. The ratio of these
$\log N - \log S$ functions has the usual meaning of the sky coverage. This
approach to the survey area calculation was used by Vikhlinin et al.\ (1995a
and 1995b) to obtain an unbiased measurement of the $\log N - \log S$
relation for point sources. We use the same approach here to define the
survey area for the present cluster survey. We assume non-evolving clusters
with the luminosity function of Ebeling et al.\ (1997) in a $q_0=0.5$
cosmology. The distribution of cluster radii and their correlation with
luminosities is adopted from Jones \& Forman (1998). We simulate cluster
redshifts between $z=0$ and $z=2$ using the cosmological volume-per-redshift
law (e.g.\ Peebles 1993). We then simulate the rest-frame luminosity between
$L_x=10^{42}$ and $10^{46}\,$ergs~s$^{-1}$. Cluster radius is simulated
from the distribution corresponding to the simulated luminosity. We then
calculate the observed angular radius and flux accounting for the
correlation between X-ray luminosity and temperature (e.g., David et al.\
1993), the probability to detect this cluster (\S\ref{sec:detprob}), and
finally we simulate the measured flux (\S\ref{sec:measscat}). The detection
probability is added to the distribution of detected clusters as a function
of measured flux, and 1 is added to the number of input clusters in the
corresponding bin of real flux. Simulating $10^6$ clusters according to this
procedure, we determine the sky coverage as the ratio of detected and input
sources in the corresponding flux bins. The calculated survey area is shown
in Table~\ref{tab:area}. In this table we also show the sky coverage for the
distant, $z>0.5$, subsample. The sky coverage for distant subsamples differs
from that for the entire sample because of the different distribution of
angular sizes.
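The logic of this ratio estimate can be summarized in a short Monte Carlo
sketch; every ingredient below (the power-law input counts, the sigmoid
detection probability, the lognormal $20\%$ scatter) is a simplified stand-in
for the actual ingredients described above:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def p_detect(f):
    # placeholder detection probability rising smoothly from 0 to 1
    return 1.0 / (1.0 + (2e-14 / f)**3)

n_sim = 10**6
f_true = 1e-14 * (1 - rng.random(n_sim))**(-1.0 / 1.3)  # toy counts
f_meas = f_true * rng.lognormal(0.0, 0.2, n_sim)        # ~20% scatter
detected = rng.random(n_sim) < p_detect(f_true)

bins = np.logspace(-14, -12, 21)
n_in, _ = np.histogram(f_true, bins)
n_det, _ = np.histogram(f_meas[detected], bins)
coverage = 158.0 * n_det / n_in     # deg^2, per measured-flux bin
\end{verbatim}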
Using different cluster evolution models (including evolution of
luminosities, number density, and radii), we verified that the derived sky
coverage varies by no more than $10\%$ compared to the no-evolution
assumption, if cluster radii do not evolve. Using the present cluster
sample, Vikhlinin et al.\ (1998) show that the distribution of sizes of
distant and nearby clusters is indeed very similar.
\vspace*{-1.5ex}
\centerline{\includegraphics[width=3.25in]{plotcnts.ps}}
\vskip -2.5ex
\figcaption{Cluster $\log N - \log S$ relation. The results from
our survey are shown as the heavy solid histogram with several individual
points including error bars. Vertical error bars represent the uncertainty
in the number of clusters, while horizontal error bars correspond to a
possible systematic uncertainty in flux (\S\ref{sec:fluxcalc}). Other
surveys are shown for comparison.\label{fig:lnls}}
\medskip
\section{${\rm LOG}\; N - {\rm LOG}\; S$~ relation for clusters}
Using the survey solid angle, we calculate the $\log N - \log S$ relation
for clusters. Each optically confirmed cluster is added to the cumulative
distribution with the weight equal to the inverse solid angle corresponding
to its measured flux. The derived cumulative $\log N - \log S$ function is
shown in Fig~\ref{fig:lnls}. We also show the cluster counts derived in
other surveys: EMSS (adopted from Jones et al.\ 1998), \ROSAT\/ All-Sky survey
sample of X-ray brightest clusters (BCS; Ebeling et al.\ 1997), WARPS survey
(Jones et al.\ 1998), Rosati et al.\ (1998 and 1995), and an ultra-deep UK
\ROSAT\/ survey (McHardy et al.\ 1997). The $\log N - \log S$ relation
derived from our survey spans more than 2.5 orders of magnitude in flux. At
the bright end, our result shows excellent agreement with the samples of
nearby clusters from the BCS and EMSS. At intermediate fluxes, around
$2\times10^{-13}\,$ergs$\;$s$^{-1}\,$cm$^{-2}$, our cluster counts agree well with a small-area
WARPS survey. Finally, the extrapolation of our $\log N - \log S$ relation
down to $3\times10^{-15}\,$ergs$\;$s$^{-1}\,$cm$^{-2}$, agrees with results of McHardy et al.\
(1997), who identified most of the X-ray sources, regardless of extent, in
their ultra-deep survey. Our $\log N - \log S$ relation seems to be
systematically higher than the surface density of clusters identified in the
50~deg$^2$ survey of Rosati et al. For example, the difference is a factor
of 1.3 at $2\times10^{-13}\,$ergs$\;$s$^{-1}\,$cm$^{-2}$, where we optically confirmed 98\% of
detected sources and where the survey area corrections are relatively small.
This difference is marginally significant at the $\sim 2\sigma$ level. Since
Rosati et al.\ have not published their cluster sample nor the details of
the survey area calculations, it is hard to assess the source of this
discrepancy. We only note that it can be explained, for example, if there is
a systematic difference of 15--20\% in fluxes. A discrepancy of our $\log N
- \log S$ relation with the EMSS near their sensitivity limit is most likely
due to the difference in measured fluxes (\S\ref{sec:fluxcalc}).
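For illustration, the resulting weighting can be written directly in terms of
the solid angles of Table~\ref{tab:area}; the helper functions below are our
own sketch, and the log-log interpolation between the tabulated fluxes is an
assumption:
\begin{verbatim}
import numpy as np

# Sky coverage for the entire sample (from the sky coverage table)
f_lim = np.array([1.3e-14, 1.5e-14, 2.0e-14, 3.0e-14, 4.5e-14,
                  7.0e-14, 1.0e-13, 1.5e-13, 2.0e-13, 3.0e-13,
                  4.5e-13, 7.0e-13, 1.0e-12, 1.5e-12, 2.0e-12])
omega = np.array([0.074, 0.094, 0.185, 1.354, 9.026, 34.74, 66.55,
                  102.6, 122.8, 140.9, 148.1, 149.3, 151.1, 157.1,
                  158.5])

def sky_coverage(f):
    # log-log interpolation of the solid angle at measured flux f
    return np.exp(np.interp(np.log(f), np.log(f_lim), np.log(omega)))

def cumulative_counts(fluxes, f_grid):
    # N(>S): each cluster enters with weight 1/Omega(measured flux)
    fluxes = np.asarray(fluxes)
    w = 1.0 / sky_coverage(fluxes)
    return np.array([w[fluxes >= f].sum() for f in f_grid])
\end{verbatim}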
\section{Summary}
We present a catalog of 200 clusters detected as extended X-ray sources in
647 \ROSAT\/ PSPC observations covering a solid angle of 158 square degrees.
To detect these sources, we used a novel detection algorithm combining a
wavelet decomposition to find candidate extended sources and Maximum
Likelihood fitting to evaluate the statistical significance of the source
extent. Optical identifications demonstrate a high success rate of our
X-ray selection: 90\% of detected sources in the total sample, and 98\% in
the bright subsample are optically confirmed as clusters of galaxies. We
present X-ray parameters of all detected sources and spectroscopic or
photometric redshifts for optically confirmed clusters. Extensive
Monte-Carlo simulations of our source detections are used to derive the sky
coverage of the survey necessary for a statistical study of X-ray properties
of our clusters. We present the $\log N - \log S$ relation derived from our
cluster catalog. This relation shows a general agreement with other,
smaller area surveys.
In a subsequent paper (Vikhlinin et al.\ 1998) we use this sample to
constrain the evolution of cluster luminosities and radii at high redshift.
\acknowledgements
We thank M.~Markevitch for useful discussions, D.~Fabricant and M.~Franx for
the advice regarding MMT observations, and H.~Ebeling and P.~Rosati for
useful communications regarding their surveys. We are grateful to
J.~A.~Tyson, E.~Barton, and S.~Jha who obtained some of the CCD images. We
made use of the Digitized Sky Survey produced by the Space Telescope Science
Institute from the Oschin Schmidt Telescope on Mt.\ Palomar and the UK
Schmidt Telescope photographic plates, NASA/IPAC Extragalactic Database, and
the \ROSAT\/ data archive maintained by GSFC. Financial support for this
work was provided by the Smithsonian Institution, NAS8-39073 contract, and
the Russian Basic Research foundation grant 95--02--05933. HQ acknowledges
partial support from FONDECYT grant 8970009 and the award of a Presidential
Chair in Science.
\section{Introduction}
Since the experimental realization of the spin-orbit coupling (SOC) effect
in ultracold atomic gases \citep{Lin2011s,Wang2012s,Cheuk2012s},
it has become possible to investigate many interesting and exotic matter states,
such as the stripe phase \citep{Ho2011b} and topological states \citep{Jiang2011m},
in this highly controllable system. To study the properties of these many-body
matter states, scattering techniques based on the interplay
between atoms and light play a significant role in enriching knowledge
about them. For example, radio-frequency spectroscopy is often used to study
the single-particle spectral function \citep{Chin2005r}, while the two-photon
Bragg scattering technique is utilized to study both single-particle
excitations and rich collective ones \citep{Veeravalli2009b,Hoinka2017g}.
As a many-body physical quantity, the dynamic structure factor
is related to the imaginary part of the response function after a Fourier
transformation \citep{Pitaevskii2003book}. The definition of the dynamic
structure factor is tied to a certain physical operator, which
is applied to perturb the system. Usually the discussion focuses on the
density operators of the two spin components, which impart a set
of momentum and energy to the system and induce a density response.
This density-related dynamic structure factor provides rich information
about the dynamics of the system. At a small transferred momentum,
the signal of the dynamic structure factor is dominated by all possible
collective excitations, such as the Goldstone phonon excitation \citep{Hoinka2017g}, second sound \cite{Hu2022s},
the Leggett excitation \citep{Leggett1966n,Zhang2017t,Zou2021d}, and
a possible Higgs excitation \citep{Pekker2015a,Zhao2020d}. At a large
transferred momentum, the dynamic structure factor is mainly determined
by the single-particle excitations \citep{Combescot2006m}, which reflect
the many-body single-particle spectrum. Collecting all possible dynamical excitations displayed by the dynamic structure
factor, we can effectively understand the dynamical properties of
a certain many-body matter state of the system. The experimental measurement of many-body physical quantities is usually challenging. However, it is known that the density dynamic structure factor
is proportional to the centre-of-mass velocity of the system \citep{Brunello2001m}, which
makes it feasible to measure the density structure factor in a two-photon Bragg scattering experiment.
In this paper, we theoretically investigate a one-dimensional (1D) Fermi
superfluid with a Raman-type SOC effect. The system can be realized
by freezing out the motion in the other two dimensions with
an optical lattice. This system has been found to undergo
a phase transition from a conventional Bardeen-Cooper-Schrieffer (BCS)
superfluid to an interesting topological superfluid upon continuously
increasing an effective Zeeman field \citep{Jiang2011m,Liu2012t,Wei2012m}.
When the system enters this topological superfluid, an impurity,
a boundary or a topological defect can generate local Majorana fermions
accompanied by a zero eigenenergy \citep{Liu2013i,Xu2014d,Liu2015s}.
Since there is no symmetry breaking during this phase transition, it is
experimentally a great challenge to distinguish these two matter states. Here
we theoretically calculate the density dynamic structure
factor of the 1D Raman-SOC Fermi superfluid with the random phase approximation \cite{AndersonPR1958}, and analyze the main dynamical
characters of both the BCS and the topological superfluid, which is expected
to provide dynamical information to understand and distinguish
these two states.
This paper is organized as follows. In the next section, we use
the language of Green's functions to introduce the microscopic
model of the 1D Fermi superfluid with the Raman SOC effect, outline the mean-field
approximation, and show how to calculate the response function within the random phase
approximation. We give results for the dynamic structure factor of
both the BCS and the topological superfluid in Sec. III. In Secs. IV and V, we give
our conclusions and acknowledgments. Some calculation details are given in the
appendix.
\section{Methods}
\subsection{Model and Hamiltonian}
For a two-component 1D Raman-SOC Fermi superfluid with \textsl{s}-wave
contact interaction, the system can be described by the model Hamiltonian
\begin{equation}
\begin{array}{cc}
H & =\underset{\sigma}{\sum}\int dx\psi_{\sigma}^{\dagger}\left(x\right)\left[-\frac{1}{2m}\frac{\partial^{2}}{\partial x^{2}}-\mu\right]\psi_{\sigma}\left(x\right)\\
& -h\int dx\left[\psi_{\uparrow}^{\dagger}\left(x\right)e^{i2k_{R}x}\psi_{\downarrow}\left(x\right)+H.c.\right]\\
& +g_{1D}\int dx\psi_{\uparrow}^{\dagger}\left(x\right)\psi_{\downarrow}^{\dagger}\left(x\right)\psi_{\downarrow}\left(x\right)\psi_{\uparrow}\left(x\right),
\end{array}
\end{equation}
where $\psi_{\sigma}$ ($\psi_{\sigma}^{\dagger}$) is the annihilation
(creation) operator of real particles with mass $m$ for the spin-$\sigma$
component, and $\mu$ is the chemical potential. A dimensionless parameter
$\gamma=mg_{1D}/n_{0}$ is usually used to describe the interaction
strength $g_{1D}$ of a uniform system at a bulk density $n_{0}$,
from which we define the Fermi wave vector $k_{F}=\pi n_{0}/2$
and the Fermi energy $E_{F}=k_{F}^{2}/2m$. $h$ is an effective
Zeeman field and $k_{R}$ is the recoil momentum of the SOC laser beams,
both of which originate from the SOC effect. Here and in the following,
we set $\hbar=k_{B}=1$ for simplicity. In many references on the SOC effect,
a further unitary transformation is applied to the above Hamiltonian \cite{Liu2013i}, which
makes a term $\hat{k}\cdot\sigma$ appear in the single-particle Hamiltonian ($\sigma$ denotes the Pauli matrices) but also changes the physical meaning of the spin index. Here we do not carry out this transformation, in order to keep the original definition of the spin index.
A standard mean-field treatment
is done to the interaction Hamiltonian $H_{{\rm int}}=g_{1D}\int dx\psi_{\uparrow}^{\dagger}\psi_{\downarrow}^{\dagger}\psi_{\downarrow}\psi_{\uparrow}$
with the usual definition of order parameter $\Delta=-g_{1D}\left\langle \psi_{\downarrow}\psi_{\uparrow}\right\rangle $.
After Fourier transformation to the mean-field Hamiltonian,
we can obtain its expression in the momentum representation, which
reads
\begin{equation}
\begin{array}{cc}
H_{{\rm mf}} & =\underset{k\sigma}{\sum}\xi_{k}c_{k\sigma}^{\dagger}c_{k\sigma}-h\left(c_{k+k_{R}\uparrow}^{\dagger}c_{k-k_{R}\downarrow}+H.c.\right)\\
& -\underset{k}{\sum}\left[\Delta c_{k\uparrow}^{\dagger}c_{-k\downarrow}^{\dagger}+\Delta^{*}c_{-k\downarrow}c_{k\uparrow}\right]
\end{array}
\end{equation}
with $\xi_{k}=k^{2}/2m-\mu$. In general the order parameter $\Delta$
is a complex number. However, U(1) symmetry is broken in the ground
state of the system, and the phase of $\Delta$ settles randomly to a
constant value. Here we simply take this phase to be zero, so that $\Delta=\Delta^{*}$.
The exact diagonalization of the mean-field Hamiltonian $H_{{\rm mf}}$
is feasible but tedious because of the long expression of each eigenvector. This problem can be avoided by using
the equation of motion of the Green's function, $\omega\left\langle \left\langle c_{1}|c_{2}\right\rangle \right\rangle =\left\langle \left[c_{1},c_{2}\right]_{+}\right\rangle +\left\langle \left\langle \left[c_{1},H_{{\rm mf}}\right]|c_{2}\right\rangle \right\rangle $,
where $c_{1}$ and $c_{2}$ are any fermionic operators of
the system. We find that the system has six independent Green's
functions,
\begin{equation}
\begin{array}{c}
G_{1}\left(k,\omega\right)\equiv\left\langle \left\langle c_{k+k_{R}\uparrow}|c_{k+k_{R}\uparrow}^{\dagger}\right\rangle \right\rangle =\underset{l}{\sum}\left[G_{1}\right]_{k}^{l}/\left(\omega-E_{k}^{l}\right),\\
G_{2}\left(k,\omega\right)\equiv\left\langle \left\langle c_{k-k_{R}\downarrow}|c_{k-k_{R}\downarrow}^{\dagger}\right\rangle \right\rangle =\underset{l}{\sum}\left[G_{2}\right]_{k}^{l}/\left(\omega-E_{k}^{l}\right),\\
\varGamma\left(k,\omega\right)\equiv\left\langle \left\langle c_{k+k_{R}\uparrow}|c_{-k-k_{R}\downarrow}\right\rangle \right\rangle =\underset{l}{\sum}\left[\varGamma\right]_{k}^{l}/\left(\omega-E_{k}^{l}\right),\\
S\left(k,\omega\right)\equiv\left\langle \left\langle c_{k-k_{R}\downarrow}|c_{k+k_{R}\uparrow}^{\dagger}\right\rangle \right\rangle =\underset{l}{\sum}\left[S\right]_{k}^{l}/\left(\omega-E_{k}^{l}\right),\\
F_{1}\left(k,\omega\right)\equiv\left\langle \left\langle c_{k+k_{R}\uparrow}|c_{-k+k_{R}\uparrow}\right\rangle \right\rangle =\underset{l}{\sum}\left[F_{1}\right]_{k}^{l}/\left(\omega-E_{k}^{l}\right),\\
F_{2}\left(k,\omega\right)\equiv\left\langle \left\langle c_{k-k_{R}\downarrow}|c_{-k-k_{R}\downarrow}\right\rangle \right\rangle =\underset{l}{\sum}\left[F_{2}\right]_{k}^{l}/\left(\omega-E_{k}^{l}\right),
\end{array}\label{eq:GF}
\end{equation}
where $l=\pm1,\pm2$ labels the four quasi-particle
energy branches $E_{k}^{\left(+1\right)}=-E_{k}^{\left(-1\right)}=U_{k}$
and $E_{k}^{\left(+2\right)}=-E_{k}^{\left(-2\right)}=D_{k}$. The symbols $U_{k}$
and $D_{k}$ are the up- and down-branch quasi-particle
spectra, respectively,
\begin{equation}
U_{k}=\sqrt{E_{k}^{2}+h^{2}+k^{2}\lambda^{2}+2\sqrt{E_{k}^{2}h^{2}+\widetilde{\xi_{k}^{2}}k^{2}\lambda^{2}}},\label{eq:Uk}
\end{equation}
\begin{equation}
D_{k}=\sqrt{E_{k}^{2}+h^{2}+k^{2}\lambda^{2}-2\sqrt{E_{k}^{2}h^{2}+\widetilde{\xi_{k}^{2}}k^{2}\lambda^{2}}},\label{eq:Dk}
\end{equation}
with $\widetilde{\xi_{k}}=\xi_{k}+E_{R}$, $\lambda=k_{R}/m$, $E_{R}=k_{R}^{2}/2m$
and $E_{k}=\sqrt{\widetilde{\xi_{k}^{2}}+\Delta^{2}}$. These single-particle spectra
strongly influence the static and dynamical properties of the ground
state. The expressions of $\left[G_{1}\right]_{k}^{l}$, $\left[G_{2}\right]_{k}^{l}$, $\left[\Gamma\right]_{k}^{l}$, $\left[S\right]_{k}^{l}$, $\left[F_{1}\right]_{k}^{l}$
and $\left[F_{2}\right]_{k}^{l}$ are listed in the appendix.
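As a numerical illustration of Eqs.~\ref{eq:Uk} and \ref{eq:Dk}, the short
Python sketch below evaluates the two branches and locates the global minimum
of $D_{k}$; the values of $\mu$ and $\Delta$ used here are illustrative
placeholders, not the self-consistent solutions:
\begin{verbatim}
import numpy as np

# Units: hbar = 1, k_F = 1, E_F = k_F**2/(2m) = 1, hence m = 1/2.
m, kR = 0.5, 0.75
ER, lam = kR**2 / (2 * m), kR / m    # recoil energy, SOC velocity

def spectra(k, mu, Delta, h):
    # Quasi-particle branches U_k and D_k
    xi_t = k**2 / (2 * m) - mu + ER
    Ek2 = xi_t**2 + Delta**2
    root = np.sqrt(Ek2 * h**2 + xi_t**2 * k**2 * lam**2)
    base = Ek2 + h**2 + k**2 * lam**2
    Uk = np.sqrt(base + 2 * root)
    Dk = np.sqrt(np.maximum(base - 2 * root, 0.0))  # clip rounding
    return Uk, Dk

k = np.linspace(-4, 4, 4001)
for h, mu, Delta in [(0.9, 0.6, 0.6), (1.3, 0.5, 0.3)]:  # placeholders
    Dk = spectra(k, mu, Delta, h)[1]
    print(f"h={h}: global minimum of D_k at k={k[np.argmin(Dk)]:+.2f}")
\end{verbatim}
With such parameters the lower branch has its global minimum at $k=0$ in the
first case and at a non-zero $k$ in the second, illustrating the switch
discussed below.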
Based on the fluctuation-dissipation theorem, it is straightforward to relate
physical quantities to the corresponding Green's functions. For example,
we obtain the density equations
\begin{equation}
n_{1}=\sum_{k}\left\langle c_{k\uparrow}^{\dagger}c_{k\uparrow}\right\rangle =-\frac{1}{\pi}\sum_{k}\int d\omega\frac{{\rm Im}\left[G_{1}\left(k,\omega\right)\right]}{e^{\omega/T}+1},
\end{equation}
\begin{equation}
n_{2}=\sum_{k}\left\langle c_{k\downarrow}^{\dagger}c_{k\downarrow}\right\rangle =-\frac{1}{\pi}\sum_{k}\int d\omega\frac{{\rm Im}\left[G_{2}\left(k,\omega\right)\right]}{e^{\omega/T}+1},
\end{equation}
and the order parameter equation
\begin{equation}
\frac{\Delta}{g_{1D}}=-\sum_{k}\left\langle c_{-k\downarrow}c_{k\uparrow}\right\rangle =\frac{1}{\pi}\sum_{k}\int d\omega\frac{{\rm Im}\left[\varGamma\left(k,\omega\right)\right]}{e^{\omega/T}+1},
\end{equation}
with the Green's functions $G_{1}$, $G_{2}$ and $\varGamma$ of Eq. \ref{eq:GF}
at temperature $T$. By self-consistently solving the above density
and order parameter equations, the chemical potential $\mu$
and the order parameter $\Delta$ can be numerically calculated.
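The structure of this self-consistent loop can be sketched as follows. The two
density stubs below are pure placeholders (the real calculation performs the
momentum sums over the negative-energy spectral weights of $G_{1}$, $G_{2}$ and
$\varGamma$ listed in the appendix), so this illustrates only the iteration
scheme:
\begin{verbatim}
import numpy as np

def total_density(mu, Delta, h):   # stub for n1 + n2 at T = 0
    return 0.8 + 0.3 * mu + 0.1 * Delta

def pair_density(mu, Delta, h):    # stub for sum_k <c_{-k,dn} c_{k,up}>
    return -0.4 * Delta / (1.0 + Delta**2)

def solve_mean_field(n0, g1d, h, mu=0.5, Delta=0.5, mix=0.4, tol=1e-10):
    # mixed fixed-point iteration of the number and gap equations
    for _ in range(100_000):
        mu_new = mu + mix * (n0 - total_density(mu, Delta, h))
        Delta_new = (1 - mix) * Delta \
                    - mix * g1d * pair_density(mu, Delta, h)
        if abs(mu_new - mu) + abs(Delta_new - Delta) < tol:
            return mu_new, Delta_new
        mu, Delta = mu_new, Delta_new
    raise RuntimeError("no convergence")

print(solve_mean_field(n0=1.0, g1d=np.pi, h=0.9))
\end{verbatim}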
\begin{figure}
\includegraphics[scale=0.3]{fig1}\caption{\label{Fig_transition} The free energy in Panel (a),
$E_{{\rm cr}}=\left|h-\sqrt{\left(\mu-E_{R}\right)^{2}+\Delta^{2}}\right|$
in Panel (b), and the chemical potential (blue solid line) and order parameter
(olive dashed line) in Panel (c), as functions of the effective Zeeman field $h$ across
the phase transition between the BCS superfluid (red solid line) and the topological
superfluid (black solid line). A gray dotted line marks the critical
value of the effective Zeeman field $h_{c}=1.135E_{F}$ at $\gamma=\pi$
and $k_{R}=0.75k_{F}$.}
\end{figure}
In the following, we take an interaction strength $\gamma=\pi$ and
a typical experimental value $k_{R}=0.75k_{F}$. As shown in Fig. \ref{Fig_transition},
the system undergoes a phase transition from the BCS superfluid to the topological
superfluid when the effective Zeeman field $h$ is increased continuously past a critical
value $h_{c}=1.135E_{F}$, at which the free energies of the two states are equal (see panel (a)). This is a first-order phase transition,
during which the two states compete with each other, making the chemical potential $\mu$ and the order parameter
$\Delta$ vary discontinuously at $h_{c}$ (see panel (c)). It should
be noticed that the critical Zeeman field $h_{c}$ is larger
than the other critical value of $h$ at which the topological superfluid
first appears and $E_{{\rm cr}}=\left|h-\sqrt{\left(\mu-E_{R}\right)^{2}+\Delta^{2}}\right|$ just touches zero (see panel (b)).
The physical origin of this phase transition can also
be understood from the geometrical structure of the down-branch single-particle spectrum $D_k$. As shown in Fig.
\ref{Fig_uk_spectrum}, the global minimum of
$D_{k}$ switches from $k=0$ (red dotted line) to a
non-zero $k$ (black solid line) as the Zeeman field $h$ is increased. At the critical point ($h_{c}=1.135E_{F}$), there are two possible matter states for the atoms in momentum space: the value of the chemical potential (panel (c) of Fig. \ref{Fig_transition})
can push all atoms into the regime around $k=0$, or into both the $k=0$
and the non-zero $k$ minima. The competition between these two configurations
generates both the BCS and the topological superfluid with the same free energy, and finally drives the system through this phase transition.
Next we discuss the dynamical properties of the system and the numerical methods used to calculate them.
\begin{figure}
\includegraphics[scale=0.3]{fig2}\caption{\label{Fig_uk_spectrum} The down-branch single-particle
spectrum $D_{k}$ at $\gamma=\pi$ and $k_{R}=0.75k_{F}$.}
\end{figure}
\subsection{Response function and random phase approximation}
In the Fermi superfluid, there are four different densities, which
are denoted respectively by $n_{1}=\sum_k\left\langle c_{k\uparrow}^{\dagger}c_{k\uparrow}\right\rangle $,
$n_{2}=\sum_k\left\langle c_{-k\downarrow}^{\dagger}c_{-k\downarrow}\right\rangle $,
$n_{3}=\sum_k\left\langle c_{-k\downarrow}c_{k\uparrow}\right\rangle $ and $n_{4}=\sum_k\left\langle c_{k\uparrow}^{\dagger}c_{-k\downarrow}^{\dagger}\right\rangle $.
Due to the interaction between particles, these densities are closely
coupled with each other: a fluctuation in any one of them
induces fluctuations in the others. This physics plays a significant role in the dynamical excitations
of the system, and also demonstrates the importance of
the terms in the Hamiltonian beyond mean-field theory. The random phase approximation
has been verified to be a feasible way to treat the fluctuation terms
of the Hamiltonian \citep{AndersonPR1958}. Compared with experiments,
it can even give quantitatively reliable predictions for the three-dimensional
Fermi superfluid \citep{Zou2010q,Zou2018l}. Its predictions also qualitatively
agree with quantum Monte Carlo data for the two-dimensional Fermi system
\citep{Zhao2020d}. Here we use the same method
to qualitatively study the dynamical excitations of the 1D SOC Fermi superfluid. Its main idea is introduced in the following.
Following standard linear response theory, a weak external
density perturbation potential $V_{{\rm ext}}=[V_{1},V_{2},V_{3},V_{4}]$,
carrying a specific momentum $q$, is applied to the Fermi superfluid.
The corresponding perturbation Hamiltonian is
$H_{{\rm ext}}=\sum_{kq}\Psi_{k+q}^{\dagger}\left(V_{1}\sigma_{1}+V_{2}\sigma_{2}+V_{3}\sigma_{3}+V_{4}\sigma_{4}\right)\Psi_{k}$.
Here $\Psi_{k}=\left[\begin{array}{cc}
c_{k\uparrow}, & c_{-k\downarrow}^{\dagger}\end{array}\right]^{T}$ is the field operator in the momentum representation. The four matrices $\sigma_{1}=\left(I+\sigma_{z}\right)/2$, $\sigma_{2}=\left(I-\sigma_{z}\right)/2$, $\sigma_{3}=\left(\sigma_{x}-i\sigma_{y}\right)/2$,
and $\sigma_{4}=\left(\sigma_{x}+i\sigma_{y}\right)/2$ are defined in terms of the Pauli
matrices $\sigma_{x,y,z}$ and the unit matrix $I$. This perturbation Hamiltonian $H_{\rm ext}$
induces a fluctuation of all densities, collected in the matrix
\begin{equation}
\rho_{q}=\sum_{k}\left[\begin{array}{c}
n_{kq}^{1}\\
n_{kq}^{2}\\
n_{kq}^{3}\\
n_{kq}^{4}
\end{array}\right]=\sum_{k}\left[\begin{array}{c}
\Psi_{k}^{\dagger}\sigma_{1}\Psi_{k+q}\\
\Psi_{k}^{\dagger}\sigma_{2}\Psi_{k+q}\\
\Psi_{k}^{\dagger}\sigma_{3}\Psi_{k+q}\\
\Psi_{k}^{\dagger}\sigma_{4}\Psi_{k+q}
\end{array}\right].\label{eq:fluc_den}
\end{equation}
These density fluctuations in turn generate a fluctuation Hamiltonian $H_{{\rm sf}}=\sum_{q}\rho_{q}^{\dagger}A_{q}$,
which is usually called the self-consistent dynamical potential \citep{Liu2004c}.
Here
\[
A_{q}=\left[\begin{array}{c}
n_{q}^{2}\\
n_{q}^{1}\\
n_{q}^{3}\\
n_{q}^{4}
\end{array}\right]=g_{1D}\sum_{k}\left[\begin{array}{c}
n_{kq}^{2}\\
n_{kq}^{1}\\
n_{kq}^{3}\\
n_{kq}^{4}
\end{array}\right]
\]
is the strength of the fluctuation potential. Unlike in the
two- or three-dimensional case, the contributions from $n_{q}^{3}$ and
$n_{q}^{4}$ are not divergent, so there is no need to renormalize
the one-dimensional interaction strength $g_{1D}$.
For a weak perturbation,
the amplitude of the density fluctuation $\rho_{q}$ is proportional
to the external potential $V_{\rm ext}$; they are connected to each
other by
\begin{equation}
\rho_{q}=\chi V_{{\rm ext}},\label{eq:resp}
\end{equation}
where $\chi$ is the response function of the system; it contains
rich information about the dynamical excitations, but its direct calculation
is usually quite hard. As discussed above, a feasible
way around this problem is the random phase approximation, which collects the effects of both $V_{{\rm ext}}$ and $V_{{\rm sf}}=M_{I}A_{q}$
into an effective external potential
\begin{equation}
V_{{\rm eff}}=V_{{\rm ext}}+V_{{\rm sf}}.\label{eq:veff}
\end{equation}
The motion of the real gas in the external potential $V_{{\rm ext}}$
is then equivalent to the motion of the mean-field gas in this effective potential
$V_{{\rm eff}}$. So the density fluctuation is connected to the effective
potential $V_{{\rm eff}}$ by
\begin{equation}
\rho_{q}=\chi^{0}V_{{\rm eff}},\label{eq:resp0}
\end{equation}
where $\chi^{0}$ is the response function in the mean-field approximation,
whose calculation is much easier. Finally, with Eqs. \ref{eq:fluc_den},
\ref{eq:resp}, \ref{eq:veff}, and \ref{eq:resp0}, we find that $\chi$
and $\chi^{0}$ are related by
\begin{equation}
\chi=\frac{\chi^{0}}{1-\chi^{0}M_{I}g_{1D}},\label{eq:RPA}
\end{equation}
where
\[
M_{I}=\left[\begin{array}{cccc}
0 & 1 & 0 & 0\\
1 & 0 & 0 & 0\\
0 & 0 & 0 & 1\\
0 & 0 & 1 & 0
\end{array}\right]
\]
is a constant matrix reflecting the coupling of the four different
densities.
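In matrix form, Eq.~\ref{eq:RPA} amounts to a $4\times4$ linear solve at each
$(q,\omega)$. A minimal sketch (with a dummy, non-physical $\chi^{0}$ inserted
only to make the snippet runnable) reads:
\begin{verbatim}
import numpy as np

MI = np.array([[0., 1., 0., 0.],
               [1., 0., 0., 0.],
               [0., 0., 0., 1.],
               [0., 0., 1., 0.]])

def rpa_chi(chi0, g1d):
    # chi = (1 - chi0 MI g)^(-1) chi0, the matrix form of the RPA
    return np.linalg.solve(np.eye(4) - chi0 @ MI * g1d, chi0)

# Dummy placeholder chi0 at one (q, omega); the real chi0 is built
# from the mean-field matrices A and B introduced below.
chi0 = 0.005 * (np.arange(16.0).reshape(4, 4) + 1.0) + 0.002j
chi = rpa_chi(chi0, g1d=np.pi)
\end{verbatim}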
Next we discuss the derivation of the mean-field response function
$\chi^{0}$, which is a $4\times4$ matrix
\begin{equation}
\chi^{0}=\left[\begin{array}{cccc}
\chi_{11}^{0} & \chi_{12}^{0} & \chi_{13}^{0} & \chi_{14}^{0}\\
\chi_{21}^{0} & \chi_{22}^{0} & \chi_{23}^{0} & \chi_{24}^{0}\\
\chi_{31}^{0} & \chi_{32}^{0} & \chi_{33}^{0} & \chi_{34}^{0}\\
\chi_{41}^{0} & \chi_{42}^{0} & \chi_{43}^{0} & \chi_{44}^{0}
\end{array}\right].\label{eq:kai0}
\end{equation}
Here each matrix element is $\chi_{ij}^{0}\left(x_{1},x_{2},\tau,0\right)\equiv-\left\langle \hat{n}_{i}\left(x_{1},\tau\right)\hat{n}_{j}\left(x_{2},0\right)\right\rangle $.
In a uniform system, all response functions depend only
on the relative coordinate $x=x_{1}-x_{2}$ and the imaginary time $\tau$, so a generalized
coordinate $R=\left(x,\tau\right)$ is used in the following. Based
on Wick's theorem, we consider all possible two-operator contraction
terms, which involve the 6 independent Green's functions of Eqs. \ref{eq:GF}. We
find that the mean-field response function can be written as $\chi^{0}=A+B$,
in which $A$ is the part built from the
Green's functions $G_{1}$, $G_{2}$ and $\Gamma$, while $B$ is built from the SOC Green's functions $S$, $F_{1}$
and $F_{2}$. For example, in the spatial and time representation,
\[
\chi_{11}^{0}\left(R\right)\equiv-\left\langle \hat{n}_{1}\left(x_{1},\tau\right)\hat{n}_{1}\left(x_{2},0\right)\right\rangle =A_{11}\left(R\right)+B_{11}\left(R\right)
\]
where $A_{11}\left(R\right)=G_{1}\left(-R\right)G_{1}\left(R\right)$
and $B_{11}\left(R\right)=F_{1}^{\dagger}\left(-R\right)F_{1}\left(R\right)$.
In the ground state ($\Delta=\Delta^{*}$), we find $F_{1}^{\dagger}=F_{1}$.
After Fourier transformation of the Green's functions, and using the identity
$\frac{1}{\beta}\sum_{ip_{n}}\frac{1}{ip_{n}-\varepsilon}\times\frac{1}{ip_{n}+iq_{n}-\varepsilon'}=\frac{f\left(\varepsilon\right)-f\left(\varepsilon'\right)}{iq_{n}+\varepsilon-\varepsilon'}$
($ip_{n}$ and $iq_{n}$ are Matsubara frequencies, and $f(x)$ is the Fermi distribution function), we obtain the expressions
of all matrix elements in the momentum-energy representation
\begin{equation}
\chi^{0}\left(q,\omega\right)=A\left(q,\omega\right)+B\left(q,\omega\right),\label{eq:xab}
\end{equation}
where
\[
A=\left[\begin{array}{cccc}
A_{11}, & A_{12}, & A_{13}, & A_{14}\\
A_{12}, & A_{22}, & A_{23}, & A_{24}\\
A_{14}, & A_{24}, & -A_{12}, & A_{34}\\
A_{13}, & A_{23}, & A_{43}, & -A_{12}
\end{array}\right]
\]
has 9 independent matrix elements, and
\[
B=\left[\begin{array}{cccc}
B_{11}, & B_{12}, & B_{13}, & B_{14}\\
B_{12}, & B_{22}, & B_{23}, & B_{24}\\
B_{14}, & B_{24}, & B_{33}, & B_{34}\\
B_{13}, & B_{23}, & B_{43}, & B_{33}
\end{array}\right]
\]
has 10 independent matrix elements. All expressions of these matrix
elements are listed in the appendix.
\subsection{Dynamic structure factor}
With Eqs. \ref{eq:RPA} and \ref{eq:xab}, we obtain
the density and spin response functions,
\begin{equation}
\begin{array}{c}
\chi_{n}\equiv\chi_{11}+\chi_{22}+\chi_{12}+\chi_{21},\\
\chi_{s}\equiv\chi_{11}+\chi_{22}-\chi_{12}-\chi_{21}.
\end{array}
\end{equation}
The density dynamic structure factor $S_n(q,\omega)$ and the spin one $S_s(q,\omega)$ are connected
to their corresponding response functions by
\begin{equation}
S_{n/s}=-\frac{1}{\pi}\frac{1}{1-e^{-\omega/T}}{\rm Im}\left[\chi_{n/s}\right],
\end{equation}
where $q$ and $\omega$ are the transferred momentum and energy, respectively.
The sum rules of these two dynamic structure factors are discussed in Ref. \cite{He2016d}.
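As a small continuation of the matrix sketch above (the input $\chi$ would come
from Eq.~\ref{eq:RPA}; here an arbitrary complex matrix stands in for it), the
structure factors follow from four elements of the response matrix:
\begin{verbatim}
import numpy as np

def structure_factors(chi, omega, T=0.0):
    # S_n and S_s from the response matrix chi at one (q, omega > 0)
    chi_n = chi[0, 0] + chi[1, 1] + chi[0, 1] + chi[1, 0]
    chi_s = chi[0, 0] + chi[1, 1] - chi[0, 1] - chi[1, 0]
    bose = 1.0 if T == 0 else 1.0 / (1.0 - np.exp(-omega / T))
    return -bose / np.pi * chi_n.imag, -bose / np.pi * chi_s.imag

chi = (np.arange(16.0).reshape(4, 4) + 1j) * 0.01   # placeholder
print(structure_factors(chi, omega=1.0))
\end{verbatim}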
\section{Results}
\begin{figure}
\includegraphics[scale=0.35]{fig3}\caption{\label{Fig_dsf_den} The density dynamic structure factor $S_n(q,\omega)$ at different
Zeeman fields $h=0.9E_{F},h_{c},1.3E_{F}$.}
\end{figure}
\begin{figure}
\includegraphics[scale=0.35]{fig4}\caption{\label{Fig_dsf_spin} The spin dynamic structure factor $S_s(q,\omega)$ at different
Zeeman fields $h=0.9E_{F},h_{c},1.3E_{F}$.}
\end{figure}
In the following discussion, we focus on the interaction strength
$\gamma=\pi$ and the recoil momentum $k_{R}=0.75k_{F}$
at zero temperature. These parameters
are the same as those in Fig. \ref{Fig_transition}. We numerically
calculate the density and spin dynamic structure factors, which are shown
in Fig. \ref{Fig_dsf_den} and Fig. \ref{Fig_dsf_spin}, respectively,
across the phase transition between the BCS superfluid (upper two panels)
and the topological superfluid (lower two panels). We investigate
the full dynamical excitation spectrum at different transferred momenta $q$,
from the low-energy (or momentum) collective excitations to the
high-energy (or momentum) single-particle excitations. The
presence of the SOC effect enriches the dynamical behavior compared with
a conventional Fermi superfluid.
\subsection{Collective phonon and roton-like excitations}
At low transferred energy $\omega$, it is easy to investigate the
collective excitations. Upon continuously increasing the transferred momentum
$q$ from zero, we first see a gapless phonon excitation in the
density dynamic structure factor $S_n(q,\omega)$, shown by the lower red
curve in all four panels of Fig. \ref{Fig_dsf_den}. When the system is in the BCS superfluid
($h\leq h_{c}$, upper two panels), the collective phonon branch
rises monotonically with transferred momentum $q$, and finally merges
into the single-particle excitation continuum at large enough $q$. In the whole BCS regime, the
phonon velocity varies little with the effective Zeeman field
$h$, except in a narrow regime close to the transition where the BCS superfluid becomes a metastable
state and the velocity suddenly drops (red solid and dotted lines of Fig. \ref{Fig_sound}).
The gapless phonon excitation can also be seen in the topological
superfluid (black solid and dotted lines of Fig. \ref{Fig_sound}); its velocity increases monotonically
with the Zeeman field $h$ and finally saturates to a constant value. In the critical
regime $h=h_{c}$, the BCS and topological states have the same free energy.
Although we calculate their dynamic structure factors separately,
the phonon excitation of one state may be influenced by
the other, resulting in a complex excitation behavior (see Fig. \ref{Fig_dsf_den}
and \ref{Fig_dsf_spin}).
\begin{figure}
\includegraphics[scale=0.32]{fig5}\caption{\label{Fig_sound} The sound velocity $c$ as a function of the effective
Zeeman field $h$.}
\end{figure}
Besides the phonon collective excitation, we find a new collective
roton-like excitation appearing only in the topological superfluid. As shown in the lower two panels of both Fig. \ref{Fig_dsf_den} and \ref{Fig_dsf_spin}, this roton-like
excitation is a natural extension of the phonon mode, and it appears as a local minimum of the excitation spectrum at a fixed
momentum $q\simeq2k_{F}$, which coincides with the global minimum of the
down-branch spectrum $D_{k}$ (see the red line of Fig. \ref{Fig_uk_spectrum}).
There is no roton-like excitation in the BCS superfluid, where $q\simeq2k_{F}$
is only a local minimum and the global minimum is located at $q=0$.
These results tell us that the emergence of the roton-like excitation
is closely related to the formation of the global minimum at $q\simeq2k_{F}$,
which is precisely the character of the spectrum $D_k$ in the topological superfluid. These observations suggest that the single-particle physics brought in by the SOC effect plays an important role in the appearance
of the roton-like excitation at a given interaction strength. We have
also checked that the same roton-like mode appears at other interaction
strengths (for example $\gamma=2.5,4.0$) and recoil momenta $k_{R}$,
and that the location of the roton-like excitation is always fixed at
$q\simeq2k_{F}$.
The dynamical behavior of the collective modes is displayed by both the density and spin dynamic structure factors. However, a different feature, related to single-particle excitations, occurs at relatively large transferred
energy $\omega$ when $q$ is small; its physical origin is introduced in the following.
\subsection{Threshold of single-particle excitation spectrum}
When the transferred energy $\omega$ is large enough, Cooper pairs are broken
and separated into free Fermi
atoms. Indeed, a large part of the dynamical excitation in Fig. \ref{Fig_dsf_den}
and \ref{Fig_dsf_spin} is dominated by this pair-breaking effect.
In the density dynamic structure factor $S_{n}$, this effect is usually
most evident at relatively large transferred momentum $q>k_{F}$,
where the collective excitations are strongly suppressed. Differently
from the conventional Fermi superfluid, this single-particle excitation
occupies a large regime in the spin dynamic structure factor $S_{s}$,
even for small and zero transferred momentum $q$. Before analyzing
this single-particle excitation, it is necessary to study the threshold
energy to break a Cooper pair.
\begin{figure}
\includegraphics[scale=0.35]{fig6}\caption{\label{Fig_spectrum} Four kinds of single-particle excitation spectra.
Olive line: $D_{k}\leftrightarrow D_{k+q}.$ Red line: $D_{k}\leftrightarrow U_{k+q}$
and $U_{k}\leftrightarrow D_{k+q}$. Blue line: $U_{k}\leftrightarrow U_{k+q}$.}
\end{figure}
This pair-breaking excitation is related to the two-branch structure of the quasi-particle
spectra $U_{k}$ and $D_{k}$: the two atoms forming a Cooper pair
can come from the same or from different single-particle branches.
This two-branch structure generates much richer single-particle excitations than in the conventional Fermi superfluid, and induces
four possible combinations of Fermi atoms in a Cooper pair, namely
the $DD$, $DU$, $UD$, and $UU$ types. The minimum energy at a certain
momentum $q$ to break a pair is ${\rm min}[D_{k}+D_{k+q}]$, ${\rm min}[D_{k}+U_{k+q}]$,
${\rm min}[U_{k}+D_{k+q}]$ or ${\rm min}[U_{k}+U_{k+q}]$. Also, due
to the possible three-well geometry of the down-branch spectrum $D_{k}$,
there is not only a global minimum energy but also several possible
local minima for breaking a Cooper pair in the single-particle excitations.
These results are shown in Fig. \ref{Fig_spectrum}. The lowest olive
line denotes the $DD$ excitation, whose minimum pair-breaking
energy comes from the down-branch quasi-particle spectrum (${\rm min}[D_{k}+D_{k+q}]$).
Besides the global minimum, it also has two or even three local minima
at some specific transferred momenta $q$, displayed by the olive dotted
lines. In other regimes of $q$, these local minima disappear
since the geometry of the spectrum changes. From the BCS superfluid
($h=0.9E_{F}$) to the topological superfluid ($h=1.3E_{F}$), the
value of the order parameter $\Delta$ monotonically decreases with the effective
Zeeman field $h$ (shown in panel (c) of Fig.\ref{Fig_transition}).
This behaviour makes pair-breaking easier
at large $h$, and generally pushes the olive line lower and lower.
The red line denotes the $DU$
and $UD$ excitations, which overlap with each other. Here the two
atoms in a pair come from different branches of the spectrum. It starts
from ${\rm min}[D_{k}+U_{k+q}]$, whose energy is higher than
that of the $DD$ one. Similar to the $DD$ excitation, there are several possible local
minima in these cross excitations. It should be emphasized that this
$DU$ single-particle excitation at small $q$ displays a much stronger
strength in the spin dynamic structure factor than
in the density dynamic structure factor (see Fig. \ref{Fig_dsf_spin}), which reflects the coupling between the different spin components.
Starting from ${\rm min}[U_{k}+U_{k+q}]$,
the blue line is the $UU$ excitation, which requires the largest
excitation energy. This excitation has a low density of states in the small-$q$
regime in the BCS superfluid, while the topological state enhances its
density of states and displays a relatively stronger signal.
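These thresholds are simple minimizations over the internal momentum $k$. A
sketch, reusing the spectra of the earlier illustration and with the same
caveat that $\mu$ and $\Delta$ are placeholder values rather than the
self-consistent ones:
\begin{verbatim}
import numpy as np

m, kR = 0.5, 0.75                 # units: hbar = k_F = E_F = 1
ER, lam = kR**2 / (2 * m), kR / m

def spectra(k, mu, Delta, h):
    xi_t = k**2 / (2 * m) - mu + ER
    Ek2 = xi_t**2 + Delta**2
    root = np.sqrt(Ek2 * h**2 + xi_t**2 * k**2 * lam**2)
    base = Ek2 + h**2 + k**2 * lam**2
    return np.sqrt(base + 2 * root), \
           np.sqrt(np.maximum(base - 2 * root, 0.0))

def thresholds(q, mu, Delta, h, kmax=20.0, n=40001):
    # min_k of D_k + D_{k+q}, D_k + U_{k+q} and U_k + U_{k+q}
    k = np.linspace(-kmax, kmax, n)
    U, D = spectra(k, mu, Delta, h)
    Uq, Dq = spectra(k + q, mu, Delta, h)
    return (D + Dq).min(), (D + Uq).min(), (U + Uq).min()

for q in (1.0, 2.0, 4.0):
    dd, du, uu = thresholds(q, mu=0.5, Delta=0.3, h=1.3)
    print(f"q = {q} k_F: DD {dd:.2f}, DU {du:.2f}, UU {uu:.2f}")
\end{verbatim}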
All of these critical single-particle excitations lie along
the colored edge curves of Fig.\ref{Fig_dsf_den} and Fig.\ref{Fig_dsf_spin},
and mark the boundary of the single-particle excitation regime. To better understand
the dynamical excitations in these panels, we next discuss
the dynamic structure factor at selected transferred momenta $q$.
\subsection{Dynamic excitation at a constant momentum $q$}
For a relatively large transferred momentum $q\gg k_{F}$, the dynamic
structure factor is dominated by single-particle excitations.
As shown in Fig. \ref{Fig_q4}, we investigate the density and spin
dynamic structure factors at $q=4k_{F}$ in both the BCS and topological
superfluid. In all four panels, we can see two obvious single-particle
excitations ($DD$ and $DU$ type) and a sharp collective phonon excitation.
The threshold energies of the two single-particle excitations
are labeled by the olive and red dash-dot arrows, respectively. Here
the phonon excitation is already mixed with the $DD$ single-particle excitation, which gives a non-zero width to the
peak of the collective mode. Its location lies between the olive and red
arrows, beyond which more and more single-particle excitations appear.
There is no obvious $UU$ excitation signal here; it is drowned
in the background of the other single-particle excitations. At $q=4k_F$, the topological superfluid displays
a relatively stronger $DD$ excitation than the BCS state.
\begin{figure}
\includegraphics[scale=0.35]{fig7}\caption{\label{Fig_q4} The density (gray) and spin (magenta) dynamical structure
factors of the 1D SOC Fermi superfluid at transferred momentum $q=4k_{F}$.}
\end{figure}
At transferred momentum $q=2k_{F}$, the competition between
the collective mode and the single-particle modes is very intense, and we observe
a much richer dynamical excitation spectrum. As shown in Fig. \ref{Fig_q2},
both $S_{n}$ and $S_{s}$ present two sharp delta-like peaks and all
three kinds of single-particle excitations, whose threshold locations
are again labeled by the olive, red and blue arrows, respectively.
At $q=2k_{F}$, the left sharp peak in all four panels lies on the
left side of the olive arrows ($DD$ type excitation). It is in fact a
natural extension of the collective phonon excitation, which is about
to merge into the single-particle excitation continuum. The peak in the topological
superfluid (lower two panels) occurs at a relatively low excitation
energy since the system develops the roton-like collective excitation
discussed above. The right sharp peak is located at the upper red edge in Fig. \ref{Fig_dsf_den}; its physical origin is still an open question, and we argue that it may be a possible
collective Higgs oscillation that has fully merged into the single-particle
excitation continuum \citep{Fan2022p}.
\begin{figure}
\includegraphics[scale=0.35]{fig8}
\caption{\label{Fig_q2} The density (gray) and spin (magenta) dynamical structure
factors of the 1D SOC Fermi superfluid at transferred momentum $q=2k_{F}$.}
\end{figure}
For a much smaller transferred momentum $q=1k_{F}$, the competition
among all the dynamical excitations displayed by the two dynamic structure factors
becomes much more intense, and the energies of all possible
excitations are not far from each other. The results for both the BCS and topological states
are shown in Fig. \ref{Fig_q1}. At $h=0.9E_{F}$ (BCS), we see one
clear phonon excitation around $\omega\simeq1.2E_{F}$, and the other
three kinds of single-particle excitations to its right, whose threshold
energies are again marked by arrows. In this case, the $DU$-type excitation
has two threshold energies: the left red arrow marks the global
minimum of the excitation energy ${\rm min}[D_{k}+U_{k+q}]$, while the right
one comes from a local minimum. Similar physics is found at
$h=h_{c}$ (BCS side), but a high peak ($\omega\simeq2.3E_F$) rises beyond the olive arrows. When the system
enters the topological regime ($h=1.3E_{F}$), this unidentified peak ($\omega\simeq1.9E_F$)
becomes a delta-like excitation in a narrow energy window
(see also the lower-right panel of Fig. \ref{Fig_dsf_den}). This peak appears to be different from the unidentified one discussed above. It may be generated by the competition of the two collective modes of the two different states, and we argue that it is a remnant of the collective mode of the metastable state. As to the single-particle excitations, three minima of the $DD$ type are also found
in this case; their positions are marked by the three olive arrows,
each of which induces a regular oscillation in the curve of the
dynamic structure factor.
\begin{figure}
\includegraphics[scale=0.35]{fig9}\caption{\label{Fig_q1} The density (blue) and spin (red) dynamical structure
factors of the 1D SOC Fermi superfluid at transferred momentum $q=1k_{F}$.}
\end{figure}
\section{Conclusions and outlook}
In summary, we numerically calculate the density and spin dynamic
structure factors of the 1D Raman-SOC Fermi superfluid with the random phase
approximation across the phase transition between the BCS and topological
superfluid. The dynamic structure factor presents rich single-particle
excitations and collective modes. Due to the two-branch structure of
the single-particle spectrum, there are three kinds of single-particle
excitation, namely the $DD$, $DU$ ($UD$) and $UU$ excitations. We also
calculate their respective threshold energies to break a Cooper pair. Among these
single-particle excitations, the $DU$ one contributes substantially only
to the spin dynamic structure factor at small transferred momentum,
which comes from the coupling between spin and orbital motion.
As to the collective excitations, there is an interesting roton-like
excitation at $k\simeq2k_{F}$ when the system enters the topological
state. The generation of this roton-like excitation is due to the
switch of the global minimum of the single-particle spectrum $D_{k}$ from
$k=0$ to $k\simeq2k_{F}$. Similar physics is also found
at other interaction strengths $\gamma$ and recoil momenta
$k_{R}$. There are also some unidentified quasi-delta-like excitations when
$q$ lies between $k_{F}$ and $2k_{F}$, whose physical origin is worth clarifying in
future research.
\section{Acknowledgements}
We are grateful for fruitful discussions with Hui Hu, Wei Yi and Wei
Zhang. This research was supported by the National Natural Science
Foundation of China, Grants No. 11804177 (P.Z.), No. 11547034 (H.Z.),
No. 11974384 (S.-G.P.).
\section{Appendix}
In this appendix, we list the expressions of the 6 independent Green's
functions and of the mean-field response function $\chi^{0}=A+B$.
$G_{1}\left(k,\omega\right)=\sum_{l}\left[G_{1}\right]_{k}^{l}/\left(\omega-E_{k}^{l}\right),$
with
\[
\begin{array}{c}
\left[G_{1}\right]_{k}^{1}=+\frac{U_{k}^{2}-\xi_{-}^{2}-h^{2}-\Delta^{2}}{2\left(U_{k}^{2}-D_{k}^{2}\right)}+\frac{\xi_{+}U_{k}^{2}-\xi_{+}\xi_{-}^{2}+\xi_{-}h^{2}-\xi_{+}\Delta^{2}}{2U_{k}\left(U_{k}^{2}-D_{k}^{2}\right)},\\
\left[G_{1}\right]_{k}^{2}=+\frac{U_{k}^{2}-\xi_{-}^{2}-h^{2}-\Delta^{2}}{2\left(U_{k}^{2}-D_{k}^{2}\right)}-\frac{\xi_{+}U_{k}^{2}-\xi_{+}\xi_{-}^{2}+\xi_{-}h^{2}-\xi_{+}\Delta^{2}}{2U_{k}\left(U_{k}^{2}-D_{k}^{2}\right)},\\
\left[G_{1}\right]_{k}^{3}=-\frac{D_{k}^{2}-\xi_{-}^{2}-h^{2}-\Delta^{2}}{2\left(U_{k}^{2}-D_{k}^{2}\right)}-\frac{\xi_{+}D_{k}^{2}-\xi_{+}\xi_{-}^{2}+\xi_{-}h^{2}-\xi_{+}\Delta^{2}}{2D_{k}\left(U_{k}^{2}-D_{k}^{2}\right)},\\
\left[G_{1}\right]_{k}^{4}=-\frac{D_{k}^{2}-\xi_{-}^{2}-h^{2}-\Delta^{2}}{2\left(U_{k}^{2}-D_{k}^{2}\right)}+\frac{\xi_{+}D_{k}^{2}-\xi_{+}\xi_{-}^{2}+\xi_{-}h^{2}-\xi_{+}\Delta^{2}}{2D_{k}\left(U_{k}^{2}-D_{k}^{2}\right)},
\end{array}
\]
where $\xi_{\pm}=\left(k\pm k_{R}\right)^{2}/2m-\mu$. $G_{2}\left(k,\omega\right)=\sum_{l}\left[G_{2}\right]_{k}^{l}/\left(\omega-E_{k}^{l}\right),$
with
\[
\begin{array}{c}
\left[G_{2}\right]_{k}^{1}=+\frac{U_{k}^{2}-\xi_{+}^{2}-h^{2}-\Delta^{2}}{2\left(U_{k}^{2}-D_{k}^{2}\right)}+\frac{\xi_{-}U_{k}^{2}-\xi_{-}\xi_{+}^{2}+\xi_{+}h^{2}-\xi_{-}\Delta^{2}}{2U_{k}\left(U_{k}^{2}-D_{k}^{2}\right)},\\
\left[G_{2}\right]_{k}^{2}=+\frac{U_{k}^{2}-\xi_{+}^{2}-h^{2}-\Delta^{2}}{2\left(U_{k}^{2}-D_{k}^{2}\right)}-\frac{\xi_{-}U_{k}^{2}-\xi_{-}\xi_{+}^{2}+\xi_{+}h^{2}-\xi_{-}\Delta^{2}}{2U_{k}\left(U_{k}^{2}-D_{k}^{2}\right)},\\
\left[G_{2}\right]_{k}^{3}=-\frac{D_{k}^{2}-\xi_{+}^{2}-h^{2}-\Delta^{2}}{2\left(U_{k}^{2}-D_{k}^{2}\right)}-\frac{\xi_{-}D_{k}^{2}-\xi_{-}\xi_{+}^{2}+\xi_{+}h^{2}-\xi_{-}\Delta^{2}}{2D_{k}\left(U_{k}^{2}-D_{k}^{2}\right)},\\
\left[G_{2}\right]_{k}^{4}=-\frac{D_{k}^{2}-\xi_{+}^{2}-h^{2}-\Delta^{2}}{2\left(U_{k}^{2}-D_{k}^{2}\right)}+\frac{\xi_{-}D_{k}^{2}-\xi_{-}\xi_{+}^{2}+\xi_{+}h^{2}-\xi_{-}\Delta^{2}}{2D_{k}\left(U_{k}^{2}-D_{k}^{2}\right)}.
\end{array}
\]
$\varGamma\left(k,\omega\right)=\sum_{l}\left[\varGamma\right]_{k}^{l}/\left(\omega-E_{k}^{l}\right),$
with
\[
\begin{array}{c}
\left[\varGamma\right]_{k}^{1}=-\left[\varGamma\right]_{k}^{2}=-\frac{\Delta\left[U_{k}^{2}-\left(\xi_{-}^{2}-h^{2}+\Delta^{2}\right)\right]}{2U_{k}\left(U_{k}^{2}-D_{k}^{2}\right)},\\
\left[\varGamma\right]_{k}^{3}=-\left[\varGamma\right]_{k}^{4}=+\frac{\Delta\left[D_{k}^{2}-\left(\xi_{-}^{2}-h^{2}+\Delta^{2}\right)\right]}{2D_{k}\left(U_{k}^{2}-D_{k}^{2}\right)}.
\end{array}
\]
$S\left(k,\omega\right)=\sum_{l}\left[S\right]_{k}^{l}/\left(\omega-E_{k}^{l}\right),$ with
\[
\begin{array}{c}
\left[S\right]_{k}^{1}=h\left[-\frac{\xi_{+}+\xi_{-}}{2\left(U_{k}^{2}-D_{k}^{2}\right)}-\frac{U_{k}^{2}+\xi_{+}\xi_{-}-h^{2}+\Delta^{2}}{2U_{k}\left(U_{k}^{2}-D_{k}^{2}\right)}\right],\\
\left[S\right]_{k}^{2}=h\left[-\frac{\xi_{+}+\xi_{-}}{2\left(U_{k}^{2}-D_{k}^{2}\right)}+\frac{U_{k}^{2}+\xi_{+}\xi_{-}-h^{2}+\Delta^{2}}{2U_{k}\left(U_{k}^{2}-D_{k}^{2}\right)}\right],\\
\left[S\right]_{k}^{3}=h\left[+\frac{\xi_{+}+\xi_{-}}{2\left(U_{k}^{2}-D_{k}^{2}\right)}+\frac{D_{k}^{2}+\xi_{+}\xi_{-}-h^{2}+\Delta^{2}}{2D_{k}\left(U_{k}^{2}-D_{k}^{2}\right)}\right],\\
\left[S\right]_{k}^{4}=h\left[+\frac{\xi_{+}+\xi_{-}}{2\left(U_{k}^{2}-D_{k}^{2}\right)}-\frac{D_{k}^{2}+\xi_{+}\xi_{-}-h^{2}+\Delta^{2}}{2D_{k}\left(U_{k}^{2}-D_{k}^{2}\right)}\right].
\end{array}
\]
$F_{1}\left(k,\omega\right)=\sum_{l}\left[F_{1}\right]_{k}^{l}/\left(\omega-E_{k}^{l}\right),$
with
\[
\begin{array}{c}
\left[F_{1}\right]_{k}^{1}=-\frac{\Delta h\left(2U_{k}+\xi_{+}-\xi_{-}\right)}{2U_{k}\left(U_{k}^{2}-D_{k}^{2}\right)},\\
\left[F_{1}\right]_{k}^{2}=-\frac{\Delta h\left(2U_{k}-\xi_{+}+\xi_{-}\right)}{2U_{k}\left(U_{k}^{2}-D_{k}^{2}\right)},\\
\left[F_{1}\right]_{k}^{3}=+\frac{\Delta h\left(2D_{k}+\xi_{+}-\xi_{-}\right)}{2D_{k}\left(U_{k}^{2}-D_{k}^{2}\right)},\\
\left[F_{1}\right]_{k}^{4}=+\frac{\Delta h\left(2D_{k}-\xi_{+}+\xi_{-}\right)}{2D_{k}\left(U_{k}^{2}-D_{k}^{2}\right)}.
\end{array}
\]
$F_{2}\left(k,\omega\right)=\sum_{l}\left[F_{2}\right]_{k}^{l}/\left(\omega-E_{k}^{l}\right),$
with
\[
\begin{array}{c}
\left[F_{2}\right]_{k}^{1}=+\frac{\Delta h\left(2U_{k}-\xi_{+}+\xi_{-}\right)}{2U_{k}\left(U_{k}^{2}-D_{k}^{2}\right)},\\
\left[F_{2}\right]_{k}^{2}=+\frac{\Delta h\left(2U_{k}+\xi_{+}-\xi_{-}\right)}{2U_{k}\left(U_{k}^{2}-D_{k}^{2}\right)},\\
\left[F_{2}\right]_{k}^{3}=-\frac{\Delta h\left(2D_{k}-\xi_{+}+\xi_{-}\right)}{2D_{k}\left(U_{k}^{2}-D_{k}^{2}\right)},\\
\left[F_{2}\right]_{k}^{4}=-\frac{\Delta h\left(2D_{k}+\xi_{+}-\xi_{-}\right)}{2D_{k}\left(U_{k}^{2}-D_{k}^{2}\right)}.
\end{array}
\]
The expressions of all 9 independent matrix elements of the mean-field
response function $A$ are
$A_{11}=+\underset{pll'}{\sum}\left[G_{1}\right]_{p}^{l}\left[G_{1}\right]_{p+q}^{l'}\frac{f\left(E_{p}^{l}\right)-f\left(E_{p+q}^{l'}\right)}{i\omega_{n}+E_{p}^{l}-E_{p+q}^{l'}},$
$A_{12}=-\underset{pll'}{\sum}\left[\varGamma\right]_{p}^{l}\left[\varGamma\right]_{p+q}^{l'}\frac{f\left(E_{p}^{l}\right)-f\left(E_{p+q}^{l'}\right)}{i\omega_{n}+E_{p}^{l}-E_{p+q}^{l'}},$
$A_{13}=+\underset{pll'}{\sum}\left[G_{1}\right]_{p}^{l}\left[\varGamma\right]_{p+q}^{l'}\frac{f\left(E_{p}^{l}\right)-f\left(E_{p+q}^{l'}\right)}{i\omega_{n}+E_{p}^{l}-E_{p+q}^{l'}},$
$A_{14}=+\underset{pll'}{\sum}\left[\varGamma\right]_{p}^{l}\left[G_{1}\right]_{p+q}^{l'}\frac{f\left(E_{p}^{l}\right)-f\left(E_{p+q}^{l'}\right)}{i\omega_{n}+E_{p}^{l}-E_{p+q}^{l'}},$
$A_{22}=+\underset{pll'}{\sum}\left[G_{2}\right]_{p}^{l}\left[G_{2}\right]_{p+q}^{l'}\frac{f\left(E_{p}^{l}\right)-f\left(E_{p+q}^{l'}\right)}{i\omega_{n}+E_{p}^{l}-E_{p+q}^{l'}},$
$A_{23}=-\underset{pll'}{\sum}\left[\varGamma\right]_{p}^{l}\left[G_{1}\right]_{p+q}^{l'}\frac{f\left(E_{p}^{l}\right)-f\left(E_{p+q}^{l'}\right)}{i\omega_{n}+E_{p}^{l}-E_{p+q}^{l'}},$
$A_{24}=-\underset{pll'}{\sum}\left[G_{1}\right]_{p}^{-l}\left[\varGamma\right]_{p+q}^{l'}\frac{f\left(E_{p}^{l}\right)-f\left(E_{p+q}^{l'}\right)}{i\omega_{n}+E_{p}^{l}-E_{p+q}^{l'}},$
$A_{34}=+\underset{pll'}{\sum}\left[G_{2}\right]_{p}^{-l}\left[G_{2}\right]_{p+q}^{l'}\frac{f\left(E_{p}^{l}\right)-f\left(E_{p+q}^{l'}\right)}{i\omega_{n}+E_{p}^{l}-E_{p+q}^{l'}},$
$A_{43}=+\underset{pll'}{\sum}\left[G_{1}\right]_{p}^{l}\left[G_{1}\right]_{p+q}^{-l'}\frac{f\left(E_{p}^{l}\right)-f\left(E_{p+q}^{l'}\right)}{i\omega_{n}+E_{p}^{l}-E_{p+q}^{l'}},$
where $f\left(x\right)=1/\left(e^{x/k_{B}T}+1\right)$ is the Fermi-Dirac
distribution function. The expressions of the 10 independent matrix elements
of the mean-field response function $B$ are
$B_{11}=-\underset{pll'}{\sum}\left[F_{1}\right]_{p}^{l}\left[F_{1}\right]_{p+q}^{l'}\frac{f\left(E_{p}^{l}\right)-f\left(E_{p+q}^{l'}\right)}{i\omega_{n}+E_{p}^{l}-E_{p+q}^{l'}},$
$B_{12}=+\underset{pll'}{\sum}\left[S\right]_{p}^{l}\left[S\right]_{p+q}^{l'}\frac{f\left(E_{p}^{l}\right)-f\left(E_{p+q}^{l'}\right)}{i\omega_{n}+E_{p}^{l}-E_{p+q}^{l'}},$
$B_{13}=-\underset{pll'}{\sum}\left[S\right]_{p}^{l}\left[F_{1}\right]_{p+q}^{l'}\frac{f\left(E_{p}^{l}\right)-f\left(E_{p+q}^{l'}\right)}{i\omega_{n}+E_{p}^{l}-E_{p+q}^{l'}},$
$B_{14}=-\underset{pll'}{\sum}\left[F_{1}\right]_{p}^{l}\left[S\right]_{p+q}^{l'}\frac{f\left(E_{p}^{l}\right)-f\left(E_{p+q}^{l'}\right)}{i\omega_{n}+E_{p}^{l}-E_{p+q}^{l'}},$
$B_{22}=-\underset{pll'}{\sum}\left[F_{2}\right]_{p}^{l}\left[F_{2}\right]_{p+q}^{l'}\frac{f\left(E_{p}^{l}\right)-f\left(E_{p+q}^{l'}\right)}{i\omega_{n}+E_{p}^{l}-E_{p+q}^{l'}},$
$B_{23}=+\underset{pll'}{\sum}\left[S\right]_{p}^{l}\left[F_{2}\right]_{p+q}^{l'}\frac{f\left(E_{p}^{l}\right)-f\left(E_{p+q}^{l'}\right)}{i\omega_{n}+E_{p}^{l}-E_{p+q}^{l'}},$
$B_{24}=+\underset{pll'}{\sum}\left[F_{2}\right]_{p}^{l}\left[S\right]_{p+q}^{l'}\frac{f\left(E_{p}^{l}\right)-f\left(E_{p+q}^{l'}\right)}{i\omega_{n}+E_{p}^{l}-E_{p+q}^{l'}},$
$B_{33}=-\underset{pll'}{\sum}\left[F_{2}\right]_{p}^{l}\left[F_{1}\right]_{p+q}^{l'}\frac{f\left(E_{p}^{l}\right)-f\left(E_{p+q}^{l'}\right)}{i\omega_{n}+E_{p}^{l}-E_{p+q}^{l'}},$
$B_{34}=-\underset{pll'}{\sum}\left[S\right]_{p}^{-l}\left[S\right]_{p+q}^{l'}\frac{f\left(E_{p}^{l}\right)-f\left(E_{p+q}^{l'}\right)}{i\omega_{n}+E_{p}^{l}-E_{p+q}^{l'}},$
$B_{43}=-\underset{pll'}{\sum}\left[S\right]_{p}^{l}\left[S\right]_{p+q}^{-l'}\frac{f\left(E_{p}^{l}\right)-f\left(E_{p+q}^{l'}\right)}{i\omega_{n}+E_{p}^{l}-E_{p+q}^{l'}}.$
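As an illustration of how these spectral decompositions enter a numerical evaluation, the Python sketch below assembles the matrix element $A_{11}$ on a discrete momentum grid. This is only a minimal, assumption-laden sketch, not the production code behind the figures: the quasiparticle energies $E_{k}^{l}$ and weights $\left[G_{1}\right]_{k}^{l}$ are taken as precomputed input arrays from the mean-field solution, the Matsubara frequency is analytically continued as $i\omega_{n}\rightarrow\omega+i\eta$ with a small broadening $\eta$, the shift $p\rightarrow p+q$ is implemented as a periodic grid shift, and the overall normalization convention is left open.
\begin{verbatim}
import numpy as np

def fermi(E, T, kB=1.0):
    # Fermi-Dirac distribution f(E) = 1/(exp(E/kB T) + 1); assumes T > 0
    return 1.0 / (np.exp(E / (kB * T)) + 1.0)

def A11(q_idx, omega, E, G1, T, eta=1e-3):
    # E, G1: arrays of shape (4, M) holding E_k^l and [G_1]_k^l on a
    # periodic momentum grid of M points; q_idx shifts p -> p + q.
    M = E.shape[1]
    chi = 0.0 + 0.0j
    for l in range(4):
        for lp in range(4):
            Eq = np.roll(E[lp], -q_idx)      # E_{p+q}^{l'}
            Gq = np.roll(G1[lp], -q_idx)     # [G_1]_{p+q}^{l'}
            num = fermi(E[l], T) - fermi(Eq, T)
            den = omega + 1j * eta + E[l] - Eq
            chi += np.sum(G1[l] * Gq * num / den)
    return chi / M
\end{verbatim}
The remaining elements of $A$ and $B$ follow by substituting the corresponding pairs of weights listed above.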
\section*{S1 - Exact Computation of $\bm{\left<R_g^2\right>}$}
In the present section, we evaluate the mean squared radius of gyration, extending a method that has been used,
for example by Rubinstein and Colby \cite{rubinstein}, for the calculation
of $\left<R_g^2\right>$ of an unperturbed Freely-Jointed Chain.
The radius of gyration can be written as
\begin{equation}
\left<R_g^2\right>=\left<\frac{1}{2(N+1)^2}\sum_{i,j}({\bf R}_i-{\bf R}_j)^2\right> =
\frac{1}{(N+1)^2}\sum_{j>i}\left<({\bf R}_i-{\bf R}_j)^2\right>\ .
\end{equation}
The term inside the brackets is the squared end-to-end vector of the subchain delimited by the monomers at positions $i$ and $j$.
Implementing the exact formula for $\left<\bm{R_e}^2\right>$ reported in the main text, we easily find
\begin{equation}
\left<R_g^2\right>=
\frac{1}{(N+1)^2}\sum_{i=0}^N\sum_{j=i}^N \left[(j-i)b^2\left(1-\mathcal{L}^2\right)+(j-i)^2b^2\mathcal{L}^2\right]\simeq
\frac{1}{6}Nb^2+\frac{1}{12}N^2b^2\mathcal{L}^2\,,
\label{eqn:SI_Rg}
\end{equation}
where we have neglected all the terms which are sublinear in $N$.
The contribution involving only the $x$ and $y$ coordinates can be explicitly computed in a similar fashion. Denoting
the position vector of the $i$-th monomer as ${\bf R}_i\equiv(X_i,Y_i,Z_i)$
\begin{equation}
\frac{1}{(N+1)^2}\sum_{j>i}\left<\left(X_j-X_i\right)^2\right>+\left<\left(Y_j-Y_i\right)^2\right>=
\frac{2}{(N+1)^2}\sum_{j>i}\left<\left(X_j-X_i\right)^2\right>\,,
\end{equation}
where we substituted $\left<\left(Y_j-Y_i\right)^2\right>=\left<\left(X_j-X_i\right)^2\right>$ because
of the cylindrical symmetry of the system.
By means of the exact distribution of the end-to-end vector reported in the main text, we finally find
\begin{equation}
\frac{2}{(N+1)^2} \sum_{j>i}\left<\left(X_j-X_i\right)^2\right>=
\frac{2}{(N+1)^2}\sum_{i=0}^N\sum_{j=i}^N (j-i)b^2g\simeq\frac{1}{3}Nb^2g\,.
\label{eqn:SI_Rgperp}
\end{equation}
The contribution involving the $z$ coordinate can be computed by following the same strategy, yielding as
a final result
\begin{equation}
\frac{1}{(N+1)^2} \sum_{j>i}\left<\left(Z_j-Z_i\right)^2\right>\simeq
\frac{1}{6}Nb^2a+\frac{1}{6}Nb^2\mathcal{L}^2+\frac{1}{12}N^2b^2\mathcal{L}^2=
\frac{1}{6}Nb^2(1-2g)+\frac{1}{12}N^2b^2\mathcal{L}^2\,.
\label{eqn:SI_Rgz}
\end{equation}
Naturally, summing Eq. (\ref{eqn:SI_Rgperp}) and Eq. (\ref{eqn:SI_Rgz}) one retrieves the formula for the
radius of gyration reported in Eq. (\ref{eqn:SI_Rg}).
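The truncation performed in Eq. (\ref{eqn:SI_Rg}) can also be verified symbolically. The following short Python/sympy check (an illustration added here, not part of the original derivation; the symbol \texttt{L} stands for the value of the Langevin factor $\mathcal{L}$) carries out the double sum exactly and extracts the large-$N$ coefficients:
\begin{verbatim}
import sympy as sp

N, b, L = sp.symbols('N b L', positive=True)
i, j = sp.symbols('i j', integer=True, nonnegative=True)

term = (j - i)*b**2*(1 - L**2) + (j - i)**2*b**2*L**2
Rg2 = sp.simplify(sp.summation(sp.summation(term, (j, i, N)),
                               (i, 0, N)) / (N + 1)**2)

c2 = sp.limit(Rg2 / N**2, N, sp.oo)           # quadratic coefficient
c1 = sp.limit((Rg2 - c2*N**2) / N, N, sp.oo)  # linear coefficient
print(c2)   # b**2*L**2/12
print(c1)   # b**2/6
\end{verbatim}
Note that the terms linear in $N$ and proportional to $\mathcal{L}^{2}$ cancel between the two contributions, which is why the linear coefficient is $b^{2}/6$ rather than $b^{2}(1-\mathcal{L}^{2})/6$.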
\section*{S2 - Details of the Monte Carlo simulations}
The main features of the shape of a stretched FJC
were investigated by means of both analytical computations and MC simulations.
The latter were performed extracting the orientation of each tangent vector
directly from the distribution $p(\bm{r})$, which we report here for convenience:
\begin{equation}
p(\bm{r})=\frac{\beta fb}{4\pi b^2\sinh(\beta fb)}\exp(\beta\bm{f}\cdot\bm{r})\delta\left(\left|\bm{r}\right|-b\right)\,.
\label{eqn:prob_single_step}
\end{equation}
In more detail, thanks to the cylindrical symmetry of the problem,
the azimuthal angle $\varphi$ can simply be drawn uniformly in
the range $[0,2\pi]$. As for the polar angle $\vartheta$, a small workaround
is needed in order to map its distribution onto uniform sampling.
Let $x$ be a random number uniformly distributed in the range $[0,1]$.
By construction, the infinitesimal probability to find a number in the
interval $[x,x+dx]$ is simply given by $dx$. As for the angle $\vartheta$,
by considering only the polar contribution to equation (\ref{eqn:prob_single_step})
and remembering that $\gamma\equiv\beta fb$,
we easily find for the distribution of its cosine
\begin{equation}
\tilde{p}(\cos\vartheta)=\frac{\gamma}{2\sinh\gamma}e^{\gamma\cos\vartheta}\,.
\end{equation}
Since the mapping has to preserve the infinitesimal probability of corresponding
values of $\vartheta$ and $x$, the following condition has to be satisfied:
\begin{equation}
\tilde{p}(\cos\vartheta)d\cos\vartheta=dx\,.
\end{equation}
Integrating both sides, we thus get
\begin{equation}
\frac{1}{2\sinh \gamma}\left[e^{\gamma}-e^{\gamma\cos\vartheta}\right]=x\,,
\end{equation}
where the integration constant was fixed by imposing $\vartheta(x=0)=0$.
Therefore, for each step the polar angle $\vartheta$ was computed by inverting
\begin{equation}
\cos\vartheta=\frac{\log(e^\gamma-2x\sinh \gamma)}{\gamma}\,.
\end{equation}
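For concreteness, a minimal Python implementation of this sampling scheme could look as follows (an illustrative sketch, not the code actually used for the results: the sample size and the consistency check are arbitrary choices, and for very large $\gamma$ the exponentials should be rewritten in a numerically stable form):
\begin{verbatim}
import numpy as np

def sample_bonds(n, gamma, b=1.0, seed=None):
    # draw n bond vectors from p(r) by inverse-transform sampling
    rng = np.random.default_rng(seed)
    x = rng.random(n)
    cos_t = np.log(np.exp(gamma) - 2.0*x*np.sinh(gamma)) / gamma
    sin_t = np.sqrt(1.0 - cos_t**2)
    phi = 2.0*np.pi*rng.random(n)
    return b*np.column_stack((sin_t*np.cos(phi),
                              sin_t*np.sin(phi), cos_t))

# consistency check: <cos(theta)> equals coth(gamma) - 1/gamma
g = 2.0
r = sample_bonds(10**5, g)
print(r[:, 2].mean(), 1.0/np.tanh(g) - 1.0/g)
\end{verbatim}
A chain configuration is then obtained by cumulatively summing the sampled bond vectors.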
For each value of $N$ and $\gamma$, the mean values of the several
quantities considered in the main text
are obtained by averaging over $10^4$
different realizations. The statistical error is estimated from the standard deviation of
the results, normalized by the square root of the number of realizations; when not shown, it is always smaller than the symbol size in the figures.
\section*{S3 - Analysis of subleading terms in $\bf{\lambda_1}$}
In the present section we focus on the various terms contributing to $\lambda_1$ in the large-force regime.
In section S1 we computed the contributions to the radius of gyration coming from
the $xy$ projection (Eq. (\ref{eqn:SI_Rgperp})) and the $z$ component (Eq. (\ref{eqn:SI_Rgz}))
of the FJC.
For $\gamma\gg1$, $\lambda_1$ is expected to give the leading contribution to
the $z$ component as well as to play a
significant role in quantitatively determining the $xy$ projection.
Therefore, we predict the following functional form:
\begin{equation}
\left<\lambda_1\right>=\frac{1}{6}Nb^2(1-2g)+\frac{1}{12}N^2b^2\mathcal{L}^2+c_1Nb^2g\,.
\label{eqn:SI_lambda1}
\end{equation}
Starting from the MC data reported in Fig. 3 top in the main text, we thus considered the combination
\begin{equation}
\lambda_1^{\mbox{\tiny sub}}\equiv \left<\lambda_1\right>-\frac{1}{6}Nb^2(1-2g)-\frac{1}{12}N^2b^2\mathcal{L}^2\,.
\end{equation}
According to Eq. (\ref{eqn:SI_lambda1}), the following formula should thus hold
\begin{equation}
\frac{\lambda_1^{\mbox{\tiny sub}}}{Nb^2}=c_1g\,.
\label{eqn:SI_lambda1_sub}
\end{equation}
As we show in Fig.\ref{fig:lambda1_sub}, the MC data collapse onto a universal
curve when $\lambda_1^{\mbox{\tiny sub}}$ is normalized by the chain size $N$. Moreover,
by tuning the numeric constant $c_1$ they are well described by the function $g$, as expected from
Eq. (\ref{eqn:SI_lambda1_sub}). The optimum value of the constant is $c_1=0.204\pm 0.001$, in perfect
agreement with the result found in the main text starting from the fits of $\lambda_2$ and $\lambda_3$.
\section*{S4 - Exact Computation of Asphericity}
In the case of the asphericity the idea is the same as for the radius of gyration, but a larger number of terms must be evaluated.
We first rewrite the formula in the following way:
\begin{equation}\label{defas}
\left<A\right> \ = \ \frac{3\left<Tr\left[\mathcal{T}^2\right]\right>}{2\left<\left(Tr\left[\mathcal{T}\right]\right)^2\right>}-\frac{1}{2} \ \equiv
\ \frac{3\sum_{\alpha=1}^{3}\sum_{\beta=1}^{3}\left<\mathcal{T}_{\alpha\beta}\mathcal{T}_{\beta\alpha}\right>}{2\sum_{\alpha=1}^{3}\sum_{\beta=1}^{3}\left<\mathcal{T}_{\alpha\alpha}\mathcal{T}_{\beta\beta}\right>}-\frac{1}{2} \ ,
\end{equation}
where we are using Greek letters as labels for spatial coordinates ($x,y,z$).
We have to find the values of two terms, that we write more explicitly:
\begin{equation}
\left<\mathcal{T}_{\alpha\beta}\mathcal{T}_{\beta\alpha}\right> = \frac{1}{N^4}\sum_{i=0}^{N}\sum_{j=i}^{N}\sum_{k=0}^{N}\sum_{l=k}^{N}
\left<\left(\alpha_{j}-\alpha_{i}\right)\left(\beta_{j}-\beta_{i}\right)
\left(\alpha_{l}-\alpha_{k}\right)\left(\beta_{l}-\beta_{k}\right)\right> \ ,
\end{equation}
\begin{equation}
\left<\mathcal{T}_{\alpha\alpha}\mathcal{T}_{\beta\beta}\right> \ = \ \frac{1}{N^4}\sum_{i=0}^{N}\sum_{j=i}^{N}\sum_{k=0}^{N}\sum_{l=k}^{N}
\left<\left(\alpha_{j}-\alpha_{i}\right)^{2}\left(\beta_{l}-\beta_{k}\right)^{2}\right> \ .
\end{equation}\vspace{0.4cm}\\
Since the tensor is symmetric, only six terms are independent: $\mathcal{T}_{xx}^2$, $\mathcal{T}_{xy}^2$, $\mathcal{T}_{xz}^2$, $\mathcal{T}_{zz}^2$, $\mathcal{T}_{xx}\mathcal{T}_{yy}$,
$\mathcal{T}_{xx}\mathcal{T}_{zz}$.
The most complicated are the ones with equal indices, $\mathcal{T}_{xx}^2$ and $\mathcal{T}_{zz}^2$.
We will calculate $\mathcal{T}_{zz}^2$. All the other terms can be evaluated in a similar way. We must treat separately three different cases:\vspace{0.4cm}\\
\fbox{$I)$ \ $i<j<k<l$}\vspace{0.8cm}\\
\begin{figure}[H]
\centering
\includegraphics[scale=0.15]{diagram3.png}
\end{figure}
In this case there is no overlap between the intervals $\left(i,j\right)$ and $\left(k,l\right)$ and the mean of a product becomes the product of means:\vspace{0.4cm}\\
\begin{align}
[I] &= \ 2\frac{1}{N^4}\sum_{i=0}^{N}\sum_{j=i}^{N}\sum_{k=j}^{N}\sum_{l=k}^{N}\left<\left(z_{j}-z_{i}\right)^{2}\left(z_{l}-z_{k}\right)^{2}\right> = \\
&= 2\frac{1}{N^4}\sum_{i=0}^{N}\sum_{j=i}^{N}\sum_{k=j}^{N}\sum_{l=k}^{N}\left<\left(z_{j}-z_{i}\right)^{2}\right>\left<\left(z_{l}-z_{k}\right)^{2}\right> \ = \\
&= 2\frac{1}{N^4}\sum_{i=0}^{N}\sum_{j=i}^{N}\left[(j-i)b^2a+{\cal L}^2(j-i)^2b^2\right]\sum_{k=j}^{N}\sum_{l=k}^{N}\left[(l-k)b^2a+{\cal L}^2(l-k)^2b^2\right] \ ,
\end{align}
where $a=1-2g-{\cal L}^2$ and we have used the result previously obtained for the $z$ component of the end-to-end vector.
\vspace{1 cm}\\
\fbox{$II)$ \ $i<k<j<l$}
\begin{figure}[H]
\centering
\includegraphics[scale=0.14]{diagram2.png}
\end{figure}
\begin{equation}
[II] \ = \ 2\frac{1}{N^4}\sum_{i=0}^{N}\sum_{k=i}^{N}\sum_{j=k}^{N}\sum_{l=j}^{N}\left<\left(z_{j}-z_{i}\right)^{2}\left(z_{l}-z_{k}\right)^{2}\right> \ .
\end{equation}
Because of the overlap the average cannot be split {\em directly}. However, this problem can be solved with a trick:
\begin{equation*}
\begin{split}
& \left<\left(z_{j}-z_{i}\right)^{2}\left(z_{l}-z_{k}\right)^{2}\right> \ = \ \left<\left(\left(z_{j}-z_{k}+z_{k}-z_{i}\right)
\left(z_{l}-z_{j}+z_{j}-z_{k}\right)\right)^{2}\right> \ = \\
& \left<\left(\left(z_{j}-z_{k}\right)\left(z_{l}-z_{j}\right)+\left(z_{k}-z_{i}\right)\left(z_{l}-z_{j}\right)+
\left(z_{j}-z_{k}\right)\left(z_{k}-z_{i}\right)+\left(z_{j}-z_{k}\right)^2\right)^{2}\right> \ = \\
& \left<\left(z_{j}-z_{k}\right)^{2}\left(z_{l}-z_{j}\right)^{2}\right>+\left<\left(z_{k}-z_{i}\right)^{2}\left(z_{l}-z_{j}\right)^{2}\right>+
\left<\left(z_{j}-z_{k}\right)^{2}\left(z_{k}-z_{i}\right)^{2}\right>+\left<\left(z_{j}-z_{k}\right)^{4}\right> \\
& +2\left<\left(z_{j}-z_{k}\right)\left(z_{k}-z_{i}\right)\left(z_{l}-z_{j}\right)^{2}\right>+
2\left<\left(z_{j}-z_{k}\right)\left(z_{l}-z_{j}\right)\left(z_{k}-z_{i}\right)^{2}\right> \\
& +4\left<\left(z_{l}-z_{j}\right)\left(z_{k}-z_{i}\right)\left(z_{j}-z_{k}\right)^{2}\right>+ 2\left<\left(z_{l}-z_{j}\right)\left(z_{j}-z_{k}\right)^{3}\right>+2\left<\left(z_{k}-z_{i}\right)\left(z_{j}-z_{k}\right)^{3}\right> \ .
\end{split}
\end{equation*}
The final expression, even if it is much longer, has a great advantage: now in any product the round brackets are uncorrelated with respect to each other, and we can split the averages:
\begin{equation*}
\left<\left(\,\cdot\,\right)\cdots\left(\,\cdot\,\right)\right> \ = \ \left<\left(\,\cdot\,\right)\right>\cdots\left<\left(\,\cdot\,\right)\right> \ .
\end{equation*}
In this way, each term in the previous equation can be computed as above.
\vspace{1 cm}\\
\fbox{$III)$ \ $i<k<l<j$}
\begin{figure}[H]
\centering
\includegraphics[scale=0.16]{diagram1.png}
\end{figure}
\begin{equation}
[III] \ = \ 2\frac{1}{N^4}\sum_{i=0}^{N}\sum_{k=i}^{N}\sum_{l=k}^{N}\sum_{j=l}^{N}\left<\left(z_{j}-z_{i}\right)^{2}\left(z_{l}-z_{k}\right)^{2}\right> \ .
\end{equation}
Since the calculation is analogous to the previous one, we will just show how it is possible to decorrelate each term:
\begin{equation}
\left<\left(z_{j}-z_{i}\right)^{2}\left(z_{l}-z_{k}\right)^{2}\right> \ = \ \left<\left(\left(z_{j}-z_{l}+z_{l}-z_{k}+z_{k}-z_{i}\right)
\left(z_{l}-z_{k}\right)\right)^{2}\right> \ ,
\end{equation}
and then the usual calculation is made.
For evaluating the mean values we need the following expressions:
\begin{equation}
\begin{split}
& {\sigma^2}_z = Nb^{2}(1-2g-\mathcal{L}^2) \\
& \mu_z = Nb\mathcal{L} \\
& {\sigma^2}_{x,y} = Nb^{2}g\\
& \mu_{x,y} = 0\\
& \left<\alpha\right> \ = \mu_{\alpha}\\
& \left<\alpha^2\right> \ = \mu_{\alpha}^2+\sigma_{\alpha}^{2}\\
& \left<\alpha^3\right> \ = \mu_{\alpha}^3+3\mu_{\alpha}\sigma_{\alpha}^{2}\\
& \left<\alpha^4\right> \ = \mu_{\alpha}^4+3\sigma_{\alpha}^4+6\mu_{\alpha}^{2}\sigma_{\alpha}^{2} \ ,
\end{split}
\end{equation}
where $\alpha$ indicates a space coordinate. Keeping only the leading terms, the result is
\begin{equation}
\left<\mathcal{T}_{zz}^2\right> \ = \ [I] \ + \ [II] \ + \ [III] \ = \ \frac{N^2b^4}{720}
\left(5\mathcal{L}^{4}N^{2}+44\mathcal{L}^{2}Na+36a^2\right) \ .
\end{equation}
The six moments are
\begin{equation}
\begin{split}
& \left<\mathcal{T}_{xx}^2\right> \ = \ \frac{g^{2}N^{2}b^4}{20}\\
& \left<\mathcal{T}_{xy}^2\right> \ = \ \frac{g^{2}N^{2}b^4}{90}\\
& \left<\mathcal{T}_{xx}\mathcal{T}_{yy}\right> \ = \ \frac{g^{2}N^{2}b^4}{36}\\
& \left<\mathcal{T}_{zz}^2\right> \ = \ \frac{N^2b^4}{720}\left(5\mathcal{L}^{4}N^{2}+44\mathcal{L}^{2}Na+36a^2\right)\\
& \left<\mathcal{T}_{xz}^2\right> \ = \ \frac{gN^2b^4}{360}\left(4a+3\mathcal{L}^{2}N\right)\\
& \left<\mathcal{T}_{xx}\mathcal{T}_{zz}\right> \ = \ \frac{gN^2b^4}{72}\left(2a+\mathcal{L}^{2}N\right) \ .
\end{split}
\end{equation}
Substituting into Eq. (\ref{defas}), we finally find the mean asphericity
\begin{equation}\label{asph2}
\left<A\right> = 1 - \frac{72ag+36g^2+24Ng\mathcal{L}^2}{36a^2+80ag+112g^2+4\mathcal{L}^{2}\left(11a+10g\right)N+5N^{2}\mathcal{L}^4} \ ,
\end{equation}
which is the formula reported in the main text.
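A quick sanity check of Eq. (\ref{asph2}) can be performed in the zero-force limit, where $\mathcal{L}\rightarrow0$ and, by isotropy, $g\rightarrow1/3$ (each Cartesian component then carries $Nb^{2}/3$, consistent with ${\sigma^2}_{x,y}=Nb^{2}g$). The following Python snippet (an illustration added here, using exact rational arithmetic) shows that the formula then reduces, for any $N$, to $10/19\simeq0.526$, the known mean asphericity of an ideal chain when defined, as here, as a ratio of averages:
\begin{verbatim}
from fractions import Fraction

def mean_asphericity(g, L2, N):
    # evaluate the result above; L2 denotes the square of the
    # Langevin factor, a = 1 - 2g - L^2 as in the text
    a = 1 - 2*g - L2
    num = 72*a*g + 36*g**2 + 24*N*g*L2
    den = (36*a**2 + 80*a*g + 112*g**2
           + 4*L2*(11*a + 10*g)*N + 5*N**2*L2**2)
    return 1 - num/den

print(mean_asphericity(Fraction(1, 3), 0, 100))   # -> 10/19
\end{verbatim}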
\section*{S5 - Computation of ellipsoid orientation within the dipole analogy}
As explained in the main text, we captured the orientational behavior of the enveloping ellipsoid of a
stretched FJC by exploiting the strong resemblance of this system to an electric dipole in the presence
of an external field. Starting from the interaction energy $\alpha N\gamma\mathcal{L}\cos\psi_1$,
the average cosine of the angle $\psi_1$ between the main axis of the ellipsoid and the force can be straightforwardly computed:
\begin{equation}
\left<\cos\psi_1\right>=\frac{\int_0^{\pi/2}\sin\psi_1\cos\psi_1 e^{\alpha N\gamma\mathcal{L}\cos\psi_1}d\psi_1}
{\int_0^{\pi/2} \sin\psi_1e^{\alpha N\gamma\mathcal{L}\cos\psi_1}d\psi_1}=
\frac{1}{1-e^{-\alpha N\gamma\mathcal{L}}}-\frac{1}{\alpha N\gamma\mathcal{L}}\,,
\end{equation}
which is Eq. (4) in the main text. We note that the upper bound of the integrals in the previous formula
is $\pi/2$ because of the symmetry of the system with respect to a rotation by $\pi$.
In contrast, the computation of $\left<\cos\psi_2\right>$ (which, due to the cylindrical
symmetry of the dipole analogy, is equal to $\left<\cos\psi_3\right>$)
is more cumbersome, and we report its explicit derivation in what follows. The following formulas
will be needed for our computation:
\begin{equation}
\int_0^{2\pi} \left(\sin \theta\right)^k d\theta=
\begin{cases}
0 & \mbox{if $k$ odd}\\
2\pi \frac{\left(k-1\right)!!}{k!!} & \mbox{if $k$ even}
\end{cases}\,,
\label{eqn:SI_int1}
\end{equation}
\begin{equation}
\int_0^{\pi/2}\left(\sin\psi\right)^{2k+1}d\psi=\frac{\left(2k\right)!!}{\left(2k+1\right)!!}\,,
\label{eqn:SI_int2}
\end{equation}
\begin{equation}
I_1(x)=\sum_{k=0}^{\infty}\frac{1}{k!(k+1)!}\left(\frac{x}{2}\right)^{2k+1}\,,
\label{eqn:SI_int3}
\end{equation}
\begin{equation}
\sinh x=\sum_{k=0}^{\infty}\frac{x^{2k+1}}{(2k+1)!}\,,
\label{eqn:SI_int4}
\end{equation}
where in Eqs. (\ref{eqn:SI_int1}) and (\ref{eqn:SI_int2}) it is understood
that the double factorials are equal to $1$ for $k=0$, and
in Eq. (\ref{eqn:SI_int3}) $I_1(x)$ denotes the modified Bessel function of the first kind.
For a given value of $\psi_2$, one has to consider,
for the eigenvector $\hat{u}_1$ corresponding
to $\lambda_1$, all the possible orientations lying in
the plane perpendicular to the unit vector $\hat{u}_2$ identified by $\psi_2$. According to the relative
orientation of $\hat{u}_1$ and the external force, each specific orientation results in
a different interaction energy, \textit{i.e.} a different
Boltzmann weight. Quantitatively, let us consider a reference frame where the $z$ axis
is oriented along the external force and the $x$ axis is coplanar with both $\hat{z}$ and $\hat{u}_2$.
In this reference frame, one has $\hat{u}_2\equiv\left(\sin\psi_2,0,\cos\psi_2\right)$. From here, it is
easy to show that a unit vector lying in the plane perpendicular to $\hat{u}_2$ can be written as
$\left(-\cos\psi_2\sin\theta,\cos\theta,\sin\psi_2\sin\theta\right)$, where $\theta\in[0,2\pi]$.
Therefore, the orientation of $\hat{u}_1$ with respect to the force is given by $\cos\psi_1=\sin\psi_2\sin\theta$, and
the corresponding dimensionless interaction energy between the dipole and the external field is
$\alpha N\gamma\mathcal{L}\cos\psi_1=\alpha N\gamma\mathcal{L}\sin\psi_2\sin\theta$.
The total Boltzmann weight corresponding to the given choice of $\psi_2$ is obtained by integrating
the Boltzmann weights associated with this energy over all the possible values of $\theta$. Thus
\begin{equation}
\left<\cos\psi_2\right>=\frac{\int_0^{\pi/2}\sin\psi_2\cos\psi_2
\left(\int_0^{2\pi} e^{\alpha N\gamma\mathcal{L}\sin\psi_2\sin\theta}d\theta\right)d\psi_2}
{\int_0^{\pi/2}\sin\psi_2
\left(\int_0^{2\pi} e^{\alpha N\gamma\mathcal{L}\sin\psi_2\sin\theta}d\theta\right)d\psi_2}\,.
\label{eqn:formula_init_psi2}
\end{equation}
We first address the computation of the Boltzmann weight. By Taylor-expanding the exponential, we have
\begin{equation}
\int_0^{2\pi} e^{\alpha N\gamma\mathcal{L}\sin\psi_2\sin\theta}d\theta=
\sum_{k=0}^{\infty}\frac{\left(\alpha N\gamma\mathcal{L}\sin\psi_2\right)^k}{k!}
\int_0^{2\pi}\left(\sin\theta\right)^kd\theta\,.
\end{equation}
Making use of Eq. (\ref{eqn:SI_int1}) and performing the change of dummy variable $k=2m$ in the
non-vanishing terms of the series, we find
\begin{equation}
\int_0^{2\pi} e^{\alpha N\gamma\mathcal{L}\sin\psi_2\sin\theta}d\theta=
2\pi\sum_{m=0}^{\infty}\frac{\left(\alpha N\gamma\mathcal{L}\right)^{2m}}{\left(2m\right)!}
\frac{\left(2m-1\right)!!}{\left(2m\right)!!}\left(\sin\psi_2\right)^{2m}\,.
\end{equation}
The numerator in Eq. (\ref{eqn:formula_init_psi2}) thus becomes
\begin{equation*}
\int_0^{\pi/2}\sin\psi_2\cos\psi_2
\left(\int_0^{2\pi} e^{\alpha N\gamma\mathcal{L}\sin\psi_2\sin\theta}d\theta\right)d\psi_2=
\end{equation*}
\begin{equation*}
=2\pi\sum_{m=0}^{\infty}\frac{\left(\alpha N\gamma\mathcal{L}\right)^{2m}}{\left(2m\right)!}
\frac{\left(2m-1\right)!!}{\left(2m\right)!!}
\int_0^{\pi/2}\cos\psi_2\left(\sin\psi_2\right)^{2m+1}d\psi_2=
\end{equation*}
\begin{equation*}
=2\pi\sum_{m=0}^{\infty}\left(\alpha N\gamma\mathcal{L}\right)^{2m}
\frac{\left(2m-1\right)!!}{\left(2m\right)!}
\frac{\left.\left(\sin\psi_2\right)^{2m+2}\right|_{\psi_2=0}^{\pi/2}}{\left(2m+2\right)\left(2m\right)!!}=
\end{equation*}
\begin{equation*}
=2\pi\sum_{m=0}^{\infty}
\frac{\left(\alpha N\gamma\mathcal{L}\right)^{2m}}{\left(2m+2\right)!!\left(2m\right)!!}\,,
\end{equation*}
where in the last step we exploited the identity $\left(2m\right)!=\left(2m\right)!!\left(2m-1\right)!!$. Moreover,
since $\left(2m\right)!!=2^mm!$, the previous formula can be rewritten as
\begin{equation}
2\pi\sum_{m=0}^{\infty}
\frac{\left(\alpha N\gamma\mathcal{L}\right)^{2m}}{\left(2m+2\right)!!\left(2m\right)!!}=
\frac{2\pi}{\alpha N\gamma\mathcal{L}}\sum_{m=0}^{\infty}
\frac{1}{m!\left(m+1\right)!}\left(\frac{\alpha N\gamma\mathcal{L}}{2}\right)^{2m+1}=
2\pi\frac{I_1(\alpha N\gamma\mathcal{L})}{\alpha N\gamma\mathcal{L}}\,,
\label{eqn:SI_num}
\end{equation}
where we made use of Eq. (\ref{eqn:SI_int3}). Analogously, the denominator in Eq. (\ref{eqn:formula_init_psi2})
can be explicitly computed as
\begin{equation*}
\int_0^{\pi/2}\sin\psi_2
\left(\int_0^{2\pi} e^{\alpha N\gamma\mathcal{L}\sin\psi_2\sin\theta}d\theta\right)d\psi_2=
\end{equation*}
\begin{equation*}
=2\pi\sum_{m=0}^{\infty}\frac{\left(\alpha N\gamma\mathcal{L}\right)^{2m}}{\left(2m\right)!}
\frac{\left(2m-1\right)!!}{\left(2m\right)!!}
\int_0^{\pi/2}\left(\sin\psi_2\right)^{2m+1}d\psi_2=
\end{equation*}
\begin{equation*}
=2\pi\sum_{m=0}^{\infty}\frac{\left(\alpha N\gamma\mathcal{L}\right)^{2m}}{\left(2m\right)!}
\frac{\left(2m-1\right)!!}{\left(2m\right)!!}
\frac{\left(2m\right)!!}{\left(2m+1\right)!!}=
\end{equation*}
\begin{equation*}
=\frac{2\pi}{\alpha N\gamma\mathcal{L}}
\sum_{m=0}^{\infty}\frac{\left(\alpha N\gamma\mathcal{L}\right)^{2m+1}}{\left(2m+1\right)!}=
\end{equation*}
\begin{equation}
=2\pi\frac{\sinh(\alpha N\gamma\mathcal{L})}{\alpha N\gamma\mathcal{L}}\,,
\label{eqn:SI_den}
\end{equation}
where in the second step we substituted Eq. (\ref{eqn:SI_int2}), while in the last step we
made use of the Taylor expansion reported in Eq. (\ref{eqn:SI_int4}). Substituting
Eq. (\ref{eqn:SI_num}) and Eq. (\ref{eqn:SI_den}) into Eq. (\ref{eqn:formula_init_psi2}), we
finally obtain
\begin{equation}
\left<\cos\psi_2\right>=\frac{I_1(\alpha N\gamma\mathcal{L})}{\sinh(\alpha N\gamma\mathcal{L})}\,,
\end{equation}
which is the result reported in Eq. (5) in the main text.
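This closed form is easy to verify numerically. In the sketch below (a check added for illustration; the combination $\alpha N\gamma\mathcal{L}$ is lumped into a single parameter \texttt{kappa}, and the inner $\theta$-integral is evaluated as $2\pi I_{0}(\kappa\sin\psi_{2})$, which follows from the same series manipulation as above with $I_0$ in place of $I_1$), the direct quadrature of Eq. (\ref{eqn:formula_init_psi2}) is compared with $I_{1}(\kappa)/\sinh(\kappa)$:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.special import i0, i1

kappa = 1.7                      # stands for alpha*N*gamma*L

def weight(p):
    # inner theta-integral: 2*pi*I0(kappa*sin(psi_2))
    return 2.0*np.pi*i0(kappa*np.sin(p))

num, _ = quad(lambda p: np.sin(p)*np.cos(p)*weight(p), 0, np.pi/2)
den, _ = quad(lambda p: np.sin(p)*weight(p), 0, np.pi/2)
print(num/den, i1(kappa)/np.sinh(kappa))   # the two values agree
\end{verbatim}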
\begin{figure}[H]
\centering
\includegraphics[scale=0.36]{angles_l3.png}
\caption{\label{fig:angles_l3} Average cosine of the angle between the external force
and the eigenvector corresponding to $\lambda_3$.}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[scale=0.36]{lambda1_sub.png}
\caption{\label{fig:lambda1_sub}
Plot of $\lambda_1^{\mbox{\tiny sub}}/Nb^2$ as a function of $\gamma$ for several values of $N$.}
\end{figure}
\end{document}
\section{Introduction}
Experimental progress in controlling ultra-cold atoms has opened a new chapter in our understanding of the properties of strongly-correlated many-body quantum systems \cite{Lewenstein2007,Bloch2008}. Old-fashioned theoretical toy models known from condensed matter physics are undergoing a renaissance, since they provide realistic descriptions of real quantum systems confined in optical lattices (specially arranged laser beams forming a periodic potential \cite{LewensteinBook}). In the simplest case of ultra-cold bosons confined in such a potential, the system is described by the Bose-Hubbard (BH) model, where single-particle tunnelings compete with local two-body interactions. The theoretical analysis of \cite{Fisher1989,Jaksch1998} shows that this competition leads directly to a phase transition from an insulating phase (dominated by interactions) to a superfluid phase (dominated by tunnelings). These predictions were confirmed in a spectacular experiment with ultra-cold rubidium atoms \cite{Greiner2002}. Many different extensions of the model have since been proposed and studied theoretically \cite{LewensteinBook}, and are now awaiting experimental verification.
In this article, the ground state phase diagram of a particular extension of the standard BH model is studied. The mutual interaction between particles is assumed here to be of three-body origin, i.e., three-body terms dominate over two-body ones. Although this assumption seems very exotic, there are some possibilities of mimicking such a model in experiments with ultra-cold atoms confined in optical lattices. In the standard description, three-body terms in Hubbard-like models are introduced as an effective correction originating from interactions through higher orbitals of the optical lattice \cite{Will2010}. Typically in such a scenario three-body terms are small corrections to the dominant two-body terms, and they can be viewed as the first occupation-dependent correction to the on-site two-body interaction. Due to perturbative changes of the single-particle wave functions, the effective three-body terms are attractive (for a repulsive gas) \cite{Will2010,Sowinski2012a}. BH models with two- and three-body interactions have been studied in many different scenarios and with different numerical techniques \cite{Sowinski2012a,Chen2008a,Zhou2010,Safavi2012,Silva2011,Al-Jib2013,Silva,Sowinski2013a,Singh,Abdullaev,Ejima2013,Sowinski2014}. Recently it was suggested that it is also possible to control three-body terms independently of the two-body ones. This can be done by exploiting internal degrees of freedom of the interacting particles \cite{Mazza2010} or via very fast dissipation processes \cite{Daley2009}. It also seems possible to control effective three-body interactions in the limit of high densities. In this limit, three-body interactions can be viewed as an effective way of taking into account changes in the electronic potential induced by a neighboring third particle. Typically, these changes are very small and can therefore be neglected. Nevertheless, if one tunes an external magnetic field to the value where the two-body $s$-wave scattering length vanishes, then the three-body interaction induced by this mechanism dominates and in principle can be many orders of magnitude larger than the two-body one. The consequences of a similar mechanism have been studied for the case of polar molecules interacting via long-range forces \cite{Buchler2007,Schmidt,Capogrosso2009a}.
\section{The Model}
On this basis we now assume that two-body interactions can be neglected and the on-site energy changes only when three- or more particles are present on a given lattice site. In the one-dimensional case the Hamiltonian of the system reads:
\begin{equation}
{\cal H} = -J \sum_i \hat{a}_i^\dagger\left(\hat{a}_{i-1}+\hat{a}_{i+1}\right) + \frac{W}{6}\sum_i \hat{n}_i(\hat{n}_i-1)(\hat{n}_i-2),
\end{equation}
where $\hat{a}_i$ annihilates a boson at site $i$, and $\hat{n}_i=\hat{a}_i^\dagger\hat{a}_i$ is the local density operator. The parameter $J$ is the single-particle hopping amplitude between neighboring sites and $W$ denotes the energy cost of forming a triplet on a given lattice site. For numerical calculations, it is assumed that the lattice has $L$ sites and open boundary conditions. The properties of Hubbard-like models depend strongly on the average density $\rho=N/L$, where $N$ is the total number of bosons confined in the lattice. For example, it is known that for models with on-site interactions only, the insulating phase can occur only
for integer fillings \cite{Fisher1989}. Therefore, it is convenient to introduce a chemical potential $\mu$ and to rewrite the Hamiltonian in the grand canonical ensemble ${\cal K}={\cal H}-\mu\hat{N}$, where $\hat{N}=\sum_i \hat{n}_i$ is the total particle number operator. The phase diagram of the model is described in \cite{Chen2008a,Silva}, and a similar extended BH model with non-local three-body interactions was recently studied in \cite{Capogrosso2009a}.
\section{Simple observations}
To start, we investigate the properties of the system in the limit of vanishing tunneling $J\rightarrow 0$. In this limit, for any $\mu$, all correlations between neighboring sites vanish and the system remains in a Mott insulator (MI) phase with integer filling $\rho_0$. The grand canonical energy of the system is given by
\begin{equation}
{\cal E}(\rho_0,L) = L\left[\frac{W}{6}\,\rho_0(\rho_0-1)(\rho_0-2)-\mu\rho_0\right].
\end{equation}
From this relation one can easily find the boundaries of the insulating lobes (i.e., the values of the chemical potential at which the density changes by one). The critical values of the chemical potential between which the insulating phase with filling $\rho_0$ is stable are given by
\begin{equation}
\mu_\pm(\rho_0)/W = \frac{1}{2}(\rho_0 - 1)(\rho_0-1\pm 1).
\end{equation}
For any integer $\rho_0$ one finds the energy gap $\Delta(\rho_0) = \mu_+(\rho_0)-\mu_-(\rho_0)=W\cdot\left(\rho_0 - 1\right)$. This means that, in contrast to the standard BH model, the insulating lobes become wider with increasing filling. Moreover, for $\rho_0=1$ one finds that $\mu_+(1)=\mu_-(1)=0$, i.e., the MI phase with $\rho_0=1$ does not exist at all in the system.
\section{The phase diagram}
To obtain the phase diagram of the studied system over the whole range of tunnelings, we follow the standard method based on energetic arguments \cite{Elesin1994}. This method is based on the observation that in the MI phase, in contrast to the SF phase, there always exists a non-vanishing energy gap for adding a particle to (or removing one from) the system. It is therefore possible to obtain the upper/lower boundary of the insulating phase with filling $\rho_0$ for a given tunneling $J$ by finding numerically the ground state energy ${\cal E}_0(\rho_0,L,J)$ for $N=\rho_0\cdot L$ particles and the ground state energies ${\cal E}_{\pm}(\rho_0,L,J)$ for systems with $N=\rho_0\cdot L\pm 1$ particles, respectively. The upper and lower boundaries of the insulating phase are then given by
\begin{equation}
\mu_\pm(\rho_0,L,J) = \pm\left[{\cal E}_\pm(\rho_0,L,J)-{\cal E}_0(\rho_0,L,J)\right],
\end{equation}
as well as the energy gap within the phase
\begin{equation}
\Delta(\rho_0,L,J) = \mu_+(\rho_0,L,J)-\mu_-(\rho_0,L,J).
\end{equation}
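Before turning to the DMRG results, this procedure can be illustrated on small chains by exact diagonalization. The Python sketch below is an illustrative implementation, not the DMRG code used for the results of this paper: the local occupation cutoff \texttt{n\_max} and all parameter values are arbitrary choices, and the brute-force basis construction is only practical for a handful of sites.
\begin{verbatim}
import numpy as np
from itertools import product
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import eigsh

def ground_energy(L, N, J, W, n_max=6):
    # E_0 of the chain with pure three-body interactions (open BC)
    states = [s for s in product(range(n_max + 1), repeat=L)
              if sum(s) == N]
    index = {s: a for a, s in enumerate(states)}
    H = lil_matrix((len(states), len(states)))
    for a, s in enumerate(states):
        H[a, a] = (W/6.0)*sum(n*(n - 1)*(n - 2) for n in s)
        for i in range(L - 1):            # hopping on bond (i, i+1)
            for src, dst in ((i, i + 1), (i + 1, i)):
                if s[src] > 0 and s[dst] < n_max:
                    t = list(s); t[src] -= 1; t[dst] += 1
                    H[index[tuple(t)], a] -= J*np.sqrt(s[src]*(s[dst] + 1))
    return eigsh(H.tocsr(), k=1, which='SA')[0][0]

L, rho, J, W = 6, 2, 0.05, 1.0
E0 = ground_energy(L, rho*L, J, W)
mu_plus = ground_energy(L, rho*L + 1, J, W) - E0
mu_minus = E0 - ground_energy(L, rho*L - 1, J, W)
print(mu_minus/W, mu_plus/W)   # close to 0 and 1 for small J
\end{verbatim}
For $J\rightarrow0$ this reproduces the atomic-limit boundaries $\mu_{-}(2)=0$ and $\mu_{+}(2)=W$; for the quantitative phase diagram, however, the finite-size values must be extrapolated to $L\rightarrow\infty$, as discussed below.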
In practice, phase boundaries obtained in this way depend strongly on the lattice size $L$. Moreover, the energy gap in the SF phase vanishes only in the thermodynamic limit $L \rightarrow \infty$, and a precise localization of the phase boundaries becomes ambiguous. To overcome this problem, we perform DMRG \cite{DMRG} numerical calculations for different lattice sizes $L=32,48,\ldots,128$ and extrapolate the obtained data to the limit $L\rightarrow \infty$. This extrapolation can be done quite easily, since the boundaries $\mu_\pm(\rho_0,L,J)$, treated as functions of the lattice size $L$, fit almost perfectly to a linear regression in $1/L$ (for a discussion see \cite{Sowinski2012a}). In Fig. \ref{Fig0} an exemplary case is presented for both $\rho_0=2$ and $\rho_0=3$. This shows the accuracy of predictions based on linear extrapolation of the data to the thermodynamic limit $L\rightarrow \infty$.
\begin{figure}
\includegraphics{fig0.eps}
\caption{The upper $\mu_+$ and lower $\mu_-$ boundaries of MI phase as a function of the inverse of the system size $1/L$ for two densities $\rho=2$ ($J/W=0.15$) and $\rho=3$ ($J/W=0.22$). The solid lines are linear fits to the numerical data points. Linear data extrapolation to the limit $1/L\rightarrow 0$ gives phase boundaries in the thermodynamic limit. Numerical data obtained from DMRG for $L=32,64,\dots,128$. \label{Fig0}}
\end{figure}
Finally, the phase diagram of the system is obtained by plotting the extrapolated values of $\mu_\pm(\rho_0,L\rightarrow\infty,J)$ as functions of the tunneling (Fig. \ref{Fig1}). The result is consistent with the previous analytical predictions in the limit of vanishing tunneling. The second insulating lobe ($\rho=3$) is broader than the first one ($\rho=2$) in the direction of the chemical potential as well as in the direction of the tunneling. This means that, in contrast to the standard Bose-Hubbard model, the critical tunneling $J_c$ at which the system undergoes the phase transition from the MI to the SF phase is shifted to larger values for higher fillings.
\begin{figure}
\includegraphics{fig1.eps}
\caption{(Left panel) The phase diagram of the Bose-Hubbard model with pure three-body interactions. In contrast to the standard Bose-Hubbard model in the first insulating lobe one finds two particles in each lattice site. Note also that the second insulating lobe for $\rho=3$ is larger than the first one for $\rho=2$. The phase diagram determined in thermodynamic limit $L\rightarrow\infty$ by extrapolating the numerical data obtained from DMRG for $L=32,64,\dots,128$. (Right panel) Rescaled single-particle gap $\Delta$ as a function of tunneling for $\rho=2$ (red circles) compared with analytical result obtained in third-order perturbation \eqref{ThirdOrderGap} (solid black line). \label{Fig1}}
\end{figure}
From the numerical point of view, the most problematic part of these calculations lies in determining the critical tunneling $J_c$ at which the system undergoes the phase transition from the MI to the SF phase. Theoretically, this point is defined as the tunneling for which the energy gap $\Delta(J)= \mu_+(J)-\mu_-(J)$, calculated in the thermodynamic limit $L\rightarrow\infty$, vanishes. Unfortunately, due to numerical errors, this definition cannot be adopted directly. The phase diagram obtained above allows us to estimate the critical tunneling as $J_c/W\sim 0.19$ for $\rho=2$ and $J_c/W\sim 0.28$ for $\rho=3$.
At this point it is worth comparing the energy gap $\Delta(J)$ obtained numerically to the analytical results obtained recently in \cite{Ejima2013}. In that paper the authors perform perturbative calculations for a general BH model with two- and three-body local interactions (for $\rho=2$). In the third-order of perturbation with respect to the tunneling, in the particular case of vanishing two-body interactions, the result reduces to the form
\begin{equation} \label{ThirdOrderGap}
\frac{\Delta^{(3)}(J)}{W} = 1 - 10\frac{J}{W}+\frac{38}{3}\left(\frac{J}{W}\right)^2+\frac{116}{3}\left(\frac{J}{W}\right)^3+\ldots
\end{equation}
As it is seen in the right panel of Fig. \ref{Fig1}, the energy gap obtained numerically fits almost perfectly to the predictions of \eqref{ThirdOrderGap}. The deviations are clearly visible for larger tunnelings where the third-order approximation breaks down.
\section{Berenzinskii-Kosterlitz-Thouless transition}
In order to determine the critical tunneling more precisely, two independent but complementary methods may be used. The first is based on the assumption that near the critical point the studied system belongs to the same universality class as the standard Bose-Hubbard model. At the phase transition, the standard BH model in $d$ dimensions can be mapped to the $(d+1)$-dimensional XY model. Therefore, in the one-dimensional case the phase transition belongs to the Berenzinskii-Kosterlitz-Thouless (BKT) class \cite{Berenzinskii1972,Kosterlitz1973}. As was shown recently, the universality class does not change when one extends the standard BH model with local three-body terms \cite{Sowinski2012a}. This suggests that even in the limit of vanishing two-body terms (as studied here) the universality class can remain unchanged. If true, this provides a way to obtain the critical tunneling $J_c$. Indeed, for the BKT transition the energy gap $\Delta(J)$ in the vicinity of the critical tunneling $J_c$ vanishes as
\begin{equation} \label{GapRel}
\Delta(J) \sim \exp\left[-\frac{\alpha}{\sqrt{1-J/J_c}}\right].
\end{equation}
Therefore, if the critical tunneling $J_c$ were known and the relation \eqref{GapRel} indeed held, then by plotting $\log\,\Delta(J)$ against $1/\sqrt{1-J/J_c}$ the data points should follow a straight line.
Moreover, this can happen only for a unique value of $J_c$ and, due to the uniqueness of the relation \eqref{GapRel}, only if the transition is of BKT type. Plots in Fig. \ref{Fig2} show that the BKT scaling is satisfied with an appropriately chosen critical value of the tunneling $J_c$. In this way we confirm that the phase transition is indeed of BKT type. The values of the critical tunneling obtained in this way are $J_c/W=0.191 (\pm 0.005)$ and $J_c/W=0.282 (\pm 0.005)$ for $\rho_0=2$ and $\rho_0=3$ respectively. Uncertainties in the critical tunnelings may be estimated by comparing the results obtained for different system sizes $L=118,\ldots,128$. In all these cases the critical tunneling differs from the estimated values by no more than the estimated uncertainties.
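Concretely, the scan over candidate values of $J_c$ can be implemented as in the following Python sketch. This is an illustration rather than the analysis code used here: for lack of the DMRG data, the gap is generated synthetically from Eq. \eqref{GapRel} itself, and the scan should then recover the value of $J_c$ used to generate it.
\begin{verbatim}
import numpy as np

def best_Jc(J, gap, Jc_grid):
    # choose J_c maximizing linearity of log(gap) vs 1/sqrt(1 - J/J_c)
    best, best_r2 = None, -np.inf
    for Jc in Jc_grid:
        x = 1.0/np.sqrt(1.0 - J/Jc)
        r2 = np.corrcoef(x, np.log(gap))[0, 1]**2
        if r2 > best_r2:
            best, best_r2 = Jc, r2
    return best

# synthetic self-test with a gap obeying the BKT form
Jc_true, alpha = 0.19, 1.0
J = np.linspace(0.02, 0.17, 12)
gap = np.exp(-alpha/np.sqrt(1.0 - J/Jc_true))
print(best_Jc(J, gap, np.linspace(0.175, 0.22, 500)))  # ~ 0.19
\end{verbatim}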
\begin{figure}
\includegraphics{fig2.eps}
\caption{Energy gap of the insulating lobe $\Delta$ as a function of tunneling $J$ for two integer fillings $\rho_0=2$ (upper panel) and $\rho_0=3$ (bottom panel). With this scaling the numerical points fit to the linear behavior predicted by the Kosterlitz-Thouless universality class \eqref{GapRel}. This suggests that studied model belongs to the same universality class as standard Bose-Hubbard model. Numerical data obtained from DMRG for $L=128$. In the insets the correlation functions ${\cal C}_2$ (red solid line) and ${\cal C}_3$ (blue dashed line) as functions of tunneling $J/W$ are presented. \label{Fig2}}
\end{figure}
For completeness local two-body ${\cal C}_2 = \langle \hat{a}_m^{\dagger 2}\hat{a}_m^2\rangle$ and local three-body ${\cal C}_3 = \langle \hat{a}_m^{\dagger 3}\hat{a}_m^3\rangle$ correlation functions (for the middle lattice site $m=L/2$) are plotted in the insets of Fig. \ref{Fig2}. For both fillings studied ($\rho_0=2$ and $\rho_0=3$), in the vicinity of the phase transition the three-body correlation ${\cal C}_3$ changes its behavior, which can be viewed as a changing of ground-state properties. Note however that in the limit of large tunneling, both correlation functions necessarily approach the values of the standard BH model.
\section{Entanglement entropy approach}
The phase transition from the MI to the SF phase can also be identified using a complementary method, by looking for changes in the behavior of the entanglement entropy (EE) of a subsystem, ${\cal S}(l,L) = -\mathrm{Tr} \left[\hat\rho_l\,\mathrm{ln}\hat\rho_l\right]$. Here, $\hat\rho_l =\mathrm{Tr}_{L-l} |\mathtt{G}\rangle\langle\mathtt{G}|$ is the reduced density matrix of the subchain of length $l$, obtained by tracing out the remaining degrees of freedom from the ground state of the system $|\mathtt{G}\rangle$. The scaling behavior of the EE is well known in the thermodynamic limit, i.e., when $L\rightarrow \infty$. In the SF phase, due to the nonlocal correlations in the system, the EE treated as a function of the subsystem size diverges logarithmically with $l$. In contrast, in the MI phase, long-range correlations vanish and therefore the entanglement entropy saturates for large enough subsystem sizes $l$. These facts have consequences also for a finite size $L$ of the full system. As predicted by conformal field theory, depending on the boundary conditions, in the SF phase the entanglement entropy is the following function of $l$ \cite{Calabrese2004,Laflorencie2006}
\begin{equation} \label{EntropyEq}
{\cal S}(l,L) = \frac{\mathtt{c}}{3\kappa}\,\mathrm{ln}\left[\frac{\kappa L}{\pi}\sin\left(\frac{\pi l}{L}\right)\right] + s(L) + {\cal O}\left(\frac{l}{L}\right).
\end{equation}
The parameter $\kappa$ depends on the boundary conditions and is equal to $1$ or $2$ for periodic or open boundary conditions, respectively. The pre-factor ${\mathtt c}$ is related to the central charge of the corresponding conformal field theory. For non-critical phases (like the MI phase) it is zero, whereas it is non-zero whenever the system manifests some non-local correlations. It is known that deep in the SF phase, due to the equivalence with the Tomonaga-Luttinger liquid \cite{Cazalilla2011}, the central charge is ${\mathtt c}=1$. To show that the EE in the studied system can be well understood within this description, Fig. \ref{Fig3} plots the entanglement entropy ${\cal S}(l,L)$ as a function of the scaled subsystem size $\mathrm{ln}\left[\sin\left(\frac{\pi l}{L}\right)\right]$, obtained from DMRG calculations with $L=128$ for different tunnelings $J/W$ and $\rho_0=2$. With this scaling the numerical points fit almost perfectly to straight lines, in agreement with the predictions of \eqref{EntropyEq}. The slopes of these lines are directly related to the central charge of the many-body quantum state.
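The fit itself is elementary. The following Python sketch (an illustration with a synthetic entropy profile generated from Eq. \eqref{EntropyEq} in place of the DMRG data) extracts $\mathtt{c}$ from the slope for open boundary conditions ($\kappa=2$):
\begin{verbatim}
import numpy as np

def central_charge(S, L, kappa=2):
    # linear fit of the CFT form: the slope of S vs the rescaled
    # abscissa is an estimate of the central charge c
    l = np.arange(1, L)
    x = np.log((kappa*L/np.pi)*np.sin(np.pi*l/L))/(3.0*kappa)
    slope, _ = np.polyfit(x, S, 1)
    return slope

# synthetic check: a profile generated with c = 1 is recovered
L, c, s0 = 128, 1.0, 0.5
l = np.arange(1, L)
S = (c/6.0)*np.log((2*L/np.pi)*np.sin(np.pi*l/L)) + s0
print(central_charge(S, L))   # -> 1.0
\end{verbatim}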
\begin{figure}
\includegraphics{fig3.eps}
\caption{Entanglement entropy of the subchain of length $l$ for a number of example tunnelings $J/W$ ($\rho=2$). With chosen scaling the numerical points fit to the linear predictions of CFT. In the MI phase (low tunnelings) the slope of the corresponding lines (proportional to the central charge $\mathtt c$) is equal to 0. In the SF phase (large tunnelings) the line gradients saturate on the value $\sim 1/6$. This corresponds to the central charge value ${\mathtt c}=1$ predicted by the Tomonaga-Luttinger liquid theory. Numerical data obtained from DMRG method for $L=128$. \label{Fig3}}
\end{figure}
\begin{figure}
\includegraphics{fig4.eps}
\caption{Central charge $\mathtt c$ as a function of the tunneling rate $J$ for two integer fillings $\rho=2$ (upper panel) and $\rho=3$ (bottom panel) determined from the behavior of the EE \eqref{EntropyEq}. For small tunnelings in MI phase the central charge is equal to $0$ and in deep SF phase it is equal to $1$ in accordance with Luttinger Liquid theory. Near the quantum phase transition we observe a rapid change of central charge, and for critical tunneling $J_c$ the central charge $\mathtt c$ achieves the maximal value. The value of the critical tunneling $J_c$ agrees with the value determined from decaying of the energy gap of insulating lobe \eqref{GapRel}. Numerical data obtained from DMRG method for $L=128$. \label{Fig4}}
\end{figure}
The method described above enables one to plot the central charge $\mathtt c$ as a function of the tunneling $J$. The results for the two integer fillings $\rho_0=2$ and $\rho_0=3$ are presented in Fig. \ref{Fig4}. In both cases, in the MI phase the central charge vanishes, and deep in the SF phase it saturates at the expected value ${\mathtt c}=1$. For moderate values of the tunneling a rapid change in the behavior of the entanglement entropy is observed. The central charge achieves its maximal value at the critical point predicted by the previous method. Such behavior of the central charge is very similar to the situation observed in the standard BH model \cite{Ejima2012}. It is believed that the non-monotonicity in the central charge behavior is a direct consequence of the finite size of the system, and that in the thermodynamic limit it smoothly flows to ``step-like'' behavior, with the maximal value of the central charge $\mathtt c$ obtained from finite-size calculations reached in the neighborhood of the critical tunneling $J_c$. All numerical results obtained here fully agree with these properties.
\section{Conclusions}
The phase diagram of the one-dimensional extended Bose-Hubbard model with pure three-body interactions was studied. It was shown that insulating lobes are present for integer fillings $\rho_0\geq 2$ and that, in contrast to the standard BH model, they become larger for larger $\rho_0$. Three-body interactions thus lead to an enhanced stability of the MI phase in the $\mu-J$ phase diagram. The first two MI lobes were discussed in detail with DMRG calculations for different system sizes, and the values of the critical tunnelings $J_c$ at which the system undergoes the phase transition from the MI to the SF phase were determined. It was also shown that the studied model belongs to the BKT universality class, in analogy to the standard BH model.
\section{Acknowledgements}
The author thanks R. W. Chhajlany, P. Deuar, M. Gajda, and M. Lewenstein for their fruitful comments and suggestions. This research was supported by the (Polish) National Science Center Grant No. DEC-
2011/01/D/ST2/02019. The author acknowledges support from the Foundation for Polish Science (KOLUMB
Programme; KOL/7/2012) and hospitality from ICFO.
\section{Introduction}
The goal in many real-world applications of artificial intelligence is to create a pipeline from data, to predictive models, to decisions. Together, these steps enable a form of evidence-based decision making which has transformative potential across domains such as healthcare, scientific discovery, transportation, and more \cite{horvitz2010data,horvitz2010healthcare}. This pipeline requires two technical components: machine learning models and optimization algorithms. Machine learning models use the data to predict unknown quantities; optimization algorithms use these predictions to arrive at a decision which maximizes some objective. Our concern here is combinatorial optimization, which is ubiquitous in real-world applications of artificial intelligence, ranging from matching applicants to public housing to selecting a subset of movies to recommend. We focus on common classes of combinatorial problems which have well-structured continuous relaxations, e.g., linear programs and submodular maximization. A vast literature has been devoted to combinatorial optimization \cite{korte2012combinatorial}. Importantly though, optimization is often insufficient without the broader pipeline because the objective function is unknown and must predicted via machine learning.
While machine learning has witnessed incredible growth in recent years, the two pieces of the pipeline are treated entirely separately by typical training approaches. That is, a system designer will first train a predictive model using some standard measure of accuracy, e.g., mean squared error for a regression problem. Then, the model's predictions are given as input to the optimization algorithm to produce a decision. Such \emph{two-stage} approaches are extremely common across many domains \cite{wang2006cope,fang2016deploying,mukhopadhyay2017prioritized,xue2016avicaching}. This process is justified when the predictive model is perfect, or near-so, since completely accurate predictions also produce the best decisions. However, in complex learning tasks, all models will make errors and the training process implicitly trades off where these errors will occur. When prediction and optimization are separate, this tradeoff is divorced from the goal of the broader pipeline: to make the best decision possible.
We propose a \emph{decision-focused learning} framework which melds the data-decisions pipeline by integrating prediction and optimization into a single end-to-end system. That is, the predictive model is trained using the quality of the decisions which it induces via the optimization algorithm. Similar ideas have recently been explored in the context of convex optimization \cite{donti2017task}, but to our knowledge ours is the first attempt to train machine learning systems for performance on \emph{combinatorial} decision-making problems. Combinatorial settings raise new technical challenges because the optimization problem is discrete. However, machine learning systems (e.g., deep neural networks) are often trained via gradient descent.
Our first contribution is a general framework for training machine learning models via their performance on combinatorial problems. The starting point is to relax the combinatorial problem to a continuous one. Then, we analytically differentiate the optimal solution to the continuous problem as a function of the model's predictions. This allows us to train using a continuous proxy for the discrete problem. At test time, we round the continuous solution to a discrete point.
Our second contribution is to instantiate this framework for two broad classes of combinatorial problems: linear programs and submodular maximization problems. Linear programming encapsulates a number of classical problems such as shortest path, maximum flow, and bipartite matching. Submodular maximization, which reflects the intuitive phenomena of diminishing returns, is also ubiquitous; applications range from social networks \cite{kempe_maximizing_2003} to recommendation systems \cite{viappiani2010optimal}. In each case, we resolve a set of technical challenges to produce well-structured relaxations which can be efficiently differentiated through.
Finally, we give an extensive empirical investigation, comparing decision-focused and traditional methods on a series of domains. Decision-focused methods often improve performance for the pipeline as a whole (i.e., decision quality) despite worse predictive accuracy according to standard measures. Intuitively, the predictive models trained via our approach focus specifically on qualities which are important for making good decisions. By contrast, more generic methods produce predictions where error is distributed in ways which are not aligned with the underlying task.
\section{Problem description}
We consider combinatorial optimization problems of the form $\max_{x\in \mathcal{X}} f(x, \theta)$, where $\mathcal{X}$ is a discrete set enumerating the feasible decisions. Without loss of generality, $\mathcal{X} \subseteq \{0,1\}^n$ and the decision variable $x$ is a binary vector. The objective $f$ depends on a parameter $\theta \in \Theta$. If $\theta$ were known exactly, a wide range of existing techniques could be used to solve the problem. In this paper, we consider the challenging (but prevalent) case where $\theta$ is unknown and must be inferred from data. For instance, in bipartite matching, $x$ represents whether each pair of nodes were matched and $\theta$ contains the reward for matching each pair. In many applications, these affinities are learned from historical data.
Specifically, the decision maker observes a feature vector $y \in \mathcal{Y}$ which is correlated with $\theta$. This introduces a learning problem which must be solved prior to optimization. As in classical supervised learning, we formally model $y$ and $\theta$ as drawn from a joint distribution $P$. Our algorithm will observe training instances $(y_1, \theta_1)...(y_N, \theta_N)$ drawn iid from $P$. At test time, we are given a feature vector $y$ corresponding to an \emph{unobserved} $\theta$. Our algorithm will use $y$ to predict a parameter value $\hat{\theta}$. Then, we will solve the optimization problem $\max_x f(x, \hat{\theta})$ to obtain a decision $x^*$. Our utility is the objective value that $x^*$ obtains with respect to the \emph{true but unknown} parameter $\theta$, $f(x^*, \theta)$.
Let $m : \mathcal{Y} \to \Theta$ denote a model mapping observed features to parameters. Our goal is to (using the training data) find a model $m$ which maximizes expected performance on the underlying optimization task. Define $x^*(\theta) = \arg\max_{x \in \mathcal{X}} f(x, \theta)$ to be the optimal $x$ for a given $\theta$. The end goal of the data-decisions pipeline is to maximize
\begin{align} \label{eq:task-objective}
\E_{y, \theta \sim P}\left[f(x^*(m(y)), \theta)\right]
\end{align}
The classical approach to this problem is a \emph{two-stage} method which first learns a model using a task-agnostic loss function (e.g., mean squared error) and then uses the learned model to solve the optimization problem. The model class will have its own parameterization, which we denote by $m(y, \omega)$. For instance, the model class could consist of deep neural networks where $\omega$ denotes the weights. The two-stage approach first solves the problem $\min_\omega \E_{y, \theta \sim P}\left[\mathcal{L}(\theta, m(y, \omega))\right]$, where $\mathcal{L}$ is a loss function. Such a loss function measures the overall ``accuracy'' of the model's predictions but does not specifically consider how $m$ will fare when used for decision making. The question we address is whether it is possible to do better by specifically training the model to perform well on the decision problem.
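To make the contrast concrete, the following minimal sketch illustrates the two training regimes; \texttt{model}, \texttt{solve\_relaxed}, and \texttt{f} are our own placeholder names (not part of any library), and \texttt{solve\_relaxed} stands in for the differentiable optimization layer developed below.

```python
import torch

# Hypothetical sketch: `model` maps features y to predicted parameters,
# `solve_relaxed` is a differentiable (relaxed) optimizer, and `f` is the
# task objective. None of these names come from an existing library.

def two_stage_step(model, opt, y, theta):
    # Task-agnostic training: fit predictions to observed parameters.
    loss = torch.mean((model(y) - theta) ** 2)   # e.g., mean squared error
    opt.zero_grad(); loss.backward(); opt.step()

def decision_focused_step(model, opt, y, theta, solve_relaxed, f):
    # Task-aware training: evaluate the induced decision under the TRUE
    # theta and ascend the whole-pipeline objective f(x(m(y)), theta).
    x = solve_relaxed(model(y))
    loss = -f(x, theta)                          # maximize decision quality
    opt.zero_grad(); loss.backward(); opt.step()
```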
\section{Previous work}
There is a growing body of research at the interface of machine learning and discrete optimization \cite{vinyals2015pointer,bertsimas2017optimal,khalil2017tree,khalil2017graphs}. However, previous work largely focuses on either using discrete optimization to find an accuracy-maximizing predictive model or using machine learning to speed up optimization algorithms. Here, we pursue a deeper synthesis; to our knowledge, this work is the first to train predictive models using combinatorial optimization performance with the goal of improving decision making.
The closest work to ours in motivation is \cite{donti2017task}, who study task-based convex optimization. Their aim is to optimize a convex function which depends on a learned parameter. As in their work, we use the idea of differentiating through the KKT conditions. However, their focus is entirely on continuous problems. Our discrete setting raises new technical challenges, highlighted below. Elmachtoub and Grigas \shortcite{elmachtoub2017smart} also propose a means of integrating prediction and optimization; however, their method applies strictly to linear optimization and focuses on linear predictive models while our framework applies to nonlinear problems with more general models (e.g., neural networks). Finally, some work has noted that two-stage methods lead to poor optimization performance in specific domains \cite{beygelzimer2009offset,ford2015beware}.
Our work is also related to recent research in structured prediction \cite{belanger2017end,tu2018learning,niculae2018sparsemap,djolonga2017differentiable}, which aims to make a prediction lying in a discrete set. This is fundamentally different from our setting since their goal is to \emph{predict} an external quantity, not to \emph{optimize} and find the best decision possible. However, structured prediction sometimes integrates a discrete optimization problem as a module within a larger neural network. The closest such work technically to ours is \cite{tschiatschek2018differentiable}, who design a differentiable algorithm for submodular maximization in order to predict choices made by users. Their approach is to introduce noise into the standard greedy algorithm, making the probability of outputting a given set differentiable. There are two key differences between our approaches. First, their approach does not apply to the decision-focused setting because it maximizes the likelihood of a \emph{fixed} set but cannot optimize for finding the best set. Second, exactly computing gradients for their algorithm requires marginalizing over the $k!$ possible permutations of the items, forcing a heuristic approximation to the gradient. Our approach allows closed-form differentiation.
Some deep learning architectures differentiate through gradient descent steps, related to our approach in the submodular setting. Typically, previous approaches explicitly unroll $T$ iterations of gradient descent in the computational graph \cite{domke2012generic}. However, this approach is usually employed for \emph{unconstrained} problems where each iteration is a simple gradient step. By contrast, our combinatorial problems are constrained, requiring a projection step to enforce feasibility. Unrolling the projection step may be difficult, and would incur a large computational cost. We instead exploit the fact that gradient ascent converges to a local optimum and analytically differentiate via the KKT conditions.
\section{General framework}
Our goal is to integrate combinatorial optimization into the loop of gradient-based training. That is, we aim to directly train the predictive model $m$ by running gradient steps on the objective in Equation \ref{eq:task-objective}, which integrates both prediction and optimization. The immediate difficulty is the dependence on $x^*(m(y, \omega))$. This term is problematic for two reasons. First, it is a discrete quantity since $x^*$ is a decision from a binary set. This immediately renders the output nondifferentiable with respect to the model parameters $\omega$. Second, even if $x^*$ were continuous, it is still defined as the solution to an optimization problem, so calculating a gradient requires us to differentiate through the argmax operation.
We resolve both difficulties by considering a continuous relaxation of the combinatorial decision problem. We show that for a broad class of combinatorial problems, there are appropriate continuous relaxations such that we can analytically obtain derivatives of the continuous optimizer with respect to the model parameters. This allows us to train any differentiable predictive model via gradient descent on a continuous surrogate to Equation \ref{eq:task-objective}. At test time, we solve the true discrete problem by rounding the continuous point.
More specifically, we relax the discrete constraint $x \in \mathcal{X}$ to the continuous one $x \in conv(\mathcal{X})$ where $conv$ denotes the convex hull. Let $x(\theta) = \arg\max_{x \in conv(\mathcal{X})}f(x, \theta)$ denote the optimal solution to the continuous problem. To train our predictive model, we would like to compute gradients of the whole-pipeline objective given by Equation \ref{eq:task-objective}, replacing the discrete quantity $x^*$ with the continuous $x$. We can obtain a stochastic gradient estimate by sampling a single $(y, \theta)$ from the training data. On this sample, the chain rule gives
\begin{align*}
\frac{d f (x(\hat{\theta}), \theta)}{d \omega} = \frac{d f(x(\hat{\theta}), \theta)}{d x(\hat{\theta})} \frac{d x(\hat{\theta})}{d \hat{\theta}} \frac{d \hat{\theta}}{d \omega}
\end{align*}
The first term is just the gradient of the objective with respect to the decision variable $x$, and the last term is the gradient of the model's predictions with respect to its own internal parameterization.
The key is computing the middle term, which measures how the optimal decision changes with respect to the prediction $\hat{\theta}$. For continuous problems, the optimal continuous decision $x$ must satisfy the KKT conditions (which are sufficient for convex problems). The KKT conditions define a system of linear equations based on the gradients of the objective and constraints around the optimal point. It is known that by applying the implicit function theorem, we can differentiate the solution to this linear system \cite{gould2016differentiating,donti2017task}. In more detail, recall that our continuous problem is over $conv(\mathcal{X})$, the convex hull of the discrete feasible solutions. This set is a polytope, which can be represented via linear inequalities as the set $\{x: Ax\leq b\}$ for some matrix $A$ and vector $b$. Let $(x, \lambda)$ be a pair of primal and dual variables which satisfy the KKT conditions. Then differentiating the conditions yields that
\begin{align}\label{eq:kkt}
\begin{bmatrix}
&\nabla^2_x f(x, \theta) & A^T\\
& diag(\lambda) A & diag(Ax-b)\\
\end{bmatrix}
\begin{bmatrix}
\frac{d x}{d\theta}\\
\frac{d \lambda}{d\theta}
\end{bmatrix}
=
\begin{bmatrix}
\frac{d \nabla_x f(x, \theta)}{d\theta} \\
0
\end{bmatrix}
\end{align}
By solving this system of linear equations, we can obtain the desired term $\frac{dx}{d\theta}$. However, the above approach is a general framework; our main technical contribution is to instantiate it for specific classes of combinatorial problems. Specifically, we need (1) an appropriate continuous relaxation, along with a means of solving the continuous optimization problem and (2) efficient access to the terms in Equation \ref{eq:kkt} which are needed for the backward pass (i.e., gradient computation). We provide both ingredients for two broad classes of problems: linear programming and submodular maximization. In each setting, the high-level challenge is to ensure that the continuous relaxation is differentiable, a feature not satisfied by naive alternatives. We also show how to efficiently compute terms needed for the backward pass, especially for the more intricate submodular case.
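As an illustration, the backward pass of Equation \ref{eq:kkt} can be assembled and solved directly; the sketch below (our own helper, assuming dense NumPy arrays for all derivatives) returns the block $\frac{dx}{d\theta}$.

```python
import numpy as np

def dx_dtheta(H, A, b, x, lam, dgrad):
    """Solve the KKT linear system for dx/dtheta.

    H: Hessian grad^2_x f(x, theta), shape (n, n).
    A, b: inequality constraints A x <= b, shapes (m, n) and (m,).
    x, lam: optimal primal/dual pair, shapes (n,) and (m,).
    dgrad: d(grad_x f)/dtheta, shape (n, p).
    """
    n, m = len(x), len(b)
    K = np.block([[H, A.T],
                  [np.diag(lam) @ A, np.diag(A @ x - b)]])
    rhs = np.vstack([dgrad, np.zeros((m, dgrad.shape[1]))])
    sol = np.linalg.solve(K, rhs)  # a least-squares solve is safer if K is near-singular
    return sol[:n]                 # the dx/dtheta block
```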
\subsection{Linear programming}
The first setting that we consider is combinatorial problems which can be expressed as a linear program with equality and inequality constraints in the form
\begin{align}
&\max \theta^T x \,\,\, \text{s.t. }\,Ax = b, \,\, Gx \leq h \label{problem:lp}
\end{align}
Example problems include shortest path, maximum flow, bipartite matching, and a range of other domains. For instance, in a shortest path problem $\theta$ contains the cost for traversing each edge, and we are interested in problems where the true costs are unknown and must be predicted. Since the LP can be regarded as a continuous problem (it just happens that the optimal solutions in these example domains are integral), we could attempt to apply Equation \ref{eq:kkt} and differentiate the solution. This approach runs into an immediate difficulty: the optimal solution to an LP may not be differentiable (or even continuous) with respect to $\theta$. This is because the optimal solution may ``jump'' to a different vertex. Formally, the left-hand side matrix in Equation \ref{eq:kkt} becomes singular since $\nabla_x^2 f(x, \theta)$ is always zero.
We resolve this challenge by instead solving the regularized problem
\begin{align}
&\max \theta^T x - \gamma ||x||_2^2 \,\,\, \text{s.t. }\,Ax = b, \,\, Gx \leq h\label{eq:lp-quad}
\end{align}
which introduces a penalty proportional to the squared norm of the decision vector. This transforms the LP into a strongly concave quadratic program (QP). The Hessian is given by $\nabla_x^2 f(x, \theta) = -2\gamma I$ (where $I$ is the identity matrix), which renders the solution differentiable under mild conditions (see supplement for proof):
\begin{theorem}
Let $x(\theta)$ denote the optimal solution of Problem \ref{eq:lp-quad}. Provided that the problem is feasible and all rows of $A$ are linearly independent, $x(\theta)$ is differentiable with respect to $\theta$ almost everywhere. If $A$ has linearly dependent rows, removing these rows yields an equivalent problem which is differentiable almost everywhere. Wherever $x(\theta)$ is differentiable, it satisfies the conditions in Equation \ref{eq:kkt}.
\end{theorem}
Moreover, we can control the loss that regularization can cause on the original, linear problem:
\begin{theorem}
Define $D = \max_{x, y\in conv(\mathcal{X})} ||x - y||^2$ as the squared diameter of the feasible set and $OPT$ to be the optimal value for Problem \ref{problem:lp}. We have $\theta^\top x(\theta) \geq OPT - \gamma D$.
\end{theorem}
Together, these results give us a differentiable surrogate which still enjoys an approximation guarantee relative to the integral problem. Computing the backward pass via Equation \ref{eq:kkt} is now straightforward since all the relevant terms are easily available. Since $\nabla_x \theta^\top x = \theta$, we have $\frac{d \nabla_x f(x, \theta)}{d\theta} = I$. All other terms are easily computed from the optimal primal-dual pair $(x, \lambda)$ which is output by standard QP solvers. We can also leverage a recent QP solver \cite{amos2017optnet} which maintains a factorization of the KKT matrix for a faster backward pass. At test time, we simply set $\gamma = 0$ to produce an integral decision.
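For concreteness, a minimal sketch of the regularized forward pass is shown below, using the off-the-shelf modeling library \texttt{cvxpy} (one possible solver choice; the function name and default $\gamma$ are our own).

```python
import cvxpy as cp

def solve_regularized_lp(theta, A, b, G, h, gamma=0.1):
    # Regularized problem: max theta^T x - gamma * ||x||^2
    #                      s.t. A x = b, G x <= h.
    x = cp.Variable(len(theta))
    constraints = [A @ x == b, G @ x <= h]
    prob = cp.Problem(cp.Maximize(theta @ x - gamma * cp.sum_squares(x)),
                      constraints)
    prob.solve()
    # The duals of the inequality constraints feed the KKT backward pass.
    return x.value, constraints[1].dual_value
```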
\subsection{Submodular maximization}
We consider problems where the underlying objective is to maximize a set function $f: 2^V \to \mathbb{R}$, where $V$ is a ground set of items. A set function is \emph{submodular} if for any $A \subseteq B$ and any $v \in V\setminus B$, $f(A \cup \{v\}) - f(A) \geq f(B \cup \{v\}) - f(B)$. We will restrict our consideration to submodular functions which are \emph{monotone} ($f(A \cup \{v\}) - f(A) \geq 0 \,\, \forall A, v$) and \emph{normalized} ($f(\emptyset) = 0$). This class of functions contains many combinatorial problems which have been considered in machine learning and artificial intelligence (e.g., influence maximization, facility location, diverse subset selection, etc.). We focus on the cardinality-constrained optimization problem $\max_{|S| \leq k} f(S)$, though our framework easily accommodates more general matroid constraints.
\textbf{Continuous relaxation: } We employ the canonical continuous relaxation for submodular set functions, which associates each set function $f$ with its \emph{multilinear extension} $F$ \cite{calinescu2011maximizing}. We can view a set function as defined on the domain $\{0,1\}^{|V|}$, where each element is an indicator vector specifying which items are contained in the set. The extension $F$ is a continuous function defined on the hypercube $[0,1]^{|V|}$. We interpret a given fractional vector $x \in [0,1]^{|V|}$ as giving the marginal probability that each item is included in the set. $F(x)$ is the expected value of $f(S)$ when each item $i$ is included in $S$ independently with probability $x_i$. In other words, $F(x) = \sum_{S \subseteq V} f(S) \prod_{i\in S}x_i \prod_{i\not\in S} (1-x_i)$. While this definition sums over exponentially many terms, arbitrarily close approximations can be obtained via random sampling. Further, closed forms are available for many cases of interest \cite{iyer2014monotone}. Importantly, well-known rounding algorithms \cite{calinescu2011maximizing} can convert a fractional point $x$ to a set $S$ satisfying $\E[f(S)] \geq F(x)$; i.e., the rounding is lossless.
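As an illustration, such a sampling estimate of $F$ takes only a few lines; in the sketch below, \texttt{f\_set} is an assumed callable evaluating $f$ on a Boolean membership vector.

```python
import numpy as np

def multilinear_extension(f_set, x, n_samples=1000, seed=0):
    # Monte Carlo estimate of F(x): include item i independently w.p. x_i.
    rng = np.random.default_rng(seed)
    draws = rng.random((n_samples, len(x))) < x
    return np.mean([f_set(S) for S in draws])
```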
As a proxy for the discrete problem $\max_{|S| \leq k} f(S)$, we can instead solve $\max_{x \in conv(\mathcal{X})} F(x)$, where $\mathcal{X} = \{x \in \{0,1\}^{|V|}: \sum_i x_i \leq k\}$. Unfortunately, $F$ is not in general concave. Nevertheless, many first-order algorithms still obtain a constant factor approximation. For instance, a variant of the Frank-Wolfe algorithm solves the continuous maximization problem with the optimal approximation ratio of $(1 - 1/e)$ \cite{calinescu2011maximizing,bian2017guaranteed}.
However, non-concavity complicates the problem of differentiating through the continuous optimization problem. Any polynomial-time algorithm can only be guaranteed to output a \emph{local} optimum, which need not be unique (compared to strongly convex problems, where there is a single global optimum). Consequently, the algorithm used to select $x(\theta)$ might return a \emph{different} local optimum under an infinitesimal change to $\theta$. For instance, the Frank-Wolfe algorithm (the most common algorithm for continuous submodular maximization) solves a linear optimization problem at each step. Since, as noted above, the solution to a linear problem may be discontinuous in $\theta$, this could render the output of the optimization problem nondifferentiable.
We resolve this difficulty through a careful choice of optimization algorithm for the forward pass. Specifically, we apply projected stochastic gradient ascent (SGA), which has recently been shown to obtain a $\frac{1}{2}$-approximation for continuous submodular maximization \cite{Hassani2017gradient}. Although SGA is only guaranteed to find a local optimum, each iteration applies purely differentiable computations (a gradient step and projection onto the set $conv(\mathcal{X})$), and so the final output after $T$ iterations will be differentiable as well. Provided that $T$ is sufficiently large, this output will converge to a local optimum, which must satisfy the KKT conditions. Hence, we can apply our general approach to the local optimum returned by SGA. The following theorem shows that the local optima of the multilinear extension are differentiable:
\begin{theorem} \label{theorem:submod-diff}
Suppose that $x^*$ is a local maximum of the multilinear extension, i.e., $\nabla_x F(x^*, \theta) = 0$ and $\nabla_x^2 F(x^*, \theta) \prec 0$. Then, there exists a neighborhood $\mathcal{I}$ around $x^*$ such that the maximizer of $F(\cdot, \theta)$ within $\mathcal{I} \cap conv(\mathcal{X})$ is differentiable almost everywhere as a function of $\theta$, with $\frac{d x(\theta)}{d\theta}$ satisfying the conditions in Equation \ref{eq:kkt}.
\end{theorem}
We remark that Theorem \ref{theorem:submod-diff} requires a local maximum, while gradient ascent may in theory find saddle points. However, recent work shows that random perturbations ensure that gradient ascent quickly escapes saddle points and finds an approximate local optimum \cite{jin2017escape}.
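A minimal sketch of this forward pass is given below; \texttt{grad\_F} is an assumed callable returning (an estimate of) $\nabla_x F(x, \theta)$, and the projection onto $\{x \in [0,1]^{|V|} : \sum_i x_i \leq k\}$ is computed by bisection on the multiplier of the cardinality constraint.

```python
import numpy as np

def project(x, k, tol=1e-8):
    # Euclidean projection onto {x in [0,1]^n : sum(x) <= k}.
    y = np.clip(x, 0.0, 1.0)
    if y.sum() <= k:
        return y
    lo, hi = 0.0, x.max()
    while hi - lo > tol:
        tau = 0.5 * (lo + hi)
        if np.clip(x - tau, 0.0, 1.0).sum() > k:
            lo = tau
        else:
            hi = tau
    return np.clip(x - hi, 0.0, 1.0)

def sga(grad_F, n, k, steps=200, lr=0.1):
    # Projected (stochastic) gradient ascent on the multilinear extension.
    x = np.full(n, k / n)               # feasible interior starting point
    for _ in range(steps):
        x = project(x + lr * grad_F(x), k)
    return x
```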
\textbf{Efficient backward pass:} We now show how the terms needed to compute gradients via Equation \ref{eq:kkt} can be efficiently obtained. In particular, we need access to the optimal dual variable $\lambda$ as well as the term $\frac{d \nabla_x F(x, \theta)}{d\theta}$. These were easy to obtain in the LP setting but the submodular setting requires some additional analysis. Nevertheless, we show that both can be obtained efficiently.
\textbf{Optimal dual variables:} SGA only produces the optimal primal variable $x$, not the corresponding dual variable $\lambda$ which is required to solve Equation \ref{eq:kkt} in the backward pass. We show that for cardinality-constrained problems, we can obtain the optimal dual variables analytically given a primal solution $x$. Let $\lambda_i^L$ be the dual variable associated with the constraint $x_i \geq 0$, $\lambda_i^U$ with $x_i \leq 1$ and $\lambda^S$ with $\sum_i x_i \leq k$. By differentiating the Lagrangian, any optimum satisfies
\begin{align*}
\nabla_{x_i} F(x) + \lambda_i^L - \lambda_i^U - \lambda^S = 0 \quad \forall i
\end{align*}
where complementary slackness requires that $\lambda_i^L = 0$ if $x_i > 0$ and $\lambda_i^U = 0$ if $x_i < 1$. Further, it is easy to see that $\nabla_{x_i} F(x)$ must take a common value for all $i$ with $0 < x_i < 1$. Otherwise, $x$ could not be (locally) optimal since we could increase the objective by finding a pair $i,j$ with $\nabla_{x_i} F(x) > \nabla_{x_j} F(x)$, increasing $x_i$, and decreasing $x_j$. Let $\nabla_{*}$ denote the shared gradient value for fractional entries. We can solve the above equation and express the optimal dual variables as
\begin{align*}
\lambda^S = \nabla_*, \,\,\,\, \lambda_i^L = \lambda^S - \nabla_{x_i} F, \,\,\,\, \lambda_i^U = \nabla_{x_i} F - \lambda^S
\end{align*}
where the expressions for $\lambda_i^L$ and $\lambda_i^U$ apply only when $x_i = 0$ and $x_i = 1$ respectively (otherwise, complementary slackness requires these variables be set to 0).
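In code, this dual recovery is immediate; the sketch below follows the conventions above, with \texttt{grad} holding $\nabla_x F$ at the SGA solution and a tolerance deciding which coordinates count as fractional.

```python
import numpy as np

def recover_duals(x, grad, eps=1e-6):
    # Analytic duals for the constraints x_i >= 0, x_i <= 1, sum(x) <= k.
    frac = (x > eps) & (x < 1 - eps)
    grad_star = grad[frac].mean() if frac.any() else 0.0  # shared gradient value
    lam_S = grad_star                                     # multiplier of sum(x) <= k
    lam_L = np.where(x <= eps, lam_S - grad, 0.0)         # active only at x_i = 0
    lam_U = np.where(x >= 1 - eps, grad - lam_S, 0.0)     # active only at x_i = 1
    return lam_S, lam_L, lam_U
```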
\textbf{Computing $\mathbf{\frac{d}{d\theta} \nabla_x F(x, \theta)}$: } We show that this term can be obtained in closed form for the case of probabilistic coverage functions, which includes many cases of practical interest (e.g.\ budget allocation, sensor placement, facility location, etc.). However, our framework can be applied to arbitrary submodular functions; we focus here on coverage functions because they are particularly common in applications. A coverage function takes the following form. There is a set of items $U$, and each $j \in U$ has a weight $w_j$. The algorithm can choose from a ground set $V$ of actions. Each action $a_i$ covers each item $j$ independently with probability $\theta_{ij}$. We consider the case where the probabilities $\theta$ are unknown and must be predicted from data. For such problems, the multilinear extension has a closed form
\begin{align*}
F(x, \theta) = \sum_{j \in U} w_j \left(1 - \prod_{i \in V} \left(1 - x_{i}\theta_{ij}\right)\right)
\end{align*}
and we can obtain the expression
\begin{align*}
\frac{d}{d\theta_{kj}} \nabla_{x_i} F(x, \theta) = \begin{cases}
- w_j\, \theta_{ij}\, x_k \prod_{\ell \neq i,k} \left(1 - x_\ell \theta_{\ell j}\right) & \text{if } k \neq i\\
w_j \prod_{\ell \neq i} \left(1 - x_\ell \theta_{\ell j}\right) & \text{otherwise}.
\end{cases}
\end{align*}
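A direct vectorized implementation of these closed forms might look as follows (the leave-one-out products assume nonzero factors; masked products or logarithms can be used otherwise, and $\frac{d}{d\theta}\nabla_x F$ follows the same pattern).

```python
import numpy as np

# x: shape (n_actions,); theta: shape (n_actions, n_items); w: shape (n_items,).
def coverage_F(x, theta, w):
    miss = 1.0 - x[:, None] * theta          # P[action i misses item j]
    return w @ (1.0 - np.prod(miss, axis=0))

def grad_x_F(x, theta, w):
    miss = 1.0 - x[:, None] * theta
    total = np.prod(miss, axis=0)            # product over all actions, per item
    loo = total[None, :] / miss              # leave-one-out products (miss > 0 assumed)
    return (theta * loo) @ w                 # grad_{x_i} F for every action i
```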
\section{Experiments}
We conduct experiments across a variety of domains in order to compare our decision-focused learning approach with traditional two-stage methods. We start out by describing the experimental setup for each domain. Then, we present results for the complete data-decisions pipeline in each domain (i.e., the final solution quality each method produces on the optimization problem). We find that decision-focused learning almost always outperforms two-stage approaches. To investigate this phenomenon, we show more detailed results about what each model learns. Two-stage approaches typically learn predictive models which are more accurate according to standard measures of machine learning accuracy. However, decision-focused methods learn qualities which are important for optimization performance even if this leads to lower accuracy in an overall sense.
\begin{table*}\centering
\fontsize{9}{9}\selectfont
\caption{Solution quality of each method for the full data-decisions pipeline.}\label{table:opt}
\begin{tabular}{@{}r|ccccccccc@{}}\toprule
& \multicolumn{3}{c}{Budget allocation} & \phantom{ab}& \multicolumn{1}{c}{ Matching} &
\phantom{ac} & \multicolumn{3}{c}{Diverse recommendation}\\
\cmidrule{2-4} \cmidrule{6-6} \cmidrule{8-10}
$k = $ & $5$ & $10$ & $20$ && $-$ && $5$ & $10$ & $20$\\ \midrule
NN1-Decision & \textbf{49.18 $\pm$ 0.24} & \textbf{72.62 $\pm$ 0.33} & \textbf{98.95 $\pm$ 0.46} && 2.50 $\pm$ 0.56 && \textbf{15.81 $\pm$ 0.50} & \textbf{29.81 $\pm$ 0.85} & \textbf{52.43 $\pm$ 1.23}\\
NN2-Decision & 44.35 $\pm$ 0.56 & 67.64 $\pm$ 0.62 & 93.59 $\pm$ 0.77 && \textbf{6.15 $\pm$ 0.38} && 13.34 $\pm$ 0.77 & 26.32 $\pm$ 1.38 & 47.79 $\pm$ 1.96\\
NN1-2Stage & 32.13 $\pm$ 2.47 & 45.63 $\pm$ 3.76 & 61.88 $\pm$ 4.10 && 2.99 $\pm$ 0.76 && 4.08 $\pm$ 0.16 & 8.42 $\pm$ 0.29 & 19.16 $\pm$ 0.57\\
NN2-2Stage & 9.69 $\pm$ 0.05 & 18.93 $\pm$ 0.10 & 36.16 $\pm$ 0.18 && 3.49 $\pm$ 0.32 && 11.63 $\pm$ 0.43 & 22.79 $\pm$ 0.66 & 42.37 $\pm$ 1.02\\
RF-2Stage & \textbf{48.81 $\pm$ 0.32} & \textbf{72.40 $\pm$ 0.43} & \textbf{98.82 $\pm$ 0.63} && 3.66 $\pm$ 0.26 && 7.71 $\pm$ 0.18 & 15.73 $\pm$ 0.34 & 31.25 $\pm$ 0.64\\
Random & 9.69 $\pm$ 0.04 & 18.92 $\pm$ 0.09 & 36.13 $\pm$ 0.14 && 2.45 $\pm$ 0.64 && 8.19 $\pm$ 0.19 & 16.15 $\pm$ 0.35 & 31.68 $\pm$ 0.71\\
\bottomrule
\end{tabular}
\end{table*}
\textbf{Budget allocation: }We start with a synthetic domain which allows us to illustrate how our methods differ from traditional approaches and explore when improved decision making is achievable. This example concerns budget allocation, a submodular maximization problem which models an advertiser's choice of how to divide a finite budget $k$ between a set of channels. There is a set of customers $R$ and the objective is $f(S) = \sum_{v \in R} 1 - \prod_{u \in S} (1 - \theta_{uv})$, where $\theta_{uv}$ is the probability that advertising on channel $u$ will reach customer $v$. This is the expected number of customers reached. Variants on this problem have been the subject of a great deal of research \cite{alon2012optimizing,soma2014optimal,miyauchi2015threshold}.
In our problem, the matrix $\theta$ is not known in advance and must be learned from data. The ground truth matrices were generated using the Yahoo webscope \cite{yahoowebscope} dataset, which logs bids placed by advertisers on a set of phrases. In our problem, the phrases are channels and the accounts are customers. Each instance samples a random subset of 100 channels and 500 customers. For each edge $(u,v)$ present in the dataset, we sample $\theta_{uv}$ uniformly at random in [0,0.2]. For each channel $u$, we generate a feature vector from that channel's row of the matrix, $\theta_{u}$, via a complex nonlinear function. Specifically, $\theta_{u}$ is passed through a 5-layer neural network with random weight matrices and ReLU activations to obtain a feature vector $y_u$. The learning task is to reconstruct $\theta_{u}$ from $y_u$. The optimization task is to select $k$ channels in order to maximize the number of customers reached.
\textbf{Bipartite matching: } This problem occurs in many domains; e.g., bipartite matching has been used to model the problem of public housing programs matching housing resources to applicants \cite{benabbou2018diversity} or platforms matching advertisers with users \cite{bellur2007improved}. In each of these cases, the reward for matching any two nodes is not initially known, but is instead predicted from the features available for both parties. Bipartite matching can be formulated as a linear program, allowing us to apply our decision-focused approach. The learning problem is to use node features to predict whether each edge is present or absent (a classification problem). The optimization problem is to find a maximum matching in the predicted graph.
Our experiments use the cora dataset \cite{sen2008collective}. The nodes are scientific papers and edges represent citations. Each node's feature vector indicates whether each word in a vocabulary appeared in the paper (there are 1433 such features). The overall graph has 2708 nodes. In order to construct instances for the decision problem, we partitioned the complete graph into 27 instances, each with 100 nodes, using metis \cite{karypis1998fast}. We divided the nodes in each instance into the sides of a bipartite graph (of 50 nodes each) such that the number of edges crossing sides was maximized. The learning problem is much more challenging than before: unlike in budget allocation, the features do not contain enough information to reconstruct the citation network. However, a decision maker may still benefit from leveraging whatever signal is available.
\textbf{Diverse recommendation:}
One application of submodular optimization is to select diverse sets of items, e.g.\ for recommendation systems or document summarization. Suppose that each item $i$ is associated with a set of topics $t(i)$. Then, we aim to select a set of $k$ items which collectively cover as many topics as possible: $f(S) = \left|\bigcup_{i \in S}t(i)\right|$. Such formulations have been used across recommendation systems \cite{ashkan2015optimal}, text summarization \cite{takamura2009text}, web search \cite{agrawal2009diversifying} and image segmentation \cite{prasad2014submodular}.
In many applications, the item-topic associations $t(i)$ are not known in advance. Hence, the learning task is to predict a binary matrix $\theta$ where $\theta_{ij}$ is 1 if item $i$ covers topic $j$ and 0 otherwise. The optimization task is to find a set of $k$ items maximizing the number of topics covered according to $\theta$. We consider a recommendation systems problem based on the Movielens dataset \cite{movielens} in which 2113 users rate 10197 movies (though not every user rated every movie). The items are the movies, while the topics are the top 500 actors. In our problem, the movie-actor assignments are unknown, and must be predicted only from user ratings. This is a \emph{multilabel classification problem} where we attempt to predict which actors are associated with each movie. We randomly divided the movies into 101 problem instances, each with 100 movies. The feature matrix $y$ contains the ratings given by each of the 2113 users for the 100 movies in the instance (with zeros where no rating is present).
\begin{table*}\centering
\fontsize{9.5}{9.5}\selectfont
\caption{Accuracy of each method according to standard measures.}\label{table:accuracy}
\begin{tabular}{@{}r|ccccccc@{}}\toprule
& \multicolumn{1}{c}{Budget allocation} & \phantom{abc} & \multicolumn{2}{c}{ Matching} & \phantom{abc} &\multicolumn{2}{c}{Diverse recommendation}\\
\cmidrule{2-2} \cmidrule{4-5} \cmidrule{7-8}
& MSE && CE & AUC && CE & AUC\\ \midrule
NN1-Decision & 0.8673e-02 $\pm$ 1.83e-04 && 0.994 $\pm$ 0.002 & 0.501 $\pm$ 0.011 && 1.053 $\pm$ 0.005 & 0.593 $\pm$ 0.003\\
NN2-Decision & 1.7118e-02 $\pm$ 2.65e-04 && 0.689 $\pm$ 0.004 & \textbf{0.560 $\pm$ 0.006} && 1.004 $\pm$ 0.022 & 0.577 $\pm$ 0.008\\
NN1-2Stage & 0.0501e-02 $\pm$ 2.67e-06 && 0.696 $\pm$ 0.001 & 0.499 $\pm$ 0.013 && 0.703 $\pm$ 0.001 & 0.389 $\pm$ 0.003\\
NN2-2Stage & 0.0530e-02 $\pm$ 2.27e-06 && \textbf{0.223 $\pm$ 0.005} & 0.498 $\pm$ 0.007 && 0.690 $\pm$ 0.000 & \textbf{0.674 $\pm$ 0.004}\\
RF-2Stage & \textbf{0.0354e-02 $\pm$ 4.17e-06} && 0.693 $\pm$ 0.000 & 0.500 $\pm$ 0.000 && \textbf{0.689 $\pm$ 0.000} & 0.500 $\pm$ 0.000\\
\bottomrule
\end{tabular}
\end{table*}
\textbf{Algorithms and experimental setup: } In each domain, we randomly divided the instances into 80\% training and 20\% test. All results are averaged over 30 random splits. Our decision-focused framework was instantiated using feed-forward, fully connected neural networks as the underlying predictive model. All networks used ReLU activations. We experimented with networks with 1 layer, representing a restricted class of models, and 2-layer networks, where the hidden layer (of size 200) gives additional expressive power. We compared two training methods. First, the decision-focused approach proposed above. Second, a two-stage approach that uses a machine learning loss function (mean squared error for regression tasks and cross-entropy loss for classification). \emph{This allows us to isolate the impact of the training method since both use the same underlying architecture.} We experimented with additional layers but observed little benefit for either method. All networks were trained using Adam with learning rate $10^{-3}$. We refer to the 1-layer decision-focused network as \emph{NN1-Decision} and the 1-layer two-stage network as \emph{NN1-2Stage} (with analogous names for the 2-layer networks). We also compared to a random forest ensemble of 100 decision trees (\emph{RF-2Stage}). Gradient-based training cannot be applied to random forests, so this benchmark represents a strong predictive model which can be used by two-stage approaches but not by our framework. Lastly, we show performance for a random decision.
\textbf{Solution quality: }
Table \ref{table:opt} shows the solution quality that each approach obtains on the full pipeline; i.e., the objective value of its decision evaluated using the true parameters. Each value is the mean (over the 30 iterations) with a bootstrapped 95\% confidence interval. For the budget allocation and diverse recommendation tasks, we varied the budget $k$. The decision-focused methods obtain the highest performance across the board, tied with random forests on the synthetic budget allocation task.
We now consider each individual domain, starting with budget allocation. Both decision-focused methods substantially outperform the two-stage neural networks, obtaining at least 37\% greater objective value. This demonstrates that with a fixed predictive architecture, decision-focused learning can greatly improve solution quality. NN1-Decision performs somewhat better than NN2-Decision, suggesting that the simpler class of models is easier to train. However, NN1-2Stage performs significantly worse than NN1-Decision, indicating that alignment between training and the decision problem is highly important for simple models to succeed. RF-2Stage performs essentially equivalently to NN1-Decision. This is potentially surprising since random forests are a much more expressive model class. As we will see later, much of the random forest's success is due to the fact that the features in this synthetic domain are very high-signal; indeed, they suffice for near-perfect reconstruction. The next two domains, both based on real data, explore low-signal settings where highly accurate recovery is impossible.
In bipartite matching, NN2-Decision obtains the highest overall performance, making \emph{over 70\% more matches} than the next best method (RF-2Stage, followed closely by NN2-2Stage). Both 1-layer models perform extremely poorly, indicating that the more complex learning problem requires a more expressive model class. However, the highly expressive RF-2Stage does only marginally better than NN2-2Stage, demonstrating the critical role of aligning training and decision making.
In the diverse recommendation domain, NN1-Decision has the best performance, followed closely by NN2-Decision. NN2-2Stage trails by 23\%, and NN1-2Stage performs extremely poorly. This highlights the importance of the training method within the same class of models: NN1-Decision obtains approximately 2.7 times greater objective value than NN1-2Stage. RF-2Stage also performs poorly in this domain, and is seemingly unable to extract any signal which boosts decision quality above that of random.
\begin{figure}
\centering
\includegraphics[width=3in]{predictions_combined.pdf}
\caption{(a) ground truth (b) NN1-2Stage (c) NN1-Decision} \label{fig:visualization}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=1.5in]{scatter_diffopt}
\includegraphics[width=1.5in]{scatter_twostage}
\caption{Left: our method's predicted total out-weight for each item. Right: predictions from two stage method.}\label{fig:outweight}
\end{figure}
\textbf{Exploration of learned models:}
We start out by showing the accuracy of each method according to standard measures, summarized in Table \ref{table:accuracy}. For classification domains (diverse recommendation, matching), we show cross-entropy loss (which is directly optimized by the two stage networks) and AUC. For regression (the budget allocation domain), we show mean squared error (MSE). For budget allocation and diverse recommendation, we fixed $k=10$.
The two-stage methods are, in almost all cases, significantly more accurate than the decision-focused networks despite their worse solution quality. Moreover, no accuracy measure is well-correlated with solution quality. On budget allocation, the two decision-focused networks have the worst MSE but the best solution quality. On bipartite matching, NN2-2Stage has better cross-entropy loss but much worse solution quality than NN2-Decision. On diverse recommendation, NN2-2Stage has the best AUC but worse solution quality than either decision-focused network.
This incongruity raises the question of what differentiates the predictive models learned via decision-focused training. We now present a more detailed exploration of each model's predictions. Due to space constraints, we focus on the simpler case of the synthetic budget allocation task, comparing NN1-Decision and NN1-2Stage. However, the higher-level insights generalize across domains (see the supplement for more detailed visualizations).
Figure \ref{fig:visualization} shows each model's predictions on an example instance. Each heat map shows a predicted matrix $\theta$, where dark entries correspond to a high prediction and light entries to low. The first matrix is the ground truth. The second matrix is the prediction made by NN1-2Stage, which matches the overall sparsity of the true $\theta$ but fails to recover almost all of the true connections. The last matrix corresponds to NN1-Decision and appears completely dissimilar to the ground truth. Nevertheless, these seemingly nonsensical predictions lead to the best quality decisions.
To investigate the connection between predictions and decisions, Figure \ref{fig:outweight} aggregates each model's predictions at the channel level. Formally, we examine the predicted out-weight for each channel $u$, i.e., the sum of the row $\theta_u$. This is a coarse measure of $u$'s importance for the optimization problem; channels with connections to many customers are more likely to be good candidates for the optimal set. Surprisingly, NN1-Decision's predicted out-weights are extremely well correlated with the ground truth out-weights ($r^2 = 0.94$). However, the absolute magnitudes of its predictions are skewed: the bulk of channels have low out-weight (less than 1), but NN1-Decision's predictions are all at least 13. By contrast, NN1-2Stage has poorer correlation, making it less useful for identifying the outliers which comprise the optimal set. However, it better matches the values of low out-weight channels and hence attains better MSE. This illustrates how aligning the model's training with the optimization problem leads it to focus on qualities which are specifically important for decision making, even if this compromises accuracy elsewhere.
\textbf{Acknowledgments: } This work was supported by the Army Research Office (MURI W911NF1810208) and a National Science Foundation Graduate Research Fellowship.
\bibliographystyle{aaai}
Liquid crystal droplets have been extensively studied, both from theoretical and experimental points of view \cite{deGennes1993,Dubois-Violette1969,Williams1986,Bezic1992,Lavrentovich1998,Lopez-Leon2011b,Orlova2015}. They are of particular interest to the scientific community because they represent one of the simplest systems in which topological defects are found to be stable. Indeed, the natural curvature of the spherical interface induces geometrical frustration in the molecular arrangement, resulting in disordered regions called topological defects. These defects are not only interesting from a fundamental point of view, but also control the mechanical and optical properties of the droplet. Many industrial applications have benefited from this interesting feature \cite{Bahadur}. Switchable windows, in which the optical properties of nematic droplets are tuned by an externally applied electric field, are a good example of this \cite{Drzaic1986,Drzaic1988}.
The richness in defect configurations increases immensely if a chiral nematic or cholesteric is used to make the droplets. Although chiral nematics in confined geometries have been quite extensively studied in the past \cite{Bouligand1970,Bouligand1984,Bezic1992,Xu1997,Lavrentovich1998}, state-of-the-art experimental and numerical techniques have revealed a plethora of interesting new structures \cite{Sec2012,Sec2014,Orlova2015,PosnjakG_SciRep6_2016} and possible applications \cite{Lin2011,Geng2013,MannaU_AngewChemIntEd52_2013,AguirreLE_ProcNatlAcadSci113_2016,ZhouY_ACSNano_2016}. In particular, the recent discovery of lasing properties in cholesteric droplets has revived the research in the domain \cite{Humar2010,GardinerDJ_OptExpress19_2011,Uchida2013, Napoli2013, ChenL_AdvOptMater2_2014, Uchida2015, WandCR_PhysRevE91_2015}. Due to molecular chirality, cholesteric liquid crystals display a mesoscopic helical organization, with a repeat distance set by the helical pitch. This layered structure makes each droplet a Bragg resonator, where light emission can be stimulated by including additional dye molecules in the liquid crystal. Such a configuration has an associated topological defect that spans the droplet radius and plays a determinant role in the droplet optical properties. Numerical simulations have provided a detailed description of the molecular organization within the droplet, revealing the intricate double-helix structure of the radial defect \cite{Sec2012}.
The detailed structure of the double-helix radial defect was first observed experimentally in water-cholesteric-water double emulsions \cite{Darmon2015}. In this geometry, the liquid crystal is not confined to a bulk droplet, but to a thick spherical shell. This configuration enables tuning the chirality of the system by playing with the shell thickness-to-pitch ratio. At high chirality, the shell displays a radial defect with an intricate double-helix structure, as predicted by simulations for a bulk cholesteric droplet. However, at low chirality, the shell is characterized by two defects, each of them made of a number of singular rings that pile up with a certain separation distance. Between these two limit cases, new defect configurations are expected to emerge \cite{WandCR_PhysRevE91_2015}. These new configurations might be relevant in the context of optical applications \cite{Uchida2013, ChenL_AdvOptMater2_2014,Uchida2015,Lagerwall2016} and in the design of new building blocks for colloidal self-assembly \cite{Poon2004,Lavrentovich2011,Geng2013,Lopez-Leon2011,Sec2012b, Lagerwall2012, Koning2013}.
\begin{figure*}[t!]
\centering
\includegraphics[width=1\textwidth]{AllConfigurations8}
\caption{(a) Schematics showing a side view of a liquid crystal shell. (b-f) Top view of cholesteric liquid crystal shells between crossed polarisers. Each picture correspond to a specific defect configuration: (b) Four defects of winding number $+1/2$, (c) One defect of winding number $+1$ and two defects of winding number $+1/2$, (d) Two defects of winding number $+1$ \cite{Darmon2015}, (e) One defect of winding number $+3/2$ and one defect of winding number $+1/2$, (f) A single defect of winding number $+2$ \cite{Darmon2015}. Scale bar: $20\,\mu m$.}
\label{Shells1}
\end{figure*}
In the present work, we study the new defect structures emerging in cholesteric shells for a wide range of shell thicknesses and cholesteric pitches. We show the existence of five possible configurations, provided that the molecules are tangentially anchored to the shell boundaries, which differ in the number and winding number of the defects. Interestingly, we report for the first time the existence of stable $+3/2$ defects in a spherical geometry. By looking at a very large sample of these shells, we show how these configurations are statistically distributed as a function of two relevant dimensionless parameters $u=(R-a)/R$ and $c=(R-a)/p$, where $a$ and $R$ respectively denote the inner and outer radii of the shell and $p$ is the cholesteric pitch. We study the detailed structure of each of the observed defects by bringing together experiments and numerical simulations, and show the existence of structures that are essentially different from those predicted for bulk droplets. We then investigate the possibility of inducing transitions between defect configurations. By performing de-swelling experiments, we show that it is possible to induce topological transformations where the defects recombine themselves to form new defects of higher winding number. These transformations typically occur by following a well-defined path. Finally, we study the intricate trajectories of the defects before they recombine and develop a simple theoretical framework to explain the dynamics of these transitions.
\section{Equilibrium configurations}
\subsection{Experimental and numerical methods}
We use a glass capillary microfluidic device to generate cholesteric liquid crystal shells \cite{Utada2005}. The shells are double emulsions with the following composition: the inner and outer phases are composed of water with 1\%wt Polyvinyl Alcohol (PVA), and the middle phase is a mixture of 4-Cyano-4'-pentylbiphenyl (5CB) and a chiral dopant (S)-4-Cyano-4'-(2-methylbutyl)biphenyl (CB15). The amount of CB15 in the liquid crystalline solution determines the microscopic pitch, denoted $p$, of the resulting right-handed cholesteric helical arrangement \cite{Ko2009}. The role of PVA is two-fold: $(i)$ it acts as a surfactant to stabilize the double emulsion and $(ii)$ it enforces planar degenerate anchoring on both inner and outer boundaries, meaning that the liquid crystal molecules are forced to lie tangentially to the two interfaces. The radii of the inner and outer droplets, see Fig.~\ref{Shells1}(a), are respectively denoted $a$ and $R$. In the present study, $R$ ranges between 30 and 90\,$\mu m$. The density mismatch between the inner aqueous solution and the liquid crystalline solution causes thickness heterogeneity in the shell. However, a disjoining pressure prevents contact between the two droplets, so that the minimal shell thickness is $h_0 \ne 0$ (see Fig.~\ref{Shells1}(a)). The average shell thickness can be defined as $h\equiv R-a$. For each mixture, we ensure that we are far from the liquid crystal/isotropic phase transition to avoid defect nucleation or recombination, commonly observed close to the transition.
To gain insight into the detailed structure of the observed defects, we also perform numerical simulations. Since the shell thickness varies gradually and $R$ is large compared to $h$, the shell thickness gradient only affects the movement and the equilibrium position of the defects, but has negligible impact on their internal director structure. To show the structure of each defect, we thus assume a flat planar degenerate cell, which models a small area of the shell around the defect, and we enforce a fixed winding number on the outer boundary. The simulation was done using the Landau-de Gennes free energy:
\begin{align}
F=&\int_{\text{bulk}} \left\lbrace \frac{A}{2}Q_{ij}Q_{ji}+\frac{B}{3}Q_{ij}Q_{jk}Q_{ki}+\frac{C}{4}(Q_{ij}Q_{ji})^2 \right\rbrace {\rm d}V \nonumber \\
+&\int_{\text{bulk}} \left\lbrace \frac{L}{2}Q_{ij,k}Q_{ji,k}+2q_0 L \epsilon_{ikl}Q_{ij}Q_{lj,k} \right\rbrace {\rm d}V\\
+&\int_{\text{surface}} \left\lbrace \frac{W}{2}\left(\tilde{Q}_{ij}-\tilde{Q}_{ij}^\perp\right)^2\right\rbrace {\rm d}S \nonumber \ ,
\end{align}
which was then minimized with a finite difference method on a $360\times360\times200$ grid. Note that the first two contributions respectively account for the phase transition and bulk elasticity, with $A$, $B$, $C$ the material parameters and $L$ the single elastic constant, consistently with previous studies \cite{Sec2012,Darmon2015}. The auxiliary tensors $\tilde{Q}_{ij}$ and $\tilde{Q}_{ij}^\perp$ respectively denote the Q-tensor with added trace and its projection to the surface, as defined by Fournier and Galatola \cite{Fournier2002}, and $q_0=2\pi/p$ is the intrinsic wave number of the cholesteric pitch. The last term in Eq.~(1) represents a surface anchoring term, where the anchoring strength was taken to be strong with $W=0.01\,{\rm J/m^2}$. The simulated slab thickness is $1.6{\,\rm \mu m}$. In each case, the initial condition was a pure $\chi$ cholesteric defect line with a chosen winding number, which was left to relax into the equilibrium structure. To highlight the symmetry and structure of both singular and nonsingular defects, we visualize them with the splay-bend parameter \cite{Copar2013}.
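As an illustration (not the code used for our simulations), the bulk terms of Eq.~(1) can be evaluated on a uniform grid along the following lines, storing $Q$ as an array of shape $(N_x, N_y, N_z, 3, 3)$ and omitting the surface term.

```python
import numpy as np

def bulk_free_energy(Q, dx, A, B, C, L, q0):
    # Phase contribution: A/2 tr(Q^2) + B/3 tr(Q^3) + C/4 (tr Q^2)^2, per grid point.
    trQ2 = np.einsum('...ij,...ji->...', Q, Q)
    trQ3 = np.einsum('...ij,...jk,...ki->...', Q, Q, Q)
    f_phase = A / 2 * trQ2 + B / 3 * trQ3 + C / 4 * trQ2 ** 2
    # dQ[k, ..., i, j] approximates Q_{ij,k} by central finite differences.
    dQ = np.stack(np.gradient(Q, dx, axis=(0, 1, 2)), axis=0)
    f_grad = L / 2 * np.einsum('k...ij,k...ij->...', dQ, dQ)
    # Chiral term: 2 q0 L eps_{ikl} Q_{ij} Q_{lj,k}, with the Levi-Civita symbol.
    eps = np.zeros((3, 3, 3))
    eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1.0
    eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1.0
    f_chiral = 2 * q0 * L * np.einsum('ikl,...ij,k...lj->...', eps, Q, dQ)
    return np.sum(f_phase + f_grad + f_chiral) * dx ** 3
```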
\subsection{Defect configurations in cholesteric shells}
Because of the spherical nature of the interfaces delimiting the shell, any tangential nematic director field $\bm n$, where $\bm n$ represents the average molecular orientation, will be necessarily frustrated. These frustrations are translated into topological defects, which are singular points in the director field. Around those defects, the director experiences a $2 \pi m$ rotation, where $m$ is called the winding number. Since the symmetry of the nematic liquid crystal is only 2-fold, defect winding numbers can either be integers or semi-integers \cite{Nelson2002}. The Poincar\'e-Hopf theorem \cite{Poincare1885, Hopf1926,Kamien2002} establishes that the winding numbers of the defects present on a surface must sum up to the surface Euler characteristic $\chi$, which in the particular case of a sphere equals $+2$. There are five different ways to satisfy this theorem using only positive winding numbers: i) One single $+2$ defect, ii) two $+1$ defects, iii) one $+3/2$ defect and one $+1/2$ defect, iv) one $+1$ defect and two $+1/2$ defects, and v) four $+1/2$ defects. Although all these configurations are compatible with the topological constraints, the configuration adopted by the shell will be, in principle, the one minimizing the free energy.
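These five arrangements can be checked by a mechanical enumeration (a short illustration):

```python
from itertools import combinations_with_replacement
from fractions import Fraction

# Enumerate all multisets of positive winding numbers drawn from
# {+1/2, +1, +3/2, +2} whose sum equals the Euler characteristic chi = +2.
allowed = [Fraction(1, 2), Fraction(1), Fraction(3, 2), Fraction(2)]
configs = [c for n in range(1, 5)                      # at most 4 defects
           for c in combinations_with_replacement(allowed, n)
           if sum(c) == 2]
for c in configs:
    print(c)   # the five configurations listed in the text
```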
\begin{figure}[t!]
\centering
\includegraphics[width=1\columnwidth]{Diagramme6}
\caption{Statistical repartition of defect configurations in chiral nematic shells as a function of $u=h/R$ and $c=h/p$. The two limit cases, corresponding to nematic shells and cholesteric droplets, are respectively shown on the bottom and right sides of the diagram. The magenta dotted square is a visual help to refer to Fig.~\ref{tetra}. }
\label{diagramme}
\end{figure}
\begin{figure}[b!]
\centering
\includegraphics[height=3.8cm]{ZoomDiag}
\caption{Statistical repartition of thin shells with little chirality as a function of $u=h/R$ and $c=h/p$. }
\label{tetra}
\end{figure}
Three kinds of configurations have been reported for nematic shells \cite{Fernandez2007,Lopez-Leon2011}. The first possible defect arrangement has four $+1/2$ defects. This defect configuration is the ground state for a purely two-dimensional nematic on a sphere \cite{Lubensky1992,Nelson2002}. In the case of a shell, however, the defects are not surface point defects, but four singular disclination lines of winding number $+1/2$ that span the shell thickness. The second configuration is characterized by the presence of two $+1$ defects on each spherical surface. These surface defects, or boojums, associate into two pairs such that each defect on the outer sphere has its counterpart on the inner sphere. This defect configuration, which has an inherent three-dimensional character, is equivalent to the one observed in bulk nematic droplets. The subtle interplay between surface and bulk effects that takes place in shells becomes obvious in the third type of defect configuration observed, which is a hybrid state characterized by one $+1$ defect associated to two $+1/2$ defects \cite{Lopez-Leon2011}. Hence, at the level of simple nematics, it is already clear that competition between surface and bulk effects plays a determinant role in the new type of defect configurations emerging in a shell geometry. This richness is expected to become even greater when inducing chirality in the nematic order.
When we add a chiral dopant to the nematic phase to produce a cholesteric shell, we indeed uncover a richer set of configurations, with a total of five different arrangements. These configurations are displayed in Figs.~\ref{Shells1} (b)-(f), which are cross-polarised images of the different types of cholesteric shell. In the images, the defects appear as dark points from which coloured brushes emerge. The number of coloured brushes, $M_i$, is related to the defect winding number, $m_i$, as $m_i=M_i/4$. The configurations shown in Figs.~\ref{Shells1} (b)-(d) are similar to those already observed in nematic shells, having four, three, and two defects, respectively. We also observe a configuration with a single $+2$ defect, see Fig.~\ref{Shells1} (f), which is characteristic of bulk cholesteric droplets \cite{Bezic1992}. Finally, we observe a fifth and more intriguing configuration with one $+1/2$ defect and one $+3/2$ defect, see Fig.~\ref{Shells1} (e). This state was first theoretically imagined by Bezi\'{c} \& \v{Z}umer \cite{Bezic1992} for cholesteric droplets but had never been observed before. The existence of stable $+3/2$ defects in a shell is itself remarkable, as they were only previously observed in specific planar cases \cite{Madhusudana1982, Lee1982, MadhusudanaNV_MolCrystLiqCryst103_1983,LavrentovichOD_EurophysicsLetters12_1990,CuiL_LiqCryst26_1999, Li2003}. Interestingly, in cholesteric shells, all five possible configurations satisfying the Poincar\'e-Hopf theorem for positive winding numbers are found. In the following, and throughout the manuscript, we will use the notation $z_i\,[m_i] + z_j\,[m_j]$ to refer to the defect configurations, where $z_i$ denotes the number of defects with winding number $m_i$.
In the shells shown in Fig.~\ref{Shells1}, the defects appear in the thinnest hemisphere of the shell, located either at the top or bottom of the shell depending on the sign of the density mismatch. Indeed, the equilibrium positions of the defects are ruled by a competition between an attractive force induced by the shell thickness gradient and a repulsive elastic defect interaction \cite{Lopez-Leon2011,Darmon2015}. It is worth mentioning that, in the $1\,[+1] + 2\,[+1/2]$ configuration, the outer defects sit at the vertices of an isosceles triangle with vertex angle $\alpha_0 \simeq 30^{\circ}$, see Fig.~\ref{Shells1}(b), regardless of the shell geometry. This cholesteric arrangement differs from its nematic counterpart, in which the triangle is not necessarily isosceles \cite{Koning2015}.
The elastic energies of the above configurations naturally differ from one another. To gain insight into the energy landscape associated to cholesteric shells, we look into the statistical repartition of each of these configurations. There are three characteristic length scales for cholesteric shells, namely the outer radius $R$, the inner radius $a$, and the cholesteric pitch $p$, from which two dimensionless parameters can be constructed. We select two meaningful parameters: $u=h/R$, which is a measure of the relative shell thickness, and $c=h/p$, called \textit{confinement ratio}, which counts the number of $2\pi$-turns of the molecular field over the average thickness of the shell, consistently with our previous study \cite{Darmon2015}.
Fig.~\ref{diagramme} displays the statistical repartition of the five configurations for a number of shells $N_\text{tot}=743$, when varying $c$ between 0 and 6 and $u$ between 0 and 1. We measure $u$ and $c$ for each shell right after its creation, at rest, without any modification of its physico-chemical properties. The data are represented with pie charts, where the five different configurations are color-coded. The number of measured shells is indicated in each box. Note that the red and orange colors correspond to configurations found only in cholesteric shells. The two limit cases $c=0$ and $u=1$, corresponding respectively to nematic shells and cholesteric droplets, are also represented on the bottom and right parts of the diagram of Fig.~\ref{diagramme}. In the following we distinguish three cases, namely shells with large, intermediate and small thicknesses.
Thick shells, \textit{i.e.} for $u \in [0.67,1]$, behave as droplets. At low chirality, \textit{i.e.} for $c<1.2$, only $2\,[+1]$ configurations are found, while at high chirality, \textit{i.e.} for $c>1.2$, the samples are only populated with $1\,[+2]$ configurations. This tendency is exactly the same as the one observed in cholesteric droplets, for which there is a sharp transition between $2\,[+1]$ and $1\,[+2]$ droplets at $R/p \simeq 1.2$ \cite{Lopez-Leon2011b}.
In shells with intermediate thickness, \textit{i.e.} for $u \in [0.33,0.67]$, confinement effects become more significant. For $c<1.2$, the three configurations reported in nematic shells are found \cite{Lopez-Leon2011}. At $c=0$, the $1\,[+1] + 2\,[+1/2]$ configuration clearly dominates in the sample, although free energy calculations have shown that this arrangement is never the ground state of the system \cite{Koning2015}. When adding a little chirality, \textit{i.e.} for $c \in [0,1.2]$, we find the same three defect arrangements but with a notable difference in their statistical repartition. Indeed, a small but strictly positive confinement ratio seems to favor the $4\,[+1/2]$ configuration over the others. Interestingly, when increasing further the chirality in our samples, \textit{i.e.} for $c \in [1.2,2.4]$, we observe that $(i)$ the $1\,[+1] + 2\,[+1/2]$ configuration disappears, $(ii)$ the relative populations of $4\,[+1/2]$, $2\,[+1]$ and $1\,[+2]$ are approximately equal and $(iii)$ there is a new configuration that seems to be specific to the cholesteric phase, namely the $1\,[+3/2] + 1\,[+1/2]$ configuration, although very rare (only one shell out of 189). For higher confinement ratios, \textit{i.e.} for $c>2.4$, the $1\,[+2]$ becomes largely predominant and eventually the only possible configuration for $c>3.6$.
Thin shells with low chirality, \textit{i.e.} for $u \in [0,0.33]$ and $c<1.2$, are comparable to their intermediate counterparts in terms of statistical repartition of defect configurations, the only notable difference being that the proportion of $4\,[+1/2]$ is even larger in thin shells. As a matter of fact, a zoom on the lower left part of the diagram, displayed in Fig.~\ref{tetra}, reveals a remarkable feature. For very thin shells with little chirality, \textit{i.e.} for $c \in [0,0.6]$ and $u \in [0,0.17]$, the sample is populated mostly with $4\,[+1/2]$ shells (around 80\%). This could be particularly relevant in the context of colloidal self-assembly, since the $4\,[+1/2]$ configuration could be exploited to produce building blocks able to self-assemble into crystals with a diamond structure, which are expected to be perfect photonic band-gap materials \cite{Nelson2002}.
The main differences between intermediate and thin shells occur at higher chirality. First, we see that the $2\,[+1]$ configuration accounts for a larger majority of shells for $c \in [1.2,3.6]$. Second, we observe that the $4\,[+1/2]$ configuration disappears. Third, the hybrid $1\,[+3/2] + 1\,[+1/2]$ state becomes a non-negligible part of the whole population. Finally, at very high confinement ratios, \textit{i.e.} for $c>3.6$, the $1\,[+2]$ configuration takes over the rest of the population. Hence, as is often the case in physical systems, it is in the crossover regimes, in our case far enough from both nematic shells and cholesteric droplets, that the greatest richness of configurations is found.
\smallskip
\section{Defect structures in chiral nematic shells}
Although nematic and cholesteric shells can be regrouped and compared in terms of defect winding numbers, cholesterics have an additional degree of order: the cholesteric twist axis. For this reason, the very nature of their disclinations is fundamentally more complex: in cholesteric liquid crystals, there are three possible types of disclinations called $\chi$, $\lambda$ and $\tau$, depending on whether the twist axis, the nematic director, or both, are singular. In a $\chi$ disclination line, the twist axis coincides with the line, where the director is singular, as shown in Fig.~\ref{DSSetRSS}(a). The $\tau$ and $\lambda$ disclination lines are characterised by a twist occurring perpendicularly to the disclination line, where the twist axis is singular. This is schematically shown in Fig.~\ref{DSSetRSS}(a), where the nails represent an out-of-plane director field, with the nail heads indicating the direction at which $\bm n$ points upwardly. The $\tau$ disclination is also singular in terms of the director, whereas $\lambda$ has a non-singular core.
In a cholesteric shell with planar boundary conditions, the twist axis points perpendicularly to the surface everywhere except at the defects. Thus, all the defects have $\chi$ signature, with different semi-integer and integer winding numbers, when observing the surrounding director field far enough from their defect cores -- this feature is exploited in our simulations as an initial condition. However, it has been shown that cholesteric disclinations can relax locally in a non-trivial fashion to minimize the free energy of the system \cite{Bouligand1970,deGennes1993}. For example, we recently presented a detailed description of the intricate structure of the defects in the $2[+1]$ configuration of a cholesteric shell \cite{Darmon2015}, see Fig.~\ref{DSSetRSS}(c) and Fig.~\ref{DSSetRSS}(d). We showed that, in addition to the two pairs of boojums appearing in the nematic case, there are a number of alternating $\tau^{-1/2}$ and $\lambda^{+1/2}$ disclination rings that pile up through the shell, connecting the upper and lower boojums of each pair. This structure is shown in Fig.~\ref{DSSetRSS}(b), where the dashed lines represent the director field. The defects can be identified by the blue and yellow isosurfaces, which indicate the regions of large splay and bend deformations, respectively. The singular rings, represented in red, are surrounded by regions of large splay elastic deformation.
\begin{figure}[t!]
\centering
\includegraphics[height=8.5cm]{DSSSoft6}
\caption{(a) Schematics of $\chi^{+1}$, $\tau^{-1/2}$ and $\lambda^{+1/2}$ disclinations in cholesterics. {(b) A simulated cross section of a $+1$ defect for $c=2.5$, showing that the defect core consists of a sequence of hyperbolic hedgehogs in the form of small $\tau^{-1/2}$ disclination rings, and a sequence of $\lambda^{+1/2}$ rings that terminate the layers. The splay-bend parameter \cite{CoparS_LiqCryst40_2013} is used to highlight defects as regions of high deformation: blue and yellow regions respectively indicate zones of high splay and bend distortion.} (c) Side view of a $2[+1]$ shell between crossed polarisers, revealing a visible nonuniform structure of the defect core, which is enlarged in (d). Scale bar: $20\,\mu m$.}
\label{DSSetRSS}
\end{figure}
Another non trivial disclination structure has been recently reported for the $1[+2]$ configuration \cite{Darmon2015}, see Fig.~\ref{RSScore}(b) and Fig.~\ref{RSScore}(c) for a top and side view of the shell. We showed that the disclination of global winding number $+2$ relaxes into two $\lambda^{+1}$ lines that wind around each other in a double-helix, as numerically predicted by \textit{Se\v{c} et al.} \cite{Sec2012} for droplets, see Fig.~\ref{RSScore}(a). Two pairs of +1 boojums are also present on the inner and outer boundaries of the shell, which appear in Fig.~\ref{RSScore}(a) as two points of concentrated distortion at the upper and lower planes. An interesting feature concerns the size of the overall disclination structure, of total winding number $+2$, which seems to change with $p$. To investigate this, we consider $1[+2]$ shells obtained for different values of $p$. Fig.~\ref{RSScore}(d) shows three pictures of the defect cores, corresponding to $p=9\,\mu$m, $p=3.6\,\mu$m, and $p=1.36\,\mu$m from left to right. The scale bar is identical in each image and corresponds to $10\,\mu$m. All the pictures have been taken for very similar $R \simeq 50\,\mu$m. It is clear from Fig.~\ref{RSScore}(d) that the spatial extension $s$ of the defect structures increases with $p$. More quantitatively, we even find that the spatial extent $s/R$ of the defect is directly proportional to the rescaled cholesteric pitch $p/R$, as shown in Fig.~\ref{RSScore}(e).
\begin{figure*}[t!]
\centering
\includegraphics[width=1\textwidth]{RSSSoft7}
\caption{(a) Simulation of a nonsingular $+2$ defect core, consisting of two helically winding $\lambda^{+1}$ disclinations, ending as boojums at the boundary surfaces. The splay-bend parameter \cite{CoparS_LiqCryst40_2013} is used to highlight defects as regions of high deformation: blue and yellow regions respectively indicate zones of high splay and bend distortion. (b) Side view of a $1[+2]$ shell between crossed polarisers. (c) Top view of a $1[+2]$ shell between crossed polarisers. (d) Crossed polarised images of +2 defects corresponding to shells with $p=9\,\mu$m, $p=3.6\,\mu$m, and $p=1.36\,\mu$m from left to right. (e) Rescaled defect spatial extension $s/R$ as a function of the rescaled cholesteric pitch $p/R$. Scale bar: $10\,\mu m$.}
\label{RSScore}
\end{figure*}
The first of the newly reported configurations in cholesteric shells is the tetravalent state characterised by four disclinations of $+1/2$ winding number. To investigate the nature of the observed $+1/2$ line, we perform numerical simulations. Instead of a pure straight $\chi^{+1/2}$ line, we see a singular disclination of helical shape with a period of half the cholesteric pitch, and a $\lambda^{+1/2}$ defect winding around it, terminating the cholesteric layers, see Fig.~\ref{Threehalf}(a). The singular disclination line has locally a $-1/2$ winding number, and resembles a $\tau^{-1/2}$ disclination, even though the twist axis is ill-defined around the core of the structure. The slope of the helix, together with the additional twist provided by the $\lambda$ disclinations, explain the seemingly contradictory transition from the $+1/2$ far-field winding and a $-1/2$ local winding of the singular defect core -- another demonstration that all singular disclinations in the director are topologically equivalent.
Another configuration that presents $+1/2$ defects is the $1\,[+1] + 2\,[+1/2]$ configuration. The $+1$ defect closely resembles that of the $2[+1]$ configuration. Indeed, its larger spatial extent and very similar shape lead us to believe that it actually corresponds to the same structure. Similarly, the $+1/2$ defects seem to be identical in the $1\,[+1] + 2\,[+1/2]$ and $4\,[+1/2]$ configurations. The trivalent state can therefore be described as follows: one defect composed of alternating $\tau^{-1/2}$ and $\lambda^{+1/2}$ disclination rings, arranged as shown in Fig.~\ref{DSSetRSS}(b), and two $+1/2$ disclination lines with the structure shown in Fig.~\ref{Threehalf}(a).
\begin{figure*}[t!]
\centering
\includegraphics[height=4.5cm]{ThreeHalfSoft6}
\caption{(a) A simulated $+1/2$ disclination line, which is locally composed of a helically shaped $\tau^{-1/2}$ singular core (with three-fold cross section, revealed by the splay-bend parameter) and a $\lambda^{+1/2}$ wound around it. (b) Cross-polarised image showing a top view of the $+3/2$ (right) and $+1/2$ (left) defects of a cholesteric shell. (c) Director field corresponding to the optical texture shown in panel (b). (d) A simulated $+3/2$ disclination core, composed from a more convoluted singular $\tau^{-1/2}$ line, wound around a nonsingular (escaped) $\lambda^{+1}$ line, which goes through the core and ends as two boojums at the surfaces. The entire structure is, as in panel (a), wrapped in a $\lambda^{+1/2}$ which terminates the layers. (e) Side view of a $1\,[+3/2] + 1\,[+1/2]$ shell between crossed polarisers. The inset shows a zoom in the $+3/2$ defect. Scale bar: $20\,\mu m$.}
\label{Threehalf}
\end{figure*}
The last but perhaps most intriguing defect combination is the state with $+3/2$ and $+1/2$ defects, see Fig.~\ref{Shells1}(d), which seems to be the first experimental evidence of stable $+3/2$ defects in cholesterics. The combination of a $+3/2$ and a $+1/2$ defect was imagined by \textit{Bezic et al.} \cite{Bezic1992} in their theoretical study of cholesteric droplets, but had never been observed before. Fig.~\ref{Threehalf}(b) and Fig.~\ref{Threehalf}(e) respectively show a top and side view of such defect configuration in an experimental cholesteric shell. According to the optical texture shown in Fig.~\ref{Threehalf}(b), the director field on the outer surface should be arranged as shown in Fig.~\ref{Threehalf}(c), where the $3\pi$ rotation of $\bm n$ around the $+3/2$ defect becomes evident. The side view of the $+3/2$ defect actually reveals the existence of a relatively thick line which appears to have a helical shape, see the inset in Fig.~\ref{Threehalf}(e).
We numerically investigate the inner structure of the $+3/2$ defect by studying the relaxation of a $\chi^{+3/2}$ line. As in the case of a $+1/2$ defect, the core deforms into a helically twisted $-1/2$ singular disclination line, and a $\lambda^{+1/2}$ disclination terminating the regular layers, see Fig.~\ref{Threehalf}(d). However, due to additional winding that has to be compensated, there is another nonsingular $\lambda^{+1}$ going through the center of the structure. This \textit{escaped} core has the director almost perpendicular to the shell surface, and ends as two boojums, just like in the $+1$ defect. Note that here, the $\lambda^{+1}$ does not decompose into a stack of small defect loops, but is wrapped tightly by the singular $-1/2$ disclination.
\section{Defects recombination and Lehmann effect}
We learned from the statistical study of cholesteric shells that the respective populations of defect configurations depend on both $u$ and $c$. In other words, changing the geometry and the confinement ratio of the shell influences the observed equilibrium configurations. We recently showed that for cholesteric shells it is possible to transform a $2[+1]$ configuration into a $1[+2]$ configuration in a reversible way, by forcing the shell to move in the $u-c$ diagram \cite{Darmon2015}. In this paper, we investigate the possibility of inducing transformations between other defect configurations.
To change the shell parameters $u$ and $c$, we use osmosis. By adding CaCl$_2$ to the outer phase, we create a difference in osmotic pressure between the two aqueous phases that makes the inner droplet de-swell, resulting in the simultaneous increase of $u$ and $c$. We study the topological transformations undergone by shells having $4\,[+1/2]$, $1\,[+1] + 2\,[+1/2]$, and $1\,[+3/2] + 1\,[+1/2]$ configurations. The de-swelling of a $4\,[+1/2]$ shell only makes the defects move closer together. As mentioned in Section 2, the equilibrium distance between defects results from a competition between an attractive force induced by the shell thickness gradient and a repulsive elastic defect interaction \cite{Lopez-Leon2011,Darmon2015}. Therefore, when $u$ becomes larger, the shell also becomes more heterogeneous in thickness, shifting the equilibrium towards shorter defect distances. In a $2\,[+1]$ shell, when the two $+1$ defects are close enough, they come together and rearrange to form a single defect, so that the final state is the $1\,[+2]$ configuration. In a $4\,[+1/2]$ shell, however, the process ends differently. Indeed, we never observe a recombination of the defects, since the inner droplet is expelled from the shell when the defects become close enough. This actually means that the energy barriers associated with the possible transitions involving the $4\,[+1/2]$ configuration cannot be overcome by changing the geometry of the system.
During the de-swelling process, we observe interesting defect dynamics, where the defects get closer while turning around each other in what we called a defect \textit{waltz}, which we already reported for $1\,[+2]$ shells and explained as a result of a chemical Lehmann effect \cite{Oswald2009}. Indeed, the radial current $\bm J$ of water molecules induces a torque $\bm \Gamma_\text{Leh}$ on the chiral molecules, provoking a rotation of the whole liquid crystal texture and, as a result, a rotation of the defects. This torque is related to the current through $\boldsymbol{\Gamma}_\text{Leh}= -\nu \bm J$, where $\nu$ is a phenomenological coefficient characteristic of the cholesteric mixture \cite{Oswald2009}. The resulting defect trajectories for $2\,[+1]$ and $4\,[+1/2]$ shells are shown in Fig.~\ref{trajectories}(a) and (b), respectively.
\begin{figure*}[t!]
\centering
\includegraphics[width=1\textwidth]{Trajectories4}
\caption{Defect trajectories in de-swelling experiments for the following defect configurations: (a) $2\,[+1]$, (b) $4\,[+1/2]$, (c) $1\,[+1] + 2\,[+1/2]$, and (d) $1\,[+3/2] + 1\,[+1/2]$.}
\label{trajectories}
\end{figure*}
To investigate further possible transitions between configurations, we perform a de-swelling experiment on a $1\,[+1] + 2\,[+1/2]$ shell. As in previous experiments, the defects get closer as the shell de-swells. When they are close enough, the $+1$ defect fuses with one of the $+1/2$ defects, hence becoming a $+3/2$ defect, see the defect trajectories in Fig.~\ref{trajectories}(c). Nevertheless, we could not test further defect rearrangements in this experiment because the de-swelling process becomes very slow after a couple of hours. Indeed, the osmotic pressures in the inner and outer phases tend to equilibrate after some time, resulting in very slight changes of the shell geometry, hence losing the fuel for a possible transition. To check whether $+3/2$ and $+1/2$ defects are able to recombine, we perform a de-swelling experiment starting precisely from a shell with a $1\,[+3/2] + 1\,[+1/2]$ configuration. As shown in Fig.~\ref{trajectories}(d), $+3/2$ and $+1/2$ defects are indeed able to merge and form a single $+2$ defect. It is interesting to remark that the $1\,[+1] + 2\,[+1/2]$ state can eventually evolve into a $1\,[+2]$ configuration, but by following a very specific path, where the $+1$ defect needs to recombine first with a $+1/2$ defect to form a $+3/2$ defect, which can in turn recombine with the remaining $+1/2$ defect to give rise to the final $+2$ defect. During all the de-swelling experiments, we observe a defect rotation similar to the one previously reported for $2\,[+1]$ shells. This can be explained by the fact that the Lehmann rotation depends neither on the nature nor on the number of defects present in the system. In all cases, we systematically find $\boldsymbol \Gamma_\text{Leh} \cdot \boldsymbol J > 0$, such that $\nu$ is always negative, as expected for a right-handed cholesteric, which is another good indicator that we are truly witnessing the Lehmann effect.
We wish to go a step further in the description of the Lehmann rotation by introducing a simple yet insightful theoretical framework. As mentioned previously, the chiral molecules of the cholesteric liquid crystal experience a torque $\boldsymbol \Gamma_\text{Leh}$, originating from the chemical potential gradient $\boldsymbol \nabla \mu$, itself related to $\boldsymbol J$ through $\boldsymbol J = - \boldsymbol \nabla \mu$. Treating the liquid crystal as a permeable membrane of permeability $\xi$, one can relate the water flow $Q$ to the difference in chemical potential $\Delta \mu$ through $Q=\xi \mathcal A \Delta \mu / v_\text a$, where $\mathcal A$ is the area of the membrane and $v_\text a$ is the molar volume. Noting finally that $\nabla \mu = \Delta \mu / h$, the Lehmann torque can be written as:
\begin{eqnarray}
\Gamma_\text{Leh}=\frac{\nu v_\text a}{\xi h \mathcal A} Q \ .
\end{eqnarray}
Interestingly, while $h$ and $\mathcal A$ are both functions of time, the product $h\mathcal A$ is not, since it approximately corresponds to the volume of liquid crystal, which is a conserved quantity throughout the experiment. As a result, only $Q$ is a function of time in the Lehmann torque. Looking at the dynamics of such a system, one also needs to take into account the viscous counter-torque $\Gamma_\text{visc} = \eta \omega$, where $\eta$ is the bulk rotational viscosity and $\omega$ is the angular velocity of the director field \cite{Oswald2009}. In Fig.~\ref{ModelLehmann}, we plot the experimental angular velocity $\omega$ of the defects as a function of time (blue squares). As one can see, $\omega$ is time dependent and two parts can be identified in its evolution: $(i)$ it first increases and reaches a maximum, and $(ii)$ it decreases on a time scale that is larger than that of the first ascending part. A first approach would naturally consist in balancing the two torques \cite{Oswald2009}, yielding $\omega (t) \sim \frac{\nu v_\text a}{\eta \xi h \mathcal A} Q(t)$, and checking whether $\omega$ and $Q$ indeed have the same temporal dependence. In the inset of Fig.~\ref{ModelLehmann}, we plot the water flow $Q$ as a function of time, obtained by measuring how much the inner droplet de-swells during the experiment, on a log-lin scale. We see that $Q$ decreases monotonically with time.
\begin{figure}[b!]
\centering
\includegraphics[width=1\columnwidth]{TheoModelLehmann4}
\caption{Angular rotation of the defects as a function of time in a typical de-swelling experiment, where the $2\,[+1]$ configuration evolves into the $1\,[+2]$ configuration. Inset: Flow of water through the shell, $Q$, as a function of time. The blue squares correspond to the experimental data and the red line to the theoretical model. The error bars correspond to the standard deviation of the rolling average performed on the experimental data.}
\label{ModelLehmann}
\end{figure}
The above-mentioned balance is therefore insufficient to describe the more complex behavior of $\omega(t)$. We thus need to incorporate into the theoretical framework the observed transient regime, corresponding to the increasing part of $\omega(t)$.
We do so through the following governing equation:
\begin{eqnarray}
\alpha \frac{\text d \omega}{\text d t}=\Gamma_\text{Leh} - \Gamma_\text{visc} \ ,
\label{equadiff1}
\end{eqnarray}
where $\alpha$ is an effective coefficient related to the transient regime. Indeed, a certain time is needed for the osmotic pressure difference to become established, $\simeq 10^3\,$s for our system according to Fig.~\ref{ModelLehmann}. Equation~\eqref{equadiff1} can be rewritten as:
\begin{eqnarray}
\tau_{\eta} \frac{\text d \omega}{\text d t} + \omega(t)= \beta Q(t) \ ,
\label{equadiff2}
\end{eqnarray}
where $\tau_{\eta}=\alpha/\eta$, and where $\beta=\nu v_\text a/(\eta \xi h \mathcal A)$. From the time evolution of $Q$ in the inset of Fig.~\ref{ModelLehmann}, it appears that $Q$ decreases exponentially with time. In the following, we will therefore consider that $Q(t) = Q_0 e^{-t/\tau_Q}$ with $\tau_Q=1000\,$s, represented by the solid red line in the inset of Fig.~\ref{ModelLehmann}. The solution $\omega(t)$ to Eq.~\eqref{equadiff2} then reads:
\begin{eqnarray}
\omega(t)=\frac{\beta \tau_Q Q_0}{\tau_Q-\tau_{\eta}} \left(e^{-t/\tau_Q} - e^{-t/\tau_\eta} \right) \ .
\label{equadiff3}
\end{eqnarray}
This theoretical solution for $\omega(t)$ is displayed as a solid red line in Fig.~\ref{ModelLehmann}, with the best-fitting adjustable parameters. We find rather good agreement between the data and our model, at least at a qualitative level, with $\tau_\eta=500\,$s. The small oscillations in the decreasing part of $\omega_\text{exp}(t)$ are probably an experimental artefact due to possible flows within the sample.
These flows are constantly changing the local concentration of salt in the outer solution, which results in irregular osmosis dynamics. Note that there is also a small discrepancy between the model and the data at longer times, due to the fact that the evolution of $Q$ is not strictly exponential (see inset Fig.~\ref{ModelLehmann}). Hence, our model seems to capture well the essence of the observed phenomenology, namely the faster inertial ascending part of $\omega(t)$, and the slower decrease following the decreasing water flow.
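For concreteness, the closed-form solution \eqref{equadiff3} is straightforward to evaluate numerically. The following Python sketch is purely illustrative: the prefactor $\beta \tau_Q Q_0/(\tau_Q-\tau_{\eta})$ is normalised to one, since $\beta$ and $Q_0$ are fitted quantities not quoted here, and only the two time scales are taken from the fit.
\begin{verbatim}
import numpy as np

tau_Q, tau_eta = 1000.0, 500.0           # fitted time scales, in seconds

def omega(t):
    # shape of the solution, with the overall amplitude set to one
    return np.exp(-t / tau_Q) - np.exp(-t / tau_eta)

t = np.linspace(0.0, 5000.0, 501)
t_max = t[np.argmax(omega(t))]
# omega peaks at ln(tau_Q/tau_eta)/(1/tau_eta - 1/tau_Q), about 693 s:
# a fast ascending part followed by a slower decay, as in the data.
print(t_max)
\end{verbatim}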
\section{Conclusions}
We provided a thorough study of the defect configurations appearing in cholesteric liquid crystal shells. We showed that five types of configurations are possible, revealing the greater richness of cholesteric shells as compared to their nematic counterparts. A remarkable result is the observation of stable $+3/2$ defects, which had only been observed before in exotic nematics or intricate confinements. Numerical simulations proved very efficient in gaining insight into the complex nature of the observed topological defects, which are composed of several disclination lines assembled into higher-order structures. The formation of a given defect configuration depends on two dimensionless parameters, $c=h/p$ and $u=h/R$, where $h$, $R$ are the shell thickness and outer radius, respectively, and $p$ is the helical cholesteric pitch. By playing with these two parameters, we were able to induce transitions between configurations. In the allowed transitions, the defects approach each other by following intricate paths and an intriguing dynamics, which can be explained in terms of the chemical Lehmann effect.
\section*{Acknowledgements}
We thank S. \v{Z}umer, D. Se\v{c}, and A. Fernandez-Nieves for fruitful exchanges. We acknowledge support from the French National Research Agency through Grant 13-JS08-0006-01 and the Institut Pierre-Gilles de Gennes (Laboratoire d'excellence, Investissements d'avenir, Program ANR-10-IDEX 0001-02 PSL and ANR-10-EQPX-31). S. \v{C}opar acknowledges support from the Slovenian Research Agency under grants P1-0099 and Z1-6725.
\bibliographystyle{h-physrev}
\section{Introduction.}
\label{sec_introduction}
Since the end of the nineties, constructing maps of the internet using {\tt trace\-route}-like measurements has received much attention; see for instance
\cite{faloutsos99sigcomm, dimes,skitter,trahome,scamper,ripeNccTtm,nlanrAmp,whatToDo,atlas,heuristics,scriptroute}.
Such measurements are however partial and they may contain significant
bias
\cite{sampling,marginal,plrevisited,dallAsta,relevance,achlioptas05bias}.
As a consequence, much effort is nowadays devoted to the collection of
more accurate data \cite{dimes,trahome,e2emon2007,parisTraceroute}, but
this task is challenging.
In order to avoid these issues and obtain some insight into internet topology {\em dynamics},
we use here a radically different approach: we focus on what a given machine sees of the topology around itself, which we call an {\em ego-centered view} (it is basically a routing tree measured in a {\tt trace\-route}-like manner).
These ego-centered measurements may be performed very efficiently (typically in minutes, while inducing a low network load); it is therefore possible to repeat them in periodic rounds and obtain in this way information on the {\em dynamics} of the topology, at a much finer time-scale than previous approaches (see for instance \cite{oliveira2007evol,plrevisited}).
Taking advantage of these strengths, we conduct massive radar-like measurements of the internet. We provide both the measurement tool and the collected data, and show that they reveal interesting features of the observed topology.
\section{Measurement framework.}
\label{sec_tools}
One may use {\tt trace\-route}\ directly to collect ego-cent\-ered views by probing a set of destinations.
This approach however has serious drawbacks. First, as detailed in
\cite{DTSigmetrics} and illustrated in Figure~\ref{fig_ex}, the
measurement load is highly unbalanced between nodes and there is much
redundancy in the obtained data (intuitively, one probes links close
to the monitor much more than others). Even worse, this implies that
the obtained information is not homogeneous, and thus much more
difficult to analyse rigorously (for instance, the dynamics may seem
higher close to the monitor). Finally, though the measurement would
intuitively produce a routing tree, the obtained view actually differs
significantly from a tree (see for instance \cite{parisTraceroute}). Again,
this makes the analysis (visualisation of the data, for instance) more
intricate.
In summary, the direct {\tt trace\-route}\ approach has multiple severe drawbacks. In this section, we first design an ego-centered measurement tool that remedies them. We then include it in a radar measurement scheme.
\subsection{Ego-centered measurements.}
\label{sec_tracetree}
As already discussed in various contexts
\cite{DTJSAC,DTSigmetrics,probingScheme,pansiot2007multicast,streamlining,scriptroute},
one may avoid the issues described
above by performing tree-like measurements in a backward way: given a
set of destinations to probe, one first discovers the last link on the
path to each of them, then the previous link on each of these paths,
and so on; when two (or more) paths reach the same node then the
probing towards all corresponding destinations, except one,
stops\,\footnote{Such measurements require knowing the distance towards each destination, which is non-trivial to obtain \cite{streamlining}; we discuss this in Section~\ref{sec_radar}.}. However, as illustrated in Figure~\ref{fig_ex}, such naive measurements encounter serious
problems because of routing changes and other events.
We provide a solution in
the {\tt trace\-tree}\ algorithm below: the tree nodes are not \mbox{\em \sc ip}\ addresses
anymore, but pairs composed of an \mbox{\em \sc ip}\ address (or a
star if a timeout occurred) and the \mbox{\sc ttl}\ at which it was observed
(see Figure~\ref{fig_ex} for an illustration). This is sufficient to
ensure that the obtained view is a tree, while keeping the algorithm
very simple. It sends only one packet for each link, and
thus is optimal.
Moreover, each link is discovered exactly
once, which gives a homogeneous view of the topology and balances the
measurement load.
\input{algo.tex}
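In plain terms, the probing logic of Algorithm~\ref{algo_tracetree} may be sketched as follows. This is a simplified, purely sequential Python illustration: {\tt probe(d, ttl)} stands for a hypothetical primitive returning the address answering a \mbox{\sc ttl}-limited probe towards $d$ (or a star on timeout), while the actual implementation \cite{radarurl} sends packets concurrently and handles timeouts explicitly.
\begin{verbatim}
def tracetree(destinations, dist, probe):
    # dist[d]: estimated distance (TTL) towards destination d.
    # Nodes are (address, ttl) pairs, so the output is a tree by
    # construction; each link is probed exactly once.
    child, seen, links, active = {}, set(), set(), set()
    for d in destinations:
        node = (probe(d, dist[d]), dist[d])
        child[d] = node
        if node not in seen:
            seen.add(node)
            active.add(d)
    while active:
        for d in list(active):
            ttl = child[d][1] - 1          # one step backwards
            if ttl == 0:
                active.discard(d)
                continue
            node = (probe(d, ttl), ttl)    # one packet per link
            links.add((node, child[d]))
            if node in seen:               # paths merged: stop this branch
                active.discard(d)
            else:
                seen.add(node)
                child[d] = node
    return links
\end{verbatim}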
\begin{figure}[!h]
\centering
\includegraphics[scale=0.27]{ex_star.eps}
\caption{Typical outputs of various measurements schemes.
(1) -- Real topology.
$a$ is the monitor, $n$, $o$, and $p$ are the destinations.
We suppose that $l$ does not answer to probes, that $b$ is a per-destination load balancer,
forwarding traffic for $n$ to $d$,
and traffic for $o$ to $f$, and that $e$ is a per-packet load balancer forwarding packets alternately to $i$ and $h$. Such situations are frequent in practice.
(2) -- Measurement with {\tt trace\-route}. Three routes are collected, leading to a higher load on links close to the monitor (represented by thicker lines here).
(3) -- Naive tree measurement. Because of a route change due to per-packet load balancer $e$, one obtains a disconnected part.
(4) -- Measurement with {\tt trace\-tree}. Nodes are pairs of \mbox{\em \sc ip}\ addresses and \mbox{\sc ttl}, with redundancy in the addresses;
one necessarily obtains a tree.
(5--7) -- Main steps of the filtering process.
(5) -- Pairs with same \mbox{\em \sc ip}\ address are merged and loops are removed;
(6) -- Appropriate stars are merged and a {\em \sc bfs}\ tree is computed;
(7) -- Leaves which are not the last node on a path towards a destination are iteratively removed.
This is the final output of the filter.
}
\label{fig_ex}
\end{figure}
From such trees with (\mbox{\em \sc ip},\mbox{\sc ttl}) nodes, one obtains a tree on \mbox{\em \sc ip}\ addresses by applying the following filter (illustrated in Figure~\ref{fig_ex})\,\footnote{The measurement would be slightly more efficient if the filter was included directly in {\tt trace\-tree}; however, to keep things simple and modular, we preferred to separate the two.}:
first merge all nodes of the tree which correspond to the same \mbox{\em \sc ip};
remove loops (links from an \mbox{\em \sc ip}\ to itself); iteratively remove the stars with no successor;
merge all the stars which are successors of the same node into a single star;
construct a {\em \sc bfs}\ tree of the obtained graph which leads to a tree on \mbox{\em \sc ip}\ addresses\,\footnote{During the construction
of the {\em \sc bfs}{} tree, neighbours of a node are visited in lexicographic order, and stars
are visited after \mbox{\em \sc ip}{}s.};
iteratively remove the leaves which are not the last nodes encountered when probing any destination.
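In pseudocode, and omitting for brevity the handling of stars and the lexicographic visit order, these steps amount to the following condensed sketch (in Python, using the {\tt networkx} graph library):
\begin{verbatim}
import networkx as nx

def filter_tree(links, monitor, last_nodes):
    # links: pairs of (address, ttl) nodes output by tracetree,
    # oriented from the monitor towards the destinations;
    # monitor: the monitor's address (root of the tree);
    # last_nodes: last address seen towards each destination.
    g = nx.DiGraph()
    for (a, _), (b, _) in links:       # merge nodes with the same address
        if a != b:                     # remove loops
            g.add_edge(a, b)
    tree = nx.bfs_tree(g, monitor)     # BFS tree on addresses
    keep = set(last_nodes)
    pruned = True
    while pruned:                      # iteratively remove leaves that are
        pruned = False                 # not last nodes towards a destination
        for n in [n for n in tree
                  if tree.out_degree(n) == 0 and n not in keep]:
            tree.remove_node(n)
            pruned = True
    return tree
\end{verbatim}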
The key point is that the obtained tree is a possible \mbox{\em \sc ip}\ routing
tree from the monitor to the destinations (similar to a broadcast
tree).
The obtained tree contains almost as much information as the original
{\tt trace\-tree}\ output and has the advantage of being much more simple to
analyse.
We evaluated the impact of this filtering on our observations and found it to be negligible; detailing this is beyond the scope of this paper.
Many non-trivial points would deserve more discussion. For instance,
one may apply a greedy sending or receiving strategy
(by replacing line $\alpha$ or $\beta$ in
Algorithm~\ref{algo_tracetree} by a {\tt while}, respectively);
identifying reply packets is
non-trivial, as well as extracting the relevant information from the
read packets; introducing a delay may be necessary to stay below the maximal
{\em \sc icmp}\ sending rate of the monitor; one may consider answers received
after the timeout but before the end of the measurement (whereas we
ignore them); one may use other protocols than {\em \sc icmp}\ (the classical
{\tt trace\-route}\ uses {\em \sc udp}\ or {\em \sc icmp}\ packets); the initial order of the
destinations may have an impact on the measurement; there may be many
choices for the {\em \sc bfs}\ tree in the filter; etc. However, entering into such details is far beyond the scope of this paper, and we refer to
the code and its documentation \cite{radarurl} for full details.
\subsection{Radar.}
\label{sec_radar}
With the {\tt trace\-tree}\ tool and its filtered version, we have the ground
material to conduct radar measurements: given a monitor and a set of
destinations, it suffices to run periodic ego-centered measurements,
which we call measurement {\em rounds}.
The measurement frequency must be high enough to capture interesting
dynamics, but low enough to keep the network load reasonable. We will
discuss this
in the next section.
The only remaining issue is the estimation of distances towards destinations, which is a non-trivial task in general \cite{streamlining}. This plays a key role here, since over-estimated distances lead to several packets hitting destinations. Under-estimated distances, instead, miss the last links towards the destinations.
One may however suppose that the distance between the monitor and any
destination is generally stable between consecutive rounds of radar
measurement.
Then,
the distances at a given round are the ones observed during the
previous round. If the distance happens to be under-estimated (we do
not see the destination at this distance), then we set it to a default
maximal value (generally equal to $30$) and start the measurement from
there (and we update the corresponding distance for the next round).
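The corresponding bookkeeping is simple; in Python-like pseudocode (names are ours and purely illustrative):
\begin{verbatim}
DEFAULT_MAX = 30   # default maximal distance (TTL)

def distances_for_round(prev_dist, destinations):
    # start each round from the distances observed at the previous one
    return {d: prev_dist.get(d, DEFAULT_MAX) for d in destinations}

def update_after_round(prev_dist, observed_dist):
    # observed_dist[d]: distance at which d answered during this round,
    # or None if it was not seen at the estimated distance, in which
    # case the next round restarts from the default maximum value.
    for d, ttl in observed_dist.items():
        prev_dist[d] = ttl if ttl is not None else DEFAULT_MAX
\end{verbatim}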
\begin{figure*}[!ht]
\centering
\includegraphics[scale=0.48]{mdm.rapide.fig.eps}
\includegraphics[scale=0.48]{bordeaux.nb_ip_sim.fig.eps}
\includegraphics[scale=0.48]{allparts.fig.eps}\\
\axes{\ \ \ \ \ \ \ \ \ \ \ \hspace{1cm}$x$ = hours; $y$ = \# ip}{\ \ \ \ \ \ \ \ \hspace{2.5cm} \em $x$ = hours; $y$ = \# ip}{\ \ \ \hspace{2cm} $x$ = hours; $y$ = round duration (s)}
\caption{
{\bf Impact of measurement parameters.}
The $x$ axis of all plots represents the time (in hours) since the beginning of the measurement.
{\bf Left: impact of inter-round delay.} Number of distinct \mbox{\em \sc ip}{} addresses viewed at each round. The bottom plot corresponds to a control monitor with the base parameters; the other monitor starts with the base parameters, and about $27$ hours later we reduce the inter-round delay from $10$ minutes to $1$ (each ego-centered measurement takes around $4$ minutes).
{\bf Center: impact of the number of destinations.} Number of distinct \mbox{\em \sc ip}{} addresses viewed at each round.
The plot close to $y=10\,000$ corresponds to a control monitor with the base parameters. The other plain-line plot is produced by a monitor which starts with the base parameters, thus with a destination set $D$ of size $3\,000$, changes to a set $D'$ of $10\,000$ destinations containing $D$,
goes back to $D$,
and finally turns to a subset $D''$ of size $1\,000$ of $D$.
In addition, the dotted plots are simulations of what we would have seen from this monitor with $D$ during the measurement using $D'$ (obtained by dropping all nodes and links which are on paths towards destinations that are not in $D$), and what we would have seen with $D''$ during the measurements using $D$ or $D'$ (obtained similarly).
{\bf Right: impact of timeout value.} Round duration (in seconds).
The monitor starts with a timeout value of $4\,s$, then we change it to $2\,s$, and finally to $1\,s$.
}
\label{fig_speed_up}
\label{fig_nb_dest}
\end{figure*}
\section{Measurement and data.}
\label{sec_data}
First notice that many parameters (including the monitor and destination set) may have a deep impact on the obtained data. Estimating this impact is a challenging task since testing all combinations of parameters is totally out of reach. In addition, the continuous evolution of the measured object makes it difficult to compare several measurements: the observed changes may be due to parameter modifications or to actual changes in the topology.
To bypass these issues while keeping the study rigorous, we propose
the following approach. We first choose a set of seemingly reasonable
parameters, which we call {\em base parameters} (see
Section~\ref{sec_base}). Then we conduct measurements with these
parameters from several monitors in parallel.
On some monitors,
called {\em control monitors}, we keep these parameters constant;
on others, called {\em test monitors}, we alternate periods with base
parameters and periods where we change (generally one of) these
parameters.
Control monitors make it possible to check
that the changes observed from test monitors are due to changes of
parameters, not to events on the network. The alternation of periods
with base parameters and modified ones also makes it possible to
confirm this, and to observe the induced changes in the observations.
In many cases, it is also possible to simulate what one would have
seen in principle if the parameters had stayed unchanged, which gives
further insight (we will illustrate this below).
We use a wide set of more than one hundred monitors scattered around
the world, provided by PlanetLab \cite{planetLab} and other structures
(small companies and individual \mbox{\sc dsl}\ links) \cite{radarurl}. In
order to be as general as possible, and to simplify the destination
setup, we use destinations chosen by sampling random valid \mbox{\em \sc ip}\
addresses and keeping those answering to {\tt ping}\ {\em at the time of
the list construction}. Other selection procedures would of course
make sense (this raises interesting perspectives).
\subsection{Our base parameters and data set.}
\label{sec_base}
In all the paper, the base parameters consist of a set of $3\,000$ destinations for each monitor, a maximal \mbox{\sc ttl}\ of $30$, a $2$ seconds timeout and a $10$ minutes delay between rounds. All our measurements were conducted with variations of these parameters; wherever it is not explicitly specified, the parameters were the base ones. We ran measurements continuously during several weeks, with some interruptions due to monitors and/or local network shutdowns.
The obtained data is available at \cite{radarurl}.
\subsection{Influence of parameters.}
\label{sec_influence}
Using the methodology sketched above, we show here how to rigorously evaluate the influence of various parameters. We focus on a few representative ones only, the key conclusion being that the base parameters described above fit our needs very well.
Figure~\ref{fig_speed_up} (left) shows the impact of the inter-round delay: on the rightmost part the delay was significantly reduced, leading to an increase in the observation's time resolution ({\em i.e.}\ more points per unit of time). It is clear from the figure that this has no significant impact on the observed behavior. In particular, the variations in the number of \mbox{\em \sc ip}\ addresses seen, though they have a higher resolution after the speed-up, are very similar before and after it. Moreover, the control monitor shows that the base time scale is relevant, since improving it does not reveal significantly higher dynamics.
Figure~\ref{fig_nb_dest} (middle) shows the impact of the number of
destinations. As expected, increasing this number leads to an increase
in the number of observed \mbox{\em \sc ip}\ addresses.
The key point however is that increasing
the number of destinations may lead to a relative loss of efficiency:
simulations of what we would have seen with $3\,000$ or $1\,000$
destinations display a smaller number of \mbox{\em \sc ip}\ addresses than direct
measurements with these numbers of destinations (the control monitor
proves that this is not due to a simultaneous topology change). This
is due to the fact that probing towards $10\,000$ destinations induces
too high a network load: since some routers answer to {\em \sc icmp}\ packets
with a limited rate only \cite{govindan2002estimating}, overloading
them makes them invisible to our measurements. Importantly, this does
not occur in simulations of $1\,000$ destination measurements from
ones with $3\,000$, thus showing that the load induced with $3\,000$
destinations is reasonable in this regard.
Figure~\ref{fig_nb_dest} (right) shows the impact of the timeout value.
As expected, decreasing the timeout leads to a decrease in the round
duration.
However, it also causes more replies to probe packets to be ignored because we receive them after the timeout.
A good value for the timeout is a compromise
between the two. We observe that the round duration is only slightly larger with a timeout of $2\,s$ than with a timeout of $1\,s$ (contrary to the change between timeouts of $4\,s$ and $2\,s$). The base timeout value ($2\,s$) therefore seems appropriate: it is rather large, yet does not lead to a long round duration.
\begin{figure*}[!ht]
\centering
\hfill
\includegraphics[scale=0.48]{tr-tt-rounds.eps}
\hfill
\includegraphics[scale=0.48]{tr-tt-packets.eps}
\hfill
\includegraphics[scale=0.46]{LOAD2_d.fig.eps}
\hfill
\\
\axes{\hspace{1cm}$x$ = \# rounds; $y$ = \# ip}{\hspace{0cm}$x$ = \# packets; $y$ = \# ip}{$x$ = \# times probed; $y$ = \# links\hspace{-1cm}}
\caption{
{\bf Comparison between {\tt trace\-route}\ and {\tt trace\-tree}.}
{\bf Left and center:} number of distinct \mbox{\em \sc ip}\ addresses viewed since the beginning with a {\tt trace\-route}\ measurement (plain lines) and a {\tt trace\-tree}\ measurement simulated from it (dotted lines);
left: as a function of the number of rounds;
center: as a function of the number of packets sent.
To improve readability, we cut the part of the plots corresponding to the $20$ first rounds and to the $10^6$ first packets, respectively.
{\bf Right:} typical link load distribution with a {\tt trace\-route}\ ego-centered measurement. For each value $x$ on the horizontal axis, we give the number of links which are discovered $x$ times during a {\tt trace\-route}\ ego-centered measurement with 3\,000 destinations (base value).
}
\label{fig_traceroute}
\end{figure*}
We also considered other observables (like the number of stars seen at
each round, and the number of packets received after the timeout),
for measurements
obtained from various monitors and towards various destinations; in
all cases, the conclusion was the same: the base parameters proposed
above meet our requirements.
\begin{figure*}[!ht]
\centering
\includegraphics[scale=0.45]{pics.eps}
\caption{Bottom: number of distinct \mbox{\em \sc ip}\ addresses observed during each round of measurement.
Top: number of distinct \mbox{\em \sc ip}\ addresses observed during series of ten consecutive rounds.}
\label{fig_events}
\end{figure*}
\begin{figure*}[!ht]
\begin{minipage}[c]{0.35\linewidth}
\includegraphics[scale=0.48]{distrib.6.eps}
\end{minipage}
\begin{minipage}[c]{0.25\linewidth}
\includegraphics[scale=.28]{710.ps}~
\includegraphics[scale=.28]{644.ps}
\end{minipage}
\begin{minipage}[c]{0.30\linewidth}
\includegraphics[scale=0.48]{cc.size_distrib.fig.eps}
\end{minipage}
\axes{\hspace{1cm}$x$ = \# ip ; $y$ = \# series}{\hspace{0cm}}{$x$ = components size; $y$ = \# components \hspace{-1cm}}
\caption{
{\bf Left: Distribution of the values of the upper plot in Figure~\ref{fig_events}}.
{\bf Center: typical {\em islands} of appearing nodes.} Each node is an \mbox{\em \sc ip}\ address; the black ones are the ones observed during the second half of the measurement only, the others being already present in the first half.
The square nodes were present in {\em all} the ($2\,200$) rounds of measurement. Links are directed from bottom to top, {\em i.e.}\ from the monitor to destinations. The number of rounds necessary to discover all $13$ new nodes in the left drawing was $669$ rounds ($1\,306$ to $1\,974$), but only $2$ rounds ($2\,021$ and $2\,022$) were sufficient for the $9$ right ones. Notice that $7$ connected components of new nodes are displayed: $4$ of size $1$, $1$ of size $4$, $1$ of size $5$, and $1$ of size $9$.
{\bf Right: distribution of new node component sizes.} For each possible size $x$ (horizontal axis), the number of connected components of new nodes of size $x$ is given.
}
\label{fig_cc}
\end{figure*}
\subsection{Comparison with {\tt trace\-route}.}
\label{sec_traceroute}
As explained in Section~\ref{sec_tracetree}, a key goal of our
{\tt trace\-tree}\ measurement tool is to perform significantly better than
direct use of {\tt trace\-route}\ in our context.
To evaluate this, we compare the difference in the obtained information
with {\tt trace\-route}{} and {\tt trace\-tree}{},
as well as the load they induce on the network (Figure~\ref{fig_traceroute}, left and center).
First notice that the plot as a function of the number of rounds with {\tt trace\-route}\ is higher than the one with {\tt trace\-tree}, as expected: any {\tt trace\-route}\ round gathers slightly more data than the corresponding {\tt trace\-tree}\ round (below $1\,\%$ here).
It is however much more interesting to compare them in terms of the number of packets sent (reflecting the load induced on the network and our ability to increase the measurement frequency). The plots show that, in this regard, {\tt trace\-tree}\ is much more efficient than direct {\tt trace\-route}\ measurements: here, {\tt trace\-tree}\ reaches $14\,100$ distinct \mbox{\em \sc ip}\ addresses with around $3$ million packets, while {\tt trace\-route}\ needs around $4.5$ million packets.
Recall moreover that the load induced by {\tt trace\-tree}\ is balanced among links,
which is not the case for {\tt trace\-route}{}, see Figure~\ref{fig_traceroute} (right).
We can see that some links are probed a very high number of times {\em at each round}
(typically up to $3\,000$ times if we use $3\,000$ destinations).
See \cite{DTJSAC,DTSigmetrics,probingScheme} for detailed studies of such effects.
Finally, in addition to the key advantage of providing homogeneous tree ego-centered views of the topology, the {\tt trace\-tree}\ tool also is much more efficient than {\tt trace\-route}\ in terms of the number of packets sent, thus making it possible to repeatedly run it in
radar measurements
with a reasonable network cost.
\section{Towards event detection.}
\label{sec_event}
One key interest of our measurements is that they make it possible to
observe the dynamics of the \mbox{\em \sc ip}{} internet topology from an ego-centered
perspective, at a time scale of a few minutes only.
In
particular, detecting {\em events} in this dynamics, {\em i.e.}\ major
changes in the topology, is very appealing from a security and
modeling point of view.
A most natural direction to try and detect events is to observe the
number of distinct \mbox{\em \sc ip}\ addresses seen at {\em each} round, as plotted
in Figure~\ref{fig_events}. Clear events indeed appear in such plots,
under the form of downward peaks.
However, this provides little
information, if any: these peaks may be caused by temporary partial or total
connectivity losses at the monitor (or close to it), not by important
events at the internet level. On the other hand, one may notice that
no significant upward peak appears in this plot. Notice that this is a
non-trivial fact: from a topological point of view, such peaks would
be possible; the fact that they do not occur reflects non-trivial
properties of the topology and its dynamics, which we leave for
further study.
Interestingly, the plot of the number of distinct \mbox{\em \sc ip}\ addresses seen
during ten consecutive rounds, Figure~\ref{fig_events}, has very
different characteristics. It exhibits
upward peaks (the distribution of observed values, presented
in Figure~\ref{fig_cc}, left, confirms that these peaks are statistically significant
outliers). These peaks reveal important changes in the \mbox{\em \sc ip}\ addresses
observed in consecutive rounds, and thus important routing changes:
though the number of observed \mbox{\em \sc ip}{} addresses is roughly the same before
and after these events,
the ego-centered views have changed.
\begin{figure*}[!ht]
\begin{center}
\includegraphics[scale=0.5]{clem.eps}
\end{center}
\caption{Representation of the event at round 106231 in Figure~\ref{fig_events}:
the graph is obtained by merging 100 rounds before the event together with
a single round after the event.
Edges in bold black are edges that were seen in the round after the event but not in the
100 rounds before.
}
\label{fig_graph}
\end{figure*}
To illustrate this,
we present in Figure~\ref{fig_graph} a graph obtained by merging ego-centered
views measured before and after such an upward peak.
We can clearly see that this peak corresponds to a large number of new edges
appearing in a specific part of the network,
confirming the occurrence of a significant event.
\medskip
Another approach consists in detecting
events occurring during a measurement from round $i$ to round $j$ by
comparing it to the measurement from round $i-k$ to round $i$, which
serves as a reference: we consider the \mbox{\em \sc ip}\ addresses seen during the
period of interest which were not observed in the reference period. We
call these \mbox{\em \sc ip}\ addresses the {\em new} addresses.
Our observations show that it is natural to observe such new addresses
during any measurement.
However, one may expect
that events of interest will lead to the appearance of connected
groups of such addresses; we therefore propose to compute the
connected components composed of new addresses\,\footnote{{\em i.e.}\ maximal
sets of new addresses such that there exists a path between any two
of them composed only of new addresses.} as a way to observe these
events.
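This computation is straightforward; a possible sketch, in Python with the {\tt networkx} library, is:
\begin{verbatim}
import networkx as nx

def new_address_components(edges, reference_ips, window_ips):
    # reference_ips: addresses seen during the reference period;
    # window_ips:    addresses seen during the period of interest;
    # edges:         links (pairs of addresses) observed during the
    #                period of interest.
    new = window_ips - reference_ips
    g = nx.Graph()
    g.add_nodes_from(new)              # isolated new addresses count too
    g.add_edges_from((a, b) for a, b in edges
                     if a in new and b in new)
    return list(nx.connected_components(g))
\end{verbatim}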
We display such components in Figure~\ref{fig_cc} (center), together
with their neighborhood. This figure shows clearly that, in some
cases, the observed components are non-trivial islands of newly
observed nodes, revealing local events in the network.
Figure~\ref{fig_cc} (right) however shows that such non-trivial islands are quite rare: most connected components of new nodes are very small, often reduced to a single node ($949$ out of a total of $1\,457$ components in our example). Despite this, some large components appear (the largest one in our example has size $17$, and $15$ components have size at least $10$), thus revealing underlying events of interest.
Another important characteristic of connected components of new addresses is the number of rounds needed to discover all their nodes, defined as the round number at which their last node was discovered minus the round number at which their first node was, plus one. Indeed, short discovery times indicate that all the new nodes under concern probably appeared because of the same event. Large times, instead, show that several events (located close to each other in the network) occurred. The examples in Figure~\ref{fig_cc} (center) show that both cases occur.
The distribution of the number of rounds needed to discover each component of new nodes (not represented here) is very heterogeneous, with many components discovered very rapidly and others much more slowly. This gives little information, however, as the discovery time may depend strongly on the component size. Studying the correlations between the two (not represented here) confirms this, but it also shows that some large connected components are discovered very rapidly.
\medskip
The two approaches we described point out specific moments at which events occurred;
one may then observe the data more
closely, in order to investigate the nature of these events. We leave
this for further research.
\section{Conclusion and perspectives.}
\label{sec_conclusion}
In this paper, we propose, implement, and illustrate a new measurement
approach which makes it possible to study the dynamics of \mbox{\em \sc ip} -level
internet topology at a time scale of a few minutes. We provide a rich
dataset consisting of radar measurements from more than one hundred monitors towards thousands of destinations, conducted continuously for several weeks.
The most important direction for further research is of course the
analysis of collected data.
A particularly appealing goal is the detection of events
in the dynamics of the observed topology;
this raises difficult fundamental questions,
such as the characterization of {\em normal} dynamics,
or the identification of relevant time scales for the observation.
Other promising directions include visualizing the observed
dynamics,
and conducting more radar measurements to gain a deeper insight
(for instance, one could conduct simultaneous
measurements from several monitors to observe the dynamics from
different viewpoints).
\bigskip
\noindent
{\bf Acknowledgments.}
We warmly thank the PhD students and other colleagues of the LIP6, in particular Guillaume Valadon, Renata Teixeira,
and Brice Augustin who provided great insight during this work.
Likewise, we thank Beno\^\i t Donnet, who helped much with the references and also provided useful comments. Many interesting
discussions within the METROSEC project \cite{metrosecurl} also played a key role in our work.
We also thank all the people who provided monitors to us, in particular the PlanetLab staff \cite{planetLab}, Fr\'ed\'eric Aidouni,
Julien Aussibal, Prof. Hiroshi Esaki (WIDE), Jean-Charles de Longueville (Hellea) and S\'ebastien Wacquiez (Enix);
no such work would be possible without their help.
This work was funded in part by the METROSEC and AGRI projects.
\bibliographystyle{latex8}
\section{Introduction}\label{sec:intro}
In the quest for designing advanced propulsion and power-generation systems, there is an increasing need for an effective methodology that combines engineering physics, computer simulations and statistical modeling. A key point of interest in this design process is the treatment of turbulence flows, a subject that has far-reaching scientific and technological importance \citep{McC1990}. Turbulence refers to the irregular and chaotic behavior resulting from motion of a fluid flow \citep{Pop2001}, and is characterized by the formation of eddies and vortices which transfer flow kinetic energy due to rotational dynamics. Such a phenomenon is an unavoidable aspect of everyday life, present in the earth's atmosphere and ocean waves, and also in chemically reacting flows in propulsion and power-generation devices. In this paper, we develop a surrogate model, or emulator, for predicting turbulent flows in a swirl injector, a mechanical component with a wide variety of engineering applications.
There are two reasons why a statistical model is required for this important task. First, the time and resources required to develop an effective engineering device with desired functions may be formidable, even at a \textit{single} design setting. Second, even with the availability of high-fidelity simulation tools, the computational resources needed can be quite costly, and only a handful of design settings can be treated in practical times. {For example, the flow simulation of a single injector design takes over 6 days of computation time, parallelized using 200 CPU cores.} For practical problems with large design ranges and/or many design inputs, the use of only high-fidelity simulations is insufficient for surveying the full design space. In this setting, emulation provides a powerful tool for efficiently predicting flows at any design geometry, using a small number of flow simulations as training data. A central theme of this paper is that, by properly \textit{eliciting} and \textit{applying} physical properties of the fluid flow, simplifying assumptions can be made on the emulator which greatly reduce computation and improve prediction accuracy. In view of the massive simulation datasets, which can exceed many gigabytes or even terabytes in storage, such efficiency is paramount for the usefulness of emulation in practice.
The proposed emulator utilizes a popular technique called \textit{kriging} \citep{Mat1963}, which employs a Gaussian Process (GP) for modeling computer simulation output over a desired input domain. The main appeal of kriging lies in the fact that both the emulation predictor and its associated uncertainty can be evaluated in closed-form. For our application, a kriging model is required which can predict {flows} at any injector geometry setting; we refer to this as \textit{flow kriging} for the rest of the paper. In recent years, there have been important developments in flow kriging, including the works of \cite{Wea2006} and \cite{Rou2008} on {regular spatial grids} (i.e., outputs are observed at the same spatial locations over all simulations), and \cite{Hea2015} on irregular grids. Unfortunately, it is difficult to apply these models to the more general setting in which the \textit{dimensions} of spatial grids vary greatly for different input variables. In the present work, for instance, the desired design range for injector length varies from 20 mm to 100 mm. Combined with the high spatial and temporal resolutions required in simulation, the resulting flow data is much too large to process using existing models, and data-reduction methods are needed.
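As a point of reference before describing these, recall that the kriging predictor and its uncertainty are available in closed form. The sketch below (in Python; simple kriging with zero mean, unit process variance and a fixed Gaussian correlation, with hyperparameter estimation omitted) is purely illustrative and is not the flow emulator developed in this paper:
\begin{verbatim}
import numpy as np

def kriging_predict(X, y, xnew, theta=1.0, nugget=1e-8):
    # X: (n, p) design points; y: (n,) outputs; xnew: (p,) new input.
    def corr(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-theta * d2)
    R = corr(X, X) + nugget * np.eye(len(X))
    r = corr(xnew[None, :], X)[0]
    w = np.linalg.solve(R, r)
    mean = w @ y          # closed-form predictor ...
    var = 1.0 - r @ w     # ... and its associated uncertainty
    return mean, var
\end{verbatim}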
There has been some work on using reduced-basis models to compact data for emulation, including the functional linear models by \cite{Fea2006}, wavelet models by \cite{Bea2007} and principal component models by \cite{RS2002} and \cite{Hea2007}. Here, we employ a generalization of the latter method called \textit{proper orthogonal decomposition (POD)} \citep{Lum1967}, which is better known in statistical literature as the Karhunen-Lo\`eve decomposition \citep{Kar1947,Loe1955}. From a flow physics perspective, POD separates a simulated flow into key instability structures, each with its corresponding spatial and dynamic features. Such a decomposition is, however, inappropriate for emulation, because there is no way to connect the extracted instabilities of one input setting to the instabilities of another setting. To this end, we propose a new method called the \textit{common POD} (CPOD) to extract \textit{common} instabilities over the design space. This technique exploits a simple and physically justifiable linearity assumption on the spatial distribution of instability structures.
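For completeness, the standard POD construction via a singular value decomposition of the snapshot matrix can be sketched as follows (in Python; the centering and truncation choices here are illustrative and do not reflect the CPOD method introduced later):
\begin{verbatim}
import numpy as np

def pod(Y, k):
    # Y: (n_space, n_time) snapshot matrix; k: number of retained modes.
    Ym = Y - Y.mean(axis=1, keepdims=True)      # centre in time
    U, s, Vt = np.linalg.svd(Ym, full_matrices=False)
    modes = U[:, :k]                            # spatial modes
    coeffs = s[:k, None] * Vt[:k, :]            # temporal coefficients
    energy = s[:k] ** 2 / (s ** 2).sum()        # energy captured per mode
    return modes, coeffs, energy
\end{verbatim}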
In addition to efficient flow emulation, our model also provides two important features. First, the same domain-specific model simplifications (e.g., on the spatio-temporal correlation structure) which enable efficient prediction also allow for an efficient uncertainty quantification (UQ) for such a prediction. This UQ is highly valuable in practice, since the associated uncertainties for variable disturbance propagations can then be used for mitigating flow instabilities \citep{Yea2013}. Second, by incorporating known properties of the fluid flow into the model, the proposed emulator can in turn provide valuable insights on the dominant physics present in the system, which can then be used to guide further scientific investigations. One key example of this is the learning of dominant flow coupling mechanisms using a large co-kriging model \citep{SC1991,Bea2014} under sparsity constraints.
The paper is structured as follows. Section \ref{sec:data} provides a brief overview of the physical model of concern, including injector design, governing equations and experimental design. Section \ref{sec:method} introduces the proposed emulator model, and proposes a parallelized algorithm for efficient parameter estimation. Section \ref{sec:result} presents the emulation prediction and UQ for a new injector geometry, and interprets important physical correlations extracted by the emulator. Section \ref{sec:concl} concludes with directions for future work.
\section{Injector schematic and large eddy simulations}\label{sec:data}
We first describe the design schematic for the swirl injector of concern, then briefly outline the governing partial differential equations and simulation tools. A discussion on experimental design is provided at the end of this section.
\begin{table}[t]
\begin{minipage}{0.53\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures/injconf.png}
\captionof{figure}{Schematic of injector configuration.}
\label{fig:inj}
\end{minipage}
\hfill
\begin{minipage}{0.43\textwidth}
\centering
\begin{tabular}{cc}
\toprule
\text{\bf{Parameter}} & \text{\bf{Range}}\\
\midrule
\text{$L$} & 20 mm - 100 mm\\
\text{$R_n$} & 2.0 mm - 5.0 mm\\
\text{$\delta$} & 0.5 mm - 2.0 mm\\
\text{$\theta$} & $45^{\circ} - 75^{\circ}$\\
\text{$\Delta L$} & 1.0 mm - 4.0 mm\\
\bottomrule
\end{tabular}
\caption{Range of geometric parameters.}
\label{tbl:range}
\end{minipage}
\end{table}
\subsection{Injector design}
Figure \ref{fig:inj} shows a schematic of the swirl injector under consideration. It consists of an open-ended cylinder and a row of tangential entries for liquid fluid injection. The configuration is typical of many propulsion and power-generation applications \citep{ZY2008, WY2016, Wea2017}. Liquid propellant is tangentially introduced into the injector and forms a thin film attached to the wall due to the swirl-induced centrifugal force. A low-density gaseous core exists in the center region in accordance with conservation of mass and angular momentum. The liquid film exits the injector as a thin sheet and mixes with the ambient gas. The swirl injection and atomization process involves two primary mechanisms: disintegration of the liquid sheet as it swirls and stretches, and sheet breakup due to the interaction with the surroundings. The design of the injector significantly affects the atomization characteristics and stability behaviors.
{Figure} \ref{fig:inj} shows the five design variables considered for injector geometry: the injector length $L$, the nozzle radius $R_n$, the inlet diameter $\delta$, the injection angle $\theta$, and the distance between inlet and head-end $\Delta L$. From flow physics, these five variables strongly influence the liquid film thickness $h$ and spreading angle $\alpha$ (see Figure \ref{fig:inj}), which are key measures of swirl injector performance. For example, a larger injection angle $\theta$ induces greater swirl momentum in the liquid oxygen flow, which in turn causes a thinner film and a smaller spreading angle. Table \ref{tbl:range} summarizes the design ranges for these five variables. To ensure the applicability of our work, broad geometric ranges are considered, covering design settings for several existing rocket injectors. Specifically, the range for injector length $L$ covers the injector lengths of the RD-0110 and RD-170 liquid-fuel rocket engines.
\subsection{Flow simulation}
The numerical simulations here are performed with a pressure of 100 atm, which is typical of contemporary liquid rocket engines with liquid oxygen (LOX) as the propellant. The physical processes modeled here are turbulent flows, in which various sizes of turbulent eddies are involved. A direct numerical simulation to resolve all eddy length-scales is computationally prohibitive. To this end, we employ the large eddy simulation (LES) technique, which directly simulates large turbulent eddies and employs a model-based approach for small eddies. To provide initial turbulence, broadband Gaussian noise is superimposed onto the inlet velocity components. Thermodynamic and transport properties are simulated using the techniques in \cite{Hea2014} and \cite{Wea2015}; the theoretical and numerical framework can be found in \cite{OY1998} and \cite{Zea2004}. To optimize computational speed, a multi-block domain decomposition technique combined with the message-passing interface for parallel computing is applied. Each LES simulation takes 6 days of computation time, parallelized over 200 CPU cores, to obtain $T = 1,000$ snapshots with a time-step of 0.03 ms after the flow reaches a statistically stationary state. From this, six flow variables of interest can be extracted: axial ($u$), radial ($v$), and circumferential ($w$) components of velocity, temperature ($T$), pressure ($P$) and density ($\rho$).
Numerical simulations are conducted for $n=30$ injector geometries in the timeframe set for this project. These simulation runs are allocated over the design space in Table \ref{tbl:range} using the maximum projection (MaxPro) design proposed by \cite{Jea2015}; a sketch of the MaxPro criterion is given at the end of this section. Compared to Latin-hypercube-based designs (e.g., \citealp{Mea1979}, \citealp{MM1995}), MaxPro designs enjoy better space-filling properties in {all} possible projections of the design space, and also provide better predictions for GP modeling. While $n=30$ simulation runs may appear to be too small a dataset for training the proposed flow emulator, we show that this sample size can provide accurate flow predictions for the application at hand, through an elicitation of flow physics and the incorporation of such physics into the model. For these 30 runs, one issue which arises is that the simulation data is massive, requiring nearly a hundred gigabytes in computer storage. For such large data, a blind application of existing flow kriging methods may require weeks for flow prediction, which entirely defeats the purpose of emulation, because a simulated flow can be generated in 6 days. Again, by properly eliciting and incorporating physics as simplifying assumptions for the emulator model, accurate flow predictions can be achieved in hours despite a limited run size. We elaborate on this elicitation procedure in the following section.
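For concreteness, the MaxPro criterion of \cite{Jea2015} is simple to evaluate. The following Python sketch scores candidate Latin hypercube designs by this criterion and keeps the best one; the random-search loop and all function names are ours, purely for illustration, whereas \cite{Jea2015} provide a dedicated optimization algorithm and software for constructing MaxPro designs.
\begin{verbatim}
import numpy as np

def maxpro_criterion(D):
    # MaxPro criterion (smaller is better); D is an (n x p) design
    # with all inputs scaled to the unit hypercube [0, 1]^p.
    n, p = D.shape
    total = sum(1.0 / np.prod((D[i] - D[j]) ** 2)
                for i in range(n) for j in range(i + 1, n))
    return (total / (n * (n - 1) / 2)) ** (1.0 / p)

def random_lhd(n, p, rng):
    # Random Latin hypercube design on [0, 1]^p
    return np.column_stack([(rng.permutation(n) + rng.random(n)) / n
                            for _ in range(p)])

rng = np.random.default_rng(0)
best = min((random_lhd(30, 5, rng) for _ in range(200)),
           key=maxpro_criterion)
\end{verbatim}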
\section{Emulator model}\label{sec:method}
\begin{table}[t]
\centering
\begin{tabular}{K{0.6\linewidth} | K{0.38\linewidth}}
\toprule
\textbf{Flow physics} & \textbf{Model assumption}\\
\midrule
Coherent structures in turbulent flow \citep{Lum1967} & POD-based kriging\\
\hline
Similar Reynolds numbers for cold-flows \citep{Sto1851} & Linear-scaling modes in CPOD \\
\hline
Dense simulation time-steps & Time-independent emulator\\
\hline
Couplings between flow variables \citep{Pop2001} & Co-kriging framework with covariance matrix $\bm{T}$\\
\hline
Few-but-significant couplings \citep{Pop2001} & Sparsity on $\bm{T}^{-1}$\\
\bottomrule
\end{tabular}
\caption{Elicited flow physics and corresponding assumptions for the emulator model.}
\label{tbl:model}
\end{table}
We first introduce the new idea of CPOD, then present the proposed emulator model and a parallelized algorithm for parameter estimation. A key theme in this section (and indeed, for this paper) is the elicitation and incorporation of flow physics within the emulator model. This not only allows for {efficient} and {accurate} flow predictions through simplifying model assumptions, but also provides a data-driven method for {extracting} useful flow physics, which can then guide future experiments. As demonstrated in Section \ref{sec:result}, both objectives can be achieved despite limited runs and complexities inherent in flow data. Table \ref{tbl:model} summarizes the elicited flow physics and the corresponding emulator assumptions; we discuss each point in greater detail below.
\subsection{Common POD}
A brief overview of POD is first provided, following \cite{Lum1967}. For a \textit{fixed} injector geometry, let $Y(\bm{x},t)$ denote a flow variable (e.g., pressure) at spatial coordinate $\bm{x}\in \mathbb{R}^2$ and flow time $t$. POD provides the following decomposition of $Y(\bm{x},t)$ into separable spatial and temporal components:
\begin{equation}
Y(\bm{x},t) =\sum_{k=1}^\infty \beta_k(t) \phi_k(\bm{x}),
\label{eq:klexp}
\end{equation}
with the spatial eigenfunctions $\{\phi_k(\bm{x})\}_{k=1}^\infty$ and temporal coefficients $\{\beta_k(t)\}_{k=1}^\infty$ given by:
\begin{align}
\begin{split}
\phi_k(\bm{x}) = \argmax_{\substack{\| \psi \|_2 = 1, \\ \langle \psi, \phi_l \rangle = 0, \forall l < k}} \int \left\{ \int Y(\bm{x},t) \psi(\bm{x}) \; d\bm{x}\right\}^2 \; dt, \quad
\beta_k(t) = \int Y(\bm{x},t) \phi_k(\bm{x}) \; d\bm{x}.
\end{split}
\end{align}
Following \cite{Bea1993}, we refer to $\{\phi_k(\bm{x})\}_{k=1}^\infty$ as the \textit{spatial POD modes} for $Y(\bm{x},t)$, and its corresponding coefficients $\{\beta_k(t)\}_{k=1}^\infty$ as \textit{time-varying coefficients}.
There are two key reasons for choosing POD over other reduced-basis models. First, one can show \citep{Loe1955} that any truncated representation in \eqref{eq:klexp} gives the best flow reconstruction of $Y(\bm{x},t)$ in $L_2$-norm, compared to any other linear expansion of space/time products with the same number of terms. This property is crucial for our application, since it allows the massive simulation data to be optimally reduced to a smaller training dataset for the proposed emulator. Second, the POD has a special interpretation in terms of turbulent flow. In the seminal paper by \cite{Lum1967}, it is shown that, under certain conditions, the expansion in \eqref{eq:klexp} can extract \textit{physically meaningful} coherent structures which govern turbulence instabilities. For this reason, physicists use POD as an experimental tool to pinpoint key flow instabilities, simply through an inspection of $\phi_k(\bm{x})$ and the dominant frequencies in $\beta_k(t)$. For example, using POD analysis, \cite{ZY2008} showed that the two flow phenomena, hydrodynamic wave propagation on LOX film and vortex core excitation near the injector exit, are the key mechanisms driving flow instability. This is akin to the use of {principal components} in regression, which can yield meaningful results in applications where such components have innate interpretability.
\begin{figure}[t]
\centering
\includegraphics[width=\textwidth]{Figures/CommonGrid}
\caption{Common grid using linearity assumption for CPOD.}
\label{fig:rescalestep1}
\end{figure}
Unfortunately, POD is only suitable for extracting instability structures at a \textit{single} geometry, whereas for emulation, a method is needed that can extract {common} structures over \textit{varying} geometries. With this in mind, we propose a new decomposition called common POD (CPOD). The key assumption of CPOD is that, under a \textit{physics-guided partition} of the computational domain, the spatial distribution of coherent structures \textit{scales linearly} over varying injector geometries. For {cold flows}, this can be justified by similar Reynolds numbers (which characterize flow dynamics) over different geometries \citep{Sto1851}. This is one instance of model simplification through elicitation, because such a property likely does not hold for general flows. This linearity assumption is highly valuable for computational efficiency, because flows from different geometries can then be rescaled onto a common spatial grid for instability extraction. Figure \ref{fig:rescalestep1} visualizes this procedure. The grids for each simulation are first split into four parts: from injector head-end to the inlet, from the inlet to the nozzle exit, and the top and bottom portions of the downstream region. Each part is then proportionally rescaled to a common, reference grid according to changes in the geometric variables $L$, $R_n$ and $\Delta L$ (see Figure \ref{fig:inj}). From a physics perspective, such a partition is necessary for the linearity assumption to hold.
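To make the rescaling concrete, the following Python sketch gives a one-dimensional version of this map along the axial direction, with assumed breakpoints at the head-end, inlet, nozzle exit and end of the computational domain; the actual map $\mathcal{M}_i$ acts on the full two-dimensional grid and is detailed in Appendix A.1.
\begin{verbatim}
import numpy as np

def rescale_axial(x, geom_from, geom_to):
    # Piecewise-linear map of axial coordinates between two geometries.
    # Each geometry is summarized by assumed breakpoints: 0 (head-end),
    # dL (inlet), L (nozzle exit) and L + L_down (end of domain).
    xp = [0.0, geom_from["dL"], geom_from["L"],
          geom_from["L"] + geom_from["L_down"]]
    fp = [0.0, geom_to["dL"], geom_to["L"],
          geom_to["L"] + geom_to["L_down"]]
    return np.interp(x, xp, fp)
\end{verbatim}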
Stating this mathematically, let $\bm{c}_1, \cdots, \bm{c}_n \in \mathbb{R}^p$ be the $n$ simulated geometries, let $Y(\bm{x},t;\bm{c}_i)$ be the simulated flow at setting $\bm{c}_i$, and fix some setting $\bm{c} \in \{\bm{c}_i\}_{i=1}^n$ as the geometry for the common grid. Next, define $\mathcal{M}_i:\mathbb{R}^2 \rightarrow \mathbb{R}^2$ as the linear map which rescales spatial modes on the common geometry $\bm{c}$ back to the $i$-th simulated geometry $\bm{c}_i$ according to geometric changes in $L$, $R_n$ and $\Delta L$. $\mathcal{M}_i$ can be viewed as the inverse map of the procedure described in the previous paragraph and visualized in Figure \ref{fig:rescalestep1}, which rescales modes from $\bm{c}_i$ to the common geometry $\bm{c}$ (see Appendix A.1 for details). CPOD provides the following decomposition of $Y(\bm{x},t; \bm{c}_i)$:
\begin{equation}
Y(\bm{x},t; \bm{c}_i) = \sum_{k=1}^\infty \beta_k(t; \bm{c}_i) \mathcal{M}_i \{\phi_k(\bm{x})\},
\label{eq:cklexp}
\end{equation}
with the spatial CPOD modes $\{\phi_k(\bm{x})\}$ and time-varying coefficients $\{\beta_k(t;\bm{c}_i)\}$ defined as:
\small
\begin{equation}
\phi_k(\bm{x}) = \argmax_{\substack{\| \psi \|_2 = 1, \\ \langle \psi, \phi_l \rangle = 0, \forall l < k}} \sum_{i=1}^n \int \left\{ \int Y(\bm{x},t; \bm{c}_i) \mathcal{M}_i\{\psi(\bm{x})\} \; d\bm{x} \right\}^2 dt, \; \beta_k(t;\bm{c}_i) = \int Y(\bm{x},t; \bm{c}_i) \mathcal{M}_i \{ \phi_k(\bm{x}) \} \; d\bm{x}.
\end{equation}
\normalsize
Here, $\phi_k(\bm{x})$ is the spatial distribution for the $k$-th common flow structure, with $\beta_k(t;\bm{c}_i)$ its time-varying coefficient for geometry $\bm{c}_i$. As in POD, leading terms in CPOD can also be interpreted in terms of flow physics, a property we demonstrate later in Section \ref{sec:result}. CPOD therefore not only provides optimal {data-reduction} for the simulation data, but also extracts {physically meaningful} structures which can then be incorporated for emulation.
Algorithmically, the CPOD expansion can be computed by rescaling and interpolating all flow simulations to the common grid, computing the POD expansion, and then rescaling the resulting modes back to their original grids. Interpolation is performed using the inverse distance weighting method in \cite{She1968}, and can be justified by the dense spatial resolution of the data (with around 100,000 grid points for each simulation). Letting $T$ be the total number of time-steps, a naive implementation of this decomposition requires $O(n^3T^3)$ work, due to a singular-value-decomposition (SVD) step. Such a decomposition therefore becomes computationally intractable when the number of runs grows large or when simulations have dense time-steps (as is the case here). To avoid this computational issue, we use an iterative technique from \cite{LS1998} called {the implicitly restarted Arnoldi method}, which approximates leading terms in \eqref{eq:cklexp} using periodically restarted Arnoldi decompositions. A brief sketch of this step is given below, and the full algorithm for CPOD is outlined in Appendix A.
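As an illustration, the sketch below computes the leading CPOD terms from the rescaled, stacked snapshot matrix using \texttt{scipy.sparse.linalg.svds}, whose ARPACK backend implements the implicitly restarted Arnoldi iterations of \cite{LS1998}. The snapshot layout and variable names are our assumptions.
\begin{verbatim}
import numpy as np
from scipy.sparse.linalg import svds

def cpod_terms(snapshots, K):
    # snapshots: (n*T, G) matrix; each row is one time-snapshot of one
    # simulation after rescaling/interpolation to the common grid of G
    # points. ARPACK avoids the O((nT)^3) cost of a full SVD.
    U, s, Vt = svds(snapshots, k=K)
    order = np.argsort(s)[::-1]       # svds returns ascending order
    modes = Vt[order]                 # orthonormal spatial modes phi_k
    coefs = U[:, order] * s[order]    # coefficients beta_k(t; c_i)
    return modes, coefs
\end{verbatim}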
\subsection{Model specification}
After the CPOD extraction, the extracted time-varying coefficients $\{\beta_k(t;\bm{c}_i)\}_{i,k}$ are then used as data for fitting the proposed emulator. There has been some existing work on dynamic emulator models, such as \cite{CO2010}, \cite{Cea2009} and \cite{LW2009}, but the sheer number of simulation time-steps here can impose high computation times and numerical instabilities for these existing methods \citep{Hea2015}. As mentioned previously, computational efficiency is paramount for our problem, since simulation runs can be performed within a week. Moreover, existing emulators cannot account for cross-correlations between different dynamic systems, while the flow physics represented by different CPOD modes are known to be highly coupled from governing equations. Here, we exploit the dense temporal resolution of the flow by using a \textit{time-independent (TI)} emulator that employs independent kriging models at {each slice of time}. The rationale is that, because time-scales are so fine, there is no practical need to estimate temporal correlations (even when they exist), since prediction is not required between time-steps. This time-independent simplification is key for emulator efficiency, since it allows us to fully exploit the power of parallel computing for model fitting and flow prediction.
The model is as follows. Suppose $R$ flow variables are considered (with $R=6$ in the present case), and the CPOD expansion in \eqref{eq:cklexp} is truncated at $K_r$ terms for flow $r = 1, \cdots, R$. Let $\boldsymbol{\beta}^{(r)}(t;\bm{c}) = (\beta^{(r)}_1(t;\bm{c}), \cdots, \beta^{(r)}_{K_r}(t;\bm{c}))^T$ be the vector of $K_r$ time-varying coefficients for flow variable $r$ at design setting $\bm{c}$, with $\boldsymbol{\beta}(t;\bm{c}) = (\boldsymbol{\beta}^{(1)}(t;\bm{c})^T, \cdots, \boldsymbol{\beta}^{(R)}(t;\bm{c})^T)^T$ the coefficient vector for all flows at $\bm{c}$. We assume the following \textit{time-independent GP model} on $\boldsymbol{\beta}(t;\bm{c})$:
\begin{equation}
\boldsymbol{\beta}(t;\bm{c}) \sim GP\{\boldsymbol{\mu}(t), \boldsymbol{\Sigma}(\cdot, \cdot;t)\}, \quad \boldsymbol{\beta}(t;\bm{c}) \perp \boldsymbol{\beta}(t';\bm{c}) \text{ for } t \neq t'.
\label{eq:gpcoef}
\end{equation}
Here, $K = \sum_{r=1}^R K_r$ is the number of extracted modes over all $R$ flow variables, $\boldsymbol{\mu}\in \mathbb{R}^K$ is the process mean vector, and $\boldsymbol{\Sigma}(\cdot, \cdot): \mathbb{R}^p \times \mathbb{R}^p \rightarrow \mathbb{R}^{K \times K}$ its corresponding covariance matrix function defined below. Since the GPs are now time-independent, we present the specification for \textit{fixed} time $t$, and refer to $\boldsymbol{\beta}(t;\bm{c})$, $\boldsymbol{\mu}(t)$ and $\bm{\Sigma}(\cdot,\cdot;t)$ as $\boldsymbol{\beta}(\bm{c})$, $\boldsymbol{\mu}$ and $\bm{\Sigma}(\cdot,\cdot)$ for brevity.
For computational efficiency, the following separable form is assumed for $\boldsymbol{\Sigma}(\cdot,\cdot)$:
\begin{equation}
\boldsymbol{\Sigma}(\bm{c}_1, \bm{c}_2) = r_\tau(\bm{c}_1, \bm{c}_2) \bm{T}, \quad r_\tau(\bm{c}_1, \bm{c}_2) = \prod_{j=1}^p \tau_j^{4(c_{1j} - c_{2j})^2}, \quad \bm{c}_1, \bm{c}_2 \in \mathbb{R}^p, \quad \tau_j \in (0,1),
\label{eq:gpcov}
\end{equation}
where $\bm{T} \in \mathbb{R}^{K \times K}$ is a symmetric, positive definite matrix called the \textit{CPOD covariance matrix}, and $r_\tau(\cdot,\cdot)$ is the correlation function over the design space, parameterized by $\boldsymbol{\tau} = (\tau_1, \cdots, \tau_p)^T \in (0,1)^p$. This can be viewed as a large co-kriging model \citep{SC1991} over the design space, with the multivariate observations being the extracted CPOD coefficients for all flow variables. Note that $r_{\tau}$ is a reparametrization of the squared-exponential (or Gaussian) correlation function $\exp\{-\sum_{j=1}^p \theta_j (c_{1j}-c_{2j})^2\}$, with $\theta_j = -4 \log \tau_j$. In our experience, such a reparametrization allows for a more numerically stable optimization of MLEs, because the optimization domain $\tau_j \in (0,1)$ is now bounded. Our choice of the Gaussian correlation is also well-justified for the application at hand, since fully-developed turbulence dynamics are known to be relatively smooth.
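In code, this reparametrized correlation takes only a few lines. The following Python sketch (with our own array conventions) evaluates $r_\tau$ between two sets of design points; it is reused in the prediction sketch below.
\begin{verbatim}
import numpy as np

def corr_tau(C1, C2, tau):
    # r_tau(c1, c2) = prod_j tau_j^{4 (c1j - c2j)^2}, i.e., the
    # Gaussian correlation with theta_j = -4 log(tau_j).
    # C1: (n1, p), C2: (n2, p), tau: (p,) with entries in (0, 1).
    d2 = (C1[:, None, :] - C2[None, :, :]) ** 2
    return np.exp(4.0 * d2 @ np.log(tau))
\end{verbatim}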
Suppose simulations are run at settings $\bm{c}_1, \cdots, \bm{c}_n$, and assume for now that model parameters are known. Invoking the conditional distribution of the multivariate normal distribution, the time-varying coefficients at a new setting $\bm{c}_{new}$ follow the distribution:
\small
\begin{align}
\begin{split}
\boldsymbol{\beta}(\bm{c}_{new})|\{\boldsymbol{\beta}(\bm{c}_i)\}_{i=1}^n \sim \mathcal{N} \Bigg( &\boldsymbol{\mu} +
\left( \bm{T} \otimes \bm{r}_{\tau,new}\right)^T
\left( \bm{T}^{-1} \otimes \bm{R}_{\tau}^{-1} \right)
\left(\boldsymbol{\beta} -
\bm{1}_n
\otimes
\boldsymbol{\mu}\right), \\
& \quad \bm{T} - \left( \bm{T} \otimes \bm{r}_{\tau,new}\right)^T \left( \bm{T}^{-1} \otimes \bm{R}_{\tau}^{-1} \right) \left( \bm{T} \otimes \bm{r}_{\tau,new} \right) \Bigg),
\label{eq:coefdist}
\end{split}
\end{align}
\normalsize
where $\bm{r}_{\tau,new} = (r_\tau(\bm{c}_{new},\bm{c}_1), \cdots, r_\tau(\bm{c}_{new},\bm{c}_n))^T$ and $\bm{R}_\tau= [r_\tau(\bm{c}_i,\bm{c}_j)]_{i,j=1}^n$. Using algebraic manipulations, the minimum-MSE (MMSE) predictor for $\boldsymbol{\beta}(\bm{c}_{new})|\{\boldsymbol{\beta}(\bm{c}_i)\}_{i=1}^n$ and its corresponding variance are given by
\small
\begin{equation}
\boldsymbol{\hat{\beta}}(\bm{c}_{new}) = \boldsymbol{\mu} +\left( (\bm{r}^T_{\tau,new}\bm{R}_{\tau}^{-1})\otimes\bm{I}_{K}\right)
\left(\boldsymbol{\beta} -
\bm{1}_n
\otimes
\boldsymbol{\mu} \right), \quad \mathbb{V}\{\boldsymbol{\beta}(\bm{c}_{new})|\{\boldsymbol{\beta}(\bm{c}_i)\}^n_{i=1}\} = \left(1 - \bm{r}_{\tau,new}^T \bm{R}_{\tau}^{-1} \bm{r}_{\tau,new} \right) \bm{T},
\label{eq:coefpred}
\end{equation}
\normalsize
where $\bm{I}_K$ and $\bm{1}_n$ denote a $K \times K$ identity matrix and a 1-vector of $n$ elements, respectively. Substituting this into the CPOD expansion \eqref{eq:cklexp}, the predicted $r$-th flow variable becomes:
\begin{equation}
\hat{Y}^{(r)}(\bm{x},t; \bm{c}_{new}) = \sum_{k=1}^{K_r} \hat{\beta}^{(r)}_k(\bm{c}_{new}) \mathcal{M}_{new} \{\phi^{(r)}_k(\bm{x})\},
\label{eq:flowpred}
\end{equation}
with the associated spatio-temporal variance:
\small
\begin{equation}
\mathbb{V} \{{Y}^{(r)}(\bm{x},t; \bm{c}_{new})| \{{Y}^{(r)}(\bm{x},t; \bm{c}_{i})\}_{i=1}^n\} = \sum_{k=1}^{K_r} \mathbb{V}\{{\beta}^{(r)}_k(\bm{c}_{new}) | \{\boldsymbol{\beta}(\bm{c}_i)\}^n_{i=1}\} \left[\mathcal{M}_{new} \{\phi^{(r)}_k(\bm{x})\} \right]^2,
\label{eq:flowvar}
\end{equation}
\normalsize
where $\phi^{(r)}_k(\bm{x})$ is the $k$-th CPOD mode for flow variable $r$. This holds because the CPOD modes for a {fixed} flow variable are orthogonal (see Section 3.1).
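A minimal Python sketch of this predictor follows, assuming known parameters and the \texttt{corr\_tau} function from the earlier sketch; it returns the MMSE predictor and predictive covariance of \eqref{eq:coefpred} at a single time-step.
\begin{verbatim}
import numpy as np

def predict_coefs(B, C, c_new, mu, T, tau):
    # B: (n, K) CPOD coefficients at one time-step (row i is the
    # coefficient vector at setting c_i); mu: (K,) process mean;
    # T: (K, K) CPOD covariance matrix; C: (n, p) design settings.
    R = corr_tau(C, C, tau)                       # (n, n)
    r = corr_tau(C, c_new[None, :], tau)[:, 0]    # (n,)
    w = np.linalg.solve(R, r)                     # R^{-1} r_new
    beta_hat = mu + (B - mu).T @ w                # MMSE predictor
    var = (1.0 - r @ w) * T                       # predictive covariance
    return beta_hat, var
\end{verbatim}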
It is worth noting that, when model parameters are {known}, the MMSE predictor in \eqref{eq:coefpred} from the proposed co-kriging model (which we call $M_A$) is the same as the MMSE predictor from the simpler \textit{independent} GP model with $\bm{T}$ diagonal (which we call $M_0$). One advantage of the co-kriging model $M_A$, however, is that it provides improved UQ compared to the independent model $M_0$, as we show below. Moreover, the MMSE predictor for a derived function $g$ of the flow can be quite different between $M_A$ and $M_0$. This is demonstrated in the study of turbulent kinetic energy in Section 4.3.
\subsubsection{CPOD covariance matrix}
\label{sec:covmat}
\begin{figure}[t]
\centering
\includegraphics[width=0.5\textwidth]{Figures/JasaTflow}
\caption{Illustration of the CPOD covariance matrix $\bm{T}$. Red marks the diagonal within-variable block submatrices, while blue marks the between-variable blocks.}
\label{fig:TT}
\end{figure}
We briefly describe why the CPOD covariance matrix $\bm{T}$ is appealing from both a physical and a statistical perspective. From the underlying governing equations, it is well known that certain dynamic behaviors are strongly \textit{coupled} for different flow variables \citep{Pop2001}. For example, pressure oscillation in the form of acoustic waves within an injector can induce velocity and density fluctuations. In this sense, $\bm{T}$ incorporates knowledge of these physical couplings within the {emulator} itself, with $|\bm{T}_{ij}|\gg 0$ indicating the presence of a significant coupling between modes $i$ and $j$, and vice versa. The covariance selection and estimation of $\bm{T}$ therefore provide a data-driven way to \textit{extract} and \textit{rank} significant flow couplings, which is of interest in itself and can be used to guide further experiments. Note that the block submatrices of $\bm{T}$ corresponding to the {same} flow variable (marked in red in Figure \ref{fig:TT}) should be diagonal, by the orthogonality of CPOD modes.
The CPOD covariance matrix $\bm{T}$ also plays an important {statistical} role in emulation. Specifically, when significant cross-correlations exist between modes (which we know to be true from the flow couplings imposed by governing equations), the incorporation of this correlation structure within our model ought to provide a more {accurate} quantification of uncertainty. This is indeed true, and is made precise by the following theorem.
\begin{theorem}
Consider the two models $M_0: \boldsymbol{\beta}(\bm{c}) \in \mathbb{R}^K \sim GP\{\boldsymbol{\mu}, \boldsymbol{\Sigma}^{(0)}\}$ and $M_A: \boldsymbol{\beta}(\bm{c}) \sim GP\{\boldsymbol{\mu}, \boldsymbol{\Sigma}^{(A)}\}$, where $\boldsymbol{\Sigma}^{(0)}(\bm{c}_1, \bm{c}_2)= r_\tau(\bm{c}_1, \bm{c}_2) \bm{D}$ and $\boldsymbol{\Sigma}^{(A)}(\bm{c}_1, \bm{c}_2)= r_\tau(\bm{c}_1, \bm{c}_2) \bm{T}$ with $\bm{T} \succeq 0$ and $\bm{D} = \textup{diag}\{\bm{T}\}$. Let $C_0$ be the $100(1-\alpha)\%$ highest-density confidence region (HDCR, see \citealp{Hyn1996}) of $\boldsymbol{\beta}(\bm{c}_{new})|\{\boldsymbol{\beta}(\bm{c}_i)\}_{i=1}^n$ under $M_0$. Suppose $\lambda_{min}(\bm{T}^{1/2}\bm{D}^{-1}\bm{T}^{1/2}) > 1$. Then:
\[\mathbb{P}\left\{\boldsymbol{\beta}(\bm{c}_{new}) \in C_0|M_A, \{\boldsymbol{\beta}(\bm{c}_i)\}_{i=1}^n \right\} < 1-\alpha.\]
\label{thm:uq}
\end{theorem}
\begin{proof}
For brevity, let $\boldsymbol{\beta} \equiv \boldsymbol{\beta}(\bm{c}_{new})|\{\boldsymbol{\beta}(\bm{c}_i)\}_{i=1}^n$, and let $\hat{\boldsymbol{\beta}} \equiv \mathbb{E}[\boldsymbol{\beta}(\bm{c}_{new})|\{\boldsymbol{\beta}(\bm{c}_i)\}_{i=1}^n]$. Letting $\bm{Z} \sim \mathcal{N}(\bm{0},\bm{I}_K)$, it is easy to show that
\[\boldsymbol{\beta} - \hat{\boldsymbol{\beta}} | M_0 \sim \mathcal{N}\left\{\bm{0}, \left(1 - \bm{r}_{\tau,new}^T \bm{R}_{\tau}^{-1} \bm{r}_{\tau,new} \right) \bm{D} \right\} \stackrel{d}{=} \sqrt{1 - \bm{r}_{\tau,new}^T \bm{R}_{\tau}^{-1} \bm{r}_{\tau,new} } \bm{D}^{1/2} \bm{Z}, \quad \text{and}\]
\[\boldsymbol{\beta} - \hat{\boldsymbol{\beta}} | M_A \sim \mathcal{N}\left\{\bm{0}, \left(1 - \bm{r}_{\tau,new}^T \bm{R}_{\tau}^{-1} \bm{r}_{\tau,new} \right) \bm{T} \right\} \stackrel{d}{=} \sqrt{1 - \bm{r}_{\tau,new}^T \bm{R}_{\tau}^{-1} \bm{r}_{\tau,new} } \bm{T}^{1/2} \bm{Z}.\]
Under the independent model $M_0$, the $100(1-\alpha)\%$ HDCR becomes:
\[C_0 = \{\boldsymbol{\xi} \; : \; \left(1 - \bm{r}_{\tau,new}^T \bm{R}_{\tau}^{-1} \bm{r}_{\tau,new} \right)^{-1} (\boldsymbol{\xi} - \hat{\boldsymbol{\beta}})^T \bm{D}^{-1} (\boldsymbol{\xi} - \hat{\boldsymbol{\beta}}) \leq \chi^2_K(1-\alpha) \},\]
where $\chi^2_K(1-\alpha)$ denotes the $(1-\alpha)$-quantile of a $\chi^2$-distribution with $K$ degrees of freedom. Now, let $\lambda_{min}$ denote the minimum eigenvalue of $\bm{T}^{1/2} \bm{D}^{-1} \bm{T}^{1/2}$. It follows that
\begin{align*}
\mathbb{P}\left(\boldsymbol{\beta} \in C_0|M_A\right) &= \mathbb{P}\left\{ (\boldsymbol{\beta} - \hat{\boldsymbol{\beta}})^T \bm{D}^{-1} (\boldsymbol{\beta} - \hat{\boldsymbol{\beta}}) \leq \left(1 - \bm{r}_{\tau,new}^T \bm{R}_{\tau}^{-1} \bm{r}_{\tau,new} \right) \chi^2_K(1-\alpha) \Big| M_A \right\}\\
&= \mathbb{P}\left\{ \bm{Z}^T (\bm{T}^{1/2} \bm{D}^{-1} \bm{T}^{1/2}) \bm{Z} \leq \chi^2_K(1-\alpha) \right\}\\
&\leq \mathbb{P}\left\{ \bm{Z}^T \bm{Z} \leq \lambda_{min}^{-1} \chi^2_K(1-\alpha) \right\},
\end{align*}
since $\bm{Z}^T (\bm{T}^{1/2} \bm{D}^{-1} \bm{T}^{1/2}) \bm{Z} \geq \lambda_{min}\bm{Z}^T \bm{Z}$ almost surely. The asserted result follows because $\mathbb{P}\left\{ \bm{Z}^T \bm{Z} \leq \lambda_{min}^{-1} \chi^2_K(1-\alpha) \right\}$ is strictly less than $1-\alpha$ when $\lambda_{min} > 1$.
\end{proof}
In words, this theorem quantifies the effect on coverage probability when the true co-kriging model $M_A$, which accounts for cross-correlations between modes, is misspecified as $M_0$, the independent model ignoring such cross-correlations. Note that an increase in the number of significant {non-zero cross-correlations} in $\bm{T}$ causes $\bm{T}^{1/2} \bm{D}^{-1} \bm{T}^{1/2}$ to deviate further from the identity, which in turn may increase $\lambda_{min}$. Given enough such correlations, Theorem \ref{thm:uq} shows that the coverage probability from the misspecified model $M_0$ is less than the desired $100(1-\alpha)\%$ rate. In the present case, this suggests that when there are enough significant {flow couplings}, the co-kriging model $M_A$ provides more {accurate} UQ for the \textit{joint} prediction of flow variables when compared to the misspecified, independent model $M_0$. This improvement also holds for functions of flow variables (as we demonstrate later in Section \ref{sec:result}), although a formal argument is not presented here.
It is worth mentioning here an important trade-off for co-kriging models in general, and why the proposed model is appropriate for the application at hand in view of such a trade-off. It is known from the spatial statistics literature (see, e.g., \citealp{Bea2014, Mea2016}) that when the matrix $\bm{T}$ exhibits strong correlations and can be estimated well, one enjoys improved predictive performance through a co-kriging model (this is formally shown for the current model in Theorem \ref{thm:uq}). However, when such correlations are absent or cannot be estimated well, a co-kriging model can yield poorer performance than an independent model! We claim that the former is true for the current application. First, the differential equations governing the simulation procedure explicitly impose strong dependencies between flow variables, so we know \textit{a priori} the existence of strong correlations in $\bm{T}$. Second, we will show later in Section \ref{sec:correx} that the dominant correlations selected in $\bm{T}$ are physically interpretable in terms of fluid mechanic principles and conservation laws, which provides strong evidence for the correct estimation of $\bm{T}$.
One issue with fitting $M_A$ is that there are many more parameters to estimate. Specifically, since the CPOD covariance matrix $\bm{T}$ is $K \times K$ dimensional, there is {insufficient} data for estimating all entries in $\bm{T}$ using the extracted coefficients from the CPOD expansion. One solution is to impose the sparsity constraint $\|\bm{T}^{-1}\|_1 \leq \gamma$, where $\|\bm{A}\|_1 = \sum_{k=1}^K \sum_{l=1}^K |A_{kl}|$ is the element-wise $L_1$ norm. For a small choice of $\gamma$, this forces nearly all entries in $\bm{T}^{-1}$ to be zero, thus permitting consistent estimation of the {few significant} correlations. Sparsity can also be justified from an engineering perspective, because the number of significant couplings is known to be small from flow physics. $\gamma$ can also be adjusted to extract a {pre-specified} number of flow couplings, which is appealing from an engineering point-of-view. The justification for sparsifying $\bm{T}^{-1}$ instead of $\bm{T}$ is largely computational, because, algorithmically, the former problem can be handled much more efficiently than the latter using the graphical LASSO (\citealp{Fea2008}; see also \citealp{BT2011}). Such efficiency is crucial here, since GP parameters need to be jointly estimated as well.
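As a simplified sketch of this step, the graphical LASSO of \cite{Fea2008} is available in standard software. The snippet below uses \texttt{scikit-learn} with a pseudo sample covariance built from the centered coefficients; the parameter estimation algorithm in the next subsection instead embeds its own blockwise variant of this update inside the overall estimation loop.
\begin{verbatim}
import numpy as np
from sklearn.covariance import graphical_lasso

def sparse_T(B_centered, R_inv, lam):
    # B_centered: (n, K) centered CPOD coefficients; R_inv: inverse
    # of the design correlation matrix R_tau; lam: sparsity penalty.
    n = B_centered.shape[0]
    S = B_centered.T @ R_inv @ B_centered / n    # pseudo sample covariance
    T_hat, T_inv = graphical_lasso(S, alpha=lam) # L1 penalty on T^{-1}
    return T_hat, T_inv
\end{verbatim}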
Although the proposed model is {similar} to the one developed in \cite{Qea2008} for emulating qualitative factors, there are three key distinctions. First, our model allows for {different} process variances for each coefficient, whereas their approach restricts all coefficients to have {equal} variances. Second, our model incorporates sparsity on the CPOD covariance matrix, an assumption necessary from a {statistical} point-of-view and appealing from a physics extraction perspective. Lastly, the algorithm proposed below can estimate $\bm{T}$ more efficiently than the semi-definite programming approach in \cite{Qea2008}.
\subsection{Parameter estimation}
To estimate the model parameters $\boldsymbol{\mu}$, $\bm{T}$ and $\boldsymbol{\tau}$, maximum-likelihood estimation (MLE) is used in favor of a Bayesian implementation. The primary reason for this choice is computational efficiency: for the proposed emulator to be used as a fast investigative tool for surveying the design space, it should generate flow predictions much quicker than a direct LES simulation, which requires several days of parallelized computation.
From \eqref{eq:gpcoef} and \eqref{eq:gpcov}, the maximum-likelihood formulation can be written as $\argmin_{\boldsymbol{\mu}, \bm{T}, \boldsymbol{\tau}} \allowbreak l_\lambda(\boldsymbol{\mu}, \bm{T}, \boldsymbol{\tau})$, where $l_\lambda(\boldsymbol{\mu}, \bm{T}, \boldsymbol{\tau})$ is the \textit{penalized} negative log-likelihood:
\small
\begin{equation}
l_\lambda(\boldsymbol{\mu}, \bm{T}, \boldsymbol{\tau}) = n\log \det\bm{T} + K\log \det \bm{R}_\tau+(\bm{B}-\bm{1}_n \otimes \boldsymbol{\mu})^T [ \bm{R}_\tau^{-1}\otimes{\bm{T}}^{-1}](\bm{B}-\bm{1}_n \otimes \boldsymbol{\mu}) + \lambda \|\bm{T}^{-1}\|_1.
\label{eq:nll}
\end{equation}
\normalsize
Note that, because the formulation is convex in $\bm{T}^{-1}$, the sparsity constraint $\|\bm{T}^{-1}\|_1 \leq \gamma$ has been incorporated into the likelihood through the penalty $\lambda \|\bm{T}^{-1}\|_1$ using strong duality. Similar to $\gamma$, a {larger} $\lambda$ results in a smaller number of selected correlations, and vice versa. The tuning method for $\lambda$ should depend on the desired end-goal. For example, if predictive accuracy is the primary goal, then $\lambda$ should be tuned using cross-validation techniques \citep{Fea2001}. However, if correlation extraction is desired or prior information is available on flow couplings, then $\lambda$ should be set so that a {fixed} (preset) number of correlations is extracted. We discuss this further in Section \ref{sec:result}.
\begin{algorithm}[t]
\caption{BCD algorithm for maximum likelihood estimation}
\label{alg:mle}
\begin{algorithmic}[1]
\small
\ParFor{each time-step $t = 1, \cdots, T$}
\State $\bullet$ \; Set initial values $\boldsymbol{\mu} \leftarrow \bm{0}_K$, $\bm{T} \leftarrow \bm{I}_{K}$ and $\boldsymbol{\tau} \leftarrow \bm{1}_p$, and set $\bm{B} \leftarrow (\boldsymbol{\beta}(\bm{c}_1), \cdots, \boldsymbol{\beta}(\bm{c}_n))^T$
\Repeat\\
\quad \quad \quad \underline{Optimizing $\bm{T}$}:
\State $\bullet$ \; Set $\bm{W} \leftarrow \frac{1}{n}{(\bm{B} - \bm{1}_n \otimes \boldsymbol{\mu}^T )^T \bm{R}_\tau^{-1}(\bm{B} - \bm{1}_n \otimes \boldsymbol{\mu}^T )} + \lambda \cdot \bm{I}_{K}$
\Repeat
\For{$j = 1, \cdots, K$}
\State $\bullet$ \; Solve $\tilde{\boldsymbol{\delta}} = \argmin_{\boldsymbol{\delta}} \left\{ \frac{1}{2} \|\bm{W}_{-j, -j}^{1/2}\boldsymbol{\delta}\|_2^2 + \lambda \|\boldsymbol{\delta}\|_1 \right\}$ using LASSO
\State $\bullet$ \; Update $\bm{W}_{-j,j} \leftarrow \bm{W}_{-j,-j} \tilde{\boldsymbol{\delta}}$ and $\bm{W}_{j,-j}^T \leftarrow \bm{W}_{-j,-j} \tilde{\boldsymbol{\delta}}$
\EndFor
\Until{$\bm{W}$ converges}
\State $\bullet$ \; Update $\bm{T} \leftarrow \bm{W}^{-1}$\\
\quad \quad \quad \underline{Optimizing $\boldsymbol{\mu}$ and $\boldsymbol{\tau}$}:
\State $\bullet$ \; Update $\boldsymbol{\tau} \leftarrow \argmin_{\tau} l_\lambda(\boldsymbol{\mu}_{\boldsymbol{\tau}}, \bm{T}, \boldsymbol{\tau})$ with L-BFGS, with $\boldsymbol{\mu}_{\boldsymbol{{\tau}}} = (\bm{1}_n^T \bm{R}_{{\tau}}^{-1}\bm{1}_n)^{-1} (\bm{1}_n^T \bm{R}_{{\tau}}^{-1} \bm{B})$
\State $\bullet$ \; Update $\boldsymbol{\mu} \leftarrow \boldsymbol{\mu}_{\boldsymbol{\tau}}$
\Until{$\boldsymbol{\mu}$, $\bm{T}$ and $\boldsymbol{\tau}$ converge}
\EndParFor
\State $\bullet$ \; \Return $\boldsymbol{\mu}(t)$, $\bm{T}(t)$ and $\boldsymbol{\tau}(t)$
\normalsize
\end{algorithmic}
\end{algorithm}
Assume for now a fixed penalty $\lambda>0$. To compute the MLEs in \eqref{eq:nll}, we propose the following \textit{blockwise coordinate descent} (BCD) algorithm. First, assign initial values for $\boldsymbol{\mu}$, $\bm{T}$ and $\boldsymbol{\tau}$. Next, iterate the following two updates until parameters converge: (a) for fixed GP parameters $\boldsymbol{\mu}$ and $\boldsymbol{\tau}$, optimize for $\bm{T}$ in \eqref{eq:nll}; and (b) for fixed covariance matrix $\bm{T}$, optimize for $\boldsymbol{\mu}$ and $\boldsymbol{\tau}$ in \eqref{eq:nll}. With the use of the graphical LASSO algorithm from \cite{Fea2008}, the first update can be computed efficiently. The second update can be computed using non-linear optimization techniques on $\boldsymbol{\tau}$ by means of a closed-form expression for $\boldsymbol{\mu}$. In our implementation, this is performed using the L-BFGS algorithm \citep{LN1989}, which offers a super-linear convergence rate without the cumbersome evaluation and manipulation of the Hessian matrix \citep{NW2006}. The following theorem guarantees that the proposed algorithm converges to a stationary point of \eqref{eq:nll} (see Appendix B for proof).
\begin{theorem}
The BCD scheme in Algorithm \ref{alg:mle} converges to some solution $(\hat{\boldsymbol{\mu}},\hat{\bm{T}},\hat{\boldsymbol{\tau}})$ which is stationary for the penalized log-likelihood $l_\lambda(\boldsymbol{\mu}, \bm{T}, \boldsymbol{\tau})$.
\label{thm:conv}
\end{theorem}
It is worth noting that the proposed algorithm does not provide global optimization. This is not surprising, because the log-likelihood $l_\lambda$ is non-convex in $\boldsymbol{\tau}$. To this end, we run multiple threads of Algorithm \ref{alg:mle} in parallel, each with a different initial point $\boldsymbol{\tau}_0$ from a large space-filling design on $[10^{-3},1-10^{-3}]^p$, then choose the converged parameter setting which yields the smallest objective value in \eqref{eq:nll}. In our experience, this heuristic performs quite well in practice.
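For concreteness, one thread of the $(\boldsymbol{\mu}, \boldsymbol{\tau})$-update can be sketched in Python as follows, profiling out $\boldsymbol{\mu}$ in closed form and minimizing the resulting objective with L-BFGS-B from \texttt{scipy}; terms not depending on $\boldsymbol{\tau}$ are dropped, $\bm{T}^{-1}$ is held fixed, and the \texttt{corr\_tau} function from the earlier sketch is assumed.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

def update_tau(B, C, T_inv, tau0, eps=1e-3):
    # One (mu, tau)-update of the BCD algorithm for fixed T.
    n, K = B.shape
    def nll(tau):
        R = corr_tau(C, C, tau)
        R_inv = np.linalg.inv(R)
        one = np.ones(n)
        mu = (one @ R_inv @ B) / (one @ R_inv @ one)  # closed-form mean
        E = B - mu                                    # centered coefficients
        _, logdetR = np.linalg.slogdet(R)
        # quadratic form of the likelihood: tr(R^{-1} E T^{-1} E^T)
        return K * logdetR + np.trace(R_inv @ E @ T_inv @ E.T)
    return minimize(nll, tau0, method="L-BFGS-B",
                    bounds=[(eps, 1 - eps)] * len(tau0)).x
\end{verbatim}
Each restart of this routine corresponds to one initial point $\boldsymbol{\tau}_0$ from the space-filling design described above.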
\section{Emulation results}\label{sec:result}
In this section, we present in four parts the emulation performance of the proposed model, when trained using the database of $n=30$ flow simulations described in Section \ref{sec:data}. First, we briefly introduce key flow characteristics for a swirl injector, and physically interpret the flow structures extracted from CPOD. Second, we compare the numerical accuracy of our flow prediction with a validation simulation at a new injector geometry. Third, we provide a spatio-temporal quantification of uncertainty for our prediction, and discuss its physical interpretability. Lastly, we summarize the extracted flow couplings from $\mathbf{T}$, and explain why these are both intuitive and intriguing from a flow physics perspective.
\subsection{Visualization and CPOD modes}
We employ three flow snapshots of circumferential velocity (shown in Figure \ref{fig:inst}) to introduce key flow characteristics for a swirl injector: the fluid transition region, spreading angle, surface wave propagation and center recirculation. These characteristics will be used for assessing emulator accuracy, UQ and extracted flow physics.
\begin{figure}[!t]
\begin{minipage}{0.48\textwidth}
\centering
\includegraphics[width=\linewidth]{Figures/instantaneous_plot}
{\caption{Flow snapshots of circumferential velocity at $t$ = 6, 12 and 18 ms.}
\label{fig:inst}}
\end{minipage}
\hfill
\begin{minipage}{0.48\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures/Energy}
{\caption{Energy distribution of CPOD modes for circumferential velocity flow.}
\label{fig:energy}}
\includegraphics[width=\linewidth]{Figures/Mode}
\caption{The leading two spatial CPOD modes for circumferential velocity flow.}
\label{fig:PODmodes}
\end{minipage}
\end{figure}
\begin{itemize}
\item \textit{Fluid transition region:} The fluid transition region is the region which connects compressed-liquid near the wall (colored blue in Figure \ref{fig:inst}) to light-gas (colored red) near the centerline at supercritical pressure \citep{WY2016}. This region is crucial for analyzing injector flow characteristics, as it provides the instability propagation and feedback mechanisms between the injector inlet and exit. An important emulation goal is to accurately predict both the {spatial location} of this region and its dynamics, because such information can be used to assess feedback behavior at new geometries.
\item \textit{Spreading angle:} The spreading angle $\alpha$ (along with the LOX film thickness $h$) is an important physical metric for measuring the performance of a swirl injector. A larger $\alpha$ and smaller $h$ indicate better performance of injector atomization and breakup processes. The spreading angle can be seen in Figure \ref{fig:inst} from the blue LOX flow at injector exit (see Figure \ref{fig:inj} for details).
\item \textit{Surface wave propagation:} Surface waves, which transfer energy through the fluid medium, manifest themselves as wavy structures in the flowfield. These waves allow for propagation of flow instabilities between upstream and downstream regions of the injector, and can be seen in the first snapshot of Figure \ref{fig:inst} along the LOX film boundary.
\item \textit{Center recirculation:} Center recirculation, another key instability structure, is the circular flow of a fluid around a rotational axis (this circular region is known as the {vortex core}). From the third snapshot in Figure \ref{fig:inst}, a large vortex core (in white) can be seen at the injector exit, which is expected because of the sudden expansion of the LOX stream and the subsequent generation of an adverse pressure gradient.
\end{itemize}
Regarding the CPOD expansion, Figure \ref{fig:energy} shows the energy ratio captured using the leading $M$ terms in \eqref{eq:cklexp} for circumferential velocity, with this ratio defined as:
\[\xi(M) = \frac{\sum_{k=1}^M \sum_{i=1}^n \int \int \left[ \beta_k(t; \bm{c}_i) \mathcal{M}_i\{\phi_k(\bm{x})\}\right]^2 \; d\bm{x} \; dt}{\sum_{k=1}^\infty \sum_{i=1}^n \int \int \left[ \beta_k(t; \bm{c}_i) \mathcal{M}_i\{\phi_k(\bm{x})\} \right]^2 \; d\bm{x} \; dt}.\]
Only $M=10$ and $M=45$ modes are needed to capture 90\% and 99\% of the total flow energy over \textit{all} $n=30$ simulation cases, respectively. Compared to a similar experiment in \cite{ZY2008}, which required around $M=20$ modes to capture 99\% flow energy for a \textit{single} geometry, the current results are very promising, and show that the CPOD gives a reasonably compact representation. This also gives empirical evidence for the linearity assumption used for computational efficiency. Similar results hold for the other flow variables as well, but are not reported for brevity. Additionally, the empirical study in \cite{ZY2008} showed that the POD modes capturing the top 95\% energy have direct physical interpretability in terms of known flow instabilities. To account for these (and perhaps other) instability structures in the model, we set the truncation limit $K_r$ as the smallest value of $M$ satisfying $\xi(M) \geq 99\%$, which appears to provide a good balance between predictive accuracy and computational efficiency.
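With orthonormal CPOD modes, $\xi(M)$ reduces to a ratio of cumulative squared singular values of the rescaled snapshot matrix, so the truncation level $K_r$ can be read off directly; the sketch below makes this assumption explicit.
\begin{verbatim}
import numpy as np

def truncation_level(s, threshold=0.99):
    # s: singular values in descending order; returns the smallest M
    # whose leading modes capture `threshold` of the total energy.
    energy = np.cumsum(s ** 2) / np.sum(s ** 2)
    return int(np.searchsorted(energy, threshold) + 1)
\end{verbatim}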
The extracted CPOD terms can also be interpreted in terms of flow physics. We illustrate this using the leading two CPOD terms for circumferential velocity, whose spatial distributions are shown in Figure \ref{fig:PODmodes}. Upon an inspection of these spatial plots and their corresponding spectral frequencies, both modes can be identified as hydrodynamic instabilities in the form of longitudinal waves propagating along the LOX film boundary. Specifically, the first mode corresponds to the first harmonic mode for this wave, and the second mode represents the second harmonic and shows the existence of an antinode in wave propagation. As we show in Section \ref{sec:correx}, the interpretability of CPOD modes allows the proposed model to extract physically meaningful couplings for further analysis.
\begin{figure}[!tp]
\begin{minipage}{0.48\textwidth}
\centering
\includegraphics[width=\linewidth]{Figures/pred}
\vspace{-1cm}
\caption{Simulated and emulated temperature flow at $t=21.75$ ms, $23.25$ ms and $24.75$ ms.}
\label{fig:comp}
\end{minipage}
\hfill
\begin{minipage}{0.48\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures/MRE_mod}
\caption{MRE at injector inlet (top), fluid transition region (middle) and injector exit (bottom).}
\label{fig:mae}
\end{minipage}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=0.5\textwidth]{Figures/PSD_probe}
\caption{Injector subregions (dotted in blue) and probe locations (circled in white).}
\label{fig:probes}
\end{figure}
\subsection{Emulation accuracy}
To ensure that our emulator model provides accurate flow predictions, we perform a validation simulation at the new geometric setting: $L=22\text{ mm}$, $R_n=3.215\text{ mm}$, $\Delta L=3.417\text{ mm}$, $\theta = 58.217^\circ$ and $\delta=0.576\text{ mm}$. This new geometry provides a 10\% variation on an existing injector used in the RD-0110 liquid-fuel engine \citep{Yan1995}. Since the goal is {predictive accuracy}, the sparsity penalty $\lambda$ in \eqref{eq:nll} is tuned using 5-fold cross-validation \citep{Fea2001}. We provide below a qualitative comparison of the predicted and simulated flows, and then discuss several metrics for quantifying emulation accuracy.
Figure \ref{fig:comp} shows three snapshots of the simulated and predicted fully-developed flows for temperature, in intervals of $1.5$ ms starting at $21.75$ ms. From visual inspection, the predicted flow closely mimics the simulated flow on several performance metrics, including the fluid transition region, film thickness and spreading angle. The propagation of surface waves is also captured quite well within the injector, with key downstream recirculation zones correctly identified in the prediction as well. This comparison illustrates the effectiveness of the proposed emulator in capturing key flow physics, and demonstrates the importance of incorporating known flow properties of the fluid as assumptions in the statistical model.
Next, three metrics are used to quantify emulation accuracy. The first metric, which reports the mean relative error in important sub-regions of the injector, measures the \textit{spatial} aspect of prediction accuracy. The second metric, which inspects spectral similarities between the simulated and predicted flows, measures \textit{temporal} accuracy. The last metric investigates how well the predicted flow captures the underlying flow physics of an injector.
For {spatial} accuracy, the following mean relative error (MRE) metric is used:
\[
\text{MRE}(t;\mathcal{S})=\frac{\int_{\mathcal{S}}|Y(\mathbf{x},t;\mathbf{c}_{new})-\hat{Y}(\mathbf{x},t;\mathbf{c}_{new})| \; d \bm{x}}{\int_{\mathcal{S}}|Y(\mathbf{x},t;\mathbf{c}_{new})| \; d\bm{x}}\times 100\%,
\]
where $Y(\mathbf{x},t;\mathbf{c}_{new})$ is the simulated flow at setting $\bm{c}_{new}$, and $\hat{Y}(\mathbf{x},t;\mathbf{c}_{new})$ is the flow predictor in \eqref{eq:flowpred} (for brevity, the superscript for flow variable $r$ is omitted here). In words, MRE($t;\mathcal{S}$) provides a measure of emulation accuracy within a desired sub-region $\mathcal{S}$ at time $t$, relative to the overall flow energy in $\mathcal{S}$. Since flow behaviors within the injector inlet, fluid transition region and injector exit (outlined in Figure \ref{fig:probes}) are crucial for characterizing injector instability, we investigate the MRE specifically for these three sub-regions. Figure \ref{fig:mae} plots MRE($t;\mathcal{S}$) for $t = 15 - 30$ ms, when the flow has fully developed. For all three sub-regions, the relative error is within a tolerance level of 10\% for nearly all time-steps, which is very good from an engineering perspective.
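On the discrete simulation grid, MRE$(t;\mathcal{S})$ can be approximated by sums over the grid points in $\mathcal{S}$; the Python sketch below assumes a roughly uniform grid, so that cell areas cancel in the ratio.
\begin{verbatim}
import numpy as np

def mre(Y_sim, Y_pred, mask):
    # Y_sim, Y_pred: (G,) simulated and predicted snapshots at time t;
    # mask: boolean (G,) selecting the grid points in subregion S.
    num = np.abs(Y_sim[mask] - Y_pred[mask]).sum()
    den = np.abs(Y_sim[mask]).sum()
    return 100.0 * num / den
\end{verbatim}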
\begin{figure}[!t]
\centering
\includegraphics[width=\linewidth]{Figures/PSD}
\caption{PSD spectra for pressure flow at probes 1, 3, 5 and 7 (see Figure \ref{fig:probes}).}
\label{fig:PSD}
\end{figure}
To assess {temporal} accuracy, we conduct a power spectral density (PSD) analysis of predicted and simulated pressure flows at eight specific probes along the region of surface wave propagation (see Figure \ref{fig:probes}). This analysis is often performed as an empirical tool for assessing injector stability (see \citealp{ZY2008}), because surface waves allow for feedback loops between upstream and downstream oscillations \citep{bazarov1998liquid}. Figure \ref{fig:PSD} shows the PSD spectra for the predicted and simulated flow at four of these probes. Visually, the spectra look very similar, both at low and high frequencies, with {peaks} nearly identical for the predicted and simulated flow. Such peaks are highly useful for analyzing flow physics, because they can be used to {identify} physical properties (e.g., hydrodynamic, acoustic, etc.) of dominant instability structures. In this sense, the proposed emulator does an excellent job in mimicking important {physics} of the simulated flow.
\begin{figure}[!t]
\begin{minipage}{0.48\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures/diff_uq_500}
\caption{Absolute prediction error (top) and pointwise CI width (bottom) for $x$-velocity at $t=15$ ms.}
\label{fig:UQcomparison}
\end{minipage}
\hfill
\begin{minipage}{0.48\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures/UQ_u.png}
\caption{CI width of $x$-velocity at probe 1.}
\label{fig:UQtemporal}
\end{minipage}
\end{figure}
Finally, we investigate the film thickness $h$ and spreading angle $\alpha$, which are key metrics for injector performance. Since both of these metrics are computed using spatial gradients of flow variables, an accurate emulation of these measures suggests accurate flow emulation as well. For the validation setting, the simulated (predicted) flow has a film thickness of 0.47 mm (0.42 mm) and a spreading angle of 103.63$^\circ$ (107.36$^\circ$), averaged over the fully-developed timeframe from $t=15 - 30$ ms. This corresponds to relative errors of 10.6\% and 3.60\%, respectively, and is within the desired error tolerance from an engineering perspective.
\subsection{Uncertainty quantification}
For computer experiments, the quantification of predictive uncertainty can be as important as the prediction itself. To this end, we provide a spatio-temporal representation of this UQ, and show that it has a useful and appealing physical interpretation. For spatial UQ, the top plot of Figure \ref{fig:UQcomparison} shows the {one-sided width} of the 80\% pointwise confidence interval (CI) from \eqref{eq:flowvar} for $x$-velocity at $t=15$ ms. It can be seen that the emulator is most certain in predicting near the inlet and centerline of the injector, but shows high predictive uncertainty at the three gaseous cores downstream (in green). This makes physical sense, because these cores correspond to flow recirculation vortices, and therefore exhibit highly unstable flow behavior. From the bottom plot of Figure \ref{fig:UQcomparison}, which shows the absolute emulation error of the same flow, the pointwise confidence band not only covers the realized prediction error, but roughly mimics its spatial distribution as well.
For temporal UQ, Figure \ref{fig:UQtemporal} shows the same one-sided CI width at probe 1 (see Figure \ref{fig:probes}). We see that this temporal uncertainty is relatively steady over $t$, except for two abrupt spikes at time-steps around 300 and 800. These two spikes have an appealing physical interpretation: the first indicates a {flow displacement} effect of the central vortex core, whereas the second can be attributed to the {boundary development} of the same core. This again demonstrates the usefulness of UQ not only as a measure of predictive uncertainty, but also as a means for extracting useful flow physics without the need for expensive simulations.
To illustrate the improved UQ of the proposed model (see Theorem \ref{thm:uq}), we use a derived quantity called {turbulent kinetic energy} (TKE). TKE is typically defined as:
\small
\begin{equation}
\kappa(\bm{x},t) = \frac{1}{2}\sum_{r \in \{u,v,w\}}\left\{{Y}^{(r)}(\mathbf{x},t)-\bar{{Y}}^{(r)}(\mathbf{x})\right\}^2,
\label{eq:kedef}
\end{equation}
\normalsize
where $Y^{(u)}(\mathbf{x},t),Y^{(v)}(\mathbf{x},t)$ and $Y^{(w)}(\mathbf{x},t)$ are flows for $x$-, $y$- and circumferential velocities, respectively, with $\bar{Y}^{(u)}(\mathbf{x}),\bar{Y}^{(v)}(\mathbf{x})$ and $\bar{Y}^{(w)}(\mathbf{x})$ their corresponding time-averages. Such a quantity is particularly important for studying turbulent instabilities, because it measures {fluid rotation energy} within eddies and vortices.
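Given arrays of velocity snapshots, \eqref{eq:kedef} is straightforward to evaluate; the following sketch computes TKE with time-averages taken over the fully-developed timeframe (the array layout is our assumption).
\begin{verbatim}
import numpy as np

def tke(u, v, w):
    # u, v, w: (T, G) arrays of x-, y- and circumferential velocity
    # snapshots; time-averages are taken over axis 0.
    return 0.5 * sum((q - q.mean(axis=0)) ** 2 for q in (u, v, w))
\end{verbatim}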
For the sake of simplicity, assume that (a) the time-averages $\bar{Y}^{(u)}(\mathbf{x}),\bar{Y}^{(v)}(\mathbf{x})$ and $\bar{Y}^{(w)}(\mathbf{x})$ are fixed, and (b) the parameters $(\boldsymbol{\mu}, \bm{T}, \boldsymbol{\tau})$ are known. The following theorem provides the MMSE predictor and pointwise confidence interval for $\kappa(\bm{x},t)$ (proof in Appendix C).
\begin{theorem}
For fixed $\bm{x}$ and $t$, the MMSE predictor of $\kappa(\bm{x},t)$ at a new setting $\bm{c}_{new}$ is
\small
\begin{equation}
\hat{\kappa}(\bm{x},t) = \frac{1}{2}\sum_{r \in \{u,v,w\}}\left\{\hat{Y}^{(r)}(\mathbf{x},t)-\bar{{Y}}^{(r)}(\mathbf{x})\right\}^2 + \text{tr}\{\Phi(\bm{x},t)\},
\label{eq:kepred}
\end{equation}
\normalsize
where $\hat{Y}^{(u)}(\mathbf{x},t),\hat{Y}^{(v)}(\mathbf{x},t)$ and $\hat{Y}^{(w)}(\mathbf{x},t)$ are {predicted} flows for $x$-, $y$- and circumferential velocities from \eqref{eq:flowpred}, and $\Phi(\bm{x},t)$ is defined in (C.1) of Appendix C. Moreover, $\hat{\kappa}(\bm{x},t)$ is distributed as a weighted sum of non-central $\chi^2$ random variables, with an explicit expression given in (C.3) of Appendix C.
\label{thm:kinuq}
\end{theorem}
\noindent In practice, plug-in estimates are used for both time-averaged flows and model parameters.
\begin{figure}[t]
\centering
\includegraphics[width=\textwidth]{Figures/TKE_horiz}
\caption{Predicted TKE and lower 90\% confidence band for $M_A$ and $M_0$ at probe 8.}
\label{fig:ke}
\end{figure}
With this in hand, we compare the prediction and UQ of TKE from the proposed model $M_A$ and the independent model $M_0$ (see Theorem \ref{thm:uq}) with the simulated TKE at the validation setting. Figure \ref{fig:ke} shows the predicted TKE $\hat{\kappa}(\bm{x},t)$ at probe 8 over the fully-developed time-frame of $t= 15 - 30$ ms, along with the 90\% lower pointwise confidence band constructed using Theorem \ref{thm:kinuq}. Visually, the proposed model $M_A$ provides a better prediction of the simulated TKE than the independent model $M_0$. As for the confidence bands, the average coverage rate for $M_A$ over the fully-developed time-frame (85.0\%) is much closer to the desired nominal rate of 90\% than that for $M_0$ (73.8\%). The poor coverage rate for the independent model can be seen in the right plot of Figure \ref{fig:ke}, where the simulated TKE often dips below the lower confidence band. By incorporating prior knowledge of flow couplings, the proposed model provides improved predictive performance and uncertainty quantification.
\begin{table}[t]
\begin{minipage}{0.48\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures/Tmatrix2.pdf}
\captionof{figure}{Graph of selected flow couplings from $\bm{T}$. Nodes represent CPOD modes, and edges represent non-zero correlations.}
\label{fig:T}
\end{minipage}
\hfill
\begin{minipage}{0.48\textwidth}
\begin{tabular}{cc}
\toprule
\text{\bf{Step}} & \text{\bf{Comp. time (mins)}}\\
\midrule
\text{CPOD extraction} & 33.91\\
\text{Parameter estimation} & 11.31\\
\text{Flow prediction} & 20.19\\
\midrule
\text{Total} & 65.41\\
\bottomrule
\end{tabular}
\caption{Computation time for each step of the proposed emulator, parallelized over 200 processing cores.}
\label{tbl:comptime}
\end{minipage}
\end{table}
\subsection{Correlation extraction}
\label{sec:correx}
Finally, we demonstrate the use of the proposed model as a tool for extracting {common flow couplings} on the design space. Setting the sparsity penalty $\lambda$ so that only the top nine correlations are chosen, Figure \ref{fig:T} shows the corresponding graph of the extracted couplings of CPOD modes. Nodes on this graph represent CPOD modes for each flow variable, with edges indicating the presence of a non-zero correlation between two modes. Each connected subgraph in Figure \ref{fig:T} is interpretable in terms of flow physics. For example, the subgraph connecting $u_1$, $w_1$ and $P_1$ (first modes for $x$-velocity, circumferential velocity and pressure) makes physical sense, because $u_1$ and $w_1$ are inherently coupled by Bernoulli's equation for fluid flow \citep{SS1982}, while $w_1$ and $P_1$ are connected by the centrifugal acceleration induced by circular momentum of LOX flow. Likewise, the subgraph connecting $T_1$, $\rho_1$ and $w_2$ also provides physical insight: $T_1$ and $\rho_1$ are coupled by the equation of state and conservation of energy, while $\rho_1$ and $w_2$ are connected by conservation of momentum.
The interpretability of these extracted flow couplings in terms of fundamental conservation laws from fluid mechanics is not only appealing from a flow physics perspective, but also provides a reassuring check on the estimation of the co-kriging matrix $\bm{T}$. Recall from the discussion in Section \ref{sec:covmat} that an accurate estimate of $\bm{T}$ is needed for the improved predictive guarantees of Theorem \ref{thm:uq} to hold. The consistency of the selected flow couplings (and the ranking of such couplings) with established physical principles provides confidence that the proposed estimation algorithm indeed returns an accurate estimate of $\bm{T}$. These results nicely illustrate the dual purpose of the CPOD matrix $\bm{T}$ in our co-kriging model: not only does it allow for more accurate UQ, it also extracts interesting flow couplings which can guide further experiments.
\subsection{Computation time}
In addition to accurate flow emulation and physics extraction, the primary appeal of the proposed emulator is its efficiency. Table \ref{tbl:comptime} summarizes the computation time required for each step of the emulation process, with timing performed on a parallelized system of 200 Intel Xeon E5-2603 1.80GHz processing cores. Despite the massive training dataset, which requires nearly 100GB of storage space, the proposed model can provide accurate prediction, UQ and coupling extraction in slightly over an hour of computation time. Moreover, because both CPOD extraction and parameter estimation need to be performed only once, the surrogate model can generate flow predictions for hundreds of new settings within a day, thereby allowing for the exploration of the full design space in practical turn-around times. Through a careful elicitation and incorporation of flow physics into the surrogate model, we show that efficient and accurate flow prediction is possible despite a limited number of simulation runs, with the trained model extracting valuable physical insights that can guide further investigations.
\section{Conclusions and future work}\label{sec:concl}
In this paper, a new emulator model is proposed which efficiently predicts turbulent cold-flows for rocket injectors with varying geometries. An important innovation of our work lies in its \textit{elicitation} and \textit{incorporation} of flow properties as model assumptions. First, exploiting the deep connection between POD and turbulent flows \citep{Lum1967}, a novel CPOD decomposition is used for extracting {common} instabilities over the design space. Next, taking advantage of dense temporal resolutions, a {time-independent} model is proposed which fits {independent} emulators at each simulation time-step. Lastly, a sparse covariance matrix $\bm{T}$ is employed within the emulator model to account for the few significant couplings among flow variables. Given the complexities inherent in spatio-temporal flows and the massive datasets at hand, such simplifications are paramount for {accurate} flow predictions in {practical} turn-around times. This highlights the need for careful {elicitation} in flow emulation, particularly for engineering applications where the time-consuming nature of simulations limits the number of available runs.
Applying the model to simulation data, the proposed emulator provides accurate flow predictions and captures several key metrics for injector performance. In addition, the proposed model offers two appealing features: (a) it provides a physically meaningful quantification of spatio-temporal uncertainty, and (b) it extracts significant couplings between flow instabilities. A key advantage of our emulator over existing flow kriging methods is that it provides accurate predictions using only a fraction of the time required by simulation. This efficiency is very appealing for engineers, because it allows them to fully explore the desired design space and make timely decisions.
Looking ahead, we are pursuing several directions for future research. First, while the CPOD expansion appears to work well for cold-flows, the justifying assumption of similar Reynolds numbers does not hold for more complicated (e.g., reacting) turbulent flows. To this end, we are working on ways to incorporate pattern recognition techniques \citep{Fuk2013} into the GP kriging framework to jointly (a) identify common instability structures that scale {non-linearly} over varying geometries, then (b) predict such structures at new geometric settings. The key hurdle is again {computational efficiency}, and the treed GP models in \cite{Tea2012} or the local GP models in \cite{GA2015} and \cite{Sea2016} appear to be attractive options. Next, a new design was recently proposed in \cite{MJ2017} which combines the MaxPro methodology with minimax coverage, and it will be interesting to see whether such designs can provide improved performance. Lastly, to evaluate the stability of new injector geometries, the UQ for the emulated flow needs to be fed forward through an acoustics solver. Since each evaluation of the solver can be time-intensive, this forward uncertainty propagation can be performed more quickly by reducing the UQ to a set of representative points, and the support points in \cite{MJ2016} may prove useful for this task.\\
\if11{\noindent \textbf{Acknowledgements}: The authors gratefully acknowledge helpful advice from the associate editor, two anonymous referees and Dr. Mitat A. Birkan. This work was sponsored partly by the Air Force Office of Scientific Research under Grant No. FA 9550-10-1-0179, and partly by the William R. T. Oakes Endowment of Georgia Institute of Technology. Wu's work is partially supported by NSF DMS 1564438.}
\fi
\newpage
\begin{appendices}
\numberwithin{equation}{section}
\counterwithin{figure}{section}
\counterwithin{table}{section}
\setcounter{page}{1}
\section{Computing the CPOD expansion}
\label{sec:CPOD}
The driving idea behind CPOD is that a common spatial domain is needed to extract common instabilities over multiple injector geometries, since each simulation run has different geometries and varying grid points. We first describe a physically justifiable method for obtaining such a common domain, and then use this to compute the CPOD expansion.
\subsection{Common grid}\label{sec:commongrid}
\begin{figure}[t]
\centering
\includegraphics[width=\textwidth]{Figures/FourParts.png}
\caption{Partition of the spatial grid for the first simulation case.}
\label{fig:FourParts}
\end{figure}
\begin{enumerate}
\item Identify the densest grid (i.e., with the most grid points) among the $n$ simulation runs, and set this as the common reference grid.
\item For each simulation, partition the grid into the following four parts: (a) from injector head-end to the inlet, (b) from the inlet to the nozzle exit, (c) the top portion of the downstream region and (d) the bottom portion of the downstream region (see Figure \ref{fig:FourParts} for an illustration). This splits the flow in such a way that the linearity assumption can be physically justified.
\item Linearly rescale each part of the partition to the common grid by the corresponding geometry parameters $L$, $R_n$ and $\Delta L$ (see Figure \ref{fig:FourParts}).
\item For each simulation, interpolate the original flow data onto the spatial grid of the common geometry. This step ensures the flow is realized over a common set of grid points for all $n$ simulations. In our implementation, the \textit{inverse distance weighting} interpolation method \citep{She1968} is used with $10$ nearest neighbours; a brief code sketch is given after this list.
\end{enumerate}
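As an illustration of step 4 above, a minimal sketch of inverse distance weighting with $10$ nearest neighbours is given below. The arrays \texttt{src\_pts}, \texttt{src\_vals} and \texttt{common\_pts} (the rescaled source grid points, the flow values on them, and the common grid points) are hypothetical placeholders.
\begin{verbatim}
import numpy as np
from scipy.spatial import cKDTree

def idw_interpolate(src_pts, src_vals, common_pts, n_neighbors=10,
                    eps=1e-12):
    # Inverse distance weighting (Shepard, 1968): weights ~ 1/distance.
    dist, idx = cKDTree(src_pts).query(common_pts, k=n_neighbors)
    w = 1.0 / (dist + eps)                   # eps guards exact matches
    return np.sum(w * src_vals[idx], axis=1) / np.sum(w, axis=1)
\end{verbatim}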
\subsection{POD expansion}
After flows from each simulation have been rescaled onto the common grid, the original POD expansion can be used to extract common flow instabilities. Let $\{\bm{x}_j\}_{j=1}^J$ and $\{t_m\}_{m=1}^T$ denote the set of common grid points and simulated time-steps, respectively, and let $\tilde{Y}(\bm{x},t;\bm{c}_i)$ be an interpolated flow variable for geometric setting $\bm{c}_i$, $i=1, \cdots, n$ (for brevity, assume a single flow variable, e.g., $x$-velocity, for the exposition below). The CPOD expansion can be computed using the following three steps; a brief code sketch is given after the list.
\begin{enumerate}
\item For notational convenience, we combine all combinations of geometries and time-steps into a single index. Set $N=nT$ and let $l = 1, \cdots, N$ index all combinations of $n$ design settings and $T$ time-steps, and let $\tilde{Y}_l(\bm{x}) \equiv \tilde{Y}(\bm{x},(t,\bm{c})_l)$. Define $\bm{Q} \in \mathbb{R}^{N \times N}$ as the following inner-product matrix:
\[\bm{Q}_{l,m} = \sum_{j=1}^J \tilde{Y}_l (\bm{x}_j) \tilde{Y}_m (\bm{x}_j).\]
Such an inner-product is possible because all $n$ simulated flows are observed on a \textit{common} set of grid points.
First, compute the eigenvectors $\bm{a}_k \in \mathbb{R}^{N}$ satisfying:
\[ \bm{Q} \bm{a}_k = \lambda_k \bm{a}_k,\]
where $\lambda_k$ is the $k$-th largest eigenvalue of $\bm{Q}$. Since a full eigendecomposition requires $O(N^3)$ work, this step may be intractable when the temporal resolution is dense. To this end, we employed a variant of the implicitly restarted Arnoldi method \citep{LS1998}, which can efficiently approximate {leading} eigenvalues and eigenvectors.
\item Compute the $k$-th mode $\phi_k(\bm{x})$ as:
\[
\begin{bmatrix}
\phi_k(\bm{x}_1) \\
\phi_k(\bm{x}_2) \\
\vdots \\
\phi_k(\bm{x}_J)
\end{bmatrix}
=
\begin{pmatrix}
\tilde{Y}_1 (\bm{x}_1) & \cdots & \tilde{Y}_{N} (\bm{x}_1)\\
\vdots & \ddots & \vdots \\
\tilde{Y}_1 (\bm{x}_J) & \cdots & \tilde{Y}_{N} (\bm{x}_J)
\end{pmatrix}
\bm{a}_k.
\]
To ensure orthonormality, apply the following normalization:
\[\phi_k(\bm{x}_j) := \frac{\phi_k(\bm{x}_j)}{\|\phi_k(\bm{x})\|}, \quad \|\phi_k(\bm{x})\| = \sqrt{\sum_{i=1}^J \phi_k(\bm{x}_i)^2}.\]
\item Lastly, derive the CPOD coefficients $(\beta_{l,1}, \cdots, \beta_{l,N})^T$ for the snapshot at index $l$ (i.e., with design setting and time-step $(\bm{c},t)_l$) as:
\[
\begin{bmatrix}
\beta_{l,1} \\
\beta_{l,2} \\
\vdots \\
\beta_{l,N}
\end{bmatrix}
=
\begin{pmatrix}
\phi_1(\bm{x}_1) & \cdots & \phi_1(\bm{x}_J)\\
\vdots & \ddots & \vdots \\
\phi_N(\bm{x}_1) & \cdots & \phi_N(\bm{x}_J)
\end{pmatrix}
\begin{bmatrix}
\tilde{Y}_l (\bm{x}_1)\\
\tilde{Y}_l (\bm{x}_2)\\
\vdots\\
\tilde{Y}_l (\bm{x}_J)
\end{bmatrix}.
\]
Using these coefficients and a truncation at $K_r < N$ modes, it is easy to show the following decomposition of the flow at the design setting $\bm{c}_i$ and time-step $t_m$ indexed by $l$:
\[
Y(\bm{x}_j,t_m;\bm{c}_i)\approx\sum_{k=1}^{K_r} \beta_{l,k}\mathcal{M}_i\{\phi_k(\bm{x}_j)\},\quad j = 1, \cdots , J,
\]
as asserted in (3).
\end{enumerate}
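The three steps above can be condensed into the following sketch via the method of snapshots; \texttt{Ytil} is a hypothetical $J \times N$ matrix whose columns are the interpolated snapshots $\tilde{Y}_l(\bm{x}_j)$, and the Lanczos/Arnoldi routine \texttt{eigsh} approximates the leading $K_r$ eigenpairs.
\begin{verbatim}
import numpy as np
from scipy.sparse.linalg import eigsh

def cpod(Ytil, Kr):
    # Step 1: inner-product matrix Q and its leading eigenpairs
    Q = Ytil.T @ Ytil                        # (N, N)
    lam, A = eigsh(Q, k=Kr, which='LA')      # largest eigenvalues
    # Step 2: modes, normalized to unit norm over the common grid
    Phi = Ytil @ A                           # (J, Kr)
    Phi /= np.linalg.norm(Phi, axis=0)
    # Step 3: CPOD coefficients beta_{l,k} for every snapshot l
    Beta = Ytil.T @ Phi                      # (N, Kr)
    return lam, Phi, Beta
\end{verbatim}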
\section{Proof of Theorem 2}
\label{sec:pf2}
Define the map $A: \mathbb{R}^K \times \mathbb{R}^{K \times K} \times \mathbb{R}^p \rightarrow \mathbb{R}^K \times \mathbb{R}^{K \times K} \times \mathbb{R}^p$ as a single loop of the graphical LASSO operator for optimizing $\bm{T}$ with $\boldsymbol{\mu}$ and $\boldsymbol{\tau}$ fixed, and define $B:\mathbb{R}^K \times \mathbb{R}^{K \times K} \times \mathbb{R}^p \rightarrow \mathbb{R}^K \times \mathbb{R}^{K \times K} \times \mathbb{R}^p$ as the L-BFGS map for a single line-search when optimizing $\boldsymbol{\mu}$ and $\boldsymbol{\tau}$ with $\bm{T}$ fixed. Each BCD cycle in Algorithm 1 then follows the map composition $S = A^M \circ B^N$, where $M < \infty$ and $N < \infty$ are the iteration counts for the graphical LASSO operator and for the line-searches, respectively. The parameter estimates at iteration $m$ of the BCD cycle are then given by:
\[\Theta_{m+1} = S(\Theta_m), \quad \text{where} \; \Theta_m = (\boldsymbol{\mu}_m,\bm{T}_m,\boldsymbol{\tau}_m).\]
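For concreteness, one BCD cycle $S$ might be sketched as follows; the negative log-likelihood \texttt{neg\_loglik} and the empirical covariance \texttt{S\_hat} of the CPOD coefficients are hypothetical placeholders, and whether sparsity is imposed on $\bm{T}$ or on its inverse depends on the exact parametrization of $l_\lambda$ (the graphical LASSO below sparsifies the precision matrix).
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize
from sklearn.covariance import graphical_lasso

def bcd_cycle(mu_tau, S_hat, neg_loglik, lam):
    # A^M: update T by the graphical LASSO, with (mu, tau) held fixed
    _, Theta = graphical_lasso(S_hat, alpha=lam)  # sparse precision
    T = np.linalg.inv(Theta)
    # B^N: update (mu, tau) by L-BFGS line-searches, with T held fixed
    res = minimize(neg_loglik, mu_tau, args=(T,), method='L-BFGS-B')
    return res.x, T
\end{verbatim}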
Define the set of stationary solutions as $\Gamma = \{\Theta \; : \; \nabla l_{\lambda}(\Theta) = \bm{0}\}$, where $\nabla l_\lambda$ is the gradient of the negative log-likelihood $l_\lambda$. Using the Global Convergence Theorem (see Section 7.7 of \citealp{LY2008}), we can prove stationary convergence:
\[\lim_{m\rightarrow \infty}\Theta_m = \Theta^* \in \Gamma,\]
if the following three conditions hold:
\begin{enumerate}[label=(\roman*)]
\item $\{\Theta_m\}_{m=1}^\infty$ is contained within a compact subset of $\mathbb{R}^K \times \mathbb{R}^{K \times K} \times \mathbb{R}^p$,
\item $l_\lambda$ is a continuous descent function on $\Gamma$ under map $S$,
\item $S$ is closed for points outside of $\Gamma$.
\end{enumerate}
We will verify these conditions below.
\begin{enumerate}[label=(\roman*)]
\item This is easily verified by the fact that $|\boldsymbol{\mu}_m| \leq \left( \max_{i,r,k} |\beta_k^{(r)}(\bm{c}_i)| \right) \bm{1}_K$, $\bm{0} \preceq \bm{T}_m \preceq \left( \max_{k,r} s^2\{\beta_k^{(r)}(\bm{c}_i)\}_{i=1}^n \right) \bm{I}_K$ and $\boldsymbol{\tau}_m \in [0,1]^p$, where $s^2\{ \cdot \}$ returns the sample variance of a set of scalars.
\item To prove that $S$ is a descent function, we need to show that if $\Theta \in \Gamma$, then $l_\lambda\{S(\Theta)\} = l_\lambda\{\Theta\}$, and if $\Theta \notin \Gamma$, then $l_\lambda\{S(\Theta)\} < l_\lambda\{\Theta\}$. The first condition is trivial, since $M=0$ and $N=0$ when $\Theta$ is stationary. The second condition follows from the fact that the maps $A$ and $B$ incur a strict decrease in $l_\lambda$ whenever $\bm{T}$ and $(\boldsymbol{\mu},\boldsymbol{\tau})$ are non-stationary, respectively.
\item Note that $A^M$ is a continuous map (since the graphical LASSO map is a continuous operator) and the line-search map $B^N$ is also continuous. Since $S = A^M \circ B^N$, it must be continuous as well, from which the closedness of $S$ follows.
\end{enumerate}
\section{Proof of Theorem 3}
\label{sec:kin}
Fix some spatial coordinate $\bm{x}$ and time-step $t$, and let:
\[\bm{y} = (Y^{(u)}(\bm{x},t;\bm{c}_{new}), Y^{(v)}(\bm{x},t;\bm{c}_{new}), Y^{(w)}(\bm{x},t;\bm{c}_{new}))^T\] be the true simulated flows for $x$-, $y$- and circumferential velocities at the new setting $\bm{c}_{new}$,
\[\hat{\bm{y}} = (\hat{Y}^{(u)}(\bm{x},t;\bm{c}_{new}), \hat{Y}^{(v)}(\bm{x},t;\bm{c}_{new}), \hat{Y}^{(w)}(\bm{x},t;\bm{c}_{new}))^T\]
be its corresponding prediction from (9), and
\[\bar{\bm{y}} = (\bar{Y}^{(u)}(\bm{x};\bm{c}_{new}), \bar{Y}^{(v)}(\bm{x};\bm{c}_{new}), \bar{Y}^{(w)}(\bm{x};\bm{c}_{new}))^T\]
be its time-averaged flow. It is easy to verify that, given the simulation data $\mathcal{D} = \{Y^{(r)}(\bm{x},t;\bm{c}_i)\}$, the conditional distribution of $\bm{y}|\mathcal{D}$ is $\mathcal{N}(\hat{\bm{y}},\Phi(\bm{x},t))$, where:
\begin{equation}
\Phi(\bm{x},t) \equiv
\left[\begin{array}{ccc}
\bm{m}^{(u)} & 0 & 0\\
0 & \bm{m}^{(v)} & 0\\
0 & 0 & \bm{m}^{(w)}
\end{array}\right]
\left[ \mathbb{V}\{{\boldsymbol{\beta}}(t;\bm{c}_{new})| \{\boldsymbol{\beta}(t;\bm{c}_i)\}^n_{i=1}\} \right]_{uvw}\left[\begin{array}{ccc}
\bm{m}^{(u)} & 0 & 0\\
0 & \bm{m}^{(v)} & 0\\
0 & 0 & \bm{m}^{(w)}
\end{array}\right]^T,
\label{eq:phi}
\end{equation}
with:
\[
\bm{m}^{(r)}=\left[\begin{array}{cccc}
\mathcal{M}_{new} \{\phi^{(r)}_1(\bm{x})\}, & \mathcal{M}_{new} \{\phi^{(r)}_2(\bm{x})\}, & \cdots & \mathcal{M}_{new} \{\phi^{(r)}_{K_r}(\bm{x})\}
\end{array}\right], \quad r=u,v,w.
\]
Letting $\Phi(\bm{x},t) = \bm{U}\Lambda\bm{U}^T$ be the eigendecomposition of $\Phi(\bm{x},t)$, with $\Lambda = \text{diag}\{\lambda_j\}$, it follows that $\Lambda^{-1/2}\bm{U}^T(\bm{y}-\bar{\bm{y}})|\mathcal{D} \stackrel{d}{=} \mathcal{N}(\boldsymbol{\mu},\bm{I}_K)$, where $\boldsymbol{\mu}=\Lambda^{-1/2}\bm{U}^T(\hat{\bm{y}}-\bar{\bm{y}})$ and $K=K_u+K_v+K_w$. Denoting $\bm{a}=\Lambda^{-1/2}\bm{U}^T(\bm{y}-\bar{\bm{y}})$, the TKE expression in (13) can be rewritten as:
\begin{align}
\begin{split}
\kappa(\bm{x},t) &= \frac{1}{2}(\bm{y}-\bar{\bm{y}})^T(\bm{y}-\bar{\bm{y}})= \frac{1}{2}(\bm{U}\Lambda^{1/2}\bm{a})^T(\bm{U}\Lambda^{1/2}\bm{a})\\
&= \frac{1}{2}(\bm{a}^T\Lambda^{1/2}\bm{U}^T\bm{U}\Lambda^{1/2}\bm{a})\\
&=\frac{1}{2}\bm{a}^T\Lambda\bm{a}=\frac{1}{2}\sum^K_{j=1}\lambda_ja^2_j.
\end{split}
\end{align}
Since $\bm{a}\sim \mathcal{N}(\boldsymbol{\mu},\bm{I}_K)$, $a^2_j$ has a non-central chi-square distribution with one degree of freedom and non-centrality parameter $\mu_j^2$ (we denote this as $\chi^2_1(\mu_j^2)$). $\kappa(\bm{x},t)$ is then distributed as:
\begin{equation}
\sum^K_{j=1}\frac{\lambda_j}{2}\chi^2_1(\mu^2_j),
\label{eq:dist}
\end{equation}
which is a sum of weighted non-central chi-squared distributions. The computation of the distribution function for such a random variable has been studied extensively, see, e.g., \cite{imhof1961computing}, \cite{davies1973numerical,davies1980algorithm}, \cite{castano2005distribution}, and \cite{liu2009new}, and we appeal to these methods for computing the pointwise confidence interval of $\kappa(\bm{x},t)$ in Section 4. Specifically, we employ the method of \cite{liu2009new} through the \textsf{R} \citep{Rcite} package \texttt{CompQuadForm} \citep{CompQuadForm}.
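As an alternative to these exact methods, \eqref{eq:dist} also admits a simple Monte Carlo approximation of the lower confidence bound; the sketch below assumes a positive definite $\Phi(\bm{x},t)$ and takes the (hypothetical) predicted and time-averaged flow vectors as inputs.
\begin{verbatim}
import numpy as np

def tke_lower_bound(Phi_xt, yhat, ybar, alpha=0.10,
                    n_mc=200_000, seed=0):
    # kappa = 0.5 * sum_j lam_j * chi2_1(mu_j^2), with a_j ~ N(mu_j, 1)
    lam, U = np.linalg.eigh(Phi_xt)          # Phi(x,t) = U diag(lam) U^T
    mu = (U.T @ (yhat - ybar)) / np.sqrt(lam)
    rng = np.random.default_rng(seed)
    a = rng.standard_normal((n_mc, lam.size)) + mu
    kappa = 0.5 * (a ** 2) @ lam
    return np.quantile(kappa, alpha)         # lower confidence bound
\end{verbatim}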
\end{appendices}
\newpage
\section{Introduction and main results}\label{sec-1}
The quasihyperbolic metric (briefly, QH metric) was introduced by Gehring and
his students Palka and Osgood in the 1970's \cite{Geo, GP} in the setting of Euclidean spaces
${\mathbb R}^n$ $(n\ge 2).$ Since its first appearance, the quasihyperbolic metric has become an important tool in the geometric function theory of Euclidean spaces, especially in the study of quasiconformal and quasisymmetric mappings. Uniform domains in Euclidean spaces were introduced independently by Jones \cite{Jo80} and Martio and Sarvas \cite{MS}. Recently, Bonk, Heinonen and Koskela introduced uniform metric spaces in \cite{BHK} and demonstrated a one-to-one (conformal) correspondence between this class of spaces and geodesic hyperbolic spaces in the sense of Gromov. Since then, uniformity has played a significant role in related studies; see \cite{BH}, \cite{Her04}, \cite{Her06}, \cite{KL}, \cite{KLM14}, \cite{La11} and references therein.
The class of quasisymmetric mappings on the real axis was first introduced by Beurling and
Ahlfors \cite{BA}, who found a way to obtain a quasiconformal extension of a quasisymmetric self-mapping of the
real axis to a self-mapping of the upper half-plane. This idea
was later generalized by Tukia and V\"ais\"al\"a, who studied quasisymmetric
mappings between metric spaces \cite{TV}. In
1998, Heinonen and Koskela \cite{HK} proved a remarkable result, showing that the concepts of quasiconformality and
quasisymmetry are quantitatively equivalent in a large class of metric spaces, which
includes Euclidean spaces. Also, V\"ais\"al\"a proved the quantitative equivalence between free quasiconformality and
quasisymmetry of homeomorphisms between two Banach spaces, see \cite[Theorem 7.15]{Vai8}. Against this background, it is not surprising that the study of quasisymmetry in metric spaces has recently attracted significant attention \cite{BM,hprw,HL,Tys}.
The main tools in the study of quasiconformal mappings and uniform spaces are volume integrals (associated with a doubling or Ahlfors regular measure), the conformal modulus, Whitney decompositions and the quasihyperbolic metric. The main goal of this paper is to study the subinvariance of uniform domains in general metric spaces under weakly quasisymmetric mappings by means of the quasihyperbolic metric and metric geometry. We start by recalling some basic definitions.
Throughout this paper, we always assume that $X$ and $Y$ are metric spaces, and we do not assume local compactness. We follow the notation and terminology of \cite{HK-1, HK, HL, Tys, Vai8}.
\noindent Here and in what follows, we always use $|x-y|$ to denote the distance between $x$ and $y$.
\bdefe\label{japan-31} A homeomorphism $f$ from $X$ to $Y$ is said to be
\begin{enumerate}
\item $\eta$-{\it quasisymmetric} if there is a homeomorphism $\eta : [0,\infty) \to [0,\infty)$ such that
$$ |x-a|\leq t|x-b|\;\; \mbox{implies}\;\; |f(x)-f(a)| \leq \eta(t)|f(x)-f(b)|$$
for each $t\geq 0$ and for each triple $x,$ $a$, $b$ of points in $X$;
\item {\it weakly $H$-quasisymmetric} if
$$ |x-a|\leq |x-b|\;\; \mbox{ implies}\;\; |f(x)-f(a)| \leq H|f(x)-f(b)|$$
for each triple $x$, $a$, $b$ of points in $X$.
\end{enumerate}\edefe
\br\label{japan-32}
The $\eta$-quasisymmetry implies the weak $H$-quasisymmetry with $H=\eta(1)$. Obviously, $\eta(1)\geq 1$. In general, the converse is not true (cf. \cite[Theorem $8.5$]{Vai8}). See also \cite{hprw} for the related discussions.
\er
It follows from \cite[Remark, p. 121]{FHM} and \cite[Theorem 5.6]{Vai2} that uniform domains are subinvariant with respect to quasiconformal mappings in $\IR^n$ ($n\geq 2$). By this, we mean that if $f:\; G\to G'$ is a $K$-quasiconformal mapping, where $G$ and $G'$ are domains in $\IR^n$, and if $G'$ is $c$-uniform,
then $D'=f(D)$ is $c'$-uniform
for every $c$-uniform subdomain
$D \subset G$,
where $c'=c'(c, K, n)$, meaning that the constant $c'$ depends only on the coefficient $c$ of the uniformity of $D$, the coefficient $K$ of quasiconformality of $f$ and the dimension $n$ of the Euclidean space $\IR^n$. See \cite{BHX, GM, hlpw, HVW, Vai2, Vai8, Xie} for similar discussions in this line. We note that if a domain $G$ is uniform, then $G$ is a John domain and is quasiconvex. So it is natural to ask whether it is possible to weaken the assumption ``$G'$ is uniform'' to ``$G'$ is a John domain'' or ``$G'$ is quasiconvex''.
In fact, we observe from \cite[Theorem 1]{hlpw} and \cite[Proposition 7.12]{BHK} that the following result holds.
\begin{thm}\label{thm-2}
Suppose that $G$ and $G'$ are bounded subdomains in $\mathbb{R}^n$ and that $f:G\to G'$ is a $K$-quasiconformal mapping. If $G'$ is an $a$-John domain with center $y_0'$ and $G_1$ is a subdomain of $G$ which is inner $b$-uniform, then its image $G_1'=f(G_1)$ is inner $\tau$-uniform, where $\tau=\tau(n,K,a,b,\frac{\diam G}{\delta_G(f^{-1}(y_0'))})$.
\end{thm}
\br We remark that the above result is not valid for a uniform subdomain $G_1$ of $G$; that is, $G_1'$ may fail to be uniform. For example, $G'=\mathbb{B}^2\setminus [0,1]$ is a conformal image of $G=\mathbb{B}^2$, and we observe that $G'$ is a John domain and $G_1=G\setminus\{0\}$ is uniform, but $G_1'$ is obviously not a uniform domain. However, if we replace the assumption ``$G'$ is an $a$-John domain'' by ``$G'$ is quasiconvex'', then we get the following result.
\er
\begin{thm}\label{thm1.1}
Suppose that $X$ and $Y$ are quasiconvex and complete metric spaces, that $G\varsubsetneq X$ is a domain,
$G'\varsubsetneq Y$ is a quasiconvex domain, and that $f: G\to G'$ is a weakly quasisymmetric mapping.
For each subdomain $D$ of $G$, if $D$ is uniform, then $D'=f(D)$ is uniform, where the coefficient of uniformity of $D'$ depends
only on the given data of $X$, $Y$, $G$, $G'$, $D$, and $f$.
\end{thm}
Here and in what follows, the phrase ``the given data of $X$, $Y$, $G$, $G'$, $D$, and $f$'' means the data depending only on the given constants, namely the coefficients of quasiconvexity of $X$, $Y$ and $G'$, the coefficient of uniformity of $D$, and the coefficient of weak quasisymmetry of $f$.
\br It is worth mentioning that in Theorem \ref{thm1.1}, the domain $G'$ is not required to be ``uniform'', but only to be ``quasiconvex'' $($from the definitions
in Section \ref{sec-2}, we easily see that uniformity implies quasiconvexity$)$. If $X=Y=\mathbb{R}^n$, then $f$ is $\eta$-quasisymmetric with $\eta=\eta(n,H)$, see \cite[Theorem $2.9$]{Vai0}. Since quasisymmetric maps preserve uniform domains, the assertion follows. But we remind the reader that our result is independent of the dimension in this case.
\er
As an application of our method, we discuss the distortion property of quasihyperbolic short arcs, since in the general case quasihyperbolic geodesics may not exist. Actually, we establish an analog of Pommerenke's theorem on the length and diameter distortion of hyperbolic geodesics under conformal mappings from the unit disk onto a plane domain, see \cite[Corollary $4.18$ and Theorem $4.20$]{Po}. In higher dimensions (that is, in $\mathbb{R}^n$), Heinonen and N\"{a}kki found that quasiconformal images of quasihyperbolic geodesics minimize the Euclidean curve-diameter, see \cite[Theorem $6.1$]{HN}. Along this line, we obtain the diameter uniformity of images of quasihyperbolic short arcs under weakly quasisymmetric mappings into a uniform space, which we state as follows.
\begin{thm}\label{thm-3}
Suppose that $X$ and $Y$ are $c$-quasiconvex and complete metric spaces, and that $f:G\to G'$ is a weakly $H$-quasisymmetric mapping between two domains $G\subsetneq X$ and $G'\subsetneq Y$. If $G'$ is $b$-uniform, then for any $\varepsilon$-short arc $\gamma$ in $G$ with endpoints $x$ and $y$, $0<\varepsilon<\min\{1,\frac{k_G(x,y)}{6}\}$, we have
\begin{enumerate}
\item $\min\{ \diam(\gamma'[f(x),f(u)]), \diam(\gamma'[f(y),f(u)])\}\leq \lambda_1\delta_{G'}(f(u))$ for all $u\in\gamma$;
\item $\diam (\gamma')\leq \lambda_2 |f(x)-f(y)|,$\end{enumerate}
where $\gamma'=f(\gamma)$ and $\lambda_i$ depends only on $c,H$ and $b$ for $i=1,2$.
\end{thm}
We remark that, with the extra local compactness assumption, it is not hard to see that $G$ is a proper geodesic space with respect to the quasihyperbolic metric, see \cite[Proposition $2.8$]{BHK}, because a complete locally compact length space is proper and geodesic. Hence the above result also holds for quasihyperbolic geodesics of $G$ in this situation.
The rest of this paper is organized as follows. In Section \ref{sec-2}, we recall some definitions and preliminary results, particularly, some basic properties of short arcs are given. In Section \ref{sec-4}, Theorem \ref{thm1.1} is proved based on the properties of short arcs.
Section \ref{sec-5} is devoted to the proof of Theorem \ref{thm-2} and Theorem \ref{thm-3}.
\section{Preliminaries}\label{sec-2}
In this section, we give the necessary definitions and auxiliary results, which will be used in the proofs of our main results.
Throughout this paper,
balls and spheres in metric spaces $X$ are written as
$$\mathbb{B}(a,r)=\{x\in X:\,|x-a|<r\},\;\;\mathbb{S}(a,r)=\{x\in X:\,|x-a|=r\}$$
and $$
\mathbb{\overline{B}}(a,r)=\mathbb{B}(a,r)\cup \mathbb{S}(a,r)= \{x\in X:\,|x-a|\leq r\}.$$
For convenience, given
domains $G \subset X,$ $G' \subset Y$, a map $f:G \to G'$
and points $x$, $y$,
$z$, $\ldots$ in $G$, we always denote by $x'$, $y'$, $z'$, $\ldots$
the images in $G'$ of $x$, $y$, $z$, $\ldots$ under $f$,
respectively. Also, we assume that $\gamma$
denotes an arc in $G$ and $\gamma'$
the image in $G'$ of $\gamma$
under $f$.
\subsection{Quasihyperbolic metric, solid arcs and short arcs}
In this subsection, we start with the definition of quasihyperbolic metric. If $X$ is a connected metric space and $G\varsubsetneq X$ is a non-empty open set,
then it follows from \cite[Remark 2.2]{HL} that the boundary of $G$ satisfies $\partial G\not=\emptyset$.
Suppose that $\gamma\subset G$ is a rectifiable arc or path. Its {\it quasihyperbolic length} is the number:
$$\ell_{k_G}(\gamma)=\int_{\gamma}\frac{|dz|}{\delta_{G}(z)},
$$ where $\delta_G(z)$ denotes the distance from $z$ to $\partial G$.
For each pair of points $x$, $y$ in $G$, the {\it quasihyperbolic distance}
$k_G(x,y)$ between $x$ and $y$ is defined in the following way:
$$k_G(x,y)=\inf\ell_{k_G}(\gamma),
$$
where the infimum is taken over all rectifiable arcs $\gamma$
joining $x$ to $y$ in $G$.
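To fix ideas, we record a standard example (not needed later): let $G=\{(z_1,z_2)\in\mathbb{R}^2:\, z_2>0\}$ be the upper half-plane, so that $\delta_G(z)=z_2$, and let $u=(0,s)$ and $v=(0,t)$ with $0<s<t$. Along any rectifiable arc, $\delta_G$ changes at most at unit speed with respect to arclength, so every arc joining $u$ and $v$ has quasihyperbolic length at least $\log(t/s)$, while the vertical segment attains this value; hence
$$k_G(u,v)=\int_{s}^{t}\frac{dr}{r}=\log\frac{t}{s}.$$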
Suppose $X$ is quasiconvex and $G\subsetneq X$. If $\gamma$ is a rectifiable curve in $G$ connecting $x$ and $y$, then (see, e.g., the proof of Theorem $2.7$ in \cite{HL} or \cite{Vai6-0})
\beq\label{base-eq-1}\ell_{k_G}(\gamma)\geq \log\Big(1+\frac{\ell(\gamma)}
{\min\{\delta_{G}(x), \delta_{G}(y)\}}\Big)
\eeq
and thus,
\beq\label{base-eq-2}k_G(x,y)\geq \log\Big(1+\frac{|x-y|}
{\min\{\delta_{G}(x), \delta_{G}(y)\}}\Big).
\eeq
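For the reader's convenience, we recall the short argument behind \eqref{base-eq-1}: parametrizing $\gamma$ by arclength from $x$, we have $\delta_{G}(\gamma(s))\leq \delta_{G}(x)+s$, and hence
$$\ell_{k_G}(\gamma)\geq \int_{0}^{\ell(\gamma)}\frac{ds}{\delta_{G}(x)+s}=\log\Big(1+\frac{\ell(\gamma)}{\delta_{G}(x)}\Big);$$
the same estimate from $y$ yields \eqref{base-eq-1}.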
Gehring and Palka \cite{GP} introduced the quasihyperbolic metric of
a domain in $\IR^n$. For the basic properties of this metric we refer to \cite{Geo}. Recall that a curve $\gamma$ from $x$ to
$y$ is a {\it quasihyperbolic geodesic} if
$\ell_{k_G}(\gamma)=k_G(x,y)$. Each subcurve of a quasihyperbolic
geodesic is obviously a quasihyperbolic geodesic. It is known that a
quasihyperbolic geodesic between any two points in a Banach space $X$ exists if the
dimension of $X$ is finite, see \cite[Lemma 1]{Geo}. This is not
true in arbitrary metric spaces (cf. \cite[Example 2.9]{Vai6-0}).
Let us recall a result which is useful for the discussions later on.
\begin{Lem}\label{ll-11}(\cite[Lemma 2.4]{HWZ}) Let $X$ be a $c$-quasiconvex metric space and let $G\subsetneq X$ be a domain. Suppose that $x$, $y\in G$ and either $|x-y|\leq \frac{1}{3c}\delta_G(x)$ or $k_G(x,y)\leq 1$. Then
\be\label{vvm-2} \frac{1}{2}\frac{|x-y|}{\delta_G(x)}< k_G(x,y) < 3c\frac{|x-y|}{\delta_G(x)}.\ee
\end{Lem}
\noindent Here, we say that $X$ is {\it c-quasiconvex} $(c\geq 1)$ if each pair of points $x$, $y\in X$ can be joined by an arc $\gamma$ in $X$ with length ${\ell}(\gamma)\leq c|x-y|$.
\bdefe \label{def1.4}
Suppose $\gamma$ is an arc in a domain $G\varsubsetneq X$ and $X$ is a rectifiably connected metric space. The arc may be closed, open or half open. Let $\overline{x}=(x_0,$ $\ldots,$ $x_n)$,
$n\geq 1$, be a finite sequence of successive points of $\gamma$.
For $h\geq 0$, we say that $\overline{x}$ is {\it $h$-coarse} if
$k_G(x_{j-1}, x_j)\geq h$ for all $1\leq j\leq n$. Let $\Phi_k(\gamma,h)$
denote the family of all $h$-coarse sequences of $\gamma$. Set
$$s_k(\overline{x})=\sum^{n}_{j=1}k_G(x_{j-1}, x_j)$$ and
$$\ell_{k_G}(\gamma, h)=\sup \{s_k(\overline{x}): \overline{x}\in \Phi_k(\gamma,h)\}$$
with the agreement that $\ell_{k_G}(\gamma, h)=0$ if
$\Phi_k(\gamma,h)=\emptyset$. Then the number $\ell_{k_G}(\gamma, h)$ is the
{\it $h$-coarse quasihyperbolic length} of $\gamma$. \edefe
If $X$ is $c$-quasiconvex, then $\ell_{k_G}(\gamma, 0)=\ell_{k_G}(\gamma)$ (see, e.g., \cite[Proposition A.7 and Remark A.13]{BHK} and \cite[Lemma 2.5]{HWZ}).
\bdefe \label{def1.5} Let $G$ be a proper domain in a rectifiably connected metric space $X$. An arc $\gamma\subset G$
is {\it $(\nu, h)$-solid} with $\nu\geq 1$ and $h\geq 0$ if
$$\ell_{k_G}(\gamma[x,y], h)\leq \nu\;k_G(x,y)$$ for all $x$, $y\in \gamma$.
An arc $\gamma\subset G$ with endpoints $x$ and $y$ is said to be $\varepsilon$-short ($\varepsilon\geq 0$) if $$\ell_{k_G}(\gamma)\leq k_G(x,y)+\varepsilon.$$
Obviously, by the definition of $k_G$, we know that for every $\varepsilon>0$, there exists an arc $\gamma \subset G$ such that $\gamma$ is $\varepsilon$-short, and it is easy to see that every subarc of an $\varepsilon$-short arc is also $\varepsilon$-short.\edefe
\br
For any pair of points $x$ and $y$ in a proper domain $G$ of a Banach space $E$, if the dimension of $E$ is finite, then there exists a quasihyperbolic geodesic in $G$ connecting $x$ and $y$ (see \cite[Lemma 1]{Geo}). But if the dimension of $E$ is infinite, this property is no longer valid (see, e.g., \cite[Example 2.9]{Vai6-0}). In order to overcome this shortcoming in Banach spaces, V\"ais\"al\"a proved the existence of neargeodesics or quasigeodesics (see \cite{Vai6}), and every quasihyperbolic geodesic is a quasigeodesic. See also \cite{RT}. In metric spaces, we do not know whether this existence property holds. However, it plays a very important role in the related discussions.
In order to overcome this disadvantage, in this paper we will use ``short arcs'' as a substitute for ``quasigeodesics''. The class of short arcs was introduced when V\"ais\"al\"a studied properties of Gromov hyperbolic spaces \cite{Vai9} (see also \cite{BH,Herron}), and, as we shall see, the existence of such arcs is obvious in metric spaces.
\er
By a slight modification of the method used in the proof of \cite[Lemma 6.21]{Vai6}, we get the following result.
\begin{lem}\label{ll-14} Suppose that $X$ is a $c$-quasiconvex metric space and that $G\varsubsetneq X$ is a domain, and that $\gamma$ is a $(\nu,h)$-solid arc in $G$ with endpoints $x$, $y$ such that $\min\{\delta_G(x),\delta_G(y)\}=r\geq 3c|x-y|$. Then there is a constant $\mu_1=\mu_1(c,\nu)$ such that $$\diam(\gamma)\leq \max\{\mu_1|x-y|, 2r(e^h-1)\},$$ where ``$\diam$" means ``{\rm diameter}".
\end{lem}
\bpf Without loss of generality, we assume that $\delta_G(y)\geq \delta_G(x)=r$. Denoting $t=|x-y|$ and applying Lemma \Ref{ll-11}, we get $$k_G(x,y)\leq 3ct/r.$$
Let $u\in \gamma$. To prove this lemma, it suffices to show that there exists a constant $\mu_1=\mu_1(c,\nu)$ such that
\be\label{neq-eq-1}|u-x|\leq \max\big\{\frac{\mu_1}{2}|x-y|,r(e^h-1)\big\}.\ee
To this end, we consider two cases. The first case is: $k_G(u,x)\leq h$. Under this assumption, we see from \eqref{base-eq-2} that
\be\label{dw-2}|u-x|\leq (e^{k_G(u,x)}-1)\delta_G(x)\leq r(e^h-1).\ee
For the remaining case: $k_G(u,x)> h$, we choose a sequence of successive points of $\gamma$: $x=x_0$, $\ldots$, $x_{n}=u$ such that $$ k_G(x_{j-1},x_j)=h \;\;\; {\rm for}\;\;\; j\in\{1,\ldots, n-1\}$$ and $$0< k_G(x_{n-1},x_n)\leq h.$$ Then $n\geq 2$ and
$$(n-1)h\leq \sum_{j=1}^{n-1}k_G(x_{j-1},x_j)\leq \ell_{k_G}(\gamma,h)\leq \nu k_G(x,y)\leq 3c\nu t/r,$$
which shows that $$k_G(x,u)\leq \sum_{j=1}^{n}k_G(x_{j-1},x_j)\leq nh\leq 6c\nu t/r.$$
Let $s=\frac{t}{r}$. Then $s\leq \frac{1}{3c}$ and
$$\frac{|u-x|}{t}\leq \frac{e^{6c\nu s}-1}{s}.$$
Obviously,
the function $g(s)=\frac{1}{s}\big(e^{6c\nu s}-1\big)$ is increasing in
$(0,\frac{1}{3c}]$ and $\lim_{s\to 0}{\frac{e^{6c\nu s}-1}{s}}=6c\nu $. Letting $$\mu_1=6c(e^{2\nu}-1)$$ gives
\be\label{dw-1} |u-x|\leq \frac{1}{2}\mu_1t.\ee
It follows from \eqref{dw-2} and \eqref{dw-1} that \eqref{neq-eq-1} holds, and hence the proof of the lemma is complete.
\epf
\begin{lem} \label{mon-4} Suppose that $X$ is a $c$-quasiconvex metric space and $G\subsetneq X$ is a domain. Suppose, further, that for $x$,
$y\in G$, \begin{enumerate}
\item
$\gamma$ is an $\varepsilon$-short arc in $G$ connecting $x$ and $y$ with $0<\varepsilon\leq \frac{1}{2}k_{G}(x,y)$, and
\item
$|x-y|\leq \frac{1}{3c} \max\{\delta_{G}(x), \delta_{G}(y)\}$. \end{enumerate}
Then $$\ell(\gamma)\leq \frac{9}{2}ce^{\frac{3}{2}}|x-y|.$$\end{lem}
\bpf Without loss of generality, we assume that $\max\{\delta_{G}(x), \delta_{G}(y)\}=\delta_{G}(x)$. Since $\delta_{G}(x)\geq \min\{\delta_{G}(x), \delta_{G}(y)\}$, it follows from \eqref{base-eq-1} and Lemma \Ref{ll-11} that
$$\log\left(1+\frac{\ell(\gamma)}{\delta_{G}(x)}\right)\leq \ell_{k_{G}}(\gamma)\leq k_{G}(x,y)+\varepsilon \leq \frac{3}{2}k_{G}(x,y)\leq \frac{9c}{2}\frac{|x-y|}{\delta_{G}(x)}\leq \frac{3}{2}.
$$ Hence,
$$\frac{\ell(\gamma)}{\delta_{G}(x)}\leq e^{\frac{3}{2}}-1.$$
Let $f(t)=t-e^{\frac{3}{2}}\log(1+t)$. Then $f(t)$ is decreasing for $t\in [0,e^{\frac{3}{2}}-1]$. Hence, we have $f(t)\leq f(0)=0$ which leads to
$$\frac{\ell(\gamma)}{\delta_{G}(x)}\leq e^{\frac{3}{2}}\log\left(1+\frac{\ell(\gamma)}{\delta_{G}(x)}\right)\leq \frac{9}{2}ce^{\frac{3}{2}}\frac{|x-y|}{\delta_{G}(x)}.$$ Therefore, $$\ell(\gamma)\leq \frac{9}{2}ce^{\frac{3}{2}}|x-y|,$$ as required.\epf
\subsection{Uniform domains and John domains}
In 1961, John \cite{John} introduced the twisted interior cone condition in connection with his work on elasticity, and these domains were first called John domains by Martio and Sarvas in \cite{MS}. In the same paper, Martio and Sarvas also introduced another class of domains, namely the uniform domains. The main motivation for studying these domains was in showing global injectivity properties for locally injective mappings. Since then, many other characterizations of uniform and John domains have been established, see
\cite{FW, Geo, Martio-80, Vai6, Vai4, Vai8}, and the importance of these classes of domains in the function theory is well
documented (see e.g. \cite{FW, GH, Vai2}). Moreover, John and uniform domains in
$\mathbb{R}^n$ enjoy numerous geometric and function theoretic
properties that are useful in other many fields of modern mathematical analysis as well (see e.g.
\cite{Jo80, Yli, Vai2}, and references therein).
We recall the definition of uniform domains following closely the notation
and terminology of \cite{TV, Vai2, Vai, Vai6-0, Vai6} and \cite{Martio-80}.
\bdefe \label{def1.3} A domain $G$ in $X$ is called $b$-{\it
uniform} provided there exists a constant $b$
with the property that each pair of points $x$, $y$ in $G$ can
be joined by a rectifiable arc $\gamma$ in $G$ satisfying
\bee
\item $\min\{\ell(\gamma[x,z]),\ell(\gamma[z,y])\}\leq b\,\delta_{G}(z)$ for all $z\in \gamma$, and
\item $\ell(\gamma)\leq b\,|x-y|$,
\eee
\noindent where $\ell(\gamma)$ denotes the length of $\gamma$,
$\gamma[x,z]$ the part of $\gamma$ between $x$ and $z$.
At this time, $\gamma$ is said to be a {\it double $b$-cone arc}.
If the condition $(1)$ is satisfied, not necessarily $(2)$, then $G$ is said to be a {\it $b$-John domain}. At this time, the arc $\gamma$ is called a {\it $b$-cone arc}.
\edefe
Let us recall the following useful property of uniform domains.
\begin{Lem}\label{BHK-lem}$($\cite[Lemma 3.12]{BHK}$)$ Suppose $G\subsetneq X$ is a $b$-uniform domain in a rectifiably connected metric space $X$. Then for any $x, y\in G$, we have $$k_G(x,y)\leq 4b^2\log\left(1+\frac{|x-y|}{\min\{\delta_G(x),\delta_G(y)\}}\right).$$
\end{Lem}
We note that Gehring and Osgood \cite{Geo} characterized uniform domains in terms
of an upper bound for the quasihyperbolic metric in the case of domains in $ {\mathbb R}^n \,$ as follows: a
domain $G$ is {\em uniform} if and only if there exists a constant $C\ge
1$ such that
$$
k_G(x,y)\le C \log\left(1+\frac{|x-y|}{\min\{\delta_G(x),\delta_G(y)\}}\right)
$$
for all $x,y\in G$. As a matter of fact, the above inequality
appeared in \cite{Geo} in a form with an additive constant on the
right hand side: it was shown by Vuorinen \cite[2.50]{Vu2} that the
additive constant can be chosen to be $0$.
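As a quick check of this characterization, in the half-plane example above we have, for $u=(0,s)$ and $v=(0,t)$ with $0<s<t$,
$$k_G(u,v)=\log\frac{t}{s}=\log\Big(1+\frac{|u-v|}{\min\{\delta_G(u),\delta_G(v)\}}\Big),$$
so the above inequality holds with $C=1$ for points on a vertical ray.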
The following are the analogues of Lemmas $6.10$ and $6.11$ in \cite{Vai6} in the setting of metric spaces. The proofs are similar.
\begin{Lem}\label{ll-12} Suppose that $G\subsetneq X$ is a $b$-uniform domain in a rectifiably connected metric space $X$, and that $\gamma$ is an arc in $\{x\in G: \delta_G(x)\leq r\}$. If $\gamma$ is $(\nu,h)$-solid, then $$\diam(\gamma)\leq M_1 r,$$ where $M_1=M_1(b, \nu,h)$.
\end{Lem}
\begin{Lem}\label{ll-13} For all $b\geq 1$, $\nu\geq 1$ and $h\geq 0$, there are constants $0<q_0=q_0(b,\nu,h)<1$ and $M_2=M_2(b,\nu,h)\geq 1$
with the following property:
Suppose that $G$ is a $b$-uniform domain and $\gamma$ is a $(\nu,h)$-solid arc starting at $x_0\in G$. If $\gamma$ contains a point $u$ with $\delta_G(u)\leq q_0\delta_G(x_0)$, then $$\diam(\gamma_{u})\leq M_2\delta_G(u),$$ where $\gamma_{u}=\gamma\setminus \gamma[x_0,u)$.
\end{Lem}
Now, we are ready to prove an analogue of Lemma \ref{ll-14} for uniform domains.
\begin{lem}\label{ll-15} Suppose that $X$ is a $c$-quasiconvex metric space, that $G\varsubsetneq X$ is a $b$-uniform domain, and that $\gamma$ is a $(\nu,h)$-solid arc in $G$ with endpoints $x$, $y$. Let $x_0\in\gamma$ be such that $\delta_G(x_0)=\max_{p\in \gamma}\delta_G(p)$. Then there exist constants
$\mu_2=\mu_2( b, \nu, h)\geq 1$ and $\mu_3=\mu_3( b, c, \nu, h)\geq 1$ such that
\begin{enumerate} \item\label{ma-0-1}
$\diam(\gamma[x,u])\leq \mu_2 \delta_G(u)$ for $u\in \gamma[x,x_0],$ and $\diam(\gamma[y,v])\leq \mu_2 \delta_G(v)$ for $v\in \gamma[y, x_0]$;
\item\label{ma-0-2} $\diam(\gamma)\leq \max\big\{\mu_3 |x-y|, 2(e^h-1)\min\{\delta_G(x),\delta_G(y)\}\big\}.$
\end{enumerate}
\end{lem}
\bpf We first prove \eqref{ma-0-1}. Obviously, it suffices to prove the first inequality in \eqref{ma-0-1} because the proof for the second one is similar. Let $$\mu_2=\max\Big\{\frac{M_1}{q_0}, M_2\Big\},$$ where $M_1=M_1(b,\nu,h)$ is the constant from Lemma
\Ref{ll-12}, $q_0=q_0(b,\nu,h)$ and $M_2=M_2(b,\nu,h)$ are the constants from Lemma \Ref{ll-13}.
For $u\in \gamma[x,x_0]$, we divide the proof into two cases. If $\delta_G(u)\leq q_0\delta_G(x_0)$, then Lemma \Ref{ll-13} leads to
\be\label{ma-1} \diam(\gamma[x,u])\leq M_2\delta_G(u).\ee If $\delta_G(u)> q_0\delta_G(x_0)$, then applying Lemma \Ref{ll-12} with the substitution
$r$ replaced by $\delta_G(x_0)$ and $\gamma$ replaced by $\gamma[x,u]$, we easily get
\be\label{ma-2}\diam(\gamma[x,u])\leq M_1\delta_G(x_0)< \frac{M_1}{q_{0}}\delta_G(u).\ee
It follows from \eqref{ma-1} and \eqref{ma-2} that the first assertion in \eqref{ma-0-1} holds, and thus the proof of \eqref{ma-0-1} is complete.
To prove \eqref{ma-0-2}, without loss of generality, we assume that $$\min\{\delta_G(x),\; \delta_G(y)\}= \delta_G(x)\;\;\; {\rm and}\;\;\; \diam(\gamma)> |x-y|.$$
Let
$$\mu_3=\frac{3}{4}\big[1+2(1+6c)(e^{h+4b^2\nu\log(1+4\mu_2)}-1)\big].$$
If $\delta_G(x)\geq 3c|x-y|$, then $(2)$ follows from Lemma \ref{ll-14} since the constant $\mu_1$ in Lemma \ref{ll-14} satisfies $\mu_1< \mu_3$.
Hence, in the following, we assume that \beq\label{add-eq-1}\delta_G(x)<3c|x-y|.\eeq Let $x_1\in \gamma$ (resp. $y_1\in \gamma$) be the first point in $\gamma$ from $x$ to $y$ (resp. from $y$ to $x$) such that (see Figure $1$)
\beq\label{eq-new-1}\diam(\gamma[x,x_1])=\frac{1}{2}|x-y|\;\,(\mbox{resp.}\; \diam(\gamma[y,y_1])=\frac{1}{2}|x-y|).\eeq
\noindent Then we have $$\diam(\gamma[y,x_1])\geq |y-x_1|\geq |y-x|-|x-x_1|\geq\frac{1}{2}|y-x|=\diam(\gamma[x,x_1]),$$ and similarly, we get $$\diam(\gamma[x,y_1])>\diam(\gamma[y,y_1]).$$
Thus, it follows from \eqref{ma-0-1} that $$\frac{1}{2}|x-y|=\diam(\gamma[x,x_1])=\diam(\gamma[y,y_1])\leq \mu_2
\min\{\delta_G(x_1), \delta_G(y_1)\}.$$ Also,
$$|x_1-y_1|\leq |x_1-x|+|x-y|+|y-y_1|\leq 2|x-y|.$$ Then Lemma \Ref{BHK-lem} implies
$$k_G(x_1,y_1)\leq 4b^2\log\left(1+\frac{|x_1-y_1|}{\min\{\delta_G(x_1),\delta_G(y_1)\}}\right)\leq 4b^2\log(1+4\mu_2).$$
Since $\gamma$ is a $(\nu,h)$-solid arc, for any $u_1,$ $u_2\in \gamma[x_1,y_1]$, we have
\begin{eqnarray*}k_G(u_1,u_2)&\leq& \max\{h,\; \ell_{k_G}(\gamma[x_1,y_1],h)\}\leq h+\nu k_G(x_1,y_1)\\&\leq& h+4b^2\nu\log(1+4\mu_2),\end{eqnarray*}
and so, for all $z\in \gamma[x_1,y_1]$, we get from \eqref{base-eq-2}, \eqref{add-eq-1} and \eqref{eq-new-1} that
\begin{eqnarray}\label{eq-new-2}|z-x_1|&\leq& (e^{k_G(z,x_1)}-1)\delta_G(x_1)\\\nonumber&\leq& (e^{h+4b^2\nu\log(1+4\mu_2)}-1)(\delta_G(x)+|x-x_1|)\\\nonumber&\leq& \frac{1}{2}(1+6c)(e^{h+4b^2\nu\log(1+4\mu_2)}-1)|x-y|.\end{eqnarray}
Let $w_1,$ $w_2\in \gamma$ be points such that \be \label{sat-1}|w_1-w_2|\geq \frac{2}{3}\diam(\gamma).\ee
Then we get
\bcl\label{sat-2}
$|w_1-w_2|\leq \frac{2}{3}\mu_3|x-y|$.
\ecl
Since \eqref{eq-new-1} guarantees that neither $\gamma[x,x_1]$ nor $\gamma[y, y_1]$ contains the set $\{w_1,w_2\}$, we see that,
to prove this claim, according to the positions of $w_1$ and $w_2$ in $\gamma$, we need to consider the following four possibilities.\begin{enumerate}
\item
$w_1\in \gamma[x,x_1]$ and $w_2\in \gamma[y,y_1]$.
Obviously, by \eqref{eq-new-1}, we have $$|w_1-w_2|\leq |w_1-x|+|x-y|+|y-w_2|\leq 2|x-y|.$$
\item
$w_1\in \gamma[x,x_1]$ and $w_2\in \gamma[x_1,y_1]$. Then \eqref{eq-new-1} and \eqref{eq-new-2} show that $$|w_1-w_2|\leq |w_1-x_1|+|x_1-w_2|\leq \frac{1}{2}\big[1+(1+6c)(e^{h+4b^2\nu\log(1+4\mu_2)}-1)\big]|x-y|.$$
\item
$w_1,$ $w_2\in \gamma[x_1,y_1]$. Then \eqref{eq-new-2} implies $$|w_1-w_2|\leq |w_1-x_1|+|x_1-w_2|\leq (1+6c)(e^{h+4b^2\nu\log(1+4\mu_2)}-1)|x-y|.$$
\item
$w_1\in \gamma[x_1,y_1]$ and $w_2\in \gamma[y_1,y]$. Again, we infer from \eqref{eq-new-1} and \eqref{eq-new-2} that $$ |w_1-w_2|\leq |w_1-x_1|+|x_1-y_1|+|y_1-w_2|\leq \frac{1}{2}\big[1+2(1+6c)(e^{h+4b^2\nu\log(1+4\mu_2)}-1)\big]|x-y|.$$
\end{enumerate}
The claim is proved.\medskip
Now, we are ready to finish the proof. It follows from \eqref{sat-1} and Claim \ref{sat-2} that
$$\diam(\gamma)\leq \frac{3}{2}|w_1-w_2|\leq \mu_3|x-y|,$$
which implies that \eqref{ma-0-2} also holds in this case. Hence, the proof of the lemma is complete.
\epf
\subsection{Free quasiconformal mappings and coarsely quasihyperbolic mappings}
The definition of free quasiconformality is as follows.
\bdefe\label{japan-33} Let $G\varsubsetneq X$ and $G'\varsubsetneq Y$ be two domains (open and connected), and let $\varphi:[0,\infty)\to [0,\infty)$ be a homeomorphism with $\varphi(t)\geq t$. We say
that a homeomorphism $f: G\to G'$ is \begin{enumerate}
\item \label{sunday-1}
{\it $\varphi$-semisolid } if $$k_{G'}(f(x),f(y))\leq \varphi(k_G(x,y))$$
for all $x$, $y\in G$;
\item \label{sunday-2} $\varphi$-{\it solid} if both $f$ and ${\it f^{-1}}$ are $\varphi$-semisolid;
\item {\it freely
$\varphi$-quasiconformal} ($\varphi$-FQC in brief) or {\it fully $\varphi$-solid}
if $f$ is
$\varphi$-solid in every subdomain of $G$,\end{enumerate}
where $k_G(x,y)$ denotes the quasihyperbolic distance of $x$ and $y$ in $G$, as defined earlier in this section.
\edefe
\bdefe Let $G\varsubsetneq X$ and $G'\varsubsetneq Y$ be two domains. We say
that a homeomorphism $f: G\to G'$ is \begin{enumerate}
\item
{\it $C$-coarsely $M$-quasihyperbolic}, or briefly
$(M,C)$-CQH, if there are constants $M\geq 1$ and $C\geq 0$ such that for all $x$, $y\in G$,
$$\frac{k_G(x,y)-C}{M}\leq k_{G'}(f(x),f(y))\leq M\;k_G(x,y)+C.$$
\item
{\it fully $C$-coarsely $M$-quasihyperbolic} if there are constants $M\geq 1$ and $C\geq 0$ such that
$f$ is $C$-coarsely $M$-quasihyperbolic in every subdomain of $G$.
\end{enumerate}\edefe
Under coarsely quasihyperbolic mappings, we have the following useful relationship between short arcs and solid arcs.
\begin{lem}\label{ll-001} Suppose that $X$ and $Y$ are rectifiably connected metric spaces, and that $G\varsubsetneq X$ and $G'\varsubsetneq Y$ are domains. If $f:\;G\to G'$ is $(M,C)$-CQH, and $\gamma$ is an $\varepsilon$-short arc in $G$ with $0<\varepsilon\leq 1$, then there are constants $\nu=\nu(C,M)$ and $h=h(C, M)$ such that
the image $\gamma'$ of $\gamma$ under $f$ is $(\nu,h)$-solid in $G'$.
\end{lem}
\bpf
Let $$h=(2M+1)C+2M\;\; \mbox{and}\;\; \nu=\frac{4(C+1)M(M+1)}{2C+1}.$$
Obviously, we only need to verify that for $x$, $y\in \gamma$,
\be\label{new-eq-3}\ell_{k_{G'}}(\gamma'[x',y'],h)\leq\nu k_{G'}(x',y').\ee We prove this by considering two cases.
The first case is: $k_G(x,y)<2C+1$. Then for $z_1$, $z_2\in\gamma[x, y]$, we have
$$k_{G'}(z'_1,z'_2)\leq Mk_G(z_1,z_2)+C\leq M(k_G(x,y)+\varepsilon)+C<(2M+1)C+2M=h,$$
and so
\be\label{ma-3}\ell_{k_{G'}}(\gamma'[x',y'],h)=0.\ee
Now, we consider the other case: $k_G(x,y)\geq 2C+1$. Then $$k_{G'}(x',y')\geq \frac{1}{M}(k_G(x,y)-C)> \frac{1}{2M}k_G(x,y).$$ With the aid of \cite[Theorems 4.3 and 4.9]{Vai6}, we have
\beq\label{ma-4}
\ell_{k_{G'}}(\gamma'[x',y'],h) &\leq&
\ell_{k_{G'}}(\gamma'[x',y'],(M+1)C) \leq
(M+1)\ell_{k_G}(\gamma[x,y])\\ \nonumber &\leq&(M+1)(k_G(x,y)+\varepsilon)\\ \nonumber &\leq& \frac{2(C+1)(M+1)}{2C+1}k_{G}(x,y) \\ \nonumber &\leq& \frac{4(C+1)M(M+1)}{2C+1}k_{G'}(x',y').\eeq
It follows from \eqref{ma-3} and \eqref{ma-4} that \eqref{new-eq-3} holds.\epf
The following results are useful in the proof of Theorem \ref{thm1.1}.
\begin{lem}\label{ll-000} Suppose that $X$ and $Y$ are both $c$-quasiconvex and complete metric spaces, and that $G\varsubsetneq X$ and $G'\varsubsetneq Y$ are domains. If both $f:$ $G\to G'$ and $f^{-1}:$ $G'\to G$ are weakly $H$-quasisymmetric, then
\begin{enumerate}
\item \label{sat-3}
$f$ is $\varphi$-FQC, where $\varphi=\varphi_{c,H}$ which means that the function $\varphi$ depends only on $c$ and $H$;
\item \label{sat-4}
$f$ is fully $(M,C)$-CQH, where $M=M(c,H)\geq 1$ and $C=C(c,H)\geq 0$ are constants.
\end{enumerate}
\end{lem}
\bpf By \cite[Theorem 1.6]{HL}, we know that for every subdomain $D\subset G$, both $f:$ $D\to D'$ and $f^{-1}:$ $D'\to D$ are $\varphi$-semisolid with $\varphi=\varphi_{c,H}$, and so, $f$ is $\varphi$-FQC. Hence \eqref{sat-3} holds. On the other hand, \cite[Theorem 1]{HWZ} implies that \eqref{sat-3} and \eqref{sat-4} are equivalent, and thus, \eqref{sat-4} also holds.
\epf
\begin{Lem}\label{lem-ll-0}$($\cite[Lemma 6.5]{Vai8}$)$ Suppose that $X$ is $c$-quasiconvex, and that $f:$ $X\to Y$ is weakly $H$-quasisymmetric. If $x,$ $y,$ $z$ are distinct points in $X$ with $|y-x|\leq t|z-x|$, then $$|y'-x'|\leq \theta(t)|z'-x'|,$$ where the function $\theta(t)=\theta_{c, H}(t)$ is increasing in $t$.
\end{Lem}
\begin{Lem}\label{lem-ll-1}$($\cite[Lemma 5.4]{Vai6-0}$)$ Suppose that $f:$ $X\to Y$ is weakly $H$-quasisymmetric and that $f(X)$ is $c$-quasiconvex. If $x,$ $y,$ $z$ are distinct points in $X$ with $|y-x|= t|z-x|$ and if $0<t\leq 1$, then $$|y'-x'|\leq \theta(t)|z'-x'|,$$ where $\theta:[0,1]\to[0,\infty)$ is an embedding with $\theta(0)=0$ depending only on $H$ and $c$.
\end{Lem}
The following result easily follows from Lemma \Ref{lem-ll-1}.
\begin{lem}\label{lem-ll-2} Suppose that $f:$ $X\to Y$ is weakly $H$-quasisymmetric and that $f(X)$ is $c$-quasiconvex. Then $f^{-1}:$ $f(X)\to X$ is weakly $H_1$-quasisymmetric with $H_1$ depending only on $H$ and $c$.
\end{lem}
\bpf Let $x,$ $y,$ $z$ be distinct points in $X$ with $|y-x|= t|z-x|$ and $0<t\leq 1$. Then Lemma \Ref{lem-ll-1} gives $$|y'-x'|\leq \theta(t)|z'-x'|,$$ and since $\theta(0)=0$, we may fix a constant $c_1>2$ with $\theta(\frac{1}{c_1})<1$.
We show that $f^{-1}:$ $f(X)\to X$ is weakly $H_1$-quasisymmetric with $H_1=c_1$. Let $a,b,x$ be distinct points in $X$ with $|a'-x'|\leq |b'-x'|$. If $|a-x|\leq c_1|b-x|$, there is nothing to prove. Hence, we assume that $|a-x|> c_1|b-x|$.
Then Lemma \Ref{lem-ll-1} yields
$$|a'-x'|\leq |b'-x'|\leq\theta(1/c_1)|a'-x'|<|a'-x'|.$$
This contradiction completes the proof.
\epf
\section{The proof of Theorem \ref{thm1.1}}\label{sec-4}
In this section, we always assume that $X$ and $Y$ are $c$-quasiconvex and complete metric spaces, and that $G\varsubsetneq X$ and $G'\varsubsetneq Y$ are domains. Furthermore, we suppose that $f:$ $G\to G'$ is weakly $H$-quasisymmetric, that $G'$ is $c_1$-quasiconvex and that $D\subset G$ is $b$-uniform.
Under these assumptions, it follows from Lemma \ref{lem-ll-2} and Lemma \ref{ll-000} that $f$ is fully $(M, C)$-CQH with $M=M(c, c_1, H)\geq 1$ and $C=C(c, c_1, H)\geq 0$.
We are going to show the uniformity of $D'=f(D)$. For this, we let $x'$, $y'\in D'=f(D)\subset G'$, and $\gamma'$ be an $\varepsilon$-short arc in $D'$ joining $x'$ and $y'$ with
$$0<\varepsilon<\min\big\{1,\frac{1}{2}k_{D'}(x',y')\big\}.$$
Then by Lemma \ref{ll-001}, the preimage $\gamma$ of $\gamma'$ is a $(\nu,h)$-solid arc in $D$ with $\nu=\nu(c, H)$ and $h=h(c, H)$. Let $w_0\in\gamma$ be such that (see Figure $2$)
\be\label{wes-1}
\delta_D(w_0)=\max_{p\in \gamma}\delta_D(p).
\ee
\noindent Then by Lemma \ref{ll-15}, there is a constant $\mu=\mu(b,\nu,h)$ such that for each $u\in\gamma[x,w_0]$ and for all $z\in\gamma[u,w_0]$, \be\label{new-eq-4}|u-z|\leq \diam (\gamma[u, z])
\leq\mu\delta_D(z),\ee
and for each $v\in\gamma[y,w_0]$ and for all $z\in\gamma[v,w_0]$, \be\nonumber |v-z|\leq \diam (\gamma[v, z])\leq\mu\delta_D(z).\ee
In the following, we show that $\gamma'$ is a double cone arc in $D'$. Precisely, we shall prove
that there exist constants $A\geq 1$ and $B\geq 1$ such that for every $z'\in\gamma'[x',y']$,
\be\label{main-eq-1}\min\{\ell(\gamma'[x',z']),\ell(\gamma'[z',y'])\}\leq A\delta_{D'}(z')\ee
and
\be\label{main-eq-2}\ell(\gamma')\leq B|x'-y'|.\ee
The verification of \eqref{main-eq-1} and \eqref{main-eq-2} is given in the following two subsections.
\subsection{The proof of \eqref{main-eq-1}}
Let
$$A=2e^{8b^2A_1(C+1)M}\;\; {\rm and}\;\; A_1=2e^{M+C}(1+\mu)\theta''\Big(6c\theta'(2\mu)e^{4b^2M+C}\Big),$$ where the functions $\theta'=\theta'_{b,H}$ and $\theta''=\theta''_{c_1,H}$ are from Lemma \Ref{lem-ll-0}. Obviously, we only need to get the following estimate:
for all $z'\in\gamma'[x',w'_0]$ (resp. $z'\in\gamma'[y',w'_0]$),
\be\label{main-eq-3}
\ell(\gamma'[x',z'])\leq A\delta_{D'}(z')\; \; ({\rm resp.}\; \ell(\gamma'[y',z'])\leq A\delta_{D'}(z')).
\ee
It suffices to prove the case $z'\in\gamma'[x',w'_0]$ since the proof of the case $z'\in\gamma'[y',w'_0]$ is similar.
Suppose on the contrary that there exists some point $x'_0\in \gamma'[x',w'_0]$ such that
\be\label{main-eq-4}
\ell(\gamma'[x',x_0'])>A\delta_{D'}(x_0').
\ee Then we choose $x'_1\in\gamma'[x',w'_0]$ to be the first point from $x'$ to $w_0'$ such that (see Figure $3$)
\be\label{sat-5}
\ell(\gamma'[x',x'_1])=A\delta_{D'}(x'_1).
\ee
Let $x_2\in D$ be such that (see Figure $2$)
$$|x_1-x_2|=\frac{1}{2}\delta_{D}(x_1).$$
\noindent Then we have
\bcl\label{eq-lwz4}
$|x_1'-x_2'|< e^{4b^2M+C}\delta_{D'}(x_1').$
\ecl
Obviously,
$$\delta_{D}(x_2)\geq \delta_{D}(x_1)-|x_1-x_2|=|x_1-x_2|,$$
and so, \eqref{base-eq-2} and Lemma \Ref{BHK-lem} imply
\begin{eqnarray*}\log\left(1+\frac{|x_1'-x_2'|}{\delta_{D'}(x_1')}\right)&\leq& k_{D'}(x_1',x_2')\leq Mk_D(x_1,x_2)+C\\&\leq& 4b^2M\log\left(1+\frac{|x_1-x_2|}{\min\{\delta_D(x_1),\delta_D(x_2)\}}\right)+C\\&<& 4b^2M+C,
\end{eqnarray*}
whence
$$
|x_1'-x_2'|< e^{4b^2M+C}\delta_{D'}(x_1'),
$$ which shows that the claim holds.\medskip
Let $x'_3\in\gamma'[x',x'_1]$ be such that
\be\label{sun-2}\ell(\gamma'[x',x'_3])=\frac{1}{2}\ell(\gamma'[x',x'_1]),\ee
and then, we get an estimate on $|x_1'-x_2'|$ in terms of $d_{D'}(x_3')$ as stated in the following claim.
\bcl\label{eq-lwz1}
$|x_1'-x_2'|< 2e^{4b^2M+C}\delta_{D'}(x_3').$
\ecl
It follows from \eqref{sat-5} and \eqref{sun-2} that
$$
\delta_{D'}(x_1')< 2 \delta_{D'}(x_3'),
$$ since the choice of $x_1'$ implies $\ell(\gamma'[x',x'_3])<A\delta_{D'}(x_3')$. Hence, Claim \ref{eq-lwz4} leads to
$$
|x_1'-x_2'|< 2e^{4b^2M+C}\delta_{D'}(x_3'),
$$ as required.\medskip
On the basis of Claim \ref{eq-lwz1}, we have
\bcl\label{sun-1}
$|x'_1-x'_3|\leq 2\theta'(2\mu)e^{4b^2M+C}\delta_{D'}(x'_3).$
\ecl
In order to apply Lemma \Ref{lem-ll-0} to prove this claim, we need some preparation. It follows from \eqref{base-eq-1}, \eqref{sat-5} and \eqref{sun-2} that
\begin{eqnarray*} k_{D'}(x'_1,x'_3) &\geq& \ell_{k_{D'}}(\gamma'[x'_1,x'_3])-\varepsilon
\geq \log\Big(1+\frac{\ell(\gamma'[x'_1,x'_3])}{\delta_{D'}(x'_1)}\Big)-1
\\ \nonumber&=& \log\big(1+\frac{A}{2}\big)-1.
\end{eqnarray*} Hence, by Lemma \Ref{BHK-lem}, we have
\begin{eqnarray*}\log\left(1+\frac{|x_1-x_3|}{\min\{\delta_D(x_1),\delta_D(x_3)\}}\right)&\geq&\frac{1}{4b^2}k_D(x_1,x_3)\geq \frac{1}{4b^2M}(k_{D'}(x'_1,x'_3)-C)\\&\geq&
\frac{1}{4b^2M}\Big(\log\big(1+\frac{A}{2}\big)-1-C\Big)\\ \nonumber
&>&\log(1+A_1),\end{eqnarray*}
and so \be\label{z-006}|x_1-x_3|>A_1\min\{\delta_D(x_1),\delta_D(x_3)\}>\frac{A_1}{1+\mu}\delta_D(x_3),\ee then \eqref{new-eq-4} implies
$$\delta_D(x_3)\leq \delta_D(x_1)+|x_1-x_3|\leq (1+\mu)\delta_D(x_1).$$
Again, by \eqref{new-eq-4}, we know
$$|x_1-x_3|\leq\mu\delta_D(x_1)= 2\mu|x_1-x_2|.$$
Now, we are ready to apply Lemma \Ref{lem-ll-0} to the points $x_1$, $x_2$ and $x_3$ in $D$. Since $f$ is weakly $H$-quasisymmetric and $D$ is $b$-uniform, by considering the restriction $f|_D$ of $f$ to $D$, we know from Lemma \Ref{lem-ll-0} that there is an increasing function $\theta'=\theta'_{b,H}$ such that
$$|x'_1-x'_3|\leq \theta'(2\mu)|x'_1-x'_2|,$$ and thus, Claim
\ref{eq-lwz1} assures that
$$
|x'_1-x'_3|\leq 2\theta'(2\mu)e^{4b^2M+C}\delta_{D'}(x'_3),
$$
which completes the proof of Claim \ref{sun-1}.
\medskip
Let us proceed with the proof. To get a contradiction to the contrary assumption \eqref{main-eq-4}, we choose $x'_4\in D'$ such that
\be\label{sun-3} |x'_3-x'_4|=\frac{1}{3c}\delta_{D'}(x'_3).\ee
Then Lemma \Ref{ll-11} implies that \begin{eqnarray*}\log\left(1+\frac{|x_3-x_4|}{\delta_{D}(x_3)}\right)&\leq& k_{D}(x_3,x_4)\leq Mk_{D'}(x_3',x_4')+C\\ \nonumber&\leq& 3cM\frac{|x_3'-x_4'|}{\delta_{D'}(x_3')}+C\leq M+C,\end{eqnarray*} which yields that \be\label{sun-3-1}|x_3-x_4|< e^{M+C}\delta_{D}(x_3).\ee
On the other hand, Claim \ref{sun-1} and \eqref{sun-3} imply that $$|x_1'-x_3'|\leq 2\theta'(2\mu)e^{4b^2M+C}\delta_{D'}(x_3')= 6c\theta'(2\mu)e^{4b^2M+C}|x_3'-x_4'|.$$
Now, we apply Lemma \Ref{lem-ll-0} to the points $x_1'$, $x_3'$ and $x_4'$ in $G'$.
Since, by Lemma \ref{lem-ll-2}, $f^{-1}:$ $G'\to G$ is weakly $H_1$-quasisymmetric with $H_1=H_1(c_1,H)$, and $G'$ is $c_1$-quasiconvex, we know from Lemma \Ref{lem-ll-0} that there is an increasing function $\theta''=\theta''_{c_1,H}$ such that
$$|x_1-x_3|\leq \theta''\Big(6c\theta'(2\mu)e^{4b^2M+C}\Big)|x_3-x_4|,$$
which, together with \eqref{z-006} and \eqref{sun-3-1}, shows that
\begin{eqnarray*}
|x_1-x_3|&\leq& e^{M+C}\theta''\Big(6c\theta'(2\mu)e^{4b^2M+C}\Big)\delta_D(x_3)\\ \nonumber&\leq& \frac{1+\mu}{A_1}e^{M+C}\theta''\Big(6c\theta'(2\mu)e^{4b^2M+C}\Big)|x_1-x_3|\\ \nonumber&=&
\frac{1}{2}|x_1-x_3|.
\end{eqnarray*}
This obvious contradiction shows that \eqref{main-eq-1} is true.
\qed
\subsection{The proof of \eqref{main-eq-2}}
Let
$$B=12cA^2e^{6b^2M\mu\big(1+\theta''\big(\frac{1+12cA}{3c}\big)\big)},$$
and suppose on the contrary that \be\label{eq-lwz3}\ell(\gamma')> B|x'-y'|.\ee
Since $\frac{9}{2}ce^{\frac{3}{2}}<B$, we see from Lemma \ref{mon-4} that
\be \label{eq-lwz2}|x'-y'|>\frac{1}{3c} \max\{\delta_{D'}(x'), \delta_{D'}(y')\}.\ee
For convenience, in the following, we assume that $$\max\{\delta_{D'}(x'), \delta_{D'}(y')\}=\delta_{D'}(x').$$
First, we choose some special points from $\gamma'$.
By \eqref{eq-lwz3}, we know that there exist $w'_1$ and $w'_2\in \gamma'$ such that $x'$, $w'_1$, $w'_2$ and $y'$ are successive points in $\gamma'$ and
\be\label{eq-11-1}\ell(\gamma'[x',w'_1])=\ell(\gamma'[w'_2,y'])=6cA|x'-y'|.\ee
Then we have
\bcl\label{zzz-002} $|x'-w'_1|\geq \frac{1}{2}\delta_{D'}(w'_1)$ and $|y'-w'_2|\geq \frac{1}{2}\delta_{D'}(w'_2)$.
\ecl
Obviously, it suffices to show the first inequality in the claim. Suppose on the contrary that
$$|x'-w'_1|< \frac{1}{2}\delta_{D'}(w'_1).$$ Then \eqref{main-eq-1} and \eqref{eq-lwz2} lead to
$$\delta_{D'}(x')\geq \delta_{D'}(w'_1)-|x'-w'_1|>\frac{1}{2}\delta_{D'}(w'_1)\geq \frac{1}{2A}\ell(\gamma'[x',w'_1])=3c|x'-y'|>\delta_{D'}(x').$$
This obvious contradiction completes the proof of Claim \ref{zzz-002}.\medskip
By using Claim \ref{zzz-002}, we get a lower bound for $|w_1-w_2|$ in terms of $\min\{\delta_D(w_1),\; \delta_D(w_2)\}$, which is as follows.
\bcl\label{eq-l2}
$|w_1-w_2|> \Big(1+\theta''\Big(\frac{1+12cA}{3c}\Big)\Big)\mu\min\{\delta_D(w_1),\; \delta_D(w_2)\}.$
\ecl
Without loss of generality, we assume that $\min\{\delta_D(w_1),\; \delta_D(w_2)\}=\delta_{D}(w_1)$.
Then by \eqref{eq-11-1} and Claim \ref{zzz-002}, we have
\be\label{eq-l10}\delta_{D'}(w'_1)\leq 2|x'-w'_1|\leq 2\ell(\gamma'[x',w'_1])=12cA|x'-y'|.\ee
Since $\gamma'$ is an $\varepsilon$-short arc and $D$ is $b$-uniform, by Lemma \Ref{BHK-lem}, we have
\begin{eqnarray*}
\log\left(1+\frac{|w_1-w_2|}{\delta_D(w_1)}\right) &\geq& \frac{1}{4b^2}k_D(w_1,w_2) \geq \frac{1}{4Mb^2}k_{D'}(w'_1,w'_2)-\frac{C}{4Mb^2}\\&\geq& \frac{1}{4Mb^2}\ell_{k_{D'}}(\gamma'[w'_1,w'_2])-\frac{\varepsilon+C}{4Mb^2}
\\ \nonumber&\geq& \frac{1}{4Mb^2}\log\left(1+\frac{\ell(\gamma'[w'_1,w'_2])}{\delta_{D'}(w'_1)}\right)-\frac{1+C}{4Mb^2}
\\ \nonumber&\geq& \frac{1}{4Mb^2}\log\Big(1+\frac{B-12cA}{12cA}\Big)-\frac{1+C}{4Mb^2}
\\ \nonumber &=&\lambda,
\end{eqnarray*}
where the last inequality follows from \eqref{eq-l10} and the following inequalities:
$$\ell(\gamma'[w_1',w_2'])=\ell(\gamma')-\ell(\gamma'[x',w_1'])-\ell(\gamma'[y',w_2'])>(B-12cA)|x'-y'|.$$ Hence $$|w_1-w_2|\geq(e^{\lambda}-1)\delta_D(w_1)> \Big(1+\theta''\Big(\frac{1+12cA}{3c}\Big)\Big)\mu\delta_D(w_1),$$ as required. \medskip
Next, we get the following upper bound for $|w_1-w_2|$ in terms of $\min\{\delta_D(w_1),\; \delta_D(w_2)\}$.
\bcl\label{mon-3}
$|w_1-w_2|\leq \theta''\left(\frac{1+12cA}{3c}\right)\mu \min\{\delta_D(w_1),\; \delta_D(w_2)\}.$
\ecl
First, we see that $w_0\in\gamma[w_1,y]$ (see Figure $4$), where $w_0$ is the point in $\gamma$ which satisfies \eqref{wes-1} (see Figure $2$), because otherwise \eqref{new-eq-4} gives that $$|w_1-w_2|\leq\mu\delta_D(w_1),$$ which contradicts Claim \ref{eq-l2}.
We are going to apply Lemma \Ref{lem-ll-0} to the points $x'$, $w_1'$ and $w_2'$ in $G'$. We need a relationship between $|w'_1-w'_2|$ and $|x'-w_1'|$.
To this end, it follows from \eqref{eq-11-1} that $$|w'_1-w'_2|\leq |w'_1-x'|+|x'-y'|+|y'-w'_2|\leq (1+12cA)|x'-y'|\leq\frac{1+12cA}{3c}|x'-w_1'|,$$
since we infer from
the choice of $w'_1$, \eqref{main-eq-1} and Claim \ref{zzz-002} that
$$|x'-w_1'|\geq \frac{1}{2}\delta_{D'}(w_1')\geq \frac{1}{2A}\ell(\gamma'[x',w'_1])=3c|x'-y'|.$$
Then by Lemma \Ref{lem-ll-0}, we know that there is an increasing function $\theta''=\theta''_{c_1,H}$ such that
$$|w_1-w_2|\leq \theta''\left(\frac{1+12cA}{3c}\right)|x-w_1|,$$
and thus, \eqref{new-eq-4} leads to
$$|w_1-w_2|\leq \theta''\left(\frac{1+12cA}{3c}\right)\mu \min\{\delta_D(w_1),\; \delta_D(w_2)\},$$
which shows that Claim \ref{mon-3} holds.\medskip
We observe that Claim \ref{eq-l2} contradicts Claim \ref{mon-3}, which completes the proof of \eqref{main-eq-2}.
Inequalities \eqref{main-eq-1} and \eqref{main-eq-2}, together with the arbitrariness of the choice of $x'$ and $y'$ in $D'$, show that $D'$ is $B$-uniform, which implies that Theorem \ref{thm1.1} holds.\qed
\bigskip
\section{The proofs of Theorem \ref{thm-2} and Theorem \ref{thm-3}}\label{sec-5}
A homeomorphism $f$ from $X$ to $Y$ is said to be {\it quasiconformal} if there is a constant $K<\infty$ such that
$$\limsup_{r\to 0}\frac{L_f(x,r)}{l_f(x,r)}\leq K$$
for all $x\in X$, where $L_f(x,r)=\sup_{|y-x|\leq r}\{|f(y)-f(x)|\}$ and $l_f(x,r)=\inf_{|y-x|\geq r}\{|f(y)-f(x)|\}$.
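As a concrete illustration of this definition (not used in the proofs below), the dilatation of an explicit planar map can be estimated numerically. In the following Python sketch the map $f(z)=z|z|$ and all numerical parameters are our own choices; for this map the limit of the ratio equals $2$ away from the origin:
\begin{verbatim}
# Minimal numerical sketch (illustration only): estimate L_f(x,r)/l_f(x,r)
# for the radial stretch f(z) = z|z|. For small r both the sup and the inf
# are attained near the circle |y - x| = r, so we sample that circle.
import numpy as np

def f(z):
    return z * np.abs(z)

def dilatation(f, x, r, n=4000):
    t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    y = x + r * np.exp(1j * t)          # points on the circle |y - x| = r
    d = np.abs(f(y) - f(x))
    return d.max() / d.min()            # approximates L_f(x,r)/l_f(x,r)

x = 1.0 + 0.5j
for r in [1e-1, 1e-2, 1e-3]:
    print(r, dilatation(f, x, r))       # approaches 2 as r -> 0
\end{verbatim}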
\subsection{The proof of Theorem \ref{thm-2}}
Since $G_1$ is inner $b$-uniform, it has the length $b$-cigar property, and thus by \cite[Theorem $2.21$]{Vai}, it also has the $b_1$-carrot property with some center point $z_0\in G_1$ for some constant $b_1>0$. Then it follows from \cite[Theorem $1.1$]{hlpw} that its image $G_1'=f(G_1)$ is a $b_2$-John domain with center $z_0'=f(z_0)\in f(G_1)$, where $b_2$ depends only on $n,K,a,b_1$ and $\frac{\diam G}{\delta_G(f^{-1}(y_0'))}$. Using \cite[Theorem $2.21$]{Vai} again, we obtain that $G_1'$ has the length $b_3$-cigar property. Furthermore, by \cite[Theorem $4.2$]{Vai05} we know that $G_1'$ is $b_4-LLC_2$.
On the other hand, one easily sees from \cite[Theorem $3.6$]{BHK} that $(G_1,k_{G_1})$ is a geodesic Gromov hyperbolic space because $G_1$ is inner $b$-uniform. Moreover, by \cite[Theorem $3$]{Geo} we see that $(G_1',k_{G_1'})$ is also a geodesic Gromov hyperbolic space, since Gromov hyperbolicity is invariant under quasi-isometries between geodesic metric spaces; see \cite[Theorem $3.20$]{Vai9}.
Combining the above two facts, namely that $G_1'$ is $LLC_2$ and Gromov hyperbolic, with \cite[Proposition $7.12$]{BHK} we get that $G_1'$ is inner $\tau$-uniform for some
$\tau=\tau(n,K,a,b,\frac{\diam G}{\delta_G(f^{-1}(y_0'))})$. This concludes the proof of the theorem.
\subsection{The proof of Theorem \ref{thm-3}}
First, one easily sees from Lemma \ref{lem-ll-2} that $f^{-1}$ is also weakly $H_1$-quasisymmetric for some constant $H_1\geq 1$ because every $b$-uniform domain is clearly $b$-quasiconvex. Thus by Lemma \ref{ll-000}, we get that $f$ is $(M,C)$-CQH for some $M,C\geq 1$. Moreover, using Lemma \ref{ll-001}, we obtain that the image curve $\gamma'$ is $(\nu,h)$-solid for some $\nu,h\geq 1$. Hence the first assertion follows from Lemma \ref{ll-15}.
So, it remains to show the second assertion. We note that again by Lemma \ref{ll-15} we have
\beq\label{eq-li-new1}\diam (\gamma')\leq \mu_3 \max\{ |x'-y'|, \min\{\delta_{G'}(x'), \delta_{G'}(y')\} \}.\eeq
Without loss of generality, we may assume $\delta_{G'}(x')\leq \delta_{G'}(y')$. Since $f^{-1}$ is also weakly $H_1$-quasisymmetric, by \cite[(3.10)]{HL}, one immediately sees that there is an increasing continuous function $\theta:(0,\frac{1}{54c})\to \mathbb{R}_+$ with $\theta$ depending only on $c$ and $H_1$ and with $\theta(0)=0$ such that
\beq\label{eq-li-new2}\frac{|x-y|}{\delta_G(x)}\leq \theta\Big(\frac{|x'-y'|}{\delta_{G'}(x')}\Big),\eeq
whenever $x',y'\in G'$ with $|x'-y'|\leq \frac{\delta_{G'}(x')}{54c}$. Thus, there is a constant $\lambda\in (0,\frac{1}{54c})$ such that $\theta(\lambda)<\frac{1}{50c^2}$.
Now we divide the proof of the second assertion into two cases.
\bca\label{ca1}$|x'-y'|\geq \lambda \delta_{G'}(x')$.\eca
Then, obviously, we get from \eqref{eq-li-new1} that
$$\diam (\gamma')\leq \frac{\mu_3}{\lambda} |x'-y'|,$$
as desired.
\bca\label{ca2} $|x'-y'|< \lambda \delta_{G'}(x')$.\eca
Then \eqref{eq-li-new2} and the choice of $\lambda$ imply that $$|x-y|\leq \theta(\lambda)\delta_G(x)<\frac{\delta_G(x)}{50c^2}.$$ We claim that
\beq\label{eq-li-new3}\gamma\subset \mathbb{B}(x,8c|x-y|)\subset \mathbb{B}(x,\frac{\delta_G(x)}{6c}).\eeq
Otherwise, there is some point $w\in\gamma$ such that $|w-x|=8c|x-y|\leq \frac{\delta_G(x)}{6c}$ and $\gamma[x,w]\subset \mathbb{\overline{B}}(x,8c|x-y|)$. Moreover, since $\gamma$ is an $\varepsilon$-short arc, by Lemma \Ref{ll-11} we have that
$$\ell_{k_G}(\gamma)\leq k_G(x,y)+\varepsilon\leq \frac{7}{6}k_G(x,y)\leq \frac{7c|x-y|}{2\delta_G(x)},$$
and
$$\ell_{k_G}(\gamma)\geq k_G(x,w)\geq \frac{1}{2}\frac{|x-w|}{\delta_G(x)}\geq 4c \frac{|x-y|}{\delta_G(x)},$$
which is an obvious contradiction. Hence the required \eqref{eq-li-new3} follows.
Moreover, by \eqref{eq-li-new3} we have for all $z\in\gamma$, $$|x-z|\leq 8c|x-y|.$$ Since $X$ is $c$-quasiconvex, there is a curve $\beta$ joining $x$ and $z$ with $\ell(\beta)\leq c|x-z|$. In fact, $\beta\subset G$, because
$$\ell(\beta)\leq c|x-z|\leq 8c^2|x-y|\leq \frac{\delta_G(x)}{6}.$$
We now define inductively successive points $x=x_0,\ldots,x_n=z$, where for $i\geq 1$ each $x_i$ is the last point of $\beta$ in $\overline{\mathbb{B}}(x_{i-1},|x-y|)$. Obviously, $n\geq 2$, and
$$|x_{i-1}-x_{i}|=|x-y|\;\mbox{ for}\; 1\leq i\leq n-1,\;\; \;\; |x_{n-1}-x_{n}|\leq |x-y|.$$
Next, we are going to obtain an upper bound for $n$.
Since for $1\leq i \leq n-1$, $$\ell(\beta[x_{i-1},x_{i}])\geq |x_{i-1}-x_{i}|=|x-y|,$$ we have
$$(n-1)|x-y|\leq \ell(\beta)\leq c|x-z|\leq 8c^2|x-y|,$$
which implies that
$$n\leq 8c^2+1.$$
Now, we are going to complete the proof in this case. Since for $1\leq i\leq n-1$, $|x_{i+1}-x_{i}|\leq |x_{i-1}-x_{i}|$ and $f$ is weakly $H$-quasisymmetric, we have
$$|x'_1-x'_0|\leq H|x'-y'|,$$
$$|x'_2-x'_1|\leq H|x'_1-x'_0|\leq H^2|x'-y'|,$$
$$\vdots$$
$$|x'_n-x'_{n-1}|\leq H^n|x'-y'|.$$
Thus,
\begin{eqnarray*}
|x'-z'|&=&|x'_0-x'_n| \leq \sum_{i=1}^n |x'_i-x'_{i-1}|
\\ \nonumber &\leq& (H+H^2+\cdots+H^n)|x'-y'|
\\ \nonumber &\leq& n H^n|x'-y'|
\\ \nonumber &\leq& (8c^2+1)H^{8c^2+1}|x'-y'|.
\end{eqnarray*}
Therefore, we obtain
$$\diam(\gamma')\leq 2(8c^2+1)H^{8c^2+1}|x'-y'|$$ as desired.
Let $\lambda_2=\max\{\frac{\mu_3}{\lambda}, 2(8c^2+1)H^{8c^2+1}\}$.
Then the proof of this theorem is complete.
|
2,869,038,153,959 | arxiv | \section{Introduction}
The Gaussian graphical model is a graphical representation of the dependence structure of a Gaussian random vector. It is recognized as a powerful tool in applied fields such as bioinformatics, error-control codes, speech and language processing, information retrieval and others \cite{Jordan_2004}. There are two aspects related to the study of the graphical model: the computational and algorithmic aspect, and the statistical aspect. The computational aspect has become increasingly popular due to the growing interest in large scale networks \cite{Jordan_2008}. A central question in the statistical aspect is how to recover the structure of an undirected Gaussian graph from observations. This problem is called the Gaussian graphical model selection problem (GGMS problem). A comprehensive survey of different approaches to this problem is given in \cite{Drton_2007}. Most of the constructed statistical procedures for GGMS are based on asymptotically optimal estimations of correlations \cite{Anderson_2003}, \cite{Drton_2004}, \cite{Zhao_Ren_2015}. However, as far as we know, there are no results related to the optimality of statistical procedures
for GGMS for a fixed sample size. This is the subject of this paper.
The quality of statistical procedures for GGMS can be measured by two types of errors: false edge inclusion (Type I error) and false edge exclusion (Type II error). Traditional measures of quality used in GGMS are: FWER (Family Wise Error Rate), FDR (False Discovery Rate), FDP (False Discovery Proportion) and others \cite{Drton_2007}. Most of them are connected with the Type I error (false edge inclusion). It is clear that the quality of GGMS procedures has to be related to the difference between two graphs: the true graph and the selected graph. Therefore in GGMS it is important to take into account both types of errors (Type I and Type II errors). The quality of a statistical procedure can be measured in this case by a linear combination of the expected numbers of Type I and Type II errors. We refer to a selection statistical procedure which minimizes this value for a fixed sample size as {\em optimal}. The main goal of this paper is to find an optimal statistical procedure for GGMS. To achieve this goal we consider the graphical model selection problem within the framework of multiple decision theory \cite{Lehmann_1957}. The quality of statistical procedures is measured by the risk function. To take into account the numbers of Type I and Type II errors we consider an additive loss function. We prove that in this case the risk function is a linear combination of the expected numbers of Type I and Type II errors. Subsequently, we construct tests of a Neyman structure for individual hypotheses and combine them to obtain a multiple decision statistical procedure. We show that the obtained procedure minimizes the risk function in the class of unbiased multiple decision procedures.
The paper is organized as follows. In Section \ref{Problem statement} we give basic definitions and notations.
In Section \ref{Multiple decision approach} we describe the multiple decision framework for the Gaussian graphical model selection problem and prove the representation of the risk function.
In Section \ref{UMPU test for individual hypotheses} we construct the tests of a Neyman structure for individual hypotheses.
In Section \ref{Optimal multiple decision procedures} we combine individual tests in the multiple decision procedure and prove its optimality.
In Section \ref{Concluding remarks} we discuss the proposed approach for different model selection problems.
\section{Problem statement}\label{Problem statement}
Let $X=(X_1,X_2,\ldots,X_p)$ be a random vector with the multivariate Gaussian distribution from $N(\mu, \Sigma)$, where $\mu=(\mu_1,\mu_2,\ldots, \mu_p)$ is the vector of means and
$\Sigma=(\sigma_{i,j})$ is the covariance matrix, $\sigma_{i,j}=\mbox{cov}(X_i,X_j)$, $i,j=1,2,\ldots,p$. Let $x(t)$, $t=1,2,\ldots,n$ be a sample of size $n$ from the distribution of $X$.
We assume in this paper that $n>p$ and that the matrix $\Sigma$ is non-degenerate. The case $n<p$ is also of practical interest \cite{Liang_2015}, but it is not considered in this paper.
The undirected Gaussian graphical model is an undirected graph with $p$ nodes. The nodes of the graph are associated with the random variables $X_1,X_2,\ldots,X_p$, edge $(i,j)$ is included in the graph if the random
variables $X_i,X_j$ are conditionally dependent \cite{Lauritzen_1996}, \cite{Anderson_2003}. Gaussian graphical model selection problem consists of the identification of a graphical model from observations.
The partial correlation $\rho_{i,j \bullet N(i,j)}$ of $X_i$, $X_j$ given $X_k$, $k \in N(i,j) = \{1,2,\ldots,p\}\setminus \{i,j\}$ is defined as the correlation of $X_i$, $X_j$
in the conditional distribution of $X_i$, $X_j$ given $X_k$, $k \in N(i,j)$. It is known\cite{Anderson_2003} that the conditional distribution of $X_i$, $X_j$ given
$X_k$, $k \in N(i,j)$ is Gaussian with the correlation $\rho_{i,j \bullet N(i,j)}$. It implies that the conditional independence of $X_i$, $X_j$ given $X_k$,
$k \in N(i,j) = \{1,2,\ldots,p\}\setminus \{i,j\}$ is equivalent to the equation $\rho_{i,j \bullet N(i,j)}=0$. Therefore, the Gaussian graphical model selection is equivalent
to simultaneous inference on hypotheses of pairwise conditional independence $\rho_{i,j \bullet N(i,j)}=0$, $i \neq j$, $i,j=1,2,\ldots,p$.
The inverse matrix for $\Sigma$, $\Sigma^{-1}=(\sigma^{i,j})$ is known as the concentration or precision matrix for the distribution of $X$.
For simplicity we use the notation $\rho^{i,j}=\rho_{i,j \bullet N(i,j)}$. The problem of pairwise conditional independence testing has the form:
\begin{equation}\label{main_problem}
h_{i,j}:\rho^{i,j}=0 \mbox{ vs } k_{i,j}:\rho^{i,j}\neq 0, \quad i \neq j, i,j=1,2,\ldots,p
\end{equation}
According to \cite{Lauritzen_1996} the partial correlation can be written as
$$
\rho^{i,j}=-\frac{\sigma^{i,j}}{\sqrt{\sigma^{i,i}\sigma^{j,j}}}
$$
Note that the problem of pairwise conditional independence testing (\ref{main_problem}) is equivalent to
$$
h_{i, j}:\sigma^{i,j}=0, \ \mbox{ vs } \ k_{i, j}:\sigma^{i,j}\neq 0, i \neq j, i,j=1,2,\ldots,p
$$
The Gaussian graphical model selection problem can be formulated now as multiple testing problem for the set of hypotheses (\ref{main_problem}).
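To make this equivalence concrete, the partial correlations can be read off numerically from $\Sigma^{-1}$. A minimal Python sketch (the covariance matrix below is an arbitrary example of ours, with the chain structure $X_1 - X_2 - X_3$):
\begin{verbatim}
# Minimal sketch (illustration only): partial correlations from the
# precision matrix, rho^{i,j} = -sigma^{i,j}/sqrt(sigma^{i,i} sigma^{j,j}).
import numpy as np

Sigma = np.array([[1.0 , 0.5, 0.25],   # example covariance of ours:
                  [0.5 , 1.0, 0.5 ],   # a Markov chain X1 - X2 - X3
                  [0.25, 0.5, 1.0 ]])
P = np.linalg.inv(Sigma)               # precision matrix (sigma^{i,j})
d = np.sqrt(np.diag(P))
rho = -P / np.outer(d, d)              # partial correlation matrix
np.fill_diagonal(rho, 1.0)             # convention: unit diagonal
print(rho)                             # rho[0,2] ~ 0: no edge (1,3)
\end{verbatim}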
\section{Multiple decision approach}\label{Multiple decision approach}
In this Section we consider the GGMS problem in the framework of decision theory \cite{Wald_1950}. According to this approach we specify the decision statistical procedures and risk function.
Let $X=(X_1,X_2,\ldots,X_p)$ be a random vector with multivariate Gaussian distribution from $N(\mu, \Sigma)$. In the GGMS study observations are modeled as a sequence of random vectors $X(t)$, $t =1, 2,\ldots,n$
where $n$ is the sample size and vectors $X(t)$ are independent and identically distributed as $X$. Let $x=(x_{i}(t))$ be observations of the random variables $X_i(t)$, $t =1, 2, \ldots,n$, $i = 1,2, \ldots,p$.
Consider the set $\cal{G}$ of all $p \times p$ symmetric matrices $G=(g_{i,j})$ with $g_{i,j} \in \{0,1\}$, $i,j=1,2,\ldots,p$, $g_{i,i}=0$, $i=1,2,\ldots,p$. Matrices $G \in \cal{G}$ represent adjacency
matrices of all simple undirected graphs with $p$ vertices. The total number of matrices in $\cal{G}$ is equal to $L=2^M$ with $M=p(p-1)/2$.
The {\it GGMS problem} can be formulated as a multiple decision problem of the choice between $L$ hypotheses:
\begin{equation}\label{N hypotheses}
H_G: \rho^{i,j}=0 \ \mbox{ if } \ g_{i,j}=0, \ \ \rho^{i,j} \neq 0 \ \mbox{ if } \ g_{i,j}=1; \ \ i \neq j, \ \ i,j = 1,2,\ldots, p
\end{equation}
The multiple decision statistical procedure $\delta(x)$ is a map from the sample space $R^{p \times n}$ to the decision space $D=\{d_G, G \in \cal{G} \}$, where the decision $d_G$ is the
acceptance of hypothesis $H_G$, $G \in \cal{G}$.
Let $\varphi_{i,j}(x)$ be tests for the individual hypothesis (\ref{main_problem}). More precisely,
$\varphi_{i,j}(x)=1$ means that hypothesis $h_{i,j}$ is rejected (edge $(i,j)$ is included in the graphical model), and $\varphi_{i,j}(x)=0$ means that hypothesis $h_{i,j}$ is accepted (edge $(i,j)$ is not included in the graphical model).
Let $\Phi(x)$ be the matrix
\begin{equation}\label{test_for_N_hypotheses_overall_form}
\Phi(x)=\left(\begin{array}{cccc}
0 &\varphi_{1, 2}(x) &\ldots &\varphi_{1, p}(x)\\
\varphi_{2, 1}(x) & 0 &\ldots &\varphi_{2, p}(x)\\
\ldots&\ldots&\ldots&\ldots\\
\varphi_{p, 1}(x) &\varphi_{p, 2}(x) &\ldots & 0\\
\end{array}\right).
\end{equation}
Any multiple decision statistical procedure $\delta(x)$ based on the simultaneous inference of individual edge tests (\ref{main_problem}) can be written as
\begin{equation}\label{mdp_overall_form}
\delta(x)=d_G, \ \mbox{iff} \ \Phi(x)=G
\end{equation}
According to \cite{Wald_1950} the quality of the statistical procedure
is defined by the risk function. Let $\Omega$ be the set of parameters $\Omega=\{\theta: \theta=(\mu, \Sigma), \mu \in R^p$, $\Sigma \mbox{ is a symmetric positive definite matrix} \}$.
By $\Omega_S$ we denote the parametric region corresponding to hypothesis $H_S$.
Let $S=(s_{i,j})$, $Q=(q_{i,j})$, $S$, $Q$ $\in \cal{G}$. By $w(S,Q)$ we denote the loss from decision $d_Q$ when hypothesis $H_S$ is true, i.e.
$$
w(H_S;d_Q)=w(S,Q), \ \ S,Q \in \cal{G}
$$
Assume that $w(S,S)=0, S \in \cal{G}$. The risk function is defined by
$$
Risk(S, \theta;\delta)=\sum_{Q \in \cal{G}} w(S,Q)P_{\theta}(\delta(x)=d_Q),
$$
where $P_{\theta}(\delta(x)=d_Q)$ is the probability that decision $d_Q$ is taken.
As mentioned before, for the GGMS problem it is important to control Type I and Type II errors.
Let $a_{i,j}$ be the loss from the false inclusion of edge $(i,j)$ in the graphical model, and let $b_{i,j}$ be the
loss from the false non-inclusion of edge $(i,j)$ in the graphical model, $i,j=1,2,\ldots,p; \ i\neq j$.
Define the individual loss as
$$w_{i,j}(S,Q)=\left\{\begin{array}{cc}
a_{i,j}, & \mbox{if } \ s_{i,j}=0,q_{i,j}=1, \\
b_{i,j}, & \mbox{if } \ s_{i,j}=1,q_{i,j}=0, \\
0, & \mbox{ otherwise }
\end{array}\right.$$
To take into account both types of errors we suggest the total loss $w(S,Q)$ is defined as:
\begin{equation}\label{additive_loss_function}
w(S,Q)=\sum_{i=1}^p\sum_{j=1}^p w_{i,j} (S,Q)
\end{equation}
It means that the total loss from the misclassification of $H_S$ is equal to the sum of losses from the misclassification of individual edges:
$$
w(S,Q)=\sum_{\{i,j:s_{i,j}=0;q_{i,j}=1\}}a_{i,j}+\sum_{\{i,j:s_{i,j}=1;q_{i,j}=0\}}b_{i,j}
$$
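In code, the additive loss is a simple count over the two adjacency matrices; note that, as in the double sum above, each unordered edge contributes twice. A minimal sketch (our own illustration, for the case of scalar losses $a$ and $b$):
\begin{verbatim}
# Minimal sketch (illustration only): additive loss w(S,Q) for adjacency
# matrices S (true graph) and Q (decision). Each unordered pair is
# counted twice, matching the double sum defining w.
import numpy as np

def loss(S, Q, a=1.0, b=1.0):
    S, Q = np.asarray(S), np.asarray(Q)
    false_incl = (S == 0) & (Q == 1)    # Type I errors (false edges)
    false_excl = (S == 1) & (Q == 0)    # Type II errors (missed edges)
    return a * false_incl.sum() + b * false_excl.sum()
\end{verbatim}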
The main result of this Section is the following theorem
\newtheorem{teo}{Theorem}
\begin{teo}
Let the loss function $w$ be defined by (\ref{additive_loss_function}), and $a_{i,j}=a$, $b_{i,j}=b$, $i \neq j$, $i,j=1,2,\ldots,p$. Then
$$
Risk(S, \theta;\delta)=aE_{\theta}[Y_I(S, \delta)]+bE_{\theta}[Y_{II}(S, \delta)]
$$
where $Y_I(S, \delta)$, $Y_{II}(S, \delta)$ are the numbers of Type I and Type II errors for model selection by the statistical procedure $\delta$ when the true decision is $d_S$.
\end{teo}
\noindent
{\bf Proof.} One has
$$
Risk(S, \theta;\delta)=\sum_{Q \in \cal{G}} w(S,Q)P_{\theta}(\delta(x)=d_Q)=
$$
$$
\sum_{Q \in \cal{G}}[\sum_{\{i,j:s_{i,j}=0;q_{i,j}=1\}}a_{i,j}+\sum_{\{i,j:s_{i,j}=1;q_{i,j}=0\}}b_{i,j}]P_{\theta}(\delta(x)=d_Q)=
$$
$$
= \sum_{Q \in \cal{G}} [aN_I(Q)+bN_{II}(Q)]P_{\theta}(\delta(x)=d_Q)=aE_{\theta}[Y_I(S, \delta)]+bE_{\theta}[Y_{II}(S, \delta)]
$$
where $N_I(Q)$ is the number of Type I errors when the procedure $\delta(x)$ takes decision $d_Q$ and $\theta \in \Omega_S$, and $N_{II}(Q)$ is the number of Type II errors
when the procedure $\delta(x)$ takes decision $d_Q$ and $\theta \in \Omega_S$.
\section{Uniformly most powerful unbiased tests for individual hypotheses}\label{UMPU test for individual hypotheses}
In this Section we briefly present the uniformly most powerful unbiased tests for individual hypotheses (\ref{main_problem}). More details are given in our paper \cite{Koldanov_2017}.
Consider the statistics
$$
S_{k,l}=\frac{1}{n} \sum_{t=1}^n(X_{k}(t)-\overline{X_{k}})(X_{l}(t)-\overline{X_{l}}),
$$
The joint distribution of the statistics $S_{k,l}$, $k,l = 1,2,\ldots,p$, $n>p$ is given by the Wishart density function \cite{Anderson_2003}:
$$
f(\{s_{k,l}\})=\displaystyle \frac{ [\det (\sigma^{k,l})]^{n/2}
\times [\det(s_{k,l})]^{(n-p-2)/2}\times \exp[-(1/2)\sum_k \sum_l
s_{k,l} \sigma^{k,l}]} {2^{(pn/2)}\times \pi^{p(p-1)/4} \times
\Gamma(n/2)\Gamma((n-1)/2)\cdots\Gamma((n-p+1)/2)}
$$
if the matrix $S=(s_{k,l})$ is positive definite, and $f(\{s_{k,l}\})=0$ otherwise. The Wishart density function can be written as:
$$
f(\{s_{k,l}\})=\displaystyle C(\{\sigma^{k,l}\})
\exp[-\sigma^{i,j}s_{i,j} - \frac{1}{2} \sum_{(k,l)\neq
(i,j);(k,l)\neq(j,i)} s_{k,l} \sigma^{k,l}] m(\{s_{k,l}\})
$$
where
$$
C(\{\sigma^{k,l}\})=c_1^{-1}[\det (\sigma^{k,l})]^{n/2}
$$
$$
c_1=2^{(pn/2)}\times \pi^{p(p-1)/4}\times
\Gamma(n/2)\Gamma((n-1)/2)\cdots\Gamma((n-p+1)/2)
$$
$$ m(\{s_{k,l}\})=[\det(s_{k,l})]^{(n-p-2)/2}
$$
According to \cite{Lehmann_2005} (Ch. 4) the uniformly most powerful unbiased (UMPU) test for hypothesis $h_{i,j}$ has the form:
\begin{equation}\label{Nstructure}
\varphi_{i, j}(\{s_{k, l}\})=\left\{\begin{array}{rl}
\ 0, &\mbox{}\: if \: c_{i,j}'(\{s_{k,l}\})<s_{i,j}<c_{i,j}'' (\{s_{k,l}\}),\ (k,l)\neq (i,j)\\
\ 1, &\mbox{}\: if \: s_{i,j}\leq c_{i,j}'(\{s_{k,l}\})\mbox{ or } s_{i,j}\geq c_{i,j}''(\{s_{k,l}\}),\ (k,l)\neq (i,j)
\end{array}\right.
\end{equation}
where the critical values $c'_{i,j}, c''_{i,j}$ are defined from the equations
\begin{equation}\label{threshold1}
\displaystyle \frac{\int_{I \cap [c_{i,j}';c_{i,j}'']}
[\det(s_{k,l})]^{(n-p-2)/2} ds_{i,j}}
{\int_{I} [\det(s_{k,l})]^{(n-p-2)/2}
ds_{i,j}} =1-\alpha_{i,j}
\end{equation}
\begin{equation}\label{threshold2}
\begin{array}{l}
\displaystyle \int_{I \cap (-\infty;c_{i,j}']}
s_{i,j}[\det(s_{k,l})]^{(n-p-2)/2}
ds_{i,j}+\\
+\displaystyle \int_{I \cap [c_{i,j}'';+\infty)}
s_{i,j} [\det(s_{k,l})]^{(n-p-2)/2}
ds_{i,j}=\\
=\alpha_{i,j} \int_I s_{i,j}[\det(s_{k,l})]^{(n-p-2)/2} ds_{i,j}
\end{array}
\end{equation}
where $I$ is the interval of values of $s_{i,j}$ such that the matrix $S=(s_{k,l})$ is positive definite, and $\alpha_{i,j}$ is the significance level of the test.
It is shown in \cite{Koldanov_2017} that the constructed UMPU test is equivalent to the following partial correlation test
\begin{equation}\label{Neyman_structure_final}
\varphi_{i,j}^{umpu}=\left\{
\begin{array}{ll}
0, & \displaystyle 2q_{i,j}-1 < r^{i,j} < 1-2q_{i,j} \\
1, & \mbox{otherwise},
\end{array}\right.
\end{equation}
where $r^{i,j}$ is the sample partial correlation, and $q_{i,j}$ is the $(\alpha_{i,j}/2)$-quantile of the beta distribution $Be(\frac{n-p}{2},\frac{n-p}{2})$.
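A minimal Python sketch of this test may be useful; it is our own illustration (the helper name umpu\_edge\_tests is ours), reading the sample partial correlations off the inverse sample covariance matrix and taking the quantile from scipy:
\begin{verbatim}
# Minimal sketch of the test: reject h_{ij} iff |r^{ij}| >= 1 - 2q,
# with q the (alpha/2)-quantile of Be((n-p)/2, (n-p)/2).
import numpy as np
from scipy.stats import beta

def umpu_edge_tests(x, alpha=0.05):
    n, p = x.shape                      # n observations of a p-vector
    S = np.cov(x, rowvar=False)         # sample covariance (n > p assumed)
    P = np.linalg.inv(S)
    d = np.sqrt(np.diag(P))
    r = -P / np.outer(d, d)             # sample partial correlations r^{ij}
    q = beta.ppf(alpha / 2.0, (n - p) / 2.0, (n - p) / 2.0)
    reject = np.abs(r) >= 1.0 - 2.0 * q
    np.fill_diagonal(reject, False)
    return reject                       # adjacency matrix of selected graph
\end{verbatim}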
Finally, we need to specify the optimality and unbiasedness of the test (\ref{Neyman_structure_final}). Let $\omega_{i,j}$ be the set of parameters defined by
$$
\omega_{i,j}=\bigcup _{S: s_{i,j}=0}\Omega_S
$$
Denote $\omega_{i,j}^{-1}=\Omega \setminus \omega_{i,j}$. Let $\varphi_{i,j}$ be a test for the individual hypothesis $h_{i,j}$ with significance level $\alpha_{i,j}$.
The test $\varphi_{i,j}$ is referred to as unbiased \cite{Lehmann_2005} if
\begin{equation}\label{unbiased_individual}
E_{\theta}(\varphi_{i,j}) \leq \alpha_{i,j}, \ \forall \theta \in \omega_{i,j}; \ \
E_{\theta}(\varphi_{i,j}) \geq \alpha_{i,j}, \ \forall \theta \in \omega_{i,j}^{-1}
\end{equation}
The test $\varphi_{i,j}^{umpu}$ defined by (\ref{Neyman_structure_final}) is unbiased and the following inequality holds:
\begin{equation}\label{umpu_individual}
E_{\theta}(\varphi_{i,j}^{umpu}) \geq E_{\theta}(\varphi_{i,j}), \ \ \forall \theta \in \omega_{i,j}^{-1}
\end{equation}
for any unbiased test $\varphi_{i,j}$.
\section{Optimal multiple decision procedures}\label{Optimal multiple decision procedures}
According to Wald \cite{Wald_1950} the procedure $\delta^*$ is referred to as optimal in the class of statistical procedures $\cal{C}$ if
\begin{equation}\label{optimality}
Risk(S, \theta; \delta^*) \leq Risk(S, \theta; \delta),
\end{equation}
for any $S \in \cal{G}$, $\theta \in {\Omega}_S$, $\delta \in \cal{C}$.
In this paper we consider the class of $w$-unbiased statistical procedures. A statistical procedure $\delta(x)$ is referred to as $w$-unbiased if one has
\begin{equation}\label{unbiasedeness}
Risk(S, \theta;\delta)=E_{\theta} w(S; \delta) \leq E_{\theta} w(S'; \delta)=Risk(S', \theta;\delta),
\end{equation}
for any $S, S' \in \cal{G}$, $\theta \in {\Omega}_S$. The following theorem describes the optimal procedure in the class of $w$-unbiased multiple decision procedures for Gaussian graphical model selection.
\begin{teo}
Let the loss function $w$ be defined by (\ref{additive_loss_function}). Let the procedure $\delta^{ou}$ be defined by (\ref{test_for_N_hypotheses_overall_form})- (\ref{mdp_overall_form}), where $\varphi_{i,j}=\varphi_{i,j}^{umpu}$ is defined by (\ref{Neyman_structure_final}) and
\begin{equation}\label{abalpha}
\alpha_{i,j}=\frac{b_{i,j}}{a_{i,j}+b_{i,j}}.
\end{equation}
Then procedure $\delta^{ou}$ is an optimal multiple decision statistical procedure in the class of $w$-unbiased procedures for Gaussian graphical model selection.
\end{teo}
\noindent
{\bf Proof.} We use the general approach by Lehmann \cite{Lehmann_1957} and give a direct proof.
Let $\delta$ be a statistical procedure defined by (\ref{test_for_N_hypotheses_overall_form})-(\ref{mdp_overall_form}). If the loss function $w$ satisfies (\ref{additive_loss_function}) then the risk of statistical procedure $\delta$ is the sum of the risks of individual tests $\varphi_{i,j}$. Indeed, one has
$$
Risk(S, \theta;\delta)=\sum_{Q \in \cal{G}} w(S,Q)P_{\theta}(\delta(x)=d_Q)=
$$
$$
= \sum_{Q \in \cal{G}}[\sum_{\{i,j:s_{i,j}=0;q_{i,j}=1\}}a_{i,j}+\sum_{\{i,j:s_{i,j}=1;q_{i,j}=0\}}b_{i,j}]P_{\theta}(\delta(x)=d_Q)=
$$
$$
= \sum^p_{i, j=1, s_{i,j}=0} a_{i,j} \sum_{Q, q_{i,j}=1} P_{\theta}(\delta(x)=d_Q)+
\sum^p_{i, j=1, s_{i,j}=1} b_{i,j} \sum_{Q, q_{i,j}=0} P_{\theta}(\delta(x)=d_Q)=
$$
$$
=\sum^p_{i, j=1, s_{i,j}=0}a_{i,j}P_{\theta}(\varphi_{i,j}(x)=1)+\sum^p_{i, j=1, s_{i,j}=1} b_{i,j}P_{\theta}(\varphi_{i,j}(x)=0)=
$$
$$
=\sum_{i=1}^p\sum_{j=1}^p Risk(s_{i,j},\theta; \varphi_{i,j})
$$
where
$$
Risk(s_{i,j}, \theta; \varphi_{i,j})=\left\{\begin{array}{lll}
a_{i,j}P_{\theta}(\varphi_{i,j}=1), & \mbox{if} & \theta \in \omega_{i,j} \\
b_{i,j}P_{\theta}(\varphi_{i,j}=0), & \mbox{if} & \theta \in \omega_{i,j}^{-1} \\
\end{array}\right.
$$
Now we prove that $\delta^{ou}$ is a $w$-unbiased multiple decision procedure. One has from relation (\ref{abalpha}) and unbiasedness (\ref{unbiased_individual}) of $\varphi^{umpu}$
$$
a_{i,j}P_{\theta}(\varphi^{umpu}_{i,j}=1) \leq b_{i,j}P_{\theta}(\varphi^{umpu}_{i,j}=0), \ \mbox{if} \ \theta \in \omega_{i,j},
$$
and
$$
b_{i,j}P_{\theta}(\varphi^{umpu}_{i,j}=0) \leq a_{i,j}P_{\theta}(\varphi^{umpu}_{i,j}=1), \ \mbox{if} \ \theta \in \omega^{-1}_{i,j}
$$
It implies that
$$
Risk(s_{i,j}, \theta; \varphi_{i,j}) \leq Risk(s'_{i,j}, \theta; \varphi_{i,j}), \ \ \forall s_{i,j}, s'_{i,j}=0,1
$$
Therefore,
$$
Risk(S, \theta;\delta^{ou})= \sum_{i=1}^p\sum_{j=1}^p Risk(s_{i,j},\theta; \varphi^{umpu}_{i,j}) \leq
$$
$$
\leq \sum_{i=1}^p\sum_{j=1}^p Risk(s'_{i,j},\theta; \varphi^{umpu}_{i,j})=Risk(S', \theta;\delta^{ou}),
$$
for any $S, S' \in \cal{G}$, $\theta \in {\Omega}_S$.
Finally, we prove that $\delta^{ou}$ is optimal in the class of $w$-unbiased statistical procedures.
Let $\delta$ be an $w$-unbiased statistical procedure defined by (\ref{test_for_N_hypotheses_overall_form})-(\ref{mdp_overall_form}).
One has:
$$
Risk(S, \theta;\delta) \leq Risk(S', \theta;\delta), \ \ \forall S,S' \in \cal{G}
$$
Take $S'$ such that $S'$ and $S$ differ only in two positions $(i,j)$ and $(j,i)$. In this case one has
$$
Risk(S, \theta;\delta)= 2Risk(s_{i,j},\theta; \varphi_{i,j}) +\sum_{(k,l) \neq (i,j)} Risk(s_{k,l},\theta; \varphi_{k,l})
$$
and
$$
Risk(S', \theta;\delta)= 2Risk(s'_{i,j},\theta; \varphi_{i,j}) +\sum_{(k,l) \neq (i,j)} Risk(s_{k,l},\theta; \varphi_{k,l})
$$
Therefore,
$$
Risk(s_{i,j}, \theta; \varphi_{i,j}) \leq Risk(s'_{i,j}, \theta; \varphi_{i,j})
$$
This implies that
$$
a_{i,j}P_{\theta}(\varphi_{i,j}=1) \leq b_{i,j}P_{\theta}(\varphi_{i,j}=0), \ \mbox{if} \ \theta \in \omega_{i,j}
$$
and
$$
b_{i,j}P_{\theta}(\varphi_{i,j}=0) \leq a_{i,j}P_{\theta}(\varphi_{i,j}=1), \ \mbox{if} \ \theta \in \omega^{-1}_{i,j}
$$
This means that the individual test $\varphi_{i,j}$ satisfies (\ref{unbiased_individual}) with the significance level $\alpha_{i,j}=b_{i,j}/(a_{i,j}+b_{i,j})$.
Taking into account that $\varphi^{umpu}_{i,j}$ is optimal in the class of unbiased tests one gets
$$
Risk(s_{i,j}, \theta; \varphi^{umpu}_{i,j}) \leq Risk(s_{i,j}, \theta; \varphi_{i,j})
$$
Therefore,
$$
Risk(S, \theta;\delta^{ou})= \sum_{i=1}^p\sum_{j=1}^p Risk(s_{i,j},\theta; \varphi^{umpu}_{i,j}) \leq
$$
$$\leq \sum_{i=1}^p\sum_{j=1}^p Risk(s_{i,j},\theta; \varphi_{i,j})=Risk(S, \theta;\delta),
$$
for any $w$-unbiased statistical procedure $\delta$. The theorem is proved.
The main result of the paper is the following
\begin{teo}
Let $0 < \alpha <1$, and the loss function $w$ be defined by (\ref{additive_loss_function}) with $a_{i,j}=1-\alpha$, $b_{i,j}=\alpha$, $i,j =1,2,\ldots,p$. Then for any $w$-unbiased multiple decision statistical procedure $\delta$ defined by (\ref{test_for_N_hypotheses_overall_form})- (\ref{mdp_overall_form}) one has
$$
(1-\alpha)E_{\theta}[Y_I(S, \delta^{ou})]+\alpha E_{\theta}[Y_{II}(S, \delta^{ou})] \leq (1-\alpha)E_{\theta}[Y_I(S, \delta)]+\alpha E_{\theta}[Y_{II}(S, \delta)],
$$
for any $S \in \cal{G}$, $\theta \in {\Omega}_S$. Here $Y_I(S, \delta)$, $Y_{II}(S, \delta)$ are the numbers of Type I and Type II errors for model selection by statistical procedure $\delta$.
\end{teo}
\noindent
{\bf Proof.} For any statistical procedure $\delta$ one has from Theorem 1
$$
Risk(S, \theta;\delta)=(1-\alpha)E_{\theta}[Y_I(S, \delta)]+\alpha E_{\theta}[Y_{II}(S, \delta)]
$$
From Theorem 2 one has:
$$
Risk(S, \theta;\delta^{ou}) \leq Risk(S, \theta;\delta)
$$
for any $w$-unbiased procedure $\delta$, and the theorem follows. Note that in this case $\alpha$ is the significance level of all the individual tests.
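Operationally, $\delta^{ou}$ amounts to running the individual test of Section \ref{UMPU test for individual hypotheses} on every pair at the common level $\alpha=b/(a+b)$. A minimal sketch (ours), reusing the illustrative function umpu\_edge\_tests given earlier:
\begin{verbatim}
# Minimal sketch (ours): the optimal procedure delta^{ou} tests every
# edge at the loss-determined level alpha = b/(a+b) and assembles the
# decisions into the adjacency matrix of the selected graph.
def optimal_selection(x, a=1.0, b=1.0):
    alpha = b / (a + b)
    return umpu_edge_tests(x, alpha=alpha).astype(int)
\end{verbatim}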
\section{Concluding remarks}\label{Concluding remarks}
The main result of the paper states that the statistical procedure $\delta^{ou}$ gives the minimal value of the linear combination of the expected numbers of Type I and Type II errors
for any true Gaussian graphical model $S$. It is interesting to compare the Gaussian graphical model selection procedures known in the literature with respect to this criterion for different
$S$. Our experience shows that the structure of $S$ plays an important role in this comparison. This will be the subject of a forthcoming study.
\noindent
{\bf Acknowledgement:} This work was conducted at the National Research University Higher School of Economics, Laboratory of algorithms and technologies for network analysis. The authors Kalyagin V. and Pardalos P. are supported by RSF grant 14-41-00039.
|
2,869,038,153,960 | arxiv | \section{Introduction}
Black hole evaporation is perhaps the most salient problem of fundamental
physics nowadays, since it tests gravity, quantum field theory and
thermodynamics in their full regimes. Hawking's calculation showing
that black holes radiate a thermal spectrum initiated the study of
this phenomenon. However, the calculation assumes a fixed given
space-time, whereas it is expected that the black hole loses mass
through the radiation and eventually evaporates completely. Associated
with the evaporation process is the issue of loss of information,
whatever memory of what formed the black hole is lost as it evaporates
in a thermal state characterized by only one number, its
temperature. Having a model calculation that follows the formation of
a black hole and its evaporation including quantum effects would be
very useful to gain insights into the process. Here we would like to
present such a model. We will consider the collapse of a null
shell. The associated space-time is very simple: it is Schwarzschild
outside the shell and flat space-time inside. We will consider a
quantum evolution of the shell with uncertainty in its position and
momentum and we will superpose the corresponding space-times to
construct a quantum space-time. On it we will study the emission of
Hawking radiation in the geometric optics approximation. We will see
that in the classical limit one recovers ordinary Hawking
radiation. However, when quantum fluctuations of the collapsing shell
are taken into account we will see that non vanishing off-diagonal
terms appear in the density matrix representing the field. The
correlations and the resulting profile of particle emission are
modulated with information about the initial quantum state of the
shell, showing that information can be retrieved. At the moment we do
not know for sure if all information is retrieved.
The model we will consider is motivated in previous studies of the
collapse of a shell \cite{lwf,hajicek,shellqg}. In all these, an
important role is played by the fact that that there are two conjugate
Dirac observables. One of them is the ADM mass of the shell. The other
is related to the position along scri minus from which the shell was
sent inwards. These studies are of importance because they show that
the quantization of the correct Dirac observables for the problem lead
to a different scenario than those considered in the past using
other reduced models of the fluctuating horizon of the shell (see for
instance \cite{medvedetal}).
The organization of this paper is as follows. In the next section we
review the calculation of the radiation with a background given by a
classical collapsing shell for late times in the geometric optics
approximation, mostly to fix notation to be used in the rest of the
work. In section 3 we will remove the late time approximation
providing an expression of the radiation of the shell for all times.
We will also derive a closed expression for the distribution of
radiation as a function of the position of the detector on scri
plus. We will show that when the shell approaches the horizon the
usual thermal radiation is recovered.
We will see that the use of the complete expression for all times is
useful when one considers the case of fluctuating horizons in the
early (non-thermal) phases of the radiation prior to the formation of
a horizon. This element had been missed in previous calculations that
tried to incorporate such effects. In section 4 we will consider a
quantum shell and the radiation it produces, we will proceed in two
stages. First we will compute the expectation value of Bogoliubov
coefficients. This will allow us to explain in a simple case the
technique that shall be used. However, the calculation of the number
of particles produced requires the expectation value of a product of
Bogoliubov coefficients. In section 5
we consider the calculation of the density
matrix in terms of the product of Bogoliubov operators and show that
the radiation profile reproduces the usual thermal spectrum for the
diagonal elements of the density matrix, but with some departures due
to the fluctuations in the mass of the shell.
In section 6 we will show that it differs significantly
from the product of the expectation values, particularly in the late
stages of the process. In section 7 we will
analyze coherences that would vanish in the classical case,
and show that they are non-vanishing, allowing information from the
initial state of the shell to be retrieved. We end with a summary and outlook.
\section{Radiation of a collapsing classical shell}
Here we reproduce well known results
\cite{h74} for the late time radiation of a
collapsing classical shell in a certain amount of
detail since we will use them later on. The metric of the
space-time is given by
\begin{equation}
ds^{2}=-\left(1-\frac{2M\theta(v-v_{s})}{r}\right)dv^{2}+2dvdr+r^{2}d\Omega^{2},
\end{equation}
where $v_s$ represents the position of the shell (in ingoing
Eddington--Finkelstein coordinates) and $M$ its mass
\footnote{
The parameters $v_s$ and $M$ are canonically conjugate variables in a Hamiltonian treatment of the system \cite{lwf}. They will be promoted to quantum operators in section IV.}.
Throughout this paper we will be working in the geometric
optics approximation (i.e. large frequencies).
In this
geometry, light rays that leave $I^{-}$ with coordinate $v$ less than
$v_{0}=v_{s}-4M$ can escape to $I^{+}$ and the rest are trapped in the
black hole that forms. Therefore $v=v_0$ defines the position of the
event horizon. We will use the fact that a light ray departing from $I^-$
with $v<v_0$ reaches $I^+$ at an outgoing Eddington--Finkelstein
coordinate $u$ given by
\begin{equation}
u(v)=v-4M \ln \left(\frac{v_{0}-v}{4M_{0}}\right)\label{eq:u(v)},
\end{equation}
where $M_0$ is an arbitrary parameter that is usually chosen as
$M_0=M$, stemming from the definition of the tortoise coordinate which
involves a constant of integration.
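The logarithmic piling up of late arrival times near the horizon ray is easy to visualize numerically. In the following minimal sketch the choices $M=M_0=1$ and $v_0=0$, as well as the sample rays, are ours, made purely for illustration:
\begin{verbatim}
# Minimal sketch (illustration only): the ray arrival time u(v) defined
# above, in units with M = M_0 = 1 and horizon ray at v_0 = 0.
import numpy as np

M, M0, v0 = 1.0, 1.0, 0.0
u = lambda v: v - 4.0 * M * np.log((v0 - v) / (4.0 * M0))

for v in [-10.0, -1.0, -1e-3, -1e-6]:
    print(v, u(v))    # u ~ -4M ln(v0 - v): late arrival as v -> v0^-
\end{verbatim}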
\begin{figure}
\includegraphics[height=10.5cm]{espacio_tiempo}
\caption{The Penrose diagram of a classical collapsing shell. $v_s$ indicates
the position at scri minus from which the shell is sent in. Light
rays sent in to the left of $v_0$ make it to scri plus, whereas rays
sent in to the right of $v_0$ get trapped in the black hole.}
\end{figure}
On the above metric we would like to study Hawking radiation
corresponding to a scalar field. We consider the ``in'' vacuum
associated with the mode expansion $\psi_{lm\omega'}$. The asymptotic
form of the modes in $I^-$ is given by,
\[
\psi_{lm\omega'}(r,v,\theta,\phi)=\frac{e^{-i\omega'v}}{4\pi r\sqrt{\omega'}}Y_{lm}(\theta,\phi),
\]
and the ``out'' vacuum corresponding to modes $\chi_{lm\omega}$ with
asymptotic form in $I^+$ given by
\[
\chi_{lm\omega}(r,u,\theta,\phi)=\frac{e^{-i\omega u}}{4\pi r\sqrt{\omega}}Y_{lm}(\theta,\phi).
\]
The geometric optics approximation consists of mapping the modes
$\chi_{lm\omega}$ into $I^-$ as
\[
\frac{e^{-i\omega u(v)}}{4\pi r\sqrt{\omega}}Y_{lm}(\theta,\phi),
\]
where $u\left(v\right)$ is determined by the path of the light rays
that emanate from $I^-$ at time $v$ and arrive in $I^+$ at $u(v)$.
The Bogoliubov coefficients are given by the
Klein-Gordon inner products,
\[
\alpha_{\omega\omega'}=\left\langle \chi_{lm\omega},\psi_{lm\omega'}\right\rangle,
\]
\[
\beta_{\omega\omega'}=-\left\langle \chi_{lm\omega},\psi_{lm\omega'}^{*}\right\rangle.
\]
They can be computed in the geometric optics approximation by projecting
the out modes in $I^-$ and substituting the expression for $u(v)$. Focusing on the beta coefficient we get,
\begin{equation}
\beta_{\omega\omega'}=-\frac{1}{2\pi}\sqrt{\frac{\omega'}{\omega}}
{\int_{-\infty}^{v_0}}dve^{-i\omega\left[v-4M\ln\left(\frac{v_{0}-v}{4M_{0}}\right)\right]- i\omega'v}.\label{refA}
\end{equation}
Since we are considering modes that are not normalizable one in
general will get divergences. This can be dealt with by considering
wave-packets localized in both frequency and time. For example,
\begin{equation}
\chi_{lmn\omega_{j}}=\frac{1}{\sqrt{\epsilon}}{\int_{j\epsilon}^{\left(j+1\right)\epsilon}}d\omega e^{u_n\omega i}\chi_{lm\omega}\label{eq:paquete_def},
\end{equation}
constitute an orthonormal countable complete basis of packets centered
in time $u_n=\frac{2\pi n}{\epsilon}$, and in frequency $\omega_{j}=\left(j+\frac{1}{2}\right)\epsilon$.
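The localization of these packets around $u_n$ can be verified numerically. The following minimal sketch (the values of $\epsilon$, $j$ and $n$ are arbitrary illustrative choices of ours) evaluates the packet amplitude on a grid of retarded times:
\begin{verbatim}
# Minimal sketch: the packet amplitude |(1/sqrt(eps)) int dw e^{iw(u_n-u)}|
# over the band [j*eps, (j+1)*eps] peaks at u = u_n = 2*pi*n/eps.
import numpy as np

eps, j, n = 0.5, 3, 4                   # illustrative choices
u_n = 2.0 * np.pi * n / eps
w = np.linspace(j * eps, (j + 1) * eps, 400)
u = np.linspace(u_n - 60.0, u_n + 60.0, 601)
amp = np.abs(np.trapz(np.exp(1j * np.outer(u_n - u, w)), w, axis=1))
amp /= np.sqrt(eps)
print(u[np.argmax(amp)], u_n)           # the peak sits at u = u_n
\end{verbatim}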
The original Hawking calculation assumes that the rays depart just
before the formation of the horizon and arrive at $I^+$ at late
times. In that case one can approximate,
\[
u(v)=v-4M\ln\left(\frac{v_{0}-v}{4M_{0}}\right)\approx v_{0}-4M\ln\left(\frac{v_{0}-v}{4M_{0}}\right).
\]
Defining a new integration variable
$x=\frac{v_{0}-v}{4M_{0}}$ one gets
\begin{equation}
\beta_{\omega\omega'}=-\frac{4M_{0}}{2\pi}\sqrt{\frac{\omega'}{\omega}}\underset{\epsilon\to0}{\lim}{\int_0^\infty}dxe^{-i\omega\left[v_{0}-4M\ln\left(x\right)\right]- i\omega'\left(v_{0}-4M_{0}x\right)}e^{-\epsilon x}\label{eq:coef_bogol_1},
\end{equation}
where the last factor was added to make the integral convergent since
we have used plane waves instead of localized packets as the basis of
modes, following Hawking's original derivation. Using the identity
\begin{equation}
{\int_0^\infty}dxe^{a\ln\left(x\right)}e^{-bx}=e^{-(1+a)\ln\left(b\right)}\Gamma\left(1+a\right),\quad Re(b)>0\label{eq:propiedad_int_gamma},
\end{equation}
and the usual prescription for the logarithm of a complex variable we can take the limit and get
\begin{equation}
\beta_{\omega\omega'}=-\frac{i}{2\pi}\frac{e^{-i\left(\omega+\omega'\right)v_{0}}}{\sqrt{\omega\omega'}}e^{-2\pi M\omega}\Gamma\left(1+4M\omega i\right)e^{-4M\omega i\ln(4M_{0}\omega')}\label{eq:coef_bogol2}.
\end{equation}
Now from the Bogoliubov coefficients we can calculate the expectation value of the number of particles per unit frequency detected at scri using
\[
\left\langle N^{H}_{\omega}\right\rangle ={\int}_0^\infty d\omega'\beta_{\omega\omega'}\beta_{\omega\omega'}^{*}=\frac{1}{4\pi^{2}\omega}e^{-4\pi M\omega}\left|\Gamma\left(1+4M\omega i\right)\right|^{2}{\int_0^\infty}d\omega'\frac{1}{\omega'},
\]
where we added the superscript ``$H$'' to indicate this is the
calculation originally carried out by Hawking.
The pre-factor is computed using the identity
\[
\Gamma\left(1+z\right)\Gamma\left(1-z\right)=\frac{z\pi}{\sin\left(z\pi\right)},
\]
with $z=4M\omega i$, which leads to,
\[
\left|\Gamma\left(1+4M\omega i\right)\right|^{2}=\frac{8M\pi\omega}{e^{+4M\omega\pi}-e^{-4M\omega\pi}}.
\]
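This last expression is easy to check numerically; a minimal Python sketch, with arbitrary illustrative values of $M$ and $\omega$:
\begin{verbatim}
# Minimal numerical check (illustrative values) of
# |Gamma(1+4*M*omega*i)|^2 = 8 pi M omega/(e^{4 pi M omega}-e^{-4 pi M omega}).
import numpy as np
from scipy.special import gamma

M, omega = 1.0, 0.3
z = 4.0 * M * omega
lhs = abs(gamma(1.0 + 1j * z)) ** 2
rhs = 2.0 * np.pi * z / (np.exp(np.pi * z) - np.exp(-np.pi * z))
print(lhs, rhs)                         # agree to machine precision
\end{verbatim}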
To handle the divergent integral we note that
\[
{\int_0^\infty}d\omega'\frac{1}{\omega'}=\underset{\alpha\to0}{\lim}
{\int_0^\infty}d\omega'\frac{1}{\omega'}e^{i4M\alpha\ln(\omega')}=\left[\begin{array}{l}
y=\ln\left(\omega'\right)\\
dy=\frac{d\omega'}{\omega'}
\end{array}\right]=
\]
\[
=\underset{\alpha\to0}{\lim}{\int_{-\infty}^{\infty}}dye^{i4M\alpha y}=\underset{\alpha\to0}{\lim}2\pi\delta\left(4M\alpha\right)=\frac{\pi}{2M}\delta\left(0\right).
\]
Therefore,
\begin{equation}
\langle N_\omega^H\rangle=\frac{1}{e^{8M\omega\pi}-1}\frac{4M}{2\pi}
{\int_0^\infty}d\omega'\frac{1}{\omega'}=\frac{1}{e^{8M\omega\pi}-1}\delta\left(0\right)\label{eq:N_omega_1}.
\end{equation}
Again, the result is infinite because we considered plane waves. The
time of arrival has infinite uncertainty and we are therefore adding
up all the particles generated for an infinite amount of time. To deal
with this we can consider wave-packets centered in time
$u_n$ and frequency $\omega_{j}$ for
which the Bogoliubov coefficients are,
\[
\beta_{\omega_{j}\omega'}=\frac{1}{\sqrt{\epsilon}}
{\int_{j\epsilon}^{{\left(j+1\right)\epsilon}}}d\omega e^{u_n\omega i}\beta_{\omega\omega'}.
\]
We start computing the density matrix
\[
\rho^H_{\omega_1,\omega_2}={\int_0^\infty}d\omega'\beta_{\omega_{1}\omega'}\beta_{\omega_{2}\omega'}^{*}=\frac{1}{4\pi^{2}\sqrt{\omega_{1}\omega_{2}}}e^{-i\left(\omega_{1}-\omega_{2}\right)v_{0}}e^{-2\pi M\left(\omega_{1}+\omega_{2}\right)}\Gamma\left(1+4M\omega_{1}i\right)\Gamma\left(1-4M\omega_{2}i\right)\times
\]
\[
\times{\int_0^\infty}d\omega'\frac{1}{\omega'}e^{-4Mi\left(\omega_{1}-\omega_{2}\right)\ln\left(4M_{0}\omega'\right)}=\left[\begin{array}{l}
y=\ln\left(4M_{0}\omega'\right)\\
dy=\frac{d\omega'}{\omega'}
\end{array}\right]=
\]
\[
=\frac{1}{4\pi^{2}\sqrt{\omega_{1}\omega_{2}}}e^{-i\left(\omega_{1}-\omega_{2}\right)v_{0}}e^{-2\pi M\left(\omega_{1}+\omega_{2}\right)}\Gamma\left(1+4M\omega_{1}i\right)\Gamma\left(1-4M\omega_{2}i\right){\int_{-\infty}^\infty}dye^{-4Mi\left(\omega_{1}-\omega_{2}\right)y}=
\]
\begin{equation}
=\frac{1}{4\pi^{2}\omega_{1}}e^{-4\pi M\omega_{1}}\left|\Gamma\left(1+4M\omega_{1}i\right)\right|^{2}2\pi\delta\left(4M\left(\omega_{1}-\omega_{2}\right)\right)=\frac{1}{e^{8M\omega_{1}\pi}-1}\delta\left(\omega_{1}-\omega_{2}\right).\label{eq:density_matrix_H}
\end{equation}
Therefore,
\begin{equation}
\langle N_{\omega_j}^H\rangle={\int_0^\infty}d\omega'\beta_{\omega_{j}\omega'}\beta_{\omega_{j}\omega'}^{*}=
\frac{1}{\epsilon}{\int\int_{j\epsilon}^{\left(j+1\right)\epsilon}}d\omega_{1}d\omega_{2}
e^{u_n\left(\omega_{1}-\omega_{2}\right)i}\rho^H_{\omega_1,\omega_2}
=\frac{1}{\epsilon}{\int_{j\epsilon}^{\left(j+1\right)\epsilon}}\frac{1}{e^{8M\omega_{1}\pi}-1}d\omega_{1}\sim\frac{1}{e^{8M\omega_{j}\pi}-1}\label{eq:N_omega_2},
\end{equation}
which is the standard result for the Hawking radiation spectrum.
\section{Calculation without approximating $u(v)$}
We will carry out the computation of the Bogoliubov
coefficients using the exact expression for $u(v)$. This will be of
importance for the case with quantum fluctuations. This is because if
one looks at the expression of the time of arrival,
\begin{equation}
u(v)=v-4M\ln\left(\frac{v_{0}-v}{4M_{0}}\right),
\end{equation}
when one has quantum fluctuations, even close to the horizon, the
second term is not necessarily very large. For instance, if one
considers fluctuations of Planck length size and a Solar sized black
hole, it is around $100M$. Therefore it is not warranted to neglect
the first term as we did in the previous section. In this section we
will not consider quantum fluctuations yet. However, using the exact
expression allows us to compute the radiation emitted by a shell
far away from the horizon.
Starting with the
expression:
\[
\beta_{\omega\omega'}=-\frac{1}{2\pi}\sqrt{\frac{\omega'}{\omega}}
{\int_{-\infty}^{v_{0}}}dve^{i4M\omega\ln\left(\frac{v_{0}-v}{4M_{0}}\right)- i\omega'v}e^{-i\omega v},
\]
we change variables to
$x=\frac{v_{0}-v}{4M_{0}}$ and introduce a regulator $e^{-\epsilon
x}$. We get,
\begin{equation}
\beta_{\omega\omega'}=-\frac{4M_{0}}{2\pi}\sqrt{\frac{\omega'}{\omega}}e^{-i\left(\omega+\omega'\right)v_{0}}\underset{\epsilon\to0}{\lim}{\int_0^\infty}dxe^{i4M\omega \ln\left(x\right)}e^{-\left(\epsilon-i\left[\omega+\omega'\right]4M_{0}\right)x}.
\label{betaclasico}
\end{equation}
For $\omega\ll\omega'$ we recover Hawking's original
calculation. However, we can continue without approximating. Using
again (\ref{eq:propiedad_int_gamma}) we get,
\[
\beta_{\omega\omega'}=-\frac{4M_{0}}{2\pi}\sqrt{\frac{\omega'}{\omega}}e^{-i\left(\omega+\omega'\right)v_{0}}\Gamma(1+4M\omega i)\underset{\epsilon\to0}{\lim}\;e^{-(1+4M\omega i)\ln\left(\epsilon-i\left[\omega+\omega'\right]4M_{0}\right)}.
\]
And taking the limit,
\begin{equation}
\beta_{\omega\omega'}=-\frac{i}{2\pi}\frac{1}{\omega'+\omega}\sqrt{\frac{\omega'}{\omega}}e^{-i\left(\omega+\omega'\right)v_{0}}\Gamma(1+4M\omega i)e^{-2\pi M\omega}e^{-4M\omega i\ln\left(4M_{0}\left[\omega'+\omega\right]\right)}\label{eq:coef_bogol3}.
\end{equation}
To compare with Hawking's calculation we first compute
\[
\langle N_\omega^{CS}\rangle={\int_0^\infty}d\omega'\beta_{\omega\omega'}\beta_{\omega\omega'}^{*}=\frac{1}{4\pi^{2}}\frac{1}{\omega}\left|\Gamma(1+4M\omega i)\right|^{2}e^{-4\pi M\omega}{\int_0^\infty}d\omega'\frac{\omega'}{\left(\omega'+\omega\right)^{2}},
\]
where the superscript ``$CS$'' stands for classical shell.
The difference from the calculation in the previous section is
the argument of the last integral, which has no divergence at $\omega'=0$.
We can formally compute the divergent integral using the change
of variable $y=\ln\left(\omega'+\omega\right)$. We get,
\[
{\int_0^\infty}d\omega'\frac{\omega'}{\left(\omega'+\omega\right)^{2}}=
{\int_{\ln(\omega)}^{\infty}}dye^{-y}\left(e^{y}-\omega\right)=
{\int_{\ln(\omega)}^{\infty}}dy-1=
\]
\[
=\left.{\int_{\ln(\omega)}^{\infty}}dye^{i4M\alpha
y}\right|_{\alpha=0}-1=\left.
{\int_0^\infty}dye^{i4M\alpha
y}e^{i4M\alpha\ln(\omega)}\right|_{\alpha=0}-1=\frac{1}{4M}\left(\pi\delta\left(0\right)+{\rm
p.v.}\left(\frac{i}{0}\right)\right)-1,
\]
with ${\rm p.v.}$ the principal value.
Therefore,
\begin{equation}
\langle N_\omega^{CS}\rangle=\frac{1}{e^{8M\omega\pi}-1}\frac{4M}{2\pi}
{\int_0^\infty}d\omega'\frac{\omega'}{\left(\omega'+\omega\right)^2}=\frac{1}{e^{8M\omega\pi}-1}\left[\left(\frac{\delta\left(0\right)}{2}+{\rm
p.v.}\left(\frac{i}{2\pi0}\right)\right)-\frac{2M}{\pi}\right]\label{eq:N_omega3}.
\end{equation}
This is an infinite result but it looks different from Hawking's. To deal with the
infinities it is necessary to compute $\langle {N_{\omega_j}}^{CS}\rangle$
for a wave-packet of frequency
$\omega_{j}$. We start by computing the density matrix:
\[
\rho^{CS}_{\omega_1,\omega_2}={\int_0^\infty}d\omega'\beta_{\omega_{1}\omega'}\beta_{\omega_{2}\omega'}^{*}=\frac{1}{4\pi^{2}\sqrt{\omega_{1}\omega_{2}}}e^{-i\left(\omega_{1}-\omega_{2}\right)v_{0}}\Gamma\left(1+4M\omega_{1}i\right)\Gamma\left(1-4M\omega_{2}i\right)e^{-2\pi M\left[\omega_{1}+\omega_{2}\right]}\times
\]
\begin{equation}
\times{\int_0^\infty}d\omega'\frac{\omega'}{\left(\omega'+\omega_{1}\right)\left(\omega'+\omega_{2}\right)}e^{-4Mi\left[\omega_{1}\ln\left(4M_{0}\left[\omega'+\omega_{1}\right]\right)-\omega_{2}\ln\left(4M_{0}\left[\omega'+\omega_{2}\right]\right)\right]}.\label{casoclasico}
\end{equation}
Since the packet is centered in
$\omega_{j}$ with width $\epsilon\ll\omega_j$, we introduce $\Delta\omega=\omega_{2}-\omega_{1}$ and $\bar{\omega}=\frac{\omega_1+\omega_2}{2}$. As a consequence, the last
integral takes the form,
\[
{\int_0^\infty}d\omega'\frac{\omega'e^{-4Mi\left[\left(\bar{\omega}-\frac{\Delta\omega}{2}\right)\ln\left(4M_{0}\left[\omega'+\bar{\omega}-\frac{\Delta\omega}{2}\right]\right)-\left(\bar{\omega}+\frac{\Delta\omega}{2}\right)\ln\left(4M_{0}\left[\omega'+\bar{\omega}+\frac{\Delta\omega}{2}\right]\right)\right]}}{\left(\omega'+\bar{\omega}\right)^2-\left(\frac{\Delta\omega}{2}\right)^2}=
\]
\[
={\int_0^\infty}d\omega'\frac{\omega'e^{4Mi\Delta\omega \ln\left(4M_{0}\left[\omega'+\bar{\omega}\right]\right)}}{\left(\omega'+\bar{\omega}\right)^{2}}+O\left(\Delta\omega\right),
\]
where we have not expanded the exponential
$e^{4Mi\Delta\omega \ln\left(4M_{0}\left[\omega'+\bar{\omega}\right]\right)}$
since it controls the divergent part of the integral when $\Delta\omega\to 0$. Changing
variable to $y=\ln\left(4M_{0}\left[\omega'+\bar{\omega}\right]\right)$ the integral becomes,
\[
{\int_{\ln\left(4M_{0}\bar{\omega}\right)}^{\infty}}dy\left(1-4M_{0}\bar{\omega}e^{-y}\right)e^{4Mi\Delta\omega y}+O\left(\Delta\omega\right)=
\]
\[
={\int_0^\infty}dye^{4Mi\Delta\omega y}e^{4Mi\Delta\omega\ln\left(4M_{0}\bar{\omega}\right)}+\frac{e^{4Mi\Delta\omega\ln\left(4M_{0}\bar{\omega}\right)}}{-1+4M\Delta\omega i}+O\left(\Delta\omega\right)=
\]
\[
=\left[\pi\delta\left(4M\Delta\omega\right)+{\rm p.v.}\left(\frac{i}{4M\Delta\omega}\right)\right]e^{4Mi\Delta\omega\ln\left(4M_{0}\bar{\omega}\right)}+O\left(\Delta\omega^0\right).
\]
So, the divergent part of the density matrix when $\Delta\omega\to0$ is
\[
\rho^{CS}_{\omega_1,\omega_2}\sim\frac{1}{4\pi^{2}\bar{\omega}}e^{i\Delta\omega
v_{0}}\left|\Gamma\left(1+4M\bar{\omega}i\right)\right|^{2}e^{-4\pi
M\bar{\omega}}\left[\pi\delta\left(4M\Delta\omega\right)+{\rm p.v.}\left(\frac{i}{4M\Delta\omega}\right)\right]e^{4Mi\Delta\omega\ln\left(4M_{0}\bar{\omega}\right)}.
\]
\begin{equation}
=\frac{2M}{\pi}\frac{e^{4Mi\Delta\omega\ln\left(4M_{0}\bar{\omega}\right)}}{e^{8M\omega_{j}\pi}-1}\left[\pi\delta\left(4M\Delta\omega\right)+{\rm p.v.}\left(\frac{i}{4M\Delta\omega}\right)\right].\label{eq:density_matrix_CS}
\end{equation}
We proceed to compute $\langle N_{\omega_j}^{CS}\rangle$
by integrating both Bogoliubov coefficients over an interval around
$\omega_{j}$, using the approximation that the factors depending on $\bar{\omega}$ are constant, since the interval of integration is very small, ranging over $\omega_j\pm\frac{\epsilon-\vert\Delta\omega\vert}{2}$,
\[
\langle N_{\omega_j}^{CS}\rangle=
\frac{1}{\epsilon}{\int_{j\epsilon}^{\left(j+1\right)\epsilon}\int_{j\epsilon}^{\left(j+1\right)\epsilon}}d\omega_{1}d\omega_{2}e^{-u_n\Delta\omega i}\rho^{CS}_{\omega_1,\omega_2}\sim
\]
\[
\sim\frac{1}{4\pi^{2}\omega_{j}}\frac{1}{\epsilon}\left|\Gamma\left(1+4M\omega_{j}i\right)\right|^{2}e^{-4\pi M\omega_{j}}{\int_{j\epsilon}^{\left(j+1\right)\epsilon}\int_{j\epsilon}^{\left(j+1\right)\epsilon}}d\omega_{1}d\omega_{2}e^{-\frac{2\pi n}{\epsilon}\Delta\omega i}e^{i\Delta\omega v_{0}}\times
\]
\[
\times\frac{1}{4M}\left[\pi\delta\left(\Delta\omega\right)+{\rm p.v.}\left(\frac{ie^{4M\Delta\omega\ln\left(4M_{0}\bar{\omega}\right)i}}{\Delta\omega}\right)\right]=
\]
\[
\sim\frac{1}{2\pi\epsilon}\frac{1}{e^{8M\omega_{j}\pi}-1}{\int_{j\epsilon}^{\left(j+1\right)\epsilon}\int_{j\epsilon}^{\left(j+1\right)\epsilon}}d\omega_{1}d\omega_{2}e^{-\left[u_n-v_{0}-4M\ln\left(4M_{0}\bar{\omega}\right)\right]\Delta\omega
i}\left[\pi\delta\left(\Delta\omega\right)+{\rm p.v.}\left(\frac{i}{\Delta\omega}\right)\right].
\]
Changing variables to $\bar{\omega}$ and $\Delta\omega$ we get,
\[
\langle N_{\omega_j}^{CS}\rangle\sim\frac{1}{e^{8M\omega_{j}\pi}-1}\left[\frac{1}{2}+\frac{i}{2\pi\epsilon}
{\int_{-\epsilon}^{\epsilon}}d\left(\Delta\omega\right) {\rm p.v.}\left(\frac{1}{\Delta\omega}\right)
{\int_{\omega_{j}-\frac{\epsilon-\left|\Delta\omega\right|}{2}}^{\omega_{j}+\frac{\epsilon-\left|\Delta\omega\right|}{2}}}e^{-\left[u_n-v_{0}-4M\ln\left(4M_{0}\bar{\omega}\right)\right]\Delta\omega i}d\bar{\omega}\right]\sim
\]
\[
\sim\frac{1}{e^{8M\omega_{j}\pi}-1}\left[\frac{1}{2}+\frac{i}{2\pi}
{\int_{-\epsilon}^{\epsilon}}d\left(\Delta\omega\right) {\rm p.v.}\left(\frac{\epsilon-\left|\Delta\omega\right|}{\epsilon\Delta\omega}\right)e^{-\alpha\Delta\omega i}\right],
\]
where we defined
\begin{equation}
\alpha\equiv u_n-v_{0}-4M\ln\left(4M_{0}\omega_{j}\right)\label{alpha}.
\end{equation}
Notice that the indeterminate parameter $M_0$ appears; it corresponds to the choice of origin of the affine parameter at scri plus.
A further change of variable $t=\alpha\Delta\omega$ leads us to
\begin{equation}
\langle N_{\omega_j}^{CS}\rangle=\frac{1}{e^{8M\omega_{j}\pi}-1}\left[\frac{1}{2}+\frac{1}{\pi}{\rm Si}\left(\epsilon\alpha\right)+\frac{1}{\pi}\frac{\cos\left(\alpha\epsilon\right)-1}{\alpha\epsilon}\right]\label{eq:N_omega4}
\end{equation}
where ${\rm Si}$ is the sine integral. When
$\epsilon\alpha\to\infty$
we have that ${\rm Si}\left(\epsilon\alpha\right)\to\frac{\pi}{2}$ and
the expression goes to
\[\langle N_{\omega_j}^{CS}\rangle\to
\frac{1}{e^{8M\omega_{j}\pi}-1}.
\]
This happens when either $n\to+\infty$ or $\omega_{j}\to0$. That is,
at late times or in the deep infra-red regime.
On the
other hand,
when $n\to-\infty$ (a detector close to spatial infinity or very early
times) we have that ${\rm Si}\left(\epsilon\alpha\right)\to-\frac{\pi}{2}$
and therefore
\[
\langle N_{\omega_j}^{CS}\rangle \to0.
\]
We have obtained a closed form for the spectrum of the
radiation of the classical shell along its complete trajectory. It only becomes
thermal at late times. This agrees with previous
numerical results \cite{vachaspati}. Previous efforts had led to differing
predictions about whether or not the radiation is thermal \cite{previous}.
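Equation (\ref{eq:N_omega4}) can be evaluated directly. In the following minimal sketch the parameter values $M=M_0=1$, $v_0=0$, $\epsilon=0.1$ and $\omega_j=0.2$ are arbitrary illustrative choices of ours; the output interpolates between zero at early times and the Planck value at late times:
\begin{verbatim}
# Minimal sketch: evaluate <N_{omega_j}^{CS}> as a function of the
# packet arrival time u_n, using scipy's sine integral Si.
import numpy as np
from scipy.special import sici

M, M0, v0, eps, wj = 1.0, 1.0, 0.0, 0.1, 0.2   # illustrative values

def N_CS(u_n):
    alpha = u_n - v0 - 4.0 * M * np.log(4.0 * M0 * wj)
    si = sici(eps * alpha)[0]                   # Si(eps*alpha)
    planck = 1.0 / (np.exp(8.0 * np.pi * M * wj) - 1.0)
    return planck * (0.5 + si / np.pi
                     + (np.cos(alpha * eps) - 1.0) / (np.pi * alpha * eps))

for u_n in [-500.0, -50.0, 0.0, 50.0, 500.0]:
    print(u_n, N_CS(u_n))   # ~0 at early times -> Planck value at late times
\end{verbatim}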
\section{Radiation from the collapse of a quantum shell}
\subsection{The basic quantum operators}
A reduced phase-space analysis of the shell shows that the Dirac
observables $v_s$ and $M$ are canonically conjugate variables \cite{lwf}. We thus promote them to quantum operators satisfying,
\begin{equation}
\left[\widehat{M},\widehat{v}_{s}\right]=i\hbar\widehat{I}\label{eq:conmutacio_M_v},
\end{equation}
with $\widehat{I}$ the identity operator.
It will be more convenient to use the operator
$\widehat{v}_{0}=\widehat{v}_{s}-4\widehat{M}$ which is also conjugate
to $\widehat{M}$. We call the expectation values of these quantities
$\overline{M}\equiv\left\langle \hat{M}\right\rangle $
and $\overline{v}_{0}\equiv\left\langle \hat{v}_{0}\right\rangle $.
In terms of them we define the operator
\begin{equation}
\hat{u}\left(v,\widehat{v}_{0},\widehat{M}\right)=v\widehat{I}-2\left[\widehat{M}\ln\left(\frac{\widehat{v}_{0}-v\widehat{I}}{4M_{0}}\right)+\ln\left(\frac{\widehat{v}_{0}-v\widehat{I}}{4M_{0}}\right)\widehat{M}\right]\label{eq:operador_u},
\end{equation}
where $v$ is a real parameter and $M_0$ an arbitrary scale. This
operator represents the variable $u(v)$. Given a value of the
parameter $v$ the operator $\hat{u}$ is well defined in the basis
$\left\{ v_{0}\right\} _{v_{0}\in\mathbb{R}}$ of eigenstates of
$\hat{v}_{0}$ only for eigenvalues $v_0>v$. This is the relevant
region for the computation of Bogoliubov coefficients. It is however
convenient to provide an extension of the operator $\hat{u}$ to the
full range of $v_0$ so that one can work in the full Hilbert space of
the shell. The (quantum) Bogoliubov coefficients are independent of
such extension.
For instance, defining
the function $f_{\epsilon}(x)=\left\{ \begin{array}{l}
\ln(x),\;x\geq\epsilon\\
\ln(\epsilon),\;x<\epsilon
\end{array}\right.$ one can construct the operator
\begin{equation}
\hat{u}_{\epsilon}\left(v,\widehat{v}_{0},\widehat{M}\right)=v\widehat{I}-2\left[\widehat{M}f_{\epsilon}\left(\frac{\widehat{v}_{0}-v\widehat{I}}{4M_{0}}\right)+f_{\epsilon}\left(\frac{\widehat{v}_{0}-v\widehat{I}}{4M_{0}}\right)\widehat{M}\right]\label{eq:operador_u_epsilon},
\end{equation}
which extends $\hat{u}$ to the full Hilbert space. To understand the
physical meaning, we recall that for values of $v$ less than $v_0$ the
packets escape to scri, whereas for $v$ larger than $v_0$ they fall
into the black hole. The extension corresponds to considering particle
detectors that either live at scri or live on a time-like trajectory a
small distance outside the horizon. As we shall see, the Bogoliubov
coefficients will have a well-defined $\epsilon \to 0$ limit.
Next we seek the eigenstates of $\hat{u}_{\epsilon}$. We work with
wave-functions
$\psi\left(v_{0}\right)=\left\langle v_{0}\vert\psi\right\rangle $. The
operator $\hat{M}$ (conjugate to $\hat{v}_0$) is,
\begin{equation}
\left\langle v_{0}\vert\hat{M}\psi\right\rangle =i\hbar\frac{\partial\psi}{\partial v_{0}}\label{eq:operadorM}.
\end{equation}
The eigenstates of $\hat{u}_\epsilon$ are given by the equation
\[
\left\langle v_{0}\vert\hat{u}_{\epsilon}\psi_{u}\right\rangle =u\psi_{u}\left(v_{0}\right),
\]
that is,
\begin{equation}
v\psi_{u}\left(v_{0}\right)-2i\hbar\frac{\partial}{\partial v_{0}}\left[f_{\epsilon}\left(\frac{v_{0}-v}{4M_{0}}\right)\psi_{u}\left(v_{0}\right)\right]-2i\hbar f_{\epsilon}\left(\frac{v_{0}-v}{4M_{0}}\right)\frac{\partial\psi}{\partial v_{0}}=u\psi_{u}\left(v_{0}\right)\label{eq:ecuaci=0000F3n_autoestados_u},
\end{equation}
\[
v\psi_{u}\left(v_{0}\right)-4i\hbar f_{\epsilon}\left(\frac{v_{0}-v}{4M_{0}}\right)\frac{\partial\psi}{\partial v_{0}}-\frac{2i\hbar}{4M_{0}}f'_{\epsilon}\left(\frac{v_{0}-v}{4M_{0}}\right)\psi_{u}\left(v_{0}\right)=u\psi_{u}\left(v_{0}\right).
\]
It is useful to make a change of variable
$x=\frac{v_{0}-v}{4M_{0}}$ which leads to
\[
-\frac{4i\hbar}{4M_{0}}f_{\epsilon}\left(x\right)\frac{\partial\psi}{\partial x}-\frac{2i\hbar}{4M_{0}}f'_{\epsilon}\left(x\right)\psi_{u}\left(x\right)=\left(u-v\right)\psi_{u}\left(x\right).
\]
Defining $\phi_u(x)$ by
$\psi_{u}(x)=\frac{\phi_{u}(x)}{\sqrt{\left|f_{\epsilon}(x)\right|}}$
we get,
\[
\frac{\partial\phi_{u}}{\partial x}=\frac{iM_{0}}{\hbar}\frac{u-v}{f_{\epsilon}}\phi_{u},
\]
with general solution
\[
\phi_{u}(x)=\phi_{0}\exp\left(\frac{iM_{0}}{\hbar}(u-v)\int\frac{ds}{f_{\epsilon}(s)}\right).
\]
Substituting $f_\epsilon$ and going back to the original variables
\[
\psi_{u}(x)=\left\{ \begin{array}{l}
\frac{\psi_{0}^{I}}{\sqrt{\left|\ln(x)\right|}}\exp\left(\frac{iM_{0}}{\hbar}(u-v){\rm li}(x)\right),\quad x\geq\epsilon,\\
\frac{\psi_{0}^{II}}{\sqrt{\left|\ln(\epsilon)\right|}}\exp\left(\frac{iM_{0}}{\hbar}(u-v)\frac{x}{\ln(\epsilon)}\right),\quad x<\epsilon,
\end{array}\right.
\]
where $\phi_0$, $\psi_{0}^I$ and $\psi_0^{II}$ are independent complex
constants and
\begin{equation}
{\rm li}(x)={\int_0^x}\frac{dt}{\ln(t)}\label{eq:Li},
\end{equation}
is the logarithmic integral, which is plotted in figure (\ref{li}).
\begin{figure}
\includegraphics[scale=0.7]{logIntegral}
\caption{The logarithmic integral function.}
\label{li}
\end{figure}
The discontinuity of $\psi_u$ at $x=1$ introduces a degeneracy in the
eigenstates of $\hat{u}$. For each eigenvalue we can choose two
independent eigenstates,
\begin{equation}
\psi_{u}^{1}(x)=\left\{ \begin{array}{l}
\frac{1}{\sqrt{8\pi\hbar\left|\ln(\epsilon)\right|}}\exp\left(\frac{iM_{0}}{\hbar}(u-v)\frac{x-\epsilon}{\ln(\epsilon)}\right),\quad x<\epsilon\\
\frac{1}{\sqrt{8\pi\hbar\left|\ln(x)\right|}}\exp\left(\frac{iM_{0}}{\hbar}(u-v)\left[{\rm
li}\left(x\right)-{\rm li}\left(\epsilon\right)\right]\right),\quad\epsilon\leq x<1\\
0,\quad x\geq 1
\end{array}\right.\label{eq:autoestados_de_u_1}
\end{equation}
\begin{equation}
\psi_{u}^{2}(x)=\left\{ \begin{array}{l}
0,\quad x\leq 1\\
\frac{1}{\sqrt{8\pi\hbar\left|\ln(x)\right|}}\exp\left(\frac{iM_{0}}{\hbar}(u-v)\left[{\rm
li}\left(x\right)-{\rm li}\left(\epsilon\right)\right]\right),\quad x>1
\end{array}\right.\label{eq:autoestados_de_u_2}
\end{equation}
which we have chosen as orthonormal. We will adopt the notation
$\left|u,J\right\rangle _{\epsilon}$ with $J=1,2$ for these states.
\subsection{Operators associated with the Bogoliubov coefficients
and their expectation values}
On the previously described quantum space-time we will study Hawking radiation
associated with a scalar field. We will assume that the scalar field
sees a superposition of geometries corresponding to different masses
of the black hole. Therefore, to measure observables associated with
the field one needs to take their expectation value with respect to
the wave-function of the black hole.
In this subsection we will apply these ideas to the computation of the Bogoliubov coefficients, and in the next we will extend them to compute the density matrix. We will go from the usual Bogoliubov coefficient $\beta_{\omega\omega'}$ to the
operator $\hat{\beta}_{\omega\omega'}$. We will then compute its expectation
value on a wave-function packet associated with the black hole and
centered on the classical values
$\overline{M}$ and $\overline{v}_{0}$. We start with the expression
(\ref{refA}) and promote it to a well defined operator
\begin{equation}
\hat{\beta}_{\omega\omega'}=-\frac{1}{2\pi}\sqrt{\frac{\omega'}{\omega}}\underset{\epsilon\to0}{\lim}{\int_{-\infty}^{+\infty}}dv\theta\left(\widehat{v}_{0}-v\widehat{I}\right)e^{-i\omega\hat{u}_{\epsilon}(v)- i\omega'v}\theta\left(\widehat{v}_{0}-v\widehat{I}\right)\label{eq:coef_bogol_cuanticos}.
\end{equation}
We then consider a state $\Psi$ associated with the black hole and
compute the expectation value,
\[
\left\langle
\hat{\beta}\right\rangle_{\omega\omega'}=-\frac{1}{2\pi}\sqrt{\frac{\omega'}{\omega}}\underset{\epsilon\to0}{\lim}\left\langle
\Psi\right|
{\int_{-\infty}^{+\infty}}dv{\int_{-\infty}^{+\infty}}dv_{0}\left|v_{0}\right\rangle \left\langle v_{0}\right|\theta\left(\widehat{v}_{0}-v\widehat{I}\right)e^{-i\omega\hat{u}_{\epsilon}(v)- i\omega'v}\times
\]
\[
\times\underset{J=1,2}{\sum}{\int_{-\infty}^{+\infty}}du\left|u,J\right\rangle _{\epsilon\epsilon}\left\langle u,J\right|{\int_{-\infty}^{+\infty}}dv'_{0}\left|v'_{0}\right\rangle \left\langle v'_{0}\right|\theta\left(\widehat{v}_{0}-v\widehat{I}\right)\left|\Psi\right\rangle,
\]
where we have introduced bases of eigenstates of
$\hat{v}_{0}$
and $\hat{u}$.
This gives,
\[
\left\langle \hat{\beta}\right\rangle_{\omega\omega'}=-\frac{1}{2\pi}\sqrt{\frac{\omega'}{\omega}}\underset{\epsilon\to0}{\lim}{\int_{-\infty}^\infty\int_{-\infty}^\infty\int_{-\infty}^\infty\int_{-\infty}^\infty}dvdv_{0}dv'_{0}du\Psi^{*}(v_{0})\Psi(v'_{0})\theta\left(v_{0}-v\right)\theta\left(v'_{0}-v\right)\times
\]
\[
\times e^{-i\omega u - i\omega'v}\underset{J=1,2}{\sum}\psi_{u,J}(v_{0})\psi_{u,J}^{*}(v'_{0}),
\]
and changing variables
$x_{1}=\frac{v_{0}-v}{4M_{0}}$
and $x_{2}=\frac{v'_{0}-v}{4M_{0}}$
we get
\[
\left\langle \hat{\beta}\right\rangle_{\omega\omega'}=-\frac{\left(4M_{0}\right)^{2}}{2\pi}\sqrt{\frac{\omega'}{\omega}}\underset{\epsilon\to0}{\lim}{\int_{-\infty}^\infty}dve^{- i\omega'v}{\int_0^\infty\int_0^\infty}dx_{1}dx_{2}\Psi^{*}(4M_{0}x_{1}+v)\times
\]
\begin{equation}
\times\Psi(4M_{0}x_{2}+v){\int_{-\infty}^{+\infty}}due^{-i\omega u}\underset{J=1,2}{\sum}\psi_{u}^{J}(x_{1})\psi_{u}^{J*}(x_{2}).\label{genericexpression}
\end{equation}
The definition of the eigenstates $\psi_{u}^{I}$ reduces the integral
in
${\int_0^\infty\int_0^\infty}dx_{1}dx_{2}$ to
\[
{\int_0^\epsilon\int_0^\epsilon}dx_{1}dx_{2}+{\int_0^\epsilon}{\int_\epsilon^1}dx_{1}dx_{2}+{\int_\epsilon^1}{\int_0^\epsilon}dx_{1}dx_{2}+{\int_\epsilon^1\int_\epsilon^1}dx_{1}dx_{2}+{\int_1^\infty\int_1^\infty}dx_{1}dx_{2}.
\]
In the appendix we show that the first three integrals do not contribute
in the limit $\epsilon\to 0$. Therefore the calculation reduces to,
\[
\left\langle \hat{\beta}\right\rangle_{\omega\omega'}=-\frac{\left(4M_{0}\right)^{2}}{2\pi8\pi\hbar}\sqrt{\frac{\omega'}{\omega}}\underset{\epsilon\to0}{\lim}{\int_{-\infty}^\infty}dve^{- i\omega'v}\left({\int_\epsilon^1\int_\epsilon^1}dx_{1}dx_{2}+{\int_1^\infty\int_1^\infty}dx_{1}dx_{2}\right)\Psi^{*}(4M_{0}x_{1}+v)\times
\]
\[
\times\Psi(4M_{0}x_{2}+v){\int_{-\infty}^\infty}due^{-i\omega
u}\frac{1}{\sqrt{\left|\ln(x_{2})\right|\left|\ln(x_{1})\right|}}\exp\left(\frac{iM_{0}}{\hbar}(u-v)\left[{\rm
li}(x_{1})-{\rm li}(x_{2})\right]\right).
\]
Computing the integral in $u$ we get,
\[
\left\langle \hat{\beta}\right\rangle_{\omega\omega'}=-\frac{\left(4M_{0}\right)^{2}}{2\pi8\pi\hbar}\sqrt{\frac{\omega'}{\omega}}\underset{\epsilon\to0}{\lim}{\int_{-\infty}^\infty}dv\,e^{- i\omega'v}\left({\int_\epsilon^1\int_\epsilon^1}dx_{1}dx_{2}+{\int_1^\infty\int_1^\infty}dx_{1}dx_{2}\right)\Psi^{*}(4M_{0}x_{1}+v)\times
\]
\[
\times\Psi(4M_{0}x_{2}+v)\frac{2\pi\delta\left(\omega-\frac{M_{0}}{\hbar}
\left[{\rm li}(x_{1})-{\rm li}(x_{2})\right]\right)}
{\sqrt{\left|\ln(x_{2})\right|\left|\ln(x_{1})\right|}}e^{-i\omega
v}.
\]
Since ${\rm li}$ is invertible in $\left(0,1\right)$ and in $\left(1,+\infty\right)$
we can then integrate in $x_{2}$ to get
\[
\left\langle \hat{\beta}\right\rangle_{\omega\omega'}=-\frac{2M_{0}}{\pi}\sqrt{\frac{\omega'}{\omega}}{\int_{-\infty}^\infty}dve^{- i\omega'v}\left({\int_0^1}dx_{1}+{\int_1^\infty}dx_{1}\right)\times
\]
\[
\times\Psi^{*}(4M_{0}x_{1}+v)\Psi(4M_{0}x_{2}\left(x_{1}\right)+v)\sqrt{\frac{\left|\ln(x_{2})\right|}{\left|\ln(x_{1})\right|}}e^{-i\omega v},
\]
where $x_{2}\left(x_{1}\right)={\rm li}^{-1}\left[{\rm li}\left(x_{1}\right)-\frac{\omega\hbar}{M_{0}}\right]$
and we have used that $\frac{d}{dt}{\rm li}\left(t\right)=\frac{1}{\ln\left(t\right)}$; the absolute value $\left|\ln(x_{2})\right|$ enters through the Jacobian of the Dirac delta.
We redefine $x=x_{1}$ and
\begin{equation}
\bar{x}_\omega(x)={\rm li}^{-1}\left[{\rm li}\left(x\right)-\frac{\omega\hbar}{M_{0}}\right]\label{eq:inversaLi}.
\end{equation}
Therefore,
\[
\left\langle \hat{\beta}\right\rangle_{\omega\omega'}=-\frac{2M_{0}}{\pi}\sqrt{\frac{\omega'}{\omega}}{\int_0^\infty}dx\sqrt{\frac{\left|\ln(\bar{x}_\omega\left(x\right))\right|}{\left|\ln(x)\right|}}{\int_{-\infty}^\infty}dve^{-i\left[\omega+\omega'\right]v}\Psi^{*}(4M_{0}x+v)\Psi(4M_{0}\bar{x}_\omega\left(x\right)+v),
\]
where we have inverted the order of the integrals for convenience of
subsequent calculations. Finally, the change of variable $s\equiv v+2M_0\left[x+\bar{x}_\omega(x)\right]$ gives us
\[
\left\langle \hat{\beta}\right\rangle_{\omega\omega'}=-\frac{2M_{0}}{\pi}\sqrt{\frac{\omega'}{\omega}}{\int_0^\infty}dx\sqrt{\frac{\left|\ln(\bar{x}_\omega\left(x\right))\right|}{\left|\ln(x)\right|}}e^{i2M_0\left[\omega+\omega'\right]\left[x+\bar{x}_\omega(x)\right]}{\int_{-\infty}^\infty}ds e^{-i\left[\omega+\omega'\right]s}\Psi^{*}(s+2M_0\Delta_{\omega}(x))\Psi(s-2M_0\Delta_{\omega}(x)),
\]
with $\Delta_{\omega}(x)\equiv x-\bar{x}_\omega(x)$. To better connect this expression with the classical case we can make the general assumption that the wave-packet $\Psi$ of the shell is centered at time $\bar{v}_0$ and mass $\bar{M}$. We define $\Phi$ such that
\begin{equation}
\Psi(v_0)\equiv\Phi(v_0-\bar{v}_0)e^{-i\bar{M}\frac{v_0-\bar{v}_0}{\hbar}}\label{eq:redef_wavefunction_shell}.
\end{equation}
Now
\[
\left\langle \hat{\beta}\right\rangle_{\omega\omega'}=-\frac{2M_{0}e^{-i\left[\omega+\omega'\right]\bar{v}_0}}{\pi}\sqrt{\frac{\omega'}{\omega}}{\int_0^\infty}dx\sqrt{\frac{\left|\ln(\bar{x}_\omega\left(x\right))\right|}{\left|\ln(x)\right|}}e^{i4M_0\left[\omega+\omega'\right]x}e^{-i2M_0\left[\omega+\omega'\right]\Delta_{\omega}(x)}e^{i\frac{4\bar{M}M_0}{\hbar}\Delta_{\omega}(x)}\times
\]
\begin{equation}
\times{\int_{-\infty}^\infty}ds e^{-i\left[\omega+\omega'\right]s}\Phi^{*}(s+2M_0\Delta_{\omega}(x))\Phi(s-2M_0\Delta_{\omega}(x))\label{eq:coef_bogol_cuanticos_final}.
\end{equation}
As a possible wave-function for the black hole we consider a Gaussian centered at $\bar{v}_0$ and $\bar{M}$ whose $v_0$ representation is
\begin{equation}
\Psi_{b}\left(v_{0}\right)=\frac{1}{\left(\pi\sigma^{2}\right)^{\frac{1}{4}}}e^{-\frac{\left(v_{0}-\bar{v}_{0}\right)^{2}}{2\sigma^{2}}}e^{-i\bar{M}\frac{v_{0}-\bar{v_{0}}}{\hbar}}\label{eq:gaussiana}.
\end{equation}
Using this wave-function we get
\[
\left\langle \hat{\beta}\right\rangle_{\omega\omega'}=-\frac{2M_{0}e^{-i\left[\omega+\omega'\right]\bar{v}_0}}{\pi}\sqrt{\frac{\omega'}{\omega}}{\int_0^\infty}dx\sqrt{\frac{\left|\ln(\bar{x}_\omega\left(x\right))\right|}{\left|\ln(x)\right|}}e^{i4M_0\left[\omega+\omega'\right]x}e^{-i2M_0\left[\omega+\omega'\right]\Delta_{\omega}(x)}e^{i\frac{4\bar{M}M_{0}}{\hbar}\Delta_{\omega}(x)}\times
\]
\[
\times\frac{1}{\left(\pi\sigma^{2}\right)^{\frac{1}{2}}}e^{-\frac{4M_0^2\Delta_{\omega}(x)^2}{\sigma^2}} {\int_{-\infty}^\infty}ds e^{-i\left[\omega+\omega'\right]s}e^{-\frac{s^2}{\sigma^2}}.
\]
Computing the Gaussian integral
\[
\left\langle \hat{\beta}\right\rangle_{\omega\omega'}=-\frac{2M_{0}}{\pi}\sqrt{\frac{\omega'}{\omega}}e^{-i\left[\omega+\omega'\right]\bar{v}_{0}}e^{-\left[\omega+\omega'\right]^{2}\frac{\sigma^{2}}{4}}{\int_0^\infty}dx\sqrt{\frac{\left|\ln(\bar{x}_\omega\left(x\right))\right|}{\left|\ln(x)\right|}}\times
\]
\begin{equation}
\times e^{i4M_{0}\left[\omega+\omega'\right]x} e^{i\frac{4\bar{M}M_{0}}{\hbar}\Delta_{\omega}(x)}e^{-\frac{4M_{0}^{2}}{\sigma^{2}}\Delta_{\omega}(x)^{2}}e^{-i2M_{0}\left[\omega+\omega'\right]\Delta_{\omega}(x)}\label{eq:coef_bogol_gaussiana}.
\end{equation}
To check the consistency of this result we can get the classical limit
by taking $\hbar$ to zero and the width of the packet in both
canonical variables to zero as well,
\begin{equation}
\hbar \to 0, \ \sigma \to 0 \quad \text{with } \quad \frac{\hbar}{\sigma} \to 0 . \label{classlim}
\end{equation}
In that limit
$\bar{x}_\omega(x)={\rm li}^{-1}\left[{\rm li}(x)-\frac{\omega\hbar}{M_{0}}\right]\to
x$ and
$\frac{\Delta_{\omega}(x)}{\hbar}\to\frac{\omega}{M_{0}}\ln(x)$. Therefore,
\[
\left\langle \hat{\beta}\right\rangle_{\omega\omega'}\underset{\hbar\to0}{\longrightarrow}-\frac{4M_{0}}{2\pi}\sqrt{\frac{\omega'}{\omega}}e^{-i\left[\omega+\omega'\right]\bar{v_{0}}}{\int_0^\infty}dxe^{4M_{0}i\left[\omega+\omega'\right]x}e^{i4\bar{M}\omega \ln(x)}=\beta_{\omega\omega'}
\]
and we recover the classical expression (\ref{betaclasico}).
\subsection{Corrections to Hawking radiation: a first approach}
In the previous subsection we obtained the Bogoliubov coefficients in
the full quantum treatment and showed that we recover the classical
result in the classical limit (\ref{classlim}). Here we would like to
study deviations from the classical behaviour. To do so, we will use the
expectation values derived in the previous section. This is only a
first approximation since the correct expression involves the
expectation value of products of the operators associated with the
Bogoliubov coefficients. We will later see that this implies an
important difference and an interesting example of how the quantum
fluctuations may be determinant and lead to significant departures
from the mean field approach.
We will consider the
example of a Gaussian wave-packet for the wave-function of the shell and
arrive at some general conclusions. Then, to maintain tractable
expressions, we will restrict attention to ``extreme'' cases of the
latter: one with the Gaussian very peaked in mass (with large
dispersion in $v_0$) and the other with the Gaussian very peaked in
$v_0$ (with large dispersion in the mass).
Let us start with some general considerations about the expectation
value of the operator associated with the Bogoliubov coefficients.
Expression (\ref{eq:coef_bogol_gaussiana}) differs in several ways from the classical limit (\ref{betaclasico}), especially in its dependence on the frequency $\omega'$. Let us focus on the integrand
$$\sqrt{\frac{\left|\ln(\bar{x}_\omega\left(x\right))\right|}{\left|\ln(x)\right|}}e^{i4M_{0}\left[\omega+\omega'\right]x} e^{i\frac{4\bar{M}M_{0}}{\hbar}\Delta_{\omega}(x)}e^{-\frac{4M_{0}^{2}}{\sigma^{2}}\Delta_{\omega}(x)^{2}}e^{-i2M_{0}\left[\omega+\omega'\right]\Delta_{\omega}(x)}.$$
Taking into account that
$$\Delta_{\omega}(x)\underset{x\to0}{\longrightarrow} {\rm li}^{-1}\left(-\frac{\hbar\omega}{M_0}\right)$$
$$\Delta_{\omega}(x)\underset{x\to+\infty}{\sim} \frac{\hbar\omega}{M_0}\ln(x),$$
and remembering that
$$\bar{x}_\omega(x)={\rm li}^{-1}\left({\rm li}(x)-\frac{\hbar\omega}{M_0}\right),$$
we see that the integrand vanishes when $x\to0$; moreover, $\sqrt{\frac{\left|\ln(\bar{x}_\omega\left(x\right))\right|}{\left|\ln(x)\right|}}$ is bounded by $1$, and finally
$$e^{-\frac{4M_{0}^{2}}{\sigma^{2}}\Delta_{\omega}(x)^{2}}\underset{x\to+\infty}{\sim}e^{-\frac{4\hbar^2}{\sigma^{2}}\ln(x)^{2}}.$$
Therefore the integral has a bound (independent of $\omega'$) given by
$${\int_0^\infty}dxe^{-\frac{4M_{0}^{2}}{\sigma^{2}}\Delta_{\omega}(x)^{2}}.$$
This fact, together with the exponential factor
$e^{-\left[\omega+\omega'\right]^{2}\frac{\sigma^{2}}{4}}$ outside
the integral, ensures exponential suppression of large $\omega'$
contributions. The integral also lacks the $\frac{1}{\omega'+\omega}$
dependence that the classical expression has since setting
$\omega'=\omega=0$ inside the integral still gives us a finite result.
One quantity that is extremely sensitive to these differences is
the total number of emitted particles per unit frequency. If we compute it using the
expectation value of the Bogoliubov coefficients it will be given
by
\begin{equation}
\left\langle N_{\omega}^{AQS}\right\rangle ={\int_0^\infty}d\omega'\left\langle \hat{\beta}\right\rangle _{\omega\omega'}\left\langle \hat{\beta}\right\rangle _{\omega\omega'}^{*}\label{eq:N_AQS}
\end{equation}
where the superscript ``AQS'' stands for Approximate Quantum
Shell. The reason to call it approximate is that the correct way to
compute it would be with the expectation value of the product of
Bogoliubov coefficients instead of the product of expectation
values. We will address this important issue in the next section,
but for now we will assume that fluctuations are small and this is a good
approximation.
Given the previous general remarks about Bogoliubov coefficients we
conclude $\left\langle N_{\omega}^{AQS}\right\rangle$ is not divergent
as in the classical expression (\ref{eq:N_omega3}) but finite, which is
a big departure from eternal Hawking radiation.\\
A more explicit analysis can be performed with a state that is
squeezed with large dispersion in the position of the shell and very
peaked in the mass.
Specifically, we will consider the case where the shell is in a Gaussian (\ref{eq:gaussiana}) squeezed state with large dispersion in $v_0$ and small dispersion in $M$. The leading quantum correction for such states is obtained by taking the limit $\hbar \to 0$ with
\begin{equation}
\Delta v_0 = \sigma = \text{constant} = Z \ell_{\text{Planck}} \; , \; Z \gg 1 \, ; \quad \Delta M= \hbar/\sigma.\label{eq:squeezed_state_limit}
\end{equation}
Even though this limit is
different from the one we took following (\ref{eq:coef_bogol_gaussiana}), it has similarities with it. The
terms inside the integral go to their classical values but the
external factor involving $\sigma$ now remains. One then finds that (\ref{eq:coef_bogol_gaussiana}) goes to:
\begin{equation}
\left\langle \hat{\beta}\right\rangle_{\omega\omega'}\to e^{-\left[\omega+\omega'\right]^{2}\frac{\sigma^{2}}{4}}\beta_{\omega\omega'}. \label{betasq}
\end{equation}
The deviation from the classical Bogoliubov coefficients is only
through a multiplicative factor that disappears in the classical limit
where $\sigma \to 0$. For non-zero $\sigma$ the factor suppresses
frequencies greater than $1/\sigma$. This produces important
corrections to the calculation of Hawking radiation as we already
mentioned. However this calculation is based on an approximation in
which we computed the square of the expectation value of the
Bogoliubov coefficients instead of the expectation value of the
square. It turns out this approximation breaks down. We present
detailed calculations in the appendix. Here we just outline the
calculation.
Estimating the expectation value of the number operator using expression (\ref{betasq}) we get,
\[
\left\langle N_{\omega}^{AQS}\right\rangle ={\int_0^\infty}d\omega'\left\langle \hat{\beta}\right\rangle _{\omega\omega'}\left\langle \hat{\beta}\right\rangle _{\omega\omega'}^{*}=\frac{1}{4\pi^{2}}\frac{1}{\omega}\left|\Gamma(1+4\bar{M}\omega i)\right|^{2}e^{-4\pi \bar{M}\omega}{\int_0^\infty}d\omega'\frac{\omega'e^{-\left[\omega+\omega'\right]^{2}\frac{\sigma^{2}}{2}}}{\left(\omega'+\omega\right)^{2}}=\]
\begin{equation}
=\frac{1}{e^{8\bar{M}\pi\omega}-1}\frac{2\bar{M}}{\pi}{\int_0^\infty}d\omega'\frac{\omega'e^{-\left[\omega+\omega'\right]^{2}\frac{\sigma^{2}}{2}}}{\left(\omega'+\omega\right)^{2}}.\label{eq:Number of particles_AQS}
\end{equation}
This expression has the same pre-factor Hawking radiation has but with $\bar{M}$ in the role of mass. However, unlike (\ref{eq:N_omega3}) this is a finite expression for all $\omega\neq0$ and has a logarithmic divergence when $\omega\to0$. Furthermore, it has a $\exp(-\frac{\omega^2\sigma^2}{2})$ dependence when $\omega\to+\infty$ instead of the usual $\exp(-8\bar{M}\pi\omega)$ for Hawking radiation. \\
Since we are interested in the behaviour of the Hawking radiation as a
function of time, it is convenient to introduce wave packets as we
considered before, and to compute the number of particles at
time $u_n$ around $\omega_j$, given by,
\[
\langle N_{\omega_{j}}^{AQS}\rangle=\frac{1}{\epsilon}
{\int_{j\epsilon}^{\left(j+1\right)\epsilon}\int_{j\epsilon}^{\left(j+1\right)\epsilon}}d\omega_{1}d\omega_{2}e^{-u_n\Delta\omega i}\left\langle\rho^{AQS}_{\omega_1,\omega_2}\right\rangle.
\]
Using the results in appendix 2 it can be computed explicitly,
yielding,
\[
\langle N_{\omega_{j}}^{AQS}\rangle
=\frac{\bar{M}\epsilon}{\pi}\frac{1}{e^{8\bar{M}\pi\omega_{j}}-1}
{\int_1^\infty}dy\frac{e^{-\frac{\omega_{j}^{2}\sigma^{2}}{2}y}}{y}\left\{\frac{\sin\left[\frac{\epsilon}{2}\left(\alpha-2\bar{M}\ln\left(y\right)\right)\right]}{\frac{\epsilon}{2}\left(\alpha-2\bar{M}\ln\left(y\right)\right)}\right\} ^{2}.
\]
Here $\alpha$ is the same quantity defined in equation (\ref{alpha}),
with $M$ and $v_0$ replaced by their respective expectation values in
the Gaussian state given above.
The presence of the factor
$\sin^{2}(a)/a^{2}$
and the decreasing exponential imply that the integral decreases when
$\alpha$ grows and also drastically decreases when
$\alpha<0$. The latter is a result we already knew from the classical
case, but the former is a result of the quantum nature of the black
hole since it is not present if
$\sigma=0$. Figure (\ref{nwj}) shows the departure from the classical result that appears when one computes the frequency distribution starting from $\left\langle\hat{\beta}\right\rangle$.
We can estimate the time of emission for each frequency using both
extreme cases. In the appendix we also show that these features are robust
with respect to the choice of the quantum state, by considering squeezed
states with large dispersion in the mass, which is the opposite of the
choice considered here.
However, as we shall see in the next section, the decrease
in emission for late time is an artifact of the approximation
considered that neglects the fluctuations of the number of particles.
\begin{figure}
\includegraphics[height=8cm]{factor_in_N_Omega}
\caption{This plot shows the departure from the classical result of $N^{AQS}_{\omega_j}/N^{H}_{\omega_j}$. We have considered $\omega_j$ corresponding to the $\lambda$ of maximum emission ($\lambda_m \sim 16 R_s$), the frequency interval $\epsilon = c/R_s$ and the shell's position uncertainty $\sigma= 5 R_s \times 10^{-38}$ ($\sim 3 l_P$ for $R_s= 1 km$). Note that the time step is $2\pi/\epsilon$.}
\label{nwj}
\end{figure}
\section{Computing the expectation value of the density matrix in the
complete quantum treatment}
In this section we will obtain an exact expression for the expectation
value of the density matrix with the same technique used to compute
the expectation value of Bogoliubov coefficients. From its diagonal
terms we can compute the number of particles produced as a function of
frequency.
From expression (\ref{eq:coef_bogol_cuanticos}) for the operator associated with a Bogoliubov coefficient we can compute the expectation value of the density matrix as
\[
\left\langle \rho_{\omega_1\omega_2}^{QS}\right\rangle=\int_{0}^{\infty} d\omega'\left\langle\hat{\beta}_{\omega_1\omega'}\hat{\beta}_{\omega_2\omega'}^{*}\right\rangle,
\]
where $QS$ stands for quantum shell. The full expression is
\[
\left\langle \rho_{\omega_1\omega_2}^{QS}\right\rangle=\frac{1}{(2\pi)^2}\int_{0}^{\infty} d\omega'\frac{\omega'}{\sqrt{\omega_1\omega_2}}\left\langle
\Psi\right|
{{\int\int}_{-\infty}^{+\infty}}dvdv'{\int_{-\infty}^{+\infty}}dv_{0}\left|v_{0}\right\rangle \left\langle v_{0}\right|\theta\left(\widehat{v}_{0}-v\widehat{I}\right)e^{-i\omega_1\hat{u}(v)- i\omega'v}\times
\]
\[
\times \underset{J=1,2}{\sum}{\int_{-\infty}^{+\infty}}du\left|u,J\right\rangle\left\langle u,J\right|{\int_{-\infty}^{+\infty}}dv''_{0}\left|v''_{0}\right\rangle \left\langle v''_{0}\right|\theta\left(\widehat{v}_{0}-v\widehat{I}\right){\int_{-\infty}^{+\infty}}dv'''_{0}\left|v'''_{0}\right\rangle \left\langle v'''_{0}\right|\theta\left(\widehat{v}_{0}-v'\widehat{I}\right)\times
\]
\[
\times\underset{L=1,2}{\sum}{\int_{-\infty}^{+\infty}}du'\left|u',L\right\rangle\left\langle u',L\right| e^{i\omega_2\hat{u}(v')+ i\omega'v'}{\int_{-\infty}^{+\infty}}dv'_{0}\left|v'_{0}\right\rangle \left\langle v'_{0}\right|\theta\left(\widehat{v}_{0}-v'\widehat{I}\right)\left|\Psi\right\rangle.
\]
Here we have considered bases of eigenstates of $\hat{v}_{0}$ and $\hat{u}$ and we have omitted the $\epsilon$ dependence in the $\hat{u}$ eigenstates. Arguments identical to the ones used in the appendix allow us to do so. Simplifying the expression we get
\[
\left\langle \rho_{\omega_1\omega_2}^{QS}\right\rangle=\frac{1}{(2\pi)^2}\int_{0}^{\infty} d\omega'\frac{\omega'}{\sqrt{\omega_1\omega_2}}
{{\int}_{-\infty}^{+\infty}}dvdv'dv_0dv'_0dv''_0dv'''_0\theta\left(v_{0}-v\right)\theta\left(v''_{0}-v\right)\theta\left(v'''_{0}-v'\right)\theta\left(v'_{0}-v'\right)\delta\left(v''_{0}-v'''_0\right)\times
\]
\[
\times{\int}_{-\infty}^{+\infty}dudu' e^{-i\omega_1 u - i\omega'v}\underset{J=1,2}{\sum}\psi_{u,J}(v_{0})\psi_{u,J}^{*}(v''_{0}) e^{i\omega_2 u' + i\omega'v'}\underset{L=1,2}{\sum}\psi_{u',L}(v'''_{0})\psi_{u',L}^{*}(v'_{0})\Psi^{*}(v_0)\Psi(v'_0).
\]
The change of variables $x_{1}=\frac{v_{0}-v}{4M_{0}}$, $x_{2}=\frac{v''_{0}-v}{4M_{0}}$, $x_{3}=\frac{v'''_{0}-v'}{4M_{0}}$ and $x_{4}=\frac{v'_{0}-v'}{4M_{0}}$
take us to
\[
\left\langle \rho_{\omega_1\omega_2}^{QS}\right\rangle=\frac{\left(4M_0\right)^4}{(2\pi)^2}\int_{0}^{\infty} d\omega'\frac{\omega'}{\sqrt{\omega_1\omega_2}}
{{\int}_{-\infty}^{+\infty}}dvdv'{\int}_{0}^{+\infty}dx_1dx_2dx_3dx_4\delta\left(4M_0\left[x_2-x_3\right]+v-v'\right)\times
\]
\[
\times{\int}_{-\infty}^{+\infty}dudu' e^{-i\omega_1 u - i\omega'v}\underset{J=1,2}{\sum}\psi_{u,J}(x_1)\psi_{u,J}^{*}(x_2) e^{i\omega_2 u' + i\omega'v'}\underset{L=1,2}{\sum}\psi_{u',L}(x_3)\psi_{u',L}^{*}(x_4)\Psi^{*}(4M_{0}x_{1}+v)\Psi(4M_{0}x_{4}+v').
\]
Using expressions (\ref{eq:autoestados_de_u_1}) and (\ref{eq:autoestados_de_u_2}) for the eigenfunctions of the $\hat{u}$ operator
\[
\left\langle \rho_{\omega_1\omega_2}^{QS}\right\rangle=\frac{\left(4M_0\right)^4}{(16\pi^2\hbar)^2}\int_{0}^{\infty} d\omega'\frac{\omega'}{\sqrt{\omega_1\omega_2}}
{{\int}_{-\infty}^{+\infty}}dvdv'{\int}_{0}^{+\infty}dx_1dx_2{\int}_{0}^{+\infty}dx_3dx_4\delta\left(4M_0\left[x_2-x_3\right]+v-v'\right){\int}_{-\infty}^{+\infty}dudu' \times
\]
\[
\times e^{-i\omega_1 u - i\omega'v}e^{i\omega_2 u' + i\omega'v'}\frac{\exp\left(\frac{iM_{0}}{\hbar}(u-v)\left[{\rm
li}(x_{1})-{\rm li}(x_{2})\right]\right)\exp\left(\frac{iM_{0}}{\hbar}(u'-v')\left[{\rm
li}(x_{3})-{\rm li}(x_{4})\right]\right)}{\sqrt{\left|\ln(x_{1})\right|\left|\ln(x_{2})\right|\left|\ln(x_{3})\right|\left|\ln(x_{4})\right|}}\Psi^{*}(4M_{0}x_{1}+v)\Psi(4M_{0}x_{4}+v').
\]
Integrating in $u$ and $u'$ we get,
\[
\left\langle \rho_{\omega_1\omega_2}^{QS}\right\rangle=\frac{\left(4M_0\right)^4}{(8\pi\hbar)^2}\int_{0}^{\infty} d\omega'\frac{\omega'}{\sqrt{\omega_1\omega_2}}
{{\int}_{-\infty}^{+\infty}}dvdv'{\int}_{0}^{+\infty}dx_1dx_2{\int}_{0}^{+\infty}dx_3dx_4\delta\left(4M_0\left[x_2-x_3\right]+v-v'\right)\times
\]
\[
\times e^{- i\left[\omega'+\omega_1\right]v}e^{i\left[\omega'+\omega_2\right]v'}\frac{\delta\left(\omega_1-\frac{M_{0}}{\hbar}\left[{\rm
li}(x_{1})-{\rm li}(x_{2})\right]\right)\delta\left(\omega_2+\frac{M_{0}}{\hbar}\left[{\rm
li}(x_{3})-{\rm li}(x_{4})\right]\right)}{\sqrt{\left|\ln(x_{1})\right|\left|\ln(x_{2})\right|\left|\ln(x_{3})\right|\left|\ln(x_{4})\right|}}\Psi^{*}(4M_{0}x_{1}+v)\Psi(4M_{0}x_{4}+v').
\]
Since ${\rm li}$ is invertible in $\left(0,1\right)$ and in $\left(1,+\infty\right)$
we can integrate in $x_{2}$ and $x_3$ to get
\[
\left\langle \rho_{\omega_1\omega_2}^{QS}\right\rangle=\frac{\left(2M_0\right)^2}{\pi^2}\int_{0}^{\infty} d\omega'\frac{\omega'}{\sqrt{\omega_1\omega_2}}
{{\int}_{-\infty}^{+\infty}}dvdv'{\int}_{0}^{+\infty}dx_1{\int}_{0}^{+\infty}dx_4\delta\left(4M_0\left[x_2(x_1)-x_3(x_4)\right]+v-v'\right)\times
\]
\[
\times e^{- i\left[\omega'+\omega_1\right]v}e^{i\left[\omega'+\omega_2\right]v'}\sqrt{\frac{\left|\ln(x_{2}(x_1))\right|\left|\ln(x_{3}(x_4))\right|}{\left|\ln(x_{1})\right|\left|\ln(x_{4})\right|}}\Psi^{*}(4M_{0}x_{1}+v)\Psi(4M_{0}x_{4}+v'),
\]
where $x_{2}\left(x_{1}\right)={\rm li}^{-1}\left[{\rm li}\left(x_{1}\right)-\frac{\omega_1\hbar}{M_{0}}\right]$, $x_{3}\left(x_{4}\right)={\rm li}^{-1}\left[{\rm li}\left(x_{4}\right)-\frac{\omega_2\hbar}{M_{0}}\right]$
and we have used that $\frac{d}{dt}{\rm li}\left(t\right)=\frac{1}{\ln\left(t\right)}$, with the absolute values arising from the Jacobians of the Dirac deltas.
We redefine $x=x_{1}$, $x'=x_{4}$ and then
\[
\left\langle \rho_{\omega_1\omega_2}^{QS}\right\rangle=\frac{\left(2M_0\right)^2}{\pi^2}\int_{0}^{\infty} d\omega'\frac{\omega'}{\sqrt{\omega_1\omega_2}}
{{\int}_{-\infty}^{+\infty}}dvdv'{\int}_{0}^{+\infty}dxdx'\delta\left(4M_0\left[\bar{x}_{\omega_1}(x)-\bar{x}_{\omega_2}(x')\right]+v-v'\right)\times
\]
\[
\times e^{- i\left[\omega'+\omega_1\right]v}e^{i\left[\omega'+\omega_2\right]v'}\sqrt{\frac{\left|\ln(\bar{x}_{\omega_1}(x))\right|\left|\ln(\bar{x}_{\omega_2}(x'))\right|}{\left|\ln(x)\right|\left|\ln(x')\right|}}\Psi^{*}(4M_{0}x+v)\Psi(4M_{0}x'+v')
.\]
Integrating in $v'$
\[
\left\langle \rho_{\omega_1\omega_2}^{QS}\right\rangle=\frac{\left(2M_0\right)^2}{\pi^2}\int_{0}^{\infty} d\omega'\frac{\omega'}{\sqrt{\omega_1\omega_2}}
{\int}_{0}^{+\infty}dxdx'e^{-i4M_0\left[\omega'+\omega_2\right]x'}e^{i4M_0\left[\omega'+\omega_2\right]x}
e^{i4M_0\left[\omega'+\omega_2\right]\Delta_{\omega_1\omega_2}(x,x')}\times
\]
\[
\times\sqrt{\frac{\left|\ln(\bar{x}_{\omega_1}(x))\right|\left|\ln(\bar{x}_{\omega_2}(x'))\right|}{\left|\ln(x)\right|\left|\ln(x')\right|}} {{\int}_{-\infty}^{+\infty}}dv e^{- i\left[\omega_1-\omega_2\right]v}\Psi^{*}(4M_{0}x+v)\Psi(4M_{0}x+v+4M_0\Delta_{\omega_1\omega_2}(x,x'))
\]
where $\Delta_{\omega_1\omega_2}(x,x')=\Delta_{\omega_2}(x')-\Delta_{\omega_1}(x)$. Now, changing variable $v$ to $s=v+4M_0x+2M_0\Delta_{\omega_1\omega_2}(x,x')$
\[
\left\langle \rho_{\omega_1\omega_2}^{QS}\right\rangle=\frac{\left(2M_0\right)^2}{\pi^2}\int_{0}^{\infty} d\omega'\frac{\omega'}{\sqrt{\omega_1\omega_2}}
{\int}_{0}^{+\infty}dxdx'e^{-i4M_0\left[\omega'+\omega_2\right]x'}e^{i4M_0\left[\omega'+\omega_1\right]x}
e^{i4M_0\left[\omega'+\bar{\omega}\right]\Delta_{\omega_1\omega_2}(x,x')}\times
\]
\[
\times\sqrt{\frac{\left|\ln(\bar{x}_{\omega_1}(x))\right|\left|\ln(\bar{x}_{\omega_2}(x'))\right|}{\left|\ln(x)\right|\left|\ln(x')\right|}} {{\int}_{-\infty}^{+\infty}}ds e^{- i\left[\omega_1-\omega_2\right]s}\Psi^{*}(s-2M_0\Delta_{\omega_1\omega_2}(x,x'))\Psi(s+2M_0\Delta_{\omega_1\omega_2}(x,x'))
\]
where $\bar{\omega}=\frac{\omega_1+\omega_2}{2}$. Finally, using definition (\ref{eq:redef_wavefunction_shell}) we get
\[
\left\langle \rho_{\omega_1\omega_2}^{QS}\right\rangle=\frac{\left(2M_0\right)^2e^{- i\left[\omega_1-\omega_2\right]\bar{v}_0}}{\pi^2\sqrt{\omega_1\omega_2}}\int_{0}^{\infty} d\omega'\omega'
{\int}_{0}^{+\infty}dxdx'e^{-i4M_0\left[\omega'+\omega_2\right]x'}e^{i4M_0\left[\omega'+\omega_1\right]x}
e^{i4M_0\left[\omega'+\bar{\omega}\right]\Delta_{\omega_1\omega_2}(x,x')} \times
\]
\begin{equation}
\times e^{-i\frac{4M_0\bar{M}}{\hbar}\Delta_{\omega_1\omega_2}(x,x')}\sqrt{\frac{\left|\ln(\bar{x}_{\omega_1}(x))\right|\left|\ln(\bar{x}_{\omega_2}(x'))\right|}{\left|\ln(x)\right|\left|\ln(x')\right|}} {{\int}_{-\infty}^{+\infty}}ds e^{- i\left[\omega_1-\omega_2\right]s}\Phi^*(s-2M_0\Delta_{\omega_1\omega_2}(x,x'))
\Phi(s+2M_0\Delta_{\omega_1\omega_2}(x,x')).
\label{eq:matriz_densidad_quantum}
\end{equation}
where $\Delta\omega=\omega_2-\omega_1$. Taking again the Gaussian wave-packet (\ref{eq:gaussiana}) as an example, we get
\[
\left\langle \rho_{\omega_1\omega_2}^{QS}\right\rangle=\frac{\left(2M_0\right)^2e^{ i\Delta\omega\bar{v}_0}e^{-\frac{\Delta\omega^2\sigma^2}{4}}}{\pi^2\sqrt{\omega_1\omega_2}}\int_{0}^{\infty} d\omega'\omega'
{\int}_{0}^{+\infty}dxdx'e^{i4M_0\left[\omega'+\omega_1\right]x}e^{-i4M_0\left[\omega'+\omega_2\right]x'}\times
\]
\begin{equation}
\times e^{i4M_0\left[\omega'+\bar{\omega}\right]\Delta_{\omega_1\omega_2}(x,x')} e^{-i\frac{4M_0\bar{M}}{\hbar}\Delta_{\omega_1\omega_2}(x,x')}\sqrt{\frac{\left|\ln(\bar{x}_{\omega_1}(x))\right|\left|\ln(\bar{x}_{\omega_2}(x'))\right|}{\left|\ln(x)\right|\left|\ln(x')\right|}}e^{-\frac{4M_0^2\Delta_{\omega_1\omega_2}(x,x')^2}{\sigma^2}}.\label{eq:matriz_densidad_quantum_gaussiana}
\end{equation}
This is the final result for the expectation value of the density
matrix in the complete quantum treatment.
From this expression we can compute the classical limit (\ref{classlim}).
In that limit
$\bar{x}_{\omega_1}(x)\to x$, $\bar{x}_{\omega_2}(x')\to x'$ and $\frac{\Delta_{\omega_1\omega_2}(x,x')}{\hbar}\to\frac{\omega_2\ln(x')-\omega_1\ln(x)}{M_{0}}$. Therefore,
\[
\left\langle \rho_{\omega_1\omega_2}^{QS}\right\rangle=\frac{\left(2M_0\right)^2e^{ i\Delta\omega\bar{v}_0}}{\pi^2\sqrt{\omega_1\omega_2}}\int_{0}^{\infty} d\omega'\omega'
{\int}_{0}^{+\infty}dxe^{i4M_0\left[\omega'+\omega_1\right]x}e^{i4\bar{M}\omega_1\ln(x)}{\int}_{0}^{+\infty}dx'e^{-i4M_0\left[\omega'+\omega_2\right]x'}e^{-i4\bar{M}\omega_2\ln(x')}
\]
which is the classical expression for the density matrix
\[
\rho^{CS}_{\omega_1,\omega_2}={\int_0^\infty}d\omega'\beta_{\omega_{1}\omega'}\beta_{\omega_{2}\omega'}^{*}\]
with $\beta_{\omega\omega'}$ given by (\ref{betaclasico}).
We analyze the consequences of these calculations in the next section.
\section{Corrections to Hawking radiation due to the quantum
background}
We have studied the corrections to Hawking radiation using the
approximate expression (\ref{eq:approx_density matrix}) discussed in
appendix 2. Now we can do
the same calculation from the exact expression
(\ref{eq:matriz_densidad_quantum_gaussiana}). As in the previous
section we begin with some general remarks about the result for a
Gaussian state and then explore the same squeezed states we considered
before.
Unlike the density matrix constructed from
(\ref{eq:coef_bogol_gaussiana}), expression
(\ref{eq:matriz_densidad_quantum_gaussiana}) has a double integral
that cannot be separated in the $x$ and $x'$ variables. But the most
significant differences are the missing $\omega'$ dependence in the
exponential
$$e^{-\frac{\Delta\omega^2\sigma^2}{4}},$$ and the exponential inside
the
double integral
$$e^{-\frac{4M_0^2\left[\Delta_{\omega_1\omega_2}(x,x')\right]^2}{\sigma^2}}.$$
The first point significantly changes the $\omega'$ integral. The
second expression does not make the integrand fall rapidly when
$x,x'\to+\infty$ because the exponential remains constant in the
directions given by the equation
$$\Delta_{\omega_1\omega_2}(x,x')=\Delta_{\omega_2}(x')-\Delta_{\omega_1}(x)={\rm
const.}$$
As we will see in better detail with the following examples, the
consequence of the above remarks is that radiation does not end at a
finite time as predicted by evaluations of the expectation value of
Bogoliubov coefficients. However, the significant difference between
$\left\langle N^{QS}_\omega\right\rangle$ and $\left\langle
N^{AQS}_\omega\right\rangle$ is also generically associated with
the appearance of fluctuations in the Bogoliubov coefficients at
finite time. We will see that this may lead to new correlations in the
Hawking radiation that are not present in the classical calculation.
\subsection{States peaked in the mass recover the classical results}
Let us consider first the case of a squeezed state with large
dispersion in the position of the shell.
Taking the limit (\ref{eq:squeezed_state_limit}),
\begin{equation}
\left\langle \rho_{\omega_1\omega_2}^{QS}\right\rangle\to e^{-\frac{\Delta\omega^2\sigma^2}{4}}\left\langle \rho_{\omega_1\omega_2}^{CS}\right\rangle.
\end{equation}
It is clear that there are no corrections to the total number of
particles $\left\langle
N^{QS}_\omega\right\rangle=\left\langle\rho_{\omega\omega}^{QS}\right\rangle$
since the exponential factor is one if $\omega_1=\omega_2$.
Also, for late times $\left\langle
\rho_{\omega_1\omega_2}^{CS}\right\rangle$ is diagonal, so there are no
non-vanishing correlations for different frequencies. We therefore
recover the classical results in their entirety for the particular
case of squeezed states that are highly peaked in the mass
and have large dispersion in the position of the shell.
\subsection{States with dispersion in the mass}
To illustrate this point
let us consider now a squeezed state with large dispersion in the mass
of the shell. To compare with the previous result let us compute the
number of particles taking the limit
(\ref{eq:squeezed_state_M_limit}). We get,
\begin{equation}
\left\langle N_{\omega}^{QS}\right\rangle=\left\langle\rho_{\omega\omega}^{QS}\right\rangle\to\frac{\left(2M_0\right)^2}{\pi^2\omega}\int_{0}^{\infty} d\omega'\omega'
{\int}_{0}^{+\infty}dxdx'e^{-\epsilon\left(x+x'\right)}e^{-i4M_0\left[\omega'+\omega\right]\left(x'-x\right)}e^{-i4\bar{M}\omega\ln\left(\frac{x'}{x}\right)}e^{-4\Delta M^2\omega^2\ln\left(\frac{x'}{x}\right)^2}\label{eq:matriz_densidad_quantum_gaussiana_dM}
\end{equation}
where we introduced the same $\epsilon$ regulator used for the integration of Bogoliubov coefficients. The change of variables $x=r\cos(\theta)$, $x'=r\sin(\theta)$ allows us to compute the double integral as
\[
\int_0^{\pi/2}d\theta\int_0^{+\infty}r dr e^{-\epsilon r\left[\sin(\theta)+\cos(\theta)\right]}e^{-i4M_0\left[\omega'+\omega\right]r\left[\sin(\theta)-\cos(\theta)\right]}e^{-i4\bar{M}\omega\ln\left[\tan(\theta)\right]}e^{-4\Delta M^2\omega^2\ln\left[\tan(\theta)\right]^2}.
\]
The $r$ integral can be computed, leading to,
\[
-\frac{1}{\left[\omega'+\omega\right]^2(4M_0)^2}\underset{\epsilon\to0}{\lim}\int_0^{\pi/2}d\theta\frac{e^{-i4\bar{M}\omega\ln\left[\tan(\theta)\right]}e^{-4\Delta M^2\omega^2\ln\left[\tan(\theta)\right]^2}}{\left[\frac{\tan(\theta)-1}{\tan(\theta)+1}-i\epsilon\right]^2}\frac{1+\tan(\theta)^2}{\left[1+\tan(\theta)\right]^2},
\]
where we have redefined $\epsilon$ conveniently. A final change of variable $y=\ln\left[\tan(\theta)\right]$ turns the integral into
\[
-\frac{1}{\left[\omega'+\omega\right]^2(4M_0)^2}\underset{\epsilon\to0}{\lim}\int_{-\infty}^{+\infty}dy\frac{1}{2\cosh(y/2)}\frac{e^{-i4\bar{M}\omega y}e^{-4\Delta M^2\omega^2y^2}}{\left[\tanh(y/2)-i\epsilon\right]^2}.
\]
This expression can be rewritten as
$$-\frac{1}{4\left[\omega'+\omega\right]^2(4M_0)^2}\left[\underset{\epsilon\to0}{\lim}\int_{-\infty}^{+\infty}dy\frac{1}{\cosh^2(y/2)}\frac{e^{-i4\bar{M}\omega y}}{\left[\tanh(y/2)-i\epsilon\right]^2}-\int_{-\infty}^{+\infty}dye^{-i4\bar{M}\omega y}\frac{1-e^{-4\Delta M^2\omega^2y^2}}{\sinh^2(y/2)}\right].$$
Now the first integral can be computed by contour integration to obtain the classical result (\ref{eq:N_omega3}) with the expectation value $\bar{M}$ in the role of mass,
$$\underset{\epsilon\to0}{\lim}\int_{-\infty}^{+\infty}dy\frac{1}{\cosh^2(y/2)}\frac{e^{-i4\bar{M}\omega y}}{\left[\tanh(y/2)-i\epsilon\right]^2}=\frac{-32\bar{M}\omega\pi}{e^{8\bar{M}\omega\pi}-1}.$$
The second term,
$$f(\bar{M},\omega,\Delta M)\equiv \int_{-\infty}^{+\infty}dye^{-i4\bar{M}\omega y}\frac{1-e^{-4\Delta M^2\omega^2y^2}}{\sinh^2(y/2)}$$
is a finite correction which vanishes in the classical limit. Regarding the dependence on $\omega$: unlike the leading term, it vanishes for $\omega\to0$, and, being the Fourier transform of a smooth and rapidly falling function, it falls off rapidly as $\omega\to+\infty$. Finally,
\[
\left\langle N_{\omega}^{QS}\right\rangle=\left[\frac{1}{e^{8\bar{M}\omega\pi}-1}+\frac{f(\bar{M},\omega,\Delta M)}{32\pi\omega\bar{M}}\right]\frac{4\bar{M}}{2\pi}\int_{0}^{\infty} d\omega'\frac{\omega'}{\left[\omega'+\omega\right]^2}.
\]
This expression is clearly divergent, with the same divergent integral
that appears in the classical case but with a small departure from
thermality given by $f$. It could be made finite by considering wave packets
as we did before. Notice that the expression has the thermal spectrum
plus a term that only vanishes when there are no fluctuations in the
mass. The extra term essentially depends on the Fourier transform of
the initial state of the shell and suggests that the complete
information of the initial state could be retrieved from the
radiation. Recall that in order to recover finite results one needs to
compute the number expectation value for wave packets localized in
time and frequency. We are therefore led to an expression that departs
more and more from ordinary Hawking radiation when the uncertainty in the mass
increases.
\section{Coherence}
Hawking radiation stemming from a classical black hole is
incoherent. This manifests itself in the vanishing of the off-diagonal
elements of the density matrix in the frequency basis. We will see
that the density matrix of the Hawking radiation of the quantum
space-time of the collapsing null shell has non-vanishing off-diagonal
coherence terms, which gives additional evidence that it contains
quantum information from the initial state of the shell that gave rise
to the black hole; while these terms vanish for standard Hawking
radiation on classical space-times, they are non-vanishing here.
Starting from expression (\ref{eq:matriz_densidad_quantum_gaussiana})
for the density matrix of a Gaussian packet we already discussed the
case of a state extremely peaked in mass and we found no corrections
to the number of particles and no correlations between different
frequencies for late time radiation. On the other hand we studied the
somewhat opposite case of a state with dispersion in the mass
and well defined position. For that state we found corrections to the
number of particles, and now we will study corrections to the density
matrix $\rho^{CS}_{\omega_1,\omega_2}$ due to these fluctuations.
We will only calculate corrections to
the late time density matrix $\rho^{H}_{\omega_1,\omega_2}$. In this
limit the classical matrix is diagonal and therefore the only source
of non diagonal terms will be from the quantum nature of the shell. In
the limit (\ref{eq:squeezed_state_M_limit}) the late time density
matrix takes the form
\[
\left\langle \rho_{\omega_1\omega_2}^{QS}\right\rangle=\frac{\left(2M_0\right)^2e^{ i\Delta\omega\bar{v}_0}}{\pi^2\sqrt{\bar{\omega}^2-\frac{\Delta\omega^2}{4}}}\int_{0}^{\infty} d\omega'\omega'
{\int\int}_{0}^{+\infty}dxdx'e^{i4M_0\omega'(x-x')}e^{-i4\bar{M}\bar{\omega}\ln(\frac{x'}{x})}e^{-i2\bar{M}\Delta\omega\ln(x'x)}\times
\]
\[
\times e^{-\epsilon(x+x')}e^{-4\Delta M^2\bar{\omega}^2\left[\ln\left(\frac{x'}{x}\right)+\frac{\Delta\omega}{2\bar{\omega}}\ln(x'x)\right]^2},
\]
where we introduced $\Delta\omega=\omega_2-\omega_1$,
$\bar{\omega}=\frac{\omega_1+\omega_2}{2}$ and the regulator
$\epsilon$ as before. With the change of variables $x=r\cos(\theta) ,
x'=r\sin(\theta)$ the double integral in $x,x'$ becomes,
\[
\int_0^{\pi/2}d\theta e^{-i4\bar{M}\bar{\omega}\ln\left[\tan(\theta)\right]}
e^{-i2\bar{M}\Delta\omega \ln\left[\sin(\theta)\cos(\theta)\right]}
e^{-4\Delta M^2\bar{\omega}^2\left[\ln\left(\tan(\theta)\right)^2+\frac{\Delta\omega}{\bar{\omega}}\ln\left(\tan(\theta)\right)\ln\left(\cos(\theta)\sin(\theta)\right)\right]}
\times
\]
\times \int_0^{+\infty}r dr e^{-i4M_0\omega'\left[\sin(\theta)-\cos(\theta)\right]r}e^{-i4\bar{M}\Delta\omega\ln(r)}e^{-\epsilon \left[\sin(\theta)+\cos(\theta)\right]r} e^{-8\Delta M^2\bar{\omega}\Delta\omega\ln\left[\tan(\theta)\right]\ln(r)},
\]
where we are using the same $\Delta\omega\ll\bar{\omega}$ approximation
used for the study of the classical case in order to simplify the
calculation.
The $r$ integral can be computed using formula (\ref{eq:propiedad_int_gamma}) to obtain
\[
\int_0^{\pi/2}d\theta \Gamma\left(2-8\Delta M^2\bar{\omega}\Delta\omega\ln\left[\tan(\theta)\right]-4\bar{M}\Delta\omega i\right)e^{-i4\bar{M}\bar{\omega}\ln\left[\tan(\theta)\right]}
e^{-i2\bar{M}\Delta\omega \ln\left[\sin(\theta)\cos(\theta)\right]}
\times
\]
\[
\times e^{-4\Delta M^2\bar{\omega}^2\left[\ln\left(\tan(\theta)\right)^2+\frac{\Delta\omega}{\bar{\omega}}\ln\left(\tan(\theta)\right)\ln\left(\cos(\theta)\sin(\theta)\right)\right]}
e^{-\left(2-8\Delta M^2\bar{\omega}\Delta\omega\ln\left[\tan(\theta)\right]-4\bar{M}\Delta\omega i\right)\ln\left(\epsilon+4M_0i\omega'\left[\sin(\theta)-
\cos(\theta)\right]\right)}.
\]
Another change of variable $y=\ln\left(\tan(\theta)\right)$ simplifies
the expression to
\[
-\frac{e^{4\bar{M}\Delta\omega i\ln\left(4M_0\omega'\right)}e^{-2\bar{M}\Delta\omega \pi}}{\left(4M_0\omega'\right)^2}\int_{-\infty}^{+\infty}dy \Gamma\left(2-8\Delta M^2\bar{\omega}\Delta\omega y-4\bar{M}\Delta\omega i\right)e^{-i4\bar{M}\bar{\omega}y}e^{-4\Delta M^2\bar{\omega}^2y^2}\times
\]
\[
\times
e^{-\left(2-8\Delta M^2\bar{\omega}\Delta\omega y-4\bar{M}\Delta\omega i\right)\ln\left(\sinh(y/2)-i\epsilon\right)}e^{8\Delta M^2\bar{\omega}\Delta\omega y\ln\left(4M_0\omega'\right)}e^{i4\pi\Delta M^2\bar{\omega}\Delta\omega y}.
\]
Using again the approximation $\Delta\omega\ll\bar{\omega}$ the
integral can be further simplified to
\[
-\frac{e^{4\bar{M}\Delta\omega i\ln\left(4M_0\omega'\right)}e^{-2\bar{M}\Delta\omega \pi}}{\left(4M_0\omega'\right)^2}\Gamma\left(2-4\bar{M}\Delta\omega i\right)\int_{-\infty}^{+\infty}dy e^{-i4\bar{M}\bar{\omega}y}e^{-\left(2-4\bar{M}\Delta\omega i\right)\ln\left(\sinh(y/2)-i\epsilon\right)}\times
\]
\[
\times
e^{-4\Delta M^2\bar{\omega}^2y^2}e^{8\Delta M^2\bar{\omega}\Delta\omega y\ln\left(4M_0\omega'\right)}.
\]
The last two terms are responsible for the corrections. The Gaussian changes the profile of the number of particles as we discussed before and the other exponential introduces non diagonal terms in the density matrix. Without these terms, the integral in $\omega'$ produces the $\delta(4\bar{M}\Delta\omega)$ dependence seen in Hawking radiation.\\
\section{Summary and outlook}
We have studied the Hawking radiation emitted by a collapsing quantum shell
using the geometric optics approximation. After reviewing the
calculation of the radiation for a classical collapsing null shell, we
proceeded to consider a quantized shell with fluctuating horizons. A
new element we introduce is to take into account the canonically
conjugate variables describing the shell, its mass and the position
along scri minus from which it is incoming. In order to allow
arbitrary superpositions of shells with different Schwarzschild radii
the calculation is also performed without assuming from the beginning
that we are considering rays that are close to the horizon.
We find the following results:
1) Given that we deal with a quantum geometry, the Bogoliubov
coefficients become quantum operators acting on the states of the
geometry. We discover that for computing the Hawking radiation it is
not enough to assume the mean field approximation and consider the
square of the expectation value of the Bogoliubov coefficients
evaluated on the quantum geometry. Such a calculation misleadingly
suggests the Hawking radiation cuts off after a rather short time (the
``scrambling time''). One needs to go beyond mean-field and consider
the expectation value of the square of the Bogoliubov coefficients to
see that the radiation continues forever and that there are departures
from thermality that depend on the initial state of the shell.
2) The resulting Hawking radiation exhibits coherences of the density
matrix, with non vanishing off-diagonal elements for different
frequencies that vanish for the usual calculation on a classical
space-time. The new correlations that arise in the quantum case have an
imprint of the details of the initial quantum state of the shell. This
indicates that at least part of the information that went into
creating the black hole can be retrieved in the Hawking radiation. It
should be kept in mind that our calculations do not include back
reaction, so to have information retrieval at this level is somewhat
surprising.
3) The non-trivial correlations can be made to vanish by taking a shell
with arbitrarily small deviations in the ADM mass. However, such a
shell would have large uncertainties in its initial
position. Therefore such a quantum state would not correspond to a
semi-classical situation. A semi-classical shell will generically have
uncertainty in both the initial position and the ADM mass and will
therefore have non-trivial corrections to the Hawking radiation
through which information can be retrieved.
In our computations we used three simplifying assumptions which should
be improved upon: First, we worked in the geometric optics
approximation which neglects back-scattering. Moreover, no
back-reaction was considered. This has two implications. On one hand,
information can fall into the black hole and also leak out, violating
no-cloning; in particular, the quantum state of the shell is not
modified by the Hawking radiation, which nevertheless gains an imprint
of its characteristics. Moreover, the lack of back reaction eliminates
possible decoherence effects for the shell, which may also lead to information
leakage. Finally, the collapsing system is a very
simple one: a massless shell. However, non-trivial
commutation relations between some indicator of the position of the
collapsing system and its ADM mass are expected generically
\cite{carlip}, and therefore effects similar to the ones found here are
expected in other collapsing systems. All in all, our calculations
suggest that some level of ``drama at the horizon'' is taking place
that allows information to be retrieved from the incoming quantum state.
Summarizing, using the simple example of collapsing quantum shells to
model a fluctuating horizon we have shown that non-trivial quantum
effects can take place, which in particular may allow one to retrieve
information from the incoming quantum state at scri plus. A more
careful study is required to determine if the complete information of
the incoming state can be retrieved and if the model generalizes to
more complicated models of horizon formation.
\section*{Appendix 1: Integrals on $I^{-}$ that contribute in the case of a
quantum black hole}
The generic expression of interest for the Bogoliubov coefficient
(\ref{genericexpression}) is,
\[
\left\langle
\hat{\beta}\right\rangle_{\omega\omega'}=-\frac{\left(4M_{0}\right)^{2}}{2\pi}\sqrt{\frac{\omega'}{\omega}}\underset{\epsilon\to0}{\lim}
{\int_{-\infty}^\infty}dve^{- i\omega'v}{\int_0^\infty\int_0^\infty}dx_{1}dx_{2}\Psi^{*}(4M_{0}x_{1}+v)\times
\]
\[
\times\Psi(4M_{0}x_{2}+v){\int_{-\infty}^\infty}due^{-i\omega u}\underset{I=1,2}{\sum}\psi_{u}^{I}(x_{1})\psi_{u}^{I*}(x_{2})
\]
and the expressions for
$\psi_{u}^{I}(x)$ are (\ref{eq:autoestados_de_u_1})
and (\ref{eq:autoestados_de_u_2}). Let us show that the integrals,
\[
{\int_0^\epsilon\int_0^\epsilon}dx_{1}dx_{2}+
{\int_0^\epsilon}{\int_\epsilon^1}dx_{1}dx_{2}+{\int_\epsilon^1}{\int_0^\epsilon}dx_{1}dx_{2}
\]
do not contribute in the limit $\epsilon\to0$.
\begin{enumerate}
\item The integral ${\int_0^\epsilon\int_0^\epsilon}dx_{1}dx_{2}$ is
\[
\left\langle
\hat{\beta}\right\rangle_{\omega\omega'}=-\frac{\left(4M_{0}\right)^{2}}{2\pi}\sqrt{\frac{\omega'}{\omega}}\underset{\epsilon\to0}{\lim}
{\int_{-\infty}^\infty}dve^{- i\omega'v}{\int_0^\epsilon\int_0^\epsilon}dx_{1}dx_{2}\Psi^{*}(4M_{0}x_{1}+v)\times
\]
\[
\times\Psi(4M_{0}x_{2}+v){\int_{-\infty}^\infty}due^{-i\omega u}\underset{J=1,2}{\sum}\psi_{u}^{J}(x_{1})\psi_{u}^{J*}(x_{2})=
\]
\[
=-\frac{\left(4M_{0}\right)^{2}}{2\pi}\sqrt{\frac{\omega'}{\omega}}\underset{\epsilon\to0}{\lim}{\int_{-\infty}^\infty}
dve^{- i\omega'v}{\int_0^\epsilon\int_0^\epsilon}dx_{1}dx_{2}\Psi^{*}(4M_{0}x_{1}+v)\times
\]
\[
\times\Psi(4M_{0}x_{2}+v)\frac{1}{4\hbar\left|\ln(\epsilon)\right|}\delta\left(\frac{M_{0}}{\hbar}\frac{x_{1}-x_{2}}{\ln(\epsilon)}-\omega\right)e^{-i\omega v}=
\]
\[
=-\frac{\left(4M_{0}\right)^{2}}{2\pi}\sqrt{\frac{\omega'}{\omega}}\underset{\epsilon\to0}{\lim}{\int_{-\infty}^\infty}dve^{- i\omega'v}{\int_0^\epsilon\int_0^\epsilon}dx_{1}dx_{2}\Psi^{*}(4M_{0}x_{1}+v)\times
\]
\[
\times\Psi(4M_{0}x_{2}+v)\frac{1}{4M_{0}}\delta\left(x_{1}-x_{2}-\frac{\omega\hbar \ln\left(\epsilon\right)}{M_{0}}\right)e^{-i\omega v}.
\]
This integral vanishes because one can choose
$\epsilon$ small, in such a way that the argument of the Dirac delta
never vanishes.
\item The integral ${\int_0^\epsilon}{\int_\epsilon^1}dx_{1}dx_{2}$
is
\[
\left\langle \hat{\beta}\right\rangle_{\omega\omega'}=-\frac{\left(4M_{0}\right)^{2}}{2\pi}\sqrt{\frac{\omega'}{\omega}}\underset{\epsilon\to0}{\lim}{\int_{-\infty}^\infty}dve^{- i\omega'v}{\int_0^\epsilon}{\int_\epsilon^1}dx_{1}dx_{2}\Psi^{*}(4M_{0}x_{1}+v)\times
\]
\[
\times\Psi(4M_{0}x_{2}+v){\int_{-\infty}^\infty}due^{-i\omega
u}\frac{\exp\left(\frac{iM_{0}}{\hbar}(u-v)\frac{x_{1}-\epsilon}{\ln(\epsilon)}\right)\exp\left(-\frac{iM_{0}}{\hbar}(u-v)\left[{\rm
li}\left(x_{2}\right)-{\rm li}\left(\epsilon\right)\right]\right)}{8\pi\hbar\sqrt{\left|\ln(x_{2})\right|\left|\ln(\epsilon)\right|}}=
\]
\[
=-\frac{\left(4M_{0}\right)^{2}}{2\pi}\sqrt{\frac{\omega'}{\omega}}\underset{\epsilon\to0}{\lim}{\int_{-\infty}^\infty}dve^{-i\omega'v}{\int_0^\epsilon}{\int_\epsilon^1}dx_{1}dx_{2}\Psi^{*}(4M_{0}x_{1}+v)\times
\]
\[
\times\Psi(4M_{0}x_{2}+v)\frac{\delta\left(\frac{M_{0}}{\hbar}\frac{x_{1}-\epsilon}{\ln(\epsilon)}-\frac{M_{0}}{\hbar}\left[{\rm
li}\left(x_{2}\right)-{\rm li}\left(\epsilon\right)\right]-\omega\right)}{4\hbar\sqrt{\left|\ln(x_{2})\right|\left|\ln(\epsilon)\right|}}e^{-i\omega v}=
\]
\[
=-\frac{\left(4M_{0}\right)^{2}}{2\pi}\sqrt{\frac{\omega'}{\omega}}\underset{\epsilon\to0}{\lim}{\int_{-\infty}^\infty}dve^{-i\omega'v}{\int_0^\epsilon}{\int_\epsilon^1}dx_{1}dx_{2}\Psi^{*}(4M_{0}x_{1}+v)\times
\]
\[
\times\Psi(4M_{0}x_{2}+v)\frac{\delta\left(\frac{x_{1}-\epsilon}{\ln(\epsilon)}-{\rm
li}\left(x_{2}\right)+{\rm li}\left(\epsilon\right)-\frac{\omega\hbar}{M_{0}}\right)}{4M_{0}\sqrt{\left|\ln(x_{2})\right|\left|\ln(\epsilon)\right|}}e^{-i\omega v}=
\]
\[
=-\frac{4M_{0}}{2\pi}\sqrt{\frac{\omega'}{\omega}}\underset{\epsilon\to0}{\lim}{\int_{-\infty}^\infty}dve^{-i\omega'v}{\int_0^\epsilon}dx_{1}\Psi^{*}(4M_{0}x_{1}+v)\times
\]
\[
\times\Psi(4M_{0}x_{2}\left(x_{1}\right)+v)\sqrt{\frac{\left|\ln(x_{2})\right|}{\left|\ln(\epsilon)\right|}}e^{-i\omega v}
\]
with
$x_{2}(x_{1})={\rm li}^{-1}\left(\frac{x_{1}-\epsilon}{\ln(\epsilon)}+{\rm li}\left(\epsilon\right)-\frac{\omega\hbar}{M_{0}}\right)$.
In the integrand,
$\sqrt{\frac{\left|\ln(x_{2})\right|}{\left|\ln(\epsilon)\right|}}$
is bounded above by $1$ since $x_{2}\in\left(\epsilon,1\right)$, and
$\Psi$ is a wave-packet that we can take to be bounded over the whole range
of its argument. Therefore the integral
${\int_0^\epsilon}dx_{1}$
tends to zero as $\epsilon\to0$.
\item The integral ${\int_\epsilon^1}{\int_0^\epsilon}dx_{1}dx_{2}$
yields the same result as
${\int_0^\epsilon}{\int_\epsilon^1}dx_{1}dx_{2}$,
since the only change is the interchange of $x_{1}$ and $x_{2}$.
\end{enumerate}
\section*{Appendix 2}
Here we present details of the evaluation of the square of the
expectation value of the Bogoliubov coefficients as an approximation
to the number of particles produced.
If we estimate the expectation value of the number operator using expression (\ref{betasq}) we get,
\[
\left\langle N_{\omega}^{AQS}\right\rangle ={\int_0^\infty}d\omega'\left\langle \hat{\beta}\right\rangle _{\omega\omega'}\left\langle \hat{\beta}\right\rangle _{\omega\omega'}^{*}=\frac{1}{4\pi^{2}}\frac{1}{\omega}\left|\Gamma(1+4\bar{M}\omega i)\right|^{2}e^{-4\pi \bar{M}\omega}{\int_0^\infty}d\omega'\frac{\omega'e^{-\left[\omega+\omega'\right]^{2}\frac{\sigma^{2}}{2}}}{\left(\omega'+\omega\right)^{2}}.\]
Changing variable to $y=\left[\omega+\omega'\right]^{2}\frac{\sigma^{2}}{2}$,
\[
\left\langle N_{\omega}^{AQS}\right\rangle =\frac{\bar{M}}{\pi}\frac{1}{e^{8\bar{M}\pi\omega}-1}{\int_{\frac{\omega^{2}\sigma^{2}}{2}}^\infty}dy\left(y^{-1}-\frac{\omega\sigma}{\sqrt{2}}y^{-3/2}\right)e^{-y}=
\]
\[
=\frac{\bar{M}}{\pi}\frac{1}{e^{8\bar{M}\pi\omega}-1}\left[{\int_{\frac{\omega^{2}\sigma^{2}}{2}}^\infty}dy\frac{e^{-y}}{y}-\frac{\omega\sigma}{\sqrt{2}}{\int_{\frac{\omega^{2}\sigma^{2}}{2}}^\infty}dy\,y^{-3/2}e^{-y}\right]=
\]
\[
=\frac{\bar{M}}{\pi}\frac{1}{e^{8\bar{M}\pi\omega}-1}\left[-\operatorname{Ei}\left(-\frac{\omega^{2}\sigma^{2}}{2}\right)-\frac{\omega\sigma}{\sqrt{2}}\Gamma\left(-\frac{1}{2},\frac{\omega^{2}\sigma^{2}}{2}\right)\right],
\]
where $\operatorname{Ei}$ is the exponential integral and $\Gamma\left(s,x\right)$ is the upper incomplete Gamma function. Taking into account the identities
$\Gamma\left(s+1,x\right)=s\Gamma\left(s,x\right)+x^{s}e^{-x}$
and $\Gamma\left(\frac{1}{2},x\right)=\sqrt{\pi}\operatorname{erfc}\left(\sqrt{x}\right)$, with $\operatorname{erfc}$ the complementary error function, so that $\Gamma\left(-\frac{1}{2},x\right)=2x^{-1/2}e^{-x}-2\sqrt{\pi}\operatorname{erfc}\left(\sqrt{x}\right)$,
we get
\begin{equation}
\langle N_\omega^{AQS}\rangle=\frac{\bar{M}}{\pi}\frac{1}{e^{8\bar{M}\pi\omega}-1}\left[-\operatorname{Ei}\left(-\frac{\omega^{2}\sigma^{2}}{2}\right)+2\left\{ \frac{\omega\sigma}{\sqrt{2}}\sqrt{\pi}\operatorname{erfc}\left(\frac{\omega\sigma}{\sqrt{2}}\right)-e^{-\frac{\omega^{2}\sigma^{2}}{2}}\right\} \right],
\end{equation}
which is finite for $\omega\neq0$ and is suppressed as
$e^{-\frac{\omega^2\sigma^2}{2}}$ for $\omega\to+\infty$ (exhibiting
in this approximation a decay that is not present in ordinary thermal
radiation). In fact, the total radiated
energy would be finite since the integral
\[
E={\int_0^\infty}d\omega\hbar\omega\left\langle N_{\omega}^{AQS}\right\rangle,
\]
is convergent.
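As an illustrative numerical check of the last two expressions, the following Python sketch evaluates the closed-form spectrum and the total-energy integral. It is only a minimal sketch, assuming units with $\hbar=c=1$ and arbitrary sample values for $\bar{M}$ and $\sigma$; note that $-\operatorname{Ei}(-x)$ equals the exponential integral $E_{1}(x)$.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.special import exp1, erfc

def n_aqs(omega, Mbar, sigma):
    # Closed-form spectrum; -Ei(-x) is written as exp1(x).
    x = 0.5 * (omega * sigma) ** 2
    planck = 1.0 / np.expm1(8.0 * np.pi * Mbar * omega)
    bracket = exp1(x) + 2.0 * (np.sqrt(np.pi) * (omega * sigma / np.sqrt(2.0))
                               * erfc(omega * sigma / np.sqrt(2.0))
                               - np.exp(-x))
    return (Mbar / np.pi) * planck * bracket

Mbar, sigma = 1.0, 0.5   # arbitrary illustrative values
for w in (0.05, 0.2, 1.0, 5.0):
    print(w, n_aqs(w, Mbar, sigma))   # suppressed for large omega*sigma

E, _ = quad(lambda w: w * n_aqs(w, Mbar, sigma), 0.0, np.inf)
print("total radiated energy:", E)   # finite, consistent with the text
\end{verbatim}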
The previous calculation gives no information about the time dependence of
the intensity of the radiation, nor about its luminosity, which could be
very relevant since the energy loss of the black hole would lead to
increased radiation if back-reaction were taken into account.
As in the classical case (\ref{casoclasico}) we start by computing the density matrix
\[
\left\langle\rho^{AQS}_{\omega_1,\omega_2}\right\rangle = {\int_0^\infty}d\omega'\left\langle \beta\right\rangle_{\omega_{1}\omega'}\left\langle \beta\right\rangle_{\omega_{2}\omega'}^{*}=\frac{1}{4\pi^{2}\sqrt{\omega_{1}\omega_{2}}}e^{-i\left(\omega_{1}-\omega_{2}\right)\bar{v}_{0}}\Gamma\left(1+4\bar{M}\omega_{1}i\right)\Gamma\left(1-4\bar{M}\omega_{2}i\right)e^{-2\pi \bar{M}\left[\omega_{1}+\omega_{2}\right]}\times
\]
\begin{equation}
\times{\int_0^\infty}d\omega'\frac{\omega'e^{-\left\{ \left[\omega_{1}+\omega'\right]^{2}+\left[\omega_{2}+\omega'\right]^{2}\right\} \frac{\sigma^{2}}{4}}}{\left(\omega'+\omega_{1}\right)\left(\omega'+\omega_{2}\right)}e^{-4\bar{M}i\left[\omega_{1}\ln\left(4M_{0}\left[\omega'+\omega_{1}\right]\right)-\omega_{2}\ln\left(4M_{0}\left[\omega'+\omega_{2}\right]\right)\right]},
\label{eq:approx_density matrix}
\end{equation}
with the same approximation used to compute its diagonal elements (the
number of particles emitted). We assume $\omega_1$ and $\omega_2$ are
close and we expand in $\Delta\omega=\omega_{2}-\omega_{1}\ll\omega_1$
and use $\bar{\omega}=\frac{\omega_{1}+\omega_{2}}{2}$. We obtain,
\[
\left\langle\rho^{AQS}_{\omega_1,\omega_2}\right\rangle=\frac{2\bar{M}}{\pi}\frac{1}{e^{8\bar{M}\pi\bar{\omega}}-1}e^{i\Delta\omega \bar{v}_{0}}{\int_0^\infty}d\omega'\frac{\omega'e^{-\left[\bar{\omega}+\omega'\right]^{2}\frac{\sigma^{2}}{2}}}{\left(\bar{\omega}+\omega'\right)^{2}}e^{4\bar{M}i\Delta\omega\ln\left(4M_{0}\left[\omega'+
\bar{\omega}\right]\right)}+O\left(\Delta\omega\right).\]
Changing variable to $y=\frac{\left[\bar{\omega}+\omega'\right]^{2}}{\bar{\omega}^{2}}$ we obtain
\[
\left\langle\rho^{AQS}_{\omega_1,\omega_2}\right\rangle\sim
\frac{\bar{M}}{\pi}\frac{1}{e^{8\bar{M}\pi\bar{\omega}}-1}e^{i\Delta\omega\bar{v}_{0}}e^{4\bar{M}i\Delta\omega\ln\left(4M_{0}\bar{\omega}\right)}\frac{1}{2}{\int_1^\infty}dy\left(y^{-1}-y^{-3/2}\right)e^{-y\frac{\bar{\omega}^{2}\sigma^{2}}{2}}e^{2\bar{M}i\Delta\omega\ln\left(y\right)}.
\]
Finally,
\[
\left\langle\rho^{AQS}_{\omega_1,\omega_2}\right\rangle\sim\underset{\delta\to0}{\lim}\frac{\bar{M}}{\pi}\frac{1}{e^{8\bar{M}\pi\bar{\omega}}-1}e^{i\Delta\omega\bar{v}_{0}}e^{4\bar{M}i\Delta\omega\ln\left(4M_{0}\bar{\omega}\right)}\frac{1}{2}\times
\]
\[
\times\left[e^{\left(\delta-2\bar{M}i\Delta\omega\right)\ln\left(\frac{\bar{\omega}^{2}\sigma^{2}}{2}\right)}\Gamma\left(-\delta+2\bar{M}i\Delta\omega,\frac{\bar{\omega}^{2}\sigma^{2}}{2}\right)-e^{\left(\frac{1}{2}-2\bar{M}i\Delta\omega\right)\ln\left(\frac{\bar{\omega}^{2}\sigma^{2}}{2}\right)}\Gamma\left(-\frac{1}{2}+2\bar{M}i\Delta\omega,\frac{\bar{\omega}^{2}\sigma^{2}}{2}\right)\right].
\]
The divergent part of the density matrix as $\Delta\omega\to0$ comes from the first term, so
\[
\left\langle\rho^{AQS}_{\omega_1,\omega_2}\right\rangle\sim\underset{\delta\to0}{\lim}\frac{\bar{M}}{2\pi}\frac{1}{e^{8\bar{M}\pi\bar{\omega}}-1}e^{i\Delta\omega\bar{v}_{0}}e^{4\bar{M}i\Delta\omega\ln\left(4M_{0}\bar{\omega}\right)}e^{\left(\delta-2\bar{M}i\Delta\omega\right)\ln\left(\frac{\bar{\omega}^{2}\sigma^{2}}{2}\right)}\Gamma\left(-\delta+2\bar{M}i\Delta\omega,\frac{\bar{\omega}^{2}\sigma^{2}}{2}\right).
\]
Now we can calculate the number of particles at time $u_n$ and at frequencies around $\omega_j$ as
\[
\langle N_{\omega_{j}}^{AQS}\rangle=\frac{1}{\epsilon}
{\int_{j\epsilon}^{\left(j+1\right)\epsilon}\int_{j\epsilon}^{\left(j+1\right)\epsilon}}d\omega_{1}d\omega_{2}e^{-u_n\Delta\omega i}\left\langle\rho^{AQS}_{\omega_1,\omega_2}\right\rangle.
\]
To carry out the integrals we change variables from $\omega_{1,2}$ to
$\Delta\omega$ and $\bar{\omega}$. The result is,
\[
\langle N_{\omega_{j}}^{AQS}\rangle\sim\frac{\bar{M}}{2\pi}\frac{1}{e^{8\bar{M}\pi\omega_{j}}-1}\underset{\delta\to0}{\lim}{\int_{-\epsilon}^\epsilon}d\left(\Delta\omega\right)\left[1-\frac{\left|\Delta\omega\right|}{\epsilon}\right]e^{\delta\ln\left(\frac{\omega_{j}^{2}\sigma^{2}}{2}\right)}e^{-i\phi\Delta\omega}\Gamma\left(-\delta+2\bar{M}i\Delta\omega,\frac{\omega_{j}^{2}\sigma^{2}}{2}\right),
\]
with $\phi=\frac{2\pi
n}{\epsilon}-\bar{v}_{0}+4\bar{M}\ln\left(\frac{\omega_{j}\sigma}{\sqrt{2}}\right)-4\bar{M}\ln\left(4M_{0}\omega_{j}\right)$, where $u_{n}=2\pi n/\epsilon$. In order to interpret the result we use an integral representation of
the incomplete Gamma function and reverse the integration order. Then,
\[
\langle
N_{\omega_{j}}^{AQS}\rangle=\frac{\bar{M}}{2\pi}\frac{1}{e^{8\bar{M}\pi\omega_{j}}-1}\underset{\delta\to0}{\lim}e^{\delta\ln\left(\frac{\omega_{j}^{2}\sigma^{2}}{2}\right)}
{\int_{\frac{\omega_{j}^{2}\sigma^{2}}{2}}^\infty}dt\frac{e^{-t}}{t}
{\int_{-\epsilon}^\epsilon}d\left(\Delta\omega\right)\left[1-\frac{\left|\Delta\omega\right|}{\epsilon}\right]e^{-i\Delta\omega\left[\phi-2\bar{M}\ln\left(t\right)\right]}e^{-\delta\ln\left(t\right)}=
\]
\[
=\frac{\bar{M}\epsilon}{\pi}\frac{1}{e^{8\bar{M}\pi\omega_{j}}-1}\underset{\delta\to0}{\lim}e^{\delta\ln\left(\frac{\omega_{j}^{2}\sigma^{2}}{2}\right)}
{\int_{\frac{\omega_{j}^{2}\sigma^{2}}{2}}^\infty}dt\frac{e^{-\left[t+\delta \ln\left(t\right)\right]}}{t}\left\{\frac{\sin\left[\frac{\epsilon}{2}\left(\phi-2\bar{M}\ln\left(t\right)\right)\right]}{\frac{\epsilon}{2}\left(\phi-2\bar{M}\ln\left(t\right)\right)}\right\} ^{2}.
\]
The change of variable
$y=\frac{2t}{\omega_{j}^{2}\sigma^{2}}$
clarifies the interpretation of the integral. We get,
\[
\langle N_{\omega_{j}}^{AQS}\rangle=\frac{\bar{M}\epsilon}{\pi}\frac{1}{e^{8\bar{M}\pi\omega_{j}}-1}\underset{\delta\to0}{\lim}{\int_1^\infty}dy\frac{e^{-\frac{\omega_{j}^{2}\sigma^{2}}{2}y}e^{-\delta\ln\left(y\right)}}{y}\left\{\frac{\sin\left[\frac{\epsilon}{2}\left(\alpha-2\bar{M}\ln\left(y\right)\right)\right]}{\frac{\epsilon}{2}\left(\alpha-2\bar{M}\ln\left(y\right)\right)}\right\} ^{2},
\]
where $\alpha$ is the same quantity defined in (\ref{alpha}) with $M$ and $v_0$ replaced by $\bar{M}$ and $\bar{v}_{0}$. Due to the decreasing exponential we can take the limit
$\delta\to0$
inside the integral, getting,
\[
\langle N_{\omega_{j}}^{AQS}\rangle
=\frac{\bar{M}\epsilon}{\pi}\frac{1}{e^{8\bar{M}\pi\omega_{j}}-1}
{\int_1^\infty}dy\frac{e^{-\frac{\omega_{j}^{2}\sigma^{2}}{2}y}}{y}\left\{\frac{\sin\left[\frac{\epsilon}{2}\left(\alpha-2\bar{M}\ln\left(y\right)\right)\right]}{\frac{\epsilon}{2}\left(\alpha-2\bar{M}\ln\left(y\right)\right)}\right\} ^{2}.
\]
The presence of a factor
$\sin^{2}(a)/a^{2}$
and the decreasing exponential imply that the integral decreases when
$\alpha$ grows and also drastically decreases when
$\alpha<0$. The latter is a result we already knew from the classical
case, but the former is a result of the quantum nature of the black
hole since it is not present if
$\sigma=0$. Figure \ref{nwj} shows the departure from the classical result that appears when one computes the frequency distribution starting from $\left\langle\hat{\beta}\right\rangle$.
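The behavior just described can be checked directly; the following minimal Python sketch (with $\hbar=1$ and arbitrary sample parameters) evaluates the integral above for several values of $\alpha$.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def n_omega_j(alpha, omega, Mbar, sigma, eps):
    # Exponential cutoff times the sinc^2 window of the formula above.
    def integrand(y):
        a = 0.5 * eps * (alpha - 2.0 * Mbar * np.log(y))
        window = np.sinc(a / np.pi) ** 2   # np.sinc(x) = sin(pi x)/(pi x)
        return np.exp(-0.5 * (omega * sigma) ** 2 * y) / y * window
    val, _ = quad(integrand, 1.0, np.inf, limit=200)
    return (Mbar * eps / np.pi) / np.expm1(8.0 * np.pi * Mbar * omega) * val

# Emission is strongly suppressed for alpha < 0 and dies off again once
# the window center exp(alpha/(2*Mbar)) passes the exponential cutoff.
for a in (-5.0, 0.0, 2.0, 10.0):
    print(a, n_omega_j(a, omega=0.5, Mbar=1.0, sigma=0.5, eps=1.0))
\end{verbatim}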
We can estimate the time of emission for each frequency using both
extremes of the integration range. On the one hand, the emission starts when
$\alpha-2\bar{M}\ln\left(y\right)=0$ for $y \sim 1$, that is,
\[
u_{i}-\bar{v}_{0}-4\bar{M}\ln\left(4M_{0}\omega_{j}\right)\sim 0.
\]
We can estimate the end of the emission from the condition
$\alpha-2 \bar{M}\ln\left(y\right)=0$
for $y\sim\frac{2}{\omega_{j}^{2}\sigma^{2}}$, since larger $y$'s
are suppressed by the exponential. For $\omega> \sqrt{2}/\sigma$ this value of $y$ is outside the integration range and the total integral is suppressed. For $\omega< \sqrt{2}/\sigma$ we find the condition,
\[
u_{f}-\bar{v}_{0}-4\bar{M}\ln\left(4M_{0}\omega_{j}\right)-2\bar{M}\ln\left(\frac{2}{\omega_{j}^{2}\sigma^{2}}\right)\sim 0,
\]
or,
\[
u_{f}-\bar{v}_{0}-4\bar{M}\ln\left(4M_{0}\sqrt{2}\frac{1}{\sigma}\right)\sim 0.
\]
Note that the time for the end of the emission does not depend on
the frequency. Finally,
\[
\Delta t=u_{f}-u_{i}\sim4\bar{M}\ln\left(4M_{0}\sqrt{2}\frac{1}{\sigma}\right)-4\bar{M}\ln\left(4M_{0}\omega_{j}\right)=-4\bar{M}\ln\left(\frac{\sigma\omega_{j}}{\sqrt{2}}\right).
\]
Restoring the appropriate dimensions,
\begin{equation}
\Delta t\sim-\frac{2R_{s}}{c}\ln\left(\frac{\sqrt{2}\pi\sigma}{\lambda_{j}}\right),
\label{tiempototal}
\end{equation}
where $R_s$ is the Schwarzschild radius and $\lambda_j$ is the
wavelength of frequency $\omega_j$. Recall we are considering frequencies such that $\omega_j < \sqrt{2}/\sigma$ so that $\Delta t >0$. For $\omega_j >\sqrt{2}/\sigma$ the radiation is suppressed at all times.
One can see that if one integrates $\sum_j\hbar\omega_j\left\langle
N^{AQS}_{\omega_j}\right\rangle$ with that time interval one obtains
a total emitted energy that is finite. Note that this result
corresponds to a deep quantum regime since we are not considering
$\sigma$ to be very small.
Interestingly, the time (\ref{tiempototal}) corresponds, for the dominant
wavelengths of emission ($\sim R_{s}$), with the {\em scrambling time}
\cite{harlow}
\begin{equation}
t_{\rm scr} \sim R_s\ln\left(\frac{R_s}{\ell_{\rm Planck}}\right).
\end{equation}
Quantum information arguments indicate this is precisely the time of information retrieval \cite{1111.6580}.
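For orientation, one can put numbers into (\ref{tiempototal}) and into the scrambling time; the sketch below does so for an assumed solar-mass black hole with $R_s\sim 3\,\mathrm{km}$, dominant wavelength $\lambda_j\sim R_s$, and $\sigma$ of order the Planck length. All of these values are illustrative assumptions made only to exhibit orders of magnitude.
\begin{verbatim}
import numpy as np

def emission_time(R_s, sigma, lam, c=3.0e8):
    # Delta t from the formula above; requires lam > sqrt(2)*pi*sigma.
    return -(2.0 * R_s / c) * np.log(np.sqrt(2.0) * np.pi * sigma / lam)

R_s, ell_planck = 3.0e3, 1.6e-35          # meters (assumed values)
dt = emission_time(R_s, sigma=ell_planck, lam=R_s)
t_scr = (R_s / 3.0e8) * np.log(R_s / ell_planck)
print(dt, t_scr)   # both ~1e-3 s: the same order of magnitude
\end{verbatim}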
It should be noted that the result we are obtaining is not due to the
choice of a particular quantum state. To demonstrate this,
let us now consider a state that is, in a sense, opposite to the one considered
previously: the case where the shell is in a Gaussian (\ref{eq:gaussiana}) squeezed state with large dispersion in $M$ and small dispersion in $v_0$. The leading quantum correction for such states is obtained by taking the limit $\hbar \to 0$ with
\begin{equation}
\Delta M = \text{constant} = Z \ell_{\text{Planck}} \; , \; Z \gg 1 \, ; \quad \Delta v_0= \hbar/{\Delta M}.\label{eq:squeezed_state_M_limit}
\end{equation}
In this limit (\ref{eq:coef_bogol_gaussiana}) goes to:
\begin{equation}
\left\langle \hat{\beta}\right\rangle_{\omega\omega'}\to-\frac{2M_{0}}{\pi}\sqrt{\frac{\omega'}{\omega}}e^{-i\left[\omega+\omega'\right]\bar{v}_{0}}{\int_0^\infty}dxe^{i4M_{0}\left[\omega+\omega'\right]x} e^{i 4\bar{M}\omega\ln(x)}e^{-4\Delta M^{2}\omega^{2}\ln^{2}(x)}.\label{eq:coef_bogol_gaussiana_dM}
\end{equation}
If we extend the integrand in this expression to $0$ for $x<0$ we recognize the integral as the Fourier transform in $4M_0\left[\omega'+\omega\right]$ of a smooth and rapidly falling function. This implies the Bogoliubov coefficient is a rapidly falling function of $4M_0\left[\omega'+\omega\right]$. It also vanishes for $\omega'=0$ so the total number of emitted particles,
$$\left\langle N_{\omega}^{AQS}\right\rangle ={\int_0^\infty}d\omega'\left\langle \hat{\beta}\right\rangle _{\omega\omega'}\left\langle \hat{\beta}\right\rangle _{\omega\omega'}^{*}$$
is finite for $\omega\neq 0$ as in the previous case.
\section*{Acknowledgement}
We wish to thank Ivan Agull\'o and Don Marolf for discussions.
This work was supported in part by Grants NSF-PHY-1305000 and
NSF-PHY-1603630, funds of the Hearne Institute for Theoretical
Physics, CCT-LSU, and Pedeciba.
\section{Introduction}
In this paper, we consider time fractional parabolic equations with a non-local type time derivative term of the form
\begin{equation}
\label{eq0525_01}
- \partial_t^\alpha u + a^{ij}(t,x) D_{ij} u + b^i(t,x) D_i u + c(t,x) u = f(t,x)
\end{equation}
in $(0,T) \times \mathbb{R}^d$, where $\partial_t^\alpha u$ is the Caputo fractional derivative of order $\alpha \in (0,1)$:
$$
\partial_t^\alpha u(t,x) = \frac{1}{\Gamma(1-\alpha)} \frac{d}{dt} \int_0^t (t-s)^{-\alpha} \left[ u(s,x) - u(0,x) \right] \, ds.
$$
See Sections \ref{sec2} and \ref{Sec3} for a precise definition and properties of $\partial_t^\alpha u$.
Our main result is that, for a given $f \in L_p\left((0,T) \times \mathbb{R}^d \right)$, there exists a unique solution $u$ to the equation \eqref{eq0525_01} in $(0,T) \times \mathbb{R}^d$ with the estimate
$$
\||\partial_t^\alpha u|+|u|+|
Du|+|D^2u|\|_{L_p\left((0,T) \times \mathbb{R}^d \right)} \le N \|f\|_{L_p\left((0,T) \times \mathbb{R}^d \right)}.
$$
The assumptions on the coefficients $a^{ij}$, $b^i$, and $c$ are as follows.
The leading coefficients $a^{ij}=a^{ij}(t,x)$ satisfy the uniform ellipticity condition and have no regularity in the time variable.
Dealing with such coefficients in the setting of $L_p$ spaces is the main focus of this paper.
As functions of $x$, locally the coefficients $a^{ij}$ have small (bounded) mean oscillations (small BMO).
See Assumption \ref{assump2.2}.
The lower-order coefficients $b^i$ and $c$ are assumed to be only bounded and measurable.
If the fractional (or non-local) time derivative $\partial_t^\alpha u$ is replaced with the local time derivative $u_t$, the equation \eqref{eq0525_01} becomes the usual second-order non-divergence form parabolic equation
\begin{equation}
\label{eq0525_02}
-u_t + a^{ij} D_{ij} u + b^i D_i u + c u = f.
\end{equation}
As is well known, there is a great amount of literature on the regularity and solvability for equations as in \eqref{eq0525_02} in various function spaces.
Among them, we only refer the reader to the papers \cite{MR2304157, MR2352490, MR2771670}, which contain corresponding results of this paper to parabolic equations as in \eqref{eq0525_02}.
More precisely, in these papers, the unique solvability results are proved in Sobolev spaces for elliptic and parabolic equations/systems. In particular, for the parabolic case, the leading coefficients are assumed to satisfy the same conditions as mentioned above.
This class of coefficients was first introduced by Krylov in \cite{MR2304157} for parabolic equations in Sobolev spaces.
In \cite{MR2352490}, the results in \cite{MR2304157} were generalized to the mixed Sobolev norm setting, and in \cite{MR2771670} to higher-order elliptic and parabolic systems.
Thus, one can say that the unique solvability of solutions in Sobolev spaces to parabolic equations as in \eqref{eq0525_02} is well established when coefficients are merely measurable in the time variable.
On the other hand, it is well known that the $L_p$-solvability of elliptic and parabolic equations requires the leading coefficients to have some regularity conditions in the spatial variables.
See, for instance, the paper \cite{MR3488249}, where the author shows the impossibility of finding solutions in $L_p$ spaces to one spatial dimensional parabolic equations if $p \notin (3/2,3)$ and the leading coefficient is merely measurable in $(t,x)$.
In view of mathematical interests and applications, it is a natural and interesting question to explore whether the corresponding $L_p$-solvability results hold for equations as in \eqref{eq0525_01} for the same class of coefficients as in \cite{MR2304157, MR2352490, MR2771670}.
In a recent paper \cite{MR3581300} the authors proved the unique solvability of solutions in mixed $L_{p,q}$ spaces to the time fractional parabolic equation \eqref{eq0525_01} under the stronger assumption that the leading coefficients are piecewise continuous in time and uniformly continuous in the spatial variables.
Hence, the results in this paper can be regarded as a generalization of the results in \cite{MR3581300} to a large extent, so that one can have the same class of coefficients as in \cite{MR2304157, MR2352490, MR2771670} for the time non-local equation \eqref{eq0525_01} in $L_p$ spaces.
We note that in \cite{MR3581300} the authors discussed the case $\alpha \in (0,2)$, whereas in this paper we only discuss the parabolic regime $\alpha \in (0,1)$.
It is also worth noting that, for parabolic equations as in \eqref{eq0525_02}, it is possible to consider more general classes of coefficients than those in \cite{MR2304157, MR2352490, MR2771670}.
Regarding this, see \cite{DK15}, where the classes of coefficients under consideration include those $a^{ij}(t,x)$ measurable both in one spatial direction and in time except, for instance, $a^{11}(t,x)$, which is measurable either in time or in the spatial direction.
Besides \cite{MR3581300}, there are a number of papers about parabolic equations with a non-local type time derivative term.
For divergence type time fractional parabolic equations in the Hilbert space setting, see \cite{MR2538276}, where the time fractional derivative is a generalized version of the Caputo fractional derivative.
One can find De Giorgi-Nash-Moser type H\"{o}lder estimates for time fractional parabolic equations in \cite{MR3038123}, and for parabolic equations with fractional operators in both $t$ and $x$ in \cite{MR3488533}.
For other related papers and further information about time fractional parabolic equations and their applications, we refer to \cite{MR3581300} and the references therein.
As a standard scheme in $L_p$-theory, to establish the main results of this paper, we prove a priori estimates for solutions to \eqref{eq0525_01}.
In \cite{MR3581300} a representation formula for a solution to the time fractional heat operator $-\partial_t^\alpha u + \Delta u$ is used, from which the $L_p$-estimate is derived for the operator.
Then for uniformly continuous coefficients, a perturbation argument takes place to derive the main results of the paper.
Our proof is completely different.
Since $a^{ij}$ are measurable in time, it is impossible to treat the equation via a perturbation argument from the time fractional heat equation.
Thus, instead of considering a representation formula for equations with coefficients measurable in time, which does not seem to be available, we start with the $L_2$-estimate and solvability, which can be obtained from integration by parts.
We then exploit a level set argument originally due to Caffarelli and Peral \cite{MR1486629} as well as a ``crawling of ink spots'' lemma, which was originally due to Safonov and Krylov \cite{MR579490, MR563790}.
The main difficulty arises in the key step where one needs local $L_\infty$ estimates of the Hessian of solutions to locally homogeneous equations.
Starting from the $L_2$-estimate and applying the Sobolev type embedding results proved in the Appendix, we are only able to show that such Hessians are in $L_{p_1}$ for some $p_1>2$, instead of $L_\infty$.
Nevertheless, this allows us to obtain the $L_p$ estimate and solvability for any $p\in [2,p_1)$ and $a^{ij}=a^{ij}(t)$ by using a modified level set type argument.
Then we repeat this procedure to iteratively increase the exponent, obtaining the result for any $p\in [2,\infty)$. In the case when $p\in (1,2)$, we apply a duality argument.
For equations with the leading coefficients being measurable in $t$ and locally having small mean oscillations in $x$, we apply a perturbation argument (see, for instance, \cite{MR2304157}).
This is done by incorporating the small mean oscillations of the coefficients into local mean oscillation estimates of solutions having compact support in the spatial variables.
Then, the standard partition of unity argument completes the proof.
In forthcoming work, we will generalize our results to time fractional parabolic equations with more general coefficients considered, for example, in \cite{DK15}. We will also consider solutions in Sobolev spaces with mixed norms as in \cite{MR3581300} as well as equations in domains.
The remainder of the paper is organized as follows.
In the next section, we introduce some notation and state the main results of the paper. In Section \ref{Sec3}, we define function spaces for fractional time derivatives and show some of their properties. In Section \ref{sec4}, we prove the $L_2$ estimate and solvability for equations with coefficients depending only on $t$, and then derive certain local estimates, which will be used later in the iteration argument. We give the estimates of level sets of the Hessian in Section \ref{sec5} and complete the proofs of the main theorems in Section \ref{sec6}. In the Appendix, we establish several Sobolev type embedding theorems involving time fractional derivatives and prove a ``crawling of ink spots'' lemma adapted to our setting.
\section{Notation and main results}
\label{sec2}
We first introduce some notation used through the paper.
For $\alpha \in (0,1)$, denote
$$
I^\alpha \varphi(t) = I_0^\alpha \varphi(t) = \frac{1}{\Gamma(\alpha)} \int_0^t (t-s)^{\alpha - 1} \varphi(s) \, ds
$$
for $\varphi \in L_1(\mathbb{R}^+)$,
where
$$
\Gamma(\alpha) = \int_0^\infty t^{\alpha - 1} e^{-t} \, dt.
$$
In \cite{MR1544927}, $I^\alpha \varphi$ is called the $\alpha$-th integral of $\varphi$ with origin $0$.
For $0 < \alpha < 1$ and sufficiently smooth function $\varphi(t)$, we set
$$
D_t^\alpha \varphi(t) = \frac{d}{dt} I^{1-\alpha} \varphi(t) = \frac{1}{\Gamma(1-\alpha)} \frac{d}{dt} \int_0^t (t-s)^{-\alpha} \varphi(s) \, ds,
$$
and
\begin{align*}
\partial_t^\alpha \varphi(t) &= \frac{1}{\Gamma(1-\alpha)} \int_0^t (t-s)^{-\alpha} \varphi'(s) \, ds\\
&= \frac{1}{\Gamma(1-\alpha)} \frac{d}{dt} \int_0^t (t-s)^{-\alpha} \left[ \varphi(s) - \varphi(0) \right] \, ds.
\end{align*}
Note that if $\varphi(0) = 0$, then
$$
\partial_t(I^{1-\alpha} \varphi) = \partial_t^\alpha \varphi.
$$
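As a concrete illustration of these definitions, the following Python sketch implements the standard L1 discretization of the Caputo derivative on a uniform grid and checks it against the exact identity $\partial_t^\alpha t = t^{1-\alpha}/\Gamma(2-\alpha)$. The grid and the value of $\alpha$ are arbitrary sample choices.
\begin{verbatim}
import numpy as np
from math import gamma

def caputo_l1(phi, t, alpha):
    # L1 scheme: piecewise-linear phi, exact integrals of the kernel.
    h = t[1] - t[0]
    m = np.arange(1, len(t))
    w = ((m * h) ** (1 - alpha)
         - ((m - 1) * h) ** (1 - alpha)) / gamma(2 - alpha)
    d = np.diff(phi) / h
    return np.array([np.dot(d[:k], w[:k][::-1]) for k in range(1, len(t))])

alpha = 0.5
t = np.linspace(0.0, 1.0, 201)
approx = caputo_l1(t, t, alpha)              # phi(t) = t
exact = t[1:] ** (1 - alpha) / gamma(2 - alpha)
print(np.max(np.abs(approx - exact)))        # ~1e-15: exact for linear phi
\end{verbatim}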
Let $\mathcal{D}$ be a subset (not necessarily open) of $\mathbb{R}^k$, $k \in \{1,2, \ldots\}$.
By $\varphi \in C_0^\infty(\mathcal{D})$,
we mean that $\varphi$ is infinitely differentiable in $\mathcal{D}$ and is supported in the intersection of $\mathcal{D}$ and a bounded open subset in $\mathbb{R}^k$.
In particular, $\varphi$ may not be zero on the boundary of $\mathcal{D}$, unless $\mathcal{D}$ is an open subset of $\mathbb{R}^k$.
For $\alpha \in (0,1)$, we denote
$$
Q_{R_1,R_2}(t,x) = (t-R_1^{2/\alpha}, t) \times B_{R_2}(x) \quad \text{and} \quad Q_R(t,x)=Q_{R,R}(t,x).
$$
We often write $B_R$ and $Q_R$ instead of $B_R(0)$ and $Q_R(0,0)$, respectively.
In this paper, we assume that there exists $\delta \in (0,1)$ such that
$$
a^{ij}(t,x)\xi_i \xi_j \geq \delta |\xi|^2,\quad |a^{ij}| \leq \delta^{-1}
$$
for any $\xi \in \mathbb{R}^d$ and $(t,x) \in \mathbb{R} \times \mathbb{R}^d$.
Our first main result is for equations with coefficients $a^{ij}$ depending only on the time variable without any regularity assumptions.
\begin{theorem}
\label{thm0412_1}
Let $\alpha \in (0,1)$, $T \in (0,\infty)$, $a^{ij} = a^{ij}(t)$, and $p \in (1,\infty)$.
Suppose that $u \in \mathbb{H}_{p,0}^{\alpha,2}(\mathbb{R}^d_T)$ satisfies
\begin{equation}
\label{eq0411_03}
-\partial_t^\alpha u + a^{ij} D_{ij} u
= f
\end{equation}
in $\mathbb{R}^d_T := (0,T) \times \mathbb{R}^d$.
Then there exists $N = N(d,\delta,\alpha,p)$ such that
\begin{equation}
\label{eq0411_04}
\|\partial_t^\alpha u\|_{L_p(\mathbb{R}^d_T)} + \|D^2 u\|_{L_p(\mathbb{R}^d_T)} \leq N \|f\|_{L_p(\mathbb{R}^d_T)}.
\end{equation}
Moreover, for $f \in L_p(\mathbb{R}^d_T)$, there exists a unique $u \in \mathbb{H}_{p,0}^{\alpha,2}(\mathbb{R}^d_T)$ satisfying \eqref{eq0411_03} and \eqref{eq0411_04}.
\end{theorem}
We refer the reader to Section \ref{Sec3} for the definitions of function spaces including $\mathbb{H}_{p,0}^{\alpha,2}(\mathbb{R}^d_T)$.
We also consider more general operators with lower-order terms and with coefficients depending on both $t$ and $x$. In this case, we impose the following VMO$_x$ condition on the leading coefficients.
\begin{assumption}[$\gamma_0$]
\label{assump2.2}
There is a constant $R_0\in (0,1]$ such that for each parabolic cylinder $Q_r(t_0,x_0)$ with $r\le R_0$ and $(t_0,x_0)\in \mathbb{R}^{d+1}$, we have
$$
\sup_{i,j}\dashint_{Q_r(t_0,x_0)}|a^{ij}-\bar a^{ij}(t)|\,dx\,dt\le \gamma_0,
$$
where $\bar a^{ij}(t)$ is the average of $a^{ij}(t,\cdot)$ in $B_r(x_0)$.
\end{assumption}
\begin{remark}
\label{rem2.3}
From the above assumption, we have that for any $x_0\in \mathbb{R}^d$ and $a, b \in \mathbb{R}$ such that $b - a > R_0^{2/\alpha}$, there exists $\bar{a}^{ij}(t)$ satisfying the ellipticity condition and
$$
\dashint_{\!a}^{\,\,\,b} \dashint_{B_{R_0}(x_0)} |a^{ij} - \bar{a}^{ij}(t)| \, dx \, dt \leq 2 \gamma_0.
$$
Indeed, find $k \in \{1,2, \ldots\}$ such that
$$
b - (k+1) R_0^{2/\alpha} \leq a < b - k R_0^{2/\alpha},\quad \text{i.e.,}\,\,
\frac{1}{k+1} \leq \frac{R_0^{2/\alpha}}{b-a} < \frac{1}{k},
$$
and set $\bar a^{ij}(t)$ to be the average of $a^{ij}(t,\cdot)$ in $B_{R_0}(x_0)$.
We then see that
\begin{align*}
&\dashint_{\!a}^{\,\,\,b} \dashint_{B_{R_0}(x_0)} |a^{ij} - \bar{a}^{ij}(t)| \, dx \, dt = \frac{1}{b-a} \int_a^b \dashint_{B_{R_0}(x_0)} |a^{ij} - \bar{a}^{ij}(t)| \, dx \, dt\\
&\leq \frac{R_0^{2/\alpha}}{b-a} \sum_{j=0}^k \dashint_{\!b-(j+1)R_0^{2/\alpha}}^{\,\,\,b-j R_0^{2/\alpha}} \dashint_{B_{R_0}(x_0)} |a^{ij} - \bar{a}^{ij}(t)| \, dx \, dt\\
&\leq \frac{R_0^{2/\alpha}}{b-a} (k+1) \gamma_0 \leq \frac{k+1}{k} \gamma_0 \leq 2 \gamma_0.
\end{align*}
\end{remark}
We also assume that the lower-order coefficients $b^i$ and $c$ satisfy
$$
|b^i|\le \delta^{-1},\quad |c|\le \delta^{-1}.
$$
\begin{theorem}
\label{main_thm}
Let $\alpha \in (0,1)$, $T \in (0,\infty)$, and $p \in (1,\infty)$. There exists $\gamma_0\in (0,1)$ depending only on $d$, $\delta$, $\alpha$, and $p$, such that, under Assumption \ref{assump2.2} ($\gamma_0$), the following hold. Suppose that $u \in \mathbb{H}_{p,0}^{\alpha,2}(\mathbb{R}^d_T)$ satisfies
\begin{equation}
\label{eq0411_03c}
-\partial_t^\alpha u + a^{ij} D_{ij} u+b^i D_i u+cu
= f
\end{equation}
in $\mathbb{R}^d_T$.
Then there exists $N = N(d,\delta,\alpha,p,R_0,T)$ such that
\begin{equation}
\label{eq0411_04c}
\|u\|_{\mathbb{H}_p^{\alpha,2}(\mathbb{R}^d_T)} \leq N \|f\|_{L_{p}(\mathbb{R}^d_T)}.
\end{equation}
Moreover, for $f \in L_{p}(\mathbb{R}^d_T)$, there exists a unique $u \in \mathbb{H}_{p,0}^{\alpha,2}(\mathbb{R}^d_T)$ satisfying \eqref{eq0411_03c} and \eqref{eq0411_04c}.
\end{theorem}
\section{Function spaces}
\label{Sec3}
Let $\Omega$ be a domain (open and connected, but not necessarily bounded) in $\mathbb{R}^d$.
For $T > 0$, we denote
$$
\Omega_T = (0,T) \times \Omega \subset \mathbb{R} \times \mathbb{R}^d.
$$
Thus, if $\Omega = \mathbb{R}^d$, we write $\mathbb{R}^d_T = (0,T) \times \mathbb{R}^d$.
For $S>-\infty$ and $\alpha\in (0,1)$, let $I_S^{1-\alpha} u$ be the $(1-\alpha)$-th integral of $u$ with origin $S$:
$$
I_S^{1-\alpha} u = \frac{1}{\Gamma(1-\alpha)}\int_S^t (t-s)^{-\alpha} u(s, x) \, ds.
$$
Throughout the paper, $I_0^{1-\alpha}$ is denoted by $I^{1-\alpha}$.
For $1 \le p \le \infty$, $\alpha \in (0,1)$, $T > 0$, and $k \in \{1,2,\ldots\}$, we set
$$
\widetilde{\mathbb{H}}_p^{\alpha,k}(\Omega_T) = \left\{ u \in L_p(\Omega_T): D_t^\alpha u, \, D^\beta_x u \in L_p(\Omega_T), \, 0 \leq |\beta| \leq k
\right\}
$$
with the norm
\begin{equation*}
\|u\|_{\widetilde{\mathbb{H}}_p^{\alpha,k}(\Omega_T)} = \|D_t^\alpha u\|_{L_p(\Omega_T)} + \sum_{0 \leq |\beta| \leq k}\|D_x^\beta u\|_{L_p(\Omega_T)},
\end{equation*}
where by $D_t^\alpha u$ or $\partial_t(I^{1-\alpha}u) (= \partial_t (I_0^{1-\alpha}u))$ we mean that there exists $g \in L_p(\Omega_T)$ such that
\begin{equation}
\label{eq0122_01}
\int_0^T\int_\Omega g(t,x) \varphi(t,x) \, dx \, dt = - \int_0^T\int_\Omega I^{1-\alpha}u(t,x) \partial_t \varphi(t,x) \, dx \, dt
\end{equation}
for all $\varphi \in C_0^\infty(\Omega_T)$.
If we have a domain $(S,T) \times \Omega$ in place of $\Omega_T$, where $-\infty < S < T < \infty$, we write
$\widetilde{\mathbb{H}}_p^{\alpha,k}\left((S,T) \times \Omega\right)$.
In this case
$$
D_t^\alpha u(t,x) = \partial_t I_S^{1-\alpha} u(t,x).
$$
Now we set
$$
\mathbb{H}_p^{\alpha,k}(\Omega_T) = \left\{ u \in \widetilde{\mathbb{H}}_p^{\alpha,k}(\Omega_T): \text{\eqref{eq0122_01} is satisfied for all}\,\, \varphi \in C_0^\infty\left([0,T) \times \Omega\right)\right\}
$$
with the same norm as for $\widetilde{\mathbb{H}}_p^{\alpha,k}(\Omega_T)$.
Similarly, we define $\mathbb{H}_p^{\alpha,k}((S,T)\times\Omega)$.
If \eqref{eq0122_01} holds for all functions $\varphi \in C_0^\infty\left([0,T) \times \Omega \right)$, then one can regard $I^{1-\alpha}u(t)|_{t=0} = 0$ as holding in the trace sense with respect to the time variable.
In Lemma \ref{lem0123_1} below, we show that, if $\alpha \leq 1 - 1/p$, then $\mathbb{H}_p^{\alpha,k}(\Omega_T)=\widetilde{\mathbb{H}}_p^{\alpha,k}(\Omega_T)$.
\begin{lemma}
\label{lem0123_1}
Let $p \in [1,\infty]$, $\alpha \in (0,1)$, $k \in \{1,2,\ldots\}$ and
$$
\alpha \le 1 - 1/p.
$$
Then, for $u \in \widetilde{\mathbb{H}}_p^{\alpha,k}(\Omega_T)$, the equality \eqref{eq0122_01} holds for all $\varphi \in C_0^\infty\left([0,T) \times \Omega\right)$.
\end{lemma}
\begin{proof}
Let $\eta_k(t)$ be an infinitely differentiable function such that $0 \leq \eta_k(t) \leq 1$, $\eta_k(t) = 0$ for $t \leq 0$, $\eta_k(t) = 1$ for $t \geq 1/k$, and $|\partial_t \eta_k(t)| \leq 2k$.
Then
\begin{align*}
&\int_0^T I^{1-\alpha} u(t,x) \partial_t (\varphi(t) \eta_k(t)) \, dt\\
&= \int_0^T I^{1-\alpha}u(t,x) \partial_t \varphi(t) \eta_k(t) \, dt + \int_0^T I^{1-\alpha} u(t,x) \varphi(t) \partial_t \eta_k(t) \, dt.
\end{align*}
To prove the desired equality, we only need to show that
$$
\int_0^T \int_\Omega I^{1-\alpha}u(t,x) \varphi(t) \partial_t \eta_k(t)\, dx \, dt \to 0
$$
as $k \to \infty$.
Note that
$$
\int_0^T I^{1-\alpha}u(t,x) \varphi(t) \partial_t \eta_k(t) \, dt = \int_0^{1/k} I^{1-\alpha}u(t,x) \varphi(t) \partial_t \eta_k(t) \, dt =:J_k(x).
$$
Then, by Lemma \ref{lem1018_01} with $1-\alpha$ in place of $\alpha$, for any $q \in [1,\infty]$ satisfying
$$
1- \alpha - 1/p > -1/q,
$$
we have
\begin{align*}
|J_k(x)| &\leq N k \int_0^{1/k} I^{1-\alpha} |u(t,x)| \, dt
\leq N k^{1/q} \left( \int_{0}^{1/k} \left|I^{1-\alpha}|u(t,x)|\right|^q \, dt \right)^{1/q}\\
&\leq N k^{\alpha - 1 + 1/p}\|u(\cdot,x)\|_{L_p(0,1/k)} \to 0
\end{align*}
as $k \to \infty$, provided that $\alpha \le 1 - 1/p$ (in the borderline case $\alpha = 1 - 1/p$ with $p < \infty$, one also uses that $\|u(\cdot,x)\|_{L_p(0,1/k)} \to 0$).
The lemma is proved.
\end{proof}
We now prove that every function in $\mathbb{H}_p^{\alpha,k}
(\Omega_T)$ can be approximated by infinitely differentiable functions up to the boundary with respect to the time variable.
\begin{proposition}
\label{prop0120_1}
Let $p \in [1,\infty)$, $\alpha \in (0,1)$, and $k \in \{1,2,\ldots\}$.
Then functions in $C^\infty\left([0,T] \times \Omega\right)$ vanishing for large $|x|$ are dense in $\mathbb{H}_p^{\alpha,k}(\Omega_T)$.
\end{proposition}
\begin{proof}
We prove only the case when $\Omega = \mathbb{R}^d$.
More precisely, we show that
$C^\infty_0\left([0,T] \times \mathbb{R}^d\right)$ is dense in $\mathbb{H}_p^{\alpha,k}(\mathbb{R}^d_T)$.
The proof of the case when $\Omega = \mathbb{R}^d_+$ is similar.
For a general $\Omega$, the claim is proved using a partition of unity with respect to the spatial variables.
See, for instance, \cite{MR0164252}.
Let $u \in \mathbb{H}_p^{\alpha,k}(\mathbb{R}^d_T)$.
Let $\eta(t,x)$ be an infinitely differentiable function defined in $\mathbb{R}^{d+1}$ satisfying $\eta \ge 0$,
$$
\eta(t,x) = 0 \quad \text{outside} \,\,(0,1)\times B_1,\quad \int_{\mathbb{R}^{d+1}} \eta \, dx \, dt = 1.
$$
Set
$$
\eta_\varepsilon(t,x) = \frac{1}{\varepsilon^{d+2/\alpha}} \eta(t/\varepsilon^{2/\alpha}, x/\varepsilon)
$$
and
$$
u^{(\varepsilon)}(t,x) = \int_\mathbb{R} \int_{\mathbb{R}^d} \eta_{\varepsilon}(t-s,x-y) u(s,y) I_{0 < s < T} \, dy \, ds.
$$
Then it follows easily that $u^{(\varepsilon)}(t,x) \in C^\infty(\mathbb{R}^{d+1})$ and, for $(t,x) \in (0,T) \times \mathbb{R}^d$ and $0 \leq |\beta| \leq k$,
\begin{equation}
\label{eq0120_01}
D^\beta_x u^{(\varepsilon)}(t,x) = \int_\mathbb{R} \int_{\mathbb{R}^d} \eta_{\varepsilon}(t-s,x-y) D^\beta_x u(s,y) I_{0<s<T} \, dy \, ds.
\end{equation}
Moreover, for $(t,x) \in (0,T) \times \mathbb{R}^d$,
\begin{equation}
\label{eq0120_02}
D_t^\alpha u^{(\varepsilon)}(t,x) = \int_{\mathbb{R}}\int_{\mathbb{R}^d} \eta_{\varepsilon}(t-s,x-y) D^\alpha_t u(s,y) I_{0<s<T} \, dy \, ds.
\end{equation}
To see \eqref{eq0120_02}, we first check that
\begin{equation}
\label{eq0124_01}
I^{1-\alpha} u^{(\varepsilon)}(t,x) = (I^{1-\alpha} u)^{(\varepsilon)}(t,x).
\end{equation}
Indeed,
\begin{align*}
&\Gamma(1-\alpha) I^{1-\alpha} u^{(\varepsilon)}(t,x)\\
&= \int_0^t (t-s)^{-\alpha}\int_0^T \int_{\mathbb{R}^d} \eta_\varepsilon(s-r,x-y) u(r,y) \, dy \, dr \, ds\\
&= \int_{\mathbb{R}^d} \int_0^T \int_0^t (t-s)^{-\alpha} \eta_\varepsilon(s-r,x-y) u(r,y) \, ds \, dr \, dy\\
&= \int_{\mathbb{R}^d} \int_0^t \int_r^t (t-s)^{-\alpha} \eta_\varepsilon(s-r,x-y) u(r,y) \, ds \, dr \, dy,
\end{align*}
where we used the fact that $\eta(t,x) = 0$ if $t \leq 0$.
Then by the change of variable $\rho = t-s+r$ in the integration with respect to $s$,
we have
\begin{align*}
&\Gamma(1-\alpha) I^{1-\alpha} u^{(\varepsilon)}(t,x) = \int_{\mathbb{R}^d} \int_0^t \int_r^t (\rho-r)^{-\alpha} \eta_\varepsilon(t-\rho,x-y) u(r,y) \, d\rho \, dr \, dy\\
&= \int_{\mathbb{R}^d} \int_0^t \eta_\varepsilon(t-\rho,x-y) \int_0^\rho (\rho-r)^{-\alpha} u(r,y) \, dr \, d\rho \, dy\\
&= \int_{\mathbb{R}^d} \int_0^T \eta_\varepsilon(t-\rho,x-y) \int_0^\rho (\rho-r)^{-\alpha} u(r,y) \, dr \, d\rho \, dy\\
&= \Gamma(1-\alpha) (I^{1-\alpha}u)^{(\varepsilon)}(t,x).
\end{align*}
Hence, the equality \eqref{eq0124_01} is proved.
Now observe that
\begin{align*}
&\int_{\mathbb{R}}\int_{\mathbb{R}^d} \eta_{\varepsilon}(t-s,x-y) D^\alpha_t u(s,y) I_{0<s<T} \, dy \, ds\\
&= \int_0^T \int_{\mathbb{R}^d} \eta_\varepsilon(t-s,x-y) \partial_s I^{1-\alpha}u(s,y) \, dy \, ds\\
&= \int_0^T \int_{\mathbb{R}^d} (\partial_t \eta_\varepsilon) (t-s,x-y) I^{1-\alpha}u(s,y) \, dy \, ds\\
&= \partial_t \left[\int_0^T \int_{\mathbb{R}^d} \eta_\varepsilon(t-s,x-y) I^{1-\alpha} u(s,y) \, dy \, ds\right]\\
&= \partial_t (I^{1-\alpha} u)^{(\varepsilon)}(t,x) = \partial_t I^{1-\alpha} u^{(\varepsilon)}(t,x) = D_t^\alpha u^{(\varepsilon)}(t,x),
\end{align*}
where in the second equality we used the fact that $u$ satisfies \eqref{eq0122_01} for all $\varphi \in C_0^\infty\left([0,T) \times \mathbb{R}^d\right)$
and, by the choice of $\eta$, $\eta_\varepsilon(t-T,x-y) = 0$.
From the equalities \eqref{eq0120_01} and \eqref{eq0120_02}, we see that
$$
\|u^{(\varepsilon)} - u\|_{\mathbb{H}_p^{\alpha,k}(\mathbb{R}^d_T)} \to 0
$$
as $\varepsilon \to 0$.
Finally, we take a smooth cutoff function $\zeta\in C_0^\infty(\mathbb{R}^d)$ such that $\operatorname{supp} \zeta \subset B_2$ and $\zeta=1$ in $B_1$, and denote $\zeta_\varepsilon(x)=\zeta(x/\varepsilon)$. Then by the uniform bound of $
\|u^{(\varepsilon)}\|_{\mathbb{H}_p^{\alpha,k}(\mathbb{R}^d_T)}$, it is easily seen that
$$
\|u^{(\varepsilon)} - u^{(\varepsilon)}\zeta_\varepsilon\|_{\mathbb{H}_p^{\alpha,k}(\mathbb{R}^d_T)} \to 0
$$
as $\varepsilon \to 0$. The proposition is proved.
\end{proof}
\begin{remark}
\label{rem0606_1}
If the boundary of $\Omega$ is sufficiently smooth, for instance $\Omega$ is a Lipschitz domain, then $C^\infty\big([0,T] \times \overline{\Omega} \big)$ is dense in $\mathbb{H}_p^{\alpha,k}(\Omega_T)$.
\end{remark}
\begin{remark}
Lemma \ref{lem0123_1} shows that $\mathbb{H}_p^{\alpha,k}(\Omega_T) = \widetilde{\mathbb{H}}_p^{\alpha,k}(\Omega_T)$ whenever $\alpha \leq 1 - 1/p$, $p \in [1,\infty]$.
Hence, by Proposition \ref{prop0120_1}, it follows that functions in $C^\infty\left([0,T] \times \Omega\right)$ vanishing for large $|x|$ are dense in $\widetilde{\mathbb{H}}_p^{\alpha,k}(\Omega_T)$, provided that $\alpha \leq 1 - 1/p$, $p \in [1,\infty)$, $\alpha \in (0,1)$, and $k \in \{1,2,\dots\}$.
However, in the case $\alpha > 1 - 1/p$, we have $$
\mathbb{H}_p^{\alpha,k}(\Omega_T) \subsetneq \widetilde{\mathbb{H}}_p^{\alpha,k}(\Omega_T).
$$
To see this, let
$$
u(t)=t^{\alpha-1},
$$
where $\alpha \in (1-1/p,1)$ and $p \in [1,\infty)$.
Then $u \in L_p(0,T)$ and
$$
I^{1-\alpha} u(t) = \frac{1}{\Gamma(1-\alpha)} \int_0^t (t-s)^{-\alpha} s^{\alpha-1} \, ds = \Gamma(\alpha),
$$
which is a nonzero constant, so that $$
\partial_t I^{1-\alpha} u=0.
$$
Thus,
$$
u, \, D_t^\alpha u \in L_p(0,T).
$$
However, clearly
the integration by parts formula \eqref{eq0122_01} does not hold for $\varphi \in C_0^\infty[0,T)$.
The above example also shows that, even though we have
$$
u, \, D_t^\alpha u\in L_p((0,T))
$$
for $\alpha > 1-1/p$, one cannot in general expect better integrability or regularity (up to the boundary) of $u$, as opposed to the usual Sobolev embedding results.
\end{remark}
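The constancy of $I^{1-\alpha}u$ in the above example is easy to confirm numerically; the sketch below (with an arbitrary sample value of $\alpha$) evaluates the defining integral by a quadrature rule adapted to the algebraic endpoint singularities.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from math import gamma

alpha = 0.75   # any alpha in (1 - 1/p, 1) behaves the same way

def I_frac(t):
    # (1-alpha)-th integral of u(s) = s**(alpha-1) with origin 0; the
    # weight (s-0)**(alpha-1) * (t-s)**(-alpha) is handled by QAWS.
    val, _ = quad(lambda s: 1.0, 0.0, t,
                  weight='alg', wvar=(alpha - 1.0, -alpha))
    return val / gamma(1.0 - alpha)

# I^{1-alpha}u(t) = Gamma(alpha) for every t > 0, so D_t^alpha u = 0.
print([I_frac(t) for t in (0.1, 1.0, 5.0)], gamma(alpha))
\end{verbatim}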
To deal with solutions with the zero initial condition,
we define
$\mathbb{H}_{p,0}^{\alpha,k}((S,T)\times\Omega)$
to be functions in $\mathbb{H}_p^{\alpha,k}((S,T)\times\Omega)$
each of which is
approximated by a sequence $\{u_n(t,x)\} \subset C^\infty\left([S,T]\times \Omega\right)$ such that $u_n$ vanishes for large $|x|$ and $u_n(S,x) = 0$.
For $u \in \mathbb{H}_{p,0}^{\alpha,k}((S,T)\times\Omega)$ and for any approximation sequences $\{u_n\}$ such that
$u_n \to u$ in $\mathbb{H}_p^{\alpha,k}
((S,T)\times\Omega)$ with $u_n \in C^\infty\left([S,T] \times \Omega\right)$ and $u_n(S,x) = 0$, we have
$$
\partial_t^\alpha u_n = D_t^\alpha u_n.
$$
Thus, when, for instance, $S=0$, for $u \in \mathbb{H}_{p,0}^{\alpha,k}(\Omega_T)$, we define
$$
\partial_t^\alpha u := D_t^\alpha u = \frac{1}{\Gamma(1-\alpha)} \partial_t \int_0^t (t-s)^{-\alpha} u(s,x) \, ds.
$$
\begin{lemma}
\label{lem0206_1}
Let $p \in [1,\infty)$, $\alpha \in (0,1)$, $k \in \{1,2,\ldots\}$, $-\infty < S < t_0 < T < \infty$, and $u \in \mathbb{H}_{p,0}^{\alpha,k}\left( (t_0,T) \times \Omega \right)$.
If $u$ is extended to be zero for $t \leq t_0$, denoted by $\bar{u}$, then $\bar{u} \in \mathbb{H}_{p,0}^{\alpha,k}\left((S,T) \times \Omega\right)$.
\end{lemma}
\begin{proof}
Without loss of generality, we assume $t_0 = 0$ so that
$$
- \infty < S < 0 < T < \infty.
$$
For $u \in \mathbb{H}_{p,0}^{\alpha,k}(\Omega_T)$,
let $\{u_n\}$ be an approximating sequence of $u$ such that $u_n \in \mathbb{H}_{p,0}^{\alpha,k}(\Omega_T) \cap C^\infty\left([0,T] \times \Omega\right)$, $u_n$ vanishes for large $|x|$, and $u_n(0,x) = 0$.
Extend $u_n$ to be zero for $t \leq 0$, denoted by $\bar{u}_n$.
It is readily seen that, for $0 \leq |\beta| \leq k$,
$$
D_x^\beta \bar{u}_n = \left\{
\begin{aligned}
D_x^\beta u_n, \quad 0 \leq t \leq T,
\\
0, \quad S \leq t < 0,
\end{aligned}
\right.
$$
$$
D^\beta_x \bar{u}_n \in L_p\left((S,T) \times \Omega \right).
$$
Now we check that
\begin{equation}
\label{eq0120_03}
D_t^\alpha \bar{u}_n = \partial_t I_S^{1-\alpha} \bar{u}_n =
\left\{
\begin{aligned}
\partial_t I_0^{1-\alpha} u_n, \quad 0 \leq t \leq T,
\\
0, \quad S \leq t < 0,
\end{aligned}
\right.
\end{equation}
$$
D_t^\alpha \bar{u}_n \in L_p\left((S,T) \times \Omega\right).
$$
To see this, note that $I_S^{1-\alpha}\bar{u}_n(t,x) = 0$ for $S \leq t < 0$.
For $0 \leq t \leq T$, we have
\begin{align*}
&I_S^{1-\alpha} \bar{u}_n = \frac{1}{\Gamma(1-\alpha)} \int_S^t (t-s)^{-\alpha} \bar{u}_n(s,x) \, ds\\
&= \frac{1}{\Gamma(1-\alpha)} \int_0^t (t-s)^{-\alpha} u_n(s,x) \, ds
= I_0^{1-\alpha} u_n(t,x).
\end{align*}
We now observe that, for $\varphi \in C_0^\infty\left( (S,T) \times \Omega\right)$,
\begin{align*}
&\int_S^T \int_\Omega I_S^{1-\alpha} \bar{u}_n(t,x) \varphi_t(t,x) \, dx \, dt = \int_0^T \int_\Omega I_0^{1-\alpha} u_n (t,x) \varphi_t(t,x) \, dx \, dt\\
&= - \int_0^T \int_\Omega \partial_t I_0^{1-\alpha} u_n(t,x) \varphi(t,x) \, dx \, dt,
\end{align*}
where we used the fact that $I_0^{1-\alpha}u_n(0,x) = 0$.
This proves \eqref{eq0120_03}.
Since $\{\bar{u}_n\}$ is Cauchy in $\mathbb{H}_p^{\alpha,k}\left((S,T) \times \Omega\right)$ and $\bar{u}_n \to \bar{u}$ in $L_p\left((S,T) \times \Omega\right)$, we see that $\bar{u} \in \mathbb{H}_p^{\alpha,k}\left((S,T) \times \Omega\right)$.
Moreover, since $\bar{u}_n(S,x) = 0$, $\bar{u} \in \mathbb{H}_{p,0}^{\alpha,k}\left((S,T) \times \Omega\right)$.
In fact, the $\bar{u}_n$'s are not necessarily in $C^\infty\left( [S,T] \times \Omega \right)$, but by mollifying $\bar{u}_n$ one can easily obtain $v_n \in C^\infty\left( [S,T] \times \Omega \right)$ vanishing for large $|x|$ such that $v_n(S,x) = 0$ and
$$
v_n \to \bar{u} \quad \text{in} \quad \mathbb{H}_p^{\alpha,k}\left((S,T) \times \Omega\right).
$$
The lemma is proved.
\end{proof}
\begin{lemma}
\label{lem0207_1}
Let $p \in [1,\infty)$, $\alpha \in (0,1)$, $k \in \{1,2,\ldots\}$, $-\infty < S < t_0 < T < \infty$, and $v \in \mathbb{H}_p^{\alpha,k}\left((S,T) \times \Omega \right)$.
Then, for any infinitely differentiable function $\eta$ defined on $\mathbb{R}$ such that $\eta(t)=0$ for $t \leq t_0$ and
$$
|\eta'(t)| \le M, \quad t \in \mathbb{R},
$$
the function $\eta v$ belongs to $\mathbb{H}_{p,0}^{\alpha,k}\left( (t_0,T) \times \Omega \right)$ and
\begin{equation}
\label{eq9.45}
\partial_t^\alpha (\eta v)(t,x) = \partial_t I_{t_0}^{1-\alpha} (\eta
v)(t,x) = \eta(t) \partial_t I_S^{1-\alpha} v (t,x) - g(t,x),
\end{equation}
for $(t,x) \in (t_0,T) \times \Omega$,
where
\begin{equation}
\label{eq0207_04}
g(t,x) = \frac{\alpha}{\Gamma(1-\alpha)} \int_S^t (t-s)^{-\alpha-1} \left(\eta(s) - \eta(t)\right) v(s,x) \, ds
\end{equation}
satisfies
\begin{equation}
\label{eq0207_01}
\|g\|_{L_p\left((t_0,T) \times \Omega\right)} \le N(\alpha, p, M, T, S) \|v\|_{L_p\left( (S,T) \times \Omega\right)}.
\end{equation}
\end{lemma}
\begin{proof}
As in Lemma \ref{lem0206_1}, we assume that $t_0 = 0$.
First we check \eqref{eq0207_01}.
Note that since $|\eta'(t)| \leq M$, we have
\begin{align*}
&\left| \int_S^t (t-s)^{-\alpha-1} \left(\eta(t) - \eta(s) \right) v(s,x) \, ds \right|\\
&\leq M \int_S^t (t-s)^{-\alpha}|v(s,x)| \, ds = M \Gamma(1-\alpha) I^{1-\alpha}_S |v(t,x)|
\end{align*}
for $(t,x) \in \Omega_T$.
Hence, the inequality \eqref{eq0207_01} follows from Lemma \ref{lem1018_01} with $1-\alpha$ in place of $\alpha$ (also see Remark \ref{rem0120_1}).
Since $v \in \mathbb{H}_p^{\alpha,k}\left((S,T) \times \Omega \right)$, there exists a sequence $\{v_n\} \subset \mathbb{H}_p^{\alpha,k}\left((S,T) \times \Omega \right) \cap C^\infty\left([S,T] \times \Omega \right)$ such that $v_n$ vanishes for large $|x|$ and
$$
\|\partial_t I_S^{1-\alpha} (v_n - v) \|_{L_p\left((S,T) \times \Omega \right)} + \sum_{0 \leq |\beta| \leq k} \|D_x^\beta(v_n - v)\|_{L_p\left((S,T) \times \Omega\right)} \to 0
$$
as $n \to \infty$.
Let
$$
g_n(t,x)= \eta(t) \partial_t I_S^{1-\alpha} v_n (t,x) - \partial_t I_0^{1-\alpha} (\eta v_n) (t,x).
$$
Then
\begin{align*}
&- \Gamma(1-\alpha) g_n(t,x)\\
&= \partial_t \int_0^t (t-s)^{-\alpha} \eta(s) v_n(s,x) \, ds - \eta(t) \partial_t \int_S^t (t-s)^{-\alpha} v_n(s,x) \, ds\\
&= \frac{\partial}{\partial t}\left[\int_0^t (t-s)^{-\alpha} \eta(s) v_n(s,x) \, ds - \eta(t) \int_S^t (t-s)^{-\alpha} v_n(s,x) \, ds \right]\\
&\qquad + \eta'(t) \int_S^t (t-s)^{-\alpha} v_n(s,x) \, ds\\
&= \frac{\partial}{\partial t} \left[ \int_S^t (t-s)^{-\alpha} \left(\eta(s) - \eta(t)\right) v_n(s,x) \, ds \right] + \eta'(t) \int_S^t (t-s)^{-\alpha} v_n(s,x) \, ds\\
&= -\alpha \int_S^t (t-s)^{-\alpha-1} \left(\eta(s) - \eta(t)\right) v_n(s,x) \, ds.
\end{align*}
Hence,
$$
g_n(t,x) = \frac{\alpha}{\Gamma(1-\alpha)} \int_S^t (t-s)^{-\alpha-1} \left( \eta(s) - \eta(t) \right) v_n(s,x) \, ds
$$
for $(t,x) \in \Omega_T$.
Clearly,
$$
\eta(t) \partial_t I_S^{1-\alpha} v_n(t,x) \to \eta(t) \partial_t I_S^{1-\alpha} v(t,x)
$$
in $L_p(\Omega_T)$.
From the estimate for $g$ with $v_n - v$ in place of $v$, it follows that
$$
\left\| g_n - g \right\|_{L_p(\Omega_T)} \to 0
$$
as $n \to \infty$.
That is, $$
\partial_t I_0^{1-\alpha}(\eta v_n)(t,x) \to \eta(t) \partial_t I_S^{1-\alpha} v(t,x) - g(t,x)
$$
in $L_p(\Omega_T)$.
This together with $I_0^{1-\alpha} (\eta v_n) \to I_0^{1-\alpha} (\eta v)$ in $L_p(\Omega_T)$
implies \eqref{eq9.45} and $\partial_t I_0^{1-\alpha}(\eta v_n)(t,x) \to \partial_t I_0^{1-\alpha}(\eta v)(t,x)$ in $L_p(\Omega_T)$.
Obviously, $D_x^{\beta} (\eta v_n) \to D_x^\beta (\eta v)$ in $L_p(\Omega_T)$ for $0 \leq |\beta| \leq k$.
Then from the fact that $\eta v_n \in C_0^\infty\left([0,T] \times \Omega \right)$ vanishing for large $|x|$ with $(\eta v_n)(0,x) = 0$, we conclude that $\eta v \in \mathbb{H}_{p,0}^{\alpha,k}(\Omega_T)$.
\end{proof}
\section{Auxiliary results}
\label{sec4}
Throughout this section, we assume that $a^{ij}$ are measurable functions of only $t \in \mathbb{R}$. That is,
$a^{ij} = a^{ij}(t)$.
\begin{proposition}
\label{prop0720_1}
Theorem \ref{thm0412_1} holds when $p=2$.
\end{proposition}
\begin{proof}
A version of this result for divergence type equations can be found in \cite{MR2538276}.
Roughly speaking, the results in this proposition can be obtained by taking the spatial derivatives of the equation in \cite{MR2538276}.
For the reader's convenience, we present here a detailed proof.
By the results from \cite{MR3581300} and the method of continuity, we only need to prove the a priori estimate \eqref{eq0411_04}.
Moreover, since infinitely differentiable functions with compact support in $x$ and with the zero initial condition are dense in $\mathbb{H}_{2,0}^{\alpha,2}(\mathbb{R}^d_T)$, it suffices to prove \eqref{eq0411_04} for $u$ in $C_0^\infty\left([0,T] \times \mathbb{R}^d\right)$ satisfying $u(0,x) =0$ and \eqref{eq0411_03}.
Multiplying both sides of \eqref{eq0411_03} by $\Delta u$ and then integrating on $(0,T) \times \mathbb{R}^d$, we have
\begin{equation}
\label{eq0125_02}
- \int_{\mathbb{R}^d_T} \partial_t^\alpha u \Delta u \, dx \, dt + \int_{\mathbb{R}^d_T} a^{ij}(t) D_{ij} u \Delta u \, dx \, dt = \int_{\mathbb{R}^d_T} f \Delta u \, dx \, dt.
\end{equation}
By integration by parts and the ellipticity condition, it follows that
\begin{align*}
&\int_{\mathbb{R}^d_T} a^{ij}(t) D_{ij} u \Delta u \, dx \, dt = \int_{\mathbb{R}^d_T} \sum_{k=1}^d\sum_{i,j=1}^d a^{ij}(t) D_{ij} u D_k^2 u \, dx \, dt\\
&= \int_{\mathbb{R}^d_T} \sum_{k=1}^d \sum_{i,j=1}^d a^{ij}(t) D_{ki}u \, D_{kj}u \, dx \, dt
\geq \delta \int_{\mathbb{R}^d_T} \sum_{i,k = 1}^d |D_{ki}u|^2 \, dx \, dt.
\end{align*}
The term on the right-hand side of \eqref{eq0125_02} is taken care of by Young's inequality.
Moreover, the estimate for the term $\partial_t^\alpha u$ follows from that of $D^2u$ and the equation.
Thus, to obtain \eqref{eq0411_04} we only need to see that the first integral in \eqref{eq0125_02} is non-negative.
To do this, by setting $\nabla u = v$, we have
$$
- \int_{\mathbb{R}^d_T} \partial_t^\alpha u \, \Delta u \, dx \, dt = \int_{\mathbb{R}^d_T} \partial_t^\alpha v \cdot v \, dx \, dt.
$$
We claim that, for each $(t,x) \in \mathbb{R}^d_T$,
\begin{equation}
\label{eq0904_01}
\partial_t^\alpha v(t,x) \cdot v(t,x)
\ge \frac{1}{2} \partial_t^\alpha |v|^2(t,x) .
\end{equation}
To see this, for fixed $t\in (0,T)$ and $x\in \mathbb{R}^d$, let
$$
F_1(s)=\frac 1 2 |v(s,x)|^2,\quad F_2(s)=v(s,x)\cdot v(t,x),
$$
and
$$
F(s)=\frac 1 2 (|v(s,x)|^2-|v(t,x)|^2)-(v(s,x)-v(t,x))\cdot v(t,x).
$$
Because
$$
F(s)=\frac 1 2|v(s,x)-v(t,x)|^2\ge 0
$$
on $[0,T]$ with the equality at $s=t$, integration by parts clearly yields that
$$
\int_0^t (t-s)^{-\alpha}(F_1'(s)-F_2'(s))\,ds
=\int_0^t (t-s)^{-\alpha}F'(s)\,ds\le 0,
$$
which together with the definition of $\partial_t^\alpha$ implies \eqref{eq0904_01}.
Therefore, because $F_1(0)=0$ we have
\begin{align*}
&2 \Gamma(1-\alpha)\int_0^T\partial_t^\alpha v(t,x) \cdot v(t,x)\,dt\\
&\geq \int_0^T \frac{\partial}{\partial t}\left[ \int_0^t (t-s)^{-\alpha} |v(s,x)|^2 \, ds \right] \, dt =
\left[\int_0^t (t-s)^{-\alpha} |v(s,x)|^2 \, ds\right]_{t=0}^{t=T}\\
&= \int_0^T (T-s)^{-\alpha} |v(s,x)|^2 \, ds \geq 0,
\end{align*}
where we used the fact that $v(s,x)$ is bounded on $[0,T] \times \mathbb{R}^d$ so that
\begin{align*}
&\int_0^t (t-s)^{-\alpha} |v(s,x)|^2 \, ds
= \int_0^1 (t - tr)^{-\alpha} |v(tr,x)|^2 t \, dr\\
&= t^{1-\alpha} \int_0^1 (1-r)^{-\alpha} |v(tr,x)|^2 \, dr \to 0
\end{align*}
as $t \to 0$.
\end{proof}
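The pointwise inequality \eqref{eq0904_01} can also be tested numerically; the following minimal sketch (with an arbitrary smooth scalar $v$ satisfying $v(0)=0$ and a sample value of $\alpha$) evaluates both sides by quadrature.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from math import gamma

alpha = 0.4   # sample order in (0,1)

def caputo(fprime, t):
    # Caputo derivative from its definition; the kernel (t-s)**(-alpha)
    # is treated as an algebraic weight (QAWS) for accuracy near s = t.
    val, _ = quad(fprime, 0.0, t, weight='alg', wvar=(0.0, -alpha))
    return val / gamma(1.0 - alpha)

v = lambda s: np.sin(3.0 * s) + 0.5 * s        # smooth, v(0) = 0
vp = lambda s: 3.0 * np.cos(3.0 * s) + 0.5

for t in (0.3, 1.0, 2.5):
    lhs = caputo(vp, t) * v(t)
    rhs = 0.5 * caputo(lambda s: 2.0 * v(s) * vp(s), t)
    print(t, lhs - rhs)    # nonnegative, as the inequality asserts
\end{verbatim}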
\begin{lemma}[Local estimate]
\label{lem0731_1}
Let $p\in (1,\infty)$, $\alpha \in (0,1)$, $T \in (0,\infty)$, and $0 < r < R < \infty$.
If Theorem \ref{thm0412_1} holds with this $p$ and $v \in \mathbb{H}_{p,0}^{\alpha,2}\left((0, T) \times B_R\right)$ satisfies
$$
-\partial_t^\alpha v + a^{ij}(t) D_{ij} v
= f
$$
in $(0,T) \times B_R$, then
\begin{align*}
&\| \partial_t^\alpha v \|_{L_p\left((0,T) \times B_r\right)} + \| D^2 v\|_{L_p\left((0,T) \times B_r\right)} \\
&\le \frac{N}{(R-r)^2} \|v\|_{L_p\left((0,T) \times B_R\right)} + N \|f\|_{L_p\left((0,T) \times B_R\right)},
\end{align*}
where $N = N(d,\delta,\alpha,p)$.
\end{lemma}
\begin{proof}
Set
$$
r_0 = r, \quad r_k = r+(R-r)\sum_{j=1}^k \frac{1}{2^j}, \quad k = 1, 2, \ldots.
$$
Let $\zeta_k = \zeta_k(x)$ be an infinitely differentiable function defined on $\mathbb{R}^d$ such that
$$
\zeta_k = 1 \quad \text{on} \quad B_{r_k}, \quad \zeta_k = 0 \quad \text{on} \quad \mathbb{R}^d \setminus B_{r_{k+1}},
$$
and
$$
|D_x \zeta_k(x)| \le \frac{2^{k+2}}{R-r}, \quad |D^2_x \zeta_k(x)| \le \frac{2^{2k+4}}{(R-r)^2}.
$$
Then $v \zeta_k$ belongs to $\mathbb{H}_{p,0}^{\alpha,2}(\mathbb{R}^d_T)$ and satisfies
$$
-\partial_t^\alpha (v \zeta_k) + a^{ij}D_{ij} (v \zeta_k)
= 2 a^{ij} D_i v D_j \zeta_k + a^{ij} v D_{ij} \zeta_k + f\zeta_k
$$
in $(0,T) \times \mathbb{R}^d$.
By Theorem \ref{thm0412_1}, it follows that
\begin{align}
\label{eq0728_01}
&\|D^2 (v\zeta_k)\|_{L_p(\mathbb{R}^d_T)}\nonumber\\
&\le \frac{N2^k}{R-r} \|Dv\|_{L_p\left((0,T)\times B_{r_{k+1}}\right)} + \frac{N2^{2k}}{(R-r)^2} \|v\|_{L_p\left((0,T)\times B_{r_{k+1}}\right)} + N \|f\|_{L_p\left((0,T)\times B_R\right)}\nonumber
\\
&\le \frac{N 2^k}{R-r} \|D(v\zeta_{k+1})\|_{L_p(\mathbb{R}^d_T)} + \frac{N 2^{2k}}{(R-r)^2} \|v\|_{L_p\left((0,T)\times B_R\right)} + N \|f\|_{L_p\left((0,T)\times B_R\right)},
\end{align}
where $N = N(d,\delta,\alpha,p)$.
By an interpolation inequality with respect to the spatial variables,
\begin{align*}
&\frac{2^k}{R-r}\|D(v\zeta_{k+1})\|_{L_p(\mathbb{R}^d_T)}\\
&\le \varepsilon \|D^2(v \zeta_{k+1})\|_{L_p(\mathbb{R}^d_T)} + N \varepsilon^{-1} \frac{2^{2k}}{(R-r)^2} \|v \zeta_{k+1}\|_{L_p(\mathbb{R}^d_T)}
\end{align*}
for any $\varepsilon \in (0,1)$, where $N=N(d,p)$.
Combining this inequality with \eqref{eq0728_01}, we obtain that
\begin{align*}
&\|D^2 (v \zeta_k) \|_{L_p(\mathbb{R}^d_T)}\\
&\le \varepsilon \|D^2 (v \zeta_{k+1}) \|_{L_p(\mathbb{R}^d_T)} + N \varepsilon^{-1} \frac{4^k}{(R-r)^2}\|v\|_{L_p\left((0,T) \times B_R\right)} + N \|f\|_{L_p\left((0,T)\times B_R\right)},
\end{align*}
where $N = N(d,\delta,\alpha,p)$.
By multiplying both sides of the above inequality by $\varepsilon^k$ and making summation with respect to $k = 0, 1, \ldots$, we see that
\begin{align*}
&\|D^2(v\zeta_0)\|_{L_p(\mathbb{R}^d_T)} + \sum_{k=1}^\infty \varepsilon^k \|D^2(v\zeta_k)\|_{L_p(\mathbb{R}^d_T)}\\
&\le \sum_{k=1}^\infty \varepsilon^k \|D^2(v\zeta_k)\|_{L_p(\mathbb{R}^d_T)} + N \frac{\varepsilon^{-1}}{(R-r)^2} \|v\|_{L_p\left((0,T) \times B_R\right)} \sum_{k=0}^\infty (4\varepsilon)^k\\
&+ N \|f\|_{L_p\left((0,T)\times B_R\right)} \sum_{k=0}^\infty \varepsilon^k,
\end{align*}
where the convergence of the summations is guaranteed by taking $\varepsilon = 1/8$.
We then obtain the desired inequality in the lemma after we remove the same terms from both sides of the above inequality and use the fact that $\zeta_0 = 1$ on $B_r$.
\end{proof}
\begin{lemma}
\label{lem0211_1}
Let $p \in [1,\infty)$, $\alpha \in (0,1)$, $0 < T < \infty$, and $0 < r < R < \infty$.
If $v \in \mathbb{H}_{p,0}^{\alpha,2}\left((0, T) \times B_R\right)$, then, for $\varepsilon \in (0, R-r)$,
$$
D_x v^{(\varepsilon)}(t,x) \in \mathbb{H}_{p,0}^{\alpha,2}\left((0,T) \times B_r\right),
$$
where $v^{(\varepsilon)}$ is a mollification of $v$ with respect to the spatial variables, that is,
$$
v^{(\varepsilon)}(t,x) = \int_{B_R} \phi_\varepsilon(x-y) v(t,y) \, dy, \quad \phi_\varepsilon(x) = \varepsilon^{-d} \phi(x/\varepsilon),
$$
and $\phi\in C_0^\infty(B_1)$ is a smooth function with unit integral.
\end{lemma}
\begin{proof}
Since $v \in \mathbb{H}_{p,0}^{\alpha,2}\left((0,T) \times B_R\right)$, there exists a sequence $\{v_n\} \subset C^\infty\big([0,T] \times B_R\big)$ such that $v_n(0,x) = 0$ and
$$
\left\| v_n - v \right\|_{\mathbb{H}_p^{\alpha,2}\left((0,T) \times B_R \right)} \to 0
$$
as $n \to \infty$.
Then, $D_x v_n^{(\varepsilon)} \in C^\infty\left([0,T]\times B_r\right)$ and $D_x v_n^{(\varepsilon)}(0,x) = 0$.
For $(t,x) \in (0,T) \times B_r$, we have
$$
D_x^k D_x v^{(\varepsilon)}(t,x) = \int_{B_R} (D_x \phi_\varepsilon)(x-y) D_x^k v(t,y) \, dy, \quad k = 0,1,2,
$$
$$
D_t^\alpha D_x v^{(\varepsilon)}(t,x) = \int_{B_R} (D_x \phi_\varepsilon)(x-y) \partial_t^\alpha v(t,y) \, dy.
$$
We also have the same expressions for $v_n$ in place of $v$.
Hence, we see that
$$
\left\| D_x v_n^{(\varepsilon)} - D_x v^{(\varepsilon)} \right\|_{\mathbb{H}_p^{\alpha,2}\left((0,T) \times B_r \right)} \to 0
$$
as $n \to \infty$.
This shows that $D_x v^{(\varepsilon)} \in \mathbb{H}_{p,0}^{\alpha,2}\left((0,T) \times B_r\right)$.
\end{proof}
If $v \in \mathbb{H}_{p,0}^{\alpha,2}\left((S,T) \times \mathbb{R}^d\right)$ is a solution to a homogeneous equation, one can improve its regularity as follows.
\begin{lemma}
\label{lem0731_2}
Let $p\in (1,\infty)$, $\alpha \in (0,1)$, $-\infty< S < t_0 < T < \infty$, and $0 < r < R < \infty$.
Suppose that Theorem \ref{thm0412_1} holds with this $p$ and $v \in \mathbb{H}_{p,0}^{\alpha,2}\left((S, T) \times B_R\right)$ satisfies
$$
-\partial_t^\alpha v + a^{ij}(t) D_{ij} v
= f
$$
in $(S,T) \times B_R$, where $f(t,x) = 0$ on $(t_0,T) \times B_R$ and, as we recall,
$$
\partial_t^\alpha v (t,x) = \frac{\partial}{\partial t} I_S^{1-\alpha} v(t,x) = \frac{1}{\Gamma(1-\alpha)} \frac{\partial}{\partial t} \int_S^t (t-s)^{-\alpha} v(s,x) \, ds.
$$
Then, for any infinitely differentiable function $\eta$ defined on $\mathbb{R}$ such that $\eta(t)=0$ for $t \leq t_0$, the function
$D^2 (\eta v) = D^2_x (\eta v)$ belongs to $\mathbb{H}_{p,0}^{\alpha,2}\left( (t_0,T) \times B_r \right)$ and satisfies
$$
- \partial_t^\alpha (D^2(\eta v)) + a^{ij}(t) D_{ij} (D^2(\eta v)) = \mathcal{G}
$$
in $(t_0,T) \times B_r$,
where $\partial_t^\alpha = \partial_t I_{t_0}^{1-\alpha}$ and $\mathcal{G}$ is defined by
$$
\mathcal{G}(t,x) = \frac{\alpha}{\Gamma(1-\alpha)} \int_S^t (t-s)^{-\alpha-1}\left(\eta(s) - \eta(t)\right) D^2 v(s,x) \, ds.
$$
Moreover,
\begin{equation}
\label{eq0208_02}
\|D^4 (\eta v)\|_{L_p\left((0,T) \times B_r\right)} \le \frac{N}{(R-r)^2} \|D^2 v\|_{L_p\left((0,T) \times B_R\right)} + N \|\mathcal{G}\|_{L_p\left((0,T) \times B_R\right)},
\end{equation}
where $N = N(d,\delta,\alpha,p)$.
\end{lemma}
\begin{proof}
Without loss of generality we assume $t_0 = 0$ so that
$$
- \infty < S < 0 < T < \infty.
$$
By Lemma \ref{lem0207_1} and the fact that $f(t,x) = 0$ on $(0,T) \times B_R$, we have that $\eta v$ belongs to $\mathbb{H}_{p,0}^{\alpha,2}\left((0,T) \times B_R\right)$ and satisfies
$$
- \partial_t^\alpha (\eta v) + a^{ij}(t) D_{ij}(\eta v) = g
$$
in $(0,T) \times B_R$, where $g \in L_p\left((0,T) \times B_R\right)$ is from \eqref{eq0207_04}.
Find $r_i$, $i=1,2,3$, such that
$r = r_1 < r_2 < r_3 < R$.
Set $w = \eta v$ and
consider
$w^{(\varepsilon)}$, $\varepsilon \in (0,R-r_3)$, from Lemma \ref{lem0211_1}, which is a mollification of $w$ with respect to the spatial variables.
Since $w \in \mathbb{H}_{p,0}^{\alpha,2}\left((0,T) \times B_R\right)$, by Lemma \ref{lem0211_1}, $D_x w^{(\varepsilon)}$ belongs to $\mathbb{H}_{p,0}^{\alpha,2}\left((0,T) \times B_{r_3}\right)$ and satisfies
$$
- \partial_t^\alpha (D_x w^{(\varepsilon)}) + a^{ij}(t) D_{ij} (D_x w^{(\varepsilon)}) = D_x g^{(\varepsilon)}
$$
in $(0,T) \times B_{r_3}$, where
$$
D_x g^{(\varepsilon)}(t,x) = \frac{\alpha}{\Gamma(1-\alpha)} \int_S^t (t-s)^{-\alpha-1} \left(\eta(s) - \eta(t)\right) D_x v^{(\varepsilon)}(s,x) \, ds.
$$
It then follows from Lemma \ref{lem0731_1} that
\begin{multline}
\label{eq0209_01}
\| \partial_t^\alpha (D_x w^{(\varepsilon)})\|_{L_p\left((0,T) \times B_{r_2}\right)} + \| D^2 (D_x w^{(\varepsilon)})\|_{L_p\left((0,T) \times B_{r_2}\right)}
\\
\le \frac{N}{(r_3-r_2)^2} \|D_x w^{(\varepsilon)}\|_{L_p\left((0,T) \times B_{r_3}\right)} + N \|D_x g^{(\varepsilon)}\|_{L_p\left((0,T) \times B_{r_3}\right)},
\end{multline}
where $N = N(d,\delta,p)$.
Note that
\begin{equation}
\label{eq0211_01}
\|D_x w^{(\varepsilon)} - D_x w\|_{L_p\left((0,T) \times B_{r_3}\right)} \to 0, \quad \|D_x g^{(\varepsilon)} - \mathcal{G}_0 \|_{L_p\left((0,T) \times B_{r_3}\right)} \to 0,
\end{equation}
where $\mathcal{G}_0$ is defined as $\mathcal{G}$ with $Dv$ in place of $D^2 v$. In particular, the latter convergence in \eqref{eq0211_01} is guaranteed by \eqref{eq0207_01} and the properties of mollifications.
Recall that $D_x w^{(\varepsilon)} \in \mathbb{H}_{p,0}^{\alpha,2}\left((0,T) \times B_{r_3}\right)$.
Then, from \eqref{eq0209_01} and \eqref{eq0211_01}, we conclude that $D_x w$ belongs to $\mathbb{H}_{p,0}^{\alpha,2}\left((0,T) \times B_{r_2}\right)$ and satisfies
$$
- \partial_t^\alpha (D_x w ) + a^{ij}(t) D_{ij} (D_x w) = \mathcal{G}_0
$$
in $(0,T) \times B_{r_2}$.
We now repeat the above argument with $Dw$, $r_1$, and $r_2$ in place of $w$, $r_2$, and $r_3$, respectively, along with the observation that the limits in \eqref{eq0211_01} hold with $Dw$ in place of $w$.
In particular, the estimate \eqref{eq0209_01} with $Dw$ in place of $w$ implies \eqref{eq0208_02}.
The lemma is proved.
\end{proof}
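\begin{remark}
The formula for $\mathcal{G}$ in Lemma \ref{lem0731_2} can be read off from the following formal computation, carried out for smooth $v$ with $v(S,\cdot) = 0$ (the general case is obtained by approximation, as in the proof above). Writing
$$
\partial_t^\alpha v(t,x) = \frac{1}{\Gamma(1-\alpha)} \int_S^t (t-s)^{-\alpha} \partial_s v(s,x) \, ds
$$
and integrating by parts, one obtains the commutation identity
$$
\partial_t^\alpha (\eta v)(t,x) = \eta(t)\, \partial_t^\alpha v(t,x) + \frac{\alpha}{\Gamma(1-\alpha)} \int_S^t (t-s)^{-\alpha-1} \left( \eta(t) - \eta(s) \right) v(s,x) \, ds,
$$
the boundary terms vanishing because $\alpha \in (0,1)$, $\eta$ is Lipschitz, and $v(S,\cdot) = 0$. Since $f = 0$ wherever $\eta \neq 0$, it follows that $-\partial_t^\alpha(\eta v) + a^{ij}(t) D_{ij}(\eta v)$ equals the last integral with the opposite sign, and applying $D^2$ to this equation yields the equation for $D^2(\eta v)$ with $\mathcal{G}$ as stated.
\end{remark}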
\section{Level set arguments}
\label{sec5}
Recall that $Q_{R_1,R_2}(t,x) = (t-R_1^{2/\alpha}, t) \times B_{R_2}(x)$ and $Q_R(t,x)=Q_{R,R}(t,x)$.
For $(t_0,x_0) \in \mathbb{R} \times \mathbb{R}^d$ and a function $g$ defined on $(-\infty,T) \times \mathbb{R}^d$, we set
\begin{equation}
\label{eq0406_03b}
\mathcal{M} g(t_0,x_0) = \sup_{Q_{R}(t,x) \ni (t_0,x_0)} \dashint_{Q_{R}(t,x)}|g(s,y)| I_{(-\infty,T) \times \mathbb{R}^d}(s,y) \, dy \, ds
\end{equation}
and
\begin{equation}
\label{eq0406_03}
\mathcal{S}\mathcal{M} g(t_0,x_0) = \sup_{Q_{R_1,R_2}(t,x) \ni (t_0,x_0)} \dashint_{Q_{R_1,R_2}(t,x)}|g(s,y)| I_{(-\infty,T) \times \mathbb{R}^d}(s,y) \, dy \, ds.
\end{equation}
The first one is called the (parabolic) maximal function of $g$, and the second one is called the strong (parabolic) maximal function of $g$.
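\begin{remark}
Since every parabolic cylinder $Q_R(t,x)$ is the rectangle $Q_{R_1,R_2}(t,x)$ with $R_1 = R_2 = R$, the supremum defining $\mathcal{S}\mathcal{M}$ is taken over a larger family than that defining $\mathcal{M}$, so that $\mathcal{M} g \le \mathcal{S}\mathcal{M} g$ pointwise. Moreover, the average of $|g|$ over a rectangle containing $(t_0,x_0)$ is dominated by the composition of the one-parameter Hardy-Littlewood maximal functions in $t$ and in $x$, so $\mathcal{S}\mathcal{M}$ is bounded on $L_p$ for $p \in (1,\infty]$; this is the standard route to the maximal function theorem invoked in Section \ref{sec6}.
\end{remark}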
\begin{proposition}
\label{prop0406_1}
Let $p\in (1,\infty)$, $\alpha \in (0,1)$, $T \in (0,\infty)$, and $a^{ij} = a^{ij}(t)$.
Assume that Theorem \ref{thm0412_1} holds with this $p$ and $u \in \mathbb{H}_{p,0}^{\alpha,2}(\mathbb{R}^d_T)$ satisfies
$$
-\partial_t^\alpha u + a^{ij} D_{ij} u
= f
$$
in $(0,T) \times \mathbb{R}^d$.
Then there exists
$$
p_1 = p_1(d, \alpha,p)\in (p,\infty]
$$
satisfying
\begin{equation}
\label{eq0411_05}
p_1 > p + \min\left\{\frac{2\alpha}{\alpha d + 2 - 2\alpha}, \alpha, \frac{2}{d} \right\}
\end{equation}
and the following.
For $(t_0,x_0) \in [0,T] \times \mathbb{R}^d$ and $R \in (0,\infty)$,
there exist
$$
w \in \mathbb{H}_{p,0}^{\alpha,2}((t_0-R^{2/\alpha}, t_0)\times \mathbb{R}^d), \quad v \in \mathbb{H}_{p,0}^{\alpha,2}((S,t_0) \times \mathbb{R}^d),
$$
where $S = \min\{0, t_0 - R^{2/\alpha}\}$,
such that $u = w + v$ in $Q_R(t_0,x_0)$,
\begin{equation}
\label{eq8.13}
\left( |D^2w|^p \right)_{Q_R(t_0,x_0)}^{1/p} \le N \left( |f|^p \right)_{Q_{2R}(t_0,x_0)}^{1/p},
\end{equation}
and
\begin{multline}
\label{eq0411_01}
\left( |D^2v|^{p_1} \right)_{Q_{R/2}(t_0,x_0)}^{1/p_1} \leq N \left( |f|^p \right)_{Q_{2R}(t_0,x_0)}^{1/p}
\\
+ N \sum_{k=0}^\infty 2^{-k\alpha} \left( \dashint_{\!t_0 - (2^{k+1}+1)R^{2/\alpha}}^{\,\,\,t_0} \dashint_{B_R(x_0)} |D^2u(s,y)|^p \, dy \, ds \right)^{1/p},
\end{multline}
where $N=N(d,\delta, \alpha,p)$.
Here we understand that $u$ and $f$ are extended to be zero whenever $t < 0$
and
$$
\left( |D^2v|^{p_1} \right)_{Q_{R/2}(t_0,x_0)}^{1/p_1} = \|D^2v\|_{L_\infty(Q_{R/2}(t_0,x_0))},
$$
provided that $p_1 = \infty$.
\end{proposition}
\begin{proof}
We extend $u$ and $f$ to be zero, again denoted by $u$ and $f$, on $(-\infty,0) \times \mathbb{R}^d$.
Thanks to translation, it suffices to prove the desired inequalities when $x_0 = 0$.
Moreover, we may assume that $R = 1$.
Indeed, for $R > 0$, we set
$$
\tilde{u}(t,x) = R^{-2}u(R^{2/\alpha}t, R x), \quad \tilde{a}^{ij} = a^{ij}(R^{2/\alpha}t), \quad \tilde{f}(t,x) = f(R^{2/\alpha}t, Rx).
$$
Then
$$
- \partial_t^\alpha \tilde{u} + \tilde{a}^{ij}(t) D_{ij} \tilde{u} = \tilde{f}
$$
in $(0,R^{-2/\alpha} T) \times \mathbb{R}^d$.
We then apply the result for $R=1$ to this equation on
$$
(\tilde{t}_0-1,\tilde{t}_0) \times B_1, \quad \tilde{t}_0 = R^{-2/\alpha} t_0
$$
and return to $u$.
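To spell out the scaling behind this reduction: if $\tilde{u}(t,x) = R^{-2} u(R^{2/\alpha} t, Rx)$, then $D_{ij} \tilde{u}(t,x) = (D_{ij} u)(R^{2/\alpha} t, Rx)$, while the change of variables $s \mapsto R^{2/\alpha} s$ in $I_S^{1-\alpha}$ shows that the time-fractional derivative scales as
$$
\partial_t^\alpha \tilde{u}(t,x) = R^{-2} \left( R^{2/\alpha} \right)^{\alpha} (\partial_t^\alpha u)(R^{2/\alpha} t, Rx) = (\partial_t^\alpha u)(R^{2/\alpha} t, Rx),
$$
so both terms of the equation transform by the same factor and $\tilde{f}(t,x) = f(R^{2/\alpha} t, Rx)$, as claimed.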
For $R=1$ and $t_0 \in (0,\infty)$, set $\zeta = \zeta(t,x)$ to be an infinitely differentiable function defined on $\mathbb{R}^{d+1}$ such that
$$
\zeta = 1 \quad \text{on} \quad (t_0-1, t_0) \times B_1,
$$
and
$$
\zeta = 0 \quad \text{on} \quad \mathbb{R}^{d+1} \setminus (t_0-2^{2/\alpha}, t_0+2^{2/\alpha}) \times B_2.
$$
Using Theorem \ref{thm0412_1}, find a solution $w\in \mathbb{H}_{p,0}^{\alpha,2}(\mathbb{R}^d_T)$ to the problem
$$
\left\{
\begin{aligned}
-\partial_t^\alpha w + a^{ij}(t) D_{ij} w &= \zeta(t,x) f(t,x)\quad \text{in} \,\, (t_0-1, t_0) \times \mathbb{R}^d,
\\
w(t_0 - 1,x) &= 0 \quad \text{on} \quad \mathbb{R}^d,
\end{aligned}
\right.
$$
where we recall that
$$
\partial_t^\alpha w = \frac{1}{\Gamma(1-\alpha)} \partial_t \int_{t_0-1}^t (t-s)^{-\alpha} w(s,x) \, ds.
$$
Again extend $w$ to be zero on $(-\infty,t_0-1) \times \mathbb{R}^d$.
From Theorem \ref{thm0412_1} it follows that
\begin{equation}
\label{eq1214_01}
\|\partial_t^\alpha w\|_{L_p\left(Q_r(t_0,0)\right)} + \|D^2 w \|_{L_p\left(Q_r(t_0,0)\right)}
\le N \|f\|_{L_p\left(Q_2(t_0,0)\right)}
\end{equation}
for any $r > 0$.
Set $v = u - w$ so that
$$
v =
\left\{
\begin{aligned}
u-w, &\quad t \in (t_0 - 1,t_0),
\\
u, &\quad t \in (-\infty, t_0 - 1],
\end{aligned}
\right.
$$
where we note that it is possible to have $t_0 - 1 < 0$.
Then by Lemma \ref{lem0206_1}, $v$ belongs to $\mathbb{H}_{p,0}^{\alpha,2}\left((S,t_0) \times \mathbb{R}^d\right)$ for $S := \min \{0, t_0 -1\}$ and satisfies
$$
\partial_t^\alpha w = \partial_t I_{t_0-1}^{1-\alpha} w = \partial_t I_S^{1-\alpha} w, \quad \partial_t^\alpha u = \partial_t I_0^{1-\alpha} u = \partial_t I_S^{1-\alpha} u,
$$
and
$$
-\partial_t^\alpha v + a^{ij} D_{ij} v = h
$$
in $(S,t_0)\times \mathbb{R}^d$, where
$$
h(t,x) = \left\{
\begin{aligned}
\left( 1 -\zeta(t,x)\right) f(t,x) \quad &\text{in} \,\, (t_0 - 1, t_0) \times \mathbb{R}^d,
\\
f(t,x) \quad &\text{in} \,\, (S, t_0 - 1) \times \mathbb{R}^d.
\end{aligned}
\right.
$$
In particular, we note that $h = 0$
in $(t_0 - 1,t_0) \times B_1$.
Find an infinitely differentiable function $\eta$ defined on $\mathbb{R}$ such that
$$
\eta =
\left\{
\begin{aligned}
1 \quad &\text{if} \quad t \in (t_0-(1/2)^{2/\alpha},t_0),
\\
0 \quad &\text{if} \quad t \in \mathbb{R} \setminus (t_0-1,t_0+1),
\end{aligned}
\right.
$$
and
$$
\left|\frac{\eta(t)-\eta(s)}{t-s}\right| \le N(\alpha).
$$
By Lemma \ref{lem0731_2}, $D^2(\eta v)$ belongs to $\mathbb{H}_{p,0}^{\alpha,2}\left((t_0-1,t_0)\times B_{3/4}\right)$ and satisfies
$$
-\partial_t^\alpha \left( D^2(\eta v) \right) + a^{ij} D_{ij} D^2(\eta v) = \mathcal{G}
$$
in $(t_0-1,t_0)\times B_{3/4}$,
where
$$
\mathcal{G}(t,x) = \frac{\alpha}{\Gamma(1-\alpha)} \int_S^t (t-s)^{-\alpha-1}\left(\eta(s) - \eta(t)\right) D^2 v(s,x) \, ds.
$$
If $p \leq 1/\alpha$, take $p_1$ satisfying
$$
p_1 \in \left(p, \frac{1/\alpha + d/2}{1/(\alpha p) + d/(2 p) -1}\right) \quad \text{if} \quad p \leq d/2,
$$
$$
p_1 \in \left(p, p(\alpha p + 1)\right) \quad \text{if} \quad p > d/2.
$$
If $p > 1/\alpha$, take $p_1$ satisfying
$$
p_1 \in \left(p, p + 2p^2/d\right) \quad \text{if} \quad p \leq d/2,
$$
$$
p_1 \in (p, 2p) \quad \text{if} \quad p > d/2, \quad p \leq d/2 + 1/\alpha,
$$
$$
p_1 = \infty \quad \text{if} \quad p > d/2 + 1/\alpha.
$$
Note that $p_1$ satisfies \eqref{eq0411_05}
and the increment $\min \{2\alpha/(\alpha d + 2 - 2\alpha), \alpha, 2/d\}$ is independent of $p$.
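For orientation, here is a sample of the recipe above (an illustration, not needed for the proof): take $d = 3$, $\alpha = 1/2$, and $p = 2$. Then $p \leq 1/\alpha$ and $p > d/2$, so one may take $p_1 \in \left(p, p(\alpha p + 1)\right) = (2,4)$. Since
$$
\min\left\{ \frac{2\alpha}{\alpha d + 2 - 2\alpha}, \alpha, \frac{2}{d} \right\} = \min\left\{ \frac{2}{5}, \frac{1}{2}, \frac{2}{3} \right\} = \frac{2}{5},
$$
the choice $p_1 = 3 > p + 2/5$ indeed satisfies \eqref{eq0411_05}.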
By Lemma \ref{lem0731_2} and the embedding results in the Appendix (Corollary \ref{cor1211_1}, Theorem \ref{thm1207_2}, Corollary \ref{cor0225_1}, Theorem \ref{thm0214_1}, and Theorem \ref{thm5.18}), we have
\begin{align}
\label{eq0715_01}
&\|D^2 v\|_{L_{p_1}\left(Q_{1/2}(t_0,0)\right)} \le \|D^2(\eta v)\|_{L_{p_1}\left((t_0-1,t_0)\times B_{1/2}\right)}\nonumber
\\
&\le N\| |D^2(\eta v)| + |D^4(\eta v)| + |D_t^\alpha D^2 (\eta v)| \|_{L_p\left((t_0-1,t_0)\times B_{3/4}\right)}\nonumber
\\
&\le N \||D^2(\eta v)| + |\mathcal{G}|\|_{L_p\left((t_0-1,t_0)\times B_1\right)} \le N \||D^2 v| + |\mathcal{G}|\|_{L_p\left((t_0-1,t_0)\times B_1\right)}\nonumber
\\
&\leq N \| |D^2 u| + |D^2 w| + |\mathcal{G}| \|_{L_p\left((t_0-1,t_0)\times B_1\right)},
\end{align}
where $N = N(d, \delta,
\alpha,p,p_1)$ and we used the fact that
$$
D_t^\alpha D^2(\eta v) = a^{ij} D_{ij} D^2 (\eta v) - \mathcal{G}
$$
in $(t_0-1,t_0) \times B_{3/4}$.
Since $D^2 v = 0$ for $t \leq S$, we write
\begin{align*}
&\frac{\Gamma(1-\alpha)}{\alpha} \mathcal{G}(t,x) = \int_{-\infty}^t (t-s)^{-\alpha-1} \left( \eta(s) - \eta(t) \right) D^2 v(s,x) \, ds\\
&= \int_{t-1}^t (t-s)^{-\alpha-1}\left( \eta(s) - \eta(t) \right) D^2 v(s,x) \, ds\\
&\quad + \int_{-\infty}^{t-1} (t-s)^{-\alpha-1}\left( \eta(s) - \eta(t) \right) D^2 v(s,x) \, ds := I_1(t,x)+I_2(t,x),
\end{align*}
where
\begin{align*}
|I_1(t,x)| \le N \int_{t-1}^t |t-s|^{-\alpha} |D^2 v(s,x)|\,ds = N \int_0^1 |s|^{-\alpha} |D^2 v(t-s,x)|\, ds.
\end{align*}
From this we have
\begin{multline}
\label{eq0715_02}
\|I_1\|_{L_p\left((t_0-1,t_0) \times B_1\right)} \le N \|D^2 v\|_{L_p\left((t_0-2,t_0)\times B_1\right)}
\\
\le N \|D^2 v\|_{L_p\left((t_0-1,t_0)\times B_1\right)} + N \|D^2 u\|_{L_p\left((t_0-2,t_0-1)\times B_1\right)}.
\end{multline}
To estimate $I_2$, we see that
$\eta(s) = 0$ for any $s \in (-\infty, t-1)$ with $t \in (t_0-1,t_0)$.
Thus we have
$$
I_2(t,x) = -\eta(t) \int_{-\infty}^{t-1} (t-s)^{-\alpha-1} D^2 v(s,x) \, ds.
$$
Then,
\begin{align*}
|I_2(t,x)| &\le \int_{-\infty}^{t-1} |t-s|^{-\alpha-1} |D^2 v(s,x)| \, ds\\
&= \sum_{k=0}^\infty \int_{t-2^{k+1}}^{t-2^k} |t-s|^{-\alpha-1} |D^2 v(s,x)|\,ds\\
&\le \sum_{k=0}^\infty \int_{t-2^{k+1}}^{t-2^k } 2^{-k(\alpha+1)} |D^2 v(s,x)|\,ds.
\end{align*}
From this we have
\begin{equation*}
\|I_2\|_{L_p\left((t_0-1,t_0) \times B_1\right)}\le \sum_{k=0}^\infty 2^{-k(\alpha+1)} \left\| \int_{t-2^{k+1}}^{t-2^k} |D^2 v(s,x)| \, ds \right\|_{L_p\left((t_0-1,t_0) \times B_1\right)}.
\end{equation*}
Since $t_0 - 1 < t < t_0$,
$$
\int_{t-2^{k+1}}^{t-2^k} |D^2 v(s,x)|\, ds \leq \int_{t_0-(2^{k+1}+1)}^{t_0-2^k} |D^2 v(s,x)|\, ds.
$$
Hence, by the Minkowski inequality,
\begin{align*}
&\left\| \int_{t-2^{k+1}}^{t-2^k } |D^2 v(s,x)| \, ds \right\|_{L_p\left(Q_1(t_0,0)\right)}\\
&\le \int_{t_0 - (2^{k+1}+1)}^{t_0 - 2^k} \left( \int_{B_1} |D^2 v(s,x)|^p \, dx \right)^{1/p} \, ds\\
&\le 2^{k+2}\left( \dashint_{\!t_0 - (2^{k+1}+1)}^{\,\,\,t_0} \dashint_{B_1} |D^2 v(s,x)|^p \, dx \, ds \right)^{1/p}.
\end{align*}
It then follows that
\begin{align*}
&\|I_2\|_{L_p\left(Q_1(t_0,0)\right)}\\
&\le \sum_{k=0}^\infty 2^{-k\alpha+2}\left( \dashint_{\!t_0 - (2^{k+1}+1)}^{\,\,\,t_0} \dashint_{B_1} |D^2 v(s,x)|^p \, dx \, ds \right)^{1/p}\\
&\le \sum_{k=0}^\infty 2^{-k\alpha+2} \left( \dashint_{\!t_0 - (2^{k+1}+1)}^{\,\,\,t_0} \dashint_{B_1} |D^2 u(s,x)|^p \, dx \, ds \right)^{1/p}\\
&\quad + \sum_{k=0}^\infty 2^{-k\alpha+2}\left( \dashint_{\!t_0 - (2^{k+1}+1)}^{\,\,\,t_0} \dashint_{B_1} |D^2 w(s,x)|^p \, dx \, ds \right)^{1/p},
\end{align*}
where
$$
\sum_{k=0}^\infty 2^{-k\alpha+2}\left( \dashint_{\!t_0 - (2^{k+1}+1)}^{\,\,\,t_0} \dashint_{B_1} |D^2 w(s,x)|^p \, dx \, ds \right)^{1/p} \leq N(\alpha) \left( |D^2 w|^p \right)^{1/p}_{Q_1(t_0,0)}.
$$
Combining the above inequalities, \eqref{eq0715_01}, and \eqref{eq0715_02}, we get
\begin{align*}
&\|D^2 v\|_{L_{p_1}\left(Q_{1/2}(t_0,0)\right)} \leq N \left( |D^2 w|^p \right)_{Q_1(t_0,0)}^{1/p}\\
&\quad + N \sum_{k=0}^\infty 2^{-k\alpha} \left( \dashint_{\!t_0 - (2^{k+1}+1)}^{\,\,\,t_0} \dashint_{B_1} |D^2u(s,y)|^p \, dy \, ds \right)^{1/p}.
\end{align*}
We then use \eqref{eq1214_01} with $r=1$ to obtain \eqref{eq0411_01} with $R = 1$.
The proposition is proved.
\end{proof}
Let $\gamma\in (0,1)$, and let $p\in (1,\infty)$ and $p_1=p_1(d,\alpha,p)$ be from the above proposition.
Denote
\begin{equation}
\label{eq0406_04}
\mathcal{A}(s) = \left\{ (t,x) \in (-\infty,T) \times \mathbb{R}^d: |D^2 u(t,x)| > s \right\}
\end{equation}
and
\begin{multline}
\label{eq0406_05}
\mathcal{B}(s) = \big\{ (t,x) \in (-\infty,T) \times \mathbb{R}^d:
\\
\gamma^{-1/p}\left( \mathcal{M} |f|^p (t,x) \right)^{1/p} + \gamma^{-1/p_1}\left( \mathcal{S}\mathcal{M} |D^2 u|^p(t,x)\right)^{1/p} > s \big\},
\end{multline}
where, to well define $\mathcal{M}$ and $\mathcal{S}\mathcal{M}$ (recall the definitions in \eqref{eq0406_03b} and \eqref{eq0406_03}), we extend a given function to be zero for $t \leq S$ if the function is defined on $(S,T) \times \mathbb{R}^d$.
Set
\begin{equation}
\label{eq0606_01}
\mathcal{C}_R(t,x) = (t-R^{2/\alpha},t+R^{2/\alpha}) \times B_R(x),\quad
\hat \mathcal{C}_R(t,x)=\mathcal{C}_R(t,x)\cap \{t\le T\}.
\end{equation}
\begin{lemma}
\label{lem0409_1}
Let $p\in (1,\infty)$, $\alpha \in (0,1)$, $T \in (0,\infty)$, $a^{ij} = a^{ij}(t)$, $R \in (0,\infty)$, and $\gamma \in (0,1)$.
Assume that Theorem \ref{thm0412_1} holds with this $p$ and $u \in \mathbb{H}_{p,0}^{\alpha,2}(\mathbb{R}^d_T)$ satisfies
$$
-\partial_t^\alpha u + a^{ij} D_{ij} u
= f
$$
in $(0,T) \times \mathbb{R}^d$.
Then, there exists a constant $\kappa = \kappa(d,\delta,\alpha,p) > 1$ such that the following hold:
for $(t_0,x_0) \in (-\infty,T] \times \mathbb{R}^d$ and $s>0$, if
\begin{equation}
\label{eq0406_02}
|\mathcal{C}_{R/4}(t_0,x_0) \cap \mathcal{A}(\kappa s)| \geq \gamma |\mathcal{C}_{R/4}(t_0,x_0)|,
\end{equation}
then we have
$$
\hat\mathcal{C}_{R/4}(t_0,x_0) \subset \mathcal{B}(s).
$$
\end{lemma}
\begin{proof}
By dividing the equation by $s$, we may assume that $s = 1$.
We only consider $(t_0,x_0) \in (-\infty,T] \times \mathbb{R}^d$ such that $t_0 + (R/4)^{2/\alpha} \geq 0$, because otherwise,
$$
\mathcal{C}_{R/4}(t_0,x_0) \cap \mathcal{A}(\kappa) \subset \left\{ (t,x) \in (-\infty,0] \times \mathbb{R}^d: |D^2 u(t,x)| > \kappa \right\} = \emptyset
$$
as $u(t,x)$ is extended to be zero for $t < 0$.
Suppose that there is a point $(s,y) \in \hat\mathcal{C}_{R/4}(t_0,x_0)$
such that
\begin{equation}
\label{eq0406_01}
\gamma^{-1/p}\left( \mathcal{M} |f|^p (s,y) \right)^{1/p} + \gamma^{-1/p_1} \left( \mathcal{S}\mathcal{M} |D^2 u|^p(s,y)\right)^{1/p} \leq 1.
\end{equation}
Set
$$
t_1 := \min \{ t_0 + (R/4)^{2/\alpha}, T\} \quad \text{and} \quad x_1 := x_0.
$$
Then $(t_1,x_1) \in [0,T] \times \mathbb{R}^d$ and by Proposition \ref{prop0406_1} there exist $p_1 = p_1(d,\alpha,p) \in (p,\infty]$ and $w \in \mathbb{H}_{p,0}^{\alpha,2}\left((t_1-R^{2/\alpha}, t_1) \times \mathbb{R}^d\right)$, $v \in \mathbb{H}_{p,0}^{\alpha,2}\left((S, t_1) \times \mathbb{R}^d\right)$, where $S=\min\{0,t_1-R^{2/\alpha}\}$, such that $u = w + v$ in $Q_R(t_1,x_1)$,
\begin{equation}
\label{eq0409_01}
\left(|D^2 w|^p\right)^{1/p}_{Q_R(t_1,x_1)} \leq N \left( |f|^p \right)_{Q_{2R}(t_1,x_1)}^{1/p},
\end{equation}
and
\begin{multline}
\label{eq0409_02}
\left( |D^2v|^{p_1} \right)_{Q_{R/2}(t_1,x_1)}^{1/p_1} \le N \left( |f|^p \right)_{Q_{2R}(t_1,x_1)}^{1/p}
\\
+ N \sum_{k=0}^\infty 2^{-k\alpha} \left(\dashint_{\! t_1 - (2^{k+1}+1)R^{2/\alpha}}^{\,\,\,t_1} \dashint_{B_R(x_1)} |D^2u(\ell,z)|^p \, dz \, d \ell \right)^{1/p},
\end{multline}
where $N=N(d,\delta, \alpha,p)$.
Since $t_0 \leq T$, we have
$$
(s,y) \in \hat\mathcal{C}_{R/4}(t_0,x_0)
\subset Q_{R/2}(t_1,x_1) \subset Q_{2R}(t_1,x_1),
$$
$$
(s,y) \in \hat\mathcal{C}_{R/4}(t_0,x_0) \subset (t_1- (2^{k+1}+1)R^{2/\alpha}, t_1) \times B_R(x_1)
$$
for all $k = 0,1,\ldots$.
From these set inclusions, in particular, we observe that
$$
\dashint_{\! t_1 - (2^{k+1}+1)R^{2/\alpha}}^{\,\,\,t_1} \dashint_{B_R(x_1)} |D^2u(\ell,z)|^p \, dz \, d \ell \leq \mathcal{S}\mathcal{M} |D^2u|^p(s,y)
$$
for all $k=0,1,2,\ldots$.
Thus the inequality \eqref{eq0406_01} along with \eqref{eq0409_01} and \eqref{eq0409_02} implies that
$$
\left( |D^2v|^{p_1} \right)_{Q_{R/2}(t_1,x_1)}^{1/p_1} \leq N \gamma^{1/p} + N \gamma^{1/p_1} \leq N_0 \gamma^{1/p_1},
$$
$$
\left(|D^2 w|^p\right)^{1/p}_{Q_R(t_1,x_1)} \leq N_1 \gamma^{1/p},
$$
where $N_0$ and $N_1$ depend only on $d$, $\delta$, $\alpha$, and $p$.
Note that, for a sufficiently large $K_1$,
\begin{align*}
&|\mathcal{C}_{R/4}(t_0,x_0) \cap \mathcal{A}(\kappa)|
= |\{(t,x) \in \mathcal{C}_{R/4}(t_0,x_0), t \in (-\infty,T): |D^2u(t,x)| > \kappa\}|\\
&\leq \left|\{ (t,x) \in Q_{R/2}(t_1,x_1): |D^2 u(t,x)| > \kappa\}\right|\\
&\leq \left|\{(t,x) \in Q_{R/2}(t_1,x_1): |D^2 w(t,x)| > \kappa - K_1 \}\right|\\
&\quad + \left|\{(t,x) \in Q_{R/2}(t_1,x_1): |D^2 v(t,x)| > K_1 \}\right|\\
&\leq (\kappa-K_1)^{-p} \int_{Q_{R/2}(t_1,x_1)} |D^2 w|^p \, dx \, dt + K_1^{-p_1} \int_{Q_{R/2}(t_1,x_1)} |D^2v|^{p_1} \, dx \, dt\\
&\leq \frac{N_1^p \gamma |Q_R|}{(\kappa - K_1)^p} + \frac{N_0^{p_1}\gamma|Q_{R/2}|}{K_1^{p_1}}I_{p_1 \neq \infty}\\
&\leq N(d,\alpha) |\mathcal{C}_{R/4}| \left(\frac{N_1^p \gamma }{(\kappa - K_1)^p} + \gamma \left(\frac{N_0}{K_1}\right)^p I_{p_1 \neq \infty} \right) < \gamma |\mathcal{C}_{R/4}(t_0,x_0)|,
\end{align*}
provided that we choose a sufficiently large $K_1(\ge N_0)$ depending only on $d$, $\delta$, $\alpha$, and $p$, so that
$$
N(d,\alpha) (N_0/K_1)^p < 1/2,
$$
and then choose a $\kappa$ depending only on $d$, $\delta$, $\alpha$, and $p$, so that
$$
N(d,\alpha) N_1^p/(\kappa-K_1)^p < 1/2.
$$
Considering \eqref{eq0406_02}, we get a contradiction.
The lemma is proved.
\end{proof}
\section{$L_p$-estimates}
\label{sec6}
Now we are ready to give the proof of Theorem \ref{thm0412_1}.
\begin{proof}[Proof of Theorem \ref{thm0412_1}]
We first consider the case when $p\in [2,\infty)$ by using an iterative argument to successively increase the exponent $p$. When $p=2$, the theorem follows from Proposition \ref{prop0720_1}.
Now suppose that the theorem is proved for some $p_0\in [2,\infty)$. Let $p_1=p_1(d,\alpha,p_0)$ be from Proposition \ref{prop0406_1}, and $p\in (p_0,p_1)$.
As in the proof of Proposition \ref{prop0720_1} we assume $u \in C_0^\infty\left([0,T] \times \mathbb{R}^d\right)$ with $u(0,x) = 0$ and prove the a priori estimate \eqref{eq0411_04}.
Note that
\begin{equation}
\label{eq8.47}
\|D^2u\|_{L_p(\mathbb{R}^d_T)}^p = p \int_0^\infty |\mathcal{A}(s)| s^{p-1} \, ds = p
\kappa^p \int_0^\infty |\mathcal{A}(\kappa s)| s^{p-1} \, ds.
\end{equation}
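Here the first equality in \eqref{eq8.47} is the layer-cake formula $\|g\|_{L_p}^p = p \int_0^\infty \left| \{ |g| > s \} \right| s^{p-1} \, ds$ applied to $g = D^2 u$ (extended to be zero outside $(0,T) \times \mathbb{R}^d$), and the second follows from the substitution $s \mapsto \kappa s$.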
By Lemmas \ref{lem0409_1} and \ref{lem0409_2} it follows that
\begin{equation}
\label{eq8.46}
|\mathcal{A}(\kappa s)| \leq N(d,\alpha) \gamma|\mathcal{B}(s)|
\end{equation}
for all $s \in (0,\infty)$.
Hence, by the Hardy-Littlewood maximal function theorem,
\begin{align*}
&\|D^2u\|_{L_p(\mathbb{R}^d_T)}^p
\leq N p \kappa^p \gamma \int_0^\infty |\mathcal{B}(s)| s^{p-1} \, ds\\
&\le N\gamma \int_0^\infty\left|\left\{ (t,x) \in (-\infty,T) \times \mathbb{R}^d:\gamma^{-\frac 1{ p_1}}\left( \mathcal{S}\mathcal{M} |D^2 u|^{p_0}(t,x)\right)^{\frac 1 {p_0}} > s/2 \right\}\right| s^{p-1} \, ds\\
&\quad +
N\gamma \int_0^\infty\left|\left\{ (t,x) \in (-\infty,T) \times \mathbb{R}^d:\gamma^{-\frac 1 {p_0}}\left( \mathcal{M} |f|^{p_0} (t,x) \right)^{\frac 1 {p_0}} > s/2 \right\}\right| s^{p-1} \, ds\\
&\leq N \gamma^{1-p/p_1} \|D^2u\|^p_{L_p(\mathbb{R}^d_T)} + N \gamma^{1-p/p_0} \|f\|^p_{L_p(\mathbb{R}^d_T)},
\end{align*}
where $N = N(d,\delta,\alpha,p)$.
Now choose $\gamma \in (0,1)$ so that
$$
N \gamma^{1-p/p_1} < 1/2,
$$
which is possible because $p\in (p_0,p_1)$.
Then we have
$$
\|D^2u\|_{L_p(\mathbb{R}^d_T)} \leq N \|f\|_{L_p(\mathbb{R}^d_T)}.
$$
From this and the equation, we arrive at \eqref{eq0411_04} for $p \in (p_0,p_1)$.
We repeat this procedure.
Recall \eqref{eq0411_05}, which shows that each time the increment from $p_0$ to $p_1$ can be made bigger than a positive number depending only on $d$ and $\alpha$.
Thus in finitely many steps, we get a $p_0$ which is larger than $d/2 + 1/\alpha$, so that $p_1=p_1(d,\alpha,p_0)=\infty$. Therefore, the theorem is proved for any $p\in [2,\infty)$.
For $p \in (1,2)$, we use a duality argument.
We only prove the a priori estimate \eqref{eq0411_04}.
Without loss of generality, assume that $u \in C_0^\infty\left([0,T] \times \mathbb{R}^d\right)$ with $u(0,x) = 0$ satisfies
$$
-\partial_t^\alpha u + a^{ij} D_{ij} u = f
$$
in $(0,T) \times \mathbb{R}^d$.
Let $\phi \in L_q(\mathbb{R}^d_T)$, where $1/p+1/q=1$.
Then
$$
\phi(-t,x) \in L_q\left((-T,0) \times \mathbb{R}^d\right).
$$
Find $w \in \mathbb{H}_{q,0}^{\alpha,2}\left((-T,0) \times \mathbb{R}^d\right)$ satisfying
$$
- \partial_t^\alpha w + a^{ij}(-t) D_{ij} w = \phi(-t,x)
$$
in $(-T,0) \times \mathbb{R}^d$
with the estimate
$$
\|D^2w\|_{L_q\left((-T,0) \times \mathbb{R}^d\right)} \leq N \|\phi(-t,x)\|_{L_q\left((-T,0) \times \mathbb{R}^d\right)} = N \|\phi\|_{L_q(\mathbb{R}^d_T)},
$$
where
$$
\partial_t^\alpha w = \partial_t I_{-T}^{1-\alpha} w.
$$
Considering $w_k \in C_0^\infty\left([-T,0] \times \mathbb{R}^d\right)$ with $w_k(-T,x) = 0$ such that $w_k \to w$ in $\mathbb{H}_{q,0}^{\alpha,2}\left((-T,0) \times \mathbb{R}^d \right)$, we observe that
\begin{align*}
&\int_0^T \int_{\mathbb{R}^d} \phi D^2 u \, dx \, dt = \int_{-T}^0 \int_{\mathbb{R}^d} \phi(-t,x) D^2 u(-t,x) \, dx \, dt\\
&= \int_{-T}^0 \int_{\mathbb{R}^d} \left(-\partial_t^\alpha w + a^{ij}(-t) D_{ij} w \right) D^2 u(-t,x) \, dx \, dt\\
&= \int_0^T \int_{\mathbb{R}^d} \left( -\partial_t^\alpha u(t,x) + a^{ij}(t)D_{ij} u(t,x) \right) D^2 w(-t,x) \, dx \, dt\\
&= \int_0^T \int_{\mathbb{R}^d} f(t,x) D^2 w(-t,x) \, dx \, dt \leq N\|f\|_{L_p(\mathbb{R}^d_T)} \|\phi\|_{L_q(\mathbb{R}^d_T)}.
\end{align*}
It then follows that
$$
\|D^2u\|_{L_p(\mathbb{R}^d_T)} \leq N \|f\|_{L_p(\mathbb{R}^d_T)},
$$
from which and the equation, we finally obtain \eqref{eq0411_04}.
\end{proof}
To prove Theorem \ref{main_thm}, we extend Proposition \ref{prop0406_1} to the case when $a^{ij}=a^{ij}(t,x)$ satisfying Assumption \ref{assump2.2}.
\begin{proposition}
\label{prop0515_1}
Let $p\in (1,\infty)$, $\alpha,\gamma_0 \in (0,1)$, $T \in (0,\infty)$, $\mu\in (1,\infty)$, $\nu=\mu/(\mu-1)$, and $a^{ij} = a^{ij}(t,x)$ satisfying Assumption \ref{assump2.2} ($\gamma_0$).
Assume that $u \in \mathbb{H}_{p,0}^{\alpha,2}(\mathbb{R}^d_T)$ vanishes for $x\notin B_{R_0}(x_1)$ for some $x_1\in \mathbb{R}^d$, and satisfies
\eqref{eq0411_03} in $(0,T) \times \mathbb{R}^d$.
Then there exists
$$
p_1 = p_1(d, \alpha,p)\in (p,\infty]
$$
satisfying \eqref{eq0411_05} and the following.
For $(t_0,x_0) \in [0,T] \times \mathbb{R}^d$ and $R \in (0,\infty)$,
there exist
$$
w \in \mathbb{H}_{p,0}^{\alpha,2}((t_0-R^{2/\alpha}, t_0)\times \mathbb{R}^d), \quad v \in \mathbb{H}_{p,0}^{\alpha,2}((S,t_0) \times \mathbb{R}^d),
$$
where $S = \min\{0, t_0 - R^{2/\alpha}\}$,
such that $u = w + v$ in $Q_R(t_0,x_0)$,
$$
\left( |D^2w|^p \right)_{Q_R(t_0,x_0)}^{1/p} \le N \left( |f|^p \right)_{Q_{2R}(t_0,x_0)}^{1/p}+N\gamma_0^{1/(p\nu)}\left( |D^2 u|^{p\mu} \right)_{Q_{2R}(t_0,x_0)}^{1/(p\mu)},
$$
and
\begin{multline*}
\left( |D^2v|^{p_1} \right)_{Q_{R/2}(t_0,x_0)}^{1/p_1} \leq N \left( |f|^p \right)_{Q_{2R}(t_0,x_0)}^{1/p}+N\gamma_0^{1/(p\nu)}\left( |D^2 u|^{p\mu} \right)_{Q_{2R}(t_0,x_0)}^{1/(p\mu)}
\\
+ N \sum_{k=0}^\infty 2^{-k\alpha + 2} \left( \dashint_{\!t_0 - (2^{k+1}+1)R^{2/\alpha}}^{\,\,\,t_0} \dashint_{B_R(x_0)} |D^2u(s,y)|^p \, dy \, ds \right)^{1/p},
\end{multline*}
where $N=N(d,\delta, \alpha,p,\mu)$.
\end{proposition}
\begin{proof}
Denote
$$
Q:=\left\{
\begin{array}{ll}
Q_{2R}(t_0,x_0) & \hbox{when $2R\le R_0$;} \\
(t_0 - (2R_0)^{2/\alpha}, t_0)\times B_{R_0}(x_1) & \hbox{otherwise.}
\end{array}
\right.
$$
Note that in both cases $|Q|\le |Q_{2R}(t_0,x_0)|$.
Thus,
by Assumption \ref{assump2.2} and Remark \ref{rem2.3}, we can find $\bar a^{ij}=\bar a^{ij}(t)$ such that
\begin{equation}
\label{eq8.09}
\sup_{i,j}\dashint_{Q_{2R}(t_0,x_0)}|a^{ij}-\bar a^{ij}(t)|1_Q\,dx\,dt
\le \sup_{i,j}\dashint_{Q}|a^{ij}-\bar a^{ij}(t)|\,dx\,dt\le 2\gamma_0,
\end{equation}
where $1_Q$ is the indicator function of $Q$.
We then rewrite \eqref{eq0411_03} into
$$
-\partial_t^\alpha u + \bar a^{ij}(t)D_{ij} u = \tilde f
:=f+(\bar a^{ij}(t)-a^{ij})D_{ij} u.
$$
Now that Theorem \ref{thm0412_1} holds for this equation with the same $p$, it follows from Proposition \ref{prop0406_1} that there exist
$$
w, \, v \in \mathbb{H}_p^{\alpha,2}((t_0-R^{2/\alpha}, t_0)\times \mathbb{R}^d)
$$
such that $u = w + v$ in $Q_R(t_0,x_0)$, and \eqref{eq8.13}--\eqref{eq0411_01} hold with $\tilde f$ in place of $f$. To conclude the proof, it remains to notice that by H\"older's inequality and \eqref{eq8.09},
\begin{align*}
&\left( |\tilde f|^p \right)_{Q_{2R}(t_0,x_0)}^{1/p}
\le \left( |f|^p \right)_{Q_{2R}(t_0,x_0)}^{1/p}+
\left( |(\bar a^{ij}(t)-a^{ij})D_{ij} u|^p \right)_{Q_{2R}(t_0,x_0)}^{1/p}\\
&\le \left( |f|^p \right)_{Q_{2R}(t_0,x_0)}^{1/p}+
N\left( |(\bar a-a)1_{Q}|^{p\nu} \right)_{Q_{2R}(t_0,x_0)}^{1/(p\nu)}
\left( |D^2 u|^{p\mu} \right)_{Q_{2R}(t_0,x_0)}^{1/(p\mu)}\\
&\le N \left( |f|^p \right)_{Q_{2R}(t_0,x_0)}^{1/p}+N\gamma_0^{1/(p\nu)}\left( |D^2 u|^{p\mu} \right)_{Q_{2R}(t_0,x_0)}^{1/(p\mu)}.
\end{align*}
\end{proof}
Now we define $\mathcal{A}(s)$ as in \eqref{eq0406_04}, but instead of \eqref{eq0406_05} we define
\begin{multline*}
\mathcal{B}(s) = \big\{ (t,x) \in (-\infty,T) \times \mathbb{R}^d:
\gamma^{-1/p}\left( \mathcal{M} |f|^p (t,x) \right)^{1/p}
\\
+\gamma^{-1/p}\gamma_0^{1/(p\nu)}\left( \mathcal{M} |D^2 u|^{p\mu} (t,x) \right)^{1/(p\mu)}
+ \gamma^{-1/p_1}\left( \mathcal{S}\mathcal{M} |D^2 u|^p(t,x)\right)^{1/p} > s \big\}.
\end{multline*}
By following the proof of Lemma \ref{lem0409_1} with minor modifications, from Proposition \ref{prop0515_1}, we get the following lemma.
\begin{lemma}
\label{lem6.3}
Let $p\in (1,\infty)$, $\alpha,\gamma_0,\gamma \in (0,1)$, $T \in (0,\infty)$, $R \in (0,\infty)$, $\mu\in (1,\infty)$, $\nu=\mu/(\mu-1)$, and $a^{ij} = a^{ij}(t,x)$ satisfying Assumption \ref{assump2.2} ($\gamma_0$).
Assume that $u \in \mathbb{H}_{p,0}^{\alpha,2}(\mathbb{R}^d_T)$ vanishes for $x\notin B_{R_0}(x_1)$ for some $x_1\in \mathbb{R}^d$, and satisfies \eqref{eq0411_03} in $(0,T) \times \mathbb{R}^d$.
Then, there exists a constant $\kappa = \kappa(d,\delta,\alpha,p,\mu) > 1$ such that the following hold:
for $(t_0,x_0) \in (-\infty,T] \times \mathbb{R}^d$ and $s>0$, if
\begin{equation*}
|\mathcal{C}_{R/4}(t_0,x_0) \cap \mathcal{A}(\kappa s)| \geq \gamma |\mathcal{C}_{R/4}(t_0,x_0)|,
\end{equation*}
then we have
$$
\hat\mathcal{C}_{R/4}(t_0,x_0) \subset \mathcal{B}(s).
$$
\end{lemma}
Finally, we give the proof of Theorem \ref{main_thm}.
\begin{proof}[Proof of Theorem \ref{main_thm}]
As before we may assume that $u \in C_0^\infty\left([0,T] \times \mathbb{R}^d\right)$ with $u(0,x) = 0$ and prove the a priori estimate \eqref{eq0411_04c}. We divide the proof into three steps.
{\em Step 1.} We assume that $u$ vanishes for $x\notin B_{R_0}(x_1)$ for some $x_1\in \mathbb{R}^d$, and $b\equiv c\equiv 0$. We take $p_0\in (1,p)$ and $\mu\in (1,\infty)$ depending only on $p$ such that $p_0<p_0\mu<p<p_1$, where $p_1=p_1(d,\alpha,p_0)$ is taken from Proposition \ref{prop0515_1}. By Lemmas \ref{lem6.3} and \ref{lem0409_2}, we have \eqref{eq8.46}, which together with \eqref{eq8.47} and the Hardy-Littlewood maximal function theorem implies that
{\small \begin{align*}
&\|D^2u\|_{L_p(\mathbb{R}^d_T)}^p
\leq N p \kappa^p \gamma \int_0^\infty |\mathcal{B}(s)| s^{p-1} \, ds\\
&\le N\gamma \int_0^\infty\left|\left\{ (t,x) \in (-\infty,T) \times \mathbb{R}^d:\gamma^{-\frac 1 {p_1}}\left( \mathcal{S}\mathcal{M} |D^2 u|^{p_0}(t,x)\right)^{\frac 1 {p_0}} > s/3 \right\}\right| s^{p-1} \, ds\\
&\quad+
N\gamma \int_0^\infty\left|\left\{ (t,x) \in (-\infty,T) \times \mathbb{R}^d:\gamma^{-\frac 1 {p_0}}\left( \mathcal{M} |f|^{p_0} (t,x) \right)^{\frac 1 {p_0}} > s/3 \right\}\right| s^{p-1} \, ds\\
&\quad+
N\gamma \int_0^\infty\left|\left\{ (t,x) \in (-\infty,T) \times \mathbb{R}^d:\gamma^{-\frac 1 {p_0}}\gamma_0^{\frac 1 {p_0\nu}}
\left( \mathcal{M} |D^2 u|^{p_0\mu} (t,x) \right)^{\frac 1 {p_0\mu}} > s/3 \right\}\right| s^{p-1} \, ds\\
&\leq N (\gamma^{1-p/p_1}+\gamma^{1-p/p_0}\gamma_0^{p/(p_0\nu)}) \|D^2u\|^p_{L_p(\mathbb{R}^d_T)} + N \gamma^{1-p/p_0} \|f\|^p_{L_p(\mathbb{R}^d_T)},
\end{align*}}
where $N = N(d,\delta,\alpha,p)$.
Now choose $\gamma \in (0,1)$ sufficiently small and then $\gamma_0$ sufficiently small, depending only on $d$, $\delta$, $\alpha$, and $p$, so that
$$
N (\gamma^{1-p/p_1}+\gamma^{1-p/p_0}\gamma_0^{p/(p_0\nu)}) < 1/2.
$$
Then we have
$$
\|D^2u\|_{L_p(\mathbb{R}^d_T)} \leq N(d,\delta,\alpha,p) \|f\|_{L_p(\mathbb{R}^d_T)}.
$$
From this and the equation, we arrive at \eqref{eq0411_04}.
{\em Step 2.} In this step, we show that under the assumptions of the theorem with $\gamma_0$ being the constant from the previous step, we have
\begin{equation}
\label{eq8.59}
\|\partial_t^\alpha u\|_{L_p(\mathbb{R}^d_T)} + \|D^2 u\|_{L_p(\mathbb{R}^d_T)} \leq N \|f\|_{L_p(\mathbb{R}^d_T)}+N_1\|u\|_{L_p(\mathbb{R}^d_T)},
\end{equation}
where $N=N(d,\delta,\alpha,p)$ and $N_1=N_1(d,\delta,\alpha,p,R_0)$. By moving the lower-order terms to the right-hand side of the equation, and using interpolation inequalities, without loss of generality, we may assume that $b\equiv c\equiv 0$.
Now \eqref{eq8.59} follows from a standard partition of unity argument with respect to $x$ and interpolation inequalities.
{\em Step 3.} In this step, we show how to get rid of the second term on the right-hand side of \eqref{eq8.59} and conclude the proof of \eqref{eq0411_04c}. By \eqref{eq8.59} and Lemma \ref{lem1110_1}, we can find $q\in (p,\infty)$, depending on $\alpha$ and $p$, such that for any $T'\in (0,T]$,
\begin{align*}
\|u\|_{L_p\left(\mathbb{R}^d; L_q(0,T')\right)}
&\le N(\alpha,p,T)\|\partial_t^\alpha u\|_{L_p((0,T');L_p(\mathbb{R}^d))}\\
&\le N\|f\|_{L_p(\mathbb{R}^d_{T'})}+N_1\|u\|_{L_p(\mathbb{R}^d_{T'})},
\end{align*}
where $N=N(d,\delta,\alpha,p,T)$ and $N_1=N_1(d,\delta,\alpha,p,T,R_0)$. Next we take a sufficiently large integer $m=m(d,\delta,\alpha,p,T,R_0)$ such that $N_1(T/m)^{1/p-1/q}\le 1/2$. Then for any $j=0,1,\ldots,m-1$, by H\"older's inequality and the above inequality with $T'=(j+1)T/m$, we have
\begin{align*}
&\|u\|_{L_p((jT/m,(j+1)T/m);L_p(\mathbb{R}^d))}
\le (T/m)^{1/p-1/q}\|u\|_{L_p\left(\mathbb{R}^d;L_q(jT/m,(j+1)T/m)\right)}\\
&\le N\|f\|_{L_p(\mathbb{R}^d_T)}+\frac 1 2\|u\|_{L_p((0,(j+1)T/m);L_p(\mathbb{R}^d))}.
\end{align*}
This implies that
$$
\|u\|_{L_p((jT/m,(j+1)T/m);L_p(\mathbb{R}^d))}
\le N\|f\|_{L_p(\mathbb{R}^d_T)}+\|u\|_{L_p((0,jT/m);L_p(\mathbb{R}^d))}.
$$
By an induction on $j$, we obtain
$$
\|u\|_{L_p(\mathbb{R}^d_T)}\le N\|f\|_{L_p(\mathbb{R}^d_T)},
$$
which together with \eqref{eq8.59} yields \eqref{eq0411_04c}. The theorem is proved.
\end{proof}
\section*{Acknowledgment}
The authors would like to thank Nicolai V. Krylov for telling us a simple proof of \eqref{eq0904_01}, and the referee for helpful comments.
The authors also thank Kyeong-hun Kim for bringing our attention to the problems discussed in this paper.
\section{Introduction}
In this paper we discuss the limiting theory for a novel, unifying class of
non-parametric measures of the variation of financial prices. The theory
covers commonly used estimators of variation such as realised volatility,
but it also encompasses more recently suggested quantities like realised
power variation and realised bipower variation. We considerably strengthen
existing results on the latter two quantities, deepening our understanding
and unifying their treatment. We will outline the proofs of these theorems,
referring to a companion probability theory paper \cite%
{BarndorffNielsenGraversenJacodPodolskyShephard(04shiryaev)} for the very
technical, detailed formal proofs of the general results. Our emphasis
is on exposition, explaining where the results come from and how they sit
within the econometrics literature.
Our theoretical development is motivated by the advent of complete records
of quotes or transaction prices for many financial assets. Although market
microstructure effects (e.g. discreteness of prices, bid/ask bounce,
irregular trading, etc.) mean that there is a mismatch between asset pricing
theory based on semimartingales and the data at very fine time intervals,
the availability of such data does suggest the desirability of establishing
an asymptotic distribution
theory for estimators as we use more and more highly frequent observations.
Papers which directly model the impact of market microstructure noise on
realised variance include \cite{BandiRussell(03)}, \cite{HansenLunde(03)},
\cite{ZhangMyklandAitSahalia(03)}, \cite%
{BarndorffNielsenHansenLundeShephard(04)} and \cite{Zhang(04)}. Related work
in the probability literature on the impact of noise on discretely observed
diffusions can be found in \cite{GloterJacod(01a)} and \cite%
{GloterJacod(01b)}, while \cite{DelattreJacod(97)} report results on the
impact of rounding on sums of functions of discretely observed diffusions.
In this paper we ignore these effects.
Let the $d$-dimensional vector of the log-prices of a set of assets follow
the process
\begin{equation*}
Y=\left( Y^{1},...,Y^{d}\right) ^{\prime }.
\end{equation*}
At time $t\geq 0$ we denote the log-prices as $Y_{t}$. Our aim is to
calculate measures of the variation of the price process (e.g. realised
volatility) over discrete time intervals (e.g. a day or a month). Without
loss of generality we can study the mathematics of this by simply looking at
what happens when we have $n$ high frequency observations on the time
interval $t=0$ to $t=1$ and study what happens to our measures of variation
as $n\rightarrow \infty $ (e.g., for introductions to this, \cite%
{BarndorffNielsenShephard(02realised)}). In this case returns will be
measured over intervals of length $n^{-1}$ as
\begin{equation}
\Delta _{i}^{n}Y=Y_{i/n}-Y_{(i-1)/n},\quad i=1,2,...,n, \label{return}
\end{equation}%
where $n$ is a positive integer.
We will study the behaviour of the realised generalised bipower variation
process
\begin{equation}
\frac{1}{n}\sum_{i=1}^{\left\lfloor nt\right\rfloor }g(\sqrt{n}~\Delta
_{i}^{n}Y)h(\sqrt{n}~\Delta _{i+1}^{n}Y), \label{RGBP}
\end{equation}%
as $n$ becomes large and where $g$ and $h$ are two given, matrix functions
of dimensions $d_{1}\times d_{2}$ and $d_{2}\times d_{3}$ respectively,
whose elements have at most polynomial growth. Here $\left\lfloor
x\right\rfloor $ denotes the largest integer less than or equal to $x$.
Although (\ref{RGBP}) looks initially rather odd, in fact most of the
non-parametric volatility measures used in financial econometrics fall
within this class (a measure not included in this setup is the range
statistic studied in, for example, \cite{Parkinson(80)}). Here we give an
extensive list of examples and link them to the existing literature. More
detailed discussion of the literature on the properties of these special
cases will be given later.
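Before turning to the examples, it may help to see (\ref{RGBP}) evaluated on simulated data. The following minimal sketch (the constant spot volatility, the powers, and the seed are illustrative choices, not part of the theory) computes the bipower case $g(y)=|y|^{r}$, $h(y)=|y|^{s}$ at $t=1$ from scaled Brownian increments and compares it with the probability limit $\mu _{r}\mu _{s}\sigma ^{r+s}$, where $\mu _{r}=\mathrm{E}(|u|^{r})$ for $u\sim N(0,1)$, derived in Section 3:
\begin{verbatim}
import numpy as np
from math import gamma, sqrt, pi

def mu(r):
    # E|u|^r for u ~ N(0,1)
    return 2.0 ** (r / 2.0) * gamma((r + 1.0) / 2.0) / sqrt(pi)

rng = np.random.default_rng(0)
n, sigma, r, s = 100_000, 0.8, 1.0, 1.0
dY = sigma * rng.normal(0.0, sqrt(1.0 / n), size=n)  # returns on [0, 1]

z = np.abs(sqrt(n) * dY)                   # |sqrt(n) Delta_i Y|
bipower = np.sum(z[:-1] ** r * z[1:] ** s) / n
print(bipower, mu(r) * mu(s) * sigma ** (r + s))  # both close to 0.407
\end{verbatim}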
\begin{example}
\label{Example: 1}\textbf{(a)} Suppose $g(y)=\left( y^{j}\right) ^{2}$ and $%
h(y)=1$, then (\ref{RGBP}) becomes%
\begin{equation*}
\sum_{i=1}^{\left\lfloor nt\right\rfloor }\left( \Delta _{i}^{n}Y^{j}\right)
^{2},
\end{equation*}%
which is called the realised quadratic variation process of $Y^{j}$ in
econometrics, e.g. \cite{Jacod(94)}, \cite{JacodProtter(98)}, \cite%
{BarndorffNielsenShephard(02realised)}, \cite%
{BarndorffNielsenShephard(04multi)} and \cite{MyklandZhang(05)}. The
increments of this quantity, typically calculated over a day or a week, are
often called the realised variances in financial economics and have been
highlighted by \cite{AndersenBollerslevDieboldLabys(01)} and \cite%
{AndersenBollerslevDiebold(05)} in the context of volatility measurement and
forecasting.
\noindent \textbf{(b)} Suppose $g(y)=yy^{\prime }$ and $h(y)=I$, then (\ref%
{RGBP}) becomes, after some simplification,
\begin{equation*}
\sum_{i=1}^{\left\lfloor nt\right\rfloor }\left( \Delta _{i}^{n}Y\right)
\left( \Delta _{i}^{n}Y\right) ^{\prime }.
\end{equation*}%
\newline
This is the realised covariation process. It has been studied by \cite%
{JacodProtter(98)}, \cite{BarndorffNielsenShephard(04multi)} and \cite%
{MyklandZhang(05)}. \cite{AndersenBollerslevDieboldLabys(03model)} study the
increments of this process to produce forecast distributions for vectors of
returns. \
\noindent \textbf{(c)} Suppose $g(y)=\left\vert y^{j}\right\vert ^{r}$ for $%
r>0$ and $h(y)=1$, then (\ref{RGBP}) becomes
\begin{equation*}
n^{-1+r/2}\sum_{i=1}^{\left\lfloor nt\right\rfloor }\left\vert \Delta
_{i}^{n}Y^{j}\right\vert ^{r},
\end{equation*}%
which is called the realised $r$-th order power variation. When $r$ is an
integer it has been studied from a probabilistic viewpoint by \cite%
{Jacod(94)} while \cite{BarndorffNielsenShephard(03bernoulli)} look at the
econometrics of the case where $r>0$. The increments of these types of high
frequency volatility measures have been informally used in the financial
econometrics literature for some time when $r=1$, but until recently without
a strong understanding of their properties. Examples of their use include
\cite{Schwert(90JB)}, \cite{AndersenBollerslev(98)} and \cite%
{AndersenBollerslev(97jef)}, while they have also been informally discussed
by \cite[pp. 349--350]{Shiryaev(99)}\ and \cite{MaheswaranSims(93)}.
Following the work by \cite{BarndorffNielsenShephard(03bernoulli)}, \cite%
{GhyselsSantaClaraValkoanov(04)} and \cite{ForsbergGhysels(04)} have
successfully used realised power variation as an input into volatility
forecasting competitions.
\noindent \textbf{(d)} Suppose $g(y)=\left\vert y^{j}\right\vert ^{r}$ and $%
h(y)=\left\vert y^{j}\right\vert ^{s}$ for $r,s>0$, then (\ref{RGBP})
becomes
\begin{equation*}
n^{-1+(r+s)/2}\sum_{i=1}^{\left\lfloor nt\right\rfloor }\left\vert \Delta
_{i}^{n}Y^{j}\right\vert ^{r}\left\vert \Delta _{i+1}^{n}Y^{j}\right\vert
^{s},
\end{equation*}%
which is called the realised $r,s$-th order bipower variation process. This
measure of variation was introduced by \cite{BarndorffNielsenShephard(04jfe)}%
, while a more formal discussion of its behaviour in the $r=s=1$ case was
developed by \cite{BarndorffNielsenShephard(03test)}. These authors'
interest in this quantity was motivated by its virtue of being resistant to
finite activity jumps so long as $\max (r,s)<2$. Recently \cite%
{BarndorffNielsenShephardWinkel(04)} and \cite{Woerner(04power)} have
studied how these results on jumps extend to infinite activity processes,
while \cite{CorradiDistaso(04)} have used these statistics to test the
specification of parametric volatility models.
\noindent \textbf{(e)} Suppose
\begin{equation*}
g(y)=\left(
\begin{array}{cc}
\left\vert y^{j}\right\vert & 0 \\
0 & \left( y^{j}\right) ^{2}%
\end{array}%
\right) ,\quad h(y)=\left(
\begin{array}{c}
\left\vert y^{j}\right\vert \\
1%
\end{array}%
\right) .
\end{equation*}%
Then (\ref{RGBP}) becomes%
\begin{equation*}
\left(
\begin{array}{c}
\displaystyle\sum_{i=1}^{\left\lfloor nt\right\rfloor }\left\vert \Delta
_{i}^{n}Y^{j}\right\vert \left\vert \Delta _{i+1}^{n}Y^{j}\right\vert \\
\displaystyle\sum_{i=1}^{\left\lfloor nt\right\rfloor }\left( \Delta
_{i}^{n}Y^{j}\right) ^{2}%
\end{array}%
\right) .
\end{equation*}%
\cite{BarndorffNielsenShephard(03test)} used the joint behaviour of the
increments of these two statistics to test for jumps in price processes. \
\cite{HuangTauchen(03)} have empirically studied the finite sample
properties of these types of jump tests. \cite%
{AndersenBollerslevDiebold(03bipower)} \ and \cite{ForsbergGhysels(04)} use
bipower variation as an input into volatility forecasting. \
\end{example}
We will derive the probability limit of (\ref{RGBP}) under a general
Brownian semimartingale, the workhorse process of modern continuous time
asset pricing theory. Only the case of realised quadratic variation, where
the limit is the usual quadratic variation QV (defined for general
semimartingales), has been previously been studied under such wide
conditions. Further, under some stronger but realistic conditions, we will
derive a limiting distribution theory for (\ref{RGBP}), so extending a
number of results previously given in the literature on special cases of
this framework.
The outline of this paper is as follows. Section 2 contains a detailed
listing of the assumptions used in our analysis. Section 3 gives a statement
of a weak law of large numbers for these statistics and the corresponding
central limit theory is presented in Section 4. Extensions of the results to
higher order variations are briefly indicated in Section 5. Section 6
illustrates the theory by discussing how it gives rise to tests for jumps in
the price processes, using bipower and tripower variation. The corresponding
literature which discusses various special cases of these results is also
given in these sections. Section 8 concludes, while there is an Appendix
which provides an outline of the proofs of the results discussed in this
paper. For detailed, quite lengthy and highly technical formal proofs we
refer to our companion probability theory paper \cite%
{BarndorffNielsenGraversenJacodPodolskyShephard(04shiryaev)}.
\section{Notation and models}
We start with $Y$ on some filtered probability space $\left( \Omega ,%
\mathcal{F},\left( \mathcal{F}_{t}\right) _{t\geq 0},P\right) $. In most of
our analysis we will assume that $Y$ follows a $d$-dimensional Brownian
semimartingale (written $Y\in \mathcal{BSM}$). It is given in the following
statement.
\noindent \textbf{Assumption (H): }We have
\begin{equation}
Y_{t}=Y_{0}+\int_{0}^{t}a_{u}\mathrm{d}u+\int_{0}^{t}\sigma _{u-}\mathrm{d}%
W_{u}, \label{H}
\end{equation}%
where $W$ is a $d^{\prime }$-dimensional standard Brownian motion (BM), $a$
is a $d$-dimensional process whose elements are predictable and have locally
bounded sample paths, and the spot covolatility $d,d^{\prime }$-dimensional
matrix $\sigma $ has elements which have c\`{a}dl\`{a}g sample paths.
Throughout we will write
\begin{equation*}
\Sigma _{t}=\sigma _{t}\sigma _{t}^{\prime },
\end{equation*}%
the spot covariance matrix. Typically $\Sigma _{t}$ will be full rank, but
we do not assume that here. We will write $\Sigma _{t}^{jk}$ to denote the $%
j,k$-th element of $\Sigma _{t}$, while we write%
\begin{equation*}
\sigma _{j,t}^{2}=\Sigma _{t}^{jj}.
\end{equation*}
\begin{remark}
Due to the fact that $t\mapsto \sigma _{t}^{jk}$ is c\`{a}dl\`{a}g all
powers of $\sigma _{t}^{jk}$ are locally integrable with respect to the
Lebesgue measure. \ In particular then $\int_{0}^{t}\Sigma _{u}^{jj}\mathrm{d%
}u<\infty $ for all $t$ and $j$.
\end{remark}
\begin{remark}
Both $a$ and $\sigma $ can have, for example, jumps, intraday seasonality
and long-memory.
\end{remark}
\begin{remark}
The stochastic volatility (e.g. \cite{GhyselsHarveyRenault(96)} and \cite%
{Shephard(05)}) component of $Y$,
\begin{equation*}
\int_{0}^{t}\sigma _{u-}\mathrm{d}W_{u},
\end{equation*}%
is always a vector of local martingales each with continuous sample paths,
as $\int_{0}^{t}\Sigma _{u}^{jj}\mathrm{d}u<\infty $ for all $t$ and $j$.
All continuous local martingales with absolutely continuous quadratic
variation can be written in the form of a stochastic volatility process.
This result, which is due to \cite{Doob(53)}, is discussed in, for example,
\cite[p. 170--172]{KaratzasShreve(91)}. Using the Dambis-Dubins-Schwartz
Theorem, we know that the difference between the entire continuous local
martingale class and the SV class are the local martingales which have only
continuous, not absolutely continuous\footnote{%
An example of a continuous local martingale which has no SV representation
is a time-change Brownian motion where the time-change takes the form of the
so-called \textquotedblleft devil's staircase,\textquotedblright\ which is
continuous and non-decreasing but not absolutely continuous (see, for
example, \cite[Section 27]{Munroe(53)}). This relates to the work of, for
example, \cite{CalvetFisher(02)} on multifractals.}, QV. The drift $%
\int_{0}^{t}a_{u}\mathrm{d}u$ has elements which are absolutely continuous.
This assumption looks ad hoc, however if we impose a lack of arbitrage
opportunities and model the local martingale component as a SV process then
this property must hold (\cite[p. 3]{KaratzasShreve(98)} and \cite[p. 583]%
{AndersenBollerslevDieboldLabys(03model)}). Hence (\ref{H}) is a rather
canonical model in the finance theory of continuous sample path processes.
\end{remark}
We are interested in the asymptotic behaviour, for $n\rightarrow \infty $,
of the following volatility measuring process:
\begin{equation}
Y^{n}(g,h)_{t}=\frac{1}{n}\sum_{i=1}^{\left\lfloor nt\right\rfloor }g(\sqrt{n%
}~\Delta _{i}^{n}Y)h(\sqrt{n}~\Delta _{i+1}^{n}Y), \label{XP}
\end{equation}%
where $g$ and $h$ are two given conformable matrix functions and recalling
the definition of $\Delta _{i}^{n}Y$ given in (\ref{return}).
\section{Law of large numbers}
To build a weak law of large numbers for $Y^{n}(g,h)_{t}$ we need to make
the pair $(g,h)$ satisfy the following assumption.
\noindent \textbf{Assumption (K):} All the elements of the generic function $%
f$ on $\mathbf{R}^{d}$ are continuous with at most polynomial growth; the
assumption is imposed on both $g$ and $h$.
This amounts to there being suitable constants $C>0$ and $p\geq 2$ such that
\begin{equation}
x\in \mathbf{R}^{d}\quad \Rightarrow \quad \left\Vert f(x)\right\Vert \leq
C(1+\Vert x\Vert ^{p}). \label{G1}
\end{equation}
We also need the following notation.
\begin{equation*}
\rho _{\sigma }(g)=\mathrm{E}\left\{ g(X)\right\} ,\quad \text{where\quad }%
X|\sigma \sim N(0,\sigma \sigma ^{\prime }),
\end{equation*}%
and
\begin{equation*}
\rho _{\sigma }(gh)=\mathrm{E}\left\{ g(X)h(X)\right\} .
\end{equation*}
\begin{example}
\label{Example: second}\textbf{(a)} Let $g(y)=yy^{\prime }$ and $h(y)=I$,
then $\rho _{\sigma }(g)=\Sigma $ and $\rho _{\sigma }(h)=I$.
\noindent \textbf{(b)} Suppose $g(y)=\left\vert y^{j}\right\vert ^{r}$ then $%
\rho _{\sigma }(g)=\mu _{r}\sigma _{j}^{r}$, where $\sigma _{j}^{2}$ is the $%
j,j$-th element of $\Sigma $, $\mu _{r}=\mathrm{E}(\left\vert u\right\vert
^{r})$ and $u\sim N(0,1)$.
\end{example}
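For later use we also record the standard closed form behind Example \ref{Example: second}(b): for $u\sim N(0,1)$ and $r>0$,
\begin{equation*}
\mu _{r}=\mathrm{E}\left( \left\vert u\right\vert ^{r}\right) =\frac{2^{r/2}\,\Gamma \left( (r+1)/2\right) }{\sqrt{\pi }},
\end{equation*}%
so that, for instance, $\mu _{1}=\sqrt{2/\pi }$ and $\mu _{2}=1$.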
This setup is sufficient for the proof of Theorem 1.2 of \cite%
{BarndorffNielsenGraversenJacodPodolskyShephard(04shiryaev)}, which is
restated here.
\begin{theorem}
\label{TT1} Under (H) and assuming $g$ and $h$ satisfy (K) we have that
\begin{equation}
Y^{n}(g,h)_{t}~\rightarrow ~Y(g,h)_{t}:=\int_{0}^{t}\rho _{\sigma
_{u}}(g)\rho _{\sigma _{u}}(h)\mathrm{d}u, \label{WLLN}
\end{equation}%
where the convergence is in probability, locally uniform in time. \newline
\end{theorem}
The result is quite clean as it requires no additional assumptions on $Y$
and so is very close to dealing with the whole class of financially coherent
continuous sample path processes.
This Theorem covers a number of existing setups which are currently
receiving a great deal of attention as measures of variation in financial
econometrics. Here we briefly discuss some of the work which has studied the
limiting behaviour of these objects.
\begin{example}
\textbf{(Example \ref{Example: 1}(a) continued)}. Then $g(y)=\left(
y^{j}\right) ^{2}$ and $h(y)=1$, so (\ref{WLLN}) becomes%
\begin{equation*}
\sum_{i=1}^{\left\lfloor nt\right\rfloor }\left( \Delta _{i}^{n}Y^{j}\right)
^{2}\rightarrow ~\int_{0}^{t}\sigma _{j,u}^{2}\mathrm{d}u=[Y^{j}]_{t},
\end{equation*}%
the quadratic variation (QV) of $Y^{j}$. This well known result in
probability theory is behind much of the modern work on realised volatility,
which is compactly reviewed in \cite{AndersenBollerslevDiebold(05)}.
\noindent (\textbf{Example \ref{Example: 1}(b) continued}). As $%
g(y)=yy^{\prime }$ and $h(y)=I$, then
\begin{equation*}
\sum_{i=1}^{\left\lfloor nt\right\rfloor }\left( \Delta _{i}^{n}Y\right)
\left( \Delta _{i}^{n}Y\right) ^{\prime }\rightarrow ~\int_{0}^{t}\Sigma _{u}%
\mathrm{d}u=[Y]_{t},
\end{equation*}%
the well known multivariate version of QV.
\noindent \textbf{(Example \ref{Example: 1}(c) continued).} Then $%
g(y)=\left\vert y^{j}\right\vert ^{r}$ and $h(y)=1$ so
\begin{equation*}
n^{-1+r/2}\sum_{i=1}^{\left\lfloor nt\right\rfloor }\left\vert \Delta
_{i}^{n}Y^{j}\right\vert ^{r}\rightarrow ~\mu _{r}\int_{0}^{t}\sigma
_{j,u}^{r}\mathrm{d}u.
\end{equation*}%
This result is due to \cite{Jacod(94)} and \cite%
{BarndorffNielsenShephard(03bernoulli)}.
\noindent \textbf{(Example \ref{Example: 1}(d) continued).} Then $%
g(y)=\left\vert y^{j}\right\vert ^{r}$ and $h(y)=\left\vert y^{j}\right\vert
^{s}$ for $r,s>0$, so
\begin{equation*}
n^{-1+(r+s)/2}\sum_{i=1}^{\left\lfloor nt\right\rfloor }\left\vert \Delta
_{i}^{n}Y^{j}\right\vert ^{r}\left\vert \Delta _{i+1}^{n}Y^{j}\right\vert
^{s}\rightarrow ~\mu _{r}\mu _{s}\int_{0}^{t}\sigma _{j,u}^{r+s}\mathrm{d}u,
\end{equation*}%
a result due to \cite{BarndorffNielsenShephard(04jfe)}, who derived it under
stronger conditions than those used here.
\noindent \textbf{(Example \ref{Example: 1}(e) continued).} Then
\begin{equation*}
g(y)=\left(
\begin{array}{cc}
\left\vert y^{j}\right\vert & 0 \\
0 & \left( y^{j}\right) ^{2}%
\end{array}%
\right) ,\quad h(y)=\left(
\begin{array}{c}
\left\vert y^{j}\right\vert \\
1%
\end{array}%
\right) ,
\end{equation*}%
so
\begin{equation*}
\left(
\begin{array}{c}
\displaystyle\sum_{i=1}^{\left\lfloor nt\right\rfloor }\left\vert \Delta
_{i}^{n}Y^{j}\right\vert \left\vert \Delta _{i+1}^{n}Y^{j}\right\vert \\
\displaystyle\sum_{i=1}^{\left\lfloor nt\right\rfloor }\left( \Delta
_{i}^{n}Y^{j}\right) ^{2}%
\end{array}%
\right) \rightarrow \left( ~%
\begin{array}{c}
\mu _{1}^{2} \\
1%
\end{array}%
\right) \int_{0}^{t}\sigma _{j,u}^{2}\mathrm{d}u.
\end{equation*}%
\cite{BarndorffNielsenShephard(03test)} used this type of result to test for
jumps as this particular bipower variation is robust to jumps.
\end{example}
\section{Central limit theorem\label{sect:CLT}}
\subsection{Motivation}
It is important to be able to quantify the difference between the estimator $%
Y^{n}(g,h)$ and $Y(g,h)$. In this section we do this by giving a central
limit theorem for $\sqrt{n}(Y^{n}(g,h)-Y(g,h))$. We have to make some
stronger assumptions both on the process $Y$ and on the pair $(g,h)$ in
order to derive this result.
\subsection{Assumptions on the process}
We start with a variety of assumptions which strengthen (H) and (K) given in
the previous sections.
\noindent \textbf{Assumption (H0):} We have (H) with
\begin{equation}
\sigma _{t}=\sigma _{0}+\int_{0}^{t}a_{u}^{\ast }\mathrm{d}%
u+\int_{0}^{t}\sigma _{u-}^{\ast }\mathrm{d}W_{u}+\int_{0}^{t}v_{u-}^{\ast }%
\mathrm{d}Z_{u}, \label{H'}
\end{equation}%
where $Z$ is a $d^{\prime \prime }$-dimensional L\'{e}vy process,
independent of $W$. Further, the processes $a^{\ast }$, $\sigma ^{\ast }$, $%
v^{\ast }$ are adapted c\`{a}dl\`{a}g arrays, with $a^{\ast }$ also being
predictable and locally bounded.
\noindent \textbf{Assumption (H1):} We have (H) with
\begin{eqnarray}
\sigma _{t} &=&\sigma _{0}+\int_{0}^{t}a_{u}^{\ast }\mathrm{d}%
u+\int_{0}^{t}\sigma _{u-}^{\ast }\mathrm{d}W_{u}+\int_{0}^{t}v_{u-}^{\ast }%
\mathrm{d}V_{u} \label{assumption (V)} \\
&&+\int_{0}^{t}\int_{E}\varphi \circ w(u-,x)\left( \mu -\nu \right) \left(
\mathrm{d}u,\mathrm{d}x\right) +\int_{0}^{t}\int_{E}\left( w-\varphi \circ
w\right) \left( u-,x\right) \mu \left( \mathrm{d}u,\mathrm{d}x\right) .
\notag
\end{eqnarray}%
Here $a^{\ast }$, $\sigma ^{\ast }$, $v^{\ast }$ are adapted c\`{a}dl\`{a}g
arrays, with $a^{\ast }$ also being predictable and locally bounded. $V$ is
a $d^{\prime \prime }$-dimensional Brownian motion independent of $W$. $\mu $
is a Poisson measure on $\left( 0,\infty \right) \times E$ independent of $W$
and $V$, with intensity measure $\nu (\mathrm{d}t,\mathrm{d}x)=\mathrm{d}%
t\otimes F(\mathrm{d}x)$ and $F$ is a $\sigma $-finite measure on the Polish
space $\left( E,\mathcal{E}\right) $. $\varphi $ is a continuous truncation
function on $R^{dd^{\prime }}$ (a function with compact support, which
coincides with the identity map on a neighbourhood of $0$). Finally $%
w(\omega ,u,x)$ is a map from $\Omega \times \lbrack 0,\infty )\times E$ into
the space of $d\times d^{\prime }$ arrays which is $\mathcal{F}_{u}\otimes $ $%
\mathcal{E}-$measurable in $(\omega ,x)$ for all $u$ and c\`{a}dl\`{a}g in $%
u $, and such that for some sequences $\left( S_{k}\right) $ of stopping
times increasing to $+\infty $ we have%
\begin{equation*}
\sup_{\omega \in \Omega ,u<S_{k}(\omega )}\left\Vert w(\omega
,u,x)\right\Vert \leq \psi _{k}(x)\quad \text{where\quad }\int_{E}\left(
1\wedge \psi _{k}(x)^{2}\right) F(\mathrm{d}x)<\infty .
\end{equation*}
\noindent \textbf{Assumption (H2): }$\Sigma =\sigma \sigma ^{\prime }$ is
everywhere invertible.
\begin{remark}
Assumption (H1) looks quite complicated but has been setup so that the same
conditions on the coefficients can be applied both to $\sigma $ and $\Sigma
=\sigma \sigma ^{\prime }$. If there were no jumps then it would be
sufficient to employ the first line of (\ref{assumption (V)}). The
assumption (H1) is rather general from an econometric viewpoint as it allows
for flexible leverage effects, multifactor volatility effects, jumps,
non-stationarities, intraday effects, etc.
\end{remark}
\subsection{Assumptions on $g$ and $h$}
In order to derive a central limit theorem we need to impose some regularity
on $g$ and $h$.
\noindent \textbf{Assumption (K1): }$f$ is even (that is $f(x)=f(-x)$ for $%
x\in R^{d}$) and continuously differentiable, with derivatives having at
most polynomial growth.
In order to handle some of the most interesting cases of bipower variation,
where we are mostly interested in taking low powers of absolute values of
returns which may not be differentiable at zero, we sometimes need to relax
(K1). The resulting condition is quite technical and is called (K2); it is
further discussed in the Appendix.
\noindent \textbf{Assumption (K2):} $f$ is even and continuously
differentiable on the complement $B^{c}$\ of a closed subset $B\subset
\mathbb{R}^{d}$ and satisfies%
\begin{equation*}
||y||\leq 1\Longrightarrow |f(x+y)-f(x)|\leq C(1+||x||^{p})||y||^{r}
\end{equation*}%
for some constants $C$, $p\geq 0$ and $r\in \left( 0,1\right] $. Moreover
a) If $r=1$ then $B$ has Lebesgue measure $0$.
b) If $r<1$ then $B$ satisfies
\begin{equation}
\left.
\begin{array}{l}
\text{for any positive definite }d\times d\text{ matrix }C\text{ and } \\
\text{any }N(0,C)\text{-random vector }U\text{ the distance }d(U,B) \\
\text{from }U\text{ to }B\text{ has a density }\psi _{C}\text{ on }R_{+},%
\text{ such that } \\
\sup_{x\in R_{+},\,|C|+|C^{-1}|\leq A}\psi _{C}(x)<\infty \text{ for all }%
A<\infty ,%
\end{array}%
\right\} \label{K13}
\end{equation}
\qquad\ and we have
\begin{equation}
x\in B^{c},~\Vert y\Vert \leq 1\bigwedge {\frac{d(x,B)}{2}}~~\Rightarrow
~~\left\{
\begin{array}{l}
\Vert \nabla f(x)\Vert \leq {\frac{C(1+\Vert x\Vert ^{p})}{d(x,B)^{1-r}}}, \\%
[2.5mm]
\Vert \nabla f(x+y)-\nabla f(x)\Vert \leq {\frac{C(1+\Vert x\Vert ^{p})\Vert
y\Vert }{d(x,B)^{2-r}}}.%
\end{array}%
\right. \label{K11}
\end{equation}
\begin{remark}
These conditions accommodate the case where $f$ equals $\left\vert
x^{j}\right\vert ^{r}$: this function satisfies (K1) when $r>1$, and (K2)
when $r\in (0,1]$ (with the same $r$ of course). When $B$ is a finite union
of hyperplanes it satisfies (\ref{K13}). Also, observe that (K1) implies
(K2) with $r=1$ and $B=\emptyset $.
\end{remark}
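As a worked check of the first part of (K2) in the simplest case (dimension
one, $f(x)=\left\vert x\right\vert ^{r}$ with $r\in (0,1]$ and $B=\{0\}$):
subadditivity of $t\mapsto t^{r}$ on $[0,\infty )$ gives
\begin{equation*}
\big| \,|x+y|^{r}-|x|^{r}\big| \leq |y|^{r},
\end{equation*}%
so the H\"{o}lder-type condition holds with $C=1$ and $p=0$; moreover
$|f^{\prime }(x)|=r|x|^{r-1}=r\,d(x,B)^{r-1}$ for $x\neq 0$, which is the
first inequality in (\ref{K11}).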
\subsection{Central limit theorem}
Each of the following assumptions, (J1) and (J2), is sufficient for the
statement of Theorem 1.3 of \cite%
{BarndorffNielsenGraversenJacodPodolskyShephard(04shiryaev)} to hold.
\noindent \textbf{Assumption (J1):} We have (H1) and $g$ and $h$ satisfy
(K1).
\noindent \textbf{Assumption (J2):} We have (H1), (H2) and $g$ and $h$
satisfy (K2).\newline
We restate the result of that theorem as follows.
\begin{theorem}
\label{TT3}Assume at least one of (J1) and (J2) holds. Then the process
\begin{equation*}
\sqrt{n}~(Y^{n}(g,h)_{t}-Y(g,h)_{t})
\end{equation*}%
converges stably in law towards a limiting process $U(g,h)$ having the form%
\begin{equation}
U(g,h)_{t}^{jk}=\sum_{j^{\prime }=1}^{d_{1}}\sum_{k^{\prime
}=1}^{d_{3}}\int_{0}^{t}\alpha (\sigma _{u},g,h)^{jk,j^{\prime }k^{\prime }}~%
\mathrm{d}B_{u}^{j^{\prime },k^{\prime }},
\end{equation}%
where%
\begin{equation*}
\sum_{l=1}^{d_{1}}\sum_{m=1}^{d_{3}}\alpha (\sigma ,g,h)^{jk,lm}\alpha
(\sigma ,g,h)^{j^{\prime }k^{\prime },lm}=A(\sigma ,g,h)^{jk,j^{\prime
}k^{\prime }},
\end{equation*}%
and%
\begin{eqnarray*}
A(\sigma ,g,h)^{jk,j^{\prime }k^{\prime }} &=&\displaystyle%
\sum_{l=1}^{d_{2}}\sum_{l^{\prime }=1}^{d_{2}}\left\{ \rho _{\sigma }\left(
g^{jl}g^{j^{\prime }l^{\prime }}\right) \rho _{\sigma }\left(
h^{lk}h^{l^{\prime }k^{\prime }}\right) +\rho _{\sigma }\left( g^{jl}\right)
\rho _{\sigma }\left( h^{l^{\prime }k^{\prime }}\right) \rho _{\sigma
}\left( g^{j^{\prime }l^{\prime }}h^{lk}\right) \right. \\
&&\displaystyle+\rho _{\sigma }\left( g^{j^{\prime }l^{\prime }}\right) \rho
_{\sigma }\left( h^{lk}\right) \rho _{\sigma }\left( g^{jl}h^{l^{\prime
}k^{\prime }}\right) \\
&&\displaystyle\left. -3\rho _{\sigma }\left( g^{jl}\right) \rho _{\sigma
}\left( g^{j^{\prime }l^{\prime }}\right) \rho _{\sigma }\left(
h^{lk}\right) \rho _{\sigma }\left( h^{l^{\prime }k^{\prime }}\right)
\right\} .
\end{eqnarray*}%
Furthermore, $B$ is a standard Wiener process which is defined on an
extension of $\left( \Omega ,\mathcal{F},\left( \mathcal{F}_{t}\right)
_{t\geq 0},P\right) $ and is independent of the $\sigma $--field $\mathcal{F}
$.
\end{theorem}
\begin{remark}
Convergence stably in law is slightly stronger than convergence in law. It
is discussed in, for example, \cite[pp. 512-518]{JacodShiryaev(03)}.
\end{remark}
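A practical consequence worth spelling out (a sketch, using the notation of
Theorem \ref{TT3}): because the convergence is stable, it can be combined
with any consistent estimator of the conditional variance to produce a
feasible, studentised statistic. For instance, in the scalar case, if
$\hat{A}_{t}^{n}\overset{P}{\rightarrow }\int_{0}^{t}A(\sigma _{u},g,h)\,%
\mathrm{d}u$, then
\begin{equation*}
\frac{\sqrt{n}\,(Y^{n}(g,h)_{t}-Y(g,h)_{t})}{\sqrt{\hat{A}_{t}^{n}}}%
\rightarrow N(0,1)
\end{equation*}%
in law, a conclusion which would not follow from mere convergence in law
when the limit is mixed Gaussian.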
\begin{remark}
Suppose $d_{3}=1$, which is the situation looked at in Example \ref{Example:
1}(e). Then $Y^{n}(g,h)_{t}$ is a vector and so the limiting law of $\sqrt{n}%
(Y^{n}(g,h)-Y(g,h))$ simplifies. It takes on the form of
\begin{equation}
U(g,h)_{t}^{j}=\sum_{j^{\prime }=1}^{d_{1}}\int_{0}^{t}\alpha (\sigma
_{u},g,h)^{j,j^{\prime }}~\mathrm{d}B_{u}^{j^{\prime }},
\end{equation}%
where%
\begin{equation*}
\sum_{l=1}^{d_{1}}\alpha (\sigma ,g,h)^{j,l}\alpha (\sigma ,g,h)^{j^{\prime
},l}=A(\sigma ,g,h)^{j,j^{\prime }}.
\end{equation*}%
Here%
\begin{eqnarray*}
A(\sigma ,g,h)^{j,j^{\prime }} &=&\displaystyle\sum_{l=1}^{d_{2}}\sum_{l^{%
\prime }=1}^{d_{2}}\left\{ \rho _{\sigma }(g^{jl}g^{j^{\prime }l^{\prime
}})\rho _{\sigma }(h^{l}h^{l^{\prime }})+\rho _{\sigma }(g^{jl})\rho
_{\sigma }(h^{l^{\prime }})\rho _{\sigma }(g^{j^{\prime }l^{\prime
}}h^{l})\right. \\
&&\displaystyle+\left. \rho _{\sigma }(g^{j^{\prime }l^{\prime }})\rho
_{\sigma }(h^{l})\rho _{\sigma }(g^{jl}h^{l^{\prime }})-3\rho _{\sigma
}(g^{jl})\rho _{\sigma }(g^{j^{\prime }l^{\prime }})\rho _{\sigma
}(h^{l})\rho _{\sigma }(h^{l^{\prime }})\right\} .
\end{eqnarray*}%
In particular, for a single point in time $t$,
\begin{equation*}
\sqrt{n}~(Y^{n}(g,h)_{t}-Y(g,h)_{t})\rightarrow MN\left(
0,\int_{0}^{t}A(\sigma _{u},g,h)\mathrm{d}u\right) ,
\end{equation*}%
where $MN$ denotes a mixed Gaussian distribution and $A(\sigma ,g,h)$
denotes the matrix whose $j,j^{\prime }$-th element is $A(\sigma
,g,h)^{j,j^{\prime }}$.
\end{remark}
\begin{remark}
Suppose $g(y)=I$; then $A$ becomes
\begin{equation*}
A(\sigma ,g,h)^{jk,j^{\prime }k^{\prime }}=\rho _{\sigma
}(h^{jk}h^{j^{\prime }k^{\prime }})-\rho _{\sigma }(h^{jk})\rho _{\sigma
}(h^{j^{\prime }k^{\prime }}).
\end{equation*}
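To see this, note that with $g(y)=I$ we have $\rho _{\sigma }(g^{jl})=\delta
^{jl}$ and $\rho _{\sigma }(g^{jl}g^{j^{\prime }l^{\prime }})=\delta
^{jl}\delta ^{j^{\prime }l^{\prime }}$, so in the quadruple sum defining $A$
only the term $l=j$, $l^{\prime }=j^{\prime }$ survives, leaving
\begin{equation*}
\rho _{\sigma }(h^{jk}h^{j^{\prime }k^{\prime }})+2\rho _{\sigma
}(h^{jk})\rho _{\sigma }(h^{j^{\prime }k^{\prime }})-3\rho _{\sigma
}(h^{jk})\rho _{\sigma }(h^{j^{\prime }k^{\prime }}),
\end{equation*}%
which is the displayed expression.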
\end{remark}
\subsection{Leading examples of this result}
\begin{example}
Suppose $d_{1}=d_{2}=d_{3}=1$, then
\begin{equation}
U(g,h)_{t}=\int_{0}^{t}\sqrt{A(\sigma _{u},g,h)}~\mathrm{d}B_{u},
\end{equation}%
where%
\begin{equation*}
A(\sigma ,g,h)=\rho _{\sigma }(gg)\rho _{\sigma }(hh)+2\rho _{\sigma
}(g)\rho _{\sigma }(h)\rho _{\sigma }(gh)-3\left\{ \rho _{\sigma }(g)\rho
_{\sigma }(h)\right\} ^{2}.
\end{equation*}%
We consider two concrete examples of this setup.
\noindent \textbf{(i)} Power variation. Suppose $g(y)=1$ and $%
h(y)=\left\vert y^{j}\right\vert ^{r}$ where $r>0$; then $\rho _{\sigma
}(g)=1$,
\begin{equation*}
\rho _{\sigma }(h)=\rho _{\sigma }(gh)=\mu _{r}\sigma _{j}^{r},\quad \rho
_{\sigma }(hh)=\mu _{2r}\sigma _{j}^{2r}.
\end{equation*}%
This implies that%
\begin{eqnarray*}
A(\sigma ,g,h) &=&\mu _{2r}\sigma _{j}^{2r}+2\mu _{r}^{2}\sigma
_{j}^{2r}-3\mu _{r}^{2}\sigma _{j}^{2r} \\
&=&\left( \mu _{2r}-\mu _{r}^{2}\right) \sigma _{j}^{2r} \\
&=&v_{r}\sigma _{j}^{2r},
\end{eqnarray*}%
where $v_{r}=\mathrm{Var}(\left\vert u\right\vert ^{r})$ and $u\sim N(0,1)$.
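For reference, these constants are available in closed form: for $u\sim
N(0,1)$ and $r>-1$,
\begin{equation*}
\mu _{r}=\mathrm{E}\left\vert u\right\vert ^{r}=\frac{2^{r/2}\,\Gamma \left(
(r+1)/2\right) }{\sqrt{\pi }},
\end{equation*}%
so that, for example, $v_{1}=1-2/\pi $ and $v_{2}=2$.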
When $r=2$, this yields a central limit theorem for the realised quadratic
variation process, with
\begin{equation*}
U(g,h)_{t}=\int_{0}^{t}\sqrt{2\sigma _{j,u}^{4}}~\mathrm{d}B_{u},
\end{equation*}%
a result which appears in \cite{Jacod(94)}, \cite{MyklandZhang(05)} and,
implicitly, \cite{JacodProtter(98)}, while the case of a single value of $t$
appears in \cite{BarndorffNielsenShephard(02realised)}. For the more general
case of $r>0$ \cite{BarndorffNielsenShephard(03bernoulli)} derived, under
much stronger conditions, a central limit theorem for $U(g,h)_{1}$. Their
result ruled out leverage effects, which are allowed under Theorem \ref{TT3}%
. The finite sample behaviour of this type of limit theory is studied in,
for example, \cite{BarndorffNielsenShephard(05tom)}, \cite%
{GoncalvesMeddahi(04)} and \cite{NielsenFrederiksen(05)}.
\noindent \textbf{(ii)} Bipower variation. Suppose $g(y)=\left\vert
y^{j}\right\vert ^{r}$ and $h(y)=\left\vert y^{j}\right\vert ^{s}$ where $%
r,s>0$; then%
\begin{eqnarray*}
\rho _{\sigma }(g) &=&\mu _{r}\sigma _{j}^{r},\quad \rho _{\sigma }(h)=\mu
_{s}\sigma _{j}^{s},\quad \rho _{\sigma }(gg)=\mu _{2r}\sigma _{j}^{2r},\quad
\\
\rho _{\sigma }(hh) &=&\mu _{2s}\sigma _{j}^{2s},\quad \rho _{\sigma
}(gh)=\mu _{r+s}\sigma _{j}^{r+s}.
\end{eqnarray*}%
This implies that
\begin{eqnarray*}
A(\sigma ,g,h) &=&\mu _{2r}\sigma _{j}^{2r}\mu _{2s}\sigma _{j}^{2s}+2\mu
_{r}\sigma _{j}^{r}\mu _{s}\sigma _{j}^{s}\mu _{r+s}\sigma _{j}^{r+s}-3\mu
_{r}^{2}\sigma _{j}^{2r}\mu _{s}^{2}\sigma _{j}^{2s} \\
&=&\left( \mu _{2r}\mu _{2s}+2\mu _{r+s}\mu _{r}\mu _{s}-3\mu _{r}^{2}\mu
_{s}^{2}\right) \sigma _{j}^{2r+2s}.
\end{eqnarray*}%
In the $r=s=1$ case \cite{BarndorffNielsenShephard(03test)} derived, under
much stronger conditions, a central limit theorem for $U(g,h)_{1}$. Their
result ruled out leverage effects, which are allowed under Theorem \ref{TT3}%
. In that special case, writing
\begin{equation*}
\vartheta =\frac{\pi ^{2}}{4}+\pi -5,
\end{equation*}%
we have
\begin{equation*}
U(g,h)_{t}=\mu _{1}^{2}\int_{0}^{t}\sqrt{\left( 2+\vartheta \right) \sigma
_{j,u}^{4}}~\mathrm{d}B_{u}.
\end{equation*}
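Numerically $\vartheta =\pi ^{2}/4+\pi -5\approx 0.6090$. Note also that if
bipower variation is scaled by $\mu _{1}^{-2}$ so as to estimate
$\int_{0}^{t}\sigma _{j,u}^{2}\mathrm{d}u$, the resulting asymptotic
variance is $(2+\vartheta )\int_{0}^{t}\sigma _{j,u}^{4}\mathrm{d}u\approx
2.61\int_{0}^{t}\sigma _{j,u}^{4}\mathrm{d}u$, compared with $2\int_{0}^{t}%
\sigma _{j,u}^{4}\mathrm{d}u$ for realised variance.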
\end{example}
\begin{example}
Suppose $g=I$, $h(y)=yy^{\prime }$. Then we have to calculate%
\begin{equation*}
A(\sigma ,g,h)^{jk,j^{\prime }k^{\prime }}=\rho _{\sigma
}(h^{jk}h^{j^{\prime }k^{\prime }})-\rho _{\sigma }(h^{jk})\rho _{\sigma
}(h^{j^{\prime }k^{\prime }}).
\end{equation*}%
However,
\begin{equation*}
\rho _{\sigma }(h^{jk})=\Sigma ^{jk},\quad \rho _{\sigma
}(h^{jk}h^{j^{\prime }k^{\prime }})=\Sigma ^{jk}\Sigma ^{j^{\prime
}k^{\prime }}+\Sigma ^{jj^{\prime }}\Sigma ^{kk^{\prime }}+\Sigma
^{jk^{\prime }}\Sigma ^{kj^{\prime }},
\end{equation*}%
so
\begin{eqnarray*}
A(\sigma ,g,h)^{jk,j^{\prime }k^{\prime }} &=&\Sigma ^{jk}\Sigma ^{j^{\prime
}k^{\prime }}+\Sigma ^{jj^{\prime }}\Sigma ^{kk^{\prime }}+\Sigma
^{jk^{\prime }}\Sigma ^{kj^{\prime }}-\Sigma ^{jk}\Sigma ^{j^{\prime
}k^{\prime }} \\
&=&\Sigma ^{jj^{\prime }}\Sigma ^{kk^{\prime }}+\Sigma ^{jk^{\prime }}\Sigma
^{kj^{\prime }}.
\end{eqnarray*}%
This is the result found in \cite{BarndorffNielsenShephard(04multi)}, but
proved under stronger conditions, and is implicit in the work of \cite%
{JacodProtter(98)}.
\end{example}
\begin{example}
\label{Example:vector case}Suppose $d_{1}=d_{2}=2$, $d_{3}=1$ and $g$ is
diagonal. Then
\begin{equation}
U(g,h)_{t}^{j}=\sum_{j^{\prime }=1}^{2}\int_{0}^{t}\alpha (\sigma
_{u},g,h)^{j,j^{\prime }}~\mathrm{d}B_{u}^{j^{\prime }},
\end{equation}%
where%
\begin{equation*}
\sum_{l=1}^{2}\alpha (\sigma ,g,h)^{j,l}\alpha (\sigma ,g,h)^{j^{\prime
},l}=A(\sigma ,g,h)^{j,j^{\prime }}.
\end{equation*}%
Here%
\begin{eqnarray*}
A(\sigma ,g,h)^{j,j^{\prime }} &=&\rho _{\sigma }(g^{jj}g^{j^{\prime
}j^{\prime }})\rho _{\sigma }(h^{j}h^{j^{\prime }})+\rho _{\sigma
}(g^{jj})\rho _{\sigma }(h^{j^{\prime }})\rho _{\sigma }(g^{j^{\prime
}j^{\prime }}h^{j}) \\
&&+\rho _{\sigma }(g^{j^{\prime }j^{\prime }})\rho _{\sigma }(h^{j})\rho
_{\sigma }(g^{jj}h^{j^{\prime }})-3\rho _{\sigma }(g^{jj})\rho _{\sigma
}(g^{j^{\prime }j^{\prime }})\rho _{\sigma }(h^{j})\rho _{\sigma
}(h^{j^{\prime }}).
\end{eqnarray*}
\end{example}
\begin{example}
\label{Example:joint BPV and RV}Joint behaviour of realised QV and realised
bipower variation. This sets%
\begin{equation*}
g(y)=\left(
\begin{array}{cc}
\left\vert y^{j}\right\vert & 0 \\
0 & 1%
\end{array}%
\right) ,\quad h(y)=\left(
\begin{array}{c}
\left\vert y^{j}\right\vert \\
\left( y^{j}\right) ^{2}%
\end{array}%
\right) .
\end{equation*}%
The implication is that
\begin{equation*}
\rho _{\sigma }(g^{11})=\rho _{\sigma }(g^{22}g^{11})=\rho _{\sigma
}(g^{11}g^{22})=\mu _{1}\sigma _{j},\ \rho _{\sigma }(g^{22})=1,\ \rho
_{\sigma }(g^{11}g^{11})=\sigma _{j}^{2},\ \rho _{\sigma }(g^{22}g^{22})=1,
\end{equation*}%
\begin{equation*}
\rho _{\sigma }(h^{1})=\mu _{1}\sigma _{j},\ \rho _{\sigma }(h^{2})=\rho
_{\sigma }(h^{1}h^{1})=\sigma _{j}^{2},\ \rho _{\sigma }(h^{1}h^{2})=\rho
_{\sigma }(h^{2}h^{1})=\mu _{3}\sigma _{j}^{3},\ \rho _{\sigma
}(h^{2}h^{2})=3\sigma _{j}^{4},
\end{equation*}%
\begin{equation*}
\rho _{\sigma }(g^{11}h^{1})=\sigma _{j}^{2},\ \rho _{\sigma
}(g^{11}h^{2})=\mu _{3}\sigma _{j}^{3},\ \rho _{\sigma }(g^{22}h^{1})=\mu
_{1}\sigma _{j},\ \rho _{\sigma }(g^{22}h^{2})=\sigma _{j}^{2}.
\end{equation*}%
Thus
\begin{eqnarray*}
A(\sigma ,g,h)^{1,1} &=&\sigma _{j}^{2}\sigma _{j}^{2}+2\mu _{1}\sigma
_{j}\mu _{1}\sigma _{j}\sigma _{j}^{2}-3\mu _{1}\sigma _{j}\mu _{1}\sigma
_{j}\mu _{1}\sigma _{j}\mu _{1}\sigma _{j} \\
&=&\sigma _{j}^{4}\left( 1+2\mu _{1}^{2}-3\mu _{1}^{4}\right) =\mu
_{1}^{4}(2+\vartheta )\sigma _{j}^{4},
\end{eqnarray*}%
while%
\begin{equation*}
A(\sigma ,g,h)^{2,2}=3\sigma _{j}^{4}+2\sigma _{j}^{4}-3\sigma
_{j}^{4}=2\sigma _{j}^{4},
\end{equation*}%
and%
\begin{eqnarray*}
A(\sigma ,g,h)^{1,2} &=&\mu _{1}\sigma _{j}\mu _{3}\sigma _{j}^{3}+\mu
_{1}\sigma _{j}\sigma _{j}^{2}\mu _{1}\sigma _{j}+\mu _{1}\sigma _{j}\mu
_{3}\sigma _{j}^{3}-3\mu _{1}\sigma _{j}\mu _{1}\sigma _{j}\sigma _{j}^{2} \\
&=&2\sigma _{j}^{4}\left( \mu _{1}\mu _{3}-\mu _{1}^{2}\right) =2\mu
_{1}^{2}\sigma _{j}^{4}.
\end{eqnarray*}%
This generalises the result given in \cite{BarndorffNielsenShephard(03test)}
to the leverage case. In particular we have that
\begin{equation*}
\left(
\begin{array}{c}
U(g,h)_{t}^{1} \\
U(g,h)_{t}^{2}%
\end{array}%
\right) =\left(
\begin{array}{l}
\displaystyle\mu _{1}^{2}\int_{0}^{t}\sqrt{2\sigma _{u}^{4}}\mathrm{d}%
B_{u}^{1}+\mu _{1}^{2}\int_{0}^{t}\sqrt{\vartheta \sigma _{u}^{4}}\mathrm{d}%
B_{u}^{2} \\
\displaystyle\int_{0}^{t}\sqrt{2\sigma _{u}^{4}}\mathrm{d}B_{u}^{1}.%
\end{array}%
\right)
\end{equation*}
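A useful by-product of this joint limit (recorded here as a direct
consequence, not a new result): subtracting the two components after scaling
the first by $\mu _{1}^{-2}$ cancels the common $B^{1}$ term, so that
\begin{equation*}
\sqrt{n}\left( \mu _{1}^{-2}Y^{n}(g,h)_{t}^{1}-Y^{n}(g,h)_{t}^{2}\right)
\rightarrow MN\left( 0,\vartheta \int_{0}^{t}\sigma _{u}^{4}\mathrm{d}%
u\right) ,
\end{equation*}%
which is the basis of the bipower-based jump statistics of \cite%
{BarndorffNielsenShephard(03test)}.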
\end{example}
\section{Multipower variation\label{sect:multipower variation}}
A natural extension of generalised bipower variation is to generalised
multipower variation%
\begin{equation*}
Y^{n}(g)_{t}=\frac{1}{n}\sum_{i=1}^{\left\lfloor nt\right\rfloor }\left\{
\dprod\limits_{i^{\prime }=1}^{I\wedge \left( i+1\right) }g_{i^{\prime }}(%
\sqrt{n}~\Delta _{i-i^{\prime }+1}^{n}Y)\right\} .
\end{equation*}%
This measure of variation, for the $g_{i^{\prime }}$ being absolute powers,
was introduced by \cite{BarndorffNielsenShephard(03test)}.
We will be interested in studying the properties of $Y^{n}(g)_{t}$ for given
functions $\left\{ g_{i}\right\} $ with the following properties.
\noindent \textbf{Assumption (K}$^{\ast }$\textbf{):} All the $\left\{
g_{i}\right\} $ are continuous with at most polynomial growth.
The previous results suggest that if $Y$ is a Brownian semimartingale and
Assumption (K$^{\ast }$) holds then
\begin{equation*}
Y^{n}(g)_{t}\rightarrow Y(g)_{t}:=\int_{0}^{t}\dprod\limits_{i=1}^{I}\rho
_{\sigma _{u}}(g_{i})\mathrm{d}u.
\end{equation*}
\begin{example}
\textbf{(a)} Suppose $I=4$ and $g_{i}(y)=\left\vert y^{j}\right\vert $, then
$\rho _{\sigma }(g_{i})=\mu _{1}\sigma _{j}$ so
\begin{equation*}
Y(g)_{t}=\mu _{1}^{4}\int_{0}^{t}\sigma _{j,u}^{4}\mathrm{d}u,
\end{equation*}%
a scaled version of integrated quarticity. \newline
\noindent \textbf{(b)} Suppose $I=3$ and $g_{i}(y)=\left\vert
y^{j}\right\vert ^{4/3}$, then
\begin{equation*}
\rho _{\sigma }(g_{i})=\mu _{4/3}\sigma _{j}^{4/3}
\end{equation*}%
so
\begin{equation*}
Y(g)_{t}=\mu _{4/3}^{3}\int_{0}^{t}\sigma _{j,u}^{4}\mathrm{d}u.
\end{equation*}
\end{example}
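Before turning to the generic case, the following minimal simulation sketch
may help to fix ideas (it is purely illustrative and not part of the formal
development: constant volatility, unit time span, and all numerical choices
are ours). It computes realised variance together with bipower and tripower
variation, scaled so that each estimates the integrated variance:
\begin{verbatim}
import numpy as np
from math import gamma

# Simulate n increments of Y_t = sigma * W_t on [0,1] (constant
# volatility, so the integrated variance is sigma**2 = 0.04).
rng = np.random.default_rng(0)
n, sigma = 23400, 0.2
dY = sigma * rng.standard_normal(n) / np.sqrt(n)   # Delta_i^n Y

# mu(r) = E|N(0,1)|^r = 2^(r/2) Gamma((r+1)/2) / sqrt(pi)
mu = lambda r: 2.0**(r / 2) * gamma((r + 1) / 2) / np.sqrt(np.pi)

a = np.abs(dY)
rv  = np.sum(dY**2)                                 # realised variance
bpv = np.sum(a[1:] * a[:-1]) / mu(1.0)**2           # scaled bipower
tpv = np.sum((a[2:] * a[1:-1] * a[:-2])**(2/3)) / mu(2/3)**3  # tripower

print(rv, bpv, tpv)   # all three should be close to 0.04
\end{verbatim}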
\begin{example}
Of some importance is the generic case where $g_{i}(y)=\left\vert
y^{j}\right\vert ^{2/I}$, which implies
\begin{equation*}
Y(g)_{t}=\mu _{2/I}^{I}\int_{0}^{t}\sigma _{j,u}^{2}\mathrm{d}u.
\end{equation*}%
Thus this class provides an interesting alternative to realised variance as
an estimator of integrated variance. Of course it is important to know a
central limit theory for these types of quantities. \ \cite%
{BarndorffNielsenGraversenJacodPodolskyShephard(04shiryaev)} show that when
(H1) and (H2) hold then
\begin{equation*}
\sqrt{n}\left[ Y^{n}(g)_{t}-Y(g)_{t}\right] \rightarrow \int_{0}^{t}\sqrt{%
\omega _{I}^{2}\sigma _{j,u}^{4}}~\mathrm{d}B_{u},
\end{equation*}%
where%
\begin{equation*}
\omega _{I}^{2}=\mathrm{Var}\left( \dprod\limits_{i=1}^{I}\left\vert
u_{i}\right\vert ^{2/I}\right) +2\sum_{j=1}^{I-1}\mathrm{Cov}\left(
\dprod\limits_{i=1}^{I}\left\vert u_{i}\right\vert
^{2/I},\dprod\limits_{i=1}^{I}\left\vert u_{i-j}\right\vert ^{2/I}\right) ,
\end{equation*}%
with $u_{i}\sim NID(0,1)$. Clearly $\omega _{1}^{2}=2$, while recalling that
$\mu _{1}=\sqrt{2/\pi }$,
\begin{eqnarray*}
\omega _{2}^{2} &=&\mathrm{Var}(\left\vert u_{1}\right\vert \left\vert
u_{2}\right\vert )+2\mathrm{Cov}(\left\vert u_{1}\right\vert \left\vert
u_{2}\right\vert ,\left\vert u_{2}\right\vert \left\vert u_{3}\right\vert )
\\
&=&1+2\mu _{1}^{2}-3\mu _{1}^{4},
\end{eqnarray*}
\noindent and
\begin{eqnarray*}
\omega _{3}^{2} &=&\mathrm{Var}(\left( \left\vert u_{1}\right\vert
\left\vert u_{2}\right\vert \left\vert u_{3}\right\vert \right) ^{2/3})+2%
\mathrm{Cov}(\left( \left\vert u_{1}\right\vert \left\vert u_{2}\right\vert
\left\vert u_{3}\right\vert \right) ^{2/3},\left( \left\vert
u_{2}\right\vert \left\vert u_{3}\right\vert \left\vert u_{4}\right\vert
\right) ^{2/3}) \\
&&+2\mathrm{Cov}(\left( \left\vert u_{1}\right\vert \left\vert
u_{2}\right\vert \left\vert u_{3}\right\vert \right) ^{2/3},\left(
\left\vert u_{3}\right\vert \left\vert u_{4}\right\vert \left\vert
u_{5}\right\vert \right) ^{2/3}) \\
&=&\left( \mu _{4/3}^{3}-\mu _{2/3}^{6}\right) +2\left( \mu _{4/3}^{2}\mu
_{2/3}^{2}-\mu _{2/3}^{6}\right) +2\left( \mu _{4/3}\mu _{2/3}^{4}-\mu
_{2/3}^{6}\right) .
\end{eqnarray*}
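Observe that $\omega _{2}^{2}=1+2\mu _{1}^{2}-3\mu _{1}^{4}=\mu
_{1}^{4}(2+\vartheta )$, in agreement with the bipower constant $A(\sigma
,g,h)=\mu _{1}^{4}(2+\vartheta )\sigma _{j}^{4}$ found in the $r=s=1$
example above (there, as here with $I=2$, the powers sum to $2$).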
\end{example}
\begin{example}
The law of large numbers and the central limit theorem also hold for linear
combinations of processes like $Y(g)$ above. For example one may denote by $%
\zeta^n_i$ the $d\times d$ matrix whose $(k,l)$ entry is $%
\sum_{j=0}^{d-1}\Delta^n_{i+j}Y^k\Delta^n_{i+j}Y^l$. Then
\begin{equation*}
Z^n_t=\frac{n^{d-1}}{d!}\sum_{i=1}^{[nt]}\det(\zeta^n_i)
\end{equation*}
is a linear combination of processes $Y^n(g)$ for functions $g_l$ of
the form $g_l(y)=y^jy^k$. It is proved in \cite{JacodLejayTalay(05)} that
under (H)
\begin{equation*}
Z^n_t\rightarrow Z_t:=\int_0^t \det(\sigma_u\sigma^{\prime}_u)\,\mathrm{d}u
\end{equation*}
in probability, whereas under (H1) and (H2) the associated CLT is the
following convergence in law:
\begin{equation*}
\sqrt{n}(Z^n_t-Z_t)\rightarrow \int_0^t\sqrt{\Gamma(\sigma_u)}~dB_u,
\end{equation*}
where $\Gamma(\sigma)$ denotes the variance of the variable $%
\det(\zeta)/d! $, and $\zeta$ is a $d\times d$ matrix whose $(k,l)$ entry is
$\sum_{j=0}^{d-1}U_j^kU_j^l$ and the $U_j$'s are i.i.d. centered Gaussian
vectors with covariance $\sigma\sigma^{\prime}$.
This kind of result may be used for testing whether the rank of the
diffusion coefficient is everywhere smaller than $d$ (in which case one
could use a model with dimension $d^{\prime}<d$ for the driving
Wiener process $W$).
\end{example}
\section{Conclusion}
This paper provides some rather general limit results for realised
generalised bipower variation. In the case of power variation and bipower
variation the results are proved under much weaker assumptions than those
which have previously appeared in the literature. In particular the
no-leverage assumption is removed, which is important in the application of
these results to stock data.
There are a number of open questions. It is rather unclear how
econometricians might exploit the generality of the $g$ and $h$ functions to
learn about interesting features of the variation of price processes. It
would be interesting to know what properties $g$ and $h$ must possess in
order for these statistics to be robust to finite activity and infinite
activity jumps. A challenging extension is to construct a version of
realised generalised bipower variation which is robust to market
microstructure effects. Following the work on realised volatility, there
are two leading strategies which may help: the kernel-based
approach, studied in detail by \cite%
{BarndorffNielsenHansenLundeShephard(04)}, and the subsampling approach of
\cite{ZhangMyklandAitSahalia(03)} and \cite{Zhang(04)}. In the realised
volatility case these methods are essentially equivalent; however, the
subsampling method is perhaps easier to extend to the non-quadratic case.
\section{Acknowledgments}
Ole E. Barndorff-Nielsen's work is supported by CAF (\texttt{www.caf.dk}),
which is funded by the Danish Social Science Research Council. Neil
Shephard's research is supported by the UK's ESRC through the grant
\textquotedblleft High frequency financial econometrics based upon power
variation.\textquotedblright\
\section{Proof of Theorem \protect\ref{TT3}}
\subsection{Strategy for the proof}
Below we give a fairly detailed account of the basic techniques in the proof
of Theorem \ref{TT3}, in the one-dimensional case and under some relatively
minor simplifying assumptions. Throughout we set $h=1$, since the main
difficulty in the proof is dealing with the generality of the $g$
function. Once that has been mastered, the extension to the bipower measure
is not a large obstacle. Readers who wish to see the more general case are
referred to \cite%
{BarndorffNielsenGraversenJacodPodolskyShephard(04shiryaev)}. In this
subsection we provide a brief outline of the content of the section.
The aim of this Section is to show that%
\begin{equation}
\sqrt{n}\left( \frac{1}{n}\,\sum_{i=1}^{[nt]}g\left( \sqrt{n}\,\triangle
_{i}^{n}Y\right) -\int_{0}^{t}\rho _{\sigma _{u}}(g)\,\mathrm{d}u\right)
\rightarrow
\int_{0}^{t}\sqrt{\rho _{\sigma _{u}}(g^{2})-\rho _{\sigma _{u}}(g)^{2}}\;%
\mathrm{d}B_{u} \label{eqn 0}
\end{equation}%
where $B$\ is a Brownian motion independent of the process $Y$\ and the
convergence is (stably) in law. This case is important because the extension
to realised generalised bipower (and multipower) variation is relatively
simple once this fundamental result is established.
The proof of this result is done\ in a number of steps, some of them
following fairly standard reasoning, others requiring special techniques.
The first step is to rewrite the left hand side of (\ref{eqn 0}) as follows%
\begin{eqnarray*}
&&\sqrt{n}\left( \frac{1}{n}\,\sum_{i=1}^{[nt]}g(\sqrt{n}\,\triangle
_{i}^{n}Y)-\int_{0}^{t}\rho _{\sigma _{u}}(g)\mathrm{d}u\right) \\
&=&\frac{1}{\sqrt{n}}\,\sum_{i=1}^{[nt]}\left\{ g(\sqrt{n}\,\triangle
_{i}^{n}Y)-\mathrm{E}\left[ g(\sqrt{n}\,\triangle _{i}^{n}Y)\,|\,\mathcal{F}_{\frac{i-1%
}{n}}\right] \right\} \, \\
&&+\sqrt{n}\left( \frac{1}{n}\,\sum_{i=1}^{[nt]}\mathrm{E}\left[ g(\sqrt{n}\,\triangle
_{i}^{n}Y)\,|\,\mathcal{F}_{\frac{i-1}{n}}\right] \,-\int_{0}^{t}\rho
_{\sigma _{u}}(g)\mathrm{d}u\right) .
\end{eqnarray*}%
It is rather straightforward to show that the first term of the right hand
side satisfies
\begin{equation*}
\frac{1}{\sqrt{n}}\,\sum_{i=1}^{[nt]}\left\{ \,g(\sqrt{n}\,\triangle
_{i}^{n}Y)-\mathrm{E}\left[ g(\sqrt{n}\,\triangle _{i}^{n}Y)\,|\,\mathcal{F}_{\frac{i-1%
}{n}}\right] \right\} \rightarrow \int_{0}^{t}\sqrt{\rho _{\sigma
_{u}}(g^{2})-\rho _{\sigma _{u}}(g)^{2}}\,\mathrm{d}B_{u}.
\end{equation*}%
Hence what remains is to verify that%
\begin{equation}
\sqrt{n}\left( \frac{1}{n}\,\sum_{i=1}^{[nt]}\mathrm{E}\left[ g(\sqrt{n}\,\triangle
_{i}^{n}Y)\,|\,\mathcal{F}_{\frac{i-1}{n}}\right] \,-\int_{0}^{t}\rho
_{\sigma _{u}}(g)\mathrm{d}u\right) \rightarrow 0. \label{2}
\end{equation}%
We have%
\begin{eqnarray}
&&\sqrt{n}\left( \frac{1}{n}\,\sum_{i=1}^{[nt]}\mathrm{E}\left[ g(\sqrt{n}\,\triangle
_{i}^{n}Y)\,|\,\mathcal{F}_{\frac{i-1}{n}}\right] \,-\int_{0}^{t}\rho
_{\sigma _{u}}(g)\mathrm{d}u\right) \notag \\
&=&\frac{1}{\sqrt{n}}\,\sum_{i=1}^{[nt]}\mathrm{E}\left[ g(\sqrt{n}\,\triangle
_{i}^{n}Y)\,|\,\mathcal{F}_{\frac{i-1}{n}}\right] \,-\sqrt{n}%
\sum_{i=1}^{[nt]}\int_{(i-1)/n}^{i/n}\rho _{\sigma _{u}}(g)\mathrm{d}u
\notag \\
&&+\sqrt{n}\left( \sum_{i=1}^{[nt]}\int_{(i-1)/n}^{i/n}\rho _{\sigma _{u}}(g)%
\mathrm{d}u-\int_{0}^{t}\rho _{\sigma _{u}}(g)\mathrm{d}u\right) \label{3}
\end{eqnarray}%
where%
\begin{equation*}
\sqrt{n}\left\{ \sum_{i=1}^{[nt]}\int_{(i-1)/n}^{i/n}\rho _{\sigma _{u}}(g)%
\mathrm{d}u-\int_{0}^{t}\rho _{\sigma _{u}}(g)\mathrm{d}u\right\}
\rightarrow 0.
\end{equation*}%
The first term on the right hand side of (\ref{3}) is now split into the
difference of%
\begin{equation}
\frac{1}{\sqrt{n}}\,\sum_{i=1}^{[nt]}\left\{ \mathrm{E}\left[ g(\sqrt{n}\,\triangle
_{i}^{n}Y)\,|\,\mathcal{F}_{\frac{i-1}{n}}\right] \,-\rho _{\frac{i-1}{n}%
}\right\} \label{4}
\end{equation}%
where
\begin{equation*}
\rho _{\frac{i-1}{n}}=\rho _{\sigma _{\frac{i-1}{n}}}(g)=\mathrm{E}\left[
g(\sqrt{n}\,\sigma _{\frac{i-1}{n}}\triangle _{i}^{n}W)\,|\,\mathcal{F}_{\frac{i-1}{n}}%
\right]
\end{equation*}%
and%
\begin{equation}
\sqrt{n}\sum_{i=1}^{[nt]}\int_{(i-1)/n}^{i/n}\left\{ \rho _{\sigma _{u}}(g)%
-\rho _{\frac{i-1}{n}}\right\} \mathrm{d}u. \label{4b}
\end{equation}%
It is rather easy to show that (\ref{4}) tends to $0$ in probability
uniformly in $t$. The challenge is thus to show the same result holds for (%
\ref{4b}).
To handle (\ref{4b}) one splits the individual terms in the sum into%
\begin{equation}
\sqrt{n}\ \Phi ^{\prime }\left( \sigma _{\frac{i-1}{n}}\right)
\int_{(i-1)/n}^{i/n}\left( \sigma _{u}-\sigma _{\frac{i-1}{n}}\right)
\mathrm{\,d}u \label{5}
\end{equation}%
plus%
\begin{equation}
\sqrt{n}\,\int_{(i-1)/n}^{i/n}\,\left\{ \Phi (\sigma _{u})-\Phi \left(
\sigma _{\frac{i-1}{n}}\right) -\Phi ^{\prime }\left( \sigma _{\frac{i-1}{n}%
}\right) \cdot \left( \sigma _{u}-\sigma _{\frac{i-1}{n}}\right) \right\}
\,\,\mathrm{d}u, \label{6}
\end{equation}%
where $\Phi (x)$ is a shorthand for $\rho _{x}(g)$ and $\Phi ^{\prime }(x)$
denotes the derivative with respect to $x$.\ That (\ref{6})\ tends to $0$\
may be shown via splitting it into two terms, each of which tends to $0$\ as
is verified by a sequence of inequalities, using in particular Doob's
inequality. To prove that (\ref{5}) converges to $0$, again one splits, this
time into three terms, using the differentiability of $g$\ in the relevant
regions and the mean value theorem for differentiable functions. The two
first of these terms can be handled by relatively simple means, the third
poses the most difficult part of the whole proof and is treated via
splitting it into seven parts. It is at this stage that the assumption that $%
g$\ be even comes into play and is crucial.
This section has six other subsections. In subsection \ref%
{subsection:conventions} we introduce our basic notation, while in \ref%
{subsection:model assumptions} we set out the model and review the
assumptions we use. In subsection \ref{subsection:main result} we state the
theorem we will prove. Subsections \ref{subsection: intermediate limiting
results}, \ref{subsection:13b} and \ref{subsection:proof of 13a} give the
proofs of the successive steps.
\subsection{Notational conventions \label{subsection:conventions}}
All processes mentioned in the following are defined on a given filtered
probability space $(\Omega ,\mathcal{F},(\mathcal{F}_{t}),P)$. We shall in
general use standard notation and conventions. For instance, given a process
$(Z_{t})$ we write
\begin{equation*}
\triangle _{i}^{n}Z:=Z_{\frac{i}{n}}-Z_{\frac{i-1}{n}},\ \ \ i,n\geq 1.
\end{equation*}
We are mainly interested in convergence in law of sequences of c\`{a}dl\`{a}%
g processes. In fact all results to be proved will imply convergence `stably
in law' which is a slightly stronger notion. For this we shall use the
notation
\begin{equation*}
(Z_{t}^{n})\rightarrow (Z_{t}),
\end{equation*}%
where $(Z_{t}^{n})$ and $(Z_{t})$ are given c\`{a}dl\`{a}g processes.
Furthermore we shall write
\begin{equation*}
(Z_{t}^{n})\overset{P}{\rightarrow }0\ \ \ \text{meaning}\ \ \sup_{0\leq
s\leq t}|Z_{s}^{n}|\rightarrow 0\ \ \mbox{\rm in\ probability\ for\ all}\
t\geq 0,
\end{equation*}%
\begin{equation*}
(Z_{t}^{n})\overset{P}{\rightarrow }(Z_{t})\ \ \ \text{meaning}\ \
(Z_{t}^{n}-Z_{t})\overset{P}{\rightarrow }0.
\end{equation*}%
Often
\begin{equation*}
Z_{t}^{n}=\sum_{i=1}^{[nt]}a_{i}^{n}\ \ \ \text{for all}\ t\geq 0,
\end{equation*}%
where the $a_{i}^{n}$'s are $\mathcal{F}_{\frac{i}{n}}$-measurable. Recall
here that given c\`{a}dl\`{a}g processes $(Z_{t}^{n}),\,(Y_{t}^{n})$ and $%
(Z_{t})$ we have
\begin{equation*}
(Z_{t}^{n})\rightarrow (Z_{t})\ \ \text{if}\ \ (Z_{t}^{n}-Y_{t}^{n})\overset{%
P}{\rightarrow }0\ \ \text{and}\ \ (Y_{t}^{n})\rightarrow (Z_{t}).\vspace{1mm%
}
\end{equation*}
Moreover, for $h:\mathbf{R}\rightarrow \mathbf{R}$ Borel measurable of at
most polynomial growth we note that $x\mapsto \rho _{x}(h)$ is locally
bounded and continuous if $h$ is continuous at $0$.\vspace{1mm}\newline
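(Recall that throughout, $\rho _{x}(h)=\mathrm{E}[h(xU)]$ with $U\sim
N(0,1)$, i.e.
\begin{equation*}
\rho _{x}(h)=\int_{\mathbf{R}}h(xy)\,\frac{1}{\sqrt{2\pi }}\,\mathrm{e}%
^{-y^{2}/2}\,\mathrm{d}y;
\end{equation*}%
both of the stated properties then follow essentially by dominated
convergence.)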
In what follows many arguments will consist of a series of estimates of
terms indexed by $i,n$ and $t$. In these estimates we shall denote by $C$ a
finite constant which may vary from place to place. Its value will depend on
the constants and quantities appearing in the assumptions of the model but
it is always independent of $i,n$ and $t$.
\subsection{Model and basic assumptions \label{subsection:model assumptions}}
Throughout the following $(W_{t})$ denotes a $((\mathcal{F}_{t}),P)$-Wiener
process and $(\sigma _{t})$ a given c\`{a}dl\`{a}g $(\mathcal{F}_{t})$%
-adapted process. Define
\begin{equation*}
Y_{t}:=\int_{0}^{t}\sigma _{s-}\,\mathrm{d}W_{s}\ \ \ \ \ t\geq 0,
\end{equation*}%
implying that $(Y_{t})$ is a continuous local martingale. We have deleted
the drift of the $\left( Y_{t}\right) $ process, as taking care of it is a
simple technical task while its presence increases the clutter of the
notation. Our aim is to study the asymptotic behaviour of the processes
\begin{equation*}
\{(X_{t}^{n}(g))\,|\,n\geq 1\,\}
\end{equation*}%
where%
\begin{equation*}
X_{t}^{n}(g)=\frac{1}{n}\,\sum_{i=1}^{[nt]}g(\sqrt{n}\,\triangle
_{i}^{n}Y),\ \ \ t\geq 0,\,n\geq 1.
\end{equation*}%
Here $g:\mathbf{R}\rightarrow \mathbf{R}$ is a given continuous function of
at most polynomial growth. We are especially interested in $g$'s of the form
$x\mapsto |x|^{r}\ (r>0)$ but we shall keep the general notation since
nothing is gained in simplicity by assuming that $g$ is of power form. We
shall throughout the following assume that $g$ furthermore satisfies the
following.
\begin{assumption}
\textbf{(K)}: $g$ is an even function and continuously differentiable in $%
B^{c}$ where $B\subseteq \mathbf{R}$ is a closed Lebesgue null-set and $%
\exists \ M,\,p\geq 1$ such that
\begin{equation*}
|g(x+y)-g(x)|\leq M(1+|x|^{p}+|y|^{p})\cdot |y|\ ,
\end{equation*}%
for all $x,y\in \mathbf{R}$.
\end{assumption}
\begin{remark}
The assumption (K) implies, in particular, that if $x\in B^{c}$ then
\begin{equation*}
|g^{\prime }(x)|\leq M(1+|x|^{p}).\vspace{1mm}
\end{equation*}%
Observe that among the power functions $x\mapsto |x|^{r}$ only those with $r\geq 1$ satisfy (K).
The remaining case $0<r<1$ requires special arguments which will be omitted
here\vspace{1mm} (for details see \cite%
{BarndorffNielsenGraversenJacodPodolskyShephard(04shiryaev)}).
\end{remark}
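For instance, for $g(x)=|x|^{r}$ with $r\geq 1$ the mean value theorem gives
a one-line check of (K):
\begin{equation*}
\big| \,|x+y|^{r}-|x|^{r}\big| \leq r\left( |x|+|y|\right) ^{r-1}|y|\leq
M(1+|x|^{p}+|y|^{p})\,|y|,\qquad p=\max (r-1,1),
\end{equation*}%
for a suitable constant $M$ depending only on $r$, with $B=\{0\}$ when $r=1$
and $B=\emptyset $ when $r>1$.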
In order to prove the CLT-theorem we need some additional structure on the
volatility process $(\sigma _{t})$. A natural set of assumptions would be
the following.
\begin{assumption}
\textbf{(H0)}: $(\sigma _{t})$ can be written as
\begin{equation*}
\sigma _{t}=\sigma _{0}+\int_{0}^{t}a_{s}^{\ast }\,\mathrm{d}%
s+\int_{0}^{t}\sigma _{s}^{\ast }\,\mathrm{d}W_{s}+\int_{0}^{t}v_{s-}^{\ast
}\,\mathrm{d}Z_{s}
\end{equation*}%
where $(Z_{t})$ is a $((\mathcal{F}_{t}),P)$-L\'{e}vy process independent of
$(W_{t})$ and $(\sigma _{t}^{\ast })$ and $(v_{t}^{\ast })$ are adapted c%
\`{a}dl\`{a}g processes and $(a_{t}^{\ast })$ a predictable locally bounded
process.
\end{assumption}
However, in modelling volatility it is often more natural to define $(\sigma
_{t}^{2})$ as being of the above form, i.e.%
\begin{equation*}
\sigma _{t}^{2}=\sigma _{0}^{2}+\int_{0}^{t}a_{s}^{\ast }\,\mathrm{d}%
s+\int_{0}^{t}\sigma _{s}^{\ast }\,\mathrm{d}W_{s}+\int_{0}^{t}v_{s-}^{\ast
}\,\mathrm{d}Z_{s}.
\end{equation*}%
Now this does not in general imply that $(\sigma _{t})$ has the same form;
therefore we shall replace (H0) by the more general structure given by the
following assumption.
\begin{assumption}
\textbf{(H1)}: $(\sigma _{t})$ can be written, for $t\geq 0$, as%
\begin{equation*}
\begin{array}{lll}
\sigma _{t} & = & \displaystyle\sigma _{0}+\int_{0}^{t}a_{s}^{\ast }\,%
\mathrm{d}s+\int_{0}^{t}\sigma _{s}^{\ast }\,\mathrm{d}W_{s}+%
\int_{0}^{t}v_{s-}^{\ast }\,\mathrm{d}V_{s} \\
& & \displaystyle+\int_{0}^{t}\int_{E}q\circ \phi (s-,x)\,(\mu -\nu )(%
\mathrm{d}s\,\mathrm{d}x) \\
& & \displaystyle+\int_{0}^{t}\int_{E}\ \left\{ \phi (s-,x)-q\circ \phi
(s-,x)\right\} \,\mu (\mathrm{d}s\,\mathrm{d}x).%
\end{array}%
\end{equation*}%
Here $(a_{t}^{\ast }),\,(\sigma _{t}^{\ast })$ and $(v_{t}^{\ast })$ are as
in (H0) and $(V_{t})$ is another $((\mathcal{F}_{t}),P)$-Wiener process
independent of $(W_{t})$ while $q$ is a continuous truncation function on $%
\mathbf{R}$, i.e. a function with compact support coinciding with the
identity on a neighbourhood of $0$. Further $\mu $ is a Poisson random
measure on $(0,\infty )\times E$ independent of $(W_{t})$ and $(V_{t})$ with
intensity measure $\nu (\mathrm{d}s\,\mathrm{d}x)=\mathrm{d}s\otimes F(%
\mathrm{d}x)$, $F$ being a $\sigma $-finite mea\-sure on a measurable space $%
(E,\mathcal{E})$ and
\begin{equation*}
(\omega ,s,x)\mapsto \phi (\omega ,s,x)
\end{equation*}%
is a map from $\Omega \times \,[\,0,\infty )\times E$ into $\mathbf{R}$
which is $\mathcal{F}_{s}\otimes \mathcal{E}$ measurable in $(\omega ,x)$
for all $s$ and c\`{a}dl\`{a}g in $s$, satisfying furthermore that for some
sequence of stopping times $(S_{k})$ increasing to $+\infty $ we have for
all $k\geq 1$
\begin{equation*}
\int_{E}\left\{ 1\wedge \psi _{k}(x)^{2}\right\} \,F(\mathrm{d}x)<\infty ,
\end{equation*}%
where%
\begin{equation*}
\psi _{k}(x)=\sup_{\omega \in \Omega ,\,s<S_{k}(\omega )}|\phi (\omega
,s,x)|.
\end{equation*}
\end{assumption}
\begin{remark}
(H1) is weaker than (H0), and if $(\sigma _{t}^{2})$ satisfies (H1) then so
does $(\sigma _{t})$.\newline
\end{remark}
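(A sketch of the reason for the second claim, under the non-degeneracy of
(H2), which by localisation is all that is needed: on the set where $\sigma
_{t}^{2}$ is bounded away from $0$ the map $x\mapsto \sqrt{x}$ is $C^{2}$,
so It\^{o}'s formula applied to $\sigma _{t}=\sqrt{\sigma _{t}^{2}}$ again
produces a decomposition of the form (H1); the jumps transform as $\Delta
\sigma _{t}=\sqrt{\sigma _{t-}^{2}+\Delta \sigma _{t}^{2}}-\sigma _{t-}$,
and since $|\sqrt{x+h}-\sqrt{x}|\leq |h|/(2\sqrt{a})$ for $x,x+h\geq a>0$,
the square-integrability condition on $\psi _{k}$ is preserved.)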
Finally we shall also assume a non-degeneracy in the model.
\begin{assumption}
\textbf{(H2)}: $(\sigma _{t})$ satisfies
\begin{equation*}
0<\sigma _{t}^{2}(\omega )\ \text{for all}\ (t,\omega ).\vspace{1mm}
\end{equation*}
\end{assumption}
According to general stochastic analysis theory it is known that to prove
convergence in law of a sequence $(Z_{t}^{n})$ of c\`{a}dl\`{a}g processes
it suffices to prove the convergence of each of the stopped processes $%
(Z_{T_{k}\wedge t}^{n})$ for at least one sequence of stopping times $%
(T_{k}) $ increasing to $+\infty $. Applying this together with standard
localisation techniques (for details see \cite%
{BarndorffNielsenGraversenJacodPodolskyShephard(04shiryaev)}), we may assume
that the following more restrictive assumptions are satisfied.
\begin{assumption}
\textbf{(H1a)}: $(\sigma _{t})$ can be written as
\begin{equation*}
\sigma _{t}=\sigma _{0}+\int_{0}^{t}a_{s}^{\ast }\,ds+\int_{0}^{t}\sigma
_{s-}^{\ast }\,\mathrm{d}W_{s}+\int_{0}^{t}v_{s-}^{\ast }\,\mathrm{d}%
V_{s}+\int_{0}^{t}\int_{E}\phi (s-,x)(\mu -\nu )(\mathrm{d}s\,\mathrm{d}x)\
\ \ t\geq 0.\vspace{1mm}
\end{equation*}%
Here $(a_{t}^{\ast }),\,(\sigma _{t}^{\ast })$ and $(v_{t}^{\ast })$ are
real valued uniformly bounded c\`{a}dl\`{a}g $(\mathcal{F}_{t})$-adapted
proces\-ses; $(V_{t})$ is another $((\mathcal{F}_{t}),P)$-Wiener process
independent of $(W_{t})$. Further $\mu $ is a Poisson random measure on $%
(0,\infty )\times E$ independent of $(W_{t})$ and $(V_{t})$ with intensity
measure $\nu (\mathrm{d}s\,\mathrm{d}x)=\mathrm{d}s\otimes F(\mathrm{d}x)$, $%
F$ being a $\sigma $-finite mea\-sure on a measurable space $(E,\mathcal{E})$
and
\begin{equation*}
(\omega ,s,x)\mapsto \phi (\omega ,s,x)
\end{equation*}%
is a map from $\Omega \times \,[\,0,\infty )\times E$ into $\mathbf{R}$
which is $\mathcal{F}_{s}\otimes \mathcal{E}$ measurable in $(\omega ,x)$
for all $s$ and c\`{a}dl\`{a}g in $s$, satisfying furthermore
\begin{equation*}
\psi (x)=\sup_{\omega \in \Omega ,\,s\geq 0}|\phi (\omega ,s,x)|\leq
M<\infty \ \ \text{and}\ \ \int \psi (x)^{2}\,F(\mathrm{d}x)<\infty .\vspace{%
1mm}
\end{equation*}
\end{assumption}
Likewise, by a localisation argument, we may assume
\begin{assumption}
\textbf{(H2a)}: $(\sigma _{t})$ satisfies
\begin{equation*}
a<\sigma _{t}^{2}(\omega )<b\ \ \ \text{for all}\ (t,\omega )\ \text{for some%
}\ a,b\in (0,\infty ).\vspace{1mm}
\end{equation*}
\end{assumption}
Observe that under the more restricted assumptions $(Y_{t})$ is a continuous
martingale having moments of all orders and $(\sigma _{t})$ is represented
as a sum of three square integrable martingales plus a continuous process of
bounded variation. Furthermore, the increments of the increasing processes
corresponding to the three martingales and of the bounded variation process
are dominated by a constant times $\triangle t$, implying in particular that
\begin{equation}
\mathrm{E}\left[ \,\left\vert \sigma _{v}-\sigma _{u}\right\vert ^{2}\right]
\leq C\,(v-u),\ \ \ \ \text{for all}\ 0\leq u<v.\vspace{2mm} \label{8}
\end{equation}
\subsection{Main result \label{subsection:main result}}
As already mentioned, our aim is to show the following special version of
the general CLT-result given as Theorem \ref{TT3}.
\begin{theorem}
\vspace{2mm}Under assumptions (K), (H1a) and (H2a), there exists a Wiener
process $(B_{t})$ defined on some extension of $(\Omega ,\mathcal{F},(%
\mathcal{F}_{t}),P)$ and independent of $\mathcal{F}$ such that%
\begin{equation}
\left( \sqrt{n}\left( \,\ \frac{1}{n}\,\sum_{i=1}^{[nt]}g(\sqrt{n}%
\,\triangle _{i}^{n}Y)-\int_{0}^{t}\rho _{\sigma _{u}}(g)\,\mathrm{d}%
u\,\right) \right) \rightarrow \int_{0}^{t}\sqrt{\rho _{\sigma
_{u-}}(g^{2})-\rho _{\sigma _{u-}}(g)^{2}}\,\mathrm{d}B_{u}.
\label{Main result}
\end{equation}%
\end{theorem}
Introducing the notation%
\begin{equation*}
U_{t}(g)=\int_{0}^{t}\sqrt{\rho _{\sigma _{u-}}(g^{2})-\rho _{\sigma
_{u-}}(g)^{2}}\,\mathrm{d}B_{u}\ \ \ t\geq 0\vspace{1mm}
\end{equation*}%
we may reexpress (\ref{Main result}) as
\begin{equation}
\left( \sqrt{n}\,\left( X_{t}^{n}(g)-\int_{0}^{t}\rho _{\sigma _{u}}(g)\,\mathrm{d}%
u\right) \,\right) \rightarrow (U_{t}(g)). \label{Main result reform}
\end{equation}%
To prove this, introduce the set of variables $\{\beta
_{i}^{n}\,|\,i,\,n\geq 1\}$ given by
\begin{equation*}
\beta _{i}^{n}=\sqrt{n}\cdot \sigma _{\frac{i-1}{n}}\cdot \triangle
_{i}^{n}W,\ \ \ i,\,n\geq 1.
\end{equation*}
The $\beta _{i}^{n}$'s should be seen as approximations to $\sqrt{n}%
\,\triangle _{i}^{n}Y$. In fact, since
\begin{equation*}
\sqrt{n}\,\triangle _{i}^{n}Y-\beta _{i}^{n}=\sqrt{n}\,\int_{(i-1)/n}^{i/n}(%
\sigma _{s}-\sigma _{\frac{i-1}{n}})\,\mathrm{d}W_{s}
\end{equation*}%
and $(\sigma _{t})$ is uniformly bounded, a straightforward application of (%
\ref{8}) and the Burkholder-Davis-Gundy-inequalities (e.g. \cite[pp. 160-171]%
{RevuzYor(99)}) gives for every $p>0$ the following simple estimates.
\begin{equation}
\mathrm{E}\left[ \,|\sqrt{n}\,\triangle _{i}^{n}Y-\beta _{i}^{n}|^{p}\,|\,%
\mathcal{F}_{\frac{i-1}{n}}\right] \leq \frac{C_{p}}{n^{p\wedge 1}}
\end{equation}%
and
\begin{equation}
\mathrm{E}\left[ \,|\sqrt{n}\,\triangle _{i}^{n}Y|^{p}+|\beta
_{i}^{n}|^{p}\,|\,\mathcal{F}_{\frac{i-1}{n}}\right] \leq C_{p}\vspace{1mm}
\label{12}
\end{equation}%
for all $i,n\geq 1$. Observe furthermore that
\begin{equation*}
\mathrm{E}\left[ g(\beta _{i}^{n})\,|\,\mathcal{F}_{\frac{i-1}{n}}\right]
=\rho _{\sigma _{\frac{i-1}{n}}}(g),\ \ \ \text{for all}\ i,\,n\geq 1.%
\vspace{1mm}
\end{equation*}
Introduce for convenience, for each $t>0$ and $n\geq 1$, the shorthand
notation
\begin{equation*}
U_{t}^{n}(g)=\frac{1}{\sqrt{n}}\,\sum_{i=1}^{[nt]}\,\,\left\{ g(\sqrt{n}%
\,\triangle _{i}^{n}Y)-\mathrm{E}\left[ g(\sqrt{n}\,\triangle _{i}^{n}Y)\,|\,%
\mathcal{F}_{\frac{i-1}{n}}\right] \right\} \,
\end{equation*}%
and
\begin{equation*}
\tilde{U}_{t}^{n}(g)=\frac{1}{\sqrt{n}}\,\sum_{i=1}^{[nt]}\,\,\left\{
g(\beta _{i}^{n})-\rho _{\sigma _{\frac{i-1}{n}}}(g)\right\} =\frac{1}{\sqrt{%
n}}\,\sum_{i=1}^{[nt]}\,\,\left\{ g(\beta _{i}^{n})-\mathrm{E}\left[ g(\beta
_{i}^{n})\,|\,\mathcal{F}_{\frac{i-1}{n}}\right] \right\} \,.\vspace{1mm}
\end{equation*}%
The asymptotic behaviour of $(\tilde{U}_{t}^{n}(g))$ is well known. More
precisely, under the given assumptions (in fact much less is needed) we have
\begin{equation*}
(\tilde{U}_{t}^{n}(g))\rightarrow (U_{t}(g)).\vspace{1mm}
\end{equation*}%
This result is a rather straightforward consequence of \cite[Theorem IX.7.28]%
{JacodShiryaev(03)}. Thus, if $(U_{t}^{n}(g)-\tilde{U}_{t}^{n}(g))\overset{P}%
{\rightarrow }0$ we may deduce the following result.
\begin{theorem}
\label{theorem B}\ \ \emph{Let $(B_{t})$ and $(U_{t}(g))$ be as above. Then}
\begin{equation*}
(U_{t}^{n}(g))\rightarrow (U_{t}(g)).\vspace{1mm}
\end{equation*}
\end{theorem}
\noindent \textbf{Proof.}
As pointed out just above it is enough to prove that
\begin{equation*}
(U_{t}^{n}(g)-\tilde{U}_{t}^{n}(g))\overset{P}{\rightarrow }0.
\end{equation*}%
But for $t\geq 0$ and $n\geq 1$
\begin{equation*}
U_{t}^{n}(g)-\tilde{U}_{t}^{n}(g)=\sum_{i=1}^{[nt]}\,\left( \xi _{i}^{n}-%
\mathrm{E}\left[ \xi _{i}^{n}\,|\,\mathcal{F}_{\frac{i-1}{n}}\right] \right)
\end{equation*}%
where
\begin{equation*}
\xi _{i}^{n}=\frac{1}{\sqrt{n}}\left\{ g(\sqrt{n}\triangle
_{i}^{n}Y)-g(\beta _{i}^{n})\right\} ,\ \ \ i,n\geq 1.
\end{equation*}%
Thus we have to prove
\begin{equation*}
\left( \,\sum_{i=1}^{[nt]}\,\left\{ \xi _{i}^{n}-\mathrm{E}\left[ \xi
_{i}^{n}\,|\,\mathcal{F}_{\frac{i-1}{n}}\right] \right\} \right) \overset{P}{%
\rightarrow }0.
\end{equation*}%
But, as the left hand side of this relation is a sum of martingale
differences, this is implied by Doob's inequality (e.g. \cite[pp. 54-55]%
{RevuzYor(99)}) if for all $t>0$
\begin{equation*}
\sum_{i=1}^{[nt]}\,\mathrm{E}[(\xi _{i}^{n})^{2}]=\mathrm{E}%
[\,\sum_{i=1}^{[nt]}\,\mathrm{E}[(\xi _{i}^{n})^{2}\,|\,\mathcal{F}_{\frac{%
i-1}{n}}]\,]\rightarrow 0\ \ \ \text{as}\ n\rightarrow \infty .\vspace{1mm}
\end{equation*}%
Fix $t>0$. Using the Cauchy-Schwarz inequality and the
Burkholder-Davis-Gundy inequalities we have, for all $i,n\geq 1$,
\begin{eqnarray*}
\mathrm{E}\left[ (\xi _{i}^{n})^{2}\,|\,\mathcal{F}_{\frac{i-1}{n}}\right]
&=&\frac{1}{n}\,\mathrm{E}\left[ \left\{ g(\sqrt{n}\triangle
_{i}^{n}Y)-g(\beta _{i}^{n})\right\} ^{2}\,|\,%
\mathcal{F}_{\frac{i-1}{n}}\right] \\
&\leq &\frac{C}{n}\,\mathrm{E}\left[ \,(1+|\sqrt{n}\triangle
_{i}^{n}Y|^{p}+|\beta _{i}^{n}|^{p})^{2}\cdot (\sqrt{n}\triangle
_{i}^{n}Y-\beta _{i}^{n})^{2}\,|\,\mathcal{F}_{\frac{i-1}{n}}\right] \\
&\leq &\frac{C}{n}\,\sqrt{\mathrm{E}\left[ \,(1+|\sqrt{n}\triangle
_{i}^{n}Y|^{2p}+|\beta _{i}^{n}|^{2p})\,|\,\mathcal{F}_{\frac{i-1}{n}}\right]
}\cdot \sqrt{\mathrm{E}\left[ (\sqrt{n}\triangle _{i}^{n}Y-\beta
_{i}^{n})^{4}\,|\,\mathcal{F}_{\frac{i-1}{n}}\right] } \\
&\leq &C\,\sqrt{\mathrm{E}\left[ \,\left( \int_{(i-1)/n}^{i/n}\left( \sigma
_{u-}-\sigma _{\frac{i-1}{n}}\right) \,\mathrm{d}W_{u}\right) ^{4}\,|\,%
\mathcal{F}_{\frac{i-1}{n}}\right] } \\
&\leq &C\,\sqrt{\mathrm{E}\left[ \left( \int_{(i-1)/n}^{i/n}\left( \sigma
_{u-}-\sigma _{\frac{i-1}{n}}\right) ^{2}\,\mathrm{d}u\right) ^{2}\,|\,%
\mathcal{F}_{\frac{i-1}{n}}\right] }.
\end{eqnarray*}%
Thus
\begin{eqnarray*}
\sum_{i=1}^{[nt]}\,\mathrm{E}[(\xi _{i}^{n})^{2}] &\leq &Cn\,\frac{t}{n}%
\,\sum_{i=1}^{[nt]}\mathrm{E}\,\left[ \sqrt{\mathrm{E}\,\left[ \left(
\int_{(i-1)/n}^{i/n}\left( \sigma _{u-}-\sigma _{\frac{i-1}{n}}\right) ^{2}\,%
\mathrm{d}u\right) ^{2}\,|\,\mathcal{F}_{\frac{i-1}{n}}\right] }\right] \, \\
&\leq &C\,tn\,\sqrt{\frac{1}{n}\,\sum_{i=1}^{[nt]}\mathrm{E}\left[ \left(
\int_{(i-1)/n}^{i/n}\left( \sigma _{u-}-\sigma _{\frac{i-1}{n}}\right) ^{2}\,%
\mathrm{d}u\right) ^{2}\right] } \\
&\leq &Ctn\,\sqrt{\frac{1}{n^{2}}\,\sum_{i=1}^{[nt]}\mathrm{E}\left[
\,\int_{(i-1)/n}^{i/n}\left( \sigma _{u-}-\sigma _{\frac{i-1}{n}}\right)
^{4}\,\mathrm{d}u\right] \,} \\
&\leq &Ct\,\sqrt{\,\sum_{i=1}^{[nt]}\int_{(i-1)/n}^{i/n}\mathrm{E}\left[
\left( \sigma _{u-}-\sigma _{\frac{i-1}{n}}\right) ^{2}\,\right] \mathrm{d}%
u\,} \\
&\rightarrow &\,0\vspace{2mm},
\end{eqnarray*}%
as $n\rightarrow \infty $ by Lebesgue's Theorem and the boundedness of $%
(\sigma _{t})$.
\noindent $\square $
To prove the convergence (\ref{Main result reform}) it suffices, using
Theorem \ref{theorem B} above, to prove that
\begin{equation*}
\left( U_{t}^{n}(g)-\sqrt{n}\,\left\{ \,X_{t}^{n}(g)-\int_{0}^{t}\rho
_{\sigma _{u}}(g)\,\mathrm{d}u\right\} \,\right) \overset{P}{\rightarrow }0.%
\vspace{1mm}
\end{equation*}%
But as
\begin{equation*}
U_{t}^{n}(g)-\sqrt{n}\,X_{t}^{n}(g)=-\frac{1}{\sqrt{n}}\,\sum_{i=1}^{[nt]}%
\mathrm{E}\left[ \,g(\sqrt{n}\,\triangle _{i}^{n}Y)\,|\,\mathcal{F}_{\frac{%
i-1}{n}}\right]
\end{equation*}%
and, as is easily seen,
\begin{equation*}
\left( \sqrt{n}\,\int_{0}^{t}\rho _{\sigma _{u}}\,\left( g\right) \mathrm{d}%
u-\,\sum_{i=1}^{[nt]}\sqrt{n}\,\int_{(i-1)/n}^{i/n}\rho _{\sigma _{u}}(g)\,%
\mathrm{d}u\right) \overset{P}{\rightarrow }0,\vspace{2mm}
\end{equation*}%
the job is to prove that
\begin{equation*}
\,\sum_{i=1}^{[nt]}\eta _{i}^{n}\,\overset{P}{\rightarrow }0\ \ \ \text{for
all}\ t>0,
\end{equation*}%
where for $i,n\geq 1$
\begin{equation*}
\eta _{i}^{n}=\,\frac{1}{\sqrt{n}}\,\mathrm{E}\,\left[ g(\sqrt{n}\,\triangle
_{i}^{n}Y)\,|\,\mathcal{F}_{\frac{i-1}{n}}\right] \,-\sqrt{n}%
\,\int_{(i-1)/n}^{i/n}\rho _{\sigma _{u}}(g)\,\mathrm{d}u.\vspace{1mm}
\end{equation*}%
Fix $t>0$ and write, for all $i,n\geq 1$,
\begin{equation*}
\eta _{i}^{n}=\eta (1)_{i}^{n}+\eta (2)_{i}^{n}
\end{equation*}%
where
\begin{equation}
\eta (1)_{i}^{n}=\frac{1}{\sqrt{n}}\,\,\left\{ \mathrm{E}\left[ \,g(\sqrt{n}%
\,\triangle _{i}^{n}Y)\,|\,\mathcal{F}_{\frac{i-1}{n}}\right] -\rho _{\sigma
_{\frac{i-1}{n}}}(g)\,\right\}
\end{equation}%
and
\begin{equation}
\eta (2)_{i}^{n}=\sqrt{n}\,\int_{(i-1)/n}^{i/n}\left\{ \rho _{\sigma
_{u}}(g)-\rho _{\sigma _{\frac{i-1}{n}}}(g)\right\} \mathrm{d}u.\vspace{1mm}
\end{equation}
We will now separately prove
\begin{equation}
\,\eta (1)^{n}=\sum_{i=1}^{[nt]}\eta (1)_{i}^{n}\,\overset{P}{\rightarrow }0
\label{13a}
\end{equation}%
and%
\begin{equation}
\,\eta (2)^{n}=\,\sum_{i=1}^{[nt]}\eta (2)_{i}^{n}\,\overset{P}{\rightarrow }%
0\vspace{1mm}. \label{13b}
\end{equation}
\subsection{Some auxiliary estimates\label{subsection: intermediate limiting
results}}
In order to show (\ref{13a}) and (\ref{13b}) we need some refinements of the
estimate (\ref{8}) above. To state these we split up $(\sqrt{n}\,\triangle
_{i}^{n}Y-\beta _{i}^{n})$ into several terms. By definition
\begin{equation*}
\sqrt{n}\,\triangle _{i}^{n}Y-\beta _{i}^{n}=\sqrt{n}\,\int_{(i-1)/n}^{i/n}%
\,\left( \sigma _{u-}-\sigma _{\frac{i-1}{n}}\right) \,\mathrm{d}W_{u}%
\vspace{1mm}
\end{equation*}%
for all $i,n\geq 1$. Writing
\begin{equation*}
E_{n}=\{\,x\in E\,|\,\psi (x)>1/\sqrt{n}\,\}
\end{equation*}%
the difference $\sigma _{u}-\sigma _{\frac{i-1}{n}}$ equals
\begin{eqnarray*}
&&\int_{(i-1)/n}^{u}a_{s}^{\ast }\,ds+\int_{(i-1)/n}^{u}\sigma _{s-}^{\ast
}\,\mathrm{d}W_{s}+\int_{(i-1)/n}^{u}v_{s-}^{\ast }\,\mathrm{d}%
V_{s}+\int_{(i-1)/n}^{u}\int_{E}\phi (s-,x)\,(\mu -\nu )(\mathrm{d}s\,%
\mathrm{d}x)\vspace{1mm} \\
&=&\displaystyle\sum_{j=1}^{5}\xi (j)_{i}^{n}(u),
\end{eqnarray*}%
for $i,n\geq 1$ and $u\geq (i-1)/n$ where
\begin{eqnarray*}
\displaystyle\xi (1)_{i}^{n}(u) &=&\int_{(i-1)/n}^{u}a_{s}^{\ast }\,\mathrm{d%
}s+\int_{(i-1)/n}^{u}\,\left( \sigma _{s-}^{\ast }-\sigma _{\frac{i-1}{n}%
}^{\ast }\right) \,\mathrm{d}W_{s}+\int_{(i-1)/n}^{u}\,\left( v_{s-}^{\ast
}-v_{\frac{i-1}{n}}^{\ast }\right) \,\mathrm{d}V_{s} \\
\displaystyle\xi (2)_{i}^{n}(u) &=&\sigma _{\frac{i-1}{n}}^{\ast }\,\left(
W_{u}-W_{\frac{i-1}{n}}\right) +v_{\frac{i-1}{n}}^{\ast }\,\left( V_{u}-V_{%
\frac{i-1}{n}}\right) \\
\displaystyle\xi (3)_{i}^{n}(u) &=&\int_{(i-1)/n}^{u}\int_{E_{n}^{c}}\phi
(s-,x)\,(\mu -\nu )(\mathrm{d}s\,\mathrm{d}x) \\
\displaystyle\xi (4)_{i}^{n}(u) &=&\int_{(i-1)/n}^{u}\int_{E_{n}}\left\{
\phi (s-,x)-\phi \left( \frac{i-1}{n},x\right) \right\} \,(\mu -\nu )(%
\mathrm{d}s\,\mathrm{d}x) \\
\displaystyle\xi (5)_{i}^{n}(u) &=&\int_{(i-1)/n}^{u}\int_{E_{n}}\phi \left(
\frac{i-1}{n},x\right) \,(\mu -\nu )(\mathrm{d}s\,\mathrm{d}x)\vspace{1mm}
\end{eqnarray*}%
That is, for $i,n\geq 1$,
\begin{equation}
\sqrt{n}\,\triangle _{i}^{n}Y-\beta _{i}^{n}=\sum_{j=1}^{5}\xi (j)_{i}^{n}
\label{17}
\end{equation}%
where
\begin{equation*}
\ \xi (j)_{i}^{n}=\sqrt{n}\,\int_{(i-1)/n}^{i/n}\,\xi (j)_{i}^{n}(u-)\,%
\mathrm{d}W_{u}\ \ \ \ \mbox{\rm for}\ j=1,2,3,4,5.\vspace{1mm}
\end{equation*}%
The specific form of the variables implies, using Burkholder-Davis-Gundy
inequalities, that for every $q\geq 2$ we have
\begin{eqnarray*}
\mathrm{E}[\,|\xi (j)_{i}^{n}|^{q}\,] &\leq &C_{q}\,n^{q/2}\,\mathrm{E}\,%
\left[ \left( \int_{(i-1)/n}^{i/n}\,\xi (j)_{i}^{n}(u)^{2}\,\mathrm{d}%
u\right) ^{q/2}\right] \\
&\leq &\displaystyle n\int_{(i-1)/n}^{i/n}\,\mathrm{E}[\,|\xi
(j)_{i}^{n}(u)|^{q}\,]\mathrm{\,d}u \\
&\leq &\displaystyle\sup_{(i-1)/n\leq u\leq i/n}\,\mathrm{E}[\,|\xi
(j)_{i}^{n}(u)|^{q}\,]
\end{eqnarray*}%
for all $i,n\geq 1$ and all $j$. These terms will now be estimated. This is
done in the following series of lemmas where $i$ and $n$ are arbitrary and
we use the notation
\begin{equation*}
d_{i}^{n}=\int_{(i-1)/n}^{i/n}\mathrm{E}\,\left[ \left( \sigma _{s-}^{\ast
}-\sigma _{\frac{i-1}{n}}^{\ast }\right) ^{2}+\left( v_{s-}^{\ast }-v_{\frac{%
i-1}{n}}^{\ast }\right) ^{2}+\int_{E}\left\{ \phi (s-,x)-\phi \left( \frac{%
i-1}{n},x\right) \right\} ^{2}\,F(\mathrm{d}x)\right] \mathrm{d}s.\vspace{1mm%
}
\end{equation*}
\begin{lemma}
\label{lemma 1st}
\begin{equation*}
\mathrm{E}[\,(\xi (1)_{i}^{n})^{2}]\leq C_{1}\cdot (1/n^{2}+d_{i}^{n}).
\end{equation*}
\end{lemma}
\begin{lemma}
\label{lemma 2nd}%
\begin{equation*}
\mathrm{E}[\,(\xi (2)_{i}^{n})^{2}]\leq C_{2}/n.
\end{equation*}
\end{lemma}
\begin{lemma}
\begin{equation*}
\mathrm{E}[\,(\xi (3)_{i}^{n})^{2}]\leq C_{3}\,\varphi (1/\sqrt{n})/n,
\end{equation*}%
where%
\begin{equation*}
\varphi (\epsilon )=\int_{\{\,\psi \leq \epsilon \,\}}\psi (x)^{2}\,F(%
\mathrm{d}x).
\end{equation*}
\end{lemma}
\begin{lemma}
\begin{equation*}
\mathrm{E}[\,(\xi (4)_{i}^{n})^{2}]\leq C_{4}\,d_{i}^{n}.
\end{equation*}
\end{lemma}
\begin{lemma}
\label{lemma 5th}
\begin{equation*}
\mathrm{E}[\,(\xi (5)_{i}^{n})^{2}]\leq C_{5}/n.
\end{equation*}
\end{lemma}
The proofs of these five Lemmas rely on straightforward martingale
inequalities.
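For instance, for Lemma \ref{lemma 2nd}: since $\sigma _{\frac{i-1}{n}%
}^{\ast }$ and $v_{\frac{i-1}{n}}^{\ast }$ are $\mathcal{F}_{\frac{i-1}{n}}$%
-measurable and uniformly bounded, while the increments of $W$ and $V$ are
independent of $\mathcal{F}_{\frac{i-1}{n}}$ and of each other,
\begin{equation*}
\mathrm{E}[\,\xi (2)_{i}^{n}(u)^{2}]=\mathrm{E}\left[ (\sigma _{\frac{i-1}{n}%
}^{\ast })^{2}+(v_{\frac{i-1}{n}}^{\ast })^{2}\right] \left( u-\frac{i-1}{n}%
\right) \leq C/n,
\end{equation*}%
and inserting this into the estimate above gives $\mathrm{E}[\,(\xi
(2)_{i}^{n})^{2}]\leq C_{2}/n$.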
Observe that Lebesgue's Theorem ensures, since the processes involved are
assumed c\`{a}dl\`{a}g and uniformly bounded, that as $n\rightarrow \infty $
\begin{equation*}
\sum_{i=1}^{[nt]}d_{i}^{n}\,\rightarrow 0\ \ \ \ \text{for all}\ t>0.\vspace{%
1mm}
\end{equation*}
Taken together these statements imply the following result.
\begin{corollary}
\noindent \emph{For\ all\ $t>0$ as }$n\rightarrow \infty $\emph{\ }
\begin{equation*}
\sum_{i=1}^{[nt]}\,\left\{ \mathrm{E}[\,(\xi (1)_{i}^{n})^{2}]+\mathrm{E}%
[\,(\xi (3)_{i}^{n})^{2}]+\mathrm{E}[\,(\xi (4)_{i}^{n})^{2}]\right\}
\,\rightarrow 0.\vspace{1mm}
\end{equation*}
\end{corollary}
Below we shall invoke this Corollary as well as Lemmas \ref{lemma 2nd} and %
\ref{lemma 5th}.\ \newline
\subsection{Proof of $\,\protect\eta (2)^{n}\protect\overset{P}{\rightarrow }%
0$ \label{subsection:13b}}
Recall we wish to show that
\begin{equation}
\,\eta (2)^{n}=\sum_{i=1}^{[nt]}\eta (2)_{i}^{n}\,\overset{P}{\rightarrow }0.
\label{eqn 7}
\end{equation}%
From now on let $t>0$ be fixed. We split the $\eta (2)_{i}^{n}$'s according
to
\begin{equation*}
\eta (2)_{i}^{n}=\eta ^{\prime }(2)_{i}^{n}+\eta ^{\prime \prime
}(2)_{i}^{n}\ \ \ \ i,n\geq 1
\end{equation*}%
where, writing $\Phi (x)$ for $\rho _{x}(g)$,
\begin{equation*}
\eta ^{\prime }(2)_{i}^{n}=\sqrt{n}\ \Phi ^{\prime }\left( \sigma _{\frac{i-1%
}{n}}\right) \int_{(i-1)/n}^{i/n}\left( \sigma _{u}-\sigma _{\frac{i-1}{n}%
}\right) \,\mathrm{d}u
\end{equation*}%
and
\begin{equation*}
\eta ^{\prime \prime }(2)_{i}^{n}=\sqrt{n}\,\int_{(i-1)/n}^{i/n}\,\,\left\{
\Phi (\sigma _{u})-\Phi \left( \sigma _{\frac{i-1}{n}}\right) -\Phi ^{\prime
}\left( \sigma _{\frac{i-1}{n}}\right) \cdot \left( \sigma _{u}-\sigma _{%
\frac{i-1}{n}}\right) \right\} \,\,\mathrm{d}u.
\end{equation*}%
Observe that the assumptions on $g$ imply that $x\mapsto \Phi (x)$ is
differentiable with a bounded derivative on any bounded interval not
including $0$; in particular\thinspace (see (H2a))
\begin{equation}
|\,\Phi (x)-\Phi (y)-\Phi ^{\prime }(y)\cdot (x-y)\,|\leq \Psi (|x-y|)\cdot
|x-y|,\ \ \ x^{2},y^{2}\in (a,b), \label{eqn 8}
\end{equation}%
where $\Psi :\mathbf{R}_{+}\rightarrow \mathbf{R}_{+}$ is continuous,
increasing and $\Psi (0)=0$. \vspace{1mm}
With this notation we shall prove (\ref{eqn 7}) by showing%
\begin{equation*}
\,\sum_{i=1}^{[nt]}\eta ^{\prime }(2)_{i}^{n}\,\overset{P}{\rightarrow }0
\end{equation*}%
and
\begin{equation*}
\ \,\sum_{i=1}^{[nt]}\eta ^{\prime \prime }(2)_{i}^{n}\,\overset{P}{%
\rightarrow }0.\vspace{1mm}
\end{equation*}%
Inserting the description of $(\sigma _{t})$\thinspace (see (H1a)) we may
write
\begin{equation*}
\eta ^{\prime }(2)_{i}^{n}=\eta ^{\prime }(2,1)_{i}^{n}+\eta ^{\prime
}(2,2)_{i}^{n}
\end{equation*}%
where for all $i,n\geq 1$
\begin{equation*}
\eta ^{\prime }(2,1)_{i}^{n}=\sqrt{n}\ \Phi ^{\prime }\left( \sigma _{\frac{%
i-1}{n}}\right) \int_{(i-1)/n}^{i/n}\left( \int_{(i-1)/n}^{u}a_{s}^{\ast }\,%
\mathrm{d}s\right) \,\,\mathrm{d}u
\end{equation*}%
and
\begin{eqnarray*}
\eta ^{\prime }(2,2)_{i}^{n} &=&\displaystyle\sqrt{n}\ \Phi ^{\prime }\left(
\sigma _{\frac{i-1}{n}}\right) \int_{(i-1)/n}^{i/n}\,\left[
\int_{(i-1)/n}^{u}\,\sigma _{s-}^{\ast }\,\mathrm{d}W_{s}+\int_{(i-1)/n}^{u}%
\,v_{s-}^{\ast }\,\mathrm{d}V_{s}\right. \, \\
&&+\displaystyle\left. \int_{(i-1)/n}^{u}\int_{E}\phi (s-,x)\,(\mu -\nu )(\mathrm{d}s\,\mathrm{%
d}x)\right] \mathrm{d}u.\vspace{1mm}
\end{eqnarray*}%
By (H2a) and (\ref{eqn 8}) and the uniform boundedness of $(a_{t}^{\ast })$
we have
\begin{equation*}
|\eta ^{\prime }(2,1)_{i}^{n}|\leq C\,\sqrt{n}\,\int_{(i-1)/n}^{i/n}\left\{
u-(i-1)/n\right\} \,\mathrm{d}u\leq C/n^{3/2}
\end{equation*}%
for all $i,n\geq 1$ and thus
\begin{equation*}
\,\sum_{i=1}^{[nt]}\eta ^{\prime }(2,1)_{i}^{n}\,\overset{P}{\rightarrow }0.%
\vspace{1mm}
\end{equation*}%
Since
\begin{equation*}
(W_{t}),\ (V_{t})\ \text{and}\ \left( \int_{0}^{t}\int_{E}\phi (s-,x)(\mu
-\nu )(\mathrm{d}s\,\mathrm{d}x)\right) \vspace{1mm}
\end{equation*}%
are all martingales we have
\begin{equation*}
\mathrm{E}\left[ \eta ^{\prime }(2,2)_{i}^{n}\,|\,\mathcal{F}_{\frac{i-1}{n}}%
\right] =0\ \ \ \text{for all}\quad i,n\geq 1.\vspace{1mm}
\end{equation*}
By Doob's inequality it therefore suffices to estimate
\begin{equation*}
\sum_{i=1}^{[nt]}\,\mathrm{E}[\,(\eta ^{\prime }(2,2)_{i}^{n})^{2}].\vspace{%
1mm}
\end{equation*}%
Inserting again the description of $(\sigma _{t})$ we find, applying simple
inequalities, in particular Jensen's, that
\begin{eqnarray*}
&&(\eta ^{\prime }(2,2)_{i}^{n})^{2} \\
&\leq &\displaystyle C\,n\,\left( \int_{(i-1)/n}^{i/n}\left\{
\int_{(i-1)/n}^{u}\,\sigma _{s-}^{\ast }\,\mathrm{d}W_{s}\right\} \,\mathrm{d%
}u\right) ^{2}+C\,n\,\left( \,\int_{(i-1)/n}^{i/n}\left\{
\int_{(i-1)/n}^{u}\,v_{s-}^{\ast }\,\mathrm{d}V_{s}\right\} \,\mathrm{d}%
u\right) ^{2} \\
&&\displaystyle+C\,n\,\left( \,\int_{(i-1)/n}^{i/n}\int_{(i-1)/n}^{u}\left\{
\int_{E}\phi (s-,x)\,(\mu -\nu )(\mathrm{d}s\,\mathrm{d}x)\right\} \mathrm{d}%
u\right) ^{2} \\
&\leq &\displaystyle C\,\int_{(i-1)/n}^{i/n}\left(
\,\int_{(i-1)/n}^{u}\,\sigma _{s-}^{\ast }\,\mathrm{d}W_{s}\,\right) ^{2}\,%
\mathrm{d}u+C\,\int_{(i-1)/n}^{i/n}\left( \,\int_{(i-1)/n}^{u}\,v_{s-}^{\ast
}\,\mathrm{d}V_{s}\,\right) ^{2}\,\mathrm{d}u \\
&&\displaystyle+C\,\int_{(i-1)/n}^{i/n}\left(
\,\int_{(i-1)/n}^{u}\int_{E}\phi (s-,x)\,(\mu -\nu )(\mathrm{d}s\,\mathrm{d}%
x)\,\right) ^{2}\,\mathrm{d}u.\vspace{1mm}
\end{eqnarray*}%
The properties of the Wiener integrals and the uniform boundedness of $%
(\sigma _{t}^{\ast })$ and $(v_{t}^{\ast })$ ensure that
\begin{equation*}
\mathrm{E}\left[ \left( \,\int_{(i-1)/n}^{u}\,\sigma _{s-}^{\ast }\,\mathrm{d%
}W_{s}\,\right) ^{2}\,|\,\mathcal{F}_{\frac{i-1}{n}}\right] \leq C\cdot
\left( u-\frac{i-1}{n}\right)
\end{equation*}%
and likewise
\begin{equation*}
\mathrm{E}\left[ \left( \,\int_{(i-1)/n}^{u}\,v_{s-}^{\ast }\,\mathrm{d}%
V_{s}\,\right) ^{2}\,|\,\mathcal{F}_{\frac{i-1}{n}}\right] \leq C\cdot
\left( u-\frac{i-1}{n}\right) \vspace{1mm}
\end{equation*}%
for all $i,n\geq 1$. Likewise for the Poisson part we have
\begin{eqnarray*}
&&\displaystyle\mathrm{E}\left[ \left( \,\int_{(i-1)/n}^{u}\int_{E}\phi
(s-,x)\,(\mu -\nu )(\mathrm{d}s\,\mathrm{d}x)\,\right) ^{2}\,|\,\mathcal{F}_{%
\frac{i-1}{n}}\right] \\
&\leq &\displaystyle C\int_{(i-1)/n}^{u}\int_{E}\mathrm{E}[\phi
^{2}(s,x)\,|\,\mathcal{F}_{\frac{i-1}{n}}]\,F(\mathrm{d}x)\,\mathrm{d}s%
\vspace{1mm}
\end{eqnarray*}%
yielding a similar bound. Putting all this together we have for all $i,n\geq
1$
\begin{eqnarray*}
\mathrm{E}[\,(\eta ^{\prime }(2,2)_{i}^{n})^{2}\,|\,\mathcal{F}_{\frac{i-1}{n%
}}] &\leq &C\,\int_{(i-1)/n}^{i/n}(u-(i-1)/n)\,\mathrm{d}u \\
&\leq &C/n^{2}.
\end{eqnarray*}%
Thus, as $n\rightarrow \infty $,
\begin{equation*}
\sum_{i=1}^{[nt]}\mathrm{E}[\,(\eta ^{\prime }(2,2)_{i}^{n})^{2}]\rightarrow
0,\vspace{1mm}
\end{equation*}%
and since
\begin{equation*}
\mathrm{E}\left[ \eta ^{\prime }(2,2)_{i}^{n}\,|\,\mathcal{F}_{\frac{i-1}{n}}%
\right] =0\ \ \ \ \text{for all}\quad i,n\geq 1\vspace{1mm}
\end{equation*}%
we deduce from Doob's inequality that
\begin{equation*}
\,\sum_{i=1}^{[nt]}\eta ^{\prime }(2,2)_{i}^{n}\,\overset{P}{\rightarrow }0
\end{equation*}%
proving\ altogether
\begin{equation*}
\,\sum_{i=1}^{[nt]}\eta ^{\prime }(2)_{i}^{n}\,\overset{P}{\rightarrow }0.%
\vspace{1mm}
\end{equation*}%
Applying once more (H2a) and (\ref{eqn 8}) we have for every $\epsilon >0$
and every $i,n$ that
\begin{eqnarray*}
|\eta ^{\prime \prime }(2)_{i}^{n}| &\leq &\sqrt{n}\int_{(i-1)/n}^{i/n}\,%
\Psi \left( \left\vert \sigma _{u}-\sigma _{\frac{i-1}{n}}\right\vert
\right) \cdot \left\vert \sigma _{u}-\sigma _{\frac{i-1}{n}}\right\vert \,%
\mathrm{d}u \\
&\leq &\sqrt{n}\,\Psi (\epsilon )\int_{(i-1)/n}^{i/n}\,\left\vert \sigma
_{u}-\sigma _{\frac{i-1}{n}}\right\vert \,\mathrm{d}u+\sqrt{n}\,\Psi (2\sqrt{%
b})/\epsilon \int_{(i-1)/n}^{i/n}\,\left\vert \sigma _{u}-\sigma _{\frac{i-1%
}{n}}\right\vert ^{2}\,\mathrm{d}u.\vspace{1mm}
\end{eqnarray*}%
Thus from (\ref{8}) and its consequence
\begin{equation*}
\mathrm{E}\,\left[ \left\vert \sigma _{u}-\sigma _{\frac{i-1}{n}}\right\vert
\,\right] \leq C/\sqrt{n}
\end{equation*}%
we get
\begin{equation*}
\sum_{i=1}^{[nt]}\mathrm{E}[\,|\eta ^{\prime \prime }(2)_{i}^{n}|\,]\leq
Ct\,\Psi (\epsilon )+\frac{Ct\,\Psi (2\sqrt{b})}{\sqrt{n}\,\epsilon }
\end{equation*}%
for all $n$ and all $\epsilon $. Letting here first $n\rightarrow \infty $
and then $\epsilon \rightarrow 0$ we may conclude that as $n\rightarrow
\infty $
\begin{equation*}
\sum_{i=1}^{[nt]}\mathrm{E}[\,|\eta ^{\prime \prime
}(2)_{i}^{n}|\,]\rightarrow 0\
\end{equation*}%
implying the convergence
\begin{equation*}
\,\sum_{i=1}^{[nt]}\eta (2)_{i}^{n}\,\overset{P}{\rightarrow }0.\vspace{1mm}
\end{equation*}%
This ends the proof of (\ref{13b}).
\noindent $\square $
\subsection{Proof of $\protect\eta (1)^{n}\protect\overset{P}{\rightarrow }0$
\label{subsection:proof of 13a}}
Recall that we are to show that
\begin{equation}
\eta (1)^{n}=\,\sum_{i=1}^{[nt]}\eta (1)_{i}^{n}\,\overset{P}{\rightarrow }0.
\end{equation}%
As before, let $t>0$ be fixed. Recall that
\begin{eqnarray*}
\eta (1)_{i}^{n} &=&\frac{1}{\sqrt{n}}\,\left\{ \mathrm{E}\left[ \,g(\sqrt{n}%
\,\triangle _{i}^{n}Y)\,|\,\mathcal{F}_{\frac{i-1}{n}}\right] \,-\rho
_{\sigma _{\frac{i-1}{n}}}(g)\right\} \\
&=&\frac{1}{\sqrt{n}}\,\mathrm{E}\,\left[ g(\sqrt{n}\,\triangle
_{i}^{n}Y)-g(\beta _{i}^{n})\,|\,\mathcal{F}_{\frac{i-1}{n}}\right] .\vspace{%
1mm}
\end{eqnarray*}%
Introduce the notation\thinspace (recall the assumption (K))
\begin{equation*}
A_{i}^{n}=\{\,|\sqrt{n}\,\triangle _{i}^{n}Y-\beta _{i}^{n}|>\,d(\beta
_{i}^{n},B)/2\,\}.\vspace{1mm}
\end{equation*}%
Since $B$ is a Lebesgue null set and $\beta _{i}^{n}$ is absolutely
continuous, $g^{\prime }(\beta _{i}^{n})$ is defined a.s.\ and, by
assumption, $g$ is differentiable on the interval joining $\triangle
_{i}^{n}Y(\omega )$ and $\beta _{i}^{n}(\omega )$ for all $\omega \in
A_{i}^{n\,c}$. Thus, using the Mean Value Theorem, we may for all $i,n\geq 1$
write
\begin{eqnarray*}
&&g(\sqrt{n}\,\triangle _{i}^{n}Y)-g(\beta _{i}^{n}) \\
&=&\left\{ g(\sqrt{n}\,\triangle _{i}^{n}Y)-g(\beta _{i}^{n})\right\} \cdot
\mathbf{1}_{A_{i}^{n}} \\
&&+g^{\prime }(\beta _{i}^{n})\cdot (\sqrt{n}\,\triangle _{i}^{n}Y-\beta
_{i}^{n})\cdot \mathbf{1}_{A_{i}^{n\,c}} \\
&&+\left\{ g^{\prime }(\alpha _{i}^{n})-g^{\prime }(\beta _{i}^{n})\right\}
\cdot (\sqrt{n}\,\triangle _{i}^{n}Y-\beta _{i}^{n})\cdot \mathbf{1}%
_{A_{i}^{n\,c}} \\
&=&\sqrt{n}\,\left\{ \delta (1)_{i}^{n}+\delta (2)_{i}^{n}+\delta
(3)_{i}^{n}\right\} ,\vspace{1mm}
\end{eqnarray*}%
where $\alpha _{i}^{n}$ are random points lying in between $\sqrt{n}%
\,\triangle _{i}^{n}Y$ and $\beta _{i}^{n}$, i.e.
\begin{equation*}
\sqrt{n}\,\triangle _{i}^{n}Y\wedge \beta _{i}^{n}\leq \alpha _{i}^{n}\leq
\sqrt{n}\,\triangle _{i}^{n}Y\vee \beta _{i}^{n},
\end{equation*}%
and%
\begin{equation*}
\begin{array}{lll}
\delta (1)_{i}^{n} & = & \left[ \,\left\{ g(\sqrt{n}\,\triangle
_{i}^{n}Y)-g(\beta _{i}^{n})\right\} -g^{\prime }(\beta _{i}^{n})\cdot (%
\sqrt{n}\,\triangle _{i}^{n}Y-\beta _{i}^{n})\,\right] \cdot \mathbf{1}%
_{A_{i}^{n}}/\sqrt{n} \\
\delta (2)_{i}^{n} & = & \left\{ g^{\prime }(\alpha _{i}^{n})-g^{\prime
}(\beta _{i}^{n})\right\} \cdot (\sqrt{n}\,\triangle _{i}^{n}Y-\beta
_{i}^{n})\cdot \mathbf{1}_{A_{i}^{n\,c}}/\sqrt{n} \\
\delta (3)_{i}^{n} & = & g^{\prime }(\beta _{i}^{n})\cdot (\sqrt{n}%
\,\triangle _{i}^{n}Y-\beta _{i}^{n})/\sqrt{n}.%
\end{array}%
\end{equation*}%
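One checks directly that this decomposition is consistent: on $A_{i}^{n}$
the term $\delta (2)_{i}^{n}$ vanishes and $\delta (1)_{i}^{n}+\delta
(3)_{i}^{n}=\{g(\sqrt{n}\,\triangle _{i}^{n}Y)-g(\beta _{i}^{n})\}/\sqrt{n}$,
while on $A_{i}^{n\,c}$ the term $\delta (1)_{i}^{n}$ vanishes and the Mean
Value Theorem gives $\delta (2)_{i}^{n}+\delta (3)_{i}^{n}=g^{\prime
}(\alpha _{i}^{n})\cdot (\sqrt{n}\,\triangle _{i}^{n}Y-\beta
_{i}^{n})/\sqrt{n}=\{g(\sqrt{n}\,\triangle _{i}^{n}Y)-g(\beta
_{i}^{n})\}/\sqrt{n}$.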
Thus it suffices to prove
\begin{equation*}
\,\sum_{i=1}^{[nt]}\mathrm{E}\,\left[ \delta (k)_{i}^{n}\,|\,\mathcal{F}_{%
\frac{i-1}{n}}\right] \,\,\overset{P}{\rightarrow }0,\ \ \ k=1,2,3.
\end{equation*}
Consider the case $k=1$. Using (K) and the fact that $\beta _{i}^{n}$ is
absolutely continuous we have a.s.%
\begin{eqnarray*}
&&|g(\sqrt{n}\,\triangle _{i}^{n}Y)-g(\beta _{i}^{n})| \\
&\leq &M(1+|\sqrt{n}\,\triangle _{i}^{n}Y-\beta _{i}^{n}|^{p}+|\beta
_{i}^{n}|^{p})\cdot |\sqrt{n}\,\triangle _{i}^{n}Y-\beta _{i}^{n}| \\
&\leq &(2^{p}+1)M(1+|\sqrt{n}\,\triangle _{i}^{n}Y|^{p}+|\beta
_{i}^{n}|^{p})\cdot |\sqrt{n}\,\triangle _{i}^{n}Y-\beta _{i}^{n}|,
\end{eqnarray*}%
and
\begin{equation*}
|\,g^{\prime }(\beta _{i}^{n})\cdot (\sqrt{n}\,\triangle _{i}^{n}Y-\beta
_{i}^{n})\,|\leq M(1+|\beta _{i}^{n}|^{p})\cdot |\sqrt{n}\,\triangle
_{i}^{n}Y-\beta _{i}^{n}|.\vspace{1mm}
\end{equation*}%
By H\"{o}lder's inequality $\mathrm{E}[\,|\delta (1)_{i}^{n}|\,]$ is
therefore for all $i,n\geq 1$ less than
\begin{equation*}
C\cdot \mathrm{E}[\,1+|\sqrt{n}\,\triangle _{i}^{n}Y|^{3p}+|\beta
_{i}^{n}|^{3p}]^{1/3}\cdot \mathrm{E}[\,(\sqrt{n}\,\triangle _{i}^{n}Y-\beta
_{i}^{n})^{2}/n\,]^{1/2}\cdot P(A_{i}^{n})^{1/6}\vspace{1mm}
\end{equation*}%
implying for fixed $t,$ by means of (\ref{2}), that
\begin{eqnarray*}
\mathrm{E}\left[ \sum_{i=1}^{[nt]}|\,\delta (1)_{i}^{n}|\right] \, &\leq
&C\cdot \sup_{i\geq 1}P(A_{i}^{n})^{1/6}\,\sum_{i=1}^{[nt]}\mathrm{E}%
[\,(\sqrt{n}\,\triangle _{i}^{n}Y-\beta _{i}^{n})^{2}/n\,]^{1/2}\vspace{1mm} \\
&\leq &C\cdot \sup_{i\geq 1}P(A_{i}^{n})^{1/6}\,\sum_{i=1}^{[nt]}1/n \\
&\leq &Ct\cdot \sup_{i\geq 1}P(A_{i}^{n})^{1/6}.\vspace{1mm}
\end{eqnarray*}%
For all $i,n\geq 1$ we have for every $\epsilon >0$
\begin{eqnarray*}
P(A_{i}^{n}) &\leq &P(A_{i}^{n}\cap \{d(\beta _{i}^{n},B)\leq \epsilon
\})+P(A_{i}^{n}\cap \{d(\beta _{i}^{n},B)>\epsilon \})\vspace{1mm} \\
&\leq &P(d(\beta _{i}^{n},B)\leq \epsilon )+P(|\sqrt{n}\,\triangle
_{i}^{n}Y-\beta _{i}^{n}|>\epsilon /2)\vspace{1mm} \\
&\leq &P(d(\beta _{i}^{n},B)\leq \epsilon )+\frac{4}{\epsilon ^{2}}\cdot
\mathrm{E}[\,(\sqrt{n}\,\triangle _{i}^{n}Y-\beta _{i}^{n})^{2}]\vspace{1mm}
\\
&\leq &P(d(\beta _{i}^{n},B)\leq \epsilon )+\frac{C}{n\,\epsilon ^{2}}.
\end{eqnarray*}%
But (H2a) implies that the densities of $\beta _{i}^{n}$ are pointwise
dominated by a Lebesgue integrable function $h_{a,b}$ providing, for all $%
i,n\geq 1$, the estimate
\begin{eqnarray}
P(A_{i}^{n}) &\leq &\int_{\{x\,|\,d(x,B)\leq \epsilon \}}h_{a,b}\,\mathrm{d}%
\lambda _{1}+\frac{C}{n\,\epsilon ^{2}} \label{eqn 10} \\
&=&\alpha _{\epsilon }+\frac{C}{n\,\epsilon ^{2}}.\vspace{1mm} \notag
\end{eqnarray}%
Observe $\lim_{\epsilon \rightarrow 0}\alpha _{\epsilon }=0$. Taking now in (%
\ref{eqn 10}) $\sup $ over $i$ and then letting first $n\rightarrow \infty $
and then $\epsilon \downarrow 0$ we get
\begin{equation*}
\lim_{n}\,\sup_{i\geq 1}\,P(A_{i}^{n})=0
\end{equation*}%
proving that
\begin{equation*}
\mathrm{E}\left[ \,\sum_{i=1}^{[nt]}|\,\delta (1)_{i}^{n}|\right]
\,\rightarrow 0
\end{equation*}%
and\ thus
\begin{equation*}
\,\sum_{i=1}^{[nt]}\mathrm{E}\left[ \,\delta (1)_{i}^{n}\,|\,\mathcal{F}_{%
\frac{i-1}{n}}\right] \,\overset{P}{\rightarrow }0.\vspace{1mm}
\end{equation*}
Consider next the case $k=2$. As assumed in (K), $g$ is continuously
differentiable outside of $B$. Thus for each $A>1$ and $\epsilon >0$ there
exists a function $G_{A,\,\epsilon }:(0,1)\rightarrow \mathbf{R}_{+}$ such
that for given $0<\epsilon ^{\prime }<\epsilon /2$
\begin{equation*}
\left\vert g^{\prime }(x+y)-g^{\prime }(x)\right\vert \leq G_{A,\,\epsilon
}(\epsilon ^{\prime })\ \ \text{for all}\ |x|\leq A,\ |y|\leq \epsilon
^{\prime }<\epsilon <d(x,B).\vspace{1mm}
\end{equation*}%
Observe that $\lim_{\epsilon ^{\prime }\downarrow 0}G_{A,\,\epsilon
}(\epsilon ^{\prime })=0$ for all $A$ and $\epsilon $.\vspace{1mm} Fix $A>1$
and $\epsilon \in (0,1)$. For all $i,n\geq 1$ we have
\begin{eqnarray*}
&&|g^{\prime }(\alpha _{i}^{n})-g^{\prime }(\beta _{i}^{n})|\cdot \mathbf{1}%
_{A_{i}^{n\,c}} \\
&=&\displaystyle|g^{\prime }(\alpha _{i}^{n})-g^{\prime }(\beta
_{i}^{n})|\cdot \mathbf{1}_{A_{i}^{n\,c}}\,(\mathbf{1}_{\{|\alpha
_{i}^{n}|+|\beta _{i}^{n}|>A\}}+\mathbf{1}_{\{|\alpha _{i}^{n}|+|\beta
_{i}^{n}|\leq A\}})\vspace{1mm} \\
&\leq &\displaystyle|g^{\prime }(\alpha _{i}^{n})-g^{\prime }(\beta
_{i}^{n})|\cdot \frac{|\alpha _{i}^{n}|+|\beta _{i}^{n}|}{A}+|g^{\prime
}(\alpha _{i}^{n})-g^{\prime }(\beta _{i}^{n})|\cdot \mathbf{1}%
_{A_{i}^{n\,c}\,\cap \,\{|\alpha _{i}^{n}|+|\beta _{i}^{n}|\leq A\}} \\
&\leq &\displaystyle\frac{C}{A}\cdot (1+|\alpha _{i}^{n}|^{p}+|\beta
_{i}^{n}|^{p})^{2}+|g^{\prime }(\alpha _{i}^{n})-g^{\prime }(\beta
_{i}^{n})|\cdot \mathbf{1}_{A_{i}^{n\,c}\,\cap \,\{|\alpha _{i}^{n}|+|\beta
_{i}^{n}|\leq A\}} \\
&\leq &\displaystyle\frac{C}{A}\cdot (1+|\sqrt{n}\,\triangle
_{i}^{n}Y|^{2p}+|\beta _{i}^{n}|^{2p})+|g^{\prime }(\alpha
_{i}^{n})-g^{\prime }(\beta _{i}^{n})|\cdot \mathbf{1}_{A_{i}^{n\,c}\,\cap
\,\{|\alpha _{i}^{n}|+|\beta _{i}^{n}|\leq A\}}.\vspace{1mm}
\end{eqnarray*}%
Now writing
\begin{eqnarray*}
1 &=&\mathbf{1}_{\{d(\beta _{i}^{n},B)\leq \epsilon \}}+\mathbf{1}%
_{\{d(\beta _{i}^{n},B)>\epsilon \}}\vspace{1mm} \\
&=&\mathbf{1}_{\{d(\beta _{i}^{n},B)\leq \epsilon \}} \\
&&+\mathbf{1}_{\{d(\beta _{i}^{n},B)>\epsilon \}\,\cap \,\{|\alpha
_{i}^{n}-\beta _{i}^{n}|\leq \epsilon ^{\prime }\}} \\
&&+\mathbf{1}_{\{d(\beta _{i}^{n},B)>\epsilon \}\,\cap \,\{|\alpha
_{i}^{n}-\beta _{i}^{n}|>\epsilon ^{\prime }\}}
\end{eqnarray*}%
for all $0<\epsilon ^{\prime }<\epsilon /2$ we have
\begin{eqnarray*}
\mathbf{1}_{A_{i}^{n\,c}\,\cap \,\{|\alpha _{i}^{n}|+|\beta _{i}^{n}|\leq
A\}} &\leq &\mathbf{1}_{\{d(\beta _{i}^{n},B)\leq \epsilon \}\,\cap
\,A_{i}^{n\,c}\,\cap \,\{|\alpha _{i}^{n}|+|\beta _{i}^{n}|\leq A\}} \\
&&+\mathbf{1}_{A_{i}^{n\,c}\,\cap \,\{|\alpha _{i}^{n}|+|\beta _{i}^{n}|\leq
A\}\,\cap \,\{d(\beta _{i}^{n},B)>\epsilon \}\,\cap \,\{|\alpha
_{i}^{n}-\beta _{i}^{n}|\leq \epsilon ^{\prime }\}} \\
&&+\mathbf{1}_{A_{i}^{n\,c}\,\cap \,\{|\alpha _{i}^{n}|+|\beta _{i}^{n}|\leq
A\}\,\cap \,\{d(\beta _{i}^{n},B)>\epsilon \}}\cdot \frac{|\alpha
_{i}^{n}-\beta _{i}^{n}|}{\epsilon ^{\prime }}.
\end{eqnarray*}%
Combining this with the fact that
\begin{eqnarray*}
|g^{\prime }(\alpha _{i}^{n})-g^{\prime }(\beta _{i}^{n})| &\leq
&C(1+|\alpha _{i}^{n}|^{p}+|\beta _{i}^{n}|^{p}) \\
&\leq &CA^{p}
\end{eqnarray*}%
on $A_{i}^{n\,c}\,\cap \,\{|\alpha _{i}^{n}|+|\beta _{i}^{n}|\leq A\}$ we
obtain that
\begin{eqnarray*}
&&|g^{\prime }(\alpha _{i}^{n})-g^{\prime }(\beta _{i}^{n})|\cdot \mathbf{1}%
_{A_{i}^{n\,c}\,\cap \,\{|\alpha _{i}^{n}|+|\beta _{i}^{n}|\leq A\}} \\
&\leq &CA^{p}\cdot \left( \,\mathbf{1}_{\{d(\beta _{i}^{n},B)\leq \epsilon
\}}+\frac{|\alpha _{i}^{n}-\beta _{i}^{n}|}{\epsilon ^{\prime }}\right)
\,+G_{A,\,\epsilon }(\epsilon ^{\prime })\vspace{1mm} \\
&\leq &CA^{p}\cdot (\,\mathbf{1}_{\{d(\beta _{i}^{n},B)\leq \epsilon \}}+%
\frac{|\sqrt{n}\,\triangle _{i}^{n}Y-\beta _{i}^{n}|}{\epsilon ^{\prime }}%
\,)+G_{A,\,\epsilon }(\epsilon ^{\prime }).\vspace{1mm}
\end{eqnarray*}
Putting this together means that
\begin{eqnarray*}
\sqrt{n}\,|\delta (2)_{i}^{n}| &=&|g^{\prime }(\alpha _{i}^{n})-g^{\prime
}(\beta _{i}^{n})|\cdot |\sqrt{n}\,\triangle _{i}^{n}Y-\beta _{i}^{n}|\cdot
\mathbf{1}_{A_{i}^{n\,c}} \\
&\leq &\left\{ \frac{C}{A}\cdot (1+|\sqrt{n}\,\triangle
_{i}^{n}Y|^{2p}+|\beta _{i}^{n}|^{2p})+G_{A,\,\epsilon }(\epsilon ^{\prime
})\right\} \cdot |\sqrt{n}\,\triangle _{i}^{n}Y-\beta _{i}^{n}|\vspace{1mm}
\\
&&+\,CA^{p}\cdot \left( \mathbf{1}_{\{d(\beta _{i}^{n},B)\leq \epsilon
\}}\cdot |\sqrt{n}\,\triangle _{i}^{n}Y-\beta _{i}^{n}|+\frac{|\sqrt{n}%
\,\triangle _{i}^{n}Y-\beta _{i}^{n}|^{2}}{\epsilon ^{\prime }}\right) .%
\vspace{1mm}
\end{eqnarray*}%
Exploiting here the inequalities (\ref{2}) and (\ref{3}) we obtain, for all $%
A>1$ and $0<2\epsilon ^{\prime }<\epsilon <1$ and all $i,n\geq 1$, using H%
\"{o}lder's inequality, the following estimate
\begin{equation*}
\mathrm{E}[\,|\delta (2)_{i}^{n}|\,]\leq C\left( \frac{1}{A\,n}+\frac{%
G_{A,\,\epsilon }(\epsilon ^{\prime })}{n}+\frac{A^{p}\,\sqrt{\alpha
_{\epsilon }}}{n}+\frac{A^{p}}{\epsilon ^{\prime }\,n^{3/2}}\right) \vspace{%
1mm}
\end{equation*}%
implying for all $n\geq 1$ and $t\geq 0$ that
\begin{equation*}
\sum_{i=1}^{[nt]}\mathrm{E}[\,|\delta (2)_{i}^{n}|\,]\leq Ct\left( \frac{1}{A%
}+G_{A,\,\epsilon }(\epsilon ^{\prime })+A^{p}\,\sqrt{\alpha _{\epsilon }}+%
\frac{A^{p}}{\epsilon ^{\prime }\,n^{1/2}}\right) .\vspace{1mm}
\end{equation*}%
Choosing in this estimate first $A$ sufficiently big, then $\epsilon $
small\thinspace (recall that $\lim_{\epsilon \rightarrow 0}\alpha _{\epsilon
}=0$\thinspace ) and finally $\epsilon ^{\prime }$ small, exploiting that $%
\lim_{\epsilon ^{\prime }\downarrow 0}G_{A,\,\epsilon }(\epsilon ^{\prime
})=0$ for all $A$ and $\epsilon $, we may conclude that
\begin{equation*}
\lim_{n}\,\sum_{i=1}^{[nt]}\mathrm{E}\left[ \,|\delta (2)_{i}^{n}|\,\right]
=0
\end{equation*}%
and thus
\begin{equation*}
\sum_{i=1}^{[nt]}\mathrm{E}\left[ \,\delta (2)_{i}^{n}\,|\,\mathcal{F}_{%
\frac{i-1}{n}}\right] \,\overset{P}{\rightarrow }0.
\end{equation*}
So what remains to be proved is the convergence
\begin{equation*}
\,\sum_{i=1}^{[nt]}\mathrm{E}\,\left[ \delta (3)_{i}^{n}\,|\,\mathcal{F}_{%
\frac{i-1}{n}}\right] \,\,\overset{P}{\rightarrow }0.
\end{equation*}%
As introduced in (\ref{17})
\begin{equation*}
\sqrt{n}\,\triangle _{i}^{n}Y-\beta _{i}^{n}=\sum_{j=1}^{5}\xi
(j)_{i}^{n}=\psi (1)_{i}^{n}+\psi (2)_{i}^{n}
\end{equation*}%
for all $i,n\geq 1$ where
\begin{equation*}
\psi (1)_{i}^{n}=\xi (1)_{i}^{n}+\xi (3)_{i}^{n}+\xi (4)_{i}^{n},
\end{equation*}%
\begin{equation*}
\psi (2)_{i}^{n}=\xi (2)_{i}^{n}+\xi (5)_{i}^{n},
\end{equation*}%
and as
\begin{equation*}
\delta (3)_{i}^{n}=g^{\prime }(\beta _{i}^{n})\cdot (\psi (1)_{i}^{n}+\psi
(2)_{i}^{n})/\sqrt{n}\vspace{1mm}
\end{equation*}%
it suffices to prove
\begin{equation*}
\left( \,\sum_{i=1}^{[nt]}\mathrm{E}\left[ \,g^{\prime }(\beta
_{i}^{n})\cdot \psi (k)_{i}^{n}\,|\,\mathcal{F}_{\frac{i-1}{n}}\right] \,\,/%
\sqrt{n}\,\right) \overset{P}{\rightarrow }0,\ \ \ k=1,2.\vspace{1mm}
\end{equation*}
The case $k=1$ is handled by proving
\begin{equation}
\frac{1}{\sqrt{n}}\,\sum_{i=1}^{[nt]}\mathrm{E}[\,|g^{\prime }(\beta
_{i}^{n})\cdot \xi (j)_{i}^{n}|\,]\rightarrow 0,\ \ \ j=1,3,4.\vspace{1mm}
\label{eqn 11}
\end{equation}%
Using the Cauchy--Schwarz inequality it is easily seen that for $j=1,3,4$
\begin{equation*}
\frac{1}{\sqrt{n}}\,\sum_{i=1}^{[nt]}\mathrm{E}[\,|g^{\prime }(\beta
_{i}^{n})\cdot \xi (j)_{i}^{n}|\,]\leq \sqrt{\frac{1}{n}%
\,\sum_{i=1}^{[nt]}\mathrm{E}[\,g^{\prime }(\beta _{i}^{n})^{2}]}\,\cdot \,%
\sqrt{\sum_{i=1}^{[nt]}\mathrm{E}[\,(\xi (j)_{i}^{n})^{2}]}
\end{equation*}%
and so using (\ref{12})
\begin{equation*}
\frac{1}{\sqrt{n}}\,\sum_{i=1}^{[nt]}\mathrm{E}[\,|g^{\prime }(\beta
_{i}^{n})\cdot \xi (j)_{i}^{n}|\,]\leq C\,\sqrt{t}\cdot \,\sqrt{\sum_{i=1}^{[nt]}%
\mathrm{E}[\,(\xi (j)_{i}^{n})^{2}]}
\end{equation*}%
since almost surely
\begin{equation*}
|g^{\prime }(\beta _{i}^{n})|\leq C\,(1+|\beta _{i}^{n}|^{p})
\end{equation*}%
for all $i,n\geq 1$. From here, (\ref{eqn 11}) is an immediate consequence
of Lemmas \ref{lemma 1st}-\ref{lemma 5th}.\vspace{1mm}
The remaining case $k=2$ is different. The definition of $\psi (2)_{i}^{n}$
implies, using basic stochastic calculus, that $\psi (2)_{i}^{n}/\sqrt{n}$,
for all $i,n\geq 1$, may be written as
\begin{eqnarray*}
&&\int_{(i-1)/n}^{i/n}\left\{ \sigma _{\frac{i-1}{n}}^{\prime }\,\left(
W_{u}-W_{\frac{i-1}{n}}\right) +M(n,i)_{u}\right\} \,\mathrm{d}W_{u} \\
&=&\sigma _{\frac{i-1}{n}}^{\prime }\,\int_{(i-1)/n}^{i/n}\left( W_{u}-W_{%
\frac{i-1}{n}}\right) \,\mathrm{d}W_{u} \\
&&+\triangle _{i}^{n}M(n,i)\cdot \triangle _{i}^{n}W \\
&&-\int_{(i-1)/n}^{i/n}\left( W_{u}-W_{\frac{i-1}{n}}\right) \,\mathrm{d}%
M(n,i)_{u},
\end{eqnarray*}%
where $(M(n,i)_{t})$ is the martingale defined by $M(n,i)_{t}\equiv 0$ for $%
t\leq (i-1)/n$ and
\begin{equation*}
M(n,i)_{t}=v_{\frac{i-1}{n}}^{\ast }\,\left( V_{t}-V_{\frac{i-1}{n}}\right)
+\int_{(i-1)/n}^{t}\int_{E_{n}}\phi \left( \frac{i-1}{n},x\right) (\mu -\nu
)(\mathrm{d}s\,\mathrm{d}x)\vspace{1mm}
\end{equation*}%
otherwise.
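This rewriting is simply the integration-by-parts formula: given $\mathcal{F}%
_{\frac{i-1}{n}}$, the martingale $(M(n,i)_{t})$ is independent of $\left(
W_{t}-W_{\frac{i-1}{n}}\right) _{t\geq \frac{i-1}{n}}$ (see below), so the
covariation $[M(n,i),W]$ vanishes, and since $M(n,i)_{\frac{i-1}{n}}=0$
\begin{equation*}
\int_{(i-1)/n}^{i/n}M(n,i)_{u}\,\mathrm{d}W_{u}=\triangle _{i}^{n}M(n,i)\cdot
\triangle _{i}^{n}W-\int_{(i-1)/n}^{i/n}\left( W_{u}-W_{\frac{i-1}{n}}\right)
\,\mathrm{d}M(n,i)_{u}.
\end{equation*}%
Thus for fixed $i,n\geq 1$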
\begin{equation*}
\mathrm{E}\left[ \,g^{\prime }(\beta _{i}^{n})\cdot \psi (2)_{i}^{n}\,|\,%
\mathcal{F}_{\frac{i-1}{n}}\right] \,\,/\sqrt{n}
\end{equation*}%
is a linear combination of the following three terms
\begin{equation*}
\mathrm{E}\left[ g^{\prime }(\beta _{i}^{n})\cdot \sigma _{\frac{i-1}{n}%
}^{\prime }\,\int_{(i-1)/n}^{i/n}\left( W_{u}-W_{\frac{i-1}{n}}\right) \,%
\mathrm{d}W_{u}\,|\,\mathcal{F}_{\frac{i-1}{n}}\right] \,,
\end{equation*}%
\begin{equation*}
\mathrm{E}\left[ g^{\prime }(\beta _{i}^{n})\cdot \triangle
_{i}^{n}M(n,i)\cdot \triangle _{i}^{n}W\,|\,\mathcal{F}_{\frac{i-1}{n}}%
\right] \,
\end{equation*}%
and%
\begin{equation*}
\mathrm{E}\left[ \,g^{\prime }(\beta _{i}^{n})\cdot \int_{(i-1)/n}^{i/n}\left( W_{u}-W_{\frac{i-1}{n}}\right) \,%
\mathrm{d}M(n,i)_{u}\,|\,\mathcal{F}_{\frac{i-1}{n}}\right] .
\end{equation*}%
But these three terms are all equal to $0$ as seen by the following
arguments.\vspace{1mm}
The conditional distribution of
\begin{equation*}
\left( W_{t}-W_{\frac{i-1}{n}}\right) _{t\geq \frac{i-1}{n}}|\mathcal{F}_{%
\frac{i-1}{n}}
\end{equation*}%
is clearly not affected by a change of sign. Thus, since $g$ is assumed to be
even and $g^{\prime }$ is therefore odd, we have
\begin{equation*}
\mathrm{E}\left[ \,g^{\prime }(\beta _{i}^{n})\,\int_{(i-1)/n}^{i/n}\left(
W_{u}-W_{\frac{i-1}{n}}\right) \,\mathrm{d}W_{u}\,|\,\mathcal{F}_{\frac{i-1}{%
n}}\,\right] =0
\end{equation*}%
implying the vanishing of the first term. \vspace{1mm}
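Spelled out: under the reflection $\left( W_{t}-W_{\frac{i-1}{n}}\right)
\mapsto -\left( W_{t}-W_{\frac{i-1}{n}}\right) $ the variable $\beta
_{i}^{n}$ changes sign (recall that it is an odd function of these
increments), hence so does $g^{\prime }(\beta _{i}^{n})$, whereas
\begin{equation*}
\int_{(i-1)/n}^{i/n}\left( W_{u}-W_{\frac{i-1}{n}}\right) \,\mathrm{d}%
W_{u}=\frac{1}{2}\left\{ (\triangle _{i}^{n}W)^{2}-\frac{1}{n}\right\}
\end{equation*}%
is invariant; the conditional expectation therefore equals its own negative
and must vanish.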
Secondly, by assumption, $\left( W_{t}-W_{\frac{i-1}{n}}\right) _{t\geq
\frac{i-1}{n}}$ and $(M(n,i)_{t})_{t\geq \frac{i-1}{n}}$ are independent
given $\mathcal{F}_{\frac{i-1}{n}}$. Therefore, denoting by $\mathcal{F}%
_{i,n}^{\,0}$ the $\sigma $-field generated by
\begin{equation*}
\left( W_{t}-W_{\frac{i-1}{n}}\right) _{\frac{i-1}{n}\leq t\leq i/n}\ \ \
\text{and}\ \ \ \mathcal{F}_{\frac{i-1}{n}},
\end{equation*}%
the martingale property of $(M(n,i)_{t})$ ensures that
\begin{equation*}
\mathrm{E}[\,g^{\prime }(\beta _{i}^{n})\cdot \triangle _{i}^{n}M(n,i)\cdot
\triangle _{i}^{n}W\,|\,\mathcal{F}_{i,n}^{\,0}\,]=0\
\end{equation*}%
and%
\begin{equation*}
\mathrm{E}\left[ g^{\prime }(\beta _{i}^{n})\cdot
\int_{(i-1)/n}^{i/n}\left( W_{u}-W_{\frac{i-1}{n}}\right) \,\mathrm{d}M(n,i)_{u}\,|\,\mathcal{F}_{i,n}^{\,0}%
\right] =0.
\end{equation*}%
Using this the vanishing of
\begin{equation*}
\mathrm{E}\left[ \,g^{\prime }(\beta _{i}^{n})\cdot \triangle
_{i}^{n}M(n,i)\cdot \triangle _{i}^{n}W\,|\,\mathcal{F}_{\frac{i-1}{n}}%
\right]
\end{equation*}%
and%
\begin{equation*}
\mathrm{E}\left[ \,g^{\prime }(\beta _{i}^{n})\cdot
\int_{(i-1)/n}^{i/n}\left( W_{u}-W_{\frac{i-1}{n}}\right) \,\mathrm{d}M(n,i)_{u}\,|\,\mathcal{F}_{\frac{i-1}{n%
}}\right] \,\vspace{1mm}
\end{equation*}%
is easily obtained by successive conditioning.\vspace{1mm}
The proof of (\ref{13a}) is hereby completed.
\noindent $\square $
\bibliographystyle{chicago}
\section{Introduction}
Matrix factorisations are a beautiful mathematical subject in the
sense that they are easy to define and still exhibit many interesting
structures. Furthermore they can be applied in physics, where
they describe boundary conditions and defects in $N=2$ supersymmetric
Landau-Ginzburg (LG) models (see e.g.\ \cite{Jockers:2007ng} for an overview).
In the simplest setting, a matrix factorisation consists of two square
matrices $p^{0}$ and $p^{1}$ of the same size with polynomial entries whose product is
the identity matrix multiplied by a given potential
$W\in\mathbb{C}[x_{1},\dotsc ,x_{n}]$,
\begin{equation}
p^{0}\cdot p^{1} = W\cdot \mathbbm{1} \ , \quad p^{1}\cdot p^{0} = W\cdot
\mathbbm{1}\ .
\end{equation}
An example of that is given by
\begin{equation}
x^{\ell}\cdot x^{k-\ell} = x^{k} \ ,
\end{equation}
where $p^{0}$ and $p^{1}$ are just polynomials ($1\times 1$
matrices). This example describes B-type boundary conditions in
$N=2$ minimal models~\cite{Kapustin:2002bi,Brunner:2003dc}.
In addition to boundary conditions one can also consider B-type
defects between Landau-Ginzburg models with superpotentials $W$ and
$W'$. They can be described by matrix factorisations of the difference
$W-W'$~\cite{Kapustin:2004df,Khovanov:2004,Brunner:2007qu}. Defects
are very important and useful objects in two-dimensional field theory:
one of their most crucial properties is that they can be fused by
bringing them on top of each other to produce a new defect
\cite{Petkova:2000ip,Brunner:2007qu}. In such a way, defects define an
interesting algebraic structure that turns out to be useful in
analysing symmetries and dualities (see e.g.\ \cite{Frohlich:2006ch}),
and bulk and boundary renormalisation group flows (see e.g.\
\cite{Graham:2003nc,Bachas:2004sy,Brunner:2007ur,Fredenhagen:2009tn})
in such models. As defects can also be fused onto boundaries, they may
be used to relate or to generate boundary conditions. In particular,
if we know defects between different theories, we can generate
boundary conditions in one model from boundary conditions in the other
model by fusion of the defect.
In this work we will analyse defects between LG models with potentials
$W$ and $W'$ that are related by a variable transformation. If these
transformations are non-linear, the two physical theories will be
different. We will see that in such a situation there is one natural
defect that acts in a simple, but non-trivial way on matrix
factorisations. After analysing its properties we will apply it in a
number of examples. In particular we demonstrate how it can be used to
generate matrix factorisations in Kazama-Suzuki models from those in
minimal models.
\section{Variable transformations via defects}
A B-type defect separating two $N=2$ supersymmetric Landau-Ginzburg
models with superpotentials $W$ and $W'$, respectively, can be
described by a matrix factorisation of the difference $W-W'$ of the
potentials \cite{Brunner:2007qu,Carqueville:2009ev}. To be more precise, let $R$ and $R'$ be polynomial
algebras over $\mathbb{C}$, and $W\in R$, $W'\in R'$. A
$(W,W')$-defect matrix factorisation is then a pair $({}_{R}M_{R'},Q)$
where ${}_{R}M_{R'}={}_{R}M_{R'}^{0}\oplus {}_{R}M_{R'}^{1}$ is a
free, $\mathbb{Z}/2\mathbb{Z}$ graded $R$-$R'$-bimodule, and $Q$ is an
odd bimodule map,
\begin{equation}
Q= \begin{pmatrix}
0 & p^{1}\\
p^{0} & 0
\end{pmatrix} \ ,
\end{equation}
such that $Q^{2} =W\cdot \text{id}_{M} - \text{id}_{M}\cdot W'$. As
$M$ is assumed to be free, $Q$ can be written as a matrix with
polynomial entries. A B-type boundary condition is a special defect,
for which one side is trivial, e.g.\ $R'=\mathbb{C}$, $W'=0$.
Morphisms between defects $({}_{R}M_{R'},Q)$ and
$({}_{R}\tilde{M}_{R'},\tilde{Q})$ are bimodule maps $\varphi:M \to
\tilde{M}$ with $\tilde{Q}\circ \varphi = \varphi\circ Q$ modulo exact
maps of the form $\tilde{Q}\circ \psi + \psi \circ Q$. Matrix
factorisations are considered to be equivalent if there exist two
morphisms $\phi :M\to \tilde{M}$ and $\psi :\tilde{M}\to M$ such that
$\phi \circ \psi$ and $\psi \circ \phi$ equal the identity map up to
exact terms. Consider e.g.\ $({}_{R}M_{R},Q)$ and $({}_{R}M_{R},S\circ
Q\circ S^{-1})$ for an even isomorphism $S:M\to M$. These
factorisations are then equivalent with the morphisms being $\phi =S$
and $\psi =S^{-1}$. When we write the $\mathbb{Z}/2\mathbb{Z}$
gradation explicitly, the action of $S=\left(\begin{smallmatrix} s^{0}&0\\
0 & s^{1}\end{smallmatrix}\right)$ on $p^{0}$ and $p^{1}$ amounts to
similarity transformations,
\begin{equation}
p^{0} \mapsto s^{1}p^{0} (s^{0})^{-1} \ , \quad p^{1}\mapsto
s^{0}p^{1} (s^{1})^{-1} \ .
\end{equation}
\smallskip
One of the most interesting properties of defects is that they can be
fused. Physically this means that two defects can be put on top of
each other producing a new
defect~\cite{Petkova:2000ip,Brunner:2007qu}. Mathematically this
amounts to define the tensor product~\cite{Yoshino:1998,Khovanov:2004} of two matrix
factorisations $({}_{R}M_{R'},Q)$ and
$({}_{R'}\tilde{M}_{R''},\tilde{Q})$. As a module this is simply the
graded tensor product
\begin{equation}
M\otimes \tilde{M} = \left(M^{0}\otimes_{R'}\tilde{M}^{0}\oplus
M^{1}\otimes_{R'}\tilde{M}^{1} \right) \oplus \left(M^{1}\otimes_{R'}\tilde{M}^{0}\oplus
M^{0}\otimes_{R'}\tilde{M}^{1} \right) \ ,
\end{equation}
and the associated module map is
\begin{equation}
Q\hat{\otimes} \tilde{Q} := \begin{pmatrix}
0 & \begin{matrix}
p^{1}\otimes \text{id} & \text{id} \otimes \tilde{p}^{1}\\
-\text{id} \otimes \tilde{p}^{0} & p^{0}\otimes \text{id}
\end{matrix}\\
\begin{matrix}
p^{0}\otimes \text{id} & -\text{id} \otimes \tilde{p}^{1}\\
\text{id} \otimes \tilde{p}^{0} & p^{1} \otimes \text{id}
\end{matrix} & 0
\end{pmatrix}\ .
\end{equation}
For $R'=R$ and $W'=W$, there is a special defect called the identity
defect, which we denote by $({_R}I_{R},{}_{W}\mathcal{I}_{W})$.
Fusing the identity defect onto some defect reproduces the original
defect, it serves therefore as a unit object with respect to the
tensor product. Its precise construction can be found
in~\cite{Khovanov:2004,Kapustin:2004df,Carqueville:2009ev}.
\smallskip
For different superpotentials $W\in R$ and $W'\in S$ there is in general no natural
defect factorisation. On the other hand, if there exists a ring
homomorphism
\begin{equation}
\phi :R\to S \ , \quad \text{such that}\ \phi (W)=W' \ ,
\end{equation}
then we can naturally map $R$-modules to $S$-modules and vice versa by
extension or restriction of scalars: via the homomorphism $\phi$ the
ring $S$ has a natural $R$-$S$-bimodule structure, ${}_{R}S_{S}$, where
the multiplication from the left is defined via the homomorphism
$\phi$. Given a right $R$-module $M_{R}$ we can then map it to a right
$S$-module by
\begin{equation}
\phi^{*}:M_{R} \mapsto (M_{R})\otimes_{R} ({}_{R}S_{S}) \ ,
\end{equation}
which describes the extension of scalars from $R$ to $S$.
On the other hand, a left $S$-module ${}_{S}\tilde{M}$ has a natural
$R$-module structure using the homomorphism $\phi$. This restriction
of scalars from $S$ to $R$ can be written as the map
\begin{equation}
\phi_{*}: {}_{S}\tilde{M} \mapsto ({}_{R}S_{S})\otimes_{S} ({}_{S}\tilde{M}) \ .
\end{equation}
$\phi^{*}$ and $\phi_{*}$ act also on module homomorphisms in an
obvious way, so they define functors on the categories of $R$- and $S$-modules.
Notice that $\phi^{*}$ maps free modules to free modules, whereas this
is not guaranteed for $\phi_{*}$. We assume in the following that the
$R$-module ${}_{R}S$ is free, such that $\phi_{*}$ maps free modules to
free modules.
We can apply these functors also to matrix factorisations. In
particular we can apply them to the identity factorisations
$({}_{R}I_{R},{}_{W}\mathcal{I}_{W})$ and $({}_{S}I_{S},{}_{W'}\mathcal{I}_{W'})$ to
obtain two $(W,W')$-defects with $W'=\phi (W)$,
\begin{equation}
({}_{R}I^{A}_{S},{}_{W}\mathcal{I}_{W'}^{A}) = (\phi^{*} ({}_{R}I_{R}),\phi^{*}
({}_{W}\mathcal{I}_{W})) \ , \quad
({}_{R}I^{B}_{S},{}_{W}\mathcal{I}_{W'}^{B}) = (\phi_{*} ({}_{S}I_{S}),\phi_{*}
({}_{W'}\mathcal{I}_{W'})) \ .
\end{equation}
We now claim that these two defects are actually equivalent. To show
this we take the first defect and fuse the identity defect ${}_{S}I_{S}$ from the
right, and compare it to the second defect onto which we fuse the identity
defect ${}_{R}I_{R}$ from the left. As a module we obtain
\begin{equation}
({}_{R}I^{A}_{S}) \otimes_{S} ({}_{S}I_{S}) \cong ({}_{R}I_{R})\otimes_{R}
({}_{R}S_{S}) \otimes_{S} ({}_{S}I_{S}) \cong ({}_{R}I_{R}) \otimes_{R} ({}_{R}I^{B}_{S}) \ .
\end{equation}
Since
\begin{equation}
\left({}_{W}\mathcal{I}_{W} \otimes_{R} \text{id}_{S} \right) \hat{\otimes}
\left({}_{W'}\mathcal{I}_{W'} \right) =
\left({}_{W}\mathcal{I}_{W} \right) \hat{\otimes} \left(\text{id}_{S}
\otimes_{S}\, {}_{W'}\mathcal{I}_{W'} \right) \ ,
\end{equation}
also the factorisations agree, so that we indeed find that
these two defects are equivalent. We call them
$({}_{R}I_{S},{}_{W}\mathcal{I}_{W'})$.
By a similar consideration as above we see that when we fuse
$({}_{R}I_{S},{}_{W}\mathcal{I}_{W'})$ to the left, it acts by the functor $\phi
^{*}$, whereas it acts by the functor $\phi_{*}$ when we fuse it to
defects to the right. Thus we have a very simple description for the
fusion result for this defect. Analogously we can construct the defect
$({}_{S}I_{R},{}_{W'}\mathcal{I}_{W})$.
\smallskip
Let us explicitly describe how the defect
$({}_{R}I_{S},{}_{W}\mathcal{I}_{W'})$ acts by fusion.
First consider the (simpler) fusion to the left on a defect
$({}_{R'}M_{R},Q)$. For a rank $2m$ free $R'$-$R$-bimodule
${}_{R'}M_{R}$ we can think of $Q$ as a $2m\times 2m$ matrix with
entries $Q_{ij}$ in $R'\otimes_{\mathbb{C}}R$. Fusing
$({}_{R}I_{S},{}_{W}\mathcal{I}_{W'})$ onto this defect from the right, we obtain a
free $R'$-$S$-module of rank $2m$, and a matrix $\tilde{Q}$ with entries
$\tilde{Q}_{ij}= (\text{id}\otimes \phi) (Q_{ij})$, i.e.\ we just replace
the variables of $R$ by the variables of $S$ via the map $\phi$.
We now assume that ${}_{R}S$ is a finite rank free $R$-module,
\begin{equation}
\rho : R^{\oplus n} \xrightarrow{\sim} {}_{R}S \ .
\end{equation}
With the help of the $R$-module isomorphism $\rho$ we can then explicitly describe
how the defect $({}_{R}I_{S},{}_{W}\mathcal{I}_{W'})$ acts by fusion to the right
on a defect $({}_{S}M_{S'},Q)$. If ${}_{S}M_{S'}$ is free of rank
$2m$, then $Q$ can be represented as a $2m\times 2m$ matrix with entries $Q_{ij}\in
S\otimes_{\mathbb{C}}S'$. After the fusion we have a $R$-$S'$-module of
rank $2mn$, and each entry $Q_{ij}$ is replaced by the $n\times n$
matrix that represents the map $\rho^{-1} \circ Q_{ij}\circ \rho$
(where we tacitly extend $\rho$ to mean $\rho \otimes \text{id}_{S'}$).
A particular situation occurs when all $Q_{ij}$ are of the form $\phi
(\tilde{Q}_{ij})$. As $\rho$ is an $R$-module map, the map $\rho^{-1} \circ
Q_{ij}\circ \rho$ can then be represented by the $n\times n$ matrix
$\tilde{Q}_{ij}\cdot \mathbbm{1}_{n\times n}$. The resulting defect is
therefore a direct sum of $n$ identical defects. As an example,
consider the fusion of $({}_{R}I_{S},{}_{W}\mathcal{I}_{W'})$ on
$({}_{S}I_{R},{}_{W'}\mathcal{I}_{W})$. By the arguments above this
fusion results in a direct sum of $n$ identity defects,
\begin{equation}\label{fundrelation}
({}_{R}I_{S},{}_{W}\mathcal{I}_{W'})\otimes({}_{S}I_{R},{}_{W'}\mathcal{I}_{W})
\cong ({}_{R}I_{R},{}_{W}\mathcal{I}_{W})^{\oplus n} \ .
\end{equation}
\smallskip
In the special case that $\phi$ is a ring isomorphism, $\phi :R\to R$,
and $W'=\phi (W)=W$, the construction above leads to symmetry or
group-like defects $G^{\phi}= ({}_{R}M_{R},(\text{id} \otimes \phi)
({}_{W}\mathcal{I}_{W}))$ (which have been discussed
in~\cite{Frohlich:2006ch,Brunner:2007qu}). The fusion of such defects
is particularly simple,
\begin{equation}
G^{\phi} \otimes G^{\psi} \cong G^{\psi \circ \phi} \ ,
\end{equation}
and $G^{\phi}$ is invertible with inverse $G^{\phi^{-1}}$. These
defects therefore form a group.
\section{Examples and applications}
In this section we want to apply the formalism of the foregoing
section to physically interesting examples.
\subsection*{Minimal models}
Let us first look at the one variable case, $R=\mathbb{C}[y]$, and choose
the potential to be $W (y)=y^{k}$. The corresponding Landau-Ginzburg
model describes a minimal model at level $k-2$. Consider now the ring
homomorphism
\begin{equation}
\phi_{1}: p (y) \mapsto p (x^{d})
\end{equation}
that maps polynomials in $R$ to those in $S=\mathbb{C}[x]$. The
transformed potential is $W' (x) =x^{kd}$. We observe that ${}_{R}S$ is
a free $R$-module of rank $d$,
\begin{equation}
\begin{array}{lrcl}
\rho : & R^{\oplus d} & \to & S\\
& (p_{1} (y),\dotsc ,p_{d} (y)) & \mapsto &
\sum_{j=1}^{d} x^{j-1}p_{j} (x^{d}) \ .
\end{array}
\end{equation}
Let us now look at the corresponding defect between these two minimal
models. We consider the explicit construction
$({}_{R}I^{B}_{S},{}_{W}\mathcal{I}_{W'}^{B})$ via $\phi_{*}$. We start
with the identity defect $({}_{S}I_{S},{}_{W'}\mathcal{I}_{W'})$ that is
given by a rank $2$ matrix
${}_{W'}\mathcal{I}_{W'}=\bigl(\begin{smallmatrix}0 & \imath_{0}\\
\imath_{1} & 0 \end{smallmatrix}\bigr)$ with $\imath_{0}= (x-x')$ and
$\imath_{1}= (W (x)-W (x'))/ (x-x')$. Here we denoted by $x'$ the
variable corresponding to the right $S$-module structure. Under the
map $\phi_{*}$ acting on the left $S$-module structure the entry
$\imath_{0}$ is then replaced by
\begin{equation}
\tilde{\imath}_{0} = \begin{pmatrix}
-x' & & & y\\
1 & -x' & & \\
& \ddots & \ddots & \\
& & 1 & -x'
\end{pmatrix} \xrightarrow[\text{transformation}]{\text{similarity}} \begin{pmatrix}
y-x'^{d} & & & \\
& 1 & & \\
& & \ddots & \\
& & & 1
\end{pmatrix} \ .
\end{equation}
We therefore explicitly see that this defect is equivalent to
$({}_{R}I^{A}_{S},{}_{W}\mathcal{I}_{W'}^{A})$ that we obtain from the
identity defect in $y$-variables by expressing one of the variables in
terms of $x$. This defect is related to the generalised permutation
boundary conditions in two minimal
models~\cite{Caviezel:2005th,Fredenhagen:2006qw} by the folding trick.
We now want to apply this defect to matrix factorisations
$({}_{S}M,Q)$ that describe boundary conditions. The elementary
factorisation $x^{\ell}\cdot x^{kd-\ell}$ will be called
$Q_{\ell}^{(x)}$, and correspondingly $Q^{(y)}_{\ell}$ refers to the
$y$-factorisation $y^{\ell}\cdot y^{k-\ell}$. Fusing the defect
$({}_{R}I_{S},{}_{W}\mathcal{I}_{W'})$ to $(S^{\oplus
2},Q_{rd+\ell}^{(x)})$ results in a superposition $\big( R^{\oplus d},\big(
Q^{(y)}_{r}\big)^{\oplus d-\ell}\oplus \big(
Q^{(y)}_{r+1}\big)^{\oplus \ell}\big)$ (for $0\leq \ell \leq
d-1$). The factorisation $Q^{(y)}_{0}$ is trivial, so we see that the
basic factorisation $Q_{1}^{(x)}$ is just mapped to the basic
factorisation $Q_{1}^{(y)}$.
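For instance, for $d=2$ the basic factorisation $Q_{1}^{(x)}$, i.e.\ $x\cdot
x^{2k-1}=x^{2k}$, fuses as follows: in the basis $\{1,x\}$ of ${}_{R}S$,
multiplication by $x$ and by $x^{2k-1}=y^{k-1}x$ is represented by
\begin{equation*}
x\mapsto \begin{pmatrix}
0 & y\\
1 & 0
\end{pmatrix} \ ,\qquad x^{2k-1}\mapsto \begin{pmatrix}
0 & y^{k}\\
y^{k-1} & 0
\end{pmatrix} \ ,
\end{equation*}
and a similarity transformation brings this pair to $Q_{1}^{(y)}\oplus
Q_{0}^{(y)}$, in agreement with the general statement (here $r=0$ and $\ell =1$).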
We can also consider defects in minimal models. Of particular interest
are the group-like defects $G^{n}_{(y)}$ \cite{Brunner:2007qu}
that induce the map $y\mapsto \eta^{n} y$. Here $\eta=\exp \frac{2\pi
i}{k}$ such that the potential $y^{k}$ is invariant. Obviously we have
$G^{n}\cong G^{n+k}$, and the group law is just $G^{m}_{(y)}\otimes
G^{n}_{(y)}=G^{m+n}_{(y)}$. As a $(W,W)$-defect matrix factorisation,
$G^{n}_{(y)}$ corresponds to $(y-\eta^{n}y')\cdot \frac{W (y)-W
(y')}{y-\eta^{n}y'}$. Similarly, $G^{n}_{(x)}$ denotes the group-like
defect corresponding to the map $x\mapsto \exp \frac{2\pi
in}{kd}x$. Given such a defect $G^{n}_{(x)}$ one can ask what happens to
it when we sandwich it between the defects ${}_{R}I_{S}$ and
${}_{S}I_{R}$. Surprisingly the result can again be expressed in terms
of group like-defects, namely
\begin{equation}
({}_{R}I_{S},{}_{W}\mathcal{I}_{W'})\otimes G^{n}_{(x)} \otimes
({}_{S}I_{R},{}_{W'}\mathcal{I}_{W}) \cong \left(G^{n}_{(y)}
\right)^{\oplus d} \ .
\end{equation}
\subsection*{\mbox{\boldmath$SU (3)/U (2)$} Kazama-Suzuki model}
As a more interesting example we look at a defect between an $SU (3)/U
(2)$ Kazama-Suzuki model and a product of two minimal models. Consider
the two variable polynomial rings $R=\mathbb{C}[y_{1},y_{2}]$ and
$S=\mathbb{C}[x_{1},x_{2}]$, and the ring homomorphism
\begin{equation}
\phi: p(y_{1},y_{2}) \mapsto p(x_{1}+x_{2},x_{1}x_{2}) \ ,
\end{equation}
which replaces the $y_{i}$ by the elementary symmetric polynomials in
the $x_{j}$. The potential in $x$-variables is that of two minimal models,
\begin{equation}
W' (x_{1},x_{2}) = x_{1}^{k}+x_{2}^{k} \qquad (k\geq 4)\ .
\end{equation}
It is symmetric in $x_{1}$ and $x_{2}$ and thus it can be expressed in
terms of the elementary symmetric polynomials leading to the potential
$W (y_{1},y_{2})$ in the $y$-variables such that $\phi (W)=W'$. This
then describes the $SU (3)/U (2)$ Kazama-Suzuki model (see e.g.\
\cite{Gepner:1988wi,Behr:2010ug}).
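For instance, for $k=4$ one finds $W (y_{1},y_{2})=y_{1}^{4}-4y_{1}^{2}y_{2}+2y_{2}^{2}$,
as is readily checked by expanding
\begin{equation*}
(x_{1}+x_{2})^{4}-4\, (x_{1}+x_{2})^{2}x_{1}x_{2}+2\, (x_{1}x_{2})^{2}=x_{1}^{4}+x_{2}^{4}\ .
\end{equation*}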
The $R$-module ${}_{R}S$ is free of rank $2$ with the explicit
$R$-module isomorphism
\begin{equation}
\begin{array}{lrcl}
\rho : & R\oplus R & \to & {}_{R}S\\
& (p_{1} (y_{1},y_{2}),p_{2} (y_{1},y_{2})) & \mapsto &
p_{1} (x_{1}+x_{2},x_{1}x_{2})+ (x_{1}-x_{2})p_{2}
(x_{1}+x_{2},x_{1}x_{2}) \, ,
\end{array}
\end{equation}
with inverse
\begin{equation}
\begin{array}{lrcl}
\rho^{-1} : & {}_{R}S & \to & R\oplus R\\
& p (x_{1},x_{2}) & \mapsto &
\Big( p_{s} (x_{1},x_{2})\big|_{y_{i}} , \frac{p_{a}
(x_{1},x_{2})}{x_{1}-x_{2}}\big|_{y_{i}}\Big) \ ,
\end{array}
\end{equation}
where $p_{s/a} (x_{1},x_{2}) =\frac{1}{2} (p (x_{1},x_{2})\pm p
(x_{2},x_{1}))$, and $|_{y_{i}}$ means to replace in a symmetric
polynomial in $x_{j}$ the elementary symmetric polynomials by the
$y_{i}$.
The $(W,W')$-defect between the Kazama-Suzuki model ($y$-variables)
and the minimal models ($x$-variables) acts on $y$-factorisations
simply by replacing variables. However, given an $x$-factorisation
with matrix $Q$, a matrix element $Q_{ij}$ is replaced by a $2\times
2$ matrix,
\begin{equation}\label{symmetrisation}
Q_{ij} \mapsto \begin{pmatrix}
( Q_{ij})_{s} & (x_{1}-x_{2}) (Q_{ij})_{a}\\
\frac{(Q_{ij})_{a}}{x_{1}-x_{2}} & (Q_{ij})_{s}
\end{pmatrix} \Bigg|_{y_{1},y_{2}} \ .
\end{equation}
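For instance, an entry $Q_{ij}=x_{1}$ has symmetric part $\frac{1}{2}y_{1}$
and antisymmetric part $\frac{1}{2} (x_{1}-x_{2})$, so that
\begin{equation*}
x_{1}\mapsto \begin{pmatrix}
\frac{1}{2}y_{1} & \frac{1}{2} (y_{1}^{2}-4y_{2})\\[1pt]
\frac{1}{2} & \frac{1}{2}y_{1}
\end{pmatrix} \ ,
\end{equation*}
where we used $(x_{1}-x_{2})^{2}=y_{1}^{2}-4y_{2}$.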
As an example consider the boundary condition based on the
factorisation $(x_{1}-\xi x_{2})\cdot \frac{W'
(x_{1},x_{2})}{x_{1}-\xi x_{2}}$ with $\xi=\exp \frac{\pi i}{k}$
(these are the so-called permutation factorisations~\cite{Ashok:2004zb,Brunner:2005fv}). By the
map~\eqref{symmetrisation} the factor $(x_{1}-\xi x_{2})$ is mapped to
\begin{equation}
(x_{1}-\xi x_{2}) \mapsto \begin{pmatrix}
\frac{1-\xi}{2}y_{1} & \frac{1+\xi}{2} (y_{1}^{2}-4y_{2})\\[1pt]
\frac{1+\xi}{2} & \frac{1-\xi}{2}y_{1}
\end{pmatrix} \xrightarrow[\text{transf.}]{\text{similarity}}
\begin{pmatrix}
y_{1}^{2}- 2 (1+\cos \frac{\pi}{k})y_{2} & 0\\
0 & 1
\end{pmatrix} \, .
\end{equation}
This means that the linear polynomial factorisation in $x$ is mapped to
a polynomial factorisation in the $y$-variables. The interesting fact
is now that both factorisations describe rational boundary states in
the corresponding conformal field theories~\cite{Behr:2010ug}.
One can go further and consider the $x$-factorisation
\begin{equation}
\big( (x_{1}-\xi x_{2}) (x_{1}-\xi^{3}x_{2})\big) \cdot \frac{W' (x_{1},x_{2})}{(x_{1}-\xi x_{2}) (x_{1}-\xi^{3}x_{2})} = W'
(x_{1},x_{2}) \ .
\end{equation}
The quadratic factor is mapped to
\begin{align}
(x_{1}-\xi x_{2})&(x_{1}-\xi^{3}x_{2}) \\
\mapsto & \begin{pmatrix}
\frac{1+\xi^{4}}{2} (y_{1}^{2}-2y_{2})- (\xi+\xi^{3})y_{2} &
\frac{1-\xi^{4}}{2} (y_{1}^{2}-4y_{2})y_{1} \\
\frac{1-\xi^{4}}{2} y_{1} & \frac{1+\xi^{4}}{2} (y_{1}^{2}-2y_{2})-
(\xi+\xi^{3})y_{2} \end{pmatrix}\nonumber\\
\xrightarrow[\text{transf.}]{\text{similarity}} &
\begin{pmatrix}
y_{1}^{2}-2 (1+\cos \frac{\pi}{k})y_{2} & 0\\
y_{1} & y_{1}^{2}-2 (1+\cos \frac{3\pi}{k})y_{2}
\end{pmatrix} \ .
\nonumber
\end{align}
Again this factorisation has been identified with a rational boundary
condition in the Kazama-Suzuki model in~\cite{Behr:2010ug}. This
example shows that the variable transformation defect is indeed very
useful to generate interesting matrix factorisations. In a subsequent
publication we will show that with the help of this variable transformation defect,
one also can generate rational defects in Kazama-Suzuki models which
then allow to generate in principle all factorisations corresponding
to rational boundary conditions in these models.
The defect considered here actually also appears in the link homology
of Khovanov and Rozansky~\cite{Khovanov:2004}, namely the diagram on
the right in figure~\ref{fig:KRbblocks} corresponds in our language to the defect
$({}_{S'}I_{R})\otimes ({}_{R}I_{S})$ (where $S'=\mathbb{C}[x_3,x_4]$).
\begin{figure}[hbtp]
\centering
\includegraphics{KRbblocks}
\caption{Basic building blocks that appear in the resolution of crossings~\cite[figure
9]{Khovanov:2004}: the identity defect ${}_{S'}I_{S}$
in $x$-variables to the left, and the basic wide-edge graph on the right corresponding to $({}_{S'}I_{R})\otimes ({}_{R}I_{S})$ (with ${S=\mathbb{C}[x_1,x_2]}$, ${S'=\mathbb{C}[x_3,x_4]}$).}
\label{fig:KRbblocks}
\end{figure}
The diagram on the left of figure~\ref{fig:KRbblocks} simply
corresponds to the identity defect in $x$-variables. One of the fundamental
equivalences in the link homology displayed in figure~\ref{fig:KRisom} would read
in our notation (with $S''=\mathbb{C}[x_5,x_6]$)
\begin{equation}
\big( ({}_{S''}I_{R})\otimes ({}_{R}I_{S'})\big) \otimes \big( ({}_{S'}I_{R}) \otimes
({}_{R}I_{S})\big) \cong \big( ({}_{S''}I_{R})\otimes
({}_{R}I_{S})\big)^{\oplus 2}\ ,
\end{equation}
which follows immediately from~\eqref{fundrelation}.
\begin{figure}[hbtp]
\centering
\includegraphics{KRisom}
\caption{One of the fundamental diagram equivalences of \cite[figure 35 and Prop.~30]{Khovanov:2004} (up to grading).}
\label{fig:KRisom}
\end{figure}
It would be very interesting to also consider the morphisms between
the defects in figure~\ref{fig:KRbblocks} that are needed to formulate the complex of defects
assigned to crossings (see~\cite[figure 46]{Khovanov:2004}) in our
framework, but we leave this for future work.
\subsection*{\mbox{\boldmath$SU (n+1)/U (n)$} Kazama-Suzuki models}
The last example has a beautiful generalisation to a defect between a
$SU (n+1)/U (n)$ Kazama-Suzuki model and $n$ copies of minimal
models. We consider the polynomial rings $R=\mathbb{C}[y_{1},\dotsc
,y_{n}]$ and $S=\mathbb{C}[x_{1},\dotsc ,x_{n}]$, and the potential
\begin{equation}
W' (x_{1},\dotsc ,x_{n}) = x_{1}^{k}+ \dotsb +x_{n}^{k} \ .
\end{equation}
The ring homomorphism is defined by
\begin{equation}
\phi (y_{j}) = \sum_{i_{1}<\dotsb <i_{j}} x_{i_{1}}\cdot \dotsb \cdot
x_{i_{j}} \ ,
\end{equation}
and it maps the $y_{j}$ to the elementary symmetric polynomials in the
$x_{i}$. It is an old result in invariant theory~\cite[section
II.G]{Artin} that ${}_{R}S$ is a free $R$-module of rank $n!$. To get
an explicit $R$-module isomorphism between ${}_{R}S$ and $R^{\oplus
n!}$, one needs to choose a good basis in $S$. The simplest
choice~\cite{Artin} is to take the $n!$ polynomials given by
\begin{equation}
x_{1}^{\nu_{1}}x_{2}^{\nu_{2}}\cdot \dotsb \cdot x_{n}^{\nu_{n}} \ , \
\text{where} \ \nu_{i}\leq i-1 \ .
\end{equation}
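For $n=2$ this basis is $\{1,x_{2}\}$; since $x_{2}=\frac{1}{2}\{ (x_{1}+x_{2})- (x_{1}-x_{2})\}$,
it is related to the basis $\{1,x_{1}-x_{2}\}$ used for $n=2$ above by an
invertible change of basis over $R$.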
Another possibility with some computational advantages is provided by
the Schubert polynomials $X_{\sigma} (x_{1},\dotsc ,x_{n})$, for which there is one for
each permutation $\sigma$ in the symmetric group $S_{n}$. It was shown
in~\cite{Macdonald} that any polynomial $p (x_{1},\dotsc ,x_{n})$ has a unique
expansion
\begin{equation}
p (x_{1},\dotsc ,x_{n}) = \sum_{\sigma\in S_{n}} p_{\sigma} (x_{1},\dotsc
,x_{n})X_{\sigma} (x_{1},\dotsc ,x_{n}) \ ,
\end{equation}
where the $p_{\sigma}$ are totally symmetric. The map $\rho$ is then
given by
\begin{equation}
\begin{array}{lrcl}
\rho : & R^{\oplus n!} & \to & S \\
& \!\!\!\!(p_{\sigma} (y_{1},\dotsc ,y_{n}))_{\sigma \in S_{n}}\! & \mapsto &
\!\!\!{\displaystyle \sum_{\sigma \in S_{n}}} p_{\sigma} (x_{1}+\dotsb +x_{n},\dotsc
,x_{1}\dotsb x_{n}) X_{\sigma} (x_{1},\dotsc ,x_{n})\, .
\end{array}
\end{equation}
\section{Conclusion}
We have seen that there is a natural defect between Landau-Ginzburg
theories whose superpotentials are related by a variable
transformation. The fusion of this defect onto other factorisations
has an explicit and simple description via the functors $\phi^{*}$ and
$\phi_{*}$ corresponding to extension and restriction of scalars.
The examples have shown that these defects can be used to relate
boundary conditions or defects in different LG models. In particular,
one can use such defects between minimal models and Grassmannian
Kazama-Suzuki models to put into use the knowledge that is already
available for minimal models to obtain factorisations for the
Kazama-Suzuki models. In such a way one can for example generate all
factorisations corresponding to rational boundary conditions in the
$SU (3)/U (2)$ model, as we will show in a subsequent publication.
In the $SU (3)/U (2)$ example it turns out that the defects discussed
here are crucial to construct factorisations for rational topological
defects. These finitely many elementary defects and their
superpositions form a closed semi-ring that is isomorphic to the
fusion semi-ring. Realising such a finite-dimensional semi-ring (in
the sense that as a semi-group it is isomorphic to a direct product of
finitely many copies of $\mathbb{N}_{0}$) in terms of defect matrix
factorisations reflects the rational structure of the conformal field
theory that is otherwise hard to see in the LG formulation. It would
be interesting to investigate whether the existence of such an
algebraic structure automatically signals an enhanced symmetry in the CFT.
Finally, we have seen that our defects generate the building blocks of
Khovanov-Rozansky homology, except for the morphisms between defect
building blocks. In other words, one could say that our formulation
provides a physical setup of the Khovanov-Rozansky factorisations as a
sequence of Kazama-Suzuki models separated by defects. By
generalisation to $SU (n+1)/U (n)$ models, this is also true for the
higher graphs appearing in the MOY calculus~\cite{Murakami}. Our
analysis can therefore be seen as a physical supplement to the recent
results in~\cite{Becker,Carqueville:2011sj}.
\section*{Acknowledgements}
We thank Nils Carqueville, Dan Murfet and Ingo Runkel for useful discussions.
\section{Introduction}
The coadjoint representation has Poisson nature: the Lie bracket of a Lie algebra $\gg$ canonically induces a linear Poisson
bracket on its dual $\gg^*$. The symplectic leaves of the linear Poisson structure are the coadjoint orbits.
The induced symplectic structure on a coadjoint orbit is the so called Konstant-Kirillov-Souriau (KKS) symplectic structure.
For any Poisson structure the understanding of the
symplectic structure of any of its leaves is fundamental; for duals of Lie algebras this understanding is even more important,
for it has deep implications for Hamiltonian group actions and representation theory \cite{LG,Ki}.
When $\gg$ is (semisimple) of compact type, coadjoint orbits --which are the classical flag manifolds from complex geometry--
are compact, and therefore more tools are available for the study of their
symplectic geometry \cite{LG}.
Global aspects of the symplectic geometry of non-compact coadjoint orbits are much harder to grasp. The first result in that direction
is due to Arnold. In \cite{Ar} he proved that the regular (complex) coadjoint orbit of $\mathrm{SL}(n+1,\mathbb{C})$ endowed with the
imaginary
part of the KKS holomorphic symplectic structure is symplectomorphic to the cotangent bundle
of the variety of full flags in $\mathbb{C}^{n+1}$, if and only if all the eigenvalues of some (and hence any) matrix in the orbit are
pure imaginary.
Later on, Azad, van den Ban and Biswas \cite{ABB} discovered that Arnold's result had a far reaching generalization for
semisimple real hyperbolic orbits, which we briefly discuss:
Let $G$ be a connected, non-compact semisimple Lie group with finite center, and let $\mathfrak{g}$ denote its Lie algebra.
The Killing form $\langle \cdot,\cdot \rangle$
intertwines the coadjoint and adjoint actions, and it is used to transfer the symplectic structure from a coadjoint
orbit to the corresponding adjoint one, so one can speak of the KKS symplectic structure of an adjoint
orbit. An element $H\in \gg$ is real hyperbolic if the operator $\mathrm{ad}(H)$ diagonalizes
with real eigenvalues; if an Iwasawa decomposition $G=KAN$ has been fixed, then real hyperbolic
elements are those conjugate to elements in the closure
$\mathrm{Cl}(\aa^+)\subset \aa$ of the fixed positive Weyl chamber $\aa^+$, where $\aa$ is the Lie
algebra of $A$.
\begin{theorem}[\cite{ABB}]\label{thm:main} Let $G$ be a connected, non-compact semisimple Lie group with finite center
and let $G=KAN$ be any fixed Iwasawa decomposition. Then
for any real hyperbolic element $H\in \gg$, there exist a canonical symplectomorphism between the adjoint orbit
$\mathrm{Ad}(G)_H\subset \gg$
with its KKS symplectic structure,
and the cotangent bundle of the real flag manifold $\mathrm{Ad}(K)_H$ with its standard Liouville symplectic structure.
\end{theorem}
The existing proofs of Theorem \ref{thm:main} are far from elementary. They rely either on deep results on integrable systems \cite{ABB},
or on
non-trivial integrability results for Lie algebra actions \cite{GGS}.
The purpose of this note is to revisit Theorem \ref{thm:main} and provide a proof based on elementary facts on both Lie theory
and symplectic geometry, thus shedding some new light on the symplectic geometry of semisimple (co)adjoint orbits.
The key ingredients in our strategy are the full use of the \emph{canonical ruling} of the adjoint orbit and the description of new aspects
of the \emph{symplectic
geometry of the `Iwasawa projections'}.
In what follows we briefly discuss the main ideas behind our approach, and compare it with \cite{ABB,GGS}:
We assume without loss of generality that $H\in \mathrm{Cl}(\aa^+)$.
The Iwasawa decomposition defines a well-known canonical ruling on the adjoint orbit $\mathrm{Ad}(G)_H$. The ruling, together
with the Killing form, determine a diffeomorphism
\begin{equation}\label{eq:emb}i\colon T^*\mathrm{Ad}(K)_H\rightarrow \mathrm{Ad}(G)_H
\end{equation}
extending the inclusion of real flag manifold $\mathrm{Ad}(K)_H\hookrightarrow\mathrm{Ad}(G)_H$.
Of course, the diffeomorphism (\ref{eq:emb}) appears both in \cite{ABB,GGS}, but it is not fully exploited.
\begin{itemize}
\item In \cite{ABB} non-trivial theory
of complete Lagrangian fibrations is used to construct a symplectomorphism
$\varphi\colon (T^*\mathrm{Ad}(K)_H,\omega_{\mathrm{std}})\rightarrow (\mathrm{Ad}(G)_H,\omega_{\mathrm{KKS}})$, with the property
of being the unique symplectomorphism
which (i) extends the identity on $\mathrm{Ad}(K)_H$ and (ii) is a morphism fiber bundles, where $\mathrm{Ad}(G)_H$
has the bundle structure induced by $i$ in (\ref{eq:emb}).
In fact, it can be checked that $\varphi$ coincides with $i$ (\ref{eq:emb})
(the uniqueness statement is also a consequence of the absence of non-trivial symplectic automorphisms of the cotangent bundle
preserving the zero section and the fiber bundle structure).
\item In \cite{GGS} a complete Hamiltonian action of $\gg$ on
$(T^*\mathrm{Ad}(K)_H,\omega_{\mathrm{std}})$ is built. The momentum map
$\mu\colon (T^*\mathrm{Ad}(K)_H,\omega_{\mathrm{std}})\rightarrow (\mathrm{Ad}(G)_H,\omega_{\mathrm{KKS}})$ is the desired
symplectomorphism; the authors also show that the momentum map $\mu$ matches $i$ in (\ref{eq:emb}).
\end{itemize}
Both in \cite{ABB,GGS} a global construction on a non-compact symplectic manifold is performed, something which always presents
technical difficulties. \footnote{In \cite{Ki}, Corollary 1, a proof of the isomorphism of a regular coadjoint orbit
with the cotangent bundle
is presented,
but it is not correct as the completeness issues are entirely ignored.}
Our strategy is much simpler: we shall take full advantage of the ruling structure
to prove the equality
\begin{equation}\label{eq:rul}
i_*\omega_{\mathrm{std}}=\omega_{\mathrm{KKS}}.
\end{equation}
In fact, this is the approach sketched by Arnold \cite{Ar}.
Basic symplectic linear algebra \cite{MS} implies that to prove (\ref{eq:rul}) at $x\in \mathrm{Ad}(G)_H$
it is enough to find $L_v,L_h\subset T_x\mathrm{Ad}(G)_H$ such that:
(i) $L_v,L_h$ are Lagrangian subspaces for both symplectic structures;
(ii) $L_v\cap L_h=\{0\}$;
(iii) $i_*\omega_{\mathrm{std}}(x)(Y,Z)=\omega_{\mathrm{KKS}}(x)(Y,Z),\,\, \forall\, Y\in L_v,\,Z\in L_h$.
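This criterion is immediate from bilinearity: by (i) and (ii) the tangent space splits as
$T_x\mathrm{Ad}(G)_H=L_v\oplus L_h$ and both forms vanish on each summand, so that
\[\omega(Y_1+Z_1,Y_2+Z_2)=\omega(Y_1,Z_2)-\omega(Y_2,Z_1),\qquad Y_i\in L_v,\ Z_i\in L_h,\]
for either symplectic form $\omega$; hence both forms are determined by the pairing appearing in (iii).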
As the notation suggests, $L_v$ will be the vertical tangent space coming from the fiber bundle structure,
which is trivially Lagrangian for $i^*\omega_{\mathrm{std}}$ and easily
seen to be Lagrangian for $\omega_{\mathrm{KKS}}$ \cite{ABB}.
Transitivity of the adjoint action implies the existence of $g\in G$ so that
\[x\in \mathrm{Ad}(g)(\mathrm{Ad}(K)_H):=\mathrm{Ad}(K)_H^g.\]
The `horizontal' subspace $L_h$ will be the tangent space to $\mathrm{Ad}(K)_H^g$ at $x$;
because the zero section $\mathrm{Ad}(K)_H$ is also Lagrangian
w.r.t. $\omega_{\mathrm{KKS}}$ \cite{ABB},
$G$-invariance of $\omega_{\mathrm{KKS}}$ implies that $L_h$ is
a Lagrangian subspace w.r.t. $\omega_{\mathrm{KKS}}$. If $\mathrm{Ad}(K)_H^g$
is to be Lagrangian w.r.t. $\omega_{\mathrm{std}}$, it should correspond to a closed 1-form on
$\mathrm{Ad}(K)_H$. In fact, it will be the graph of an exact 1-form, and the `projections' associated to the Iwasawa decomposition
will play a crucial role in determining a potential.
The `Iwasawa projection' $H\colon G\rightarrow \aa$ is defined by $x\in K\exp H(x)N$. A pair $H\in \aa$, $g\in G$
determines a function
\[F_{g,H}\colon K\rightarrow \mathbb{R},\,\, k\mapsto \langle H, H(gk)\rangle.\] Under the assumption $H\in \mathrm{Cl}(\aa^+)$
the function descends to the real flag
manifold $\mathrm{Ad}(K)_H\cong K/Z_K(H)$, where $Z_K(H)$ is the centralizer of $H$ in $K$ \cite{DKV}. The functions $F_{g,H}$
are well-studied, and they play a prominent
role in Harmonic analysis and convexity theory \cite{DKV,H,B,BB}.
Our main technical result is:
\begin{proposition}\label{pro:pro}
Let $G$ be a connected, non-compact semisimple Lie group with finite center, let $G=KAN$ be any fixed Iwasawa decomposition and let
$H\in \mathrm{Cl}(\aa^+)$. Then for any $g\in G$ the submanifold \[\mathrm{Ad}(K)_H^g\subset \mathrm{Ad}(G)_H\overset{i^{-1}}{\cong}T^*\mathrm{Ad}(K)_H\]
is the graph of the exterior differential of $-F_{g,H}\in C^\infty(\mathrm{Ad}(K)_H)$.
\end{proposition}
Proposition \ref{pro:pro} completes the description of the `horizontal Lagrangians'. The equality
\[i_*\omega_{\mathrm{std}}(x)(Y,Z)=\omega_{\mathrm{KKS}}(x)(Y,Z),\,\, \forall\, Y\in L_v,\,Z\in L_h\]
will follow from computations analogous to those used to establish Proposition \ref{pro:pro},
thus providing a proof of Theorem \ref{thm:main} which appeals only to basic symplectic geometry and Lie theory.
\section{Proof of Theorem \ref{thm:main}}
In this section we fill in the details of the proof of Theorem \ref{thm:main} sketched in the introduction.
Let us fix a Cartan decomposition $G=KP$ associated
to an involution $\theta$, and let $\mathfrak{k},\mathfrak{p}$ denote the respective Lie algebras.
A choice of maximal abelian subalgebra $\aa\subset \pp$ and of positive Weyl chamber $\mathfrak{a}^+\subset \aa$ (or root ordering)
gives rise to an Iwasawa decomposition $G=KAN$, with $\mathfrak{n}$ the Lie algebra of the nilpotent factor.
We shall denote the adjoint action of $g$ on $X\in \mathfrak{g}$ by $X^g$.
We may pick without any loss of generality $H\in \mathrm{Cl}(\aa^+)$ and consider the corresponding adjoint orbit $\mathrm{Ad}(G)_H$.
The orbit is identified
with the homogeneous space $G/Z(H)$, where $Z(H)$ denotes the centralizer of $H$. Under this identification $\mathrm{Ad}(K)_H$
is mapped to a submanifold canonically isomorphic to $K/Z_K(H)$, where $Z_K(H)=K\cap Z(H)$ is the centralizer of $H$ in $K$. At the infinitesimal level
the tangent space at $H\in \mathrm{Ad}(K)_H$ is identified
with the quotient space $\mathfrak{k}/\mathfrak{z}_K(H)$, where $\mathfrak{z}_K(H)$ is the Lie algebra of $Z_K(H)$.
\subsection{The ruling and the identification $T^*\mathrm{Ad}(K)_H\overset{i}{\cong}\mathrm{Ad}(G)_H$.}
The contents we sketch in this subsection are rather standard. We refer the reader to \cite{ABB} for a thorough exposition.
Let $\mathfrak{n}(H)$ be the sum of root subspaces associated to positive roots not vanishing on $H$. We have
the $\theta$-orthogonal decomposition
\[\mathfrak{g}=\theta\mathfrak{n}(H)\oplus \mathfrak{z}(H)\oplus \mathfrak{n}(H),\]
where $\theta\mathfrak{n}(H)=\mathfrak{n}^-(H)$ are the root spaces corresponding to the negative roots which are non-trivial on $H$.
The affine subspace $H+\mathfrak{n}(H)$ is tangent to $\mathrm{Ad}(G)_H$ and complementary to $\mathrm{Ad}(K)_H$ at $H$.
Even more, the orbit map of the adjoint action, $n\mapsto \mathrm{Ad}(n)H$, maps the subgroup
$N(H)$ integrating the nilpotent Lie algebra $\mathfrak{n}(H)$ diffeomorphically
onto $H+\mathfrak{n}(H)\subset \mathrm{Ad}(G)_H$.
This induces the well-known ruling of $\mathrm{Ad}(G)_H$.
As any ruled manifold, $\mathrm{Ad}(G)_H$ becomes an affine bundle. Since $\mathrm{Ad}(K)_H$ is transverse to the affine fibers,
the structure can be reduced to that of a vector bundle with zero section $\mathrm{Ad}(K)_H$. As to which vector bundle this is,
a vector tangent to the fiber over $H$ belongs to
$\mathfrak{n}(H)$; the map $X\mapsto X+\theta X$ is a monomorphism from $\mathfrak{n}$ to $\mathfrak{k}$. Since
the image of $\mathfrak{n}(H)$ has
trivial intersection with $\mathfrak{z}_K(H)$, it is isomorphic to $T_H\mathrm{Ad}(K)_H$. Therefore the pairing
$\langle\cdot,\cdot\rangle\colon \mathfrak{n}(H)\times \mathfrak{k}/\mathfrak{z}_K(H)\rightarrow \mathbb{R}$ --which is well defined--
is also non-degenerate, and this provides the
canonical identification of the fiber at $H$ with $T^*_H\mathrm{Ad}(K)_H$. Since the Killing form and Lie bracket are $\mathrm{Ad}$-invariant,
for any $k\in K$ we have the analogous statement for
\[\langle\cdot,\cdot\rangle\colon \mathfrak{n}(H^k)\times \mathfrak{k}/\mathfrak{z}_K(H^k)=\mathfrak{n}(H)^k\times
\mathfrak{k}/\mathfrak{z}_K(H)^k\rightarrow \mathbb{R},\] this giving the identification
\[i\colon T^*\mathrm{Ad}(K)_H\longrightarrow \mathrm{Ad}(G)_H.\]
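All of this is transparent in the example $G=\mathrm{SL}_2(\mathbb{R})$, $H=\mathrm{diag}(h,-h)$ with $h>0$ (we use it only as an illustration). Here $\mathfrak{n}(H)=\mathfrak{n}=\mathbb{R}E$ with $E$ the elementary upper triangular nilpotent matrix, and since $[E,[E,H]]=0$,
\[H^{\mathrm{exp}(tE)}=H+t[E,H]=H-2ht\,E,\]
so the fiber of the ruling through $H$ is the affine line $H+\mathbb{R}E$. The orbit $\mathrm{Ad}(G)_H=\{X\in \mathfrak{sl}_2(\mathbb{R})\ \vert\ \mathrm{det}X=-h^2\}$ is a one-sheeted hyperboloid, the fibers are (one family of) its classical ruling lines, and $\mathrm{Ad}(K)_H$ is the circle of symmetric matrices in the orbit, realizing the zero section.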
\subsection{The symplectic forms $\omega_{\mathrm{std}}$ and $\omega_{\mathrm{KKS}}$.}
From now on we shall omit the map $i$ in the notation, so we have $\omega_{\mathrm{std}},\omega_{\mathrm{KKS}}$
two symplectic forms on $\mathrm{Ad}(G)_H$ whose equality we want to check.
For the purpose of fixing the sign convention, we take the standard symplectic form of the cotangent bundle $\omega_{\mathrm{std}}$
to be
$-d\lambda$, where $\lambda=\xi dx$ and $\xi,x$ are the momentum and position coordinates, respectively.
The tangent space at $H^g\in \mathrm{Ad}(G)_H$ is spanned by vectors of the form $[X^g,H^g]$, $X\in \gg$. The formula
\[\omega_{KKS}(H^g)([X^g,H^g],[Y^g,H^g])=\langle H,[X,Y]\rangle\]
is well defined on $\mathfrak{g}/\mathfrak{z}(H)$, and gives rise to an $\mathrm{Ad}(G)$-invariant symplectic form on the orbit \cite{Ki}.
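For instance, in the example $\mathfrak{g}=\mathfrak{sl}_2(\mathbb{R})$, $H=\mathrm{diag}(h,-h)$, $h>0$, of the previous subsection, with $E$ (resp. $F$) the upper (resp. lower) elementary nilpotent matrix, one has $[E,H]=-2hE$, $[F,H]=2hF$ and $[E,F]=\mathrm{diag}(1,-1)$, so
\[\omega_{KKS}(H)([E,H],[F,H])=\langle H,[E,F]\rangle\neq 0,\]
and since $[E,H],[F,H]$ span the tangent space of the orbit at $H$, the form is indeed non-degenerate there.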
As discussed in the introduction, to prove the equality $\omega_{\mathrm{std}}(H^g)=\omega_{\mathrm{KKS}}(H^g)$,
we shall start by finding complementary Lagrangian subspaces for both symplectic forms.
\subsection{The vertical Lagrangian subspaces}
At $H^g$ we define $L_v(H^g)= \mathfrak{n}(H)^g$, the tangent space at $H^g$ to the affine fiber $H^g+\mathfrak{n}(H)^g$ of the ruling. Of course,
this space is Lagrangian for $\omega_{\mathrm{std}}$. It is also Lagrangian for $\omega_{\mathrm{KKS}}$ \cite{ABB}. We include
the proof of this fact to illustrate the kind of arguments we will use in our computations:
Two vectors in $L_v(H^g)$
are of the form $[X^g,H^g],[Y^g,H^g]$, where $X,Y\in \mathfrak{n}(H)$. Therefore
\[\omega_{KKS}(H^g)([X^g,H^g],[Y^g,H^g])=\langle H^g,[X^g,Y^g]\rangle=\langle H,[X,Y]\rangle=0,\]
where the vanishing follows because $[X,Y]\in \mathfrak{n}(H)$ and the subspaces $\mathfrak{a}$ and
$\mathfrak{n}(H)$ are orthogonal w.r.t. the Killing form (following from the orthogonality w.r.t. the inner product
$\langle\cdot,\theta\cdot\rangle$ used in the Iwasawa decomposition).
\subsection{The horizontal Lagrangian subspaces} We consider $\mathrm{Ad}(K)_H^g$ the conjugation by $g$ of $\mathrm{Ad}(K)_H$ and
we define $L_h(H^g)= T_{H^g}\mathrm{Ad}(K)_H^g$. We shall prove that $\mathrm{Ad}(K)_H^g$ is a Lagrangian submanifold for both symplectic
forms, so in particular $L_h(H^g)$ is a Lagrangian subspace.
The KKS symplectic form is $\mathrm{Ad}(G)$-invariant. Therefore it suffices to prove that $\mathrm{Ad}(K)_H$
is Lagrangian w.r.t. $\omega_{\mathrm{KKS}}$ to conclude that for all $g\in G$ the submanifold $\mathrm{Ad}(K)_H^g$ is Lagrangian w.r.t. $\omega_{\mathrm{KKS}}$.
At $H^k$ two vectors tangent to $\mathrm{Ad}(K)_H$
are of the form $[X^k,H^k],[Y^k,H^k]$, where $X,Y\in \mathfrak{k}$. Hence
\[\omega_{KKS}(H^k)([X^k,H^k],[Y^k,H^k])=\langle H^k,[X^k,Y^k]\rangle=\langle H,[X,Y]\rangle=0,\]
where the vanishing follows from $[X,Y]\in \mathfrak{k}$ and the orthogonality of $\mathfrak{a}\subset \mathfrak{p}$ and
$\mathfrak{k}$ w.r.t. the Killing form (see also \cite{ABB}).
To describe the behavior of $\mathrm{Ad}(K)_H^g$ w.r.t. $\omega_{\mathrm{std}}$, we need a formula for the projection map
$\mathrm{pr}\colon \mathrm{Ad}(G)_H\rightarrow \mathrm{Ad}(K)_H$ defined by the bundle structure. To that end we introduce
all the `Iwasawa projections'
\[K\colon G\rightarrow K, \,\,A\colon G\rightarrow A,\,\,N\colon G\rightarrow N,\]
characterized by $x\in K(x)AN$, $x\in KA(x)N$, $x\in KAN(x)$, respectively (note that the `Iwasawa projection'
cited
in the Introduction is $H=\log A$).
\begin{lemma}\label{lem:pro} The `Iwasawa projection' $K\colon G\rightarrow K$ descends to the bundle projection
\[\mathrm{pr}\colon \mathrm{Ad}(G)_H\cong G/Z(H)\rightarrow \mathrm{Ad}(K)_H\cong K/Z_K(H)\]
associated to the ruling.
\end{lemma}
\begin{proof}
Let us write $g=K(g)A(g)N(g)$. Then it follows that \[H^g=\big(H^{A(g)N(g)}\big)^{K(g)}\in \big(H+\mathfrak{n}(H)\big)^{K(g)}=H^{K(g)}+\mathfrak{n}(H)^{K(g)},\]
since $A(g)$ centralizes $H$ and the adjoint action of $N$ maps $H$ into $H+\mathfrak{n}(H)$; hence $H^g$ lies in the fiber of the ruling over $H^{K(g)}$, which proves our assertion.
\end{proof}
To understand the bundle projection infinitesimally we also need information on the differential of the `Iwasawa projections'.
This information can be found for $H$ or $A$ in \cite{DKV} (for higher order derivatives as well; see also \cite{BB}). The result for the three projections
is presented below; the proof is omitted since it is a straightforward application of the chain rule.
\begin{lemma}\label{lem:infpro}
For any $X\in \mathfrak{g}$ and $g\in G$ we have
\begin{equation}\label{eq:decomp2}
X^{AN(g)}=K(X,g)+A(X,g)^{A(g)}+N(X,g)^{AN(g)}
\end{equation}
written as sum of vectors in $\mathfrak{k},\mathfrak{a},\mathfrak{n}$,
where $K(X,g),A(X,g),N(X,g)$ stand for the left translation to the identity on $K,A,N$ of the vector field represented
by the curves $K(g\mathrm{exp}(tX)),A(g\mathrm{exp}(tX)),N(g\mathrm{exp}(tX))$, respectively, and $AN(g)$ denotes $A(g)N(g)$.
\end{lemma}
\begin{proof}[Proof of Proposition \ref{pro:pro}]
The submanifold $\mathrm{Ad}(K)_H^g$ is the graph of a 1-form $\alpha_{g,H}\in \Omega^1(\mathrm{Ad}(K)_H)$, which we evaluate
now: given any $k\in K$, according to Lemma \ref{lem:pro} the point $H^{gk}\in \mathrm{Ad}(K)_H^g$ projects
over $H^{K(gk)}\in \mathrm{Ad}(K)_H$. The tangent space $T_{H^{K(gk)}}\mathrm{Ad}(K)_H$ is spanned by vectors of the form
$L_{K(gk)^{-1}*}K(X,gk)$, where $X\in \kk$. By definition of $\a_{g,H}$ we have:
\[
\a_{g,H}(K(gk))(L_{K(gk)^{-1}*}K(X,gk))=\langle (H^{gk}-H^{K(gk)})^{K(gk)^{-1}}, K(X,gk)\rangle.
\]
Because $\mathfrak{k}$ and $\mathfrak{a}\subset \mathfrak{p}$ are orthogonal, we deduce
\[\langle (H^{gk}-H^{K(gk)})^{K(gk)^{-1}}, K(X,gk)\rangle=\langle H^{AN(gk)},K(X,gk)\rangle.
\]
By (\ref{eq:decomp2}), applied with $gk$ in place of $g$ (note that $A(X,gk)^{A(gk)}=A(X,gk)$ because $A$ is abelian),
\[\langle H^{AN(gk)},K(X,gk)\rangle=\langle H^{AN(gk)},X^{AN(gk)}-A(X,gk)-N(X,gk)^{AN(gk)}\rangle.\]
Because $H^{AN(gk)}\in \mathfrak{a}+\mathfrak{n}$ and because $\aa$ and $\mathfrak{n}(H)$ are $\langle \cdot,\cdot\rangle$-orthogonal
\[\langle H^{AN(gk)},X^{AN(gk)}-A(X,gk)-N(X,gk)^{AN(gk)}\rangle=-\langle H^{AN(gk)},A(X,gk)\rangle,\]
and therefore
\begin{equation}\label{eq:form1}
\a_{g,H}(K(gk))(L_{K(gk)^{-1}*}K(X,gk))=-\langle H,A(X,gk)\rangle.
\end{equation}
Now consider the function $F_{g,H}\colon K\rightarrow \mathbb{R},\, k\mapsto \langle H,H(gk)\rangle$.
By \cite{DKV}, Proposition 5.6, it descends to a function $F_{g,H}\in C^\infty(\mathrm{Ad}(K)_H)$. According to \cite{DKV},
Corollary 5.2,
\[-D F_{g,H}(L_{K(g)*}K(X,g))=-\langle X^{AN(g)},H\rangle,\]
and by equation (\ref{eq:decomp2})
\begin{equation}\label{eq:form2}
-D F_{g,H}(L_{K(g)*}K(X,g))=-\langle H, A(X,g)\rangle.
\end{equation}
Hence by equations (\ref{eq:form1}) and (\ref{eq:form2}) we conclude
\[\a_{g,H}=-DF_{g,H},\]
as we wanted to prove.
\end{proof}
\subsection{The equality $\omega_{\mathrm{KKS}}=\omega_{\mathrm{std}}$.}
We just need to prove the equality at any point $H^g$ on pairs of vectors $[X^g,H^g], [Y^g,H^g]$,
where $X\in \mathfrak{k}$ and $Y\in \mathfrak{n}(H)$.
By definition of the KKS form $\omega_{KKS}(H^g)([X^g,H^g],[Y^g,H^g])=\langle H,[X,Y]\rangle$.
As for the standard form
\[ \omega_{\mathrm{std}}(H^g)([X^g,H^g],[Y^g,H^g])=\langle [Y^g,H^g]^{K(g)^{-1}},K(X,g)\rangle=\langle [Y^{AN(g)},H^{AN(g)}],K(X,g)\rangle.\]
By equation (\ref{eq:decomp2})
\[\langle [Y^{AN(g)},H^{AN(g)}],K(X,g)\rangle=\langle [Y,H],X\rangle -\langle [Y^{AN(g)},H^{AN(g)}], A(X,g)^{A(g)}+N(X,g)^{AN(g)}\rangle,\]
which equals $\langle [Y,H],X\rangle=\langle H,[X,Y]\rangle$ since in the second summand the first entry belongs to $\mathfrak{n}$ and the second to
$\mathfrak{a}+\mathfrak{n}$.
If $(X,0)$ is the germ of an algebraic (analytic) variety, then one
can define two natural metrics on it. Both are defined by choosing an
embedding of $(X,0)$ into $(\mathbbm{C}^N,0)$. The first is the \emph{outer
metric}, where the distance between two points $x,y\in X$ is given by
$d_{out}(x,y) := \norm{x-y}_{\mathbbm{C}^N}$, so just the restriction of the
Euclidean metric to $(X,0)$. The other is the \emph{inner metric},
where the distance is defined as $d_{in}(x,y) := \inf_{\gamma}
\big{\{} length_{\mathbbm{C}^N}(\gamma)\ \big{\vert}\ \morf{\gamma}{[0,1]}{X} \text{
is a rectifiable curve, } \gamma(0) = x \text{ and }
\gamma(1) = y \big{\}}$. Both of these metrics are independent of the
choice of the embedding up to bilipschitz equivalence. The outer metric
determines the inner metric, and it is clear that $d_{out}(x,y) \leq
d_{in} (x,y)$. The other direction is in general not true, and we say
that $(X,0)$ is \emph{Lipschitz normally embedded} if the inner and
outer metric are bilipschitz equivalent. \emph{Bilipschitz geometry}
is the study of the bilipschitz equivalence classes of these two
metrics. Now one can of course define
the inner and outer metric for any embedded subspace of Euclidean space, but the
bilipschitz class might in this case depend on the embedding.
In January 2016 Asaf Shachar asked the following question on
Mathoverflow.org: Is the Lie group $\operatorname{GL}_n^+(\mathbbm{R})$ Lipschitz normally
embedded, where $\operatorname{GL}_n^+(\mathbbm{R})$ is the group of $n\times n$ matrices
with positive determinants. A positive answer was given by Katz,
Katz, Kerner, Liokumovich and Solomon
in \cite{kerneretc}. They first prove it for the
\emph{model determinantal singularity} $M^n_{n,n}$ (they call it the
determinantal
singularity), that is the set of $n\times n$ matrices with determinant
equal to zero. Then they replace the segments of the straight line
between two points of $\operatorname{GL}_n^+(\mathbbm{R})$ that passes through $\operatorname{GL}_n^-(\mathbbm{R})$ with a line
arbitrarily close to $M^n_{n,n}$. Their proof relies on topological
arguments, and some results on conical stratifications of MacPherson
and Procesi \cite{macphersonprocesi}. In this article we give an
alternative proof relying only on linear algebra and simple
trigonometry, which also works for all model determinantal
singularities. We will also discuss the case of general determinantal
singularities, by giving some examples of determinantal singularities
that are not Lipschitz normally embedded, and then discussing some
additional assumptions on a determinantal singularity that might imply
it is Lipschitz normally embedded.
This work is in the intersection of two areas that have seen a lot of
interest lately, namely bilipschitz geometry and determinantal
singularities. The study of bilipschitz geometry of complex spaces
started with Pham and Teissier that studied the case of curves in
\cite{phamteissier}. It then lay dormant for a long time until Birbrair
and Fernandes began studying the case of complex surfaces
\cite{birbrairfernandes}. Among important recent results are the
complete classification of the inner metric of surfaces by Birbrair,
Neumann and Pichon \cite{thickthin}, the proof that Zariski
equisingularity is equivalent to bilipschitz triviality in the case of
surfaces by Neumann and Pichon \cite{zariski} and the proof that
outer Lipschitz regularity implies smoothness by Birbrair,
Fernandes, L\^{e} and Sampaio
\cite{lipschitzregularity}. Determinantal singularities are also an area
that has been around for a long time and has recently seen a lot of
interest. They can be seen as a generalization of ICIS, and the recent
results have mainly been in the study of invariants coming from their
deformation theory. In \cite{ebelingguseinzade} Ebeling and
Guse{\u\i}n-Zade defined the index of a $1$-form, and the Milnor
number has been defined in various different ways by Ruas and da
Silva Pereira \cite{cedinhamiriam}, Damon and Pike \cite{damonpike}
and Nu\~no-Ballesteros, Or\'efice-Okamoto and Tomazalla
\cite{NunoOreficeOkamotoTomazella}. Their
deformation theory has also been studied by Gaffney and Rangachev
\cite{gaffenyrangachev} and Fr\"uhbis-Kr\"uger and Zach
\cite{fruhbiskrugerzach}.
This article is organized as follows. In section \ref{preliminaries}
we discuss the basic notions of Lipschitz normal embeddings and
determinantal singularities and give some results concerning when a
space is Lipschitz normally embedded. In section \ref{modelcase} we
prove the main theorem, that model determinantal singularities are
Lipschitz normally embedded. Finally in section \ref{secgeneralcase} we
discuss some of the difficulties to extend this result to the settings
of general determinantal singularities.
\section{Preliminaries on bilipschitz geometry and determinantal
singularities}\label{preliminaries}
\subsection*{Lipschitz normal embeddings}
In this section we discuss some properties of Lipschitz normal
embeddings. We will first give the definition of Lipschitz normal
embedding we will work with.
\begin{defn}
We say that $X$ is \emph{Lipschitz normally embedded} if there exist
$K>1$ such that for all $x,y\in X$,
\begin{align}
d_{in}(x,y)\leq Kd_{out}(x,y).\label{lneeq}
\end{align}
We call a $K$ that satisfies the inequality \emph{a bilipschitz
constant of} $X$.
\end{defn}
A trivial example of a Lipschitz normally embedded set is $\mathbbm{C}^n$. For
an example of a space that is not Lipschitz normally embedded,
consider the plane curve given by $x^3-y^2=0$, parametrized by $t\mapsto (t^2,t^3)$. Then
$d_{out}((t^2,t^3),(t^2,-t^3))=2\num{t}^{3}$ but
$d_{in}((t^2,t^3),(t^2,-t^3))= 2t^2+ o(t^2)$, since an inner path must pass through the origin; this implies that
$\tfrac{d_{in}((t^2,t^3),(t^2,-t^3))}{d_{out}((t^2,t^3),(t^2,-t^3))}$
is unbounded as $t$ goes to $0$, hence there cannot exist a $K$
satisfying \eqref{lneeq}.
Pham and Teissier \cite{phamteissier} show that in general the outer
geometry of a complex plane curve is equivalent to its embedded
topological type, and the inner geometry is equivalent to the abstract
topological type. Hence a plane curve is Lipschitz normally embedded
if and only if it is a union of smooth curves intersecting
transversely. See also Fernandes \cite{fernandesplanecurve}.
In the cases of higher dimension the question of which singularities
are Lipschitz normally embedded becomes much more complicated. It is no
longer only rather trivial singularities that are Lipschitz normally
embedded, for example in the case of surfaces the first author
together with Neumann and Pichon, shows that rational surface
singularities are Lipschitz normally embedded if and only if they are
minimal \cite{normallyembedded}. As we will later see, determinantal
singularities give examples of non trivial Lipschitz normally
embedded singularities in arbitrary dimensions.
We will next give a couple of results about when spaces constructed from
Lipschitz normally embedded spaces are themselves Lipschitz normally
embedded. First is the case of product spaces.
\begin{prop}\label{product}
Let $X\subset \mathbbm{R}^n$ and $Y\subset \mathbbm{R}^m$ and let $Z = X\times Y
\subset \mathbbm{R}^{n+m}$. $Z$ is Lipschitz normally embedded if and only if
$X$ and $Y$ are Lipschitz normally embedded.
\end{prop}
\begin{proof}
First we prove the ``if'' direction.
Let $(x_1,y_1),(x_2,y_2)\in X\times Y$. We need to show that
$d_{in}^{X\times Y}((x_1,y_1)(x_2,y_2))\leq K d_{out}^{X\times
Y}((x_1,y_1)(x_2,y_2))$. Let $K_X$ be the constant such that
$d_{in}^X(a,b) \leq K_X d_{out}^X(a,b)$ for all $a,b\in X$, and let $K_Y$
be the constant such that $d_{in}(a,b)^Y \leq K_Y d_{out}(a,b)^Y$ for all
$a,b\in Y$. We get, using the triangle inequality, that
\begin{align*}
d_{in}^{X\times Y}((x_1,y_1)(x_2,y_2))\leq d_{in}^{X\times
Y}((x_1,y_1)(x_1,y_2))+ d_{in}^{X\times Y}((x_1,y_2)(x_2,y_2)).
\end{align*}
Now the points $(x_1,y_1)$ and $(x_1,y_2)$ both lie in the slice
$\{x_1\}\times Y$ and hence $d_{in}^{X\times
Y}((x_1,y_1)(x_1,y_2)) \leq d_{in}^{Y}(y_1,y_2)$ and likewise we have
$d_{in}^{X\times
Y}((x_1,y_2)(x_2,y_2)) \leq d_{in}^{X}(x_1,x_2)$. This then implies that
\begin{align*}
d_{in}^{X\times Y}((x_1,y_1)(x_2,y_2))\leq K_Y d_{out}^{Y}(y_1,y_2)+
K_X d_{out}^{X}(x_1,x_2),
\end{align*}
where we use that $X$ and $Y$ are Lipschitz normally embedded. Now it
is clear that $d_{out}^{X\times Y}((x_1,y_1)(x_1,y_2)) =
d_{out}^{Y}(y_1,y_2)$ and $d_{out}^{X\times Y}((x_1,y_2)(x_2,y_2)) =
d_{out}^{X}(x_1,x_2)$. Also, since $d_{out}^{X\times
Y}((x_1,y_1)(x_2,y_2))^2=d_{out}^{Y}(y_1,y_2)^2+
d_{out}^{X}(x_1,x_2)^2$ by definition of the product metric, we have
that $d_{out}^{X\times Y}((x_1,y_1)(x_1,y_2)) \leq d_{out}^{X\times
Y}((x_1,y_1)(x_2,y_2))$ and $d_{out}^{X\times
Y}((x_1,y_2)(x_2,y_2)) \leq d_{out}^{X\times
Y}((x_1,y_1)(x_2,y_2))$. It then follows that
\begin{align*}
d_{in}^{X\times Y}((x_1,y_1)(x_2,y_2))\leq (K_Y + K_X) d_{out}^{X\times
Y}((x_1,y_1)(x_2,y_2)).
\end{align*}
For the other direction, fix $y_0\in Y$, let $p,q \in X$ and consider any path
$\morf{\gamma}{ [0,1] }{Z}$ such that $\gamma(0) = (p,y_0)$ and
$\gamma(1) = (q,y_0)$. Now $\gamma(t)= \big(\gamma_X(t),\gamma_Y(t)\big)$ where
$\morf{\gamma_X}{ [0,1] }{X}$ and $\morf{\gamma_Y}{ [0,1] }{Y}$ are
paths and $\gamma_X(0) = p$ and $\gamma_X(1) =q$. Now $l(\gamma)\geq
l(\gamma_X)$, hence $d_{in}^X(p,q) \leq d_{in}^Z((p,y_0),(q,y_0))$. Now
$Z$ is Lipschitz normally embedded, so there exists a $K>1$ such that
$d_{in}^Z(z_1,z_2)\leq K d_{out}(z_1,z_2)$ for all $z_1,z_2\in Z$. We also have that
$d_{out}^Z((p,y_0),(q,y_0))= d_{out}^X (p,q)$, since $X$ is embedded isometrically in $Z$ as
$X\times \{ y_0\}$. Hence $d_{in}^X(p,q) \leq K d_{out}^X (p,q)$. The
argument for $Y$ being Lipschitz normally embedded is the same
exchanging $X$ with $Y$.
\end{proof}
Another case we will need later is that of cones.
\begin{prop}\label{cone}
Let $M$ be a compact subset of the unit sphere and let $X$ be the cone over $M$. Then
$X$ is Lipschitz normally embedded if and only if $M$ is Lipschitz
normally embedded.
\end{prop}
\begin{proof}
We first prove that $M$ Lipschitz normally embedded implies that $X$
is Lipschitz normally embedded.
Let $x,y\in X$ and assume that $\norm{x}\geq\norm{y}$. First if $y=0$
(the cone point), then the straight line from $x$ to $y$ lies in $X$,
hence $d_{in}(x,y)=d_{out}(x,y)$. So we can assume that $y\neq 0$, and
let $y'=\tfrac{y}{\norm{y}}\norm{x}$. Then $y'$ and $x$ lie in the
same copy $M_\epsilon$
of $M$, hence $d_{in}(x,y')\leq K_M d_{out}(x,y')$. Now $y'$ lies on the
ray through $y$, so the segment $\overline{yy'}$ lies in $X$ and
$d_{in}(y,y')=d_{out}(y,y')=\norm{x}-\norm{y}$. By the reverse triangle
inequality, $d_{out}(y,y')=\norm{x}-\norm{y}\leq \norm{x-y}=d_{out}(x,y)$,
and hence also
$d_{out}(x,y')\leq d_{out}(x,y)+d_{out}(y,y')\leq 2d_{out}(x,y)$. This gives
us:
\begin{align*}
d_{in}(x,y) &\leq d_{in}(x,y')+ d_{in}(y',y) \leq
K_Md_{out}(x,y')+d_{out}(y',y)\\ &\leq (2K_M+1)d_{out}(x,y).
\end{align*}
For the other direction, assume that $X$ is Lipschitz normally
embedded, but $M$ is not Lipschitz normally embedded.
Since $M$ is compact the only obstructions to being Lipschitz normally
embedded are local. So let $p\in M$ be a point such that $M$ is not
Lipschitz normally embedded in a small open neighbourhood $U\subset M$ of $p$. By
Proposition \ref{product} we have that $U\times (1-\epsilon,1+\epsilon)$ is not
Lipschitz normally embedded, where $0<\epsilon<1$. Now the cone map
$\morf{c}{M\times [0,\infty)}{X}$, $c(m,t)=tm$, induces an outer (and therefore also
inner) bilipschitz equivalence of $U\times (1-\epsilon,1+\epsilon)$ with
$c\big(U\times (1-\epsilon,1+\epsilon)\big)$. Since both $U$ and
$\epsilon$ can be chosen to be arbitrarily small, we have that there
does not exist any small open neighbourhood of $p\in X$ that is
Lipschitz normally embedded, contradicting that $X$ is Lipschitz
normally embedded. Hence $X$ being Lipschitz normally embedded implies
that $M$ is Lipschitz normally embedded.
\end{proof}
\subsection*{Determinantal singularities}
Let $M_{m,n}$ be the space of $m\times n$ matrices with complex (or
real) entries. For $1\leq t\leq \min \{ m,n \}$ let $\M{t}$ denote the
\emph{model determinantal
singularity}, that is $\M{t} = \big{\{} A\in M_{m,n} \vert \operatorname{rank} A <
t \big{\}}$. $\M{t}$ is an algebraic variety, with algebraic
structure defined by the vanishing of all $t\times t$ minors. It is
homogeneous, and hence a real cone over its real link; it is also
a complex cone, but it is the real conical structure we will use.
It is highly singular with the singular set of $\M{t}$ being
$\M{t-1}$. In fact the action of the group $\operatorname{GL}_m\times\operatorname{GL}_n$ by
conjugation ensures that the decomposition $\M{t} = \bigcup_{i=1}^t
\M{i}-\M{i-1}$ is a Whitney stratification.
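For concreteness, the smallest nontrivial model (recorded here only as an illustration) is
\begin{align*}
M^{2}_{2,2}=\Bigg{\{} \left(
\begin{array}{@{} c c @{}}
x_{11} & x_{12} \\
x_{21} & x_{22}
\end{array}
\right)\in M_{2,2}\ \Big\vert\ x_{11}x_{22}-x_{12}x_{21}=0 \Bigg{\}},
\end{align*}
the set of $2\times 2$ matrices of rank at most $1$; it is a three dimensional quadric cone in $M_{2,2}\cong \mathbbm{C}^4$ with singular set $M^{1}_{2,2}=\{0\}$, and its codimension is $(2-2+1)(2-2+1)=1$, matching the codimension formula recalled below.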
Let $\morf{F}{\mathbbm{C}^N}{M_{m,n}}$ be a map with holomorphic entries, then
$X=F^{-1} (\M{t})$ is a \emph{determinantal variety} of type
$(m,n,t)$ if $\operatorname{codim} X = \operatorname{codim}
\M{t} = (m-t+1)(n-t+1)$. If $F(0)= 0$ we will call the germ $(X,0)$
a \emph{determinantal singularity} of type $(m,n,t)$.
Determinantal singularities can have quite bad singularities, hence one
often restricts to the following subset with better properties:
\begin{defn}
Let $X$ be a determinantal singularity defined by a map
$\morf{F}{\mathbbm{C}^N}{M_{m,n}}$. One says that $X$ is an \emph{essentially
isolated determinantal singularity} (EIDS for short) if $F$ is
transversal to the strata of $\M{t}$ at all points in a punctured
neighbourhood of the origin.
\end{defn}
While an EIDS can still have very bad singularities at the origin, its
singular points away from the origin only depend on the type and $N$;
for example $X-F^{-1}(\M{1})$ has a nice structure, being stratified by
$X^i= F^{-1}(\M{i}- \M{i-1})$. A lot of interesting singularities are
EIDS, for example all ICIS are EIDS.
In proving that our singularities are Lipschitz normally embedded,
we often have to change coordinates, to get some nice matrices for our
points. Hence we need the following lemma to see that these changes of
coordinates do preserve the inequalities we are using.
\begin{lemma}\label{chageofcoordinates}
Let $V\subset M_{m,n}$ be a subset invariant under linear change of
coordinates, and let $x,y\in V$ satisfy $d_{in}(x,y)\leq Kd_{out}(x,y)$. Then the same inequality, with the same constant $K$, holds after any change of coordinates by a pair $(A,B)\in\operatorname{GL}_m\times\operatorname{GL}_n$ of scalar multiples of unitary matrices; for an arbitrary pair $(A,B)$ it holds with $K$ replaced by $K\norm{A}_{op}\norm{A^{-1}}_{op}\norm{B}_{op}\norm{B^{-1}}_{op}$, where $\norm{\cdot}_{op}$ denotes the operator norm.
\end{lemma}
\begin{proof}
Any linear change of coordinates of $M_{m,n}$ is given by conjugation
by a pair of matrices $(A,B)\in\operatorname{GL}_m\times\operatorname{GL}_n$, acting by $x\mapsto AxB^{-1}$.
First we estimate the outer metric; since the Euclidean norm is only submultiplicative, in general we get a two-sided bound rather than an equality:
\begin{align*}
\frac{d_{out}(x,y)}{\norm{A^{-1}}_{op}\norm{B}_{op}}\leq d_{out}(AxB^{-1},AyB^{-1}) =\norm{A(x-y)B^{-1}}
\leq \norm{A}_{op}\norm{B^{-1}}_{op}d_{out}(x,y),
\end{align*}
with equalities throughout when $A$ and $B$ are scalar multiples of unitary matrices, because the Euclidean norm on $M_{m,n}$ is invariant under multiplication by a unitary matrix on either side.
Second we consider the case of length of curves. Let $P^V_{x,y}:=\{
\morf{\gamma}{[0,1]}{V}\ \vert\ \gamma\text{ is a
rectifiable curve such that } \gamma(0)=x
\text{ and } \gamma(1)=y\}$, then the conjugation $\gamma\to A\gamma B^{-1}$
defines a bijection of $P^V_{x,y}$ and $P^{AVB^{-1}}_{AxB^{-1},AyB^{-1}} =
P^V_{AxB^{-1},AyB^{-1}}$. Moreover, the analogous two-sided bound
$l(\gamma)/\big(\norm{A^{-1}}_{op}\norm{B}_{op}\big)\leq l(A\gamma B^{-1})\leq \norm{A}_{op}\norm{B^{-1}}_{op}\,l(\gamma)$
follows from the definition of the length of a curve, again with equalities in the unitary case. Taking the infimum over $P^V_{x,y}$ gives
\begin{align*}
d_{in}(AxB^{-1},AyB^{-1}) \leq \norm{A}_{op}\norm{B^{-1}}_{op}\, d_{in}(x,y).
\end{align*}
Combining the two estimates yields the general claim; when $A$ and $B$ are scalar multiples of unitary matrices both metrics are scaled by the same factor $\norm{A}_{op}\norm{B^{-1}}_{op}$, so the constant $K$ is unchanged.
\end{proof}
\section{The case of the model determinantal singularities}\label{modelcase}
In this section we prove that $\M{t}$ is Lipschitz normally
embedded. We do that by considering several cases for the position of
two points $p,q\in \M{t}$, and finding inequalities of the form
$d_{in}(p,q) \leq K d_{out}(p,q)$, where we explicitly give the value
of $K$. First we consider the simple case
where $q=0$.
\begin{lemma}\label{0case}
Let $p\in\M{t}$ then $d_{in}(p,0)=d_{out}(p,0)$.
\end{lemma}
\begin{proof}
This follows since $\M{t}$ is conical, and hence the straight line
from $p$ to $0$ lies in $\M{t}$.
\end{proof}
The second case we consider is when $p$ and $q$ are orthogonal. This
case is not much more complicated than the case $q=0$.
\begin{lemma}\label{orthogonalcase}
Let $p,q\in\M{t}$ such that $\langle p,q\rangle = 0$. Then
$d_{in}(p,q) \leq 2d_{out}(p,q)$.
\end{lemma}
\begin{proof}
That $\langle p,q \rangle = 0$ implies
that the line from $p$ to $q$, the line from $p$ to $0$ and the line
from $q$ to $0$ form a right triangle with the line from $p$ to
$q$ as the hypotenuse. Hence $d_{out}(p,0)\leq d_{out}(p,q)$ and
$d_{out}(q,0)\leq d_{out}(p,q)$, this the gives that:
\begin{align*}
d_{in}(p,q)\leq d_{in}(p,0)+ d_{in}(q,0) = d_{out}(p,0)+ d_{out}(q,0)
\leq 2 d_{out}(p,q).
\end{align*}
\end{proof}
The last case we need to consider is the case where $p$ and $q$ are
not orthogonal. This case is a little more complicated and we need to
do the proof by induction.
\begin{lemma}\label{generalcase}
Let $p,q\in\M{t}$ such that $\langle p,q \rangle \neq 0$. Then
$d_{in}(p,q)\leq 2\operatorname{rank}(p)d_{out}(p,q)$.
\end{lemma}
\begin{proof}
The proof is by induction on $t$, considering $\M{t}$ as the member
$M^{t}_{m'+t,n'+t}$ of a family indexed by $t$ (with $m'=m-t$, $n'=n-t$ fixed). The base case is $\M{1}=\{
0\}$, which trivially satisfies the inequality.
So we assume the theorem is true for $M^{t-1}_{m-1,n-1}$.
By a change of coordinates we can assume that $p$ and $q$ have the
following forms (one can check that such a reduction can be achieved by unitary changes of coordinates, which by Lemma \ref{chageofcoordinates} do not affect the constants):
\begin{align*}
p=\left(
\begin{array}{@{} c | c @{}}
p_{11} & \begin{matrix} p_{12} & \dots & p_{1n} \end{matrix} \\ \hline
\begin{matrix} 0 \\ \vdots \\ 0 \end{matrix} & \text{\Huge $D_p$}
\end{array}
\right)\text{ and } q=\left(
\begin{array}{@{} c | c @{}}
q_{11} & \begin{matrix} 0 & \dots & 0 \end{matrix} \\ \hline
\begin{matrix} q_{21} \\ \vdots \\ q_{m1} \end{matrix} & \text{\Huge $D_q$}
\end{array}
\right)
\end{align*}
where $p_{11},q_{11}\neq 0$ and $D_p,D_q\in M_{m-1,n-1}$. Then let
$p'$, $q'$ and $q_0$ be the following points:
\begin{align*}
p'=\left(
\begin{array}{@{} c | c @{}}
q_{11} & \begin{matrix} 0 & \dots & 0 \end{matrix} \\ \hline
\begin{matrix} 0 \\ \vdots \\ 0 \end{matrix} & \text{\Huge $D_p$}
\end{array}
\right),\ q'=\left(
\begin{array}{@{} c | c @{}}
q_{11} & \begin{matrix} 0 & \dots & 0 \end{matrix} \\ \hline
\begin{matrix} 0 \\ \vdots \\ 0 \end{matrix} & \text{\Huge $D_q$}
\end{array}
\right)\text{ and }
q_0=\left(
\begin{array}{@{} c | c @{}}
q_{11} & \begin{matrix} 0 & \dots & 0 \end{matrix} \\ \hline
\begin{matrix} 0 \\ \vdots \\ 0 \end{matrix} & \text{\Huge $0$}
\end{array}
\right).
\end{align*}
It is clear that $p',q'\in \M{t}$, moreover the straight line
$\overline{pp'}$ from $p$ to $p'$ is in $\M{t}$, and the straight line
$\overline{qq'}$ from $q$ to $q'$ is in $\M{t}$. Hence
$d_{in}(p,p')=d_{out}(p,p')$ and $d_{in}(q,q')=d_{out}(q,q')$. Let
$H_{q_0}$ be the affine space through $q_0$ defined as
\begin{align*}
H_{q_0} := \Bigg{\{} \left(
\begin{array}{@{} c | c @{}}
q_{11} & \begin{matrix} 0 & \dots & 0 \end{matrix} \\ \hline
\begin{matrix} 0 \\ \vdots \\ 0 \end{matrix} & \text{\Huge $A$}
\end{array}
\right)\in M_{m,n} \Big\vert \text{ where } A\in M_{m-1,n-1} \Bigg{\}}.
\end{align*}
It is clear that $p',q'\in H_{q_0}$ and hence $p',q'\in H_{q_0}\bigcap
\M{t}$. Now $H_{q_0}\bigcap \M{t}$ is isomorphic to
$M^{t-1}_{m-1,n-1}$, and we get by induction $d_{in}(p',q') \leq
2\operatorname{rank} D_p d_{out}(p',q') = 2(\operatorname{rank} p-1) d_{out}(p',q') $.
We now have that:
\begin{align}\label{inequalitygeneric}
d_{in}(p,q) &\leq d_{in}(p,q')+ d_{in}(q',q) \leq d_{in}(p,p') +
d_{in}(p',q')+ d_{in}(q',q) \\ &\leq d_{out}(p,p') + 2(\operatorname{rank}
p-1)d_{out}(p',q') + d_{out}(q',q).\nonumber
\end{align}
The line $\overline{pp'}$ is in the direction $p-p'$ and the line
$\overline{p'q '}$ is in the direction $q'-p'$. These direction are
\begin{align*}
p-p'=\left(
\begin{array}{@{} c | c @{}}
p_{11}-q_{11} & \begin{matrix} p_{12} & \dots & p_{1n} \end{matrix} \\ \hline
\begin{matrix} 0 \\ \vdots \\ 0 \end{matrix} & \text{\Huge $0$}
\end{array}
\right)\text{ and } q'-p'=\left(
\begin{array}{@{} c | c @{}}
0 & \begin{matrix} 0 & \dots & 0 \end{matrix} \\ \hline
\begin{matrix} 0 \\ \vdots \\ 0 \end{matrix} & \text{\Huge
$D_q-D_p$}
\end{array}
\right),
\end{align*}
hence $\overline{pp'}$ and $\overline{p'q '}$ are orthogonal. This
implies that the straight line $\overline{pq'}$ is the hypotenuse of a
right triangle given by $p,p'$ and $q'$. We therefore have that
$d_{out}(p,p') \leq d_{out}(p,q')$ and $d_{out}(p',q') \leq
d_{out}(p,q')$.
Likewise we have that the line $\overline{pq'}$ is in the direction
$p-q'$ and the line $\overline{qq '}$ is in the direction
$q-q'$. These direction are
\begin{align*}
p-q'=\left(
\begin{array}{@{} c | c @{}}
p_{11}-q_{11} & \begin{matrix} p_{12} & \dots & p_{1n} \end{matrix} \\ \hline
\begin{matrix} 0 \\ \vdots \\ 0 \end{matrix} & \text{\Huge $D_p-D_q$}
\end{array}
\right)\text{ and } q-q'=\left(
\begin{array}{@{} c | c @{}}
0 & \begin{matrix} 0 & \dots & 0 \end{matrix} \\ \hline
\begin{matrix} q_{21} \\ \vdots \\ q_{m1} \end{matrix} & \text{\Huge
$0$}
\end{array}
\right),
\end{align*}
so $\overline{pq'}$ and $\overline{qq'}$ are orthogonal. Hence we
have that $p,q$ and $q'$ form a right triangle with $\overline{pq}$ as
hypotenuse, which implies that $d_{out}(p,q') \leq d_{out}(p,q)$ and
$d_{out}(q,q') \leq d_{out}(p,q)$. When we combine this with the
previous
paragraph it follows that $d_{out}(p,p'), d_{out}(q',p'), d_{out}(q,q') \leq
d_{out}(p,q)$, and then using this in inequality
\eqref{inequalitygeneric} the result follows.
\end{proof}
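To illustrate the construction in the proof, consider the simplest nontrivial case $t=m=n=2$ and the rank one matrices
\begin{align*}
p=\left(
\begin{array}{@{} c c @{}}
1 & 1 \\
0 & 0
\end{array}
\right)\text{ and } q=\left(
\begin{array}{@{} c c @{}}
1 & 0 \\
1 & 0
\end{array}
\right),
\end{align*}
which are already in the normal form above, with $\langle p,q\rangle =1$ and $D_p=D_q=0$. Here $p'=q'$ is the matrix with the single nonzero entry $q_{11}=1$, both segments $\overline{pp'}$ and $\overline{q'q}$ stay inside $M^{2}_{2,2}$, and one obtains $d_{in}(p,q)\leq 2$ while $d_{out}(p,q)=\sqrt{2}$, consistent with the bound $d_{in}(p,q)\leq 2\operatorname{rank}(p)d_{out}(p,q)$ of Lemma \ref{generalcase}.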
We have now considered all possible pairs of $p,q$, and we can then
combine the results to get the main theorem.
\begin{thm}
The model determinantal singularity $\M{t}$ is Lipschitz normally
embedded, with a bilipschitz constant $2t-2$.
\end{thm}
\begin{proof}
Let $p,q\in\M{t}$. If $\langle p,q \rangle=0$ then $d_{in}(p,q)\leq
2d_{out}(p,q)$ by Lemma
\ref{orthogonalcase}. If $\langle p,q \rangle \neq 0$ then
$d_{in}(p,q)\leq 2\operatorname{rank}(p)d_{out}(p,q)$ by Lemma
\ref{generalcase}. Hence in all cases $d_{in}(p,q)\leq (2t-2)d_{out}(p,q)$
since $\operatorname{rank} p\leq t-1$.
\end{proof}
\section{The general case}\label{secgeneralcase}
The case of a general determinantal singularity is much more difficult
than the case of a model one. One can in general not expect a determinantal
singularity to be Lipschitz normally embedded, the easiest way to see
this is to note that all ICIS are determinantal, and that there are
many ICIS that are not Lipschitz normally embedded. For example
among the simple surface singularities $A_n$, $D_n$, $E_6$, $E_7$ and
$E_8$ only the $A_n$'s are Lipschitz normally embedded. Since the
structure of determinantal singularities does not give us any new
tools to study ICIS, we will probably not be able to say when an ICIS
is Lipschitz normally embedded. This means that since $F^{-1}(\M{1})$
is often an ICIS, we probably have to assume it is Lipschitz normally
embedded to say
anything about whether $F^{-1}(\M{t})$ is Lipschitz normally
embedded. But before we discuss such assumption further, we will give
some examples of determinantal singularities that fails to be
Lipschitz normally embedded.
\begin{ex}\label{degenerationofcusps}
Let $X$ be the determinantal singularity of type $(3,3,3)$ given by
the following map $\morf{F}{\mathbbm{C}^3}{M_{3,3}}$:
\begin{align*}
F(x,y,z)=\left(
\begin{array}{@{} c c c @{}}
x & 0 & z \\
y & x & 0 \\
0 & y & x
\end{array}
\right).
\end{align*}
Since this is a linear embedding of $\mathbbm{C}^3$ into $\mathbbm{C}^9$, one can see
$X$ as an intersection of a linear subspace and $M^{3}_{3,3}$. Hence
one would expect it to be a nice space. On the other hand
$X=V(x^3+y^2z)$, hence it is a family of cusps degenerating to a
line, or, seen another way, a cone over a cusp. But $X$ being
Lipschitz normally embedded would imply that the cusp $x^3-y^2=0$
is Lipschitz normally embedded by
Proposition \ref{cone}, which we know it is
not by the work of Pham and Teissier \cite{phamteissier}.
\end{ex}
Notice that in Example \ref{degenerationofcusps},
$X^1=F^{-1}(M^{1}_{3,3})$ is a point and $X^2=F^{-1}(M^{2}_{3,3})$ is
a line, so both $X^1$ and $X^2$ are Lipschitz normally embedded. So it
does not in general follow that if $X^i$ is Lipschitz normally
embedded then $X^{i+1}$ is. Now the singularity in Example
\ref{degenerationofcusps} is not an EIDS, $F$ is not transverse to the
strata of $M^3_{3,3}$ at points on the $z$-axis. In the next example
we will see that EIDS is not enough either.
\begin{ex}[Simple Cohen-Macaulay codimension 2 surface singularities]\label{scmc2ss}
In \cite{fruhbiskrugerneumer} Fr\"uhbis-Kr\"uger and Neumer classify
simple Cohen-Macaulay codimension 2 singularities. They are all EIDS of
type $(3,2,2)$, and the surfaces correspond to the
rational triple points classified by Tjurina \cite{tjurina}. We will
look closer at two such families. First we have the family
given by the matrices:
\begin{align*}
\left(
\begin{array}{@{} c c c @{}}
z & y+w^l & w^m \\
w^k & y & x
\end{array}
\right).
\end{align*}
This family corresponds to the family of triple points in
\cite{tjurina} called $A_{k-1,l-1,m-1}$. Tjurina shows that the dual
resolution graphs of their minimal resolutions are:
$$
\xymatrix@R=6pt@C=24pt@M=0pt@W=0pt@H=0pt{
&&\\
\overtag{\lower.2pt\hbox to 3.5pt{\hss$\circ$\hss}}{-2}{8pt}\dashto[rr] &
{\hbox to 0pt{\hss$\underbrace{\hbox to 80pt{}}$\hss}}&
\overtag{\lower.2pt\hbox to 3.5pt{\hss$\circ$\hss}}{-2}{8pt}\lineto[r] &
\overtag{\lower.2pt\hbox to 3.5pt{\hss$\circ$\hss}}{-3}{8pt}\lineto[r]\lineto[d] &
\overtag{\lower.2pt\hbox to 3.5pt{\hss$\circ$\hss}}{-2}{8pt}\dashto[rr] &
{\hbox to 0pt{\hss$\underbrace{\hbox to 80pt{}}$\hss}}&
\overtag{\lower.2pt\hbox to 3.5pt{\hss$\circ$\hss}}{-2}{8pt}\\
&{k-1}&& \righttag{\lower.2pt\hbox to 3.5pt{\hss$\circ$\hss}}{-2}{8pt}\dashto[dddd] &&{l-1}\\
&&&&\\
&&&&\blefttag{\quad}{m-1\begin{cases} \quad \\
\ \\ \ \end{cases}}{10pt} & \\
&&&&\\
&&& \righttag{\lower.2pt\hbox to 3.5pt{\hss$\circ$\hss}}{-2}{8pt} & .\\
&&}$$
Using Remark 2.3 of \cite{spivakovsky} we see that these singularities
are minimal, and hence by the result of \cite{normallyembedded} we get
that they are Lipschitz normally embedded.
The second family is given by the matrices:
\begin{align*}
\left(
\begin{array}{@{} c c c @{}}
z & y+w^l & xw \\
w^k & x & y
\end{array}
\right).
\end{align*}
Tjurina calls this family $B_{2l,k-1}$ and gives the dual resolution
graphs of their minimal resolutions as:
$$
\xymatrix@R=6pt@C=24pt@M=0pt@W=0pt@H=0pt{
&&&& \overtag{\lower.2pt\hbox to 3.5pt{\hss$\circ$\hss}}{-2}{8pt} &\\
&&&&&\\
\overtag{\lower.2pt\hbox to 3.5pt{\hss$\circ$\hss}}{-2}{8pt}\dashto[rr] &
{\hbox to 0pt{\hss$\underbrace{\hbox to 65pt{}}$\hss}}&
\overtag{\lower.2pt\hbox to 3.5pt{\hss$\circ$\hss}}{-2}{8pt}\lineto[r] &
\overtag{\lower.2pt\hbox to 3.5pt{\hss$\circ$\hss}}{-3 \hspace{7pt}}{8pt}\lineto[r] &
\overtag{\lower.2pt\hbox to 3.5pt{\hss$\circ$\hss}}{-2 \hspace{20pt}}{8pt}\lineto[r] \lineto[uu]&
\overtag{\lower.2pt\hbox to 3.5pt{\hss$\circ$\hss}}{-2}{8pt}\dashto[rr]
&{\hbox to 0pt{\hss$\underbrace{\hbox to 80pt{}}$\hss}}&
\overtag{\lower.2pt\hbox to 3.5pt{\hss$\circ$\hss}}{-2}{8pt}\\
&2l&&& &&k-3.& \\
&&}$$
Following Spivakovsky this is not a minimal singularity, and since it
is rational according to Tjurina it is not Lipschitz normally embedded
by the result of \cite{normallyembedded}.
These two families do not look very different but one is Lipschitz
normally embedded and the other is not. We can do the same for all the
Cohen-Macaulay codimension 2 surfaces, and using the results in
\cite{normallyembedded} that rational surface singularities are
Lipschitz normally embedded if and only if they are minimal, we get
that only the family $A_{k-1,l-1,m-1}$ is Lipschitz normally
embedded. This is similar to the case of codimension 1, since only the
$A_n$ singularities are Lipschitz normally embedded among the simple
singularities.
\end{ex}
So as we see in Example \ref{scmc2ss}, being an EIDS whose singular
set is Lipschitz normally embedded is not enough to ensure the variety is
Lipschitz normally embedded. One should notice that the varieties
in Examples \ref{degenerationofcusps} and \ref{scmc2ss} are both
defined by maps $\morf{F}{\mathbbm{C}^N}{M_{m,n}}$ where $N<mn$. This
means that one should think of the singularity as a section of
$\M{t}$, but being a subspace of a Lipschitz normally embedded space
does not imply the Lipschitz normally embedded condition.
If $N\geq mn$ then
one can think about the singularity being a fibration over $\M{t}$,
and as we saw in Proposition \ref{product} products of Lipschitz
normally embedded spaces are Lipschitz normally embedded. Now in this
case $X^1=F^{-1}(\M{1})$ is an ICIS if $X$ is an EIDS, which means that we
probably cannot say anything general about whether it is Lipschitz
normally embedded or not. So natural assumptions would be to assume
that $X$ is an EIDS and that $X^1$ is Lipschitz normally embedded.
\section*{Acknowledgements}
The authors would like to thank Anne Pichon for first letting us know
about Asaf Shachar's question on Mathoverflow.org, and later pointing
out a mistake in the proof of the first version of Lemma
\ref{generalcase}. We would also like to thank Lev Birbrair for
sending us an early version of the paper by Katz, Katz, Kerner,
Liokumovich and Solomon \cite{kerneretc}, and encouraging us to work on
the problem. The first author was supported by FAPESP grant
2015/08026-4 and the second author was partially supported by FAPESP
grant 2014/00304-2 and CNPq grant 306306/2015-8.
\subsection{Hypersingular Riesz gases} \label{sec-gensetting}
Let $d \geq 1$ and $s$ be a real number with $s > d$. We consider a system of $N$ points in the Euclidean space $\Rd$ with \textit{hypersingular} Riesz pairwise interactions, in an external field $V$. The particles are assumed to live in a \textit{confinement set} $\Omega \subseteq \mathbb{R}^d$. The energy $\mathcal{H}_N(\vec{X}_N)$ of the system in a given state $\vec{X}_N = (x_1, \dots, x_N) \in (\mathbb{R}^d)^N$ is defined to be
\begin{equation} \label{def:HN}
\mathcal{H}_N(\vec{X}_N) := \sum_{1 \leq i \neq j \leq N} \frac{1}{|x_i-x_j|^s} + N^{s/d} \sum_{i=1}^N V(x_i).
\end{equation}
The external field $V$ is a confining potential, growing at infinity, on which we shall make assumptions later. The term \textit{hypersingular} corresponds to the fact that the singularity of the kernel $|x-y|^{-s}$ is non-integrable with respect to the Lebesgue measure on $\Rd$.
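Let us point out the heuristic behind the scalings in \eqref{def:HN}: if the $N$ particles spread over a bounded set with typical interparticle distance $N^{-1/d}$, the nearest-neighbor interactions alone contribute
$$
N \times \big(N^{-1/d}\big)^{-s} = N^{1+s/d},
$$
and the factor $N^{s/d}$ in front of $V$ makes the confinement term $N^{s/d}\sum_{i=1}^N V(x_i)$ of the same order $N^{1+s/d}$, so that the two terms compete.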
For any $\beta > 0$, the canonical Gibbs measure associated to \eqref{def:HN} at inverse temperature $\beta$ and for particles living on $\Omega$ is given by
\begin{equation}\label{def:PNbeta}
d\mathbb{P}_{N,\beta}(\vec{X}_N) = \frac{1}{\ZNbeta} \exp\left( - \beta N^{-s/d} \mathcal{H}_N(\vec{X}_N)\right) \mathbf{1}_{\Omega^N}(\vec{X}_N) d\vec{X}_N,
\end{equation}
where $d\vec{X}_N$ is the Lebesgue measure on $(\mathbb{R}^d)^N$, $\mathbf{1}_{\Omega^N}(\vec{X}_N)$ is the indicator function of $\Omega^N$, and $\ZNbeta$ is the “partition function”; i.e., the normalizing factor
\begin{equation}
\label{def:ZNbeta} \ZNbeta := \int_{\Omega^N} \exp\left( - \beta N^{-s/d} \mathcal{H}_N(\vec{X}_N)\right) d\vec{X}_N.
\end{equation}
We will call the statistical physics system described by \eqref{def:HN} and \eqref{def:PNbeta} a \medskip \textit{hypersingular Riesz gas}.
For Riesz potentials in the case $s>d$, ground state configurations (or Riesz energy minimizers) of $N$-particle systems (with or without the external field $V$) have been extensively studied in the large $N$ limit, see \cite{HSAdv, HSNotices, Hardin:2016kq} and the references therein.
Furthermore, for the case of positive temperature, the statistical mechanics of Riesz gases have been investigated in \cite{LebSer} but for a different range of the parameter $s$, namely $\max(d-2, 0) \leq s < d$. In that paper, a large deviation principle for the empirical process (which encodes the microscopic behavior of the particles at scale $N^{-1/d}$, averaged in a certain way) was derived. The main goal of the present paper is to extend that work to the hypersingular case. By combining the approaches of the above mentioned papers we obtain a large deviation principle describing macroscopic as well as microscopic properties for hypersingular Riesz gases.
Studying Riesz interactions for the whole range of $s$ from $0$ to infinity is of interest in approximation theory and coding theory, as it connects logarithmic interactions, Coulomb interactions, and (in the limit $s \to \infty$) packing problems, see \cite{HSNotices,saff1997distributing}. Investigating such systems with temperature is also a natural question for statistical mechanics, as it improves our understanding of the behavior of systems with long-range vs. short-range interactions (see, for instance, \cite{draw,cdr,MR2673930} where the interest of such questions is stressed and \cite{bloom2016large} and \cite[Section 4.2]{mazars2011long} for additional results). Analyzing the case $s > d$ is also a first step toward the study of physically more relevant interactions such as the Lennard-Jones \medskip potential.
The hypersingular Riesz case $s>d$ and the integrable Riesz case $s<d$ have important differences.
For $s < d$ (which can be thought of as long-range) and, more generally, for integrable interaction kernels $g$ (which includes regular interactions),
the global, macroscopic behavior can be studied using classical potential theory. Namely, the empirical measure $\frac{1}{N} \sum_{i=1}^N \delta_{x_i}$ is found to converge rapidly to some equilibrium measure determined uniquely by $\Omega$ and $V$ and obtained as the unique minimizer of the potential-theoretic functional
\begin{equation*}
\iint_{\mathbb{R}^d \times \mathbb{R}^d} g(x-y) d\mu(x) d\mu(y) + \int_{\mathbb{R}^d} V d\mu
\end{equation*}which can be seen as a mean-field energy with a non-local term.
We refer e.g. to \cite{safftotik} or \cite[Chap. 2]{MR3309890} for a treatment of this question (among others).
In these integrable cases, if temperature is scaled in the same way as here,
the macroscopic behavior is governed by the equilibrium measure and thus is independent of the temperature, so that no knowledge of the microscopic distribution of points is necessary to determine the macroscopic distribution. At the next order in energy, which governs the microscopic distribution of the points, a dependency on $\beta$ appears. As seen in \cite{LebSer}, in the Coulomb and potential Riesz cases (it is important in the method that the interaction kernel be reducible to the kernel of a local operator, which is known only for these particular interactions), the microscopic distribution around a point is given by a problem in a whole space with a neutralizing background, fixing the local density as equal to that of the equilibrium measure at that point. The microscopic distribution is found to minimize the sum of a (renormalized) Riesz energy term and a relative entropy term. A crucial ingredient in the proof is a ``screening'' construction showing that the energy can be computed additively over large disjoint microscopic boxes; i.e., interactions between configurations in different large microscopic boxes are negligible to this order.
The hypersingular case can be seen as more delicate than the integrable case due to the absence of an equilibrium measure. The limit of the empirical measure has to be identified differently. In the case of ground state configurations (minimizers), this was done in \cite{Hardin:2016kq}. For positive temperature, in contrast with the above described integrable case, we shall show that the limit of the empirical measure is obtained as a by-product of the study at the microscopic scale and depends on $\beta$ in quite an indirect way (see Theorem \ref{theo:LDPmesure}). The microscopic profiles minimize a full-space version of the problem, giving an energy that depends on the local density, and the macroscopic distribution can then be found by a local density approximation, by minimizing the sum of its energy and that due to the confinement potential. Since the energy is easily seen to scale like $N^{1+s/d}$, the choice of the temperature scaling $\beta N^{-s/d}$ is made so that the energy and the entropy for the microscopic distributions carry the same weight of order $N$. Other choices of temperature scalings are possible, but would lead to degenerate versions of the situation we are examining, with either all the entropy terms asymptotically disappearing for small temperatures, or the effect of the energy altogether disappearing for large temperatures.
Note that going to the microscopic scale in order to derive the behavior at the macroscopic scale was already the approach needed in \cite{leble2015large} for the case of the ``two-component plasma", a system of two-dimensional particles of positive and negative charges interacting logarithmically for which no a priori knowledge of the equilibrium measure can be found.
On the other hand, the hypersingular case is also easier in the sense that the interactions decay faster at infinity, implying that long-range interactions between large microscopic ``boxes'' are negligible and do not require any sophisticated screening procedures. Our proofs will make crucial use of this ``self-screening'' property.
To describe the system at the microscopic scale, we define a Riesz energy $\overline{\mathbb{W}}_s$ (see subsection~\ref{sec:energyerandompoint}) for infinite random point configurations which is the counterpart of the renormalized energy of \cite{petrache2014next,LebSer,leble2016logarithmic} (defined for $s < d$). It is conjectured to be minimized by lattices for certain low dimensions, but this is a completely open problem with the exception of dimension 1 (see \cite{blanc2015crystallization} and the discussion following \eqref{perEnLim}).
To any sequence of configurations $\{\vec{X}_N\}_N$, we associate an ``empirical process" whose limit (a random tagged point process) describes the point configurations $\vec{X}_N$ at scale $N^{-1/d}$. Our main result will be that there is a Large Deviations Principle for the law of this empirical process with rate function equal to (a variant of) the energy $\beta \overline{\mathbb{W}}_s$ plus the relative entropy of the empirical process with respect to the Poisson point process.
For minimizers of the Riesz energy $\mathcal{H}_N$, we show that the limiting empirical processes must minimize $\overline{\mathbb{W}}_s$, thus describing their microscopic \medskip structure.
The question of treating more general interactions than the Riesz ones remains widely open. The fact that the interaction has a precise homogeneity under rescaling is crucial for the hypersingular case treated here. On the other hand, in the integrable case, we do not know how to circumvent the need for expressing the energy via the potential generated by the points; i.e., the need for the Caffarelli-Silvestre representation of the interaction as the kernel of a local operator (achieved by adding a space dimension).
\subsection{Assumptions and notation}
\label{sec:assumptions}
\subsubsection{Assumptions}
In the rest of the paper, we assume that $\Omega\subset \mathbb{R}^d$ is closed with positive $d$-dimensional Lebesgue measure and that
\begin{align} \label{ass:regAc}
&\text{$\partial \Omega$ is $C^1$,} \\
\label{ass:regV} & \text{$V$ is a continuous, non-negative real valued function on $\Omega$.}
\end{align}
Furthermore if $\Omega$ is unbounded, we assume that
\begin{align} \label{ass:croissanceV}
& \lim_{|x| \rightarrow \infty} V(x) = + \infty, \\
\label{ass:integr} & \exists M > 0 \text { such that } \int \exp\left(- M V(x)\right) dx < + \infty.
\end{align}
The assumption \eqref{ass:regAc} on the regularity of $\partial \Omega$ is mostly technical and we believe that it could be relaxed to e.g. $\partial \Omega$ is locally the graph of, say, a Hölder function in $\mathbb{R}^d$. However it is unclear to us what the minimal assumption could be (e.g., is it enough to assume that $\partial \Omega$ has zero measure?). An interesting direction would be to study the case where $\Omega$ is a $p$-rectifiable set in $\mathbb{R}^d$ with $p < d$ (see e.g. \cite{Borodachov:2016kx, Hardin:2016kq}).
Assumption \eqref{ass:regV} is quite mild (in comparison e.g. with the corresponding assumption in the $s < d$ case, where one wants to ensure some regularity of the so-called equilibrium measure, which is essentially two orders lower than that for $V$) and we believe it to be sharp for our purposes. Assumption \eqref{ass:croissanceV} is an additional confinement assumption, and \eqref{ass:integr} ensures that the partition function $\ZNbeta$, defined in \eqref{def:ZNbeta}, is finite (at least for $N$ large enough). Indeed the interaction energy is non-negative, hence for $N$ large enough \eqref{ass:integr} ensures that the integral defining the partition function is convergent.
\subsubsection{General notation}
We let $\config$ be the space of point configurations in $\mathbb{R}^d$ (see Section \ref{sec:pointconfig} for a precise definition). If $X$ is some measurable space and $x \in X$ we denote by $\delta_x$ the Dirac mass at $x$.
\subsubsection{Empirical measure and empirical processes}
Let $\vec{X}_N = (x_1, \dots, x_N)$ in $\Omega^N$ be fixed.
\begin{itemize}
\item We define the empirical measure $\mathrm{emp}(\vec{X}_N)$ as
\begin{equation} \label{def:emp}
\mathrm{emp}(\vec{X}_N) := \frac{1}{N} \sum_{i=1}^N \delta_{x_i}.
\end{equation}
It is a probability measure on $\Omega$.
\item We define $\vec{X}_N'$ as the finite configuration rescaled by a factor $N^{1/d}$
\begin{equation} \label{def:ompN}
\vec{X}_N' := \sum_{i=1}^N \delta_{N^{1/d} x_i}.
\end{equation}
It is a point configuration (an element of $\config$), which represents the $N$-tuple of particles $\vec{X}_N$ seen at microscopic scale.
\item We define the \textit{tagged empirical process} $\overline{\Emp}_N(\vec{X}_N)$ as
\begin{equation}
\label{def:Emp}
\overline{\Emp}_N(\vec{X}_N) := \int_{\Omega} \delta_{\left(x,\, \theta_{N^{1/d}x} \cdot \vec{X}_N' \right)} dx,
\end{equation}
where $\theta_x$ denotes the translation by $- x$. It is a positive measure on $\Omega \times \config$. \\
\end{itemize}
Let us now briefly explain the meaning of the last definition \eqref{def:Emp}.
For any $x \in \Omega$, $\theta_{N^{1/d}x} \cdot \vec{X}_N'$ is an element of $\config$ which represents the $N$-tuple of particles $\vec{X}_N$ centered at $x$ and seen at microscopic scale (or, equivalently, seen at microscopic scale and then centered at $N^{1/d} x$). In particular any information about this point configuration in a given ball (around the origin) translates into information about $\vec{X}_N'$ around $x$. We may thus think of $\theta_{N^{1/d}x} \cdot \vec{X}_N'$ as encoding the behavior of $\vec{X}_N'$ around $x$.
The measure
\begin{equation} \label{empiricalfield}
\int_{\Omega} \delta_{\theta_{N^{1/d}x} \cdot \vec{X}_N'} dx
\end{equation}
is a measure on $\config$ which encodes the behavior of $\vec{X}_N'$ around each point $x \in \Omega$. We may think of it as the “averaged” microscopic behavior (although it is not, in general, a probability measure, and its mass can be infinite). The measure defined by \eqref{empiricalfield} would correspond to what is called the “empirical field”.
The tagged empirical process $\overline{\Emp}_N(\vec{X}_N)$ is a finer object, because for each $x \in \Omega$ we keep track of the centering point $x$ as well as of the microscopic information $\theta_{N^{1/d}x} \cdot \vec{X}_N'$ around $x$. It yields a measure on $\Omega \times \config$ whose first marginal is the Lebesgue measure on $\Omega$ and whose second marginal is the (non-tagged) empirical process defined above in \eqref{empiricalfield}. Keeping track of this additional information allows one to test $\overline{\Emp}_N(\vec{X}_N)$ against functions $F(x, \mathcal{C}) \in C^0(\Omega \times \config)$ which may be of the form
$$
F(x, \mathcal{C}) = \chi(x) \tilde{F}(\mathcal{C}),
$$
where $\chi$ is a smooth function localized in a small neighborhood of a given point of $\Omega$, and $\tilde{F}(\mathcal{C})$ is a continuous function on the space of point configurations. Using such test functions, we may thus study the microscopic behavior of the system after a small average (on a small domain of $\Omega$), whereas the empirical process only allows one to study the microscopic behavior after averaging over the whole $\Omega$.
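Concretely, unwinding the definition \eqref{def:Emp} for such a product test function gives
$$
\int_{\Omega \times \config} F \, d\overline{\Emp}_N(\vec{X}_N) = \int_{\Omega} \chi(x)\, \tilde{F}\big(\theta_{N^{1/d}x} \cdot \vec{X}_N'\big)\, dx,
$$
so that choosing $\chi$ supported near a given point of $\Omega$ localizes the average of $\tilde{F}$ to the microscopic configurations around that point.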
The study of empirical processes, or \textit{empirical fields}, as natural quantities to encode the averaged microscopic behavior appears e.g. in \cite{FollmerOrey} for particles without interaction or \cite{Georgii1} in the interacting case.
\subsubsection{Large deviations principle}
Let us recall that a sequence $\{\mu_N\}_N$ of probability measures on a metric space $X$ is said to satisfy a Large Deviation Principle (LDP) at speed $r_N$ with rate function $I : X \to [0, +\infty]$ if the following holds for any Borel set $A \subset X$
$$
- \inf_{\mathring{A}} I \leq \liminf_{N \rightarrow \infty}\frac{1}{r_N} \log \mu_N(A) \leq \limsup_{N \rightarrow \infty}\frac{1}{r_N} \log \mu_N(A) \leq - \inf_{\bar{A}} I,
$$
where $\mathring{A}$ (resp. $\bar{A}$) denotes the interior (resp. the closure) of $A$. The functional $I$ is said to be a {\it good rate function} if it is lower semi-continuous and has compact sub-level sets. We refer to \cite{MR2571413} and \cite{Varadhan2016} for detailed treatments of the theory of large deviations and to \cite{MR3309619} for an introduction to the applications of LDP's in the statistical physics setting.
Roughly speaking, a LDP at speed $r_N$ with rate function $I$ expresses the following fact: the probability measures $\mu_N$ concentrate around the points where $I$ vanishes, and any point $x \in X$ such that $I(x) > 0$ is only “seen” with a probability of order $\exp(- r_N I(x))$.
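A classical example may help fix ideas (it is independent of the rest of the paper): if $\mu_N$ is the law of the empirical mean of $N$ independent fair coin flips, Cramér's theorem states that $\{\mu_N\}_N$ satisfies a LDP at speed $N$ with good rate function
$$
I(x) = \log 2 + x \log x + (1-x)\log(1-x), \quad x \in [0,1],
$$
which vanishes only at $x = \tfrac{1}{2}$; deviations of the empirical mean away from $\tfrac{1}{2}$ thus have exponentially small probability.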
\subsection{Main results}
\subsubsection{Large deviations of the empirical processes}
We let $\overline{\mathfrak{P}}_{N, \beta}$ be the push-forward of the Gibbs measure $\mathbb{P}_{N,\beta}$ (defined in \eqref{def:PNbeta}) by the map $\overline{\Emp}_N$ defined in \eqref{def:Emp}. In other words, $\overline{\mathfrak{P}}_{N, \beta}$ is the law of the random variable “tagged empirical process” when $\vec{X}_N$ is distributed following $\mathbb{P}_{N,\beta}$.
The following theorem, which is the main result of this paper, involves the functional $\overline{\mathcal{F}}_{\beta}=\overline{\mathcal{F}}_{\beta,s}$ defined in \eqref{def:fbarbeta}. It is a free energy functional of the type “$\beta$ Energy + Entropy” (see Sections \ref{sec:ERS}, \ref{sec:energy} and \ref{sec:ratefunction} for precise definitions). The theorem expresses the fact that the microscopic behavior of the system of particles is determined by the minimization of the functional $\overline{\mathcal{F}}_{\beta}$, and that configurations $\vec{X}_N$ whose empirical processes $\overline{\Emp}_N(\vec{X}_N)$ lie far from a minimizer of $\overline{\mathcal{F}}_{\beta}$ have a probability that is exponentially small in $N$.
\begin{theo} \label{theo:LDPemp}
For any $\beta > 0$, the sequence $\{\overline{\mathfrak{P}}_{N, \beta}\}_N$ satisfies a large deviation principle at speed $N$ with good rate function $\overline{\mathcal{F}}_{\beta} - \min \overline{\mathcal{F}}_{\beta}$.
\end{theo}
\begin{coro} \label{coro:ZNbeta}
The first-order expansion of $\log \ZNbeta$ as $N\to \infty$ is
\begin{equation*}
\log \ZNbeta = - N \min \overline{\mathcal{F}}_{\beta} + o(N).
\end{equation*}
\end{coro}
\subsubsection{Large deviations of the empirical measure}
As a byproduct of our microscopic study, we derive a large deviation principle which governs the asymptotics of the empirical measure (which is a macroscopic quantity). Let us denote by $\mathfrak{emp}_{N,\beta}$ the law of the random variable $\mathrm{emp}(\vec{X}_N)$ when $\vec{X}_N$ is distributed according to $\mathbb{P}_{N,\beta}$. The rate function $I_{\beta}=I_{\beta,s}$, defined in Section \ref{sec:ratefunction} (see \eqref{def:Ibeta}), \ed{has the form
\begin{equation}\label{formeI}
I_{\beta}(\rho)= \int_{\Omega} f_\beta(\rho(x)) \rho(x)\, dx + \beta \int_{\Omega} V(x) \rho(x) \, dx + \int_{\Omega} \rho(x) \log \rho(x)\, dx,
\end{equation}
and is thus of local density approximation type. The function $f_\beta$ in this expression is determined by a minimization problem over the {\it microscopic} empirical processes.}
\begin{theo} \label{theo:LDPmesure}
For any $\beta > 0$, the sequence $\{\mathfrak{emp}_{N,\beta}\}_{N}$ obeys a large deviation principle at speed $N$ with good rate function $I_{\beta} - \min I_{\beta}$. In particular, the empirical measure converges almost surely to the unique minimizer of $I_{\beta}$.
\end{theo}
The rate function $I_{\beta}$ is quite complicated to study in general. However, \ed{thanks to the convexity of $f_\beta$ and elementary properties of the standard entropy we may characterize its minimizer in some particular cases (see Section \ref{sec:addproofs} for the proof)}:
\begin{prop} \label{prop:muVbeta}
Let $\mu_{V, \beta}$ be the unique minimizer of $I_{\beta}$.
\begin{enumerate}
\item If $V = 0$ and $\Omega$ is bounded, then $\mu_{V, \beta}$ is the uniform probability measure on $\Omega$ for any $\beta > 0$.
\item If $V$ is arbitrary and $\Omega$ is bounded, $\mu_{V, \beta}$ converges to the uniform probability measure on $\Omega$ as $\beta \to 0$.
\item If $V$ is arbitrary, $\mu_{V, \beta}$ converges to $\mu_{V,\infty}$ as $\beta \to + \infty$, where $\mu_{V, \infty}$ is the limit empirical measure for energy minimizers as defined in the paragraph below.
\end{enumerate}
\end{prop}
\subsubsection{The case of minimizers}
Our remaining results deal with energy minimizers (in statistical physics, this corresponds to setting $\beta = + \infty$). Let $\{\vec{X}_N\}_N$ be a sequence of point configurations in $\Omega$ such that for any $N \geq 1$, $\vec{X}_N$ has $N$ points and minimizes $\mathcal{H}_N$ on $\Omega^N$.
The macroscopic behavior is known from \cite{Hardin:2016kq}: there is a unique minimizer $\mu_{V, \infty}$ (the notation differs from \cite{Hardin:2016kq}) of the functional
\begin{equation} \label{minimiserho}
C_{s,d} \int_{\Omega} \rho(x)^{1+s/d}\, dx+ \int_{\Omega} V(x) \rho(x)\, dx
\end{equation}
among probability densities $\rho$ over $\Omega$ ($C_{s,d}$ is a constant depending on $s,d$ defined in \eqref{def:Csd1}), and the empirical measure $\mathrm{emp}(\vec{X}_N)$ converges to $\mu_{V, \infty}$ as $N \rightarrow \infty$. See \eqref{muVinfty} for an explicit formula for $\mu_{V, \infty}$. \ed{Note that the functional \eqref{minimiserho} is what one obtains when formally letting $\beta \to \infty$ in the definition of $I_{\beta}$, and that it resembles some of the terms arising in Thomas-Fermi theory (cf. \cite{MR2583992} and \cite{Lieb-TF-Rev}).
}
The notation for the next statement is given in Sections \ref{sec:pointconfig} and \ref{sec:energy}. Let us simply say that $\overline{\mathbb{W}}_s$ (resp. $\mathcal{W}_s$) is an energy functional defined for a random point configuration (resp. a point configuration), and that $\overline{\mathcal{M}}_{stat,1}(\bconfig)$ (resp. $\config_{\mu_{V,\infty}(x)}$) is some particular subset of random point configurations (resp. of point configurations in $\mathbb{R}^d$). The intensity measure of a random tagged point configuration is defined in Section \ref{sec:intensitymeasure}.
\begin{prop} \label{prop:minimizers} We have:
\begin{enumerate}
\item $\{\overline{\Emp}(\vec{X}_N)\}_N$ converges weakly (up to extraction of a subsequence) to some minimizer $\bP$ of $\overline{\mathbb{W}}_s$ over $\overline{\mathcal{M}}_{stat,1}(\bconfig)$.
\item The intensity measure of $\bP$ coincides with $\mu_{V, \infty}$.
\item For $\bP$-almost every $(x, \mathcal{C})$, the point configuration $\mathcal{C}$ minimizes $\mathcal{W}_s(\mathcal{C})$ within the class $\config_{\mu_{V,\infty}(x)}$.
\end{enumerate}
\end{prop}
The first point expresses the fact that the tagged empirical processes associated to minimizers converge to minimizers of the “infinite-volume” energy functional $\overline{\mathbb{W}}_s$. The second point is a rephrasing of the global result cited above, to which the third point adds some microscopic information.
The problem of minimizing the energy functionals $\overline{\mathbb{W}}_s$, $\mathbb{W}_s$ or $\mathcal{W}_s$ is hard in general. In dimension $1$, however, it is not too difficult to show that the “crystallization conjecture” holds, namely that the microscopic structure of minimizers is ordered and converges to a lattice:
\begin{prop} \label{prop:crystallization1d}
Assume $d=1$. The unique stationary minimizer of $\mathbb{W}_s$ is the law of $u + \mathbb{Z}$, where $u$ is uniformly distributed in $[0,1]$.
\end{prop}
In dimension $2$, it is expected that minimizers are given by the triangular (or Abrikosov) lattice; we refer to \cite{blanc2015crystallization} for a recent review of such conjectures. \ed{In large dimensions, lattices are not expected to be minimizers.}
\ed{
\subsection{Outline of the method}
Our LDP result is phrased in terms of the empirical processes associated to point configurations, as in \cite{LebSer}, and thus the objects we consider and the overall approach are quite similar to \cite{LebSer}. The analysis is however quite simplified by the fact that, because the interaction is short-range and we are in the non-potential case, we do not need to express the energy in terms of the ``electric potential" generated by the point configuration. The definition of the limiting microscopic interaction energy \edb{$\mathcal W_s(\mathcal C)$} is thus significantly simpler than in \cite{LebSer}: it suffices to take, for $\mathcal{C}$ an infinite configuration of points in the whole space,
$$\mathcal W_s(\mathcal C)= \liminf_{R\to \infty} \frac{1}{R^d} \sum_{p,q \in \mathcal C\cap K_R,\, p \neq q} \frac{1}{|p-q|^s},$$
where $K_R$ is the cube of sidelength $R$ centered at the origin. When considering this quantity, there is however no implicit knowledge of the average density of points, contrarily to the situation of \cite{LebSer}. This is then easily extended to an energy $\mathbb{W}_s$ for laws of point processes (and to its tagged counterpart $\overline{\mathbb{W}}_s$) by taking expectations.
As in \cite{LebSer}, the starting point of the LDP proof is a Sanov-type result that states that the logarithm of the volume of configurations whose empirical processes lie in a small ball around a given tagged point process $\bP$ can be expressed as $\edb{(-N)}$ times an entropy denoted $\ERS({\bP}|\mathbf{\Pi})$.
\edb{As we shall show, $N^{-1-s/d}\mathcal{H}_N(\vec{X}_N)\approx \overline{\mathbb{W}}_s(\bP) +\overline{\mathbb{V}}(\bP)$ for a sufficiently large set of configurations $\vec{X}_N$ whose empirical process is close to $\bP$,
where $\overline{\mathbb{V}}(\bP)$ is a term corresponding to the external potential $V$.
Then this will suffice to obtain the LDP since the logarithm of the probability of the empirical field being close to $\bP$ is nearly $N$ times
\begin{equation*}
- \beta \edb{(\overline{\mathbb{W}}_s(\bP)+\overline{\mathbb{V}}(\bP))} - \ERS({\bP}|\mathbf{\Pi}),
\end{equation*}
up to an additive constant.
The entropy can be expressed in terms of $\bP^x$ (the process centered at $x$) as
\begin{equation} \ERS({\bP}|\mathbf{\Pi})=\int (\ERS(\bP^x|\mathbf{\Pi})-1)\, dx +1,
\end{equation}
where $\ERS(P |\mathbf{\Pi})$ is a ``specific relative entropy" with respect to the Poisson point process $\mathbf{\Pi}$.
Assuming that $\bP^x$ has intensity $\rho(x)$, the scaling properties of the energy $\overline{\mathbb{W}}_s$ (the fact that the energy scales like $\rho^{1+s/d}$, where $\rho$ is the density) and of the specific relative entropy $\ERS$ allow one to transform this into
\begin{multline*}
- \int_{\Omega}\left(\beta \rho^{s/d} \mathbb{W}_s(\bP^x) \edb{+ \ERS(\sigma_{\rho(x)}(\bP^x)|\mathbf{\Pi})} + \beta V(x) \right) \rho(x) \, dx \\
\edb{-} \int_{\Omega} \rho(x) \log \rho(x) \, dx,
\end{multline*}}
the negative of which is the desired rate function.
Minimizing over $P$'s of intensity $\rho$ then allows one to obtain the rate function $I_\beta$ of \eqref{def:IbetaA}.
To run through this argument, we encounter the same difficulties as in \cite{LebSer}, namely the difficulty of replacing $\mathcal{H}_N$ by $\overline{\mathbb{W}}_s$ due to the fact that $\mathcal{H}_N$ is not continuous for the topology on empirical processes that we are considering. The lack of continuity of the interaction near the origin is dealt with by a truncation and regularization argument, similarly to \cite{LebSer}. The lack of continuity due to the locality of the topology is handled thanks to the short-range nature of the Riesz interaction, by showing that large microscopic boxes effectively do not interact, the ``self-screening" property alluded to before, via a shrinking procedure borrowed from \cite{HSAdv}. We refer to Section \ref{sec4} for more details.
}
\section{General definitions}
All the hypercubes considered will have their sides parallel to some fixed choice of axes in $\mathbb{R}^d$. For $R > 0$ we let $\carr_R$ be the hypercube of center $0$ and sidelength $R$. If $A \subset \mathbb{R}^d$ is a Borel set, we denote by $|A|$ its Lebesgue measure, and if $A$ is a finite set, we denote by $|A|$ its cardinality.
\subsection{(Random) (tagged) point configurations}
\label{sec:pointconfig}
\subsubsection{Point configurations}
We refer to \cite{dvj} for further details and proofs of the claims.
\begin{itemize}
\item If $A \subset \mathbb{R}^d$, we denote by $\config(A)$ the set of locally finite point configurations in $A$ or equivalently the set of non-negative, purely atomic Radon measures on $A$ giving an integer mass to singletons. We abbreviate $\config(\mathbb{R}^d)$ as $\config$.
\item For $\mathcal{C} \in \config$, we will often write $\mathcal{C}$ for the Radon measure $\sum_{p \in \mathcal C} \delta_p$.
\item The sets $\config(A)$ are endowed with the topology induced by the weak convergence of Radon measures (also known as vague convergence). These topological spaces are Polish, and we fix a distance $d_{\config}$ on $\config$ which is compatible with the topology on $\config$ (and whose restriction on $\config(A)$ is also compatible with the topology on $\config(A)$).
\item For $x \in \mathbb{R}^d$ and $\mathcal{C} \in \config$ we denote by $\theta_x \cdot \mathcal{C}$ “the configuration $\mathcal{C}$ centered at $x$” (or “translated by $-x$”), namely
\begin{equation}\label{actiontrans}
\theta_x \cdot \mathcal{C} := \sum_{p \in \mathcal{C}} \delta_{p - x}.
\end{equation}
We will use the same notation for the action of $\Rd$ on Borel sets: if $A \subset \mathbb{R}^d$, we denote by $\theta_x \cdot A$ the translation of $A$ by the vector $-x$.
\end{itemize}
\subsubsection{Tagged point configurations}
\begin{itemize}
\item When $\Omega \subset \mathbb{R}^d$ is fixed, we define $\bconfig := \Omega \times \config$ as the set of “tagged” point configurations with tags in $\Omega$.
\item We endow $\bconfig$ with the product topology and a compatible distance $\dbconfig$.
\end{itemize}
Tagged objects will usually be denoted with bars (e.g., $\bP$, $\overline{\mathbb{W}}$, \dots).
\subsubsection{Random point configurations}
\begin{itemize}
\item We denote by $\pconfig$ the space of probability measures on $\config$; i.e., the set of laws of random point configurations.
\item The set $\pconfig$ is endowed with the topology of weak convergence of probability measures (with respect to the topology on $\config$), see \cite[Remark 2.7]{LebSer}.
\item We say that $P$ in $\pconfig$ is \textit{stationary} (and we write $P \in \psconfig$) if it is invariant under the action of $\mathbb{R}^d$ on $\config$ defined in \eqref{actiontrans}.
\end{itemize}
\subsubsection{Random tagged point configurations}
\label{sec:randomttaggedpoint}
\begin{itemize}
\item When $\Omega \subset \mathbb{R}^d$ is fixed, we define $\pbconfig$ as the space of measures $\bP$ on $\bconfig$ such that
\begin{enumerate}
\item The first marginal of $\bP$ is the Lebesgue measure on $\Omega$.
\item For almost every $x \in \Omega$, the disintegration measure $\bPx$ is an element of $\pconfig$.
\end{enumerate}
\item We say that $\bP$ in $\pbconfig$ is \textit{stationary} (and we write $\bP \in \psbconfig$) if $\bPx$ is in $\psconfig$ for almost every $x \in \Omega$.
\end{itemize}
Let us emphasize that, in general, the elements of $\pbconfig$ are \textit{not} probability measures on $\bconfig$ (indeed, their first marginal is the Lebesgue measure on $\Omega$, whose total mass is $|\Omega|$ and not $1$).
\subsubsection{Density of a point configuration}
\label{sec:density}
\begin{itemize}
\item For $\mathcal{C} \in \config$, we define $\mathrm{Dens}(\mathcal{C})$ (the \textit{density} of $\mathcal{C}$) as
\begin{equation}
\label{densDef}
\mathrm{Dens}(\mathcal{C}):=\liminf_{R\to\infty} \frac{\crd{\mathcal{C} \cap \carr_R}}{R^d}.
\end{equation}
\item For $m \in [0, +\infty]$, we denote by $\config_m$ the set of point configurations with density $m$.
\item For $m \in (0, +\infty)$, the scaling map
\begin{equation}
\label{def:sigmam} \sigma_m : \mathcal{C} \mapsto m\edb{^{1/d}} \mathcal{C}
\end{equation}
is a bijection of $\config_m$ onto $\config_1$, of inverse $\sigma_{1/m}$.
\end{itemize}
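For instance, the configuration $\mathcal{C} = m^{-1/d} \mathbb{Z}^d$ has density $m$ in the sense of \eqref{densDef} (it has one point per hypercube of volume $1/m$), and
$$
\sigma_m(\mathcal{C}) = m^{1/d} \cdot m^{-1/d} \mathbb{Z}^d = \mathbb{Z}^d,
$$
which has density $1$, as expected.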
\subsubsection{Intensity of a random point configuration}
\begin{itemize}
\item For $P \in \psconfig$, we define $\mathrm{Intens}(P)$ (the \textit{intensity} of $P$) as
\begin{equation*}
\mathrm{Intens}(P) := \Esp_{P} \left[ \mathrm{Dens}(\mathcal{C}) \right].
\end{equation*}
\item We denote by $\psmconfig$ the set of laws of random point configurations $P \in \pconfig$ that are stationary and such that $\mathrm{Intens}(P) = m$.
For $P\in \psmconfig$, the stationarity assumption implies the formula
\begin{equation*}
\Esp_P \left[ \int_{\mathbb{R}^d} \varphi\, d\mathcal{C} \right] = m \int_{\mathbb{R}^d} \varphi(x)\, dx, \text{ for any $\varphi \in C^0_c(\mathbb{R}^d)$.}
\end{equation*}
\end{itemize}
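For instance, for the law $\mathbf{\Pi}$ of the Poisson point process of uniform intensity $1$ (introduced in Section \ref{sec:ERS} below), one may check by the law of large numbers that $\mathrm{Dens}(\mathcal{C}) = 1$ for $\mathbf{\Pi}$-almost every $\mathcal{C}$, so that $\mathrm{Intens}(\mathbf{\Pi}) = 1$.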
\subsubsection{Intensity measure of a random tagged point configuration}
\label{sec:intensitymeasure}
\begin{itemize}
\item For $\bP$ in $\psbconfig$, we define $\overline{\mathrm{Intens}}(\bP)$ (the \textit{intensity measure} of $\bP$) as
\begin{equation*}
\overline{\mathrm{Intens}}(\bP)(x) = \mathrm{Intens}(\bPx),
\end{equation*}
which really should, in general, be understood in a dual sense: for any $f \in C_c(\mathbb{R}^d)$,
$$
\int f d\overline{\mathrm{Intens}}(\bP) := \int_{\Omega} f(x) \mathrm{Intens}(\bPx) dx.
$$
\item We denote by $\overline{\mathcal{M}}_{stat,1}(\bconfig)$ the set of laws of random tagged point configurations $\bP $ in $\pbconfig$ which are stationary and such that
\begin{equation*}
\int_{\Omega} \overline{\mathrm{Intens}}(\bP)(x)\, dx = 1
\end{equation*}
\item If $\bP$ has intensity measure $\rho$ we denote by $\overline{\sigma}_{\rho}(\bP)$ the element of $\pbconfig$ satisfying
\begin{equation} \label{def:bsigrho}
\left(\overline{\sigma}_{\rho}(\bP)\right)^x = \sigma_{\rho(x)}\left( \bP^x\right), \text{ for all $x \in \Omega$,}
\end{equation}
where $\sigma$ is as in \eqref{def:sigmam}.
\end{itemize}
\subsection{Specific relative entropy} \label{sec:ERS}
\begin{itemize}
\item Let $P$ be in $\psconfig$. The \textit{specific relative entropy} $\ERS[\Pst|\mathbf{\Pi}]$ of $\Pst$ with respect to $\mathbf{\Pi}$, the law of the Poisson point process of uniform intensity $1$, is given by
\begin{equation} \label{defERS}
\ERS[P|\mathbf{\Pi}] := \lim_{R \rightarrow \infty} \frac{1}{|\carr_R|} \Ent\left(\Pst_{|\carr_R} | \mathbf{\Pi}_{|\carr_R} \right)
\end{equation}
where $ P_{|\carr_R}$ denotes the process induced on (the point configurations in) $\carr_R$, and $\Ent( \cdot | \cdot)$ denotes the usual relative entropy (or Kullback-Leibler divergence) of two probability measures defined on the same probability space.
\item It is known (see e.g. \cite{MR3309619}) that
the limit \eqref{defERS} exists as soon as $P$ is stationary, and also that the functional $P \mapsto \ERS[\Pst|\mathbf{\Pi}]$ is affine lower semi-continuous with compact sub-level sets (it is a good rate function).
\item Let us observe that the empty point process has specific relative entropy $1$ with respect to $\mathbf{\Pi}$.
\item If $P$ is in $\psmconfig$ we have (see \cite[Lemma 4.2.]{LebSer})
\begin{equation}\label{scalingent}
\ERS[P |\mathbf{\Pi}]=
\ERS [\sigma_m(P) |\mathbf{\Pi}]m + m \log m +1-m,
\end{equation}
where $\sigma_m(P)$ denotes the push-forward of $P$ by \eqref{def:sigmam}.
\end{itemize}
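As a consistency check for \eqref{scalingent}, let $\mathbf{\Pi}_m$ denote the law of the Poisson point process of uniform intensity $m$, which belongs to $\psmconfig$ and satisfies $\sigma_m(\mathbf{\Pi}_m) = \mathbf{\Pi}$. Since $\ERS[\mathbf{\Pi}|\mathbf{\Pi}] = 0$, the scaling relation \eqref{scalingent} yields
$$
\ERS[\mathbf{\Pi}_m | \mathbf{\Pi}] = m \log m + 1 - m,
$$
which coincides with the direct computation of the relative entropy of two Poisson point processes, and which recovers, in the limit $m \to 0$, the value $1$ for the empty point process observed above.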
\subsection{Riesz energy of (random) (tagged) point configurations} \label{sec:energy}
\subsubsection{Riesz interaction}
We will use the notation $\mathrm{Int}$ (as “interaction”) in two slightly different ways:
\begin{itemize}
\item If $\mathcal{C}_1, \mathcal{C}_2$ are some fixed point configurations, we let $\mathrm{Int}[\mathcal{C}_1, \mathcal{C}_2]$ be the Riesz interaction between $\mathcal{C}_1$ and $\mathcal{C}_2$:
\begin{equation*}
\mathrm{Int}[\mathcal{C}_1, \mathcal{C}_2] := \sum_{p \in \mathcal{C}_1, \, q \in \mathcal{C}_2, p \neq q} \frac{1}{|p-q|^s}.
\end{equation*}
\item If $\mathcal{C}$ is a fixed point configuration and $A, B$ are two subsets of $\mathbb{R}^d$, we let $\mathrm{Int}[A, B](\mathcal{C})$ be the Riesz interaction between $\mathcal{C} \cap A$ and $\mathcal{C} \cap B$; i.e.,
\begin{equation*}
\mathrm{Int}[A,B](\mathcal{C}) := \mathrm{Int}[\mathcal{C} \cap A, \mathcal{C} \cap B] = \sum_{p \in \mathcal{C} \cap A, q \in \mathcal{C} \cap B, p \neq q} \frac{1}{|p-q|^s}.
\end{equation*}
\item Finally, if $\tau > 0$, we let $\mathrm{Int}_{\tau}$ be the Riesz interaction truncated at distances less than $\tau$ (only pairs of points at distance at least $\tau$ contribute); i.e.,
\begin{equation} \label{def:Inttau}
\mathrm{Int}_{\tau}[\mathcal{C}_1, \mathcal{C}_2] := \sum_{p \in \mathcal{C}_1, q \in \mathcal{C}_2, |p-q| \geq \tau} \frac{1}{|p-q|^s}.
\end{equation}
\end{itemize}
\subsubsection{Riesz energy of a finite point configuration}
\label{sec:finitepointenergy}
\begin{itemize}
\item Let $\omega_N=(x_1,\ldots, x_N)$ be in $(\mathbb{R}^d)^N$. We define its Riesz $s$-energy as
\begin{equation} \label{def:Es}
E_s(\omega_N):=\mathrm{Int}[\omega_N, \omega_N] = \sum_{1 \leq i \neq j \leq N} \frac{1}{|x_i-x_j|^s}.
\end{equation}
\item For $A \subset \mathbb{R}^d$, we consider the {\em $N$-point minimal $s$-energy}
\begin{equation} \label{def:EsAN}
E_s(A, N) := \inf_{\omega_N \in A^N} E_s(\omega_N).
\end{equation}
\item The asymptotic minimal energy $C_{s,d}$ is defined as
\begin{equation}\label{def:Csd1}
C_{s,d} :=\lim_{N\to \infty}\frac{E_s(\carr_1,N)}{N^{1+s/d}}.
\end{equation}
The limit in \eqref{def:Csd1} exists as a positive real number (see \cite{HSNotices,HSAdv}).
\item By scaling properties of the $s$-energy, it follows that
\begin{equation} \label{def:Csd2}
\lim_{N\to \infty}\frac{E_s(\carr_R,N)}{N^{1+s/d}}=C_{s,d} R^{-s}.
\end{equation}
\end{itemize}
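The identity \eqref{def:Csd2} follows from \eqref{def:Csd1} by a one-line computation: dilating a configuration by a factor $R$ maps $\carr_1$ onto $\carr_R$ and multiplies all pairwise distances by $R$, so that
$$
E_s(\carr_R, N) = R^{-s} E_s(\carr_1, N),
$$
and it remains to divide by $N^{1+s/d}$ and let $N \to \infty$.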
\subsubsection{Riesz energy of periodic point configurations}
We first extend the definition of the Riesz energy to the case of periodic point configurations.
\begin{itemize}
\item We say that $\Lambda \subset \mathbb{R}^d$ is a $d$-dimensional Bravais lattice if $\Lambda = U \mathbb{Z}^d$, for some nonsingular $d\times d$ real matrix $U$. A fundamental domain for $\Lambda$ is given by $\mathbf{D}_{\Lambda} = U [-\frac{1}{2}, \frac{1}{2})^d$, and the co-volume of $\Lambda$ is $|\Lambda| :=\text{vol}(\mathbf{D}_{\Lambda}) = |\det U|$.
\item If $\mathcal{C}$ is a point configuration (finite or infinite) and $\Lambda$ a lattice, we denote by $\mathcal{C} + \Lambda$ the configuration $\{ p + \lambda \mid p \in \mathcal{C}, \lambda \in \Lambda \}$. We say that $\mathcal{C}$ is $\Lambda$-periodic if $\mathcal{C} + \Lambda = \mathcal{C}$.
\item If $\mathcal{C}$ is $\Lambda$-periodic, it is easy to see that we have $\mathcal{C} = \left(\mathcal{C} \cap \mathbf{D}_{\Lambda}\right) + \Lambda$. The density of $\mathcal{C}$ is thus given by
\begin{equation*}
\mathrm{Dens}(\mathcal{C}) = \frac{ |\mathcal{C} \cap \mathbf{D}_{\Lambda}|}{|\Lambda|}
\end{equation*}
\end{itemize}
Let $\Lambda$ be a lattice and $\omega_N = \{x_1, \dots, x_N\} \subset \mathbf{D}_{\Lambda}$.
\begin{itemize}
\item We define, as in \cite{HSS} for $s > d$, the {\em $\Lambda$-periodic $s$-energy of $\omega_N$} as
\begin{equation}\label{perEndef}
E_{s, \Lambda} (\omega_N):=\sum_{x \in \omega_N} \sum_{\substack{y\in \omega_N+\Lambda\\ y\neq x}}\frac{1}{|x-y|^s}.
\end{equation}
\item It follows (cf. \cite{HSS}) that $E_{s, \Lambda}(\omega_N)$ can be re-written as
\begin{equation} \label{Eslwithzeta}
E_{s, \Lambda}(\omega_N) = N\zeta_{\Lambda}(s)+\sum_{x\neq y\in \omega_N}\zeta_{\Lambda}(s,x-y),
\end{equation}
where
$$\zeta_{\Lambda}(s)=\sum_{0\neq v\in \Lambda} {|v|^{-s}}$$
denotes the {\em Epstein zeta function} and
$$\zeta_{\Lambda}(s,x):=\sum_{v\in \Lambda} {|x+v|^{-s}}$$
denotes the {\em Epstein-Hurwitz zeta function} for the lattice $\Lambda$.
\item Denoting the {\em minimum $\Lambda$-periodic $s$-energy} by
\begin{equation}\label{minperEndef}
\mathcal{E}_{s,\Lambda}(N) :=\min_{\omega_N \in \mathbf{D}_{\Lambda}^N} E_{s, \Lambda}(\omega_N),
\end{equation}
it is shown in \cite{HSS} that
\begin{equation}\label{perEnLim}
\lim_{N\to \infty}\frac{\mathcal{E}_{s,\Lambda}(N)}{N^{1+s/d}}= C_{s,d} |\Lambda|^{-s/d},
\end{equation}
where $C_{s,d}$ is as in \eqref{def:Csd1}.
\end{itemize}
The constant $C_{s,d}$ for $s>d$ appearing in \eqref{def:Csd1} and \eqref{perEnLim} is known only in the case $d=1$ where $C_{s,1}=\zeta_{\mathbb{Z}}(s)=2\zeta(s)$ and $\zeta(s)$ denotes the classical Riemann zeta function.
For dimensions $d=2, 4, 8$, and $24$, it has been conjectured (cf. \cite{cohn2007universally, brauchart2012next} and references therein) that $C_{s,d}$ for $s>d$ is also given by an Epstein zeta function, specifically, that $C_{s,d}=\zeta_{\Lambda_d}(s)$ for $\Lambda_d$ denoting the equilateral triangular (or hexagonal) lattice, the $D_4$ lattice, the $E_8$ lattice, and the Leech lattice (all scaled to have co-volume 1) in the dimensions $d=2, 4, 8,$ and 24, respectively.
\subsubsection{Riesz energy of an infinite point configuration}
\begin{itemize}
\item Let $\mathcal{C}$ in $\config$ be an (infinite) point configuration. We define its Riesz $s$-energy as
\begin{equation}
\label{def:Ws}
\mathcal{W}_s( \mathcal{C}) := \liminf_{R\to \infty} \frac{1}{R^d} \sum_{p \neq q \in \mathcal{C} \cap \carr_R} \frac{1}{|p-q|^s} = \liminf_{R \rightarrow \infty} \frac{1}{R^d} \mathrm{Int}[\carr_R, \carr_R](\mathcal{C}).
\end{equation}
If $\mathcal{C} =\emptyset$, we define $\mathcal{W}_s(\mathcal{C})=0$. The $s$-energy is non-negative and can be $+ \infty$.
\item We have, for any $\mathcal{C}$ in $\config$ and any $m \in (0, + \infty)$
\begin{equation}
\label{scalingw1}
\mathcal{W}_s(\sigma_m \mathcal{C})= m^{-(1+s/d)} \mathcal{W}_s(\mathcal{C}).
\end{equation}
\end{itemize}
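Let us sketch the proof of \eqref{scalingw1}. Writing $\mathcal{C}' := \sigma_m \mathcal{C}$, the points of $\mathcal{C}' \cap \carr_R$ correspond to the points of $\mathcal{C} \cap \carr_{R'}$ with $R' := R m^{-1/d}$, with all pairwise distances multiplied by $m^{1/d}$, hence
$$
\frac{1}{R^d} \mathrm{Int}[\carr_R, \carr_R](\mathcal{C}') = \frac{m^{-s/d}}{m R'^d} \, \mathrm{Int}[\carr_{R'}, \carr_{R'}](\mathcal{C}) = m^{-(1+s/d)} \frac{1}{R'^d} \mathrm{Int}[\carr_{R'}, \carr_{R'}](\mathcal{C}),
$$
and taking the $\liminf$ as $R \to \infty$ (equivalently, as $R' \to \infty$) yields \eqref{scalingw1}.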
It is not difficult to verify (cf. \cite[Lemma 9.1]{cohn2007universally}) that if $\Lambda$ is a lattice and $\omega_N$ is an $N$-tuple of points in $\mathbf{D}_{\Lambda}$, we have
\begin{equation}\label{WELambda}
\mathcal{W}_s (\omega_N +\Lambda) = \frac{1}{|\Lambda|} E_{s, \Lambda} (\omega_N).
\end{equation}
In particular, we have (in view of \eqref{Eslwithzeta})
\begin{equation}\label{Wlambda}
\mathcal{W}_s(\Lambda)=|\Lambda|^{-1} \zeta_{\Lambda}(s).
\end{equation}
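For instance, in dimension $d = 1$, taking $\Lambda = \mathbb{Z}$ (which has co-volume $1$) in \eqref{Wlambda} gives
$$
\mathcal{W}_s(\mathbb{Z}) = \zeta_{\mathbb{Z}}(s) = \sum_{k \in \mathbb{Z} \setminus \{0\}} \frac{1}{|k|^s} = 2 \zeta(s),
$$
which is precisely the constant $C_{s,1}$ recalled below in this section, consistently with the one-dimensional crystallization result of Proposition \ref{prop:crystallization1d}.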
\subsubsection{Riesz energy for laws of random point configurations}
\label{sec:energyerandompoint}
\begin{itemize}
\item For $P$ in $\pconfig$, we define its Riesz $s$-energy as
\begin{equation}
\label{def:WsP1a}
\mathbb{W}_s (P):= \liminf_{R \rightarrow \infty} \frac{1}{R^d} \Esp_{P}\left[ \mathrm{Int}[\carr_R, \carr_R](\mathcal{C}) \right].
\end{equation}
\item For $\bP$ in $\pbconfig$, we define its Riesz $s$-energy as
\begin{equation}
\label{def:bWsP}
\overline{\mathbb{W}}_s (\bP) := \int_{\Omega} \mathbb{W}_s( \bPx)\, dx.
\end{equation}
\item Let $\bP$ be in $\pbconfig$ with intensity measure $\rho$. It follows from \eqref{scalingw1}, \eqref{def:bWsP} and the definition \eqref{def:bsigrho} that
\begin{equation} \label{scalingw2}
\overline{\mathbb{W}}_s \left( \bP \right) = \int_{\Omega} \rho(x)^{1+s/d}\, \mathbb{W}_s\left( \left(\overline{\sigma}_{\rho}(\bP)\right)^x \right) dx.
\end{equation}
\end{itemize}
Let us emphasize that we define $\mathbb{W}_s$ as in \eqref{def:WsP1a} and \textit{not} by $\Esp_{P}[\mathcal{W}_s]$. Fatou's lemma easily implies that
\begin{equation} \label{Wsfatou}
\Esp_{P} [ \mathcal{W}_s] \leq \mathbb{W}_s(P)
\end{equation}
and in fact, in the stationary case, we may show that equality holds (see Corollary \ref{coro:Fatouegal}).
\subsubsection{Expression in terms of the two-point correlation function}
Let $P$ be in $\pconfig$ and let us assume that the two-point correlation function of $P$, denoted by $\rho_{2, P}$, exists in some distributional sense. We may then easily express the Riesz energy of $P$ in terms of $\rho_{2,P}$ as follows:
\begin{equation} \label{Wrho2P}
\mathbb{W}_s(P) = \liminf_{R \rightarrow \infty} \frac{1}{R^d} \int_{\carr_R \times \carr_R} \frac{1}{|x-y|^s} \rho_{2,P}(x,y) dx dy.
\end{equation}
If $P$ is stationary, the expression can be simplified as
\begin{equation} \label{Wrho2P2}
\mathbb{W}_s(P) = \liminf_{R \rightarrow \infty} \int_{[-R, R]^d} \frac{1}{|v|^s} \rho_{2,P}(v) \prod_{i=1}^d \left( 1 - \frac{|v_i|}{R} \right) dv \, ,
\end{equation}
where $\rho_{2,P}(v) = \rho_{2,P}(0,v)$ (we abuse notation and see $\rho_{2,P}$ as a function of one variable, by stationarity) and $v = (v_1, \dots, v_d)$.
Both \eqref{Wrho2P} and \eqref{Wrho2P2} follow from the definitions and easy manipulations; proofs (in a slightly different context) can be found in \cite{leble2016logarithmic}. \ed{Let us emphasize that the integral in the right-hand side of \eqref{Wrho2P} is over two variables, whereas the one in \eqref{Wrho2P2} is a single integral, obtained by using stationarity and applying Fubini's formula, which gives the weight $\prod_{i=1}^d \left( 1 - \frac{|v_i|}{R} \right)$.}
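As an illustration of \eqref{Wrho2P2}, consider the Poisson point process $\mathbf{\Pi}$: it is stationary with two-point correlation function $\rho_{2, \mathbf{\Pi}} \equiv 1$, and since $s > d$ the function $v \mapsto |v|^{-s}$ is not integrable near the origin, so that
$$
\mathbb{W}_s(\mathbf{\Pi}) = \liminf_{R \rightarrow \infty} \int_{[-R, R]^d} \frac{1}{|v|^s} \prod_{i=1}^d \left( 1 - \frac{|v_i|}{R} \right) dv = + \infty.
$$
In the hypersingular regime the Poisson point process thus has infinite Riesz energy, although its specific relative entropy vanishes.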
\subsection{The rate functions}
\label{sec:ratefunction}
\subsubsection{Definitions}
\begin{itemize}
\item For $\bP$ in $\pbconfig$, we define
\begin{equation*}
\overline{\mathbb{V}}(\bP) := \int V(x) d\left(\overline{\mathrm{Intens}}(\bP)\right)(x).
\end{equation*}
This is the energy contribution of the potential $V$.
\item For $\bP$ in $\overline{\mathcal{M}}_{stat,1}(\bconfig)$, we define
\begin{equation} \label{def:fbarbeta}
\overline{\mathcal{F}}_{\beta}(\bP) := \beta \left( \overline{\mathbb{W}}_s(\bP) + \overline{\mathbb{V}}(\bP) \right) + \int_{\Omega} \left(\ERS[\bPx | \mathbf{\Pi}] -1\right) dx +1.
\end{equation}
It is a free energy functional, the sum of an energy term $ \overline{\mathbb{W}}_s(\bP) + \overline{\mathbb{V}}(\bP)$ weighted by the inverse temperature $\beta$ and an entropy term.
\item If $\rho$ is a probability density we define $I_{\beta}(\rho)$ as
\ed{
\begin{multline} \label{def:IbetaA}
I_{\beta}(\rho) := \int_{\Omega} \inf_{P \in \edb{\mathcal{P}_{stat,\rho(x)}(\config) }} \left(\beta \mathbb{W}_s(P) +\ERS[P|\mathbf{\Pi}] -1\right)dx \\ + \beta \int_{\Omega} \rho(x) V(x)\, dx +1,\qquad
\end{multline}
\edb{which can be written} as
\begin{multline} \label{def:Ibeta}
I_{\beta}(\rho) = \int_{\Omega} \rho(x) \inf_{P \in \psunconfig} \left( \beta \rho(x)^{s/d} \mathbb{W}_s(P) +\ERS[P|\mathbf{\Pi}] \right)dx \\ + \beta \int_{\Omega} \rho(x) V(x)\, dx + \int_{\Omega} \rho(x) \log \rho(x) \, dx.
\end{multline}
\edb{This last expression may seem} more complicated, but \edb{note that} the $\inf$ inside the integral is now taken over a fixed set, independent of $\rho$.}
The rate function $I_{\beta}$ is obtained in Section \ref{sec:contractionprinciple} as a \textit{contraction} (in the language of Large Deviations theory, see e.g. \cite[Section 3.1]{MR3309619}) of the functional $\overline{\mathcal{F}}_{\beta}$, \ed{and \eqref{def:Ibeta} follows from \eqref{def:IbetaA} by scaling properties of $\mathbb{W}_s$ and $\ERS[ \cdot | \mathbf{\Pi}]$.}
\end{itemize}
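For the reader's convenience, let us spell out the scaling computation alluded to above. Let $x$ be fixed, let $P \in \mathcal{P}_{stat, \rho(x)}(\config)$ and let $Q := \sigma_{\rho(x)}(P) \in \psunconfig$. The analogue of \eqref{scalingw1} for laws of point configurations (cf. \eqref{scalingw2}) and the entropy scaling \eqref{scalingent} give
$$
\beta \mathbb{W}_s(P) + \ERS[P|\mathbf{\Pi}] - 1 = \rho(x) \left( \beta \rho(x)^{s/d} \mathbb{W}_s(Q) + \ERS[Q|\mathbf{\Pi}] \right) + \rho(x) \log \rho(x) - \rho(x).
$$
Integrating over $\Omega$ and using $\int_{\Omega} \rho = 1$ (so that the term $-\rho$ cancels the constant $+1$ in \eqref{def:IbetaA}), we recover \eqref{def:Ibeta}.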
\subsubsection{Properties}
\begin{prop} \label{prop:ratefunction}
For all $\beta > 0$, the functionals $\overline{\mathcal{F}}_{\beta}$ and $I_{\beta}$ are good rate functions. Moreover, $I_{\beta}$ is strictly convex.
\end{prop}
\begin{proof}
It is proven in Proposition \ref{prop:lsc} that $\overline{\mathbb{W}}_s$ is lower semi-continuous on $\overline{\mathcal{M}}_{stat,1}(\bconfig)$. As for $\overline{\mathbb{V}}$, we may observe that, if $\bP \in \overline{\mathcal{M}}_{stat,1}(\bconfig)$
\begin{equation*}
\overline{\mathbb{V}}(\bP) = \int_{\Omega \times \config} \left( V(x) |\mathcal{C} \cap \carr_1| \right) d\bP(x, \mathcal{C}),
\end{equation*}
and that $(x, \mathcal{C}) \mapsto V(x) |\mathcal{C} \cap \carr_1|$ is lower semi-continuous on $\bconfig$, thus $\overline{\mathbb{V}}$ is lower semi-continuous on $\overline{\mathcal{M}}_{stat,1}(\bconfig)$. Moreover, it is known that $\ERS[\cdot | \mathbf{\Pi}]$ is lower semi-continuous (see Section \ref{sec:ERS}). Thus $\overline{\mathcal{F}}_{\beta}$ is lower semi-continuous. Since $\overline{\mathbb{W}}_s$ and $\overline{\mathbb{V}}$ are bounded below, the sub-level sets of $\overline{\mathcal{F}}_{\beta}$ are included in those of $\ERS[\cdot | \mathbf{\Pi}]$, which are known to be compact (see again Section \ref{sec:ERS}). Thus $\overline{\mathcal{F}}_{\beta}$ is a good rate function.
The functional $I_{\beta}$ is easily seen to be lower semi-continuous, and since $\mathbb{W}_s$, $\ERS$ and $V$ are bounded below, the sub-level sets of $I_{\beta}$ are included in those of $\int_{\Omega} \rho \log \rho$, which are known to be compact; thus $I_{\beta}$ is a good rate function.
To prove that $I_{\beta}$ is strictly convex in $\rho$, it is enough to prove that the first term on the right-hand side of \eqref{def:Ibeta} is convex (the second one is clearly affine, and the last one is well-known to be strictly convex). Let us first note that the convexity of $I_{\beta}$ itself is transparent in the expression \eqref{def:IbetaA}: the functionals $P \mapsto \mathbb{W}_s(P)$ and $P \mapsto \ERS[P|\mathbf{\Pi}]$ are affine (for $\ERS$ see Section \ref{sec:ERS}; for $\mathbb{W}_s$ this follows from Lemma \ref{lem:WsP} below, the expectations in \eqref{WsP2} being linear in $P$), mixtures of stationary laws are stationary, and the intensity constraint is affine in $P$, so that the infimum in \eqref{def:IbetaA} is an infimal projection of an affine functional along an affine constraint, and is therefore convex in $\rho$. As for the first term of \eqref{def:Ibeta}, we may observe that the map
$$
\rho \mapsto \beta \rho^{1+s/d} \mathbb{W}_s(P) + \rho \,\ERS[P|\mathbf{\Pi}]
$$
is convex for every fixed $P$ (because $\mathbb{W}_s(P)$ is non-negative), and one may check that its infimum over $P \in \psunconfig$ is still convex in $\rho$; combined with the strict convexity of $\int_{\Omega} \rho \log \rho$, this concludes the proof.
\end{proof}
\section{Preliminaries on the energy}
\subsection{General properties}
\subsubsection{Minimal energy of infinite point configurations}
In this section, we connect the minimization of $\mathcal{W}_s$ (defined at the level of infinite point configurations) with the asymptotics of the $N$-point minimal energy as presented in Section \ref{sec:finitepointenergy}. Let us recall that the class $\config_m$ of point configurations with mean density $m$ has been defined in Section \ref{sec:density}.
\begin{prop} \label{prop:WS}
We have
\begin{equation} \label{minA1}
\inf_{\mathcal{C} \in \config_1} \mathcal{W}_s(\mathcal{C}) = \min_{\mathcal{C} \in \config_1} \mathcal{W}_s(\mathcal{C}) = C_{s,d},
\end{equation}
where $C_{s,d}$ is as in \eqref{def:Csd1}. Moreover, for any $d$-dimensional Bravais lattice $\Lambda$ of co-volume $1$, there exists a minimizing sequence $\{\mathcal{C}_N\}_N$ for $\mathcal{W}_s$ over $\config_1$ such that $\mathcal{C}_N$ is $N^{1/d}\Lambda$-periodic for every $N \geq 1$.
\end{prop}
\begin{proof}
Let $\Lambda$ be a $d$-dimensional Bravais lattice of co-volume $1$, and for any $N$ let $\omega_N$ be an $N$-point configuration minimizing $E_{s,\Lambda}$. We define
\begin{equation*}
\mathcal{C}_N := N^{1/d} \left( \omega_N + \Lambda \right).
\end{equation*}
By construction, $\mathcal{C}_N$ is an $N^{1/d} \Lambda$-periodic point configuration of density $1$. Using the scaling property \eqref{scalingw1} and \eqref{WELambda}, we have
\begin{equation*}
\mathcal{W}_s (\mathcal{C}_N)= \frac{\mathcal{W}_s\left( \omega_N+\Lambda\right)}{N^{1+s/d}} = \frac{E_{s, \Lambda}(\omega_N)}{N^{1+s/d}}.
\end{equation*}
On the other hand, we have by assumption $E_{s, \Lambda}(\omega_N) = \mathcal{E}_{s,\Lambda}(N)$. Taking the limit $N \rightarrow \infty$ yields, in light of \eqref{perEnLim}, $\lim_{N \rightarrow \infty} \mathcal{W}_s(\mathcal{C}_N) = C_{s,d}$. In particular we have
\begin{equation} \label{infWsa}
\inf_{\mathcal{C} \in \config_1} \mathcal{W}_s(\mathcal{C}) \leq C_{s,d}.
\end{equation}
To prove the converse inequality, let us consider $\mathcal{C}$ in $\config_1$ arbitrary. We have by definition (see \eqref{def:Es} and \eqref{def:Ws}) and the scaling properties of $E_s$,
\begin{equation*}
\mathcal{W}_s(\mathcal{C}) =\liminf_{R \to \infty} \frac{E_s \left( \mathcal{C} \cap \carr_R \right)}{R^d} = \liminf_{R \to \infty}\frac{1}{R^{d+s}} E_s \left(\frac{1}{R}\mathcal{C} \cap \carr_1\right),
\end{equation*}
and, again by definition (see \eqref{def:EsAN})
\begin{equation*}
E_s \left(\frac{1}{R}\mathcal{C} \cap \carr_1\right) \geq E_s\left(\carr_1, |\mathcal{C} \cap \carr_R| \right).
\end{equation*}
We thus obtain
\begin{equation*}
\mathcal{W}_s(\mathcal{C}) \geq \liminf_{R \to \infty} \frac{E_s\left(\carr_1, |\mathcal{C}\cap \carr_R| \right)}{\crd{\mathcal{C}\cap \carr_R}^{1+s/d}} \left(\frac{\crd{\mathcal{C}\cap \carr_R}}{R^d}\right)^{1+s/d}.
\end{equation*}
Using the definition \eqref{def:Csd1} of $C_{s,d}$ we have
$$
\liminf_{R \rightarrow \infty} \frac{E_s\left(\carr_1, |\mathcal{C}\cap \carr_R| \right)}{\crd{\mathcal{C}\cap \carr_R}^{1+s/d}} \geq C_{s,d},
$$
and by definition of the density, since $\mathcal{C}$ is in $\config_1$ we have
$$
\liminf_{R \rightarrow \infty} \left(\frac{\crd{\mathcal{C}\cap \carr_R}}{R^d}\right)^{1+s/d} = 1.
$$
It yields $\mathcal{W}_s(\mathcal{C}) \geq C_{s,d}$ and so (in view of \eqref{infWsa})
\begin{equation} \label{infWsb}
\inf_{\mathcal{C}\in \config_1} \mathcal{W}_s(\mathcal{C})= C_{s,d}.
\end{equation}
It remains to prove that the infimum is achieved. Let us start with a sequence $\{\omega_M\}_{M \geq 1}$ such that $\omega_M$ is an $M^d$-point configuration in $\carr_M$ satisfying
\begin{equation} \label{omegaMminimise}
\lim_{M \rightarrow \infty} \frac{E_s(\omega_M)}{M^d} = C_{s,d}.
\end{equation}
Such a sequence of point configurations exists by definition of $C_{s,d}$ as in \eqref{def:Csd1}, and by the scaling properties of $E_s$. We define a configuration $\mathcal{C}$ inductively as follows.
\begin{itemize}
\item Let $r_1 = c_1 = s_1 = 1$ and let us set $\mathcal{C} \cap \carr_{r_1}$ to be $\omega_1$.
\item Assume that $r_N, s_N, c_N$ and $\mathcal{C} \cap \carr_{r_N}$ have been defined. We let
\begin{equation} \label{choiceSN}
s_{N+1} = \lceil c_{N+1}r_N + (c_{N+1} r_N)^{\frac{1}{2}} \rceil,
\end{equation}
with $c_{N+1} > 1$ to be chosen later. We also let $r_{N+1}$ be a multiple of $s_{N+1}$ large enough, to be chosen later. We tile $\carr_{r_{N+1}}$ by hypercubes of sidelength $s_{N+1}$ and we define $\mathcal{C} \cap \carr_{r_{N+1}}$ as follows:
\begin{itemize}
\item In the central hypercube of sidelength $s_{N+1}$, we already have the points of $\mathcal{C} \cap \carr_{r_N}$ (because $r_N \leq s_{N+1}$) and we do not add any points. In particular, this ensures that each step of our construction is compatible with the previous ones.
\item In all the other hypercubes, we paste a copy of $\omega_{c_{N+1} r_N}$ “centered” in the hypercube in such a way that
\begin{equation} \label{hypdistanhyper}
\text{all the points are at distance $\geq (c_{N+1} r_N)^{\frac{1}{2}}$ of the boundary}.
\end{equation}
This is always possible because $\omega_{c_{N+1} r_N}$ lives, by definition, in an hypercube of sidelength $c_{N+1} r_N$ and because we have chosen $s_{N+1}$ as in \eqref{choiceSN}.
\end{itemize}
We claim that the number of points in $\carr_{r_{N+1}}$ is always less than $r_{N+1}^d$ (as can easily be checked by induction) and is bounded below by
\begin{equation*}
\left( \left(\frac{r_{N+1}}{s_{N+1}}\right)^d -1 \right) (c_{N+1} r_N)^d.
\end{equation*}
Thus it is easy to see that if $c_{N+1}$ is chosen large enough and if $r_{N+1}$ is a large enough multiple of $s_{N+1}$, then
\begin{equation} \label{nombrepointsSNetc}
\text{ the number of points in $\carr_{r_{N+1}}$ is $r_{N+1}^d (1 - o_N(1))$.}
\end{equation}
Let us now give an upper bound on the interaction energy $\mathrm{Int}[\carr_{r_{N+1}}, \carr_{r_{N+1}}](\mathcal{C})$. We recall that we have tiled $\carr_{r_{N+1}}$ by hypercubes of sidelength $s_{N+1}$.
\begin{itemize}
\item Each hypercube has a self-interaction energy given by $E_s(\omega_{c_{N+1} r_N})$, except the central one, whose self-interaction energy is bounded by $O(r_N^d)$ (as can be seen by induction).
\item The interaction of a given hypercube with the union of all the others can be controlled because, by construction (see \eqref{hypdistanhyper}), the configurations pasted in two disjoint hypercubes are far away from each other. We can compare it to
\begin{equation*}
\int_{r =(c_{N+1} r_N)^{\frac{1}{2}}}^{+ \infty} \frac{1}{r^s} s_{N+1}^{d} r^{d-1} dr,
\end{equation*}
and an elementary computation shows that it is negligible with respect to $s_{N+1}^d$ (because $d < s$).
\end{itemize}
We thus have
\begin{equation*}
\mathrm{Int}[\carr_{r_{N+1}}, \carr_{r_{N+1}}](\mathcal{C}) \leq \left( \left(\frac{r_{N+1}}{s_{N+1}}\right)^d -1 \right) E_s(\omega_{c_{N+1} r_N}) + O(r_N^d) + \left(\frac{r_{N+1}}{s_{N+1}}\right)^d o_N\left(s_{N+1}^d \right).
\end{equation*}
We may now use \eqref{omegaMminimise} and get that
\begin{equation} \label{energieminimisante}
\frac{1}{r_{N+1}^d} \mathrm{Int}[\carr_{r_{N+1}}, \carr_{r_{N+1}}](\mathcal{C}) \leq C_{s,d} + o_N(1).
\end{equation}
\end{itemize}
Let $\mathcal{C}$ be the point configuration constructed as above. Taking the limit as $N \rightarrow \infty$ in \eqref{nombrepointsSNetc} shows that $\mathcal{C}$ is in $\config_1$, and \eqref{energieminimisante} implies that $\mathcal{W}_s(\mathcal{C}) \leq C_{s,d}$, which concludes the proof of \eqref{minA1}.
\end{proof}
\subsubsection{Energy of random point configurations}
In the following lemma, we prove that for stationary $P$ the $\liminf$ defining $\mathbb{W}_s(P)$ as in \eqref{def:WsP1a} is actually a limit, and that the convergence is uniform on sub-level sets of $\mathbb{W}_s$ (which will be useful for proving lower semi-continuity).
\begin{lem}\label{lem:WsP}
Let $P$ be in $\psconfig$. The following limit exists in $[0, +\infty]$
\begin{equation} \label{WsP2}
\mathbb{W}_s(P) := \lim_{R \rightarrow \infty} \frac{1}{R^d} \Esp_{P} \left[ \mathrm{Int}[\carr_R, \carr_R]\right].
\end{equation}
Moreover we have as $R \rightarrow \infty$
\begin{equation} \label{erreurWsP2}
\left| \mathbb{W}_s(P) - \frac{1}{R^d} \Esp_{P} \left[ \mathrm{Int}[\carr_R, \carr_R] \right] \right| \leq C \left(\mathbb{W}_s(P)^{\frac{2}{1+s/d}} + \mathbb{W}_s(P) \right) o_R(1),
\end{equation}
with $o_R(1)$ depending only on $s,d$.
\end{lem}
\begin{proof}
We begin by showing that the quantity
$$\frac{1}{n^d} \Esp_{P} \left[ \mathrm{Int}[\carr_{n}, \carr_n](\mathcal{C})\right]$$
is non-decreasing for integer values of $n$.
For $n \geq 1$, let $\{\tilde{K}_v\}_{v \in \mathbb{Z}^d \cap \carr_n }$ be a tiling of $\carr_n$ by unit hypercubes, indexed by the centers $v \in \mathbb{Z}^d \cap \carr_n$ of the hypercubes, and let us split $\mathrm{Int}[\carr_n, \carr_n]$ as
\begin{equation*}
\mathrm{Int}[\carr_{n}, \carr_n] = \sum_{v , v'\in \mathbb{Z}^d \cap \carr_n} \mathrm{Int}[\tilde{K}_{v}, \tilde{K}_{v'}].
\end{equation*}
Using the stationarity assumption and writing $v = (v_1, \dots, v_d)$ and $|v|:=\max_i |v_i|$, we obtain
\begin{equation*}
\Esp_{P} \left[ \sum_{v, v' \in \mathbb{Z}^d \cap \carr_n} \mathrm{Int}[\tilde{K}_{v}, \tilde{K}_{v'}] \right] = \sum_{v \in \mathbb{Z}^d \cap \carr_{2n}} \Esp_{P} \left[\mathrm{Int}[\tilde{K}_{0}, \tilde{K}_{v}] \right] \prod_{i=1}^d (n - |v_i|).
\end{equation*}
We thus get
\begin{equation} \label{WsPpremier}
\frac{1}{n^d} \Esp_{P} \left[ \mathrm{Int}[\carr_{n}, \carr_n] \right] = \sum_{v \in \mathbb{Z}^d \cap \carr_{2n}} \Esp_{P} \left[\mathrm{Int}[\tilde{K}_{0}, \tilde{K}_{v}] \right] \prod_{i=1}^d \left(1 - \frac{|v_i|}{n}\right),
\end{equation}
and it is clear that this quantity is non-decreasing in $n$; in particular the limit as $n \rightarrow \infty$ exists in $[0, + \infty]$. We may also observe that $R \mapsto \mathrm{Int}[\carr_R, \carr_R]$ is non-decreasing in $R$. It is then easy to conclude that the limit in \eqref{WsP2} exists in $[0, + \infty]$.
Let us now quantify the speed of convergence. First, we observe that for $|v| \geq 2$ we have
\begin{equation*}
\Esp_{P} \left[\mathrm{Int}[\tilde{K}_{0}, \tilde{K}_{v}] \right] \leq O\left(\frac{1}{(|v|-1)^s}\right) \Esp_{P}[ N_0 N_v],
\end{equation*}
where $N_0$ and $N_v$ denote the number of points in $\tilde{K}_0$ and $\tilde{K}_v$ respectively. Indeed, the points of $\tilde{K}_{0}$ and $\tilde{K}_{v}$ are at distance at least $|v|-1$ from each other (up to a multiplicative constant depending only on $d$).
On the other hand, Hölder's inequality and the stationarity of $P$ imply
\begin{equation*}
\|N_0 N_v\|_{L^1(P)} \leq \|N_0 \|_{L^{1+s/d}(P)} \|N_v \|_{L^{1+s/d}(P)} = \|N_0 \|^2_{L^{1+s/d}(P)},
\end{equation*}
and thus we have $\Esp_{P}[ N_0 N_v] \leq \Esp_{P}[N_0^{1+s/d}]^{\frac{2}{1+s/d}}$. On the other hand, it is easy to check that for $P$ stationary,
$$\Esp_{P} [N_0^{1+s/d}] \leq C \mathbb{W}_s(P)$$
for some constant $C$ depending on $d,s$. Indeed, the interaction energy in the hypercube $\tilde{K}_0$ is bounded below by some constant times $N_0^{1+s/d}$, and \eqref{WsPpremier} shows that
$$
\mathbb{W}_s(P) \geq \Esp_{P} \left[\mathrm{Int}[\tilde{K}_{0}, \tilde{K}_0]\right].
$$
We thus get
\begin{multline*}
\mathbb{W}_s(P) - \sum_{v \in \mathbb{Z}^d \cap \carr_{2n}} \Esp_{P} \left[\mathrm{Int}[\tilde{K}_{0}, \tilde{K}_{v}] \right] \prod_{i=1}^d \left(1 - \frac{|v_i|}{n}\right) \\ \leq \mathbb{W}_s(P)^{\frac{2}{1+s/d}} \left(\sum_{v \in \mathbb{Z}^d \cap \carr_{2n}, |v| \geq 2} \frac{1}{(|v|-1)^s} \left(1 - \prod_{i=1}^d \left(1 - \frac{|v_i|}{n}\right) \right) + \sum_{|v| \geq 2n} \frac{1}{|v|^s}\right) \\
+ \frac{1}{n} \sum_{|v| =1} \Esp_{P} \left[\mathrm{Int}[\tilde{K}_{0}, \tilde{K}_{v}] \right] .
\end{multline*}
It is not hard to see that the term in parentheses on the right-hand side goes to zero as $n \rightarrow \infty$. On the other hand, we have
$$
\sum_{|v| =1} \Esp_{P} \left[\mathrm{Int}[\tilde{K}_{0}, \tilde{K}_{v}] \right] \leq \mathbb{W}_s(P).
$$
Thus we obtain
\begin{equation*}
\mathbb{W}_s(P) - \frac{1}{n^d} \Esp_{P} \left[ \mathrm{Int}[\carr_{n}, \carr_n] \right] \leq \left(\mathbb{W}_s(P)^{\frac{2}{1+s/d}} + \mathbb{W}_s(P)\right) o_n(1),
\end{equation*}
with a $o_n(1)$ depending only on $d,s$ and it is then not hard to get \eqref{erreurWsP2}.
\end{proof}
For any $R > 0$, the quantity $\mathrm{Int}[\carr_{R}, \carr_R]$ is continuous and bounded below on $\config$, thus the map $$P \mapsto \frac{1}{R^d} \Esp_{P} \left[ \mathrm{Int}[\carr_{R}, \carr_R] \right]$$ is lower semi-continuous on $\pconfig$. The second part of Lemma \ref{lem:WsP} shows that we may approximate $\mathbb{W}_s(P)$ by $\frac{1}{R^d} \Esp_{P} \left[ \mathrm{Int}[\carr_{R}, \carr_R] \right]$ up to an error which is $o_R(1)$, uniformly on sub-level sets of $\mathbb{W}_s$. The next proposition follows easily.
\begin{prop} \label{prop:lsc}
\begin{enumerate}
\item The functional $\mathbb{W}_s$ is lower semi-continuous on $\psunconfig$.
\item The functional $\overline{\mathbb{W}}_s$ is lower semi-continuous on $\overline{\mathcal{M}}_{stat,1}(\bconfig)$.
\end{enumerate}
\end{prop}
We may also prove the following equality (which settles a question raised in Section \ref{sec:energyerandompoint}).
\begin{coro} \label{coro:Fatouegal}
Let $P$ be in $\psunconfig$, then we have
$$
\mathbb{W}_s(P) = \lim_{R \rightarrow \infty} \frac{1}{R^d} \Esp_{P} \left[ \mathrm{Int}[\carr_R, \carr_R](\mathcal{C}) \right] = \Esp_{P} \left[ \liminf_{R \rightarrow \infty} \frac{1}{R^d} \mathrm{Int}[\carr_R, \carr_R](\mathcal{C}) \right].
$$
\end{coro}
\begin{proof}
As was observed in \eqref{Wsfatou}, Fatou's lemma implies that
$$
\Esp_{P} \left[ \liminf_{R \rightarrow \infty} \frac{1}{R^d} \mathrm{Int}[\carr_R, \carr_R](\mathcal{C}) \right] \leq \lim_{R \rightarrow \infty} \frac{1}{R^d} \Esp_{P} \left[ \mathrm{Int}[\carr_R, \carr_R](\mathcal{C}) \right] = \mathbb{W}_s(P),
$$
(the last equality is by definition). On the other hand, with the notation of the proof of Lemma \ref{lem:WsP}, we have for any integer $n$ and any $\mathcal{C}$ in $\config$
$$
\frac{1}{n^d} \mathrm{Int}[\carr_n, \carr_n](\mathcal{C}) = \frac{1}{n^d} \sum_{v, v' \in \mathbb{Z}^d \cap \carr_n} \mathrm{Int}[\tilde{K}_{v}, \tilde{K}_{v'}],
$$
and the right-hand side is dominated under $P$ (as observed in the previous proof), so that the dominated convergence theorem applies.
\end{proof}
\subsection{Derivation of the infinite-volume limit of the energy}
The following result is central in our analysis. It connects the asymptotics of the $N$-point interaction energy $\{\mathcal{H}_N(\vec{X}_N)\}_N$ with the infinite-volume energy $\overline{\mathbb{W}}_s(\bP)$ of an infinite-volume object: the limit point $\bP$ of the tagged empirical processes $\{\overline{\Emp}_N(\vec{X}_N)\}_N$.
\begin{prop} \label{prop:glinf} For any $N \geq 1$, let $\vec{X}_N = (x_1, \dots, x_N)$ be in $\Omega^N$, let $\mu_N$ be the empirical measure and $\bP_N$ be the tagged empirical process associated to $\vec{X}_N$; i.e.,
\begin{equation*}
\mu_N := \mathrm{emp}(\vec{X}_N), \quad \bP_N := \overline{\Emp}_N(\vec{X}_N),
\end{equation*}
as defined in \eqref{def:emp} and \eqref{def:Emp}. Let us assume that
\begin{equation*}
\liminf_{N \rightarrow \infty} \frac{\mathcal{H}_N(\vec{X}_N)}{N^{1+s/d}} < + \infty.
\end{equation*}
Then, up to extraction of a subsequence,
\begin{itemize}
\item $\{\mu_N\}_N$ converges weakly to some $\mu$ in $\mathcal{M}(\Omega)$,
\item $\{\bP_N\}_N$ converges weakly to some $\bP$ in $\overline{\mathcal{M}}_{stat,1}(\bconfig)$,
\item $\mathrm{Intens}(\bP) = \mu$.
\end{itemize} Moreover we have
\begin{equation} \label{conc:glinf}
\liminf_{N\to \infty} \frac{ \mathcal{H}_N(x_1, \dots, x_N)}{N^{1+s/d}} \ge \overline{\mathbb{W}}_s(\bP)+ \overline{\mathbb{V}}(\bP).
\end{equation}
\end{prop}
\begin{proof}
Up to extracting a subsequence, we may assume that $\mathcal{H}_N(\vec{X}_N)= O(N^{1+s/d})$.
First, by positivity of the Riesz interaction, we have for $N \geq 1$
\begin{equation*}
\int_{\Omega} V \, d\mu_N \leq \frac{\mathcal{H}_N(\vec{X}_N)}{N^{1+s/d}},
\end{equation*}
and thus $\int_{\Omega} V \, d\mu_N$ is bounded. By \eqref{ass:regV} and \eqref{ass:croissanceV} we know that $V$ is bounded below and has compact sub-level sets. An easy application of Markov's inequality shows that $\{\mu_N\}_N$ is tight, and thus it converges (up to another extraction). It is not hard to check that $\{\bP_N\}_N$ converges (up to extraction) to some $\bP$ in $\pbconfig$ (indeed the average number of points per unit volume is constant, which implies tightness, see e.g. \cite[Lemma 4.1]{LebSer}) whose stationarity is clear (see again e.g. \cite{LebSer}).
Let $\bar{\rho}$ be the intensity measure of $\bP$ (in the sense of Section \ref{sec:intensitymeasure}), we want to prove that $\bar{\rho} = \mu$ (which will in particular imply that $\bP$ is in $\overline{\mathcal{M}}_{stat,1}(\bconfig)$). It is a general fact that $\bar{\rho} \leq \mu$ (see e.g. \cite[Lemma 3.7]{leble2015large}), but it could happen that a positive fraction of the points cluster together, resulting in the existence of a singular part in $\mu$ which is missed by $\bar{\rho}$ so that $\bar{\rho} < \mu$. However, in the present case, we can easily bound the moment (under $\bP_N$) of order $1 + s/d$ of the number of points in a given hypercube $\carr_R$. Indeed, let $\{\tilde{K}_i\}_{i \in I}$ be a covering of $\Omega$ by disjoint hypercubes of sidelength $RN^{-1/d}$, and let $n_i = N\mu_N\left(\tilde{K}_i\right)$ denote the number of points from $\vec{X}_N$ in $\tilde{K}_i$. We have, by positivity of the Riesz interaction
\begin{equation*}
\mathcal{H}_N(\vec{X}_N) \geq \sum_{i \in I} \mathrm{Int}[\tilde{K}_i, \tilde{K}_i] \geq C\sum_{i \in I} \frac{n_i^{1+s/d}N^{s/d}}{R^s},
\end{equation*}
for some constant $C>0$ (depending only on $s$ and $d$) because the minimal interaction energy of $n$ points in $\tilde{K}_i$ is proportional to $\frac{n^{1+s/d}N^{s/d}}{R^s}$ (see \eqref{def:Csd1}, \eqref{def:Csd2}). Since $\mathcal{H}_N(\vec{X}_N) = O(N^{1+s/d})$ by assumption, we get that $\sum_{i \in I} n_i^{1+s/d} = O(N)$, with an implicit constant depending only on $R$. It implies that $x \mapsto N\mu_N \left( B(x, RN^{-1/d}) \right)$ is uniformly (in $N$) locally integrable on $\Omega$ for all $R > 0$, and arguing as in \cite[Lemma 3.7]{leble2015large} we deduce that $\bar{\rho} = \mu$.
We now turn to proving \eqref{conc:glinf}. Using the positivity and scaling properties of the Riesz interaction and a Fubini-type argument we may write, for any $R > 0$
\begin{equation*}
\mathrm{Int}[\Omega, \Omega](\vec{X}_N) \geq N^{1+s/d} \int_{\Omega \times \config} \frac{1}{R^d} \mathrm{Int}[\carr_R, \carr_R](\mathcal{C}) d\bP_N(x, \mathcal{C}).
\end{equation*}
Of course we have, for any $M > 0$,
\begin{equation*}
\int_{\Omega \times \config} \frac{1}{R^d} \mathrm{Int}[\carr_R, \carr_R](\mathcal{C})d\bP_N(x, \mathcal{C}) \geq \int_{\Omega \times \config} \frac{1}{R^d} \left( \mathrm{Int}[\carr_R, \carr_R](\mathcal{C}) \wedge M\right) d\bP_N(x, \mathcal{C}),
\end{equation*}
and thus the weak convergence of $\bP_N$ to $\bP$ ensures that
\begin{equation*}
\int_{\Omega \times \config} \frac{1}{R^d} \mathrm{Int}[\carr_R, \carr_R](\mathcal{C}) d\bP_N(x, \mathcal{C}) \geq \int_{\Omega \times \config} \frac{1}{R^d} \left(\mathrm{Int}[\carr_R, \carr_R](\mathcal{C}) \wedge M\right) d\bP(x, \mathcal{C}) + o_N(1).
\end{equation*}
Since this is true for all $M$ we obtain
\begin{equation*}
\liminf_{N \rightarrow \infty} \frac{\mathrm{Int}[\Omega, \Omega](\vec{X}_N)}{N^{1+s/d}} \geq \int_{\Omega \times \config} \frac{1}{R^d} \left( \mathrm{Int}[\carr_R, \carr_R](\mathcal{C}) \right) d\bP(x, \mathcal{C}).
\end{equation*}
Sending $R$ to $+ \infty$ and using Lemma \ref{lem:WsP} together with the monotone convergence theorem, we get
\begin{equation} \label{limiteint}
\liminf_{N \rightarrow \infty} \frac{\mathrm{Int}[\Omega, \Omega](\vec{X}_N)}{N^{1+s/d}} \geq \liminf_{R \rightarrow \infty} \int_{\Omega \times \config} \frac{1}{R^d} \left( \mathrm{Int}[\carr_R, \carr_R](\mathcal{C}) \right) d\bP(x, \mathcal{C}) = \overline{\mathbb{W}}_s(\bP).
\end{equation}
On the other hand, the weak convergence of $\mu_N$ to $\mu$ and Assumption \ref{ass:regV} ensure that
\begin{equation} \label{limiteV}
\liminf_{N \rightarrow \infty} \int_{\Omega} V\, d\mu_N \geq \int_{\Omega} V\, d\mu.
\end{equation}
Combining \eqref{limiteint} and \eqref{limiteV} gives \eqref{conc:glinf}.
\end{proof}
Proposition \ref{prop:glinf} can be viewed as a $\Gamma$-$\liminf$ result (in the language of $\Gamma$-convergence). We will prove later (e.g. in Proposition \ref{quasicontinu}, which is in fact a much stronger statement) the corresponding $\Gamma$-$\limsup$.
\section{Proof of the large deviation principles}\label{sec4}
As in \cite{LebSer}, the main obstacle for proving Theorem \ref{theo:LDPemp} is to deal with the lack of upper semi-continuity of the interaction, namely that there is no upper bound of the type
\begin{equation*}
\mathcal{H}_N(\vec{X}_N) \lesssim N^{1+s/d} \left( \overline{\mathbb{W}}_s(\bP) + \overline{\mathbb{V}}(\bP) \right)
\end{equation*}
which holds in general under the mere condition that $\overline{\Emp}_N(\vec{X}_N) \approx \bP$ (cf. \eqref{def:Emp} for a definition of the tagged empirical process). This yields a problem for proving the large deviations lower bound (in contrast, \textit{lower} semi-continuity holds and the proof of the large deviations upper bound is quite simple). Let us briefly explain why.
Firstly, due to its singularity at $0$, the interaction is not uniformly continuous with respect to the topology on the configurations. Indeed, a pair of points at distance $\epsilon$ yields an energy $\epsilon^{-s}$, whereas a pair of points at distance $2 \epsilon$ has energy $(2\epsilon)^{-s}$, and $|\epsilon^{-s} - (2\epsilon)^{-s} | \to \infty$ as $\epsilon \to 0$, although these two point configurations are very close for the topology on $\config$.
Secondly, the energy is non-additive: we have in general
\begin{equation*}
\mathrm{Int}[\mathcal{C}_1 \cup \mathcal{C}_2, \mathcal{C}_1 \cup \mathcal{C}_2] \neq \mathrm{Int}[\mathcal{C}_1, \mathcal{C}_1] + \mathrm{Int}[\mathcal{C}_2, \mathcal{C}_2].
\end{equation*}
Yet the knowledge of $\overline{\Emp}_N$ (through the fact that $\overline{\Emp}_N(\vec{X}_N) \in B(\bP, \epsilon)$) yields only \textit{local} information on $\vec{X}_N$, and does not allow one to reconstruct $\vec{X}_N$ \textit{globally}. Roughly speaking, it is like partitioning $\Omega$ into hypercubes and having a family of point configurations, each belonging to some hypercube, but without knowing the precise configuration-hypercube pairing. Since the energy is non-additive (there are nontrivial hypercube-hypercube interactions in addition to the hypercubes' self-interactions, as made precise below), we cannot (in general) deduce $\mathcal{H}_N(\vec{X}_N)$ from the mere knowledge of the tagged empirical process.
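More precisely, for two disjoint point configurations $\mathcal{C}_1, \mathcal{C}_2$, the defect of additivity is exactly twice the cross term:
$$
\mathrm{Int}[\mathcal{C}_1 \cup \mathcal{C}_2, \mathcal{C}_1 \cup \mathcal{C}_2] = \mathrm{Int}[\mathcal{C}_1, \mathcal{C}_1] + \mathrm{Int}[\mathcal{C}_2, \mathcal{C}_2] + 2\, \mathrm{Int}[\mathcal{C}_1, \mathcal{C}_2],
$$
and the point of the ``self-screening" argument below is to show that, for groups of points living in well-separated boxes, this cross term is negligible thanks to the decay of the Riesz kernel for $s > d$.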
In Section \ref{sec:LDPLB}, the singularity problem is dealt with by using a regularization procedure similar to that of \cite{LebSer}, while the non-additivity is shown to be negligible due to the short-range nature of the Riesz potential for $s > d$.
\subsection{A LDP for the reference measure}
Let $\Leb_{\Ac^N}$ be the Lebesgue measure on $\Omega^N$, and let $\bar{\mathfrak{Q}}_N$ be the push-forward of $\Leb_{\Ac^N}$ by the “tagged empirical process” map $\overline{\Emp}_N$ defined in \eqref{def:Emp}. Let us recall that $\Omega$ is not necessarily bounded, hence $\Leb_{\Ac^N}$ may have an infinite mass and thus there is no natural way of making $\bar{\mathfrak{Q}}_N$ a probability measure.
\begin{prop} \label{prop:Sanovreference}
Let $\bP$ be in $\overline{\mathcal{M}}_{stat,1}(\bconfig)$. We have
\begin{multline} \label{SanovReference}
\lim_{\epsilon \rightarrow 0} \liminf_{N \rightarrow \infty} \frac{1}{N} \log \bar{\mathfrak{Q}}_N\left( B(\bP, \epsilon) \right) = \lim_{\epsilon \rightarrow 0} \limsup_{N \rightarrow \infty} \frac{1}{N} \log \bar{\mathfrak{Q}}_N\left( B(\bP, \epsilon) \right)
\\ = - \int_{\Omega} \left(\ERS[\bPx | \mathbf{\Pi}] -1\right) dx - 1.
\end{multline}
\end{prop}
\ed{We recall that $\bPx$ is the disintegration measure of $\bP$ at the point $x$, or the “fiber at $x$" (which is a measure on $\config$) of $\bP$ (which is a measure on $\Omega \times \config$), see Section \ref{sec:randomttaggedpoint}.}
\begin{proof}
If $\Omega$ is bounded, Proposition \ref{prop:Sanovreference} follows from the analysis of \cite[Section 7.2]{LebSer}, see in particular \cite[Lemma 7.8]{LebSer}. The only difference is that the Lebesgue measure on $\Omega$ used in \cite{LebSer} is normalized, which yields an additional factor of $\log |\Omega|$ in the rate function. The proof extends readily to a non-bounded $\Omega$ because the topology of weak convergence on $\pbconfig$ is defined with respect to test functions which are compactly supported on $\Omega$.
\end{proof}
\subsection{A LDP upper bound}
\begin{prop} \label{prop:LDPUB}
Let $\bP$ be in $\overline{\mathcal{M}}_{stat,1}(\bconfig)$.
We have
\begin{equation} \label{LDPUB}
\lim_{\epsilon \rightarrow 0} \limsup_{N \rightarrow \infty} \frac{1}{N} \log \overline{\mathfrak{P}}_{N, \beta} ( B(\bP, \epsilon)) \leq - \overline{\mathcal{F}}_{\beta}(\bP) + \limsup_{N \rightarrow \infty} \left(- \frac{\log \ZNbeta}{N}\right) .
\end{equation}
\end{prop}
\begin{proof}
Using the definition of $\overline{\mathfrak{P}}_{N, \beta}$ as the push-forward of $\mathbb{P}_{N,\beta}$ by $\overline{\Emp}_N$ we may write
\begin{equation*}
\overline{\mathfrak{P}}_{N, \beta} ( B(\bP, \epsilon)) = \frac{1}{\ZNbeta} \int_{\Omega^N \cap \{\overline{\Emp}_N(\vec{X}_N) \in B(\bP, \epsilon)\}} \exp\left(-\beta N^{-s/d} \mathcal{H}_N(\vec{X}_N)\right) d\vec{X}_N.
\end{equation*}
From Proposition \ref{prop:glinf} and Proposition \ref{prop:lsc} we know that for any sequence $\vec{X}_N$ such that $\overline{\Emp}_N(\vec{X}_N) \in B(\bP, \epsilon)$ we have
\begin{equation*}
\liminf_{N \rightarrow \infty} \frac{\mathcal{H}_N(\vec{X}_N)}{N^{1+s/d}} \geq \overline{\mathbb{W}}_s(\bP) + \overline{\mathbb{V}}(\bP) + o_{\epsilon}(1).
\end{equation*}
We may thus write
\begin{multline*}
\limsup_{N \rightarrow \infty} \frac{1}{N} \log \overline{\mathfrak{P}}_{N, \beta} ( B(\bP, \epsilon)) \leq - \beta \left( \overline{\mathbb{W}}_s(\bP) + \overline{\mathbb{V}}(\bP) \right) \\
+ \limsup_{N \rightarrow \infty} \frac{1}{N} \log \int_{\Omega^N \cap \{\overline{\Emp}_N(\vec{X}_N) \in B(\bP, \epsilon)\}} d\vec{X}_N + \limsup_{N \rightarrow \infty} \left( - \frac{\log \ZNbeta}{N} \right) + o_{\epsilon}(1).
\end{multline*}
Using Proposition \ref{prop:Sanovreference} we know that
\begin{equation*}
\limsup_{N \rightarrow \infty} \frac{1}{N} \log \int_{\Omega^N \cap \{\overline{\Emp}_N(\vec{X}_N) \in B(\bP, \epsilon)\}} d\vec{X}_N = - \int_{\Omega} \left(\ERS[\bPx | \mathbf{\Pi}] -1\right) dx - 1 + o_{\epsilon}(1).
\end{equation*}
We thus obtain, sending $\epsilon \rightarrow 0$,
\begin{multline*}
\limsup_{N \rightarrow \infty} \frac{1}{N} \log \overline{\mathfrak{P}}_{N, \beta} ( B(\bP, \epsilon)) \leq - \beta \left( \overline{\mathbb{W}}_s(\bP) + \overline{\mathbb{V}}(\bP) \right) - \int_{\Omega} \left(\ERS[\bPx | \mathbf{\Pi}] -1\right) dx - 1 \\
+ \limsup_{N \rightarrow \infty} \left(- \frac{\log \ZNbeta}{N}\right),
\end{multline*}
which, in view of the definition of $\overline{\mathcal{F}}_{\beta}$ as in \eqref{def:fbarbeta}, yields \eqref{LDPUB}.
\end{proof}
\subsection{An LDP lower bound} \label{sec:LDPLB}
The goal of the present section is to prove a matching LDP lower bound:
\begin{prop} \label{prop:LDPLB}
Let $\bP$ be in $\overline{\mathcal{M}}_{stat,1}(\bconfig)$.
We have
\begin{equation} \label{LDPLB}
\lim_{\epsilon \rightarrow 0} \liminf_{N \rightarrow \infty} \frac{1}{N} \log \overline{\mathfrak{P}}_{N, \beta} ( B(\bP, \epsilon)) \geq - \overline{\mathcal{F}}_{\beta}(\bP) + \liminf_{N \rightarrow \infty} \left(- \frac{\log \ZNbeta}{N}\right).
\end{equation}
\end{prop}
For $N \geq 1$ and $\delta > 0$, let us define the set $T_{N, \delta}(\bP)$ as
\begin{equation} \label{def:TNdelta}
T_{N, \delta}(\bP) = \left\lbrace \vec{X}_N \mid \frac{\mathcal{H}_N(\vec{X}_N)}{N^{1+s/d}} \leq \overline{\mathbb{W}}_s(\bP) + \overline{\mathbb{V}}(\bP) + \delta \right\rbrace.
\end{equation}
We will rely on the following result:
\begin{prop}
\label{prop:quasicontinu}
Let $\bP$ be in $\overline{\mathcal{M}}_{stat,1}(\bconfig)$. For all $\epsilon, \delta >0$ we have
\begin{equation}
\label{quasicontinu}
\begin{split}
\liminf_{N \rightarrow \infty} \frac{1}{N} \log \Leb_{\Ac^N} &\left( \left\{ \overline{\Emp}_N \in B(\bP, \epsilon) \right\} \cap \left\{ \vec{X}_N \in T_{N, \delta}(\bP) \right\} \right)\\ & \geq - \int_{\Omega} \left(\ERS[\bPx | \mathbf{\Pi}] -1\right) dx - 1.
\end{split}
\end{equation}
\end{prop}
\begin{proof}[Proof of Proposition \ref{prop:quasicontinu}.]
We may assume that $\Omega$ is compact and that the intensity measure of $\bP$, denoted by $\bar{\rho}$, is continuous, compactly supported and bounded below. Indeed we can always approximate $\bP$ by random point processes satisfying these additional assumptions. For any $N \geq 1$, we let $\bar{\rho}_N(x) := \bar{\rho}(x N^{-1/d})$ and we let $\Omega_N := N^{1/d} \Omega$.
In fact, for simplicity we will assume that $\Omega$ is some large hypercube. The argument below readily extends to the case where $\Omega$ can be tiled by small hypercubes, and any $C^1$ domain can be tiled by small hypercubes up to some “boundary parts” which are negligible for our concerns (a precise argument is given e.g. in \cite[Section 6]{LebSer}).
\medskip
For $R > 0$, we let $\{ \tilde{K}_i \}_{i \in I}$ be a partition of $\Omega_N$ by hypercubes of sidelength $R$. For $R, M > 0$, we denote by $\bP_{R, M}$ the restriction\footnote{That is, $\bP_{R, M}\in \overline{\mathcal M}(\Omega\times \config[\carr_R]).$} to $\carr_R$ of $\bP$, conditioned on the event
\begin{equation} \label{conditioning}
\left\lbrace \left|\mathcal{C} \cap \carr_R\right| \leq MR^d\right\rbrace.
\end{equation}
\medskip
\textbf{Step 1.} \textit{Generating microstates.} \ \\
For any $\epsilon > 0$, for any $M, R > 0$, for any $\nu > 0$, for any $N \geq 1$, there exists a family $\mathcal{A} = \mathcal{A}(\epsilon, M, R, \nu, N)$ of point configurations $\mathcal{C}$ such that:
\begin{enumerate}
\item $\mathcal{C} = \sum_{i \in I} \mathcal{C}_i$ where $\mathcal{C}_i$ is a point configuration in $\tilde{K}_i$.
\item $| \mathcal{C} | = N$.
\item The “discretized” empirical process is close to $\bP_{R, M}$
\begin{equation}
\label{bPdbelo} \overline{P}_d(\mathcal{C}) := \frac{1}{|I|} \sum_{i \in I} \delta_{(N^{-1/d} x_i, \,\theta_{x_i} \cdot \mathcal{C}_i)} \text{ belongs to } B(\bP_{R, M}, \nu),
\end{equation}
where $x_i$ denotes the center of $\tilde{K}_i$.
\item The associated empirical process is close to $\bP$
\begin{equation}
\label{bPcbelo} \overline{P}_c(\mathcal{C}) := \int_{\Omega} \delta_{(x,\, \theta_{N^{1/d}x} \cdot \mathcal{C})} \, dx \text{ belongs to } B(\bP, \epsilon).
\end{equation}
Note that $\overline{P}_c(\mathcal{C}) =\overline{\Emp}_N( N^{-1/d}\mathcal{C}) $.
\item The volume of $\mathcal{A}$ satisfies, for any $\epsilon > 0$
\begin{equation} \label{bonvolume}
\liminf_{M \rightarrow \infty} \liminf_{R \rightarrow \infty} \frac{1}{R^d} \lim_{\nu \to 0} \lim_{N \rightarrow \infty} \frac{1}{|I|} \log \mathbf{Leb}_{\Omega_N^N} \left( \mathcal{A} \right) \geq - \int_{\Omega} \left(\ERS[\bPx | \mathbf{\Pi}] -1\right) dx - 1.
\end{equation}
\end{enumerate}
This is essentially \cite[Lemma 6.3]{LebSer} with minor modifications (e.g. the Lebesgue measure in \cite{LebSer} is normalized, which yields an additional logarithmic factor in the formulas).
We will make the following assumption on $\mathcal{A}$
\begin{equation} \label{conditioning2}
|\mathcal{C}_i| \leq 2MR^d \text{ for all } i \in I.
\end{equation}
Indeed, for fixed $M$, when $\overline{P}_d$ is close to $\bP_{R,M}$ (for which \eqref{conditioning} holds), the fraction of hypercubes on which \eqref{conditioning2} fails to hold, as well as the ratio of excess points over the total number of points (namely $N$), are both small. We may then “redistribute” these excess points among the other hypercubes without affecting \eqref{bPcbelo}, while changing the energy estimates below only by a negligible quantity.
\medskip
\textbf{Step 2.} \textit{First energy estimate.} \ \\
For any $R, M, \tau > 0$, the map
\begin{equation*}
\config(\carr_R) \ni \mathcal{C} \longmapsto \mathrm{Int}_{\tau}[\mathcal{C},\mathcal{C}] \wedge \frac{(2MR^d)^2}{\tau^s}
\end{equation*}
(where $\mathrm{Int}_{\tau}$ is as in \eqref{def:Inttau}) is continuous on $\config(\carr_R)$ and \textit{bounded} (this is precisely why we conditioned on the number of points being bounded). We may thus write, in view of \eqref{conditioning}, \eqref{bPdbelo} and \eqref{conditioning2},
\begin{multline*}
\int_{\Omega \times \config(\carr_R)} \mathrm{Int}_{\tau}\, d\overline{P}_d = \int_{\Omega \times \config(\carr_R)} \mathrm{Int}_{\tau} \wedge \frac{(2MR^d)^2}{\tau^s} d\overline{P}_d \\
= \int_{\Omega \times \config(\carr_R)} \mathrm{Int}_{\tau} \wedge \frac{(2MR^d)^2}{\tau^s} d\bP_{R, M} + o_{\nu}(1) = \int_{\Omega \times \config(\carr_R)} \mathrm{Int}_{\tau}\, d\bP_{R, M} + o_{\nu}(1).
\end{multline*}
Moreover we have
\begin{equation*}
\lim_{M \rightarrow \infty} \lim_{R \rightarrow \infty} \frac{1}{R^d} \int_{\Omega \times \config(\carr_R)} \mathrm{Int}_{\tau} d\bP_{R, M} = \overline{\mathbb{W}}_s(\bP) + o_{\tau}(1),
\end{equation*}
thus we see that, with \eqref{bPdbelo}
\begin{equation} \label{energietronqueeconverge}
\lim_{M \rightarrow \infty, R \rightarrow \infty} \lim_{\nu \to 0} \lim_{N \rightarrow \infty} \frac{1}{N} \sum_{i \in I} \mathrm{Int}_{\tau}[\mathcal{C}_i, \mathcal{C}_i] = \overline{\mathbb{W}}_s(\bP) + o_{\tau}(1).
\end{equation}
\medskip
\textbf{Step 3.} \textit{Regularization.} \ \\
In order to deal with the short-scale interactions that are not captured in $\mathrm{Int}_{\tau}$, we apply the regularization procedure of \cite[Lemma 5.11]{LebSer}. Let us briefly present this procedure:
\begin{enumerate}
\item We partition $\Omega_N$ by small hypercubes of sidelength $6\tau$.
\item If one of these hypercubes $\mathcal{K}$ contains more than one point, or if it contains a point and one of the adjacent hypercubes also contains a point, we replace the point configuration in $\mathcal{K}$ by one with the same number of points but confined in the central, smaller hypercube $\mathcal{K}' \subset \mathcal{K}$ of side length $3 \tau$ and that lives on a lattice (the spacing of the lattice depends on the initial number of points in $\mathcal{K}$).
\end{enumerate}
This allows us to control the difference $\mathrm{Int} - \mathrm{Int}_{\tau}$ in terms of the number of points in the modified hypercubes.
In particular we replace $\mathcal{A}$ by a new family of point configurations, such that
\begin{equation} \label{truncationerror}
\frac{1}{N} \sum_{i \in I} \left( \mathrm{Int} - \mathrm{Int}_{\tau} \right) [\mathcal{C}_i, \mathcal{C}_i] \leq C \tau^{-s-d}\Esp_{\overline{P}_d} \left[ \left( \left( \left|\mathcal{C} \cap \carr_{12\tau} \right| \right)^{2+s/d} - 1 \right)_+ \right].
\end{equation}
The right-hand side of \eqref{truncationerror} should be understood as follows: any group of points which were too close to each other (without any precise control) has been replaced by a group of points with the same cardinality, but whose interaction energy is now comparable to that of a lattice. The energy of $n$ points in a lattice of spacing $\frac{\tau}{n^{1/d}}$ scales like $n^{2+ s/d} \tau^{-s}$, and taking the average over all small hypercubes amounts to computing $\frac{1}{\tau^d} \Esp_{\overline{P}_d}$.
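To make the scaling explicit, here is the crude estimate behind this claim (a sketch: we simply bound each of the at most $n^2$ ordered pairwise interactions by the interaction at the minimal possible distance, namely the lattice spacing): for a configuration $\mathcal{C}'$ of $n$ points on a lattice of spacing $\tau n^{-1/d}$,
\begin{equation*}
\mathrm{Int}[\mathcal{C}', \mathcal{C}'] \leq n^2 \left( \frac{\tau}{n^{1/d}} \right)^{-s} = n^{2+s/d}\, \tau^{-s}.
\end{equation*}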
As $\nu \to 0$ we may then compare the right-hand side of \eqref{truncationerror} with the same quantity for $\bP$, namely
\begin{equation*}
\tau^{-s-d} \Esp_{\bP} \left[ \left( \left(\left|\mathcal{C} \cap \carr_{12\tau} \right| \right)^{2+s/d} - 1 \right)_+ \right]
\end{equation*}
which can be shown to be $o_{\tau}(1)$ (following the argument of \cite[Section 6.3.3]{LebSer}), because it is in turn of the same order as
$$
\Esp_{\bP} \left[ \left( \mathrm{Int} - \mathrm{Int}_{\tau} \right) [K_1, K_1] \right],
$$
which goes to zero as $\tau \to 0$ by dominated convergence.
We obtain
\begin{equation} \label{errtrunpetite}
\lim_{\tau \to 0} \lim_{M, R \rightarrow \infty} \lim_{\nu \to 0} \lim_{N \rightarrow \infty} \frac{1}{N} \sum_{i \in I} \left( \mathrm{Int} - \mathrm{Int}_{\tau} \right) [\mathcal{C}_i, \mathcal{C}_i] =0
\end{equation}
and combining \eqref{errtrunpetite} with \eqref{energietronqueeconverge} we get that
\begin{equation} \label{avantscaling}
\lim_{\tau \to 0} \lim_{M \rightarrow \infty, R \rightarrow \infty} \lim_{\nu \to 0} \lim_{N \rightarrow \infty} \frac{1}{N} \sum_{i \in I} \mathrm{Int}[\mathcal{C}_i, \mathcal{C}_i] \leq \overline{\mathbb{W}}_s(\bP).
\end{equation}
\textbf{Step 4.} \textit{Shrinking the configurations.} \edb{
This procedure is borrowed from \cite{HSAdv}. It rescales the configuration by a factor less than one (but very close to $1$), effectively shrinking it and creating an empty boundary layer around each cube. Points belonging to different cubes are thus sufficiently well-separated for the interactions between the cubes to be negligible; this is a much simpler approach to screening than the one needed in the long-range case.}
For $R > 0$ we let $R' := R^{\sqrt{d/s}}$.
It is not true in general that $\mathrm{Int}[\mathcal{C}, \mathcal{C}]$ can be split as the sum $\sum_{i \in I} \mathrm{Int}[\mathcal{C}_i, \mathcal{C}_i]$. However, since the Riesz interaction decays fast at infinity, it is approximately true if the configurations $\mathcal{C}_i$ are separated by a large enough distance. To ensure this, we “shrink” every configuration $\mathcal{C}_i$ in $\tilde{K}_i$, namely we rescale it by a factor $1 - \frac{R'}{R}$. This operation affects the discrete average \eqref{bPdbelo} but only slightly perturbs the empirical process; i.e., for any $\epsilon > 0$, if $M, R$ are large enough and $\nu$ small enough, we may still assume that \eqref{bPcbelo} holds. The interaction energy in each hypercube $\tilde{K}_i$ is multiplied by $\left(1- \frac{R'}{R}\right)^{-s} = 1 + o_R(1)$, but the configurations in two distinct hypercubes are now separated by a distance at least $R'$. Since \eqref{conditioning2} holds, an elementary computation implies that we have, for any $i$ in $I$
\begin{equation*}
\mathrm{Int}[\mathcal{C}_i, \sum_{j \neq i} \mathcal{C}_j] = M^2 R^{d} \frac{R^d}{R'^s} O(1),
\end{equation*}
with a $O(1)$ depending only on $d,s$. We thus get
\begin{equation*}
\mathrm{Int}[\mathcal{C}, \mathcal{C}] = \sum_{i \in I} \mathrm{Int}[\mathcal{C}_i, \mathcal{C}_i] + N M^2 \frac{R^d}{R'^s} O(1),
\end{equation*}
but $\frac{R^d}{R'^s} = R^{d - \sqrt{ds}} = o_R(1)$ by the choice of $R'$ (and the fact that $d < s$, so that $d < \sqrt{ds}$) and thus (in view of \eqref{avantscaling} and the effect of the scaling on the energy)
\begin{equation}
\lim_{\tau \to 0} \lim_{M \rightarrow \infty, R \rightarrow \infty} \lim_{\nu \to 0} \lim_{N \rightarrow \infty} \frac{1}{N} \mathrm{Int}[\mathcal{C}, \mathcal{C}] \leq \overline{\mathbb{W}}_s(\bP).
\end{equation}
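Let us sketch the elementary computation invoked above (a crude bound, using only \eqref{conditioning2}, the separation $R'$ between neighboring hypercubes, and the separation of order $kR$ between hypercubes which are $k$ hypercubes apart):
\begin{equation*}
\mathrm{Int}\Big[\mathcal{C}_i, \sum_{j \neq i} \mathcal{C}_j\Big] \leq (2MR^d)^2 \Big( C_d R'^{-s} + C_d \sum_{k \geq 2} k^{d-1} \big((k-1)R\big)^{-s} \Big) = M^2 R^{d}\, \frac{R^d}{R'^{s}}\, O(1),
\end{equation*}
where $C_d k^{d-1}$ bounds the number of hypercubes at combinatorial distance $k$ (the constant $C_d$ is our notation); the series converges because $s > d$, and $R^{-s} \leq R'^{-s}$ since $R' \leq R$.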
We have thus constructed a large enough (see \eqref{bonvolume}) volume of point configurations in $\Omega_N$ whose associated empirical processes converge to $\bP$ and such that
\begin{equation*}
\frac{1}{N} \mathrm{Int}[\mathcal{C}, \mathcal{C}] \leq \overline{\mathbb{W}}_s(\bP) + o(1).
\end{equation*}
We may view these configurations at the original scale by applying a homothety of factor $N^{-1/d}$; in this way we obtain point configurations $\vec{X}_N$ in $\Omega$ such that
\begin{equation*}
\frac{1}{N^{1+s/d}} E_s(\vec{X}_N) \leq \overline{\mathbb{W}}_s(\bP)+ o(1).
\end{equation*}
It is not hard to see that the associated empirical measure $\mu_N$ converges to the intensity measure of $\bP$ and since $V$ is continuous we also have
\begin{equation*}
\int_{\Omega} V \, d\mu_N = \overline{\mathbb{V}}(\bP) + o(1).
\end{equation*}
This concludes the proof of Proposition \ref{prop:quasicontinu}.
\end{proof}
We may now prove the LDP lower bound.
\begin{proof}[Proof of Proposition \ref{prop:LDPLB}.]
Proposition \ref{prop:quasicontinu} implies \eqref{LDPLB}: indeed, we have
\begin{multline*}
\overline{\mathfrak{P}}_{N, \beta} ( B(\bP, \epsilon)) = \frac{1}{\ZNbeta} \int_{\Omega^N \cap \{\overline{\Emp}_N(\vec{X}_N) \in B(\bP, \epsilon)\}} \exp\left(-\beta N^{-s/d} \mathcal{H}_N(\vec{X}_N)\right) d\vec{X}_N \\ \geq \frac{1}{\ZNbeta} \int_{\Omega^N \cap \{\overline{\Emp}_N(\vec{X}_N) \in B(\bP, \epsilon) \cap T_{N, \delta}(\bP)\}} \exp\left(-\beta N^{-s/d} \mathcal{H}_N(\vec{X}_N)\right) d\vec{X}_N \\
\geq \frac{1}{\ZNbeta} \exp \left(-\beta N \left(\overline{\mathbb{W}}_s(\bP) + \overline{\mathbb{V}}(\bP) + \delta\right) \right) \int_{\Omega^N \cap \{\overline{\Emp}_N(\vec{X}_N) \in B(\bP, \epsilon) \cap T_{N, \delta}(\bP)\}} d\vec{X}_N,
\end{multline*}
and \eqref{quasicontinu} allows us to bound the last integral from below as
\begin{equation*}
\liminf_{\delta \to 0, \epsilon \to 0, N \rightarrow \infty} \frac{1}{N} \log \int_{\Omega^N \cap \{\overline{\Emp}_N(\vec{X}_N) \in B(\bP, \epsilon) \cap T_{N, \delta}(\bP)\}} d\vec{X}_N \geq -\int_{\Omega} \left(\ERS[\bPx | \mathbf{\Pi}] -1\right) dx - 1.
\end{equation*}
Combining the two last displays and letting first $N \rightarrow \infty$, then $\epsilon, \delta \rightarrow 0$, yields \eqref{LDPLB} in view of the definition \eqref{def:fbarbeta} of $\overline{\mathcal{F}}_{\beta}$.
\end{proof}
\subsection{Proof of Theorem \ref{theo:LDPemp} and Corollary \ref{coro:ZNbeta}}
From Proposition \ref{prop:LDPUB} and Proposition \ref{prop:LDPLB}, the proof of Theorem \ref{theo:LDPemp} is standard. Exponential tightness of $\overline{\mathfrak{P}}_{N, \beta}$ comes for free (see e.g. \cite[Section 4.1]{LebSer}) because the average number of points is fixed, and we may thus improve the weak large deviation estimates \eqref{LDPUB}, \eqref{LDPLB} into the following: for any $A \subset \overline{\mathcal{M}}_{stat,1}(\bconfig)$ we have
\begin{multline*}
- \inf_{\mathring{A}} \overline{\mathcal{F}}_{\beta} + \liminf_{N \rightarrow \infty} \left(- \frac{\log \ZNbeta}{N}\right) \\ \leq \liminf_{N \rightarrow \infty}\frac{1}{N} \log \overline{\mathfrak{P}}_{N, \beta}(A) \leq \limsup_{N \rightarrow \infty}\frac{1}{N} \log \overline{\mathfrak{P}}_{N, \beta}(A) \\ \leq - \inf_{\overline{A}} \overline{\mathcal{F}}_{\beta} + \limsup_{N \rightarrow \infty} \left(- \frac{\log \ZNbeta}{N}\right).
\end{multline*}
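Before deducing the corollary, let us sketch (for the reader's convenience; this is the standard argument) how the limit of the partition function is identified. The proof of Proposition \ref{prop:LDPLB} in fact bounds $\ZNbeta$ from below: since $\overline{\mathfrak{P}}_{N, \beta}(B(\bP, \epsilon)) \leq 1$, it yields
\begin{equation*}
\liminf_{N \rightarrow \infty} \frac{\log \ZNbeta}{N} \geq - \overline{\mathcal{F}}_{\beta}(\bP) \quad \text{for every } \bP \in \overline{\mathcal{M}}_{stat,1}(\bconfig),
\end{equation*}
while the upper bound \eqref{LDPUB}, combined with exponential tightness, gives $\limsup_{N \rightarrow \infty} \frac{\log \ZNbeta}{N} \leq - \inf \overline{\mathcal{F}}_{\beta}$; the infimum is attained because $\overline{\mathcal{F}}_{\beta}$ is lower semi-continuous with compact sub-level sets.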
We easily deduce that
\begin{equation*}
\lim_{N \rightarrow \infty} \frac{\log \ZNbeta}{N} = - \min_{\overline{\mathcal{M}}_{stat,1}(\bconfig)} \overline{\mathcal{F}}_{\beta},
\end{equation*}
which proves Corollary \ref{coro:ZNbeta}, and that the LDP for $\overline{\mathfrak{P}}_{N, \beta}$ holds as stated in Theorem \ref{theo:LDPemp}.
\subsection{Proof of Theorem \ref{theo:LDPmesure}}
\label{sec:contractionprinciple}
\begin{proof}
Theorem \ref{theo:LDPmesure} follows from an application of the “contraction principle” (see e.g. \cite[Section 3.1]{MR3309619}). Let us consider the map $\pbconfig \to \mathcal{M}(\Omega)$ defined by
\begin{equation*}
\widetilde{\mathrm{Intens}} : \bP \mapsto \int_{\Omega} \Esp_{\bPx}\left[ \left| \mathcal{C} \cap \carr_1 \right| \right] \delta_x \, dx.
\end{equation*}
It is continuous on $\psbconfig$, where it coincides with $\overline{\mathrm{Intens}}$. By the contraction principle, the law of $\widetilde{\mathrm{Intens}}(\overline{\Emp}(\vec{X}_N))$ obeys a large deviation principle governed by
\begin{equation*}
\rho \mapsto \inf_{\overline{\mathrm{Intens}}(\bP) = \rho} \overline{\mathcal{F}}_{\beta}(\bP),
\end{equation*}
which is easily seen to be equal to $I_{\beta}(\rho)$ as defined in \eqref{def:IbetaA}.
For technical reasons (a boundary effect), it is not true in general that $\widetilde{\mathrm{Intens}}(\overline{\Emp}(\vec{X}_N)) = \mathrm{emp}(\vec{X}_N)$, however we have
\begin{equation*}
\mathrm{dist}_{\mathcal{M}(\Omega)} \left(\widetilde{\mathrm{Intens}}(\overline{\Emp}(\vec{X}_N)), \mathrm{emp}(\vec{X}_N)\right) = o_N(1),
\end{equation*}
uniformly for $\vec{X}_N \in \Omega^N$. In particular, the laws of $\widetilde{\mathrm{Intens}}(\overline{\Emp}(\vec{X}_N))$ and of $\mathrm{emp}(\vec{X}_N)$ are exponentially equivalent (in the language of large deviations), thus any LDP can be transferred from one to the other. This proves Theorem \ref{theo:LDPmesure}.
\end{proof}
\section{Additional proofs: Propositions \ref{prop:muVbeta}, \ref{prop:minimizers} and \ref{prop:crystallization1d}}
\label{sec:addproofs}
\subsection{Limit of the empirical measure} \label{sec:LDPempir}
From Theorem \ref{theo:LDPmesure} and the fact that $I_{\beta}$ is strictly convex we deduce that $\mathrm{emp}(\vec{X}_N)$ converges almost surely to the unique minimizer of $I_{\beta}$.
\begin{proof}[Proof of Proposition \ref{prop:muVbeta}.] \ \\
First, if $V = 0$ and $\Omega$ is bounded, $I_{\beta}$ can be written as
\begin{multline*}
I_{\beta}(\rho) := \int_{\Omega} \rho(x) \inf_{P \in \psunconfig} \left( \beta \rho(x)^{s/d} \mathbb{W}_s(P) +\ERS[P|\mathbf{\Pi}] \right)dx \\ + \int_{\Omega} \rho(x) \log \rho(x) \, dx.
\end{multline*}
We claim that both terms on the right-hand side are minimized when $\rho$ is the uniform probability measure on $\Omega$ (we may assume $|\Omega| = 1$ to simplify, without loss of generality). This property is well known for the relative entropy term $\int_{\Omega} \rho \log \rho$, and we now prove it for the energy term.
\ed{First, let us observe that
$$
\alpha \mapsto \inf_{P \in \psunconfig} \left( \beta \alpha^{1+s/d} \mathbb{W}_s(P) + \alpha \ERS[P|\mathbf{\Pi}]\right)
$$
is convex in $\alpha$ \edb{since it is the infimum over a family of convex functions (recall } that $\alpha \mapsto \alpha^{1+s/d}$ is convex in $\alpha$ and that $\mathbb{W}_s$ is always positive). Since $|\Omega| = 1$ we have, by Jensen's inequality,
\begin{multline*}
\int_{\Omega} \inf_{P \in \psunconfig} \left( \beta \rho(x)^{1+s/d} \mathbb{W}_s(P) + \rho(x)\ERS[P|\mathbf{\Pi}] \right)dx
\\
\geq \inf_{P \in \psunconfig} \left( \beta \left(\int_{\Omega} \rho(x) \right)^{1+s/d} \mathbb{W}_s(P) + \left(\int_{\Omega} \rho(x)\right) \ERS[P|\mathbf{\Pi}] \right),
\end{multline*}
and since $\int_{\Omega} \rho = 1$, we conclude that $I_{\beta}$ is minimal for $\rho \equiv 1$.} Thus the empirical measure converges almost surely to the uniform probability measure on $\Omega$, which proves the first point of Proposition \ref{prop:muVbeta}.
Next, let us assume that $V$ is arbitrary and $\Omega$ bounded. It is not hard to see that for the minimizer $\mu_{V, \beta}$ of $I_{\beta}$ we have, as $\beta \to 0$,
\begin{equation*}
I_{\beta}(\mu_{V, \beta}) \geq I_{\beta}(\rho_{\mathrm{unif}}) + O(\beta),
\end{equation*}
where $\rho_{\mathrm{unif}}$ is the uniform probability measure on $\Omega$. Moreover it is also true (as proven above) that the first term in the definition of $I_{\beta}$ is minimal for $\rho = \rho_{\mathrm{unif}}$. We thus get that, as $\beta \to 0$,
$$
\int_{\Omega} \mu_{V, \beta} \log \mu_{V, \beta} - \int_{\Omega} \rho_{\mathrm{unif}} \log \rho_{\mathrm{unif}} = O(\beta),
$$
\ed{in other words the relative entropy of $\mu_{V, \beta}$ with respect to $\rho_{\mathrm{unif}}$ converges to $0$ as $\beta \to 0$. The Csiszár--Kullback--Pinsker inequality allows us to bound the square of the total variation distance between $\mu_{V, \beta}$ and $\rho_{\mathrm{unif}}$ by the relative entropy (up to a multiplicative constant), and thus $\mu_{V, \beta}$ converges (in total variation) to the uniform probability measure on $\Omega$ as $\beta \to 0$.} This proves the second point of Proposition \ref{prop:muVbeta}.
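For the reader's convenience, the inequality used here reads, in one standard normalization of the total variation distance,
\begin{equation*}
\left\| \mu_{V, \beta} - \rho_{\mathrm{unif}} \right\|_{TV}^{2} \leq \frac{1}{2} \int_{\Omega} \mu_{V, \beta} \log \frac{\mu_{V, \beta}}{\rho_{\mathrm{unif}}},
\end{equation*}
so that the convergence of the relative entropy to $0$ indeed forces convergence in total variation.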
Finally, for $V$ arbitrary, the problem of minimizing $I_{\beta}$ is, as $\beta \rightarrow \infty$, similar to that of minimizing
$$
\beta \left( \int_{\Omega} \rho(x)^{1+s/d} \min \mathbb{W}_s dx + \int_{\Omega} \rho(x) V(x) dx \right).
$$
Since $\min \mathbb{W}_s = C_{s,d}$ we recover (up to a multiplicative constant $\beta > 0$) the minimization problem studied in \cite{Hardin:2016kq}, namely the problem of minimizing
$$
C_{s,d} \int_{\Omega} \rho(x)^{1+s/d} dx + \int_{\Omega} \rho(x) V(x) dx,
$$
among probability densities, whose (unique) solution is given by $\mu_{V, \infty}$.
In order to prove that $\mu_{V, \beta}$ converges to $\mu_{V, \infty}$ as $\beta \rightarrow \infty$, we need to make that heuristic rigorous, which requires an adaptation of \cite[Section 7.3, Step 2]{leble2016logarithmic}. We claim that there exists a sequence $\{P_k\}_{k \geq 1}$ in $\psunconfig$ such that
\begin{equation} \label{Pkapprox}
\lim_{k \rightarrow \infty} \mathbb{W}_s(P_k) = C_{s,d}, \qquad \ERS[P_k|\mathbf{\Pi}] < + \infty \ \text{ for all } k \geq 1.
\end{equation}
We could think of taking $P_k = P$ where $P$ is some minimizer of $\mathbb{W}_s$ among $\psunconfig$, but it might have infinite entropy (e.g., if $P$ was the law of the stationary process associated to a lattice, as in dimension $1$). We thus need to “expand” $P$ (e.g., by making all the points vibrate independently in small balls as described in \cite[Section 7.3, Step 2]{leble2016logarithmic} in the case of the one-dimensional lattice). We may then write that, for any $\beta > 0$ and $k \geq 1$,
\begin{multline*}
I_{\beta}(\mu_{V, \beta}) \leq I_{\beta}(\mu_{V, \infty}) \leq \beta \left( \int_{\Omega} \mu_{V, \infty}(x)^{1+s/d} \mathbb{W}_s(P_k) \, dx + \int_{\Omega} \mu_{V, \infty}(x) V(x) \, dx \right) \\ + \ERS[P_k|\mathbf{\Pi}] + \int_{\Omega} \mu_{V, \infty}(x) \log \mu_{V, \infty}(x) \, dx \\
\leq \beta \left(\int_{\Omega} \mu_{V, \infty}(x)^{1+s/d} C_{s,d} \, dx + \int_{\Omega} \mu_{V, \infty}(x) V(x) \, dx \right) + \ERS[P_k|\mathbf{\Pi}] + \beta o_k(1),
\end{multline*}
where we have used \eqref{Pkapprox} in the last inequality. Choosing $\beta$ and $k$ properly \ed{so that $k \to \infty$ as $\beta \to \infty$, while ensuring that the $\beta o_k(1)$ term goes to zero, we have}
\begin{multline*}
C_{s,d} \int_{\Omega} \mu_{V, \beta}(x)^{1+s/d} \, dx + \int_{\Omega} \mu_{V, \beta}(x) V(x) \, dx \leq C_{s,d} \int_{\Omega} \mu_{V, \infty}(x)^{1+s/d} dx + \int_{\Omega} \mu_{V, \infty}(x) V(x) dx \\
+ o_{\beta \rightarrow \infty }(1).
\end{multline*}
By convexity, this implies that $\mu_{V, \beta}$ converges to $\mu_{V, \infty}$ as $\beta \rightarrow \infty$.
\end{proof}
\subsection{The case of minimizers}
\begin{proof}[Proof of Proposition \ref{prop:minimizers}]
Let $\{\vec{X}_N\}_N$ be a sequence of $N$-point configurations such that, for all $N \geq 1$, $\vec{X}_N$ minimizes $\mathcal{H}_N$. From Proposition \ref{prop:glinf} we know that (up to extraction), $\{\overline{\Emp}(\vec{X}_N)\}_N$ converges to some $\bP \in \overline{\mathcal{M}}_{stat,1}(\bconfig)$ such that
\begin{equation} \label{minimlsci}
\overline{\mathbb{W}}_s(\bP) + \overline{\mathbb{V}}(\bP) \leq \liminf_{N \rightarrow \infty} \frac{\mathcal{H}_N(\vec{X}_N)}{N^{1+s/d}},
\end{equation}
and we have, by \eqref{Wsfatou}, \eqref{minA1} and the scaling properties of $\mathbb{W}_s$
\begin{equation} \label{minimplusgrand}
\overline{\mathbb{W}}_s(\bP) + \overline{\mathbb{V}}(\bP) \geq C_{s,d} \int_{\Omega} \rho(x)^{1+s/d} dx + \int_{\Omega} V(x) \rho(x) dx,
\end{equation}
where $\rho = \overline{\mathrm{Intens}}(\bP)$. We also know that the empirical measure $\mathrm{emp}(\vec{X}_N)$ converges to the intensity measure $\rho = \overline{\mathrm{Intens}}(\bP)$.
On the other hand, from \cite[Theorem 2.1]{Hardin:2016kq} we know that $\mathrm{emp}(\vec{X}_N)$ converges to some measure $\mu_{V, \infty}$ which is defined as follows: define $L$ to be the unique solution of
\begin{equation*}
\int_{\Omega} \left[\frac{L-V(x)}{C_{s,d}(1+s/d)}\right]_+^{d/s} dx=1,
\end{equation*}
and then let $\mu_{V, \infty}$ be given by
\begin{equation}\label{muVinfty}
\mu_{V, \infty}(x):= \left[\frac{L-V(x)}{C_{s,d} (1+s/d)}\right]_+^{d/s} \qquad (x\in \Omega).
\end{equation}
It is proven in \cite{Hardin:2016kq} that $\mu_{V, \infty}$ minimizes the quantity
\begin{equation} \label{queminimim}
C_{s,d} \int_{\Omega} \rho(x)^{1+s/d}\, dx + \int V(x) \rho(x)\, dx,
\end{equation}
among all probability density functions $\rho$ supported on $\Omega$. It is also proven that
\begin{equation} \label{minimizerminimize}
\lim_{N \rightarrow \infty} \frac{\mathcal{H}_N(\vec{X}_N)}{N^{1+s/d}} = C_{s,d} \int_{\Omega} \mu_{V, \infty}(x)^{1+s/d}\, dx + \int V(x) \mu_{V, \infty}(x)\, dx.
\end{equation}
By uniqueness of the limit we have $\rho := \overline{\mathrm{Intens}}(\bP) = \mu_{V, \infty}$. In view of \eqref{minimlsci}, \eqref{minimplusgrand}, \eqref{minimizerminimize} and by the fact that $\mu_{V, \infty}$ minimizes \eqref{queminimim} we get that
$$
\overline{\mathbb{W}}_s(\bP) + \overline{\mathbb{V}}(\bP) = C_{s,d} \int_{\Omega} \mu_{V, \infty}(x)^{1+s/d} dx + \int_{\Omega} V(x) \mu_{V, \infty}(x) dx,
$$
and that $\bP$ is in fact a minimizer of $\overline{\mathbb{W}}_s + \overline{\mathbb{V}}$. We must also have
$$\overline{\mathbb{W}}_s(\bP) = C_{s,d} \int_{\Omega} \mu_{V, \infty}(x)^{1+s/d}\, dx$$
hence (in view of \eqref{Wsfatou}) we get
\begin{equation*}
\mathcal{W}_s(\mathcal{C}) = C_{s,d} \mu_{V, \infty}(x)^{1+s/d} = \min_{\config_{\mu_{V, \infty}(x)}} \mathcal{W}_s, \text{ for $\bP$-a.e. $(x,\mathcal{C})$,}
\end{equation*}
which concludes the proof.
\end{proof}
\subsection{The one-dimensional case}
Proposition \ref{prop:crystallization1d} is very similar to the first statement of \cite[Theorem 3]{leble2016logarithmic}, and we sketch its proof here.
\begin{proof}[Proof of Proposition \ref{prop:crystallization1d}]
First, we use the expression of $\mathbb{W}_s$ in terms of the two-point correlation function, as presented in \eqref{Wrho2P2}
$$
\mathbb{W}_s(P) = \liminf_{R \rightarrow \infty} \int_{[-R, R]^d} \frac{1}{|v|^s} \rho_{2,P}(v) \left( 1 - \frac{|v|}{R} \right) dv.
$$
Then, we split $\rho_{2, P}$ as the sum
$$
\rho_{2, P} = \sum_{k=1}^{+\infty} \rho_{2,P}^{(k)},
$$
where $\rho_{2,P}^{(k)}$ is the correlation function of the $k$-th neighbor (which makes sense only in dimension $1$). It is not hard to check that
$$\int \rho_{2,P}^{(k)}(x) \, dx = 1 \text{ and } \int x \, \rho_{2,P}^{(k)}(x) \, dx = k$$
(the last identity holds because $P$ has intensity $1$ and is stationary). Using the convexity of
$$
v \mapsto \frac{1}{|v|^s} \left( 1 - \frac{|v|}{R} \right),
$$
we obtain, by Jensen's inequality (applied to the probability density $\rho_{2,P}^{(k)}$, which has mean $k$), that for any $k \geq 1$ it holds
$$
\int \frac{1}{|v|^s} \left( 1 - \frac{|v|}{R} \right) \rho_{2,P}^{(k)} dv \geq \int \frac{1}{|v|^s} \left( 1 - \frac{|v|}{R} \right) \delta_{k}(v) dv = \int \frac{1}{|v|^s} \left( 1 - \frac{|v|}{R} \right) \rho_{2,P_{\mathbb{Z}}}^{(k)}(v) dv,
$$
where $P_{\mathbb{Z}}$ is the law of $u + \mathbb{Z}$ (with $u$ uniform in $[0,1]$); thus we have
$$
\mathbb{W}_s(P) \geq \mathbb{W}_s(P_{\mathbb{Z}}),
$$
which proves that $\mathbb{W}_s$ is minimal at $P_{\mathbb{Z}}$.
\end{proof}
\noindent
{\bf Acknowledgements:} The authors thank the referee for a careful reading and helpful suggestions.
\bibliographystyle{alpha}
\section{Introduction}
At the onset of surface theory, surfaces in 3-space, and especially
canonical surfaces in 3-space, occupied a central role.
In particular, this study led to the famous Noether inequality $K^2
\geq 2 p_g - 4$, while
Castelnuovo observed that if the canonical map of a minimal smooth
surface $S$ is birational
(obviously then $p_g \geq 4$), the
inequality $K^2 \geq 3 p_g - 7$ must hold true.
These are the lower bounds for surface geography, but upper bounds
played a decisive role
in the investigations of the last 30 years, leading to
the so-called Bogomolov--Miyaoka--Yau inequality $$K^2 \leq 9 \chi := 9
(p_g -q +1)$$ (cf. \cite{BPV} Chapter
VII, section 4).
For instance, the BMY inequality gives a hypothetical upper bound for
a question raised by F. Enriques (cf. \cite{enriques},
chapter VIII, page $284$).
\begin{problem}
Which are the possible values of $K^2$, in particular which is the
highest possible value of $K^2$ for
minimal surfaces
with geometric genus $p_g = 4$ having a birational canonical map
(so-called {\em simple canonical} surfaces)?
\end{problem}
In fact, Enriques even conjectured that the highest possible value for
$K^2$ should be $24$, based on the conjecture that the expected number
of moduli should be strictly positive. The second author showed in
\cite{bidouble}
that this bound does not hold
true,
constructing simple canonical surfaces with geometric genus $p_g = 4$
and $11 \leq K^2 \leq 28$
(in these examples $K^2$ equals the ``canonical degree'', i.e., the
degree of the canonical image).
This bound was improved by C. Liedtke (cf. \cite{liedke}) who showed
the existence of a simple canonical
surface with $p_g = 4$ and $K^2 = 31$ (and canonical degree $ 12 $).
Simple canonical surfaces have $K^2
\geq 5$, and for $5 \leq K^2
\leq 7$ they were constructed by Enriques,
Franchetta, Kodaira, Maxwell, and for $6 \leq K^2 \leq 16$ by Burniat,
while Ciliberto was able to show for $5 \leq K^2
\leq 10$ the existence of simple canonical surfaces with ordinary
singularities (cf. \cite{enriques},
\cite{franchetta},
\cite{maxwell}, \cite{kodaira},
\cite{burniat}, \cite{ciliberto}).
If we try to go up with $ K^2$, the BMY inequality tells us that $
K^2 \leq 45$, and that,
if equality holds,
then necessarily $S$ is regular $ (q(S) = 0).$
The main result of this paper is the following
\medskip
{\bf Main Theorem.} {\em There exists a minimal smooth algebraic surface $S$ of
general type over the complex numbers with $K^2 = 45$ and $p_g =
4$, and with birational canonical map. $S$ is rigid, the canonical system
$|K_S|$ has a fixed part and the degree of the canonical image is $19$.}
\medskip
The rigidity of $S$ is due to the fact that, by Yau's proof of the
inequality $K^2 \leq 9 \chi$,
it follows (cf. also \cite{miya}) that $K^2 = 9 \chi$
if and only if the universal covering of $S$ is the 2-dimensional
complex ball $\mathfrak B_2$.
It was for a long time extremely hard to give direct algebro-geometric
constructions of such ball quotients,
until a breakthrough came via the explicit constructions by Hirzebruch as
Kummer coverings of the complex projective plane branched in a
configuration of lines (\cite{hirz}).
These examples were extended and generalized in
the book \cite{bhh}, which amply describes three examples of such
(compact) ballquotients.
The configurations occurring are quite classical: a complete
quadrangle, the Hesse configuration and the dual
Hesse configuration. Even if it is possible to determine the
numerical data which a configuration has to
fulfill in order to give rise to a ball quotient, it is less easy to compute
the holomorphic invariants.
In fact, already the determination of the irregularities $q$ of the
Hirzebruch examples
and of some \'etale quotients of them required
further work by M.-N. Ishida (cf. \cite {ishida1},
\cite{ishida2}), but no regular examples were indeed found (except
Mumford's fake projective plane,
whose construction however was not as explicit as Hirzebruch's,
see \cite{mum}).
The example of \cite{bhh} we are interested in here is the
$(\mathbb{Z}/5 \mathbb{Z})^5$-covering
$\hat{S}$ of $\mathbb{P}^2$ branched exactly in a complete
quadrangle. This surface has the invariants $K^2
= 45 \cdot 125$ and $\chi = 5 \cdot 125$. It is clear that an
\'etale $(\mathbb{Z}/5 \mathbb{Z})^3$ quotient
or, equivalently, a smooth $(\mathbb{Z}/5 \mathbb{Z})^2$ covering of
$\mathbb{P}^2$ branched exactly in a
complete quadrangle has the invariants $K^2 = 45$ and $\chi = 5$.
Since, as we observed, $\chi = p_g - q +1$,
we have to produce an example of a surface $S$ which is regular
(i.e., $q=0$) in order to get
the desired example of a surface with
$K^2 = 45$ and
$p_g =4$. In fact, we will show that up to isomorphisms there are
exactly four smooth surfaces with $K^2 =
45$, $\chi = 5$, obtained as $(\mathbb{Z}/5 \mathbb{Z})^2$ coverings of
$\mathbb{P}^2$ branched exactly in a
complete quadrangle: but only one of them is regular (has $q=0$).
\medskip
The main ingredient of our investigation is the theory of Abelian
Galois coverings,
developed
by Pardini (cf. \cite{Pardini1}), but apparently not sufficiently known.
Since the treatment by Pardini is very algebraic, and at some points
not so explicit, we devote section $1$ to explain the structure theorem for
such
Abelian coverings, and especially the relation occurring between the
topological data (which
allow to construct the examples) and the explicit determination of
the character sheaves (or eigensheaves) of the covering
(these determine not only the topological but also the holomorphic
invariants of the constructed surface).
Sections $2$ and $3$ are devoted to the construction of our surfaces, and
to the investigation of the symmetries of our construction.
This study allows us to classify all the examples up to isomorphisms.
\section{Abelian Covers}
In this paragraph we will recall the structure theorem for normal Abelian
Galois ramified coverings. We shall give a more direct presentation
than the one in the
original paper by R. Pardini (cf. \cite{Pardini1}). This will turn
out to be more suitable for our purposes.
Let $X$, $Y$ be normal projective varieties, assume $Y$ to be smooth
and let $\pi : X \rightarrow Y$
be a finite Galois cover with Abelian Galois group $G$. By the theorem
on the purity of the branch locus, the critical set of $\pi$ is a
divisor $R$, the ramification
divisor, whose image $D := \pi (R)$ is called the branch divisor. In
the case where also
$X$ is smooth we have the following result
(cf. \cite{catmod}, prop. 1.1).
\begin{prop}
If $X$ is smooth, $R$ is a normal crossing divisor with smooth components.
Moreover, if $x \in X$, then
the stabilizer of $x$ is the direct sum of the stabilizers of the
components of $R$ passing through $x$ and
these last groups are cyclic.
\end{prop}
We may assume without loss of generality, and will assume in the
following, that $Y$ is
smooth and $D$ is a normal crossing divisor.
We remark that $\pi$ factors canonically as
$$\xymatrix{
X\ar[rr]^\pi \ar[dr] _p& &Y \\
&X'\ar[ur]_{p'} &
}$$
where $X'$ is maximal such that $p' : X' \rightarrow Y$ is
unramified. In fact, one takes
$X' := X / G'$, where $G'$ is the subgroup of $G$ generated by the
stabilizers $G_x$ of points $x \in X$.
\begin{definition}
$\pi$ is called {\em totally ramified} iff $p'$ is an isomorphism
(i.e., $G = G'$).
\end{definition}
Observe that $\pi$ is necessarily totally ramified if $Y$ has a
trivial algebraic fundamental group.
Now, $\pi$ is determined by the surjective homomorphism $\phi : \pi_1
(Y - D) \rightarrow G$,
which factors through $\varphi : H_1 (Y - D, \mathbb{Z}) \rightarrow
G$, since $G$ is assumed to be Abelian.
We denote by $G^*$ the group of characters of $G$, and
we shall use the additive notation for the group
operation in $G^*$ . Recall that $\pi$
is flat (for this it
suffices that $Y$ is smooth and $X$ is normal) and that the action of
$G$ induces a splitting of the direct image of $\ensuremath{\mathcal{O}}_X$ into eigensheaves
$$
\pi _* \mathcal{O}_X = \bigoplus_{\chi \in G^*} \mathcal{L}_{\chi}^{-1},
$$
where $G$ acts on the invertible sheaf $\mathcal{L}_{\chi}^{-1}$ via
the character $\chi$.
Note that $\mathcal{L}_1 \cong \mathcal{O}_Y$ and denote by
$L_{\chi}$ a divisor
associated to the eigensheaf $\mathcal{L}_{\chi}$ (thus
$\mathcal{L}_{\chi}\cong \ensuremath{\mathcal{O}}(L_{\chi})$).
We shall show how:
\begin{itemize}
{\em \item[1)] one calculates $H_1(Y-D, \mathbb{Z})$;
\item[2)] one calculates the character sheaves $\mathcal{L}_{\chi}=
\ensuremath{\mathcal{O}}(L_{\chi})$
in terms of the surjective homomorphism $\varphi : H_1 (Y - D,
\mathbb{Z}) \rightarrow G$.}
\end{itemize}
Consider the exact sequence
\begin{equation}\label{exactsequ}
0 \rightarrow K \rightarrow H_1(Y-D, \mathbb{Z}) \rightarrow H_1(Y,
\mathbb{Z}) \rightarrow 0.
\end{equation}
\begin{rem}\label{tr}
If $\pi$ is totally ramified, $\varphi|K : K \rightarrow G$ is surjective.
\end{rem}
Following the arguments in \cite{catmod} we obtain
\begin{prop}
$$
K = ker(H_1(Y-D) \rightarrow H_1(Y)) = coker(r : H^{2n-2}(Y)
\rightarrow H^{2n-2}(D)).
$$
In particular, if $H_1(Y, \mathbb{Z}) = 0$, then
$H_1(Y-D, \mathbb{Z}) = coker(r : H^{2n-2}(Y) \rightarrow H^{2n-2}(D))$.
\end{prop}
{\it Proof. }
Let $V$ be an open tubular neighbourhood of $D$ and denote by
$\partial V$ its boundary.
Then we have the exact sequence
$$
\ldots \rightarrow H^{2n-2}(Y) \rightarrow H^{2n-2}(D) \rightarrow
H^{2n-1}(Y,\overline{V})
\rightarrow H^{2n-1}(Y) \rightarrow \ldots
$$
Observing that $H^{2n-1}(Y,\overline{V}) \cong H_1 (Y-D, \mathbb{Z})$,
we see that
$$
K = ker(H_1(Y-D, \mathbb{Z})\rightarrow H_1(Y, \mathbb{Z})) \cong
$$
$$
\cong coker(r : H^{2n-2}(Y) \rightarrow H^{2n-2}(D) \cong
\bigoplus_{i=1}^k [D_i] \mathbb{Z}).
$$
\hspace*{\fill}$Q.E.D.$
\begin{rem}
Applying $Hom_{\mathbb{Z}}(-,G)$ to the short exact sequence
(\ref{exactsequ}) above we get
$$
0 \rightarrow Hom(H_1(Y, \mathbb{Z}),G) \rightarrow Hom(H_1(Y-D, \mathbb{Z}),G)
\rightarrow Hom(K,G) \rightarrow
$$
$$
\rightarrow Ext^1(H_1(Y, \mathbb{Z}),G) \rightarrow Ext^1(H_1(Y-D, \mathbb{Z}),G)
\rightarrow Ext^1(K,G) \rightarrow 0.
$$
Therefore an Abelian covering of $Y$ ramified in $D$ is uniquely
determined by a surjective morphism $\varphi : K \rightarrow G$ if and
only if $0 = Hom(H_1(Y),G) $ and $ Ext^1(H_1(Y),G) \rightarrow
Ext^1(H_1(Y - D),G)$ is injective. This happens,
for instance, if $H_1(Y, \mathbb{Z})=0$, or more generally if $H_1(Y,
\mathbb{Z})$ is a finite group whose
exponent
is relatively prime to the exponent of $G$.
\end{rem}
Let us determine the character sheaves of the Abelian covering determined by
$\varphi : H_1(Y-D, \mathbb{Z}) \rightarrow G$.
Let $\chi \in G^*$ be a character of $G$, i.e., $\chi : G \rightarrow
C \subset \mathbb{C}^*$,
where $C$ is cyclic. Then $\chi$ induces a surjective morphism $ \chi
\circ \varphi : H_1(Y-D, \mathbb{Z})
\rightarrow C$, whence a factorization of $\pi$ as
$$\xymatrix{
X\ar[rr]^\pi \ar[dr] & &Y \\
&Z:= X / (ker (\chi \circ \varphi))\ar[ur]_{\pi_{\chi}} &
}$$
where $\pi_{\chi} : Z \rightarrow Y$ is a cyclic covering with group $C$.
\begin{rem}
$\mathcal{L}_{\chi}(Z) = \mathcal{L}_{\chi}(X)$, and we are reduced
to calculating
the character sheaves for cyclic coverings.
\end{rem}
Write $D = \bigcup_{i=1}^k D_i$ as a union of smooth irreducible components and
denote by $\delta_i$ the image of a small
loop around $D_i$ in $H_1(Y-D, \mathbb{Z})$.
Let $d$ be the order of $C$ and let us identify $C$ with
$\mathbb{Z} / d$; then we have the well known formula (cf. the
proof of prop. 4.5 of \cite{torelli})
$$
\ensuremath{\mathcal{O}}_Y(d L_{\chi}) \cong \ensuremath{\mathcal{O}}_Y(\sum_{i=1}^k (\chi \circ \varphi)
(\delta_i) \ D_i).
$$
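As an illustration, in the simplest case $C \cong \mathbb{Z}/2 \mathbb{Z}$ the above formula specializes to the classical relation defining a double cover:
$$
\ensuremath{\mathcal{O}}_Y(2 L_{\chi}) \cong \ensuremath{\mathcal{O}}_Y\Big(\sum_{i \, : \, (\chi \circ \varphi)(\delta_i) = 1} D_i\Big),
$$
i.e., $2 L_{\chi}$ is linearly equivalent to the subdivisor of $D$ in which the double cover is actually branched.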
\begin{rem}
We remark that the above linear equivalence
$$
dL_{\chi} \equiv \sum_{i=1}^k (\chi \circ \varphi) (\delta_i) D_i,
$$
depends only on $\chi \circ (\varphi|K)$ and does not uniquely determine
the character sheaf $\mathcal{L}_{\chi}$. In fact, if
$\mathcal{L}_{\chi} \in Pic(Y)$ satisfies the above
equation, then also $\mathcal{L}_{\chi} \otimes \eta$ does, for each
$d$-torsion sheaf $\eta \in Pic(Y)$.
If
$\eta$ corresponds to an element $\alpha \in Hom(H_1(Y,\mathbb{Z}),
\mathbb{Z} / d \mathbb{Z})$, then
$\mathcal{L}_{\chi} \otimes \eta$ is the character sheaf of the
cyclic covering corresponding to $\chi \circ
\varphi + \alpha \circ p$, where $p : H_1(Y-D, \mathbb{Z})
\rightarrow H_1(Y, \mathbb{Z})$. Clearly
$(\chi \circ \varphi + \alpha \circ p)|K = (\chi \circ \varphi)|K$.
\end{rem}
Now, consider the exact sequence
$$
0 \rightarrow \bigoplus_{i=1}^k \mathbb{Z} \cdot D_i \rightarrow
Pic(Y) \rightarrow Pic(Y-D) \rightarrow 0.
$$
Then $\chi \circ \varphi \in Hom(H_1(Y-D, \mathbb{Z}), \mathbb{Z} / d
\mathbb{Z})$ corresponds to
a $d$- torsion sheaf $\eta \in Pic(Y-D)$. Assume that $\mathcal{L}
= \mathcal{O}(L)$ is another lifting of
$\eta$ to $Pic(Y)$. Then $\mathcal{L} = \mathcal{L}_{\chi} + \sum a_i
D_i$, $a_i \in \mathbb{Z}$. Therefore
$$
dL = d(L_{\chi} + \sum a_iD_i) \equiv \sum_{i=1}^k ((\chi \circ
\varphi) (\delta_i) + da_i) D_i.
$$
Choosing a fixed system of representatives of $\mathbb{Z} / d
\mathbb{Z}$, e.g.,
$\mathbb{Z} / d \mathbb{Z} = \{0, \ldots , d-1\}$, we then get the
uniqueness of $\mathcal{L}_{\chi}$.
\medskip
We will now use the above approach in order to write explicit equations for $X$ as
a subvariety in the geometric
vector bundle
corresponding to the locally free sheaf $\bigoplus_{\chi \in G^*-\{1\}}
\mathcal{L}_{\chi}$.
\begin{rem}
Let $\chi: G \rightarrow C \cong \mathbb{Z} /d$, $\chi ': G
\rightarrow C' \cong \mathbb{Z} /d'$
be two characters of $G$. Then $ord(\chi + \chi ')$ divides $l.c.m.(d,d')
=:M$. Write $M$ as $M = \lambda \cdot d =
\lambda ' \cdot d'$. Consider the linear equivalences
$$
d L_{\chi} \equiv \sum_{i=1}^k (\chi \circ \varphi)
(\delta_i) D_i = \sum_{i=1}^k \Delta_i D_i,
$$
$$
d' L_{\chi'} \equiv \sum_{i=1}^k (\chi' \circ \varphi)
(\delta_i) D_i = \sum_{i=1}^k \Delta_i' D_i.
$$
and
$$
M ( L_{\chi +\chi'}) \equiv \sum_{i=1}^k ((\chi + \chi ')\circ \varphi)
(\delta_i) D_i \equiv \sum_{i=1}^k ((\lambda \Delta_i +
\lambda ' \Delta_i ') \ mod (M) )\cdot D_i \ .
$$
Since moreover $ 0 < \lambda \Delta_i + \lambda ' \Delta_i ' < 2M$,
we may write (identifying the divisor $L_{\chi}$ with the divisor
$(\sum_{i=1}^k \frac{\Delta_i}{d} D_i ) \in \oplus_{i=1}^k {\bf Q} D_i$)
$$
L_{\chi} + L_{\chi'} - L_{\chi + \chi '} = \sum_{i=1}^k
\epsilon _{D_i}^{\chi, \chi'} D_i,
$$
where $\epsilon _{D_i}^{\chi, \chi'} = 1$ if $\lambda \Delta_i +
\lambda ' \Delta_i ' \geq M$ and
$\epsilon _{D_i}^{\chi, \chi'} = 0$ otherwise.
\end{rem}
The above equality is equivalent (as shown in \cite{Pardini1}) to the existence of the
multiplication maps
$$
\mu_{\chi, \chi '} : \mathcal{L}_{\chi}^{-1} \otimes \mathcal{L}_{\chi'}^{-1}
\rightarrow \mathcal{L}_{\chi + \chi '}^{-1}
$$
which correspond to global sections of $\mathcal{L}_{\chi} \otimes
\mathcal{L}_{\chi'}
\otimes \mathcal{L}_{\chi + \chi '}^{-1}$ whose divisor
is exactly equal to $\sum_{i=1}^k
\epsilon _{D_i}^{\chi, \chi'} D_i$.
Let in fact $\sigma_{i}
\in \Gamma (Y, \mathcal{O} (D_{i}))$ be a section with $ div (
\sigma_{i} ) = D_{i}$: then
$\Pi_{i} \sigma_{i}^{\epsilon_{\chi, \chi
'}^{i}} $ is a global section of $\mathcal{L}_{\chi} \otimes
\mathcal{L}_{\chi'}
\otimes \mathcal{L}_{\chi + \chi '}^{-1}$ yielding the multiplication maps.
These sections define equations for
the natural embedding
$$
i : X \hookrightarrow W : = \bigoplus _{\chi \in G^* - \{1\}}
\mathbb{V} (\mathcal{L}_{\chi}^{-1}).
$$
Let in fact $w_{\chi}$ be a fibre coordinate of $\mathbb{V}
(\mathcal{L}_{\chi}^{-1})$ :
then $i(X)$ is defined by the equations
$$
w_{\chi} w_{\chi '} = \Pi_{\nu} \sigma_{\nu}^{\epsilon_{\chi, \chi
'}^{\nu}} w_{\chi+ \chi '}.
$$
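For instance (a classical special case, sketched here as an illustration), if $G = \mathbb{Z} / d \mathbb{Z}$ is cyclic and $\chi$ generates $G^*$, with $d L_{\chi} \equiv \sum_{i=1}^k \Delta_i D_i$ as above, iterating these relations $d$ times yields the familiar equation of a cyclic covering,
$$
w_{\chi}^{d} = \Pi_{i=1}^k \sigma_{i}^{\Delta_i},
$$
since the exponents $\epsilon_{D_i}^{\chi, m \chi}$ accumulated along the way add up to exactly $\Delta_i$.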
We infer the following
\begin{cor}
If $Y$ and the $D_i$'s are defined over a field $K$, then also $X$ is
defined over $K$.
\end{cor}
\section{The construction}
We consider in $\mathbb{P}^2 = \mathbb{P}^2_{\mathbb{C}}$ a complete
quadrangle, i.e., the union of the six lines joining pairs of four points
$P_0, \cdots , P_3$
in general position.
Let $\pi : Y := \hat{\mathbb{P}}^2 (P_0, \cdots , P_3) \rightarrow
\mathbb{P}^2$
be the Del Pezzo surface which is the blow up of the plane in
the points $P_0, \cdots , P_3$.
Denote by
$E_0,
\cdots , E_3$ the exceptional curves. Moreover, for $j = 1, 2, 3$,
let $L_j' := H - E_0 - E_j$, where $H$ is
the total transform in $Y$ of a line on
$\mathbb{P}^2$, and let $L_j := H - \sum_{i=1}^3 E_i + E_j$. I.e.,
$L_j'$ is the
strict transform of the line in $\mathbb{P}^2$ through $P_0$ and
$P_j$, whereas $L_j$ is the strict transform
of the line in $\mathbb{P}^2$ through $P_i$ and $P_k$, where
$\{i,j,k\} = \{1,2,3\}$.
The divisor $L_1 + L_2 + L_3 + L'_1 + L'_2 + L'_3 + E_0 +
E_1 + E_2 + E_3$ on $Y$ has simple normal crossings and we shall
denote it by $D$.
\begin{rem}\label{first homology}
It is well known that $H^2(Y, \mathbb{Z})$ is freely generated by
$H$, $E_0, \cdots , E_3$. Since
\begin{equation}
H_1(Y-D, \mathbb{Z}) \cong coker(r : H^{2n-2}(Y) \rightarrow H^{2n-2}(D) \cong
\bigoplus_{i=1}^k [D_i] \mathbb{Z}),
\end{equation}
where $r$ is given by the intersection matrix
\bigskip
\begin{tabular}{|l|c c c c c|}
\hline
& $H$ & $E_0$ & $E_1$ & $E_2$ & $E_3$ \\
\hline
$L_1'$ & 1 & 1 & 1 & 0 & 0\\
$L_2'$ & 1 & 1 & 0 & 1 & 0\\
$L_3'$ & 1 & 1 & 0 & 0 & 1\\
$L_1$ & 1 & 0 & 0 & 1 & 1\\
$L_2$ & 1 & 0 & 1 & 0 & 1\\
$L_3$ & 1 & 0 & 1 & 1 & 0\\
$E_0$ & 0 & -1 & 0 & 0 & 0\\
$E_1$ & 0 & 0 & -1 & 0 & 0\\
$E_2$ & 0 & 0 & 0 & -1 & 0\\
$E_3$ & 0 & 0 & 0 & 0 & -1\\
\hline
\end{tabular}
,
\bigskip
we obtain
$$
H_1(Y-D, \mathbb{Z}) \cong (\bigoplus_{i=0}^3 \mathbb{Z} e_i \oplus
\bigoplus_{i=1}^3
\mathbb{Z} l_i \oplus \bigoplus_{i=1}^3 \mathbb{Z} l_i') / H^2(Y, \mathbb{Z}),
$$
where $e_j$ (resp. $l_i$, $l_i'$) is a (small) simple loop oriented
counterclockwise
around $E_j$ (resp. $L_i$, $L_i'$).
I.e., $H_1(Y-D, \mathbb{Z})$ has generators $e_0, \ldots , e_3, l_1,
l_2, l_3, l_1',
l_2', l_3'$ and the relations are
$e_0 = l_1' + l_2' + l_3'$, $e_i = l_i' + l_j + l_k$ (for $\{i,j,k\} = \{1,2,3\}$), $\sum l_i' +
\sum l_i =0$.\\
In particular, these are five independent relations among ten generators, so $H_1(Y-D, \mathbb{Z})$ is free of rank 5.
\end{rem}
We want to construct a smooth Galois cover $p : S \rightarrow Y$ with
group $(\mathbb{Z}
/ 5 \mathbb{Z})^2$ branched exactly in $D$.
Such a Galois cover is determined by a surjective homomorphism
$\varphi : \mathbb{Z}^5 \cong H_1(Y-D, \mathbb{Z} )
\rightarrow (\mathbb{Z}
/ 5 \mathbb{Z})^2$ with certain conditions ensuring that $S$ is
smooth and that the covering branches
exactly in $D$.
By a slight abuse of notation we denote from now on by $\epsilon_h$,
$l_i'$, and $l_j$ the images in $(\mathbb{Z} / 5 \mathbb{Z})^2$ of
the above generators $e_h$, $l_i'$, $l_j$ of $H_1(Y-D, \mathbb{Z})$ under the homomorphism
$\varphi$.
\begin{rem}
It obviously follows from remark (\ref{first homology}) that each
$\epsilon_h$ is determined by
the $l_i'$, $l_j$ and that $\sum_i l_i' +\sum_j l_j = 0$.
\end{rem}
We write
$$
l_i' :=
\begin{pmatrix}
x_i \\
y_i
\end{pmatrix}
=: u_i; ~~~~~~~~l_j :=
\begin{pmatrix}
z_j \\
w_j
\end{pmatrix}
=: v_j,
$$
where $x_i$, $y_i$, $z_j$, $w_j \in \{0, \ldots , 4 \} \cong
\mathbb{Z} / 5 \mathbb{Z}$.
In order to calculate the invariants (i.e., $p_g$, $K_S^2$, $q$) of
the Galois covering
given by the homomorphism $\varphi$, we have to calculate for each
character $\chi \in ((\mathbb{Z} /5
\mathbb{Z})^2)^*$ the eigensheaf $\mathcal{L}_{\chi}$.
Before doing this let us work out first the two sets of conditions
ensuring that our covering is
1) branched exactly in $D = L_1 + L_2 + L_3 + L'_1 + L'_2 + L'_3 + E_0 +
E_1 + E_2 + E_3$;
and that
2) $S$ is smooth.
\begin{lemma}\label{nec}
1) If $u_i$, $v_j$, $\sum u_i$, $\epsilon_i = u_i + v_j +v_k$ (where $\{i,j,k\} = \{1,2,3\}$) are
different from zero
in $(\mathbb{Z} /5 \mathbb{Z})^2$, then the covering $p : S
\rightarrow Y$ is branched exactly in $L_1$,
$L_2$, $L_3$, $L_1'$, $L_2'$, $L_3'$, $E_0, \ldots , E_3$.
2) If the following pairs of vectors in $(\mathbb{Z} /5 \mathbb{Z})^2$
$(u_i,v_i)$ for $i \in \{1,2,3\}$, $(u_1,u_1+u_2+u_3)$, $(u_2,u_1+u_2+u_3)$,
$(u_3,u_1+u_2+u_3)$, $(u_1, u_1+v_2+v_3)$, $(u_2, u_2+v_1+v_3)$,
$(u_3, u_3+v_1+v_2)$, $(u_1+ v_2+v_3,v_i)$
for $i=2,3$, $(u_2+ v_1+v_3,v_i)$ for $i=1,3$, $(u_3+ v_1+v_2,v_i)$ for $i=1,2$
are linearly independent, then $S$ is smooth.
\end{lemma}
{\it Proof. } 1) is obvious.
2) follows from the fact that $S$ given by the homomorphism $\varphi$ is smooth
if and only if the following condition holds: let $D_1$, $D_2$ be two
non-trivial irreducible subdivisors of
the branch divisor of $p : S \rightarrow Y$ which intersect, and let $d_i$ be a small
loop around $D_i$; then $\varphi(d_1)$ and
$\varphi(d_2)$ are not in the same cyclic subgroup of $(\mathbb{Z} /5
\mathbb{Z})^2$. \hspace*{\fill}$Q.E.D.$
\begin{rem}
Let $p : S \rightarrow Y$ be a $(\mathbb{Z} /5 \mathbb{Z})^2$ - Galois
cover with $u_i$ and $v_j$ satisfying the two conditions of the above
lemma. Then $S$ is a smooth minimal surface with $K_S^2 = 45$ and
$\chi = 5$. We are interested in finding such surfaces with $q=0$,
because then they will have geometric genus equal to $4$.
\end{rem}
Given a character $\chi = (a,b) \in (\mathbb{Z} /5 \mathbb{Z})^2$,
let us determine $\mathcal{L}_{\chi} = \mathcal{L}_{(a,b)}$.
By the results of section $1$, we get
\begin{prop}\label{character sheaves}
$$
5 \mathcal{L}_{\chi} \equiv \sum_{i=1} ^3 \chi (l_i) L_i + \sum_{i=1} ^3
\chi (l_i') L_i' + \sum_{i=0} ^3 \chi (e_i) E_i,
$$
i.e.,
$$
5 \mathcal{L}_{(a,b)} \equiv \sum_{i=1} ^3 [az_i + bw_i] L_i + \sum_{i=1} ^3
[ax_i + by_i] L_i' + [a(x_1+x_2+x_3) + b(y_1 + y_2 + y_3)] E_0
$$
$$
+ \sum_{i=1}^3 [a(x_i + z_j + z_k) + b(y_i + w_j + w_k)] E_i,
$$
where in each summand $\{i,j,k\} = \{1,2,3\}$.
Here $[z] \in \{0, \ldots , 4\}$ denotes the representative of the residue class of $z$ modulo $5$.
\end{prop}
\section{The symmetries of the construction}
\begin{definition}
A six-tuple $\mathfrak{U} := (u_1,u_2,u_3,v_1,v_2,v_3) \in
((\mathbb{Z} /5 \mathbb{Z})^2 - \{0\})^6$
is said to be {\em admissible} if and only if
0) $ u_1 + u_2 + u_3 + v_1 + v_2 + v_3 = 0$
and moreover the two conditions of Lemma \ref{nec} are verified:
1) $u_i$, $v_j$, $\sum u_i$, $\epsilon_i = u_i + v_j +v_k$ are
different from zero in
$(\mathbb{Z} /5 \mathbb{Z})^2$;
2) the following pairs of vectors in $(\mathbb{Z} /5 \mathbb{Z})^2$
$(u_i,v_i)$ for $i \in \{1,2,3\}$, $(u_1,u_1+u_2+u_3)$, $(u_2,u_1+u_2+u_3)$,
$(u_3,u_1+u_2+u_3)$, $(u_1, u_1+v_2+v_3)$, $(u_2, u_2+v_1+v_3)$,
$(u_3, u_3+v_1+v_2)$, $(u_1+ v_2+v_3,v_i)$
for $i=2,3$, $(u_2+ v_1+v_3,v_i)$ for $i=1,3$, $(u_3+ v_1+v_2,v_i)$ for $i=1,2$,
are linearly independent.
\end{definition}
\begin{rem}
We have seen in the previous section that an admissible six-tuple
$\mathfrak{U}$
induces a smooth Galois cover $p : S \rightarrow Y$ with Galois group
$(\mathbb{Z} /5 \mathbb{Z})^2$.
Moreover, $S$ is a minimal surface of general type with $K_S^2 = 45$
and $\chi = 5$. We recall that $S$ is a
ball quotient, hence rigid.
\end{rem}
Using MAGMA one sees that there are exactly $201600$ admissible six-tuples.
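The enumeration is elementary enough to sketch. The following brute-force script (an illustrative Python sketch, not the MAGMA code actually used; all names are ours) tests condition 0), condition 1) and the fifteen determinant conditions of 2) directly:
\begin{verbatim}
from itertools import product

p = 5
# the 24 nonzero vectors of (Z/5Z)^2
vecs = [(a, b) for a in range(p) for b in range(p) if (a, b) != (0, 0)]

def add(*vs):
    # sum of vectors in (Z/5Z)^2
    return (sum(v[0] for v in vs) % p, sum(v[1] for v in vs) % p)

def indep(v, w):
    # linear independence over Z/5Z <=> 2x2 determinant nonzero mod 5
    return (v[0] * w[1] - v[1] * w[0]) % p != 0

count = 0
for u1, u2, u3, v1, v2 in product(vecs, repeat=5):
    # condition 0): the six entries sum to zero, so v3 is determined
    v3 = tuple((-c) % p for c in add(u1, u2, u3, v1, v2))
    if v3 == (0, 0):
        continue
    u, v = (u1, u2, u3), (v1, v2, v3)
    s = add(u1, u2, u3)                      # image of e_0
    eps = [add(u[i], v[j], v[k])             # epsilon_1, epsilon_2, epsilon_3
           for (i, j, k) in ((0, 1, 2), (1, 0, 2), (2, 0, 1))]
    # condition 1): the listed elements are nonzero
    if s == (0, 0) or any(e == (0, 0) for e in eps):
        continue
    # condition 2): the fifteen pairs are linearly independent
    pairs = [(u[i], v[i]) for i in range(3)]
    pairs += [(u[i], s) for i in range(3)]
    pairs += [(u[i], eps[i]) for i in range(3)]
    pairs += [(eps[0], v[1]), (eps[0], v[2]),
              (eps[1], v[0]), (eps[1], v[2]),
              (eps[2], v[0]), (eps[2], v[1])]
    if all(indep(a, b) for a, b in pairs):
        count += 1

print(count)  # the text reports 201600 admissible six-tuples
\end{verbatim}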
But of course many of them will lead to isomorphic surfaces. In
order to understand how many non-isomorphic
surfaces (with $p_g =4$) we obtain by this construction, we have to
understand the symmetries.
Two admissible six-tuples $\mathfrak{U}$, $\mathfrak{U}'$ obviously
give isomorphic surfaces
if there is an automorphism $\phi \in Gl(2, \mathbb{Z}/ 5
\mathbb{Z})$ such that $\phi (\mathfrak{U}) =
\mathfrak{U}'$. On the other hand the group of biholomorphic
automorphisms of $\mathbb{P}^2 \setminus
\{L_1,L_2,L_3,L_1',L_2',L_3'\}$ equals $\mathfrak{S}_5$ (cf.
\cite{Terada}). The action of $\mathfrak{S}_5$ on
the set of admissible six-tuples is generated by the following transformations:
$$
(01) : (u_1,u_2,u_3,v_1,v_2,v_3) \rightarrow
(u_1,u_3+v_1+v_2,u_2+v_1+v_3,u_1+u_2+u_3,v_2,v_3);
$$
$$
(02) : (u_1,u_2,u_3,v_1,v_2,v_3) \rightarrow
(u_3+v_1+v_2,u_2,u_1+v_2+v_3,v_1,u_1+u_2+u_3,v_3);
$$
$$
(03) : (u_1,u_2,u_3,v_1,v_2,v_3) \rightarrow
(u_2+v_1+v_3,u_3+v_1+v_2,u_3,v_1,v_2,u_1+u_2+u_3);
$$
$$
(04) : (u_1,u_2,u_3,v_1,v_2,v_3) \rightarrow
(u_1,u_2,u_3,u_1+v_2+v_3,u_2+v_1+v_3,u_3+v_1+v_2).
$$
It is easy to see that these four transpositions generate the action
of a group isomorphic to $\mathfrak{S}_5$.
We consider now the group $\mathcal{G}$ acting on the set of
admissible six-tuples
$\mathcal{S}$, which is generated by $\mathfrak{S}_5$ and $Gl(2,
\mathbb{Z}/ 5 \mathbb{Z})$. Then
$\mathcal{G}$ is a quotient of $Gl(2, \mathbb{Z}/ 5 \mathbb{Z})
\times \mathfrak{S}_5$ (the actions commute,
being given by multiplication on the right, respectively on the left).
A MAGMA computation
shows that $\mathcal{G}$ has four orbits on $\mathcal{S}$.
Representatives for these orbits are
$$
\mathfrak{U}_1 = (
\begin{pmatrix}
1 \\
0
\end{pmatrix}, \begin{pmatrix}
1 \\
0
\end{pmatrix}, \begin{pmatrix}
0 \\
1
\end{pmatrix}, \begin{pmatrix}
2 \\
1
\end{pmatrix}, \begin{pmatrix}
2 \\
1
\end{pmatrix}, \begin{pmatrix}
4 \\
2
\end{pmatrix});
$$
$$
\mathfrak{U}_2 = (
\begin{pmatrix}
1 \\
0
\end{pmatrix}, \begin{pmatrix}
1 \\
0
\end{pmatrix}, \begin{pmatrix}
0 \\
1
\end{pmatrix}, \begin{pmatrix}
2 \\
1
\end{pmatrix}, \begin{pmatrix}
4 \\
2
\end{pmatrix}, \begin{pmatrix}
2 \\
1
\end{pmatrix});
$$
$$
\mathfrak{U}_3 = (
\begin{pmatrix}
1 \\
0
\end{pmatrix}, \begin{pmatrix}
1 \\
0
\end{pmatrix}, \begin{pmatrix}
0 \\
1
\end{pmatrix}, \begin{pmatrix}
4 \\
1
\end{pmatrix}, \begin{pmatrix}
3 \\
2
\end{pmatrix}, \begin{pmatrix}
1 \\
1
\end{pmatrix});
$$
$$
\mathfrak{U}_4 = (
\begin{pmatrix}
1 \\
0
\end{pmatrix}, \begin{pmatrix}
1 \\
0
\end{pmatrix}, \begin{pmatrix}
0 \\
1
\end{pmatrix}, \begin{pmatrix}
1 \\
1
\end{pmatrix}, \begin{pmatrix}
0 \\
3
\end{pmatrix}, \begin{pmatrix}
2 \\
0
\end{pmatrix}).
$$
The orbit of $\mathfrak{U}_1$ has length $28800$, whereas the orbits
of $\mathfrak{U}_2$,
$\mathfrak{U}_3$, $\mathfrak{U}_4$ each have length $57600$.
In particular we see that
$\mathcal{G} \cong
Gl(2, \mathbb{Z}/ 5 \mathbb{Z}) \times \mathfrak{S}_5$.
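This can also be seen by counting: since $|Gl(2, \mathbb{Z}/ 5 \mathbb{Z})| = (5^2-1)(5^2-5) = 480$, we have
$$
|Gl(2, \mathbb{Z}/ 5 \mathbb{Z}) \times \mathfrak{S}_5| = 480 \cdot 120 = 57600,
$$
which equals the length of the orbit of $\mathfrak{U}_2$; hence the surjection $Gl(2, \mathbb{Z}/ 5 \mathbb{Z}) \times \mathfrak{S}_5 \rightarrow \mathcal{G}$ must be injective.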
\medskip
We have moreover:
\begin{theo}\label{four}
Let $S_i$ be the minimal smooth surface of general type with $K^2
=45$ and $\chi =5$ obtained
from the covering induced by the admissible six-tuple
$\mathfrak{U}_i$, where $i \in \{1,2,3,4\}$. Then we
have that $S_3$ is regular (i.e., $q(S_3) = 0$), whereas $q(S_i) = 2$
for $i \neq 3$.
\end{theo}
In particular, $S_3$ is the unique minimal surface with $K_S^2 = 45$
and $p_g = 4$ obtained as a $(\mathbb{Z}/ 5 \mathbb{Z})^2$ - cover of
$\mathbb{P}^2$ branched exactly in a complete quadrangle of the
complex projective plane. \\
{\it Proof. }
We will calculate the geometric genus of $S = S_3$, using the formula
$$
H^0(S,\mathcal{O}_S (K_S)) = \bigoplus_{(a,b) \in G^*} H^0(Y,
\mathcal{O}_Y(K_Y) \otimes \mathcal{L}_{(a,b)}).
$$
Applying proposition (\ref{character sheaves}) we obtain the
following table for
the character sheaves $\mathcal{L}_{(a,b)}$: \\
{\Small
\begin{tabular}{|l|c|c|c|c|c|}
\hline
$L_{(a,b)}$ & $a = 0$ & $a = 1$ & $a = 2$ & $a = 3$ & $a = 4$ \\
\hline
$b = 0$ & $\mathcal{O}_Y$ & $2H - E_1 - E_2 $ & $2H - E_1 - E_2$ &
$3H - E_0 - 2E_1$ & $3H - E_0 -2E_1$\\
& & $ - E_3$ & & $- E_2 - E_3$ & $- E_2$ \\
\hline
$b=1$ & $H$ & $H$ & $3H - E_0 -E_1$ & $3H -E_0 -E_1$ & $3H -E_0 -E_1$ \\
& & & $-E_2 - E_3$ & $-2E_2 - E_3$ & $-E_2 - E_3$ \\
\hline
$b=2$ & $2H - E_1 - E_3$ & $2H - E_1 - E_2$ & $2H - E_0 - E_1$ & $3H
-E_0 -E_1$ & $3H - 2E_0 -E_1$ \\
& & $-E_3$ & $-E_2$ & $-E_2 - E_3$ & $-E_2 - E_3$ \\
\hline
$b=3$ & $2H - E_2 - E_3$ & $3H -E_0 -E_1$ & $2H - E_0 - E_3$ & $2H -
E_0$ & $4H - 2E_0 - E_1$ \\
& & $-E_2 - E_3$ & & & $- 2E_2 - 2E_3$ \\
\hline
$b=4$ & $3H - E_1 - E_2$ & $2H - E_0 -E_3$ & $3H -E_0 -E_1$ & $3H
-2E_0 -E_1$ & $3H -2E_0 -E_1$ \\
& $-2E_3$ & & $-E_2 - 2E_3$ & $-E_2 - E_3$ & $-E_2$ \\
\hline
\end{tabular}
}
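As a consistency check, let us verify one entry of the table directly from proposition (\ref{character sheaves}): for $\mathfrak{U}_3$ we have $(x_i) = (1,1,0)$, $(y_i) = (0,0,1)$, $(z_i) = (4,3,1)$, $(w_i) = (1,2,1)$, so for $(a,b) = (2,1)$ the coefficients are $(2,2,1)$ on the $L_i'$, $(4,3,3)$ on the $L_i$ and $(0,3,4,3)$ on $E_0, \ldots , E_3$, whence
$$
5 L_{(2,1)} \equiv 2L_1' + 2L_2' + L_3' + 4L_1 + 3L_2 + 3L_3 + 3E_1 + 4E_2 + 3E_3 \equiv 15H - 5E_0 - 5E_1 - 5E_2 - 5E_3,
$$
i.e., $L_{(2,1)} = 3H - E_0 - E_1 - E_2 - E_3$, in accordance with the table; since $K_Y \equiv -3H + E_0 + E_1 + E_2 + E_3$, this eigensheaf contributes exactly one of the four sections of $K_S$.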
We see immediately that $H^0(Y, \mathcal{O}_Y(K_Y) \otimes
\mathcal{L}_{(a,b)}) = 0$ for all $(a,b) \notin \{(2,1),(3,2),(1,3),
(4,1) \}$ and $H^0(Y, \mathcal{O}_Y(K_Y) \otimes \mathcal{L}_{(a,b)})
\cong \mathbb{C}$ for $(a,b) \in \{(2,1),(3,2),(1,3), (4,1) \}$, i.e.,
$p_g(S_3) = 4$. This proves the claim for $S_3$.
The geometric genus of the remaining surfaces is calculated in
exactly the same way. \hspace*{\fill}$Q.E.D.$
\section{The canonical map}
In the previous section we have constructed a minimal surface $S$ of general
type with $K_S^2 = 45$, $p_g = 4$ and $q(S) =0$. We want
now to understand the behaviour of the
canonical map of $S$.
For $(a,b) \in (\mathbb{Z} / 5)^2$ we write
$$
\delta_i(a,b) := [ax_i + by_i]
$$
$$
\lambda_j(a,b) := [az_j + bw_j]
$$
$$
\mu_0(a,b) := [a(x_1+x_2+x_3) + b(y_1 + y_2 + y_3)],
$$
$$
\mu_h(a,b) := [a(x_h+z_j+z_k) + b(y_h + w_j + w_k)].
$$
Then we know that
$$
5 \mathcal{L}_{(a,b)} = \sum_{i=1}^3 \delta_i (a,b)L_i' +
\sum_{j=1}^3 \lambda_j (a,b) L_j
+ \sum_{h=0}^3 \mu_h (a,b) E_h.
$$
Denote by $R_1, \ldots , R_{10}$ the ramification divisors of $p : S
\rightarrow Y$
lying over $L_1'$, $L_2'$, $L_3'$, $L_1$, $L_2$, $L_3$, $E_0, \ldots
, E_3$: it is easy to see that they are all irreducible genus 2 curves. Moreover,
let $x_i$ be a local
equation of $R_i$. We already saw that $H^0(S, \mathcal{O}_S (K_S))$
is the direct sum of 4 one dimensional eigenspaces $H^0(S,
\mathcal{O}_S (K_S))_{(a,b)} \cong H^0(Y,
\mathcal{O}_Y(K_Y) \otimes \mathcal{L}_{(a,b)}) \cong H^0 ( \mathbb{P}^2,
\ensuremath{\mathcal{O}}_{\mathbb{P}^2})$.
Then a basis of $H^0(S, \mathcal{O}_S (K_S))$ is given by
$$
\{x_1^{4 - \delta_1 (a,b)}\cdot x_2^{4 - \delta_2 (a,b)} \cdot
x_3^{4 - \delta_3 (a,b)} \cdot
x_4^{4 - \lambda_1 (a,b)} \cdot x_5^{4 - \lambda_2 (a,b)}\cdot x_6^{4
- \lambda_3 (a,b)}\cdot
$$
$$
\cdot x_7^{4 - \mu_0 (a,b)} \cdot \cdots \cdot x_{10}^{4 - \mu_3 (a,b)} \, | \, H^0(Y,
\mathcal{O}_Y(K_Y) \otimes \mathcal{L}_{(a,b)}) \neq 0 \}.
$$
It is easy to compute the table giving the numbers $\delta_i$,
$\lambda_j$ and $\mu_h$ for $(a,b) \in \{(2,1),(3,2),(1,3), (4,1) \}$: \\
\begin{tabular}{|l|c|c|c|c|c|c|c|c|c|c|}
\hline
$(a,b)$ & $\delta_1$ & $\delta_2$ & $\delta_3$ & $\lambda_1$ &
$\lambda_2$ & $\lambda_3$ & $\mu_0$ & $\mu_1$ & $\mu_2$ & $\mu_3$ \\
\hline
$(1,3)$ & 1 & 1 & 3 & 2 & 4 & 4 & 0 & 4 & 2 & 4 \\
\hline
$(2,1)$ & 2 & 2 & 1 & 4 & 3 & 3 & 0 & 3 & 4 & 3 \\
\hline
$(3,2)$ & 3 & 3 & 2 & 4 & 3 & 0 & 3 & 1 & 2 & 4 \\
\hline
$(4,1)$ & 4 & 4 & 1 & 2 & 4 & 0 & 4 & 3 & 1 & 2 \\
\hline
\end{tabular}
\medskip
Therefore we have the following result
\begin{lemma}\label{basis}
A basis for $H^0(S, \mathcal{O}_S(K_S))$ is given by
$$
\{x_1^3 x_2^3 x_3 x_4^2 x_7^4 x_9^2, x_1^2 x_2^2 x_3^3 x_5 x_6 x_7^4
x_8 x_{10},
x_1 x_2 x_3^2 x_5 x_6^4 x_7 x_8^3 x_9^2, x_3^3 x_4^2 x_6^4 x_8 x_9^3 x_{10}^2\}.
$$
\end{lemma}
We can now prove the following
\begin{theo}
1) The canonical map $\phi_K$ of $S$ has $R_3$ as fixed part and
its movable part has five base points.
We have $R_3^2 = -1$ and $K_S.R_3 = 3$. The base points of $K_S -
R_3$ are $x_1 \cap x_4$ (of
type $(1,1))$, $x_1 \cap x_8$ (of type $(1,1,1)$), $x_2 \cap x_9$ (of
type $(2,1,1)$), $x_3 \cap x_7$ (of type
$(2,1,1)$), $x_6 \cap x_9$ (of type $(1,1)$).
2) The canonical map is birational and its image in $\mathbb{P}^3$
has degree $19$.
\end{theo}
In order to keep the formulation of the above theorem as simple as
possible we adopted
the notation: a base point $p$ of $K-R_3$ on $S$ is {\em of type}
$(n_1, n_2, \ldots , n_k)$ iff $p$ is an
$n_1$ - tuple base point of $K-R_3$, after one blow up the strict
transform of $K - R_3$ has a $n_2$ - tuple base
point and so on.
{\it Proof. }
1) It is immediate from the description of the basis of $H^0(S,
\mathcal{O}_S(K_S))$
given in lemma (\ref{basis})
that $R_3$ is the only fixed part of $|K_S|$ and that $|K-R_3|$ does
not have a fixed part. It is easy to see
that the base points of $|K-R_3|$ are exactly $x_1 \cap x_4$, $x_1
\cap x_8$, $x_2 \cap x_9$, $x_3 \cap x_7$,
$x_6 \cap x_9$. Next we will see which kind of base points we have
and whether there are still infinitely near
base points.
1) $x_1 \cap x_4$: locally around this point $|K-R_3|$ is given by
$x_1^3x_4^2, x_1^2, x_1, x_4^2$.
The ideal generated is the ideal $(x_1 , x_4^2)$, thus $K-R_3$ has a
base point of
type $(1,1)$ in $x_1 \cap x_4$.
2) $x_1 \cap x_8$: locally around this point $|K-R_3|$ is given by
$x_1^3, x_1^2 x_8, x_1 x_8^3, x_8$.
The ideal generated is the ideal $(x_1^3 , x_8)$, thus
$K-R_3$ has a base point of type $(1,1,1)$ in $x_1 \cap x_8$.
Similarly in the three remaining cases we see that: \\
3) $|K-R_3|$ has a base point of type $(2,1,1)$ in $x_2 \cap x_9$
(ideal $(x_2^2 , x_2 x_9^2)$).
4) $|K-R_3|$ has a base point of type $(2,1,1)$ in $x_3 \cap
x_7$ (ideal $(x_3^2 , x_7^4, x_3 x_7)$).
5) $|K-R_3|$ has a base point of type $(1,1)$ in $x_6 \cap x_9$
(ideal $(x_6 , x_9^2)$).
Therefore we get $\deg \phi_K \cdot \deg \phi_K(S) = (K-R_3)^2 - 2 \cdot 4 - 11 = 19$.
Here we use that $R_3$ has self
intersection $-1$ and genus $2$. It follows immediately that $\deg \phi_K = 1$
and that the canonical image has
degree $19$. \hspace*{\fill}$Q.E.D.$
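For the reader's convenience, here is one way to organise the numbers in the above computation (our bookkeeping, equivalent to the count in the proof):
$$
(K_S-R_3)^2 = K_S^2 - 2 K_S.R_3 + R_3^2 = 45 - 6 - 1 = 38,
$$
and the base points of types $(1,1)$, $(1,1,1)$, $(2,1,1)$, $(2,1,1)$, $(1,1)$ absorb $2+3+6+6+2 = 19$, whence $\deg \phi_K \cdot \deg \phi_K(S) = 38 - 19 = 19$.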
The following
\begin{cor}
There exist surfaces of general type $S$ with birational canonical
map such that
the canonical system of each deformation of $S$ has base points.
\end{cor}
is an answer to a question by Junho Lee.
\medskip
{\bf Acknowledgement.} We are grateful to Fritz Grunewald for his
invaluable help
in the explicit calculations leading to Theorem \ref{four}.
This research was performed in the realm of the DFG Schwerpunkt
"Globale methoden in der komplexen Geometrie".
\section{Introduction}
\label{sec-intro}
The powerful star formation and nuclear activity that led to the buildup of massive galaxies through cosmic time have been the subject of many studies. Most of these have focused on the cosmic time period elapsed between redshifts $z\sim 1.5$ and 3, when the cosmic star formation rate density had an overall peak \citep{hop06,beh10}, and the massive galaxy number density had a fast increase \citep[e.g.,][]{fon04,cap06a,sar06,kaj10}. At that time, stellar and nuclear activity were mostly obscured by dust, resulting in a high incidence of ultra-luminous infrared galaxies (ULIRGs). Indeed, a substantial fraction of the most massive galaxies were ULIRGs at $z\sim1.5-3$ \citep{dad05,cap06b}.
The study of powerful star formation activity over the first few billion years of cosmic time ($z>3$) has proven to be more challenging, due to the galaxies' fainter fluxes and the gradual decline of the cosmic star formation activity at high $z$. A notable exception to this challenge is offered by the study of bright sub-/millimetre selected galaxies, whose redshift distribution has a significant tail at $z>3$ \citep[e.g.,][]{war11,mic12}. However, the sensitivity limits of current sub-/millimetre surveys only allow for the study of the most extreme examples of early dust-obscured star formation, while a plausible population of more typical star-forming ULIRGs at $z>3$ is still to be found.
An alternative approach for finding massive, dust-obscured starbursts at high $z$ consists of selecting bright mid-IR galaxies that are characterised by significantly red colours in their spectral energy distributions (SEDs). These red colours are the result of a redshifted 4000~$\rm \AA$ break and/or significant dust extinction. For example, different works have shown that optically faint, mid-IR bright galaxies are mostly dusty starbursts lying at $z \raisebox{-0.13cm}{~\shortstack{$>$ \\[-0.07cm] $\sim$}}~ 2$, and some also host active galactic nuclei (AGN) \citep[e.g.,][]{yan04,hou05,dey08}. Restricting this selection to those sources in which the significant flux drop occurs at near-IR wavelengths (observed $\lambda \approx 1-2 \, \rm \mu m$) should produce a redshift distribution biased towards even higher redshifts.
Huang et al.~(2011) reported the existence of four galaxies selected with the {\em Spitzer Space Telescope} Infrared Array Camera \citep[IRAC;][]{faz04}, characterised by colours $H-[3.6]>4.5$ (AB). Their SED fitting suggests that these galaxies lie at $z\sim4-6$. Similarly, Wang et al.~(2012) analysed the SEDs of 76 IRAC galaxies with $K_s - [3.6]>1.6$ (AB), and found that about half of them are massive galaxies at $z\gsim3$.
Making use of data from the {\em Spitzer} Extended Deep Survey \citep[SEDS;][]{ash13} and the {\em Hubble Space Telescope (HST)} Cosmic Assembly Near-infrared Deep Extragalactic Legacy Survey \citep[CANDELS;][]{gro11,koe11}, Caputi et al.~(2012; C12 hereafter) independently searched for these kinds of red galaxies in an area that is part of the UKIRT Infrared Deep Sky Survey \citep{law07} Ultra Deep Survey (UDS) field. C12 analysed the SEDs of 25 IRAC galaxies characterised by colours $H-[4.5]>4$ (AB), and concluded that between $\sim 45$ and $85\%$ of them are massive galaxies at $z>3$.
Among the $z>3$ galaxies in C12, six have been detected by the Mid Infrared Photometer for {\em Spitzer} \citep[MIPS;][]{rie04} at $24 \, \rm \mu m$, which at $z>3$ traces rest-frame wavelengths $\lambda_{\rm rest} < 6 \, \rm \mu m$, and thus indicates the presence of hot dust. For the brightest sources, this is likely due to the presence of an AGN. Understanding whether these galaxies are simultaneously undergoing a major episode of star formation requires us to follow them up at sub-/millimetre wavelengths, at which the cold-dust continuum emission can directly be probed.
In this work we present PdBI 1.1~mm continuum observations towards the two brightest mid-IR galaxies in the $H-[4.5]>4$ sample analysed by C12. These interferometric observations have allowed us to achieve a spatial resolution of $\sim 1.8 \, \rm arcsec$ and sub-mJy sensitivities at millimetre wavelengths. Throughout this paper, all quoted magnitudes and colours are total and refer to the AB system \citep{oke83}. We adopt a cosmology with $\rm H_0=70 \,{\rm km \, s^{-1} Mpc^{-1}}$, $\rm \Omega_M=0.3$ and $\rm \Omega_\Lambda=0.7$. Stellar masses refer to a Salpeter (1955) initial mass function (IMF) over stellar masses $(0.1-100) \, \rm M_\odot$.
\begin{figure*}
\epsscale{1.1}
\plotone{stampcomp_pdbih45_27564.eps}
\caption{Postage stamps of target id \#27564. From left to right: {\em HST} CANDELS f160w; SEDS IRAC $4.5 \, \rm \mu m$; MIPS $24 \, \rm \mu m$; PdBI clean 1.1~mm map. The shown field is of $\sim20\times 20$~arcsec$^2$ in all cases. \label{fig_maps27564}}
\end{figure*}
\begin{deluxetable*}{lccccccc}
\tabletypesize{\scriptsize}
\tablecaption{Photometric properties of our two P\lowercase{d}BI targets. \label{table_targ}}
\tablehead{\colhead{ID} & \colhead{RA (J2000)\tablenotemark{a}} & \colhead{DEC(J2000)\tablenotemark{a}} & \colhead{F160W} & \colhead{[4.5]} & \colhead{$S_\nu(24 \, \rm \mu m)(\rm \mu Jy)$} & \colhead{$S_\nu(850 \, \rm \mu m)(mJy)$} & \colhead{$S_\nu(1.1 \rm mm)(mJy)$}
}
\startdata
\#27564 & 02:17:16.35 & -05:14:43.1 & $24.89\pm0.05$ & $20.39\pm0.10$ & $599 \pm 13$ & $< 2.8 $ & $0.78 \pm 0.18$ \\
\#26857a & 02:17:51.69 & -05:15:07.2 & $24.39\pm0.14$ & $20.26\pm0.10$ & $334\pm 12$ & $< 2.8 $ & $<1.06$ \\
\#26857b & 02:17:51.62 & -05:15:03.6 & --- & --- & --- &$4.6 \pm 1.4$\tablenotemark{b} & $1.64 \pm 0.53$
\enddata
\tablenotetext{a}{The RA and DEC values correspond to the IRAC coordinates, except for \#26857b, for which we quote the PdBI coordinates.}
\tablenotetext{b}{The SCUBA2 $850 \, \rm \mu m$ source centroid is $\sim 3 \pm 4$~arcsec apart from our PdBI source centroid.}
\end{deluxetable*}
\section{Target selection and IRAM P\lowercase{d}BI observations}
\label{sec-data}
Our targets correspond to the two brightest IRAC galaxies reported in C12. Their photometric properties are summarised in Table~\ref{table_targ}. In addition to being bright in all IRAC bands, these two sources are also bright at $24 \, \rm \mu m$, i.e., they have $S_\nu(24 \, \rm \mu m)=(599 \pm 13)$ and $(334\pm 12) \, \rm \mu Jy$ for \#27564 and \#26857, respectively. On the other hand, the more recently available SCUBA2 maps have revealed that there is a $3.3 \sigma$ detection with $S_\nu(850 \, \rm \mu m)=(4.6 \pm 1.4) \, \rm mJy$ within the field of our target \#26857. This field lies outside the area covered by SCUBA2 at $450 \, \rm \mu m$. The region around \#27564 has been covered both at $450$ and $850 \, \rm \mu m$, but no $> 2 \sigma$ detection is found within 8~arcsec of our target centroid \citep{gea13}.
The SED fitting analysis based on 17 broad bands ($U$ through $8.0 \, \rm \mu m$) performed by C12 indicates that these two galaxies are at redshifts $z>3$. As for most objects in the C12 sample, the SED fitting solutions are highly degenerate in redshift space, making it very difficult to obtain precise redshift estimates. However, for these two sources the probability is $P(z>3) \raisebox{-0.13cm}{~\shortstack{$>$ \\[-0.07cm] $\sim$}}~ 0.85$, so they can be considered quite secure high-$z$ candidates.
We followed up our two targets with the PdBI in the summer D and C configurations with six antennas, between 24 September and 28 November 2013. We used the WideX correlator tuned to a sky frequency of 265~GHz (corresponding to $\sim1.1 \, \rm mm$), with dual polarization, which produced data over a contiguous 3.6~GHz bandwidth. The weather conditions were reasonable, with precipitable water vapour ranging between 1.5 and 3.0~mm. The resulting beam size is $\sim 1.8$~arcsec, which is comparable to the IRAC $4.5 \, \rm \mu m$ resolution, and the positional accuracy is around 0.4~arcsec (the PdBI absolute positional accuracy is $< 0.3$~arcsec, but this accuracy is somewhat degraded for faint sources). The total times on-source were 11.9 and 2.6 hours for targets \#27564 and \#26857, which produced maps with $1\sigma$ depths of 0.18 and 0.53~mJy/beam, respectively. The relative integration times were decided based on the preliminary SCUBA2 source fluxes/positions available at the time of writing the PdBI proposal. We performed the data calibration and analysis using the CLIC and MAPPING tasks within the GILDAS software package\footnote{http://www.iram.fr/IRAMFR/GILDAS} \citep{gui00}. The bandpass, complex gain and flux densities have been calibrated with bright ($\raisebox{-0.13cm}{~\shortstack{$>$ \\[-0.07cm] $\sim$}}~ 0.5 \, \rm Jy$) standard sources. The main flux calibrator for our observations was MWC349, which produces a flux accuracy of $\sim 10\%$ at 1.1~mm.
\section{Results}
\label{sec-results}
\subsection{IRAM PdBI maps}
\label{sec-maps}
Figure~\ref{fig_maps27564} shows the clean, full-bandwidth PdBI 1.1~mm map centred at the position of target \#27564, and the corresponding CANDELS (f160w) $H$-band, SEDS/IRAC $4.5 \, \rm \mu m$, and MIPS $24 \, \rm \mu m$ maps over the same field. The PdBI map shows a robust $4.3 \sigma$ detection centred $0.4$~arcsec away from the IRAC source centroid, which we can unambiguously identify with our $H-[4.5]>4$ target.
Figure~\ref{fig_maps26857} shows the corresponding maps for target \#26857. On the PdBI 1.1~mm map a single source appears, with a marginal $3.1\sigma$ detection, located at a distance of 3.7~arcsec from our target centroid. Note that this PdBI source is actually twice as bright as source \#27564 at 1.1~mm, but its detection is less significant due to the considerably shorter integration times.
A $3.1\sigma$ detection on the PdBI map is below the threshold typically considered for robust detections at sub-/millimetre wavelengths \citep[i.e., $\sim3.7\sigma$; e.g.,][]{cop06,wei09}. However, the presence of a $3.3\sigma$ SCUBA2 source $\sim3 \pm 4$~arcsec away suggests that the PdBI detection could be the counterpart of the SCUBA2 source (as the positions are consistent within the SCUBA2 positional uncertainty; see fig.~\ref{fig_maps26857}). Note that the probability associated with a $3.1\sigma$ peak is approximately given by $1-\rm erf(3.1/\sqrt{2}) \approx 0.00195$. About 17 independent PdBI beams are contained within the SCUBA2 positional error circle, which implies that the random probability that a PdBI $3.1\sigma$ peak lies within that area is only $\sim 0.035$. Assuming a SCUBA2 positional uncertainty radius twice as large as that assumed here would still yield a small probability ($\sim 0.14$). Therefore, these simple statistical arguments suggest that our PdBI $3.1\sigma$ detection is very likely real. However, we note that these arguments are not totally conclusive, as the probability associated with the S/N ratio given by $1-\rm erf((S/N)/\sqrt{2})$ should only be considered as an approximation.
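As an illustration, the chance-alignment estimate above can be reproduced with a few lines of code (a sketch of the calculation just described, not part of the data analysis):
\begin{verbatim}
from math import erf, sqrt

p_peak = 1 - erf(3.1 / sqrt(2))       # probability of a >= 3.1 sigma Gaussian peak
n_beams = 17                          # independent beams in the SCUBA2 error circle
p_chance = 1 - (1 - p_peak)**n_beams  # chance of such a peak anywhere in the circle
print(p_peak, p_chance)               # ~0.00195 and ~0.03
\end{verbatim}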
In any case, the significant separation between the PdBI $3.1\sigma$ source and our target centroid means that it is very unlikely that the millimetre (and sub-millimetre) emission is produced by our $H-[4.5]>4$ IRAC source. We discuss the possibilities for this PdBI detection in detail in Section~\ref{sec-targ26}.
\begin{figure*}
\epsscale{1.1}
\plotone{stampcomp_pdbih45_26857.eps}
\caption{Postage stamps of target id \#26857. From left to right: {\em HST} CANDELS f160w; SEDS IRAC $4.5 \, \rm \mu m$; MIPS $24 \, \rm \mu m$; PdBI clean 1.1~mm map. The shown field is of $\sim20\times 20$~arcsec$^2$ in all cases. The X-like symbols on the left and middle panels mark the position of the single $>3 \sigma$ detection on the PdBI 1.1~mm map, which is 3.7~arcsec apart from our target (IRAC) centroid. The circle in each panel is centred at the peak position of the SCUBA2 $\sim3.3 \sigma$ detection, and the radius indicates the positional uncertainty. \label{fig_maps26857}}
\end{figure*}
\subsection{Analysis of the Target Multi-wavelength Properties}
\subsubsection{Target \#27564}
As the identification of target \#27564 with the PdBI detection is unambiguous, we can combine the multi-wavelength information to investigate the dust emission properties of this source. Figure~\ref{fig_sedi} shows the dust IR SED for this target. To account for the uncertainties in the redshift determination of this source, we analyse the two extreme values of the redshift interval with maximum probability, i.e. $z=3$ and $z=4.5$. Note that, although there is a non-negligible probability that this source is at higher redshift, we deem that unlikely, as it is detected in the optical $B$ band with $B=26.99 \pm 0.18$~mag (but is not detected in the UDS deep $U$-band images).
\begin{figure*}
\plotone{sedstellardustcomp_275.eps}
\caption{Dust IR SED of target id \#27564 at the minimum and maximum most likely redshifts: $z=3$ ({\em left}) and $z=4.5$ ({\em right}). In both panels, the circles correspond to the {\em Spitzer}, SCUBA2 and PdBI photometry. Upper limits correspond to $2 \sigma$ flux densities. These photometric data points are too few to attempt a multi-component dust modelling, so we show an arbitrary dusty AGN torus model, and a pure IR star-forming galaxy model, for an illustrative purpose. Independently of the models chosen, it is evident that an IR star-forming galaxy model alone cannot reproduce simultaneously all the observed IR photometry. The dusty star-forming galaxy model shown here has been taken from the library by Lagache et al.~(2004), while the dusty torus model belongs to the template library by H\"onig \& Kishimoto~(2010). \label{fig_sedi}}
\end{figure*}
With photometry measured in only three bands in the wavelength range $8 \, \rm \mu m - 1.1 \, mm$, and flux density upper limits in two other bands, we are unable to perform a full spectral modelling of our target's dust emission. However, from Fig.~\ref{fig_sedi} it is clear that a simple star-forming galaxy model cannot reproduce the sub-/millimetre and mid-IR flux densities altogether. An additional dusty torus component is necessary to reproduce the total IR SED. We have performed an independent, self-consistent SED fitting from the UV through 1.1~mm using the GRASIL code \citep{sil98}, and obtained a similar result: at any of the possible redshifts, no pure star-forming galaxy model can account for the significant excess at mid-IR wavelengths. Therefore, we conclude that our target \#27564 is a composite AGN/star-forming galaxy.
Visual inspection of the {\em HST} images for this source in different bands (Fig.~\ref{fig_hstzoom}) indicates that this galaxy has a more extended morphology towards longer wavelengths. This suggests the presence of an extended structure with dust-obscured star formation, in correspondence with the millimetre detection. Note that this is not simply an effect of the lack of sensitivity in the {\em HST} images at short wavelengths. Target \#27564 has magnitudes $H (\rm f160w)=24.89 \pm 0.05$, and $V (\rm f606w)=27.01 \pm 0.23$. The $2\sigma$ depth of the CANDELS/UDS f606w map is $\sim 28$~mag, so the source is well detected in this image, but considerably fainter than in the f160w map.
\begin{figure*}
\plotone{src27564_hstzoomin.eps}
\caption{Detailed {\em HST} view of target id \#27564 at different wavelengths. The field size in each stamp is of $\sim3\times 3$~arcsec$^2$. \label{fig_hstzoom}}
\end{figure*}
Based on our PdBI observed flux density at 1.1~mm, we can estimate the total infrared luminosity $L_{\rm IR}^{\rm SFR}$ produced by star formation in our target. The obtained value depends on the adopted IR galaxy template. Following Micha{\l}owski et al.~(2010b), we scaled different, typical IR galaxy templates to the observed 1.1~mm flux density, obtaining $L_{\rm IR}^{\rm SFR}\sim 0.6-1.7 \times 10^{12} \, \rm L_\odot$. For any given template, the derived luminosities $L_{\rm IR}^{\rm SFR}$ are similar at $z=3$ and $z=4.5$, given that the flux dimming at higher $z$ is compensated by the negative $k$ correction. Considering this range of $L_{\rm IR}^{\rm SFR}$ values, and using the $L_{\rm IR}^{\rm SFR} - SFR$ relation derived by Kennicutt~(1998), we estimate that the obscured SFR of our target is $SFR \approx 200 \pm 100 \, \rm M_\odot/yr$. Note that this SFR would have been largely overestimated if it had been computed starting from the $24 \, \rm \mu m$ flux density, which is dominated by the dusty torus emission.
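The conversion can be sketched numerically, using the Kennicutt~(1998) calibration for a Salpeter IMF, $SFR \approx 1.7\times10^{-10} \, (L_{\rm IR}^{\rm SFR}/L_\odot) \, \rm M_\odot/yr$ (the snippet below is purely illustrative):
\begin{verbatim}
for l_ir in (0.6e12, 1.7e12):  # L_IR range inferred from the 1.1 mm flux (L_sun)
    print(1.7e-10 * l_ir)      # ~100 and ~290 M_sun/yr
# midpoint and spread consistent with SFR ~ 200 +/- 100 M_sun/yr
\end{verbatim}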
The stellar mass derived for \#27564 is $\sim 2.5 \times 10^{11} \, \rm M_\odot$ at $z=3$, and $\sim 10^{12} \, \rm M_\odot$ at $z=4.5$, after correcting for the AGN contamination, using a simple power-law component subtraction from the optical through IRAC band photometry \citep[see][]{cap13}. These stellar-mass corrected values are $\sim 30\%$ of the uncorrected ones. Note that the value at $z=4.5$, especially, should still be considered as an upper limit of the real stellar mass. At this redshift, the IRAC bands only trace rest-frame wavelengths $0.6-1.5 \, \rm \mu m$, and the hot-dust power-law component increasingly contaminates the normal galaxy SED up to rest-frame $\sim 3 \, \rm \mu m$. Thus, observing \#27564 at different wavelengths between observed 8 and $24 \, \rm \mu m$ is necessary to properly weigh the impact of the AGN power-law component, and derive a fully corrected stellar-mass value.
\subsubsection{Target \#26857}
\label{sec-targ26}
Target \#26857 is not detected on the PdBI map, but we can still attempt to constrain its IR dust SED from the photometric upper limits. Figure~\ref{fig_sedii} shows this SED at the minimum and maximum most likely redshifts for this source. This source is not detected in any of the UV/optical bands, so in this case we consider $z=5$ as the upper limit for the redshift (note that a higher upper limit would not change the following analysis). At both extreme redshifts, we have normalised the dusty star-forming galaxy template to the PdBI 1.1~mm flux density upper limit, which is the most restrictive one in the far-IR.
\begin{figure*}
\plotone{sedstellardustcomp_26857.eps}
\caption{Dust IR SED of target id \#26857a at the minimum and maximum most likely redshifts: $z=3$ ({\em left}) and $z=5.0$ ({\em right}). Line styles and symbols are the same as in Fig.~\ref{fig_sedi}. \label{fig_sedii}}
\end{figure*}
From Fig.~\ref{fig_sedii} it is evident that the pure star-forming galaxy template adjusted to the 1.1~mm photometry could just be sufficient to reproduce the mid-IR ($8$ and $24 \, \rm \mu m$) flux densities, if the galaxy were at $z=3$. Of course, this could vary depending on the chosen star-forming galaxy template, but in any case a dusty torus component would be of minor importance (unless, of course, the real flux densities at millimetre wavelengths are much lower than the upper limits). At higher redshifts, a pure star-forming galaxy template is increasingly unable to adjust both the mid- and far-IR photometry, even with the 1.1~mm flux density upper limit. Therefore, we can conclude that it is likely that source \#26857 is also a composite AGN/star-forming system.
As we have discussed in Section \ref{sec-maps}, there is a tentative $3.1\sigma$ detection in the field of this source, but located at 3.7~arcsec from our target. So our target does not appear to be the counterpart of the PdBI/SCUBA2 source. The presence of a $3.3\sigma$ SCUBA2 $850 \, \rm \mu m$ source, whose position is consistent with that of our PdBI source within the error bars, suggests that the PdBI detection is likely real (see discussion in \S\ref{sec-maps}). Note that the SCUBA2 and PdBI flux densities are consistent with each other, considering a typical dust spectral slope of 3.5-4.0.
Interestingly, no counterpart is found at the position of the PdBI detection on the CANDELS $H$-band image, which is remarkable given the depth of the CANDELS UDS maps ($H \approx 27$). Even more surprisingly, no counterpart is found on the deep SEDS $4.5 \, \rm \mu m$ map. So, if the PdBI source is indeed real, then it will be similar in nature to the presumably rare source GN10 \citep{wan07}, which is very bright at sub-millimetre wavelengths, but extremely faint in the near-IR, and which has been confirmed to be at $z=4.04$ \citep{dan08,dad09}. The PdBI flux density, along with all the photometric upper limits from the $U$ through the $8 \, \rm \mu m$ bands, are consistent with a GN10 SED. We note that these kinds of sources are probably not so rare as initially thought, given that another sub-millimetre source (HDF 850.1) on the same field has been found to have similar properties to those of GN10 \citep{wal12}.
Another possibility could be that the PdBI source is a cold, dusty gas cloud that is associated with our target \#26857. Other cases similar to this have been reported in the literature. For example, Ivison et al.~(2008) have found two sub-millimetre sources associated with a radio galaxy at $z=3.8$, one of which does not have a counterpart in the IRAC bands or shorter wavelengths. They proposed that this sub-millimetre source could be a plume of cold, dusty gas tidally stripped from one of two merging AGN. However, this plume of cold gas was much closer to its associated AGN ($< 10 \, \rm kpc$) than what our PdBI detection would be from target \#26857 if it were at the same redshift ($\sim 25-30 \, \rm kpc$ at $z\sim 3-4.5$). Therefore, we conclude that the hypothesis that our PdBI detection and target \#26857 are physically associated is much less likely than the possibility that they are different sources.
\section{Constraints on the sub-/millimetre properties of other $H-[4.5]>4$ sources}
\label{sec_othersrcs}
\begin{figure}
\epsscale{1.1}
\plotone{stampcomp_otherred.eps}
\caption{Postage stamps of two additional $H-[4.5]>4$ sources. {\em Left}: {\em HST} CANDELS f160w; {\em right:} SEDS IRAC $4.5 \, \rm \mu m$. The shown field is of $\sim20\times 20$~arcsec$^2$ in all cases. The circle in each panel is centred at the peak position of a SCUBA2 detection, and the radius indicates the positional uncertainty. The SCUBA2 $850 \, \rm \mu m$ detection significances are $\sim4.8 \sigma$ for the source in the top panels, and $\sim3.4 \sigma$ for the source in the bottom panels. The SCUBA2 source in the upper panels is also detected at $450 \, \rm \mu m$ with $3.4\sigma$ significance. \label{fig_othstamps}}
\end{figure}
\begin{figure*}
\plotone{sedstellardustcomp_othersrcs.eps}
\caption{Dust IR SEDs of four other $H-[4.5]>4$ sources in the C12 sample, which are $24 \, \rm \mu m$ detected. These sources have not been targeted with the PdBI, but we do have SCUBA2 and AzTEC photometric upper limits for them. For clarity, only a pure dusty star-forming galaxy model (dashed line), normalised to the most restrictive of the upper limits, is shown in this case. These plots illustrate that for these other four sources with $H-[4.5]>4$, a pure dusty star-forming galaxy model is sufficient to account for both the mid-IR photometry and sub-millimetre upper limits at $z=3$. This would not be the case, though, if the real sub-millimetre flux densities were much lower than the upper limits, and/or if the sources were actually at a much higher redshift (right-hand side panel). \label{fig_sedoth}}
\end{figure*}
Four additional $H-[4.5]>4$ sources in the C12 sample are detected at $24 \, \rm \mu m$, albeit with fainter fluxes $S_\nu(24 \, \rm \mu m)< 150 \, \rm \mu Jy$. We do not have PdBI observations for them, but we can nevertheless try to constrain their dust IR SEDs, using the SCUBA2 maps, and existing AzTEC 1.1~mm maps for the UDS field \citep{aus10}.
All of these sources lie in the coverage area of the SCUBA2 $850 \, \rm \mu m$ maps, and two of them in the $450 \, \rm \mu m$ coverage region. For the two sources with only $850 \, \rm \mu m$ coverage, no $> 3 \sigma$ SCUBA2 detection is found within an 8~arcsec radius. In the fields of the other two sources, with coverage in both SCUBA2 bands, there are SCUBA2 $850 \, \rm \mu m$ ($450 \, \rm \mu m$) detections with significances of 4.8$\sigma$ (3.4$\sigma$) and 3.4$\sigma$ ($<2\sigma$), respectively. However, in each case, the SCUBA2 centroid is more than 6~arcsec away from our $H-[4.5]>4$ source centroid (Fig.~\ref{fig_othstamps}), so it is very unlikely that any of these SCUBA2 sources corresponds to an IRAC $H-[4.5]>4$ source. Note that in the case of the SCUBA2 $850 \, \rm \mu m$ detection with $4.8\sigma$ confidence (upper panels in Fig.~\ref{fig_othstamps}), there are two other clear IRAC/WFC3 sources within the sub-millimetre positional uncertainty circle, so one of them (or both) is the likely correct counterpart. In the case of the SCUBA2 $850 \, \rm \mu m$ detection with $3.4\sigma$ confidence (bottom panels in Fig.~\ref{fig_othstamps}), one would be tempted to associate the SCUBA2 source with the $H-[4.5]>4$ galaxy. However, given the results of our PdBI observations towards target \#26857, we suspect that this association would likely be wrong.
Therefore, for the study of the IR dust SEDs of all our four additional $H-[4.5]>4$ sources with $24 \, \rm \mu m$ detections, we consider that none of them is detected in the SCUBA2 maps, and we use flux density upper limits for the SCUBA2 bands that cover the field of our sources. Figure~\ref{fig_sedoth} shows the dust IR SEDs for these four sources altogether. We only include here a pure dusty star-forming IR galaxy model, normalised to the most restrictive of the sub-millimetre photometric upper limits. These plots illustrate that, for any of these $H-[4.5]>4$ sources, such a model is sufficient to account for both the mid-IR photometry and sub-/millimetre upper limits at $z=3$. If the sources were at much higher redshifts (e.g., $z\sim5$), and/or the sub-millimetre fluxes were significantly lower than the SCUBA2 upper limits, then a dusty torus component would be needed. In conclusion, even though we cannot completely exclude the need for a dusty torus, the existing IR photometry indicates that not all of the $H-[4.5]>4$ sources are expected to have an important AGN component, as is the case for our PdBI target \#27564, and also likely for \#26857.
\section{Discussion}
\label{sec-disc}
Our PdBI detections towards two extremely red $H-[4.5]>4$ galaxies at $z>3$ are important for the following reasons.
$\bullet$ Target \#27564 has a clear, $4.3 \sigma$-confidence millimetre counterpart, which confirms that this is a massive, AGN/star-forming composite galaxy at high redshifts. The millimetre flux density, which is completely dominated by star formation, indicates that this galaxy has an IR luminosity due to star formation $L_{\rm IR}^{\rm SFR} \approx 0.6-1.7 \times 10^{12} \, \rm L_\odot$, corresponding to $SFR\approx 200 \pm 100 \, \rm M_\odot/yr$. This implies that, from the star-formation point of view, this source is not like the typical hyper-luminous sub-/millimetre sources discovered thus far with single-dish millimetre telescopes at $z\sim 3-4$ \citep[e.g.,][]{mic10}. Rather, it is a modest ULIRG at $z>3$, such as those more commonly found by IR galaxy surveys at $z\sim 2-3$. Preliminary results on sub-millimetre number counts in deep ALMA observations \citep{kar13,ono14} suggest that, if the redshift distribution of faint sub-millimetre galaxies is comparable to that of the brighter sources currently known, then many more examples of these ordinary ULIRGs should be discovered at $z\sim 3-4$.
$\bullet$ There is a tentative PdBI $3.1 \sigma$ detection at a distance of 3.7~arcsec from our target \#26857. The most likely scenario is that the two sources are unrelated, and the lack of another {\em Spitzer} or {\em HST} counterpart suggests that the PdBI detection corresponds to a new example of a very dusty starburst, like GN10, at high $z$. Our PdBI source also reveals that our $H-[4.5]>4$ galaxy is not the counterpart of the SCUBA2 detection in the same field, as a simple identification of the SCUBA2 source with the brightest IRAC source in the field would suggest.
One could wonder whether the discovery of this new dusty source in the field of our target \#26857 is simply fortuitous. We believe that it likely is not: these red high-$z$ sources tend to be highly clustered \citep[e.g.,][]{tam10,cpk11}, so our finding of a GN10-like candidate source close to our $H-[4.5]>4$ target should perhaps not come as a surprise. However, this is not necessarily expected for all $H-[4.5]>4$ galaxies, as some of them do not have any close SCUBA2 detection (neither robust nor tentative).
The analysis of the dusty IR SEDs of other $H-[4.5]>4$ galaxies suggests that not all of them are characterised by an important AGN component. At redshifts $z\sim3$, a pure dusty IR star-forming galaxy model is able to reproduce the mid-IR photometry and the sub-/millimetre photometric upper limits in all cases. Therefore, we conclude that our PdBI targets \#27564 and \#26857 could be prototypical of the brightest IRAC $H-[4.5]>4$ galaxies, but not all of them.
As a general conclusion, we argue that the analysis of ultra-deep near and mid-IR data offers an alternative route to discover new sites of powerful star-formation activity over the first few billion years of cosmic time. We also conclude that associations between single-dish sub-millimetre sources and bright IRAC galaxies can be quite uncertain in some cases, and interferometric observations are necessary to study the dust-obscured star formation properties of the $H-[4.5]>4$ galaxies.
\acknowledgments
Based on observations carried out with the IRAM PdBI. IRAM is supported by INSU/CNRS (France), MPG (Germany) and IGN (Spain). Also based on observations made with the {\em Spitzer Space Telescope}, which is operated by the Jet Propulsion Laboratory, California Institute of Technology under a contract with NASA; the NASA/ESA {\em Hubble Space Telescope}, obtained at the Space Telescope Science Institute; and the James Clerk Maxwell Telescope, operated by the Joint Astronomy Centre on behalf of the UK, Dutch and Canadian scientific councils. The research leading to these results has received funding from the European Commission Seventh Framework Programme (FP/2007-2013) under grant agreement number 283393 (RadioNet3).
We thank Ian Smail for useful discussions on the SCUBA2 maps, and an anonymous referee for a constructive report. MJM acknowledges the support of the UK Science and Technology Facilities Council.
\section{Introduction}
\label{intro}
The analysis of time developed over the years in \cite{Penrose} shows the possibility of a cyclical cosmological time. However, that work did not provide a standard mathematical foundation for its arguments, nor did it outline possible future developments for this new view of the universe.\\
Thus, this work is intended to formulate, in terms of quaternionic logarithms \cite{LogTrig}, some of the ideas discussed in \cite{Penrose}. In this context, some important results were previously obtained in \cite{HyperCycles} through exponential functions. The aim of the present work is to show that the cosmological cycles of time can also be approached through logarithmic functions, and to let the reader judge whether there is an advantage in using quaternionic logarithms instead of hypercomplex exponential functions to develop this new view of the universe.\\
The approach is based on presenting the quaternionic exponential and logarithmic functions, and then applying the latter to the problem proposed in this paper. It is essential to emphasize that the study takes place in the quaternionic space $H$, and the functions are functions of quaternionic variables taking values $q\in H.$ Thus, throughout the text, the quaternionic exponential and logarithmic functions will be called simply the exponential and logarithmic functions.
\section{The Exponential and Logarithmic Quaternionic Functions.}
Seeking for a better support to this work, a definition is essential for the subsequent understanding. Then follows the definition of quaternionic function.
\begin{definition}A quaternionic function is a law $f: H \longrightarrow H$ that associates to each $q=(q_{1},q_{2},q_{3},q_{4})$ an element $w=f(q)$ in the division algebra of quaternions, and is represented as follows: \\
\begin{eqnarray}
f(q_{1},q_{2},q_{3},q_{4}) = f_{1}(q_{1},q_{2},q_{3},q_{4}) + f_{2}(q_{1},q_{2},q_{3},q_{4})i \nonumber\\
+ f_{3}(q_{1},q_{2},q_{3},q_{4})j + f_{4}(q_{1},q_{2},q_{3},q_{4})k.\nonumber
\end{eqnarray}
\end{definition}
Considering $q\in H$ a quaternionic number given by $q=q_{1}+q_{2}i+q_{3}j+q_{4}k,$ or $q=q_{1}+\vec{q},$ as shown in \cite{LogTrig}, the exponential function is given by:
\begin{equation}e^{q}=e^{q_{1}}\{\cos|\vec{q}|+\vec{q}(\frac{\sin|\vec{q}|}{|\vec{q}|})\}.\end{equation}
The expression above allows a number of conclusions concerning this function; one of them is that it is hyper-periodic, a fact that is illustrated geometrically by means of a cube whose edges measure $2\pi.$ Since the values of the exponential function are determined by its values on this region, it is natural to call it the critical region of the quaternionic exponential function.
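This hyper-periodicity is easy to check numerically. The sketch below (our illustration, not part of the original development) implements the exponential formula above and verifies that shifting $\vec{q}$ by $2\pi$ along its own direction leaves $e^{q}$ unchanged:
\begin{verbatim}
import numpy as np

def quat_exp(q1, qvec):
    # e^q for q = q1 + qvec, per the formula above
    norm = np.linalg.norm(qvec)
    scalar = np.exp(q1) * np.cos(norm)
    vector = np.exp(q1) * qvec * np.sin(norm) / norm
    return scalar, vector

q1, qvec = 0.3, np.array([1.0, -2.0, 0.5])
unit = qvec / np.linalg.norm(qvec)
s1, v1 = quat_exp(q1, qvec)
s2, v2 = quat_exp(q1, qvec + 2 * np.pi * unit)
print(np.allclose(s1, s2), np.allclose(v1, v2))  # True True
\end{verbatim}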
Another function that is important in the development of quaternionic analysis, the quaternionic logarithmic function, is implemented in \cite{LogTrig}. First, the generalized system of spherical coordinates in four dimensions is presented, as below \cite{LogTrig}:
$$u_{1}=r\cos\theta_{1}\cos\theta_{2}\cos\theta_{3}, 0<r<\infty$$
$$u_{2}=r\cos\theta_{1}\cos\theta_{2}\sin\theta_{3},0<\theta_{3}<2\pi$$
$$u_{3}=r\cos\theta_{1}\sin\theta_{2}, 0<\theta_{2}<\frac{\pi}{2}$$
$$u_{4}=r\sin\theta_{1}, 0<\theta_{1}<\frac{\pi}{2}$$
Identifying, in the expression (1):
$$e^{q}=e^{q_{1}}\{\cos|\vec{q}|+\vec{q}(\frac{\sin|\vec{q}|}{|\vec{q}|})\}$$
the value of $e^{\vec{q}}$ is given by:
\begin{equation}\phi=(\cos|\vec{q}|,q_{2}\frac{\sin|\vec{q}|}{|\vec{q}|},q_{3}\frac{\sin|\vec{q}|}{|\vec{q}|},q_{4}\frac{\sin|\vec{q}|}{|\vec{q}|})\end{equation}
or
\begin{equation}\phi=\cos|\vec{q}|+q_{2}\frac{\sin|\vec{q}|}{|\vec{q}|}i+q_{3}\frac{\sin|\vec{q}|}{|\vec{q}|}j+q_{4}\frac{\sin|\vec{q}|}{|\vec{q}|}k,\end{equation}
where, substituting $(2)$ into $(1),$ we obtain:
\begin{equation}e^{q}=e^{u_{1}}\phi.\end{equation}
Writing the expression $(4)$ in spherical coordinates we have:
$$e^{q}=e^{u_{1}}(\cos\theta_{1}\cos\theta_{2}\cos\theta_{3}+\cos\theta_{1}\cos\theta_{2}\sin\theta_{3}i+\cos\theta_{1}\sin\theta_{2}j+\sin\theta_{1}k)$$
The logarithm of $q$ will be denoted by $\ln q$ and will be the inverse of the exponential function. Thus $w=\ln q$ satisfies the relation:
$$e^{w}=q,$$
where $q\neq 0.$ Now let $w\in H$ be indicated by $w=w_{1}+w_{2}i+w_{3}j+w_{4}k$ or $w=(w_{1},w_{2},w_{3},w_{4})$ and use the generalized spherical coordinates:
$$u'_{1}=r\cos\theta_{1}\cos\theta_{2}\cos\theta_{3}, 0<r<\infty$$
$$u'_{2}=r\cos\theta_{1}\cos\theta_{2}\sin\theta_{3},0<\theta_{3}<2\pi$$
$$u'_{3}=r\cos\theta_{1}\sin\theta_{2}, 0<\theta_{2}<\frac{\pi}{2}$$
$$u'_{4}=r\sin\theta_{1}, 0<\theta_{1}<\frac{\pi}{2}.$$
Now $w$ is given by:
$$w=u'_{1}+u'_{2}i+u'_{3}j+u'_{4}k$$
$$w=r(\frac{u'_{1}}{r}+\frac{u'_{2}}{r}i+\frac{u'_{3}}{r}j+\frac{u'_{4}}{r}k)$$
$$w=r(\cos\theta_{1}\cos\theta_{2}\cos\theta_{3}+\cos\theta_{1}\cos\theta_{2}\sin\theta_{3}i+\cos\theta_{1}\sin\theta_{2}j+\sin\theta_{1}k)$$
where the value $r$ is positive, i.e., $r>0.$ Thus,
$$e^{w}=e^{u'_{1}}\{\cos|\vec{u'}|+\vec{u'}(\frac{\sin|\vec{u'}|}{|\vec{u'}|})\}$$
where $w=u'_{1}+\vec{u'}.$\\
However,
$$e^{u'_{1}}e^{\vec{u'}}=q$$
taking $e^{u'_{1}}=r,$
\begin{equation}u'_{1}=\ln|r|\end{equation}
and
$$\vec{u'}=\ln(\cos\theta_{1}\cos\theta_{2}\cos\theta_{3}+\cos\theta_{1}\cos\theta_{2}\sin\theta_{3}i+\cos\theta_{1}\sin\theta_{2}j+\sin\theta_{1}k)$$
but $w=u'_{1}+\vec{u'}.$\\
Therefore, there is an expression for the logarithmic quaternionic function.
\begin{equation}w=\ln q=\ln|r|+\ln(\cos\theta_{1}\cos\theta_{2}\cos\theta_{3}+\cos\theta_{1}\cos\theta_{2}\sin\theta_{3}i+\cos\theta_{1}\sin\theta_{2}j+\sin\theta_{1}k)\end{equation}
\section{Logarithmic Cycles of the Time.}
The development presented in the previous section suggests that the logarithm can be used as a way to exhibit the cyclicity of some physical quantity. We now show that this cyclicity arises simply by considering $q=t+xi+yj+zk,$ $q\in H,$ and a function $\eta(q)=\eta(t,x,y,z)$ given by:
\begin{equation}\eta(t+\tau i,x,y,z)=(t+\tau i)+xi+yj+zk\end{equation}
or
\begin{equation}\eta(t+\tau i,x,y,z)=(t+\tau i)+\vec{u}.\end{equation}
Thus, the above function is characterized by its position, given by the vector $\vec{u}=(x,y,z),$ and by time, here denoted by $t.$ The function can describe the temporal evolution of some physical quantity or simply model the problem under study.\\
Now, the function $\eta(q)$ can be written in logarithmic form by using $(5),$ with $u'_{1}=t+\tau i=\ln|r|,$ and $(6),$ as follows:
\begin{equation}\ln[\eta(q)]=\ln|e^{t+\tau i}|+\ln(\cos\theta_{1}\cos\theta_{2}\cos\theta_{3}+\cos\theta_{1}\cos\theta_{2}\sin\theta_{3}i+\cos\theta_{1}\sin\theta_{2}j+\sin\theta_{1}k),\end{equation}
or,
\begin{equation}\ln[\eta(q)]=\ln|e^{t}(\cos\tau+ i\sin\tau)|+\ln(\cos\theta_{1}\cos\theta_{2}\cos\theta_{3}+\cos\theta_{1}\cos\theta_{2}\sin\theta_{3}i+\cos\theta_{1}\sin\theta_{2}j+\sin\theta_{1}k).\end{equation}
Therefore, considering the quaternionic function of the form: \begin{equation}\eta[q_{\varphi}]=[t+(\tau+2k\pi)i+(\theta_{1}+2k\pi)i+(\theta_{2}+2k\pi)j+(\theta_{3}+2k\pi)k],\end{equation}
it becomes clear that:
\begin{equation}\ln[\eta(q)]=\ln[\eta(q_{\varphi})].\end{equation}
The above results can be summarized in the following theorem:
\begin{theorem}If $q, q_{\varphi}\in H$ are quaternionic numbers, with $q=(t+\tau i,x,y,z)$ and $q_{\varphi}$ given by $(11),$ and $\ln(\eta(q))$ is the logarithmic quaternionic function, then
\begin{equation}\ln[\eta(q)]=\ln[\eta(q_{\varphi})].\end{equation}
\end{theorem}
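A short verification of the theorem (our sketch; it uses only the $2\pi$-periodicity of sine and cosine): the first summand of $(10)$ satisfies
$$\ln|e^{t}(\cos(\tau+2k\pi)+i\sin(\tau+2k\pi))|=\ln|e^{t}(\cos\tau+i\sin\tau)|,$$
and every trigonometric factor in the second summand is unchanged under $\theta_{m}\mapsto\theta_{m}+2k\pi,$ so both summands coincide for $q$ and $q_{\varphi}.$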
\section{Conclusion.}
\textbf{Theorem 1} shows that cyclicity can be realized in the variable $\tau$ and in the spatial variables, represented here by $\theta_{1},$ $\theta_{2}$ and $\theta_{3},$ in such a way that the initial function is recovered. The following statements summarize what was presented here:
\begin{enumerate}
\item If the function $\eta$ represents a physical phenomenon, the cyclicality of time occurs simultaneously with that of the spatial dimensions, unlike the case considered in \cite{HyperCycles}, where only time was considered cyclical.
\item The regions of variation of $\tau$ and of the spatial variables, represented here by generalized angles in spherical coordinates, can be interpreted as the fundamental range $-\pi<\tau\leq\pi$ and the cube with edges parallel to the coordinate axes, of length $2\pi,$ centered at the origin of the three-dimensional system.
\item The possibility of cyclicity in logarithmic form allows two disjoint approaches: one in which space and time are treated separately, and another in which time and space undergo cyclic variations together.
\end{enumerate}
Therefore, the model presented here can be applied to physical models that depend on time and space coordinates. It was shown that, for Cycles of the Time to exist, the time variable must be of the form $t+\tau i,$ which can be considered a planar composition of times in which one component depends on the other through the complex relationship presented. Moreover, after one cycle, in both the exponential models \cite{HyperCycles} and the logarithmic model, these variables remain unchanged and return to the starting position. Furthermore, since the variables of the model are considered simultaneously, a cycle of time will result in a cycle of space, because in the logarithmic formulation presented here the cycles of time are coupled to the cycles of space.
\section{Acknowledgments}
My family.
\section{Introduction} \label{sec:intro}
Complex Organic Molecules (COMs) are organic molecules with six or more atoms, over 50 species of which have been detected in the ISM \citep{Herbst2009}. Understanding the chemistry that leads to the formation of such large molecules is an active area of research including laboratory experiments \citep{Chuang2016,Bergantini2017}, observational surveys \citep{Ceccarelli2017,Belloche2016}, and modelling work \citep[eg.][]{Coutens2018}. However, the major formation routes of COMs in star forming regions remains an open question.\par
It is possible that COMs form in the gas phase of star forming regions. For example, models have shown that proton transfer reactions between common ice mantle species that sublimate in hot cores can efficiently produce COMs \citep{Taquet2016}. Further, chemical models using gas phase reactions to form glycolaldehyde (HCOCH$_2$OH) can match the abundances observed in hot corinos \citep{Skouteris2018}. Recent observations of formamide towards the L1157-B1 shocked region were also well fit by shock models in which the parent species were released into the gas phase by the shock passage and then reacted in the warm, dense post-shock gas \citep{Codella2017}. Similarly, \citet{Kahane2013} found that observed formamide abundances in the protostar IRAS 16293-2422 could be reproduced using a model that assumed neutral parent species were able to react in the warm gas.\par
Alternatively, COMs in the gas phase may be best explained by grain surface formation followed by desorption into the gas phase. In this case, the grain surface acts to improve the efficiency of formation, bringing reactants together into one location and potentially lowering the energy required. Models of both a prestellar core \citep[L1544;][]{Vasyunin2017,Quenard2018} and a hot corino \citep[IRAS 16293 B;][]{Quenard2018} have had success implementing the diffusion-reaction mechanism of \citet{Hasegawa1992}. However, both works rely on chemical desorption \citep{Minissale2016} to release COMs into the gas phase, the efficiency of which is not well constrained.\par
Regardless of the formation path, the problem of releasing material into the gas phase remains. Gas phase formation routes require parent species to be released from the grains and surface formation requires the release of the products. In warm regions such as hot cores or shocked zones, this poses no issue. However, in cold dark clouds, it is less obvious how efficiently material can be released from the grains. In this work, the explosions of ice mantles are considered as a possible way both to enrich the gas phase with grain surface material and to open new chemical pathways.\par
It has been proposed that the ice mantles of dust grains may undergo explosions caused by the build up and subsequent reaction of radicals in the ice \citep{Greenberg1976}. This would release stored chemical energy and could raise the temperature of the whole dust grain. If this temperature excursion is sufficiently high, the ices will sublimate explosively. To raise a dust grain to \SI{1000}{\kelvin} would require approximately \SI{12}{\kilo\joule\per\mole}, an order of magnitude less than the typical bond energy \citep{Duley2011}.\par
An interesting consequence of these explosions is the unique chemical phase that follows. \citet{Cecchi-Pestellini2010} and \citet{Rawlings2013a} considered that in such explosions, the sublimated ice forms an expanding shell of gas which initially has the density of the pre-sublimation solid ($\sim$ \SI{e22}{\per\centi\metre\cubed}) and a temperature of \SI{1000}{\kelvin}. This phase lasts for $\sim$\SI{100}{\nano\second} as the sublimated ice expands into the wider environment but the chemical timescale is sufficiently short in such hot, dense gas that efficient three body chemistry can take place. This would lead to the formation of complex species from the released material and the chemical enrichment of the wider gas phase.\par
Whilst the possibility of these explosions forming specific molecules such as propene (CH$_2$CHCH$_3$) \citep{Rawlings2013} and methanol (CH$_3$OH) \citep{Coutens2017} have been studied, a comprehensive model of these explosions towards a dark cloud has not been produced. In this work, a gas-grain chemical model that includes explosions is used to model observations of COMs in a dark cloud with the aim of testing whether explosion chemistry is a viable route to their formation. In Section~\ref{sec:tmc-1}, the observational data is presented. In Section~\ref{sec:model}, the chemical model is described and, in Section~\ref{sec:results}, a comparison between the model and observations is presented.
\section{TMC-1 - Observational Data}
\label{sec:tmc-1}
In order to test whether explosion chemistry is a necessary or relevant process for dark cloud chemistry, observational constraints are required. TMC-1 is a common test case for dark cloud models \citep[eg.][]{Vidal2017,Ruaud2016} and many COMs have been detected in the region \citep{Soma2018}, making it an ideal candidate.\par
Two tests of the models are taken into consideration. First, the inclusion of explosions in the chemical model should not interfere with the gas phase chemistry of simple species. These species must be at least as well described by explosions as they are by other models. To this end, the first part of Table~\ref{table:observations} lists simple chemical species and their abundances taken from \citet{Agundez2013}. These were calculated by those authors from observed column densities using a H$_2$ column density of \SI{e22}{\per\centi\metre\squared}. \par
Second, the primary goal is to reproduce the observed abundances of COMs in TMC-1. Using the H$_2$ column density from \citet{Agundez2013}, the column density of COMs in the region have also been converted to fractional abundances. These are listed in the second part of Table~\ref{table:observations}. The column densities of methanol (CH$_3$OH), acetaldehyde (CH$_3$CHO), methyl formate (HCOOCH$_3$) and dimethyl ether (CH$_3$OCH$_3$) were taken from \citet{Soma2018}. Propene (CH$_2$CHCH$_3$) was detected by \citet{Marcelino2007}.\par
Note that \citet{Soma2015} found that the methanol emission in TMC-1 peaks in a different location to the cyanopolyyne peak. The cyanopolyyne peak is the location from which most molecular emission in the region originates but the COMs detected by \citet{Soma2018} were detected towards the methanol peak. \citet{Soma2018} argue that the detected COMs are therefore likely to form on the grain surface or from CH$_3$OH in the gas. The reason for this is that any enhancement in CH$_3$OH would naturally be accompanied by an enhancement in the other species. If explosions were responsible for forming or releasing COMs, similar behaviour would be observed. Since the physical conditions of the two peaks are broadly similar ($n_H\sim$\SI{e4}{\per\centi\metre\cubed} and $T_k$=\SI{10}{\kelvin}) and even the methanol abundance varies only by a factor of 1.5 \citep{Soma2018}, no distinction is made between the peaks for the sake of the modelling.
\begin{table}
\centering
\caption{Species and measured abundances in TMC-1 taken from \citet{Agundez2013} unless otherwise specified.}
\begin{tabular}{cc}
\hline
Species & Fractional Abundance \\
\hline
OH & \num{3e-7}\\
CO & \num{1.7e-4}\\
HCO$^+$ & \num{9.3e-9}\\
H$_2$CO & \num{5e-8}\\
N$_2$H$^+$ & \num{2.8e-10}\\
NH$_3$ & \num{2.5e-8}\\
CS & \num{3e-9}\\
H$_2$CS & \num{7e-10}\\
OCS & \num{2.2e-9}\\
SO & \num{1.5e-9}\\
\hline
CH$_3$OH & \num{6e-9}\\
CH$_3$CHO & \num{5.5e-10}\\
HCOOCH$_3$ & \num{1.6e-10} \\
CH$_3$OCH$_3$ & \num{1.9e-10} \\
CH$_2$CHCH$_3$ & \num{4e-9} \\
\hline
\end{tabular}
\label{table:observations}
\end{table}
\section{Model}
\label{sec:model}
\subsection{The Cloud Chemistry Model}
In order to model TMC-1 and to test the effect of explosions on the chemistry of dark clouds, the gas-grain chemical code UCLCHEM\footnote{\url{uclchem.github.io}} \citep{Holdship2017} was modified. The basic dark cloud model is described in this section.\par
UCLCHEM is used to model a single point at the centre of a dark cloud. The gas starts at a hydrogen nuclei density of \SI{e2}{\per\centi\metre\cubed} and collapses in freefall to \SI{2e4}{\per\centi\metre\cubed} at a constant temperature of \SI{10}{\kelvin}. After the collapse, the visual extinction at the cloud centre is 10 mag. Initially, the abundance of every species except for atomic elements is set to zero whilst the elemental abundances themselves are set to their solar values \citep{Asplund2009}.\par
The model follows 528 species through a network of approximately 3000 reactions. This includes species in the gas phase and in the ice mantles. Gas phase reactions from the UMIST12 database \citep{McElroy2013}, freeze out of gas phase species onto the dust grains and the non-thermal desorption of those species back into the gas phase through UV, cosmic rays and H$_2$ formation \citep{Roberts2007} are all included in the network. In addition to this, the cosmic ray induced photo-dissociation of hydrogenated species on the grain surfaces are included using efficiencies from \citet{Garrod2006}.\par
\subsection{The Explosion Model}
\label{sec:explosion-model}
The model considers the possibility that if enough chemical energy is stored in the ice mantles, it could be suddenly released and this would lead to an explosion. This is treated by considering the abundance of H atoms in the ices. If approximately 5\% of the grain material was atomic hydrogen, the energy released through H$_2$ formation would be sufficient to heat the whole grain to \SI{1000}{\kelvin} if every H atom were involved. Thus, an explosion is triggered in the model once the H abundance in the ices reaches this threshold. \par
The hydrogen required by the model is built up by assuming there is a probability ($f_H$) that when a H atom freezes out of the gas phase and onto the ices, it remains atomic rather than immediately reacting to form H$_2$ or other species. Following \citet{Rawlings2013a} and \citet{Duley2011}, a probability of 0.1 is assumed based on the retention of H atoms in amorphous carbon films found in laboratory experiments \citep{Sugai1989}.\par
The cosmic ray induced photodissociation of species in the ice mantles also contributes to the total as any abstracted H is also stored. If a portion of this abstracted H actually desorbs or the probability of H remaining atomic in the ices is less than 0.1, this model will overestimate the amount of H in the ices. In that case, the actual impact of explosions on the chemistry in TMC-1 would be overestimated by the model. \par
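The trigger logic can be summarized schematically as follows (an illustrative sketch with hypothetical function names, not UCLCHEM's actual source):
\begin{verbatim}
F_H = 0.1          # chance a freezing H atom stays atomic (Sugai et al. 1989)
H_THRESHOLD = 0.05 # fraction of the ice in atomic H that triggers an explosion

def store_h(stored_h, h_freeze_flux, cr_abstraction_flux, dt):
    # accumulate atomic H in the mantle over one timestep
    return stored_h + dt * (F_H * h_freeze_flux + cr_abstraction_flux)

def explosion_triggered(stored_h, total_ice):
    return stored_h / total_ice >= H_THRESHOLD
\end{verbatim}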
\begin{table}
\centering
\caption{Parameters and adopted values for the explosion model.}
\begin{tabular}{cc}
\hline
Parameter & Value \\
\hline
Initial Density & \SI{e22}{\per\centi\metre\cubed}\\
Initial Temperature ($T_0$) & \SI{e3}{\kelvin} \\
Initial Radius ($r_0$) & \SI{e-5}{\centi\metre}\\
Sound speed ($v_s$) & \SI{e4}{\centi\metre\per\second}\\
Trapping Factor ($\epsilon$) & 1.0 \\
Atomic H Retention ($f_H$) & 0.1 \\
\hline
\end{tabular}
\label{table:modelparams}
\end{table}
To model the explosion itself, the single point model is paused and the ice mantle contents are run through a separate chemical model. In this model, the pre-explosion ice mantle is considered to form an adiabatically expanding spherical shell of gas. This gas expands and gas phase chemistry occurs until the density of the cloud is reached. The material is then added to the gas phase of the main chemical model, which resumes with depleted ices.\par
The chemical network for the explosion phase consists of 143 three body reactions, many of which involve radicals that build up in the ices through partial hydrogenation of frozen species and photodissociation of larger species. Due to the high density, it is assumed that the reactions take place in the high pressure limit, that is to say the rates are not limited by the concentration of the stabilizing third body and the reaction proceeds at the two body rate \citep[Chapter 9 of][]{Jacob1999}. All reactions are listed in Table~\ref{table:explosionreactions1}. Where possible the rate coefficients are taken from the literature, otherwise rates are randomly sampled in log-space from the range \num{e-15} to \SI{e-9}{\centi\metre\cubed\per\second}. The model is then run 1000 times to generate a mean abundance and variance due to the unknown rates.\par
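The sampling of the unknown rate coefficients amounts to a simple Monte Carlo loop, sketched below (the network-integration call is a hypothetical placeholder):
\begin{verbatim}
import numpy as np
rng = np.random.default_rng()

def sample_rates(n_unknown):
    # log-uniform draw of rate coefficients in [1e-15, 1e-9] cm^3 s^-1
    return 10.0 ** rng.uniform(-15.0, -9.0, size=n_unknown)

# runs = [run_explosion_network(sample_rates(n_unknown)) for _ in range(1000)]
# mean, var = np.mean(runs, axis=0), np.var(runs, axis=0)
\end{verbatim}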
The parameters used for the explosion phase are listed in Table~\ref{table:modelparams}. The density and temperature of the exploding material have a time dependence based on the adiabatic expansion of a spherical shell, following the work of \citet{Cecchi-Pestellini2010}. If the shell is assumed to expand at the sound speed of the gas, then by mass conservation the density is given by,
\begin{equation}
\frac{n}{n_0} = \left(\frac{r_0}{r_0+\epsilon v_st}\right)^3
\label{eq:density}
\end{equation}
where $n$ is the number density and the subscript 0 indicates the value of a variable at the start of the explosion, so that $r = r_0+\epsilon v_st$ is the radius of the shell at time $t$. $v_s$ is the sound speed and $\epsilon$ is the trapping factor: an arbitrary constant that allows the expansion to be made slower than that of a freely expanding sphere of gas. Assuming an adiabatic expansion, the temperature, $T$, is given by,
\begin{equation}
T = T_0\left(\frac{r_0}{r_0+\epsilon v_st}\right)
\label{eq:temperature}
\end{equation}
where $T_0$ is the initial temperature, taken to be \SI{1000}{\kelvin}. This value is chosen as previous work on explosions showed that dust grains heated to this temperature could provide explanations for infrared emission bands in interstellar spectra \citep{Duley2011} and the high excitation H$_2$ emission in diffuse clouds \citep{Cecchi-Pestellini2012}.\par
Equations~\ref{eq:density} and~\ref{eq:temperature} are plotted in Figure~\ref{fig:exp-physical} for an $\epsilon$ of 1, the value adopted for this work. A smaller trapping factor increases the timescale of the explosion but it was found that models with $\epsilon = 0.1 $ did not produce greatly different abundances. The explosion ends when the exploding gas reaches ambient gas density. At the completion of this explosion, the abundances of the former ice mantle are added to the gas phase and the main chemical model continues.\par
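For reference, the profiles in Figure~\ref{fig:exp-physical} follow directly from Equations~\ref{eq:density} and~\ref{eq:temperature} with the parameters of Table~\ref{table:modelparams}; a minimal sketch (the time grid is chosen purely for illustration):
\begin{verbatim}
import numpy as np

n0, T0 = 1e22, 1e3   # initial density (cm^-3) and temperature (K)
r0, v_s = 1e-5, 1e4  # initial radius (cm) and sound speed (cm s^-1)
eps = 1.0            # trapping factor

def density(t):      # Equation (eq:density)
    return n0 * (r0 / (r0 + eps * v_s * t))**3

def temperature(t):  # Equation (eq:temperature)
    return T0 * (r0 / (r0 + eps * v_s * t))

t = np.logspace(-12, -2, 200)  # seconds
n, T = density(t), temperature(t)
\end{verbatim}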
\begin{figure}
\includegraphics[width=0.5\textwidth]{exp-physical}
\caption{The density (black) and temperature (red) profiles of the expanding gas shell as a function of time during an explosion.\label{fig:exp-physical}}
\end{figure}
\subsection{The Diffusion Model}
In order to test whether explosions are necessary to explain the abundance of COMs in TMC-1, a comparison model is employed. The explosions are turned off and the reaction of species on the grain through the Langmuir-Hinshelwood mechanism is considered. These are reactions between adsorbed molecules as they diffuse around the grain surface and they are implemented through the formalism described by \citet{Hasegawa1992}. Reaction-diffusion competition \citep[e.g.][]{Chang2007} and chemical desorption \citep{Minissale2016} are also included in the model. Due to the chemical desorption, a fraction of any products created on the surface through exothermic reactions are released into the gas phase. The implementation of these processes in UCLCHEM was developed by \citet{Quenard2018} and is extensively described in Appendix A of that work.\par
The network used for this model mainly consists of the successive hydrogenation of key species such as CO through to CH$_3$OH as well as the formation of species such as CO$_2$ from CO and O. However, the main additions to the network of \citet{Quenard2018} are reactions taken from \citet{Garrod2006} that produce the COMs in Table~\ref{table:observations}. These reactions are presented in Table~\ref{table:garrod}. Each reaction is assumed to be barrierless as they are radical-radical reactions and therefore the rate is largely dependent on the diffusion rate of the reactants.
\begin{table}
\centering
\caption{Surface reactions necessary to produce observed COMs using the diffusion model. Reactions are taken from \citet{Garrod2006} or proposed in this work. A \# indicates a species on the surface.}
\begin{tabular}{cccc}
\hline
Reactant 1 & Reactant 2 & Product & Source\\
\hline
\#HCO & \#CH$_3$O & \#HCOOCH$_3$ & G\&H 2006\\
\#HCO & \#CH$_2$OH & \#HCOOCH$_3$ & \\
\#CH$_3$ & \#CH$_3$O & \#CH$_3$OCH$_3$ & G\&H 2006 \\
\#HCO & \#OH & \#HCOOH & G\&H 2006\\
\#CH$_3$ & \#C$_2$H$_3$ & \#CH$_3$CHCH$_2$ & \\
\#CH$_3$ & \#HCO & \#CH$_3$CHO & \\
\hline
\end{tabular}
\label{table:garrod}
\end{table}
\section{Results}
\label{sec:results}
\subsection{Effect of Explosions on Cloud Chemistry}
\label{sec:simple-results}
There are two motivating reasons to test the ability of the explosion model to reproduce observed abundances of simple species. The first is that there is the potential that the regular release of the ice mantles into the gas phase completely changes the abundances of those species. The model must reproduce the observations at least as well as a standard UCLCHEM model. Otherwise, it cannot be correct, even if it efficiently produces COMs.\par
Secondly, there are a large number of free parameters in the model, both in the assumed properties of TMC-1 and in the explosion itself. By adjusting the cloud parameters to best fit the observed abundances of simple species, the number of free parameters available to fit the COMs is reduced.\par
To fit the simple species, the so-called distance of disagreement measure \citep{Wakelam2006} was used. This is the average log difference between the model and observations; a sketch of this statistic follows this paragraph. The UV flux, cosmic ray ionization rate and temperature were fit by minimizing this statistic. The temperature was varied between 0 and \SI{30}{\kelvin}. The standard cosmic ray ionization rate was taken to be \SI{1.3e-17}{\per\second}, and both it and the UV flux were varied between 0 and 100 times the standard values. The parameter space was also sampled, repeating parameter values in proportion to the value of the distance of disagreement they produced to test the sensitivity of the abundances to the parameters.\par
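A minimal sketch of the fitting statistic as we read it from \citet{Wakelam2006} (the exact form, a mean of absolute log differences, is our assumption):
\begin{verbatim}
import numpy as np

def distance_of_disagreement(model_abund, obs_abund):
    # Mean absolute log10 difference between modelled and observed
    # fractional abundances; smaller values indicate a better fit.
    model = np.asarray(model_abund, dtype=float)
    obs = np.asarray(obs_abund, dtype=float)
    return np.mean(np.abs(np.log10(model) - np.log10(obs)))
\end{verbatim}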
\begin{figure*}
\includegraphics[width=\textwidth]{simple}
\caption{The abundances of several simple species as a function of time in the explosion model. The purple shaded region shows the 67\% confidence interval of the abundances considering the uncertainty in the fitting. The average is plotted as a purple line and the output of a standard dark cloud UCLCHEM model without explosions is plotted in grey. The grey horizontal band in each case is the observed abundance in TMC-1 with a 0.3 dex uncertainty.\label{fig:exp-simple}}
\end{figure*}
Figure~\ref{fig:exp-simple} shows the observed abundances with the 0.3 dex uncertainty assumed by \citet{Agundez2013} in grey and the abundances obtained by the model in purple. In each subplot, the purple line shows the median abundance from the model sampling and the shaded region is given by the difference between the 17th and 83rd percentile values of the abundances across the models. The best fit is a cosmic ray ionization rate of \SI{1.7e-17}{\per\second}, a temperature of \SI{12.1}{\kelvin}, and a UV radiation field of 0.7 Habing. The parameter ranges corresponding to the shaded regions include gas temperatures between 9 and \SI{21}{\kelvin}, UV fields between 0.5 and 4.1 Habing and cosmic ray ionization rates up to 8 times the standard.\par
Figure~\ref{fig:exp-simple} also shows the abundance of each species as a function of time in a standard dark cloud model without explosions. The major difference is that for many species, once a maximum value is reached at high densities in the standard model, freeze out starts to deplete its abundance. In the explosion model, a quasi steady state is instead reached, with explosions regularly releasing material back into the gas phase. \par
One problem with the model is that it does not reproduce the observed abundances of ions well. It is not uncommon for single point models of dark clouds to give low abundances of ions as they do not capture the chemistry of regions with lower visual extinction. For example, the model without explosions has a HCO$^+$ peak that is an order of magnitude too low but is within a factor of a few of that found in other dark cloud models \citep{Iqbal2018}. However, the explosions seem to exacerbate the issue and the explosion cycle averaged abundance is much lower than the non-explosion peak, particularly in the case of N$_2$H$^+$. Nevertheless, given the generally good agreement between the explosion model and the observations, the model is considered to give a good representation of dark cloud chemistry.\par
It should be noted that the fact that the explosions affect the abundances of all species, even those mostly formed in the gas phase, poses a problem for the model. As noted in Section~\ref{sec:tmc-1}, observations show that different species peak in emission at different positions in TMC-1. The usual explanation is that differences between gas-phase and surface chemistry are the cause. Explosions do not present a solution to this since all species are affected similarly.\par
\subsection{COMs}
The aim of introducing explosions into this model was to reproduce the abundance of COMs in TMC-1. In this section, the model is further compared to the abundances in the lower half of Table~\ref{table:observations}. In Figure~\ref{fig:exp-coms}, the abundances of those COMs obtained through this modelling are plotted along with the observed values. In this plot, the purple line gives the average abundance of each species in the models, having run the model many times to randomly sample unknown rates. The shaded region is not visible in the plot due to the fact that the abundances of the displayed species are unaffected by the unknown rates. \par
\begin{figure*}
\includegraphics[width=0.9\textwidth]{coms}
\caption{The abundances of COMs observed in TMC-1. The horizontal bands show observed values, the purple line shows the explosion model abundances. There is a shaded region showing the results from 1000 models using random rates for the explosion reactions with unknown rates. However, they do not affect the abundance of the species shown here and so the region is not visible. \label{fig:exp-coms}}
\end{figure*}
As can be seen in the Figure, CH$_3$OCH$_3$ and CH$_3$CHCH$_2$ are not efficiently produced in the model. The low production of these species illustrates an overall problem with the explosion model which is the short timescale of the explosion event. Unless the rate of a reaction is very high, the overall change in reactant abundances is low. In general, the proportion of an ice phase species that reacts during an explosion event is $\ll$1\%. For example, 99.99\% of HCO in the ice phase is released into the gas phase after a typical explosion and only 0.01\% reacts to form other species. Thus, the limiting factor in the formation of a COM such as CH$_3$OCH$_3$ is the rate of reaction in the explosion, not the availability of parent species.\par
This low rate of production is exacerbated by destruction in the gas phase. For example, in the reference model, CH$_3$OCH$_3$ can have a fractional abundance $\sim$\num{e-11} immediately after an explosion. If such abundances were preserved between explosions, the cumulative abundance could reach observed values. However, CH$_3$OCH$_3$ is efficiently destroyed by ions in the gas phase and so does not accumulate.\par
In the model, HCOOCH$_3$ is efficiently produced and is within an order of magnitude of the observed abundance. However, the reaction to produce HCOOCH$_3$ is unconstrained in the model and so the rate is randomly sampled. Despite this, the abundance of HCOOCH$_3$ does not vary. Tests where the reaction is removed from the explosion network show that HCOOCH$_3$ is actually produced in the gas phase. The explosions contribute by releasing parent species from the ice mantle and the reactions during the explosion are not actually directly producing HCOOCH$_3$. Given that the timescale of the explosion appears to be too short in comparison to the chemical timescales of the explosion network, this may be the main way explosions contribute to interstellar chemistry, if they do in fact contribute.\par
Finally, CH$_3$OH and CH$_3$CHO are each at least an order of magnitude above their observed values. This is a result of the fact that the parent species of each molecule are extremely abundant in the ices and so even with low reaction rates a large amount of each is produced. Further, a large proportion of these species is frozen onto the dust grains and the explosions release this, greatly enhancing their gas phase abundance.\par
In summary, whilst the explosion model gives an adequate description of the dark cloud chemistry of simple species, it does not reproduce the observed abundances of this sample of COMs. The main flaw is that the reactions which form COMs are not sufficiently fast to form large amounts of the complex species in the relatively short explosions. However, a further problem is posed by the fact that for the simplest COMs that were modelled, the predicted abundances are too high due to the release of large amounts of ice mantle material.\par
\subsection{Comparison to the Diffusion Model}
Species that freeze onto the ices are likely to diffuse and potentially react. If these processes alone are sufficient to model the abundance of COMs in TMC-1, it is questionable whether the explosion process needs to be introduced. However, if diffusion reactions are insufficient, it is possible explosions are an important process in molecular clouds. In this section the ability of the diffusion model to reproduce COMs in TMC-1 is evaluated using the standard parameters from Section~\ref{sec:simple-results}. The abundances of observed COMs in TMC-1 and the abundances obtained in the explosion and diffusion models are summarized in Table~\ref{table:abunds}.\par
The model is successful in reproducing the abundance of CH$_3$OH. For $\sim$\SI{1}{\mega\year} after the collapse to the density of the cloud, the abundance of CH$_3$OH is within an order of magnitude of the observed value. However, the CH$_3$CHO abundance is too high as it has an abundance similar to CH$_3$OH.\par
Beyond this, the diffusion model does not reproduce the observations. The abundances of HCOOCH$_3$, CH$_3$OCH$_3$ and CH$_3$CHCH$_2$ are too low by many orders of magnitude. Given that the production of the reactants that form these species is the same in both models, this must be due to the efficiency of the diffusion of these reactants. The explosion provides a means for the reactants in Table~\ref{table:garrod} to meet and react whereas most are too heavy to quickly diffuse around the grain surface, especially competing with more mobile species such as H.\par
The diffusion model is improved if temperatures of \SI{30}{\kelvin} are used. Non-negligible amounts of the three largest COMs are produced. However, HCOOCH$_3$ and CH$_3$CHCH$_2$ are still too low by over three orders of magnitude. On the other hand, CH$_3$OCH$_3$ is actually higher in these models than the observations. Thus it is possible that if the dust temperature is $\sim$\SI{30}{\kelvin}, diffusion reactions may produce COMs. However, unless the diffusion network is significantly changed, the observations towards TMC-1 still cannot be properly explained by diffusion chemistry alone.
\begin{table*}
\centering
\caption{Abundances of COMs from observations and best fit parameters of the explosion and diffusion models.}
\begin{tabular}{lccc}
\hline
Species & Observed Abundance & Explosion Model & Diffusion Model\\
\hline
CH$_3$OH & \num{6e-9} & \num{8.1e-7}& \num{4.6e-9} \\
CH$_3$CHO & \num{5.5e-10} & \num{2.9e-7}& \num{2.2e-7}\\
HCOOCH$_3$ & \num{1.6e-10} & \num{1.7e-11}& \num{3.2e-15} \\
CH$_3$OCH$_3$ & \num{1.9e-10} & \num{4.2e-15}& \num{1.1e-15} \\
CH$_3$CHCH$_2$ & \num{4e-9} & \num{2.4e-16} & \num{4.6e-22} \\
\hline
\end{tabular}
\label{table:abunds}
\end{table*}
\section{Conclusion}
Explosions of the dust grain ice mantles through the build up of radicals in the ice were added to UCLCHEM, creating a self-consistent gas-grain chemical model with explosions. These explosions cause short-lived (\SI{100}{\nano\second}) phases of high-density, high-temperature gas in which three body reactions can occur. The ability of the model to reproduce observations of a dark molecular cloud was evaluated, with a particular focus on complex organic molecules.\par
It was found that, despite the regular enrichment of the gas phase with ice mantle species, many simple species observed in TMC-1 were well described by the model. The majority of species had model abundances within an order of magnitude of the observed abundances and the exceptions were molecular ions which are also challenging to reproduce in models without explosions. It was also possible to conclude that explosions become more significant when the cosmic ray ionization rate is increased.\par
However, the explosion model could not reproduce the observed abundances of COMs. The abundances of those that formed efficiently on the dust grains were far larger than observed due to the regular release of the ice mantles into the gas phase. Two destruction routes of CH$_3$OH were introduced to the explosion model but it was found that reactions during the explosion were not efficient enough to have a great effect.\par
The low efficiency of the reactions during the explosions, short explosion timescale and small abundance of parent species combined to give low abundances of the other COMs in the model. In the case of CH$_3$CHCH$_2$, the reaction rates are experimentally measured and so this failing of the model represents a major flaw. HCOOCH$_3$ was the most abundant of the under-produced COMs, though it formed in the post explosion gas phase from species released by the explosions.\par
Overall, this work shows that, based on our current understanding of the chemical network, it is unlikely that ice mantle explosions contribute significantly to the chemical composition of dark molecular clouds. The explosion model produces simple species equally well to a standard UCLCHEM model but underproduces most COMs and overproduces CH$_3$OH and CH$_3$CHO.\par
This poses a challenge as the models that included surface reactions but no explosions were similarly unable to match the observations. One solution may be found through laboratory measurements. The models of both processes have a large number of uncertain parameters and an improved agreement between the models and observations may be obtained if these are constrained. Alternatively, another formation process may be invoked for COM formation in cold gas. For example, the collision of dust grains in turbulent gas may lead to the synthesis of complex species \citep{Cassone2018} or cosmic rays may produce suprathermal molecules in ice mantles that can overcome reaction energy barriers to produce complex species \citep{Shingledecker2018OnModels}. \par
\acknowledgments
The authors thank the referees for their constructive comments which improved this manuscript. JH, SV and JMCR acknowledge funding from the STFC grant ST/M001334/1. NB and DS thank STFC for financially supporting their visit to UCL in July 2016.
\section{Introduction}
\label{sec:intro}
With the current power of machine learning, it is possible to discover information about the items in a training set with high accuracy: for example, \citet{fredrikson-etal:2015:CCS} showed that a model inversion attack, using information about prediction confidence from machine learning APIs, could accurately reconstruct images from a facial recognition training set; \citet{zhu-etal:2019:NeurIPS} showed that the same was possible using information only from gradients in the training process, for various well-known computer vision datasets.
Because of this possibility, Differential Privacy (DP) \citep{dwork-roth:2014} has been applied to the training of deep neural networks, in particular to Stochastic Gradient Descent (SGD) in the form of DP-SGD, initially proposed by \citet{song-etal:2013}. DP-SGD in its original form applies $\epsilon$-DP noise to the gradient vectors within each batch, thereby providing an $\epsilon$ guarantee over training datasets. Song et al. noted that the noise introduced by a DP mechanism impacted SGD performance significantly, but later developments have improved its performance: for example, \citet{abadi-etal:2016:CCS} proposed a Gaussian-based mechanism with a moments accounting method for tighter bounds on the privacy budget;
in the space of language models, \citet{mcmahan-etal:2018:ICLR} showed how to use DP for user-level privacy at the cost of increased computation rather than decreased utility;
also in that space, \citet{li-etal:2022:ICLR} showed that operating in very different regions of the hyperparameter space relative to non-private models, together with a new `ghost clipping' technique on gradients, could lead to strong performance of large language models under DP-Adam, an extension of DP-SGD.
Nevertheless, there is still generally a gap in performance between non-private and private models.
DP-SGD is based on the standard version of DP as defined in \citet{dwork-roth:2014}. That is, it provides guarantees wrt adjacent training datasets (i.e.\ differing by one element, whether example-level, user-level or other). The most popular noise-adding mechanism used in DP-SGD is the Gaussian mechanism, introduced by \citet{abadi-etal:2016:CCS}, which applies Gaussian noise to the gradients of deep learning models during training.
However, this Gaussian noise is
isotropic: that is, the noise is equally likely to point in any direction in the high-dimensional space of the deep learning model gradients. In contrast, we might expect that the utility of gradient outputs (and therefore the model) would be better served by a mechanism which is designed to preserve the \emph{direction} of the gradients. Such mechanisms arise in the context of \emph{metric differential privacy} or $d$-privacy~\cite{chatzikokolakis-etal:2013:PETS}, a generalisation of differential privacy in which the notion of adjacency is relaxed to a notion of privacy within a radius defined by a metric $d$. Metric DP generalises both standard DP and local DP, with the former recoverable using Hamming metric on datasets as the metric and the latter using the Discrete metric on individual data points. In the DP-SGD context, \textrm{metric DP}\xspace can allow the definition of potentially better mechanisms based on metrics other than the implicit Hamming metric of standard DP.
Guided by \textrm{metric DP}\xspace, a natural alternative mechanism to apply to gradients is one which preserves angular distance. Recently, one such \textit{directional privacy} mechanism has been developed by \citet{weggenmann-kerschbaum:2021:CCS}, who applied the idea to recurrent temporal data in the context of a dataset of sleep recordings. \citet{weggenmann-kerschbaum:2021:CCS} provide two novel DP mechanisms for their directional privacy, based on the von Mises-Fisher and Purkayastha distributions. Importantly, these mechanisms provide $\epsilon d$-privacy guarantees rather than $(\epsilon, \delta)$-DP, which means that their composition properties are more straightforward ($d$-privacy composition follows standard DP composition) and they do not require complex reasoning using a moments accountant~\citep{mironov:2017:CSF}.
The \textbf{key idea} in this paper is to define analogous mechanisms to apply to gradients in the context of DP-SGD, so that with high likelihood a reported gradient is close in direction to the original gradient and with diminishing likelihood directions further away. The aim of the present paper is to show that these kinds of directional privacy mechanisms applied to deep learning training can have less impact on model performance (because the application of noise can be more targeted) while providing $\epsilon d$-privacy guarantees.
In this paper, we define a model \textsc{DirDP-SGD}\xspace and corresponding privacy mechanism for applying directional privacy to gradients in deep learning training (\S\ref{sec:model}). This mechanism comes with a (metric) DP guarantee; however, as we discuss, it is not straightforward to compare the privacy budgets of the Gaussian mechanism of DP-SGD with \textsc{DirDP-SGD}\xspace. We therefore provide experimental comparisons of both privacy and utility of \textsc{DirDP-SGD}\xspace and DP-SGD (\S\ref{sec:exper}), where the experimental evaluation of privacy is based on recent methods that permit the reconstruction of training set data based on gradients during training \citep{zhu-etal:2019:NeurIPS,geiping-etal:2020:NeurIPS,wei-etal:2020:ESORICS}. We show (\S\ref{sec:results}) that \textsc{DirDP-SGD}\xspace performs notably better on some major datasets for comparable levels of defence against reconstruction attacks.
Our contributions in this paper, then, are:
\begin{itemize}
\item applying for the first time a metric DP mechanism based on angular distance --- via the von Mises-Fisher distribution --- to
use as an alternative to Gaussian noise in training via Stochastic Gradient Descent in deep learning;
\item demonstrating that this provides $\epsilon d$-privacy rather than $(\epsilon, \delta)$-privacy for the training as a whole; and
\item showing that on major datasets, this outperforms Gaussian noise in defending against gradient-based reconstruction attacks.
\end{itemize}
\section{Related Work}
\label{sec:lit-rev}
In this section, we review relevant work on the use of DP in deep learning (\S\ref{sec:lit-rev-DP-deep}); on \textrm{metric DP}\xspace, including its own intersections with deep learning (\S\ref{sec:lit-rev-dpriv}); and, because we carry out an empirical evaluation of privacy, on gradient-based reconstruction attacks (\S\ref{sec:attack}).
\subsection{DP in Deep Learning}
\label{sec:lit-rev-DP-deep}
Neural networks can be victims of several types of attacks, like membership inference \citep{Rahman2018MembershipIA, Mukherjee2021privGANPG}, model stealing \citep{Yu2020CloudLeakLD} and data reconstruction \citep{zhu-etal:2019:NeurIPS, DBLP:journals/corr/abs-2001-02610, geiping-etal:2020:NeurIPS, wei-etal:2020:ESORICS}. This motivates the need for privacy guarantees to protect neural networks, while keeping their utility for the task they are trained to deal with.
\citet{song-etal:2013} proposed Differentially Private Stochastic Gradient Descent (DP-SGD), which first brought DP to the training of gradient-descent models. DP-SGD works by adding calibrated noise in the gradients during training, before updating the parameters.
This was followed by work that looked at providing efficient algorithms and tightening error bounds \citep[for example]{bassily-etal:2014:FOCS}, so that the addition of noise would not degrade utility to impractical levels. A key work in this direction was made by \citet{abadi-etal:2016:CCS}, who introduced a technique to keep track of the privacy budget, called the Moments Accountant, specifically for the Gaussian mechanism.
Afterwards, several papers studied the effect of DP in deep learning in other domains, such as NLP \citep{mcmahan-etal:2018:ICLR}, and in applications like Generative Adversarial Networks \citep{8636556, torkzadehmahani2019dp}. Recent work has also returned to the possibility of feasibly applying DP through output perturbations \citep{lu-etal:2022:TIFS}.
The many ways in which DP has been applied in deep learning in general are beyond the scope of the present work, and we refer the reader to surveys such as \citet{gong-etal:2020}; below we focus only on DP-SGD and related methods.
In this context, the additional privacy comes with a cost, in that the noisy gradients may affect the utility of the model. Therefore, either better features may be collected or handcrafted, or even more data may be needed \citep{tramer2021differentially}. \citet{li-etal:2022:ICLR} (in NLP) and \citet{de-etal:2022} (in computer vision) also found that DP-SGD can perform well in very different regions of the hyperparameter space relative to non-private models. The architecture of the model may also play a role for the utility, with larger and pretrained models being more efficiently fine-tuned, especially with larger batch sizes \citep{li-etal:2022:ICLR, DBLP:journals/corr/abs-2108-01624}, which can be computationally demanding; \citet{li-etal:2022:ICLR} also showed how to reduce the high memory consumption for training via `ghost clipping'.
Proposals to change the DP-SGD algorithm itself have also been made, many of them relating to clipping strategies. \citet{10.1145/3447548.3467268} observed that clipping and noise addition affect underrepresented classes, making the accuracy of the model for them even lower. Thus they proposed to control the contribution of samples in a group according to the group clipping bias. \citet{10.1145/3469877.3490594} proposed to divide gradients from $m$ samples into $k$ groups. Before noise is added, the gradients in each group are clipped with a different bound, as opposed to a global bound from DP-SGD. They argue that a single global clipping bound could distort gradient information.
However, all these works in DP and deep learning have adopted isotropic noise, mostly from the Gaussian and Laplace distributions. Clipping the noisy gradients produced under these schemes limits their \emph{length}, but does not control their \emph{direction}. There is a lack of studies comparing how different noise distributions affect the privacy/utility tradeoff and how noise distributions other than isotropic ones can be used during the training of neural networks.
\subsection{Metric Differential Privacy}
\label{sec:lit-rev-dpriv}
There have been many variants of DP proposed in the literature \citep{pejo-desfontaines:2022}. In this work we adopt a relaxation of DP called metric differential privacy (hereafter metric DP), introduced by Chatzikokolakis et al. \cite{chatzikokolakis-etal:2013:PETS} and also known as generalised DP, $d$-privacy, and $d_{\mathcal{X}}$-privacy.
Metric DP is motivated by the observation that secrets which are closer together (wrt the metric $d$) are more ``indistinguishable'' than secrets which are further apart, and therefore can be made ``more private'' using the same amount of noise. Thus the level of indistinguishability between secrets can be reported over a radius $r$, rather than reporting a single $\epsilon$ value for the entire domain. Note that on bounded domains the $\epsilon$ for the domain can be recovered by choosing $r$ to be the sensitivity of the query (in the case of standard DP), or the size of the domain (in the case of local DP).
Metric DP was first applied to the problem of geo-location privacy \citep{andres2013geo}. In this application, the goal is to conceal information about an exact location (`geo-indistinguishability') while allowing the release of an approximate location, e.g. in the case of location-based service provision. In this scenario, $d$ was chosen to be the Euclidean distance and the corresponding mechanism implemented using two-dimensional Laplace noise.
Many later applications of \textrm{metric DP}\xspace have been in this kind of geolocation context, for example,
mobility tracing \citep{chatzikokolakis-etal:2014:PETS},
location data with temporal correlations \citep{xiao-xiong:2015:CCS},
mobile crowdsensing \citep{wang-etal:2018}, and
location data with non-circular distributional characteristics \citep{zhao-etal:2022:TKDE}.
In the area of deep learning in NLP, \citet{fernandes-etal:2019:POST} observed that
learned representations in $n$-dimensional space (word embeddings) could be seen as analogous to locations in geo-location privacy, and proposed a \textrm{metric DP}\xspace
mechanism for authorship privacy using the Earth Mover's distance as the metric. Work following on from that used
hyperbolic rather than Euclidean spaces for hierarchical representations \citep{feyisetan-etal:2019},
calibrated multivariate perturbations \citep{feyisetan-etal:2020},
representations for contextual rather than static language models \citep{qu-etal:2021:CIKM}, and
a variational autoencoder to perturb overall latent vectors rather than individual words \citep{weggenmann-etal:2022:WWW}.
A related application that takes a similar spatial perspective has been to k-means clustering \citep{yang-etal:2022}.
None of these are concerned with differentially private training of a deep learner in the manner of DP-SGD.
The application of \textrm{metric DP}\xspace that we draw on is not related to the existing uses in deep learning just described. In the context of providing privacy guarantees to sleep study data, \citet{weggenmann-kerschbaum:2021:CCS} apply \textrm{metric DP}\xspace to periodic data such as activity that occurs at a particular time of day or day of the week, by noting that
periodicity can be represented as a direction on a circle, and `directional noise' perturbs this direction while preserving utility.
They proposed a variety of privacy mechanisms, including variants of Laplace, plus the novel Purkayastha and von Mises-Fisher (VMF) mechanisms.
\citet{weggenmann-kerschbaum:2021:CCS} also provide practical tools, such as sampling methods for VMF that reduce a multivariate sampling problem to a univariate one in order to avoid the curse of dimensionality. In the present work we adopt the VMF mechanism to apply directional noise to gradients instead of the (isotropic) Gaussian noise more typically used in DP-SGD, which perturbs the gradient in any direction, drawing on a similar intuition that preserving the gradient directions should provide better utility.
\subsection{Gradient-based Reconstruction Attacks} \label{sec:attack}
Distributed training aims to train a neural network without centralising data. It has the benefit of not having to hold private data in a single place. It consists of multiple clients, each of which holds its own private training set. Instead of sharing the data, the clients train their neural network and exchange the gradients. However, it is still possible to reconstruct the private training data from the gradients received.
The seminal study of \citet{zhu-etal:2019:NeurIPS} discovered that, with few iterations, it is possible to recover the private data by attacking neural network architectures which are twice differentiable; their attack has subsequently been referred to as the Deep Leakage from Gradients (DLG) attack. The attacker creates dummy inputs and labels, but instead of optimising the model weights, they optimise the dummy input and labels to minimise the Euclidean distance between their gradients and the gradients received from another client. Matching the gradients transforms the fake input to be similar to the real one.
This attack was refined in further works. \citet{DBLP:journals/corr/abs-2001-02610} proposed iDLG (\emph{i} stands for \emph{improved}), which works against any differentiable network trained with cross-entropy loss over one-hot labels.
The Inverting Gradients method (IGA\xspace), from \citet{geiping-etal:2020:NeurIPS}, maximises the cosine similarity between gradients. Thus it relies on an angle-based cost function, which should be more robust than a magnitude-based one against a trained neural network (which produces gradients with smaller magnitudes). Finally, \citet{wei-etal:2020:ESORICS} study (baptised Client Privacy Leakage --- CPL) how different configurations impact the effectiveness of the attack, such as different ways of initialising the dummy data.
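To make the threat model concrete, a minimal PyTorch sketch of a DLG-style attack follows. It is an illustration of the idea only, not the exact implementation of any of the works above; the soft-label trick of passing class probabilities to the cross-entropy loss assumes PyTorch $\geq$ 1.10, and all names are ours:
\begin{verbatim}
import torch
import torch.nn.functional as F

def dlg_attack(model, true_grads, input_shape, n_classes, steps=300):
    # Optimise dummy data/labels so that their gradients match the
    # gradients observed from a victim client (cf. Zhu et al., 2019).
    x = torch.randn(1, *input_shape, requires_grad=True)
    y = torch.randn(1, n_classes, requires_grad=True)
    opt = torch.optim.LBFGS([x, y])

    def closure():
        opt.zero_grad()
        loss = F.cross_entropy(model(x), torch.softmax(y, dim=-1))
        grads = torch.autograd.grad(loss, model.parameters(),
                                    create_graph=True)
        diff = sum(((g - t)**2).sum()
                   for g, t in zip(grads, true_grads))
        diff.backward()
        return diff

    for _ in range(steps):
        opt.step(closure)
    return x.detach(), y.detach()
\end{verbatim}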
\citet{zhu-etal:2019:NeurIPS}, in proposing DLG, also proposed some suggestions for possible defences. In addition to measures like gradient quantization and compression / sparsification, it also included the addition of Gaussian noise to gradients, although not within a DP context. Recently, \citet{scheliga-etal:2022:WACV} proposed a variational bottleneck-based preprocessing module that aims to disguise the original latent feature space that is vulnerable to gradient-based reconstruction attacks, by learning a joint distribution between input data and latent representation. Like \citet{zhu-etal:2019:NeurIPS}, this also does not come with differentially private guarantees.
\section{The Privacy Model}
\label{sec:model}
A differential privacy mechanism can be described formally as a function which takes as input an element (drawn from a domain $\mathcal{X}$) and produces a randomised value drawn from some distribution over outputs $\mathcal{Y}$, satisfying the characteristic DP inequation:
\begin{equation}\label{dp_eqn}
Pr ({\mathcal M}(x))[Y] \leq e^{\varepsilon}\times Pr({\mathcal M}(x'))[Y]~,
\end{equation}
whenever $x \sim x' \in \mathcal{X}$ and $Y \subseteq \mathcal{Y}$.
Popular methods of randomisation include the Gaussian, the Laplace (when the outputs are continuous) or the Geometric (when the outputs are discrete), all of which involve the addition of noise to the input $x \in \mathcal{X}$ to produce the noisy output $y \in \mathcal{Y}$.
Metric differential privacy describes a constraint on the type of randomisation and its ability to make different inputs to the mechanism indistinguishable when comparing their outputs.
\begin{definition}\label{d1614-a}
(Metric differential privacy) \cite{chatzikokolakis-etal:2013:PETS} Let $\varepsilon{>}0$. A mechanism ${\mathcal M}$ on an (input) metric space $(S, d)$, where $S$ is a set and $d$ is a metric, and producing outputs over $\mathcal{Z}$, satisfies $\varepsilon d$-privacy, if for all $s, s'\in S$ and $Z \subseteq \mathcal{Z}$,
\[
Pr ({\mathcal M}(s))[Z] \leq e^{\varepsilon d(s, s')}\times Pr({\mathcal M}(s'))[Z]~,
\]
where $Pr ({\mathcal M}(s))[Z]$ means the probability that the output of applying mechanism $\mathcal{M}$ to $s$ lies in $Z$.
\end{definition}
Definition~\ref{d1614-a} says that when two inputs $s,s'$ differ by the amount $d(s,s')$, the mechanism can make them indistinguishable up to a ratio proportional to $e^{\varepsilon d(s, s')}$. This means that points which are farther apart are harder to make indistinguishable.
This kind of privacy definition is useful when the utility can be captured by the metric $d$. In location privacy, the utility is the approximate location which should be preserved as much as possible. Figure~\ref{f1046} depicts the 2-dimensional Laplace probability density function that can be used to implement location privacy \cite{chatzikokolakis-etal:2013:PETS} and which can be shown to satisfy $d_2$-privacy where $d_2$ is the Euclidean distance. In this case, locations that are far apart do not need to be made indistinguishable, whereas locations close together do.
In our application to deep learning we use a metric based on angular distance of vectors, which we describe in the next sections.
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{images/DirectionalPrivacy/2-D-Laplace.jpg}
\caption{Probability density function of the 2-dimensional Laplace distribution, which can be used to implement location privacy and satisfies $\varepsilon d_2$-privacy on locations.}\label{f1046}
\end{figure}
\subsection{Standard DP-SGD}
The standard DP-SGD from \citet{abadi-etal:2016:CCS} is shown in \Alg{alg:sgd_gauss}. It differs from the original stochastic gradient descent (i.e.\ without perturbation) only at lines 10 and 13, where the gradients $g_t(x_i)$ are first clipped and then perturbed using the Gaussian distribution. This is implemented essentially by adding a random perturbation to each of the components of the gradient when represented as a point in ${\mathbb R}^K$.
\begin{algorithm}[!th]
\caption{DP-SGD with Gaussian noise}\label{alg:sgd_gauss}
\begin{algorithmic}[1]
\State \textbf{Input:} Examples $\{x_1,\ldots,x_N\}$, loss function $\mathcal{L}(\theta) = \frac{1}{N} \sum_i \mathcal{L}(\theta, x_i)$. Parameters: learning rate $\eta_t$, noise scale $\sigma$, group size $L$, gradient norm bound $C$.
\State \textbf{Initialise} $\theta_0$ randomly
\For{$t \in T$}
\Comment{Take a random batch}
\State $L_t \gets $ random sample of $L$ indices from $1{\ldots}N$
\For{$i \in L_t$}
\Comment{Compute gradient vector}
\State $\mathbf{g}_t(x_i) \gets \nabla_{\theta_t} \mathcal{L}(\theta_t, x_i)$
\Comment{Clip gradient vector}
\State $\overline{\mathbf{g}}_t(x_i) \gets \nicefrac{\mathbf{g}_t(x_i)}{\max (1, \frac{\| \mathbf{g}_t(x_i)\|_2}{C}) }$
\EndFor
\Comment{Add noise}
{\color{blue} \State $\tilde{\mathbf{g}}_t \gets \frac{1}{L} \sum_i (\overline{\mathbf{g}}_t(x_i) + \mathcal{N}(0, \sigma^2))$}
\Comment{Descent}
\State $\theta_{t+1} \gets \theta_t - \eta_t \tilde{\mathbf{g}}_t$
\EndFor
\State \textbf{Output} $\theta_T$
\end{algorithmic}
\end{algorithm}
\citet{song-etal:2013} showed that by adding a noisy vector in $\mathbb{R}^K$ drawn from a distribution $\rho(z) \propto e^{-\frac{\varepsilon}{2} \lVert z\rVert }$ one obtains $\varepsilon$-DP as per Eqn~\ref{dp_eqn}. In particular, from line 13 we can compute the ``sensitivity'' of the gradient computation as the maximum difference between gradients for batches $B$, $B'$. The clipping step on line 10 ensures the gradient norm is at most $C$. Therefore the sensitivity $\Delta_g$ is calculated as:
\begin{align}\label{eqn_sensitivity}
\Delta_g &~=~ \max_{B \sim B'} \Big\lVert \frac{1}{L} \sum_i \overline{\mathbf{g}}_t(x_i) - \frac{1}{L} \sum_i \overline{\mathbf{g}}_t(x'_i) \Big\rVert_2 \nonumber \\
&~=~ \frac{2C}{L}
\end{align}
Here the maximum is taken over all pairs of gradients in $B,B'$.
The sensitivity of the final computation of weights on line 15 is then $\Delta_\theta = \frac{2C\eta_t}{L}$. By choosing $C = 1$ and observing that the scalars $\eta_t$ and $L$ satisfy a distributive law in e.g.\ the Laplace distribution~\footnote{i.e.\ if $X \sim Lap(\mu, b)$ then $\alpha X \sim Lap(\alpha \mu, \alpha b)$ for $\alpha > 0$.}, we arrive at Song et al.'s $\frac{\epsilon}{2}$ noise tuning.
\citet{abadi-etal:2016:CCS} use Gaussian noise in \Alg{alg:sgd_gauss} to arrive at an approximate DP-guarantee. ie.,
\begin{equation}\label{e1807}
Pr({\mathcal G}(B) \in Z) \leq Pr({\mathcal G}(B') \in Z) \times e^{\varepsilon} + \delta ~,
\end{equation}
where $\varepsilon \geq 0$ and $0 < \delta < 1$ are privacy parameters and $Z$ is a (measurable) set of potential outputs of (perturbed) gradients. In this case, the moments accountant method was developed by \citet{abadi-etal:2016:CCS} to produce tighter bounds for epsilon under composition.
\subsection{Directional Privacy and \textsc{DirDP-SGD}\xspace}
Gradient descent optimises the search for parameter selection that minimises the loss. Thus an alternative method of perturbing the gradients is to use randomisation that is based around perturbing the angle of deviation from the original gradient.
To give some intuition, Figure~\ref{f1406-a} illustrates how a gradient of a convex curve can be perturbed, leading to a perturbation of the descents.
\begin{algorithm}[!th]
\caption{\textsc{DirDP-SGD}\xspace with von Mises-Fisher noise}\label{alg:sgd2}
\begin{algorithmic}[1]
\State \textbf{Input:} Examples $\{x_1,\ldots,x_N\}$, loss function $\mathcal{L}(\theta) = \frac{1}{N} \sum_i \mathcal{L}(\theta, x_i)$. Parameters: learning rate $\eta_t$, noise scale $\sigma$, group size $L$, gradient norm bound $C$.
\State \textbf{Initialise} $\theta_0$ randomly
\For{$t \in T$}
\Comment{Take a random batch}
\State $L_t \gets $ random sample of $L$ indices from $1{\ldots}N$
\For{$i \in L_t$}
\Comment{Compute gradient vector}
\State $\mathbf{g}_t(x_i) \gets \nabla_{\theta_t} \mathcal{L}(\theta_t, x_i)$
\Comment{Scale gradient vector}
{\color{blue}\State $\overline{\mathbf{g}}_t(x_i) \gets \nicefrac{\mathbf{g}_t(x_i)}{\frac{\| \mathbf{g}_t(x_i)\|_2}{C} }$}
\EndFor
\Comment{Add noise}
{\color{blue} \State $\tilde{\mathbf{g}}_t \gets \frac{1}{L} \sum_i \mathcal{V}(\sigma, \overline{\mathbf{g}}_t(x_i))$}
\Comment{Descent}
\State $\theta_{t+1} \gets \theta_t - \eta_t \tilde{\mathbf{g}}_t$
\EndFor
\State \textbf{Output} $\theta_T$
\end{algorithmic}
\end{algorithm}
Given two vectors $v, v'$ in $\mathbb{R}^K$, we define the angular distance between them as $d_L(v, v') = \arccos\left(\frac{v^\top v'}{\lVert v\rVert \, \lVert v'\rVert}\right)$. When $v,v'$ are, for example, vectors on the unit $K$-dimensional sphere, then $d_L$ becomes a metric. Following Weggenmann et al.\ \cite{weggenmann-kerschbaum:2021:CCS}, we can use this to define \emph{directional privacy}.
\begin{definition}\label{d1614}
(Directional Privacy) \cite{weggenmann-kerschbaum:2021:CCS} Let $\epsilon{>}0$. A mechanism ${\mathcal M}$ on $\mathbb{R}^K$ satisfies $\varepsilon d_L$-privacy, if for all $v, v'$ and $Z \subseteq \textrm{supp}\mathcal{M}$,
\[
Pr({\mathcal M}(v))[Z] \leq e^{\varepsilon d_L(v, v')}\times Pr({\mathcal M}(v'))[Z]~.
\]
\end{definition}
Definition~\ref{d1614} says that when the mechanism ${\mathcal M}$ perturbs the vectors $v, v'$, the probabilities that the perturbed vectors lie within a (measurable) set $Z$ differ by a factor of $e^{\varepsilon d_L(v, v')}$. This means that the smaller the angular distance between the vectors $v,v'$ the more likely it will be that these probabilities will be almost the same, providing a high degree of indistinguishability.
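Concretely, the angular distance is straightforward to compute; a trivial sketch follows, where the clipping guards against floating-point round-off:
\begin{verbatim}
import numpy as np

def angular_distance(v, w):
    # d_L(v, w) = arccos( v.w / (||v|| ||w||) ), in radians
    cos = np.dot(v, w) / (np.linalg.norm(v) * np.linalg.norm(w))
    return np.arccos(np.clip(cos, -1.0, 1.0))
\end{verbatim}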
The von Mises-Fisher (VMF) mechanism perturbs an input vector $\mu$ on the $K$-dimensional unit sphere as follows:
\begin{definition}\label{d1526}
The VMF mechanism on the $K$-dimensional unit sphere, is given by the density function:
\[
\mathcal{V}(\varepsilon, \mu)(x) ~ = ~ C_K(\varepsilon) e^{\varepsilon \mu^T x}~,
\]
where $\varepsilon >0$ and $C_K(\varepsilon)$ is the normalisation factor.
\end{definition}
\citet{weggenmann-kerschbaum:2021:CCS} showed that the VMF mechanism of Def.~\ref{d1526} satisfies $\varepsilon d_L$-privacy.
We can use Def.~\ref{d1526} to design a new algorithm for DP-SGD based on the VMF distribution, displayed in \Alg{alg:sgd2}. Note that unlike Gaussian noise, the VMF mechanism generates a noisy vector based on an input vector (line 13). Secondly, since the VMF guarantee is for vectors on a $K$-dimensional sphere, we scale gradients to a constant length $C$ rather than clipping them (line 10). This also ensures that the lengths of the gradients do not leak privacy. Observe also that $C$ does not affect the privacy guarantee on angles.
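A sketch of the noise step of \Alg{alg:sgd2} for a single gradient is given below. \citet{weggenmann-kerschbaum:2021:CCS} give dedicated VMF samplers; here we use SciPy's \texttt{vonmises\_fisher} (available from SciPy 1.11) purely for illustration, with the concentration parameter playing the role of the privacy parameter:
\begin{verbatim}
import numpy as np
from scipy.stats import vonmises_fisher  # SciPy >= 1.11

def vmf_perturb(grad, eps_v, C=1.0):
    # Scale the gradient to length C, then draw a direction from a
    # VMF distribution centred on it with concentration eps_v.
    mu = grad / np.linalg.norm(grad)
    noisy_dir = vonmises_fisher(mu=mu, kappa=eps_v).rvs()
    return C * noisy_dir
\end{verbatim}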
It turns out that \textrm{metric DP}\xspace satisfies useful compositional properties \citep{fernandes:22:CSF} making its guarantees carry over to algorithms that apply it, and in particular to Algorithm \ref{alg:sgd2}. We write ${\mathcal VM}(B)$ for the VMF mechanism applied to the vectors in the batch $B$, and then averaged at Line 13 in \Alg{alg:sgd2}. The following lemma shows that this step satisfies directional privacy over batches.
\begin{theorem}\label{l1647}
Denote by $B = [v_1, \dots v_n]$ a batch of vectors (gradients). If batch $B'$ differs from $B$ in at most one component vector, then \Alg{alg:sgd2} satisfies $\varepsilon d_L$-privacy wrt.\ batches, namely that:
\begin{equation}\label{e1720}
Pr({\mathcal VM}(B) \in Z) \leq Pr({\mathcal VM}(B') \in Z) \times e^{\epsilon d_L(B, B')} ~,
\end{equation}
where $Z$ is a (measurable) set of vectors, $Pr({\mathcal VM}(B) \in Z)$ is the probability that the output vector lies in $Z$ and (abusing notation) $d_L(B, B') = \max_{B \sim B'} d_L(v, v')$ is the maximum angular distance between all pairs $v\in B, v'\in B'$.
\begin{proof}
The difference between the mechanism in Definition~\ref{d1526} and its application in \Alg{alg:sgd2} is that Definition~\ref{d1526} is applied to every vector in $B,B'$, and then averaged. The constraint Equation~\ref{e1720} follows because this process is equivalent to the parallel composition of Definition~\ref{d1526} applied across a batch followed by postprocessing to form the average. The logical properties of \textrm{metric DP}\xspace \citep{fernandes:22:CSF} ensure that the parallel composition preserves metric differential privacy, as does post-processing.
\end{proof}
\end{theorem}
Using Eqn~\ref{eqn_sensitivity} and following \citet{song-etal:2013},
we can tune the noise to $\frac{\epsilon}{2}$
to achieve an overall $\epsilon d_L$-privacy guarantee.
\subsection{Notion of Theoretical Guarantees and Comparison in Practice}
\label{sec:dp-comparison}
At this point it is not clear how directly we can compare the two privacy guarantees for \Alg{alg:sgd_gauss} and \Alg{alg:sgd2}. As mentioned above the guarantee for \Alg{alg:sgd_gauss} includes a $\delta>0$ parameter --- this means that there is a risk that the perturbation will leak more than for an $\epsilon$-private mechanism, and therefore may provide reduced protection
against a threat of reconstruction.
Moreover, previous work~\citep{Chatzi:2019} has shown that comparing epsilons between different privacy mechanisms can be problematic.
Due to these differences it is difficult to compare these two notions, and in particular the $\varepsilon$-parameters in both constraints cannot be compared at all. This is because of the nature of the randomisation (i.e.\ Gaussian versus VMF) which provides incomparable guarantees (metric DP versus approximate DP). To avoid confusion we use $\varepsilon_G$ for the privacy parameter used for \Alg{alg:sgd_gauss} and $\varepsilon_V$ for \Alg{alg:sgd2}.
For these reasons, we evaluate the privacy afforded by these different mechanisms by \textbf{comparing the ability of these two methods of randomisation to defend against gradient-based reconstruction attacks}, which is the primary privacy vulnerability of concern in our deep learning context. Our privacy evaluation therefore avoids the difficult task of comparing epsilons. We simultaneously empirically compare each mechanism's utility on a classification task. Although \Alg{alg:sgd_gauss} has been widely used, \Alg{alg:sgd2} is a novel application of the VMF mechanism, and one of our tasks (detailed below) is to determine ranges of the parameter $\epsilon_V$ that provide a good trade-off between defending against the threat of reconstruction versus allowing a set of parameters to be determined that provide an acceptable level of utility.
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{images/DirectionalPrivacy/PerturbedGradients.png}
\caption{Perturbed gradients: how gradients are perturbed during DP-SGD. The red line is the unperturbed gradient, and the dotted blue lines are perturbations of angular distance $A$.}\label{f1406-a}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{images/DirectionalPrivacy/PerturbedDescents.png}
\caption{Perturbed descents. Each gradient from Figure~\ref{f1406-a} corresponds to a descent in SGD: the unperturbed descent would update the initial point (green circle) to the red circle, while the perturbed descents (blue circles) are chosen with equal probability and lead either to a smaller or larger descent. Depending on the curvature of the particular loss function (here represented as a parabola), this can lead to a loss of utility in finding the best model parameters.}\label{f1406-b}
\end{figure}
\subsection{Implementing Directional Privacy for Gradients}
We use Opacus,\footnote{\url{https://opacus.ai}} introduced by \citet{yousefpour2021opacus}, as a starting point for the experiments. The library, based on PyTorch \citep{NEURIPS2019_9015}, implements DP-SGD.
From an implementation view, there are three main components: (i) the minibatches are built by using Poisson sampling: each sample from the training dataset is picked with a certain probability $p$, which means that a sample may appear zero times, once, or more than once in an epoch; (ii) the sample gradients are capped to avoid a very large individual contribution from one sample; (iii) noise is added to the gradients. Only Gaussian noise is supported.
We extend Opacus to work with the VMF distribution. Component (i) is left unchanged. For component (ii), we cap gradients, rescaling each one to norm exactly $C$. This means that, instead of clipping the gradients according to the original formulation of \citet{abadi-etal:2016:CCS}, from
$$ \overline{\mathbf{g}}_t \left(x_i\right) \gets \mathbf{g}_t\left(x_i\right) / \max \left(1, \frac{\| \mathbf{g}_t(x_i)\|_2}{C}\right) $$
\noindent we remove the $\max$ operator
$$ \overline{\mathbf{g}}_t \left(x_i\right) \gets \mathbf{g}_t\left(x_i\right) / \frac{\| \mathbf{g}_t(x_i)\|_2}{C} $$
as described in \Alg{alg:sgd2}.
Finally, for component (iii), we switch the Gaussian noise for the Von Mises-Fisher one.
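As an illustration of the modified step (ii) applied to a batch of per-sample gradients (a sketch of the idea, not Opacus' internal code):
\begin{verbatim}
import torch

def cap_per_sample_grads(g, C=1.0):
    # g: tensor of shape (batch, ...); rescale every per-sample
    # gradient to norm exactly C (cf. Algorithm 2, line 10).
    flat = g.flatten(start_dim=1)
    norms = flat.norm(dim=1).clamp_min(1e-12)  # avoid divide-by-zero
    scale = C / norms
    return g * scale.view(-1, *([1] * (g.dim() - 1)))
\end{verbatim}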
\section{Experimental Setup}
\label{sec:exper}
Most works evaluating DP in deep learning report performance on some task (typically, classification accuracy) for utility, but for the level of privacy, they report only on the value of $\epsilon$ (and $\delta$ if relevant).
As we note in \S\ref{sec:dp-comparison}, it is not possible to compare epsilons across standard DP and \textrm{metric DP}\xspace. We therefore take an empirical approach to calibrating the respective epsilons, $\epsilon_G$ and $\epsilon_V$.
It is well known that defining an operational interpretation of $\epsilon$ is a challenge, with the meaning of $\epsilon$ being contextually dependent \citep{lee-clifton:2011,dwork-etal:2019}. We therefore choose a framework that directly relates to the approach taken by DP-SGD and our \textsc{DirDP-SGD}\xspace, which obfuscates gradients: we compare their success against gradient-based reconstruction attacks \citep{zhu-etal:2019:NeurIPS,geiping-etal:2020:NeurIPS}. In these attacks, the goal is to reconstruct images solely from their gradients. Obfuscating gradients successfully should to some extent then prevent this reconstruction. (\citet{zhu-etal:2019:NeurIPS} do this in proposing several non-DP defences against their own attack.)
For utility, as is typically done, we compare the accuracy of different neural networks in the task of classification when they are trained with DP guarantees against the baseline without privacy guarantees. For privacy assessment, we evaluate how each type of noise defends against reconstruction attacks.
\subsection{\textsc{DirDP-SGD}\xspace: $\epsilon_V$}
\label{sec:exper-eps}
Unlike Gaussian noise (\S\ref{sec:exper-baseline}), there is no prior work with VMF to use as a guide for selecting an appropriate $\epsilon_V$. Based on preliminary experiments, we found a range of changes to utility in $\epsilon_V \in \{ 5, 10, 50, 500 \}$; we also included $\epsilon_V = 300,000$, which hardly shifts gradients, to investigate the effects of negligible noise.
\subsection{Datasets}
We use classification tasks from the image processing domain, as in many works, e.g. \citet{abadi-etal:2016:CCS} and \citet{zhu-etal:2019:NeurIPS}.
The \textbf{MNIST} dataset\footnote{\url{http://yann.lecun.com/exdb/mnist/}} \citep{deng2012mnist} contains 70,000 images of handwritten digits between 0 and 9. The images are 28$\times$28 pixels in greyscale. The training set contains 60,000 instances and the test set has 10,000 instances.
The \textbf{CIFAR} dataset\footnote{\url{https://www.cs.toronto.edu/~kriz/cifar.html}} \citep{Krizhevsky09learningmultiple} contains 60,000 colour images of 32$\times$32 pixels in each of the 3 channels. It has two versions: CIFAR10, in which each image belongs to one out of 10 classes, and CIFAR100, which contains 100 classes. The training set contains 50,000 instances and the test set has 10,000 instances.
\textbf{LFW}\footnote{\url{http://vis-www.cs.umass.edu/lfw/}}, or Labeled Faces in the Wild dataset \citep{Huang07labeledfaces}, has 13,233 images of 5,749 people collected from the internet. It is a particularly interesting dataset because it is composed of people's faces, which is something that one may wish to hide to preserve their identity, and consequently has been the focus of previous high-profile work on privacy leakage \citep[for example]{fredrikson-etal:2015:CCS}. The images have 250x250 pixels, some in greyscale but most are coloured. The standard task, which we also adopt, is identity recognition; the standard training and test sets for this contain 9,525 and 3,708 instances respectively. Given its large number of classes, many with few instances, we follow \citet{wei-etal:2020:ESORICS} to downsize the dataset.
In doing this, we kept only the classes that contain at least 14 objects, which reduced the number of classes to 106 and the number of samples to 3,737 \citep{wei-etal:2020:ESORICS}. Even after this, there is a strong imbalance amongst the classes, with some having dozens of members and others having hundreds. We therefore under-sampled the majority classes by randomly picking objects so that all classes end up with 14 samples, reducing the dataset even further, to $14 \times 106 = 1{,}484$ instances. Finally, we split the resulting dataset into training (80\%, or 1,113 samples) and test (20\%, or 371 samples) sets.
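For reproducibility, a minimal sketch of this downsizing procedure; the function and variable names are ours:
\begin{verbatim}
import numpy as np
from collections import Counter

def downsize_lfw(labels, min_count=14, seed=0):
    # Keep classes with at least `min_count` samples, then randomly
    # under-sample every kept class to exactly `min_count` samples.
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    keep = [c for c, n in Counter(labels.tolist()).items() if n >= min_count]
    idx = []
    for c in keep:
        members = np.flatnonzero(labels == c)
        idx.extend(rng.choice(members, size=min_count, replace=False))
    return np.sort(np.array(idx))  # indices of the retained samples
\end{verbatim}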
\subsection{Primary Baseline}
\label{sec:exper-baseline}
In terms of deep learning architectures to investigate, we broadly follow the setup of \citet{scheliga-etal:2022:WACV}.
The architectures of neural networks we use are \textbf{LeNet}, the original convolutional neural network (CNN) proposed by \citet{bb72cefb6cc34854965b753d1ce10cbd}, and a simple Multilayer Perceptron (\textbf{MLP}) with 2 layers, which are feedforward neural networks \citep{GoodBengCour16}.
\citet{scheliga-etal:2022:WACV} include MLPs as they note that \citet{geiping-etal:2020:NeurIPS} provide a theoretical proof that in fully connected networks, their IGA\xspace attack can uniquely reconstruct the input to the network from the network’s gradients. LeNet is a prototypical architecture for CNNs.
The two baselines in terms of privacy protection for these architectures are (i) the state-of-the-art DP-SGD using \textbf{Gaussian noise} and (ii) the neural networks without any DP guarantees. We compare their performance in terms of accuracy and susceptibility to attacks against our \textsc{DirDP-SGD}\xspace.
For the Gaussian noise, there are no standard guidelines on the range to test, as $\epsilon$ does not have an easily interpretable meaning, with no universally agreed-upon point for what counts as `too large'; \citet{dwork-etal:2019} note that ``while all small $\epsilon$ are alike, each large $\epsilon$ is
large after its own fashion, making it difficult to reason about them.'' As a common range for all of our baseline / dataset combinations, we consequently select values going from `small' ($\epsilon_G \leq 1$) to the common largest value of 8 that a number of works \citep[for example]{abadi-etal:2016:CCS,de-etal:2022} have selected over the years. We also added $\epsilon_G = 80$, a very small amount of noise that is outside what is generally considered acceptable privacy, for calibration purposes.
\subsection{Data Reconstruction Attacks}
We investigate how the noise distributions can defend against attacks during distributed learning, as explained in \S\ref{sec:attack}. We employ the DLG attack from \citet{zhu-etal:2019:NeurIPS} and the Inverting Gradients method from \citet{geiping-etal:2020:NeurIPS}. The reasons behind these choices are that (i) DLG is the first reconstruction attack based on gradient sharing, and well-established as a baseline; and (ii) Inverting Gradients is based on an angular cost function, so we assess whether an angular-based noise like our \textsc{DirDP-SGD}\xspace can defend against it. Next, we explain each of these attacks in more detail.
\paragraph{DLG} An attacker receives the gradients from another participant. Instead of honestly training its neural network, the attacker maliciously uses the gradients to recover the private data that was used to generate them.
Following the notation from \citet{zhu-etal:2019:NeurIPS}, let $\nabla W$ be the gradients received, $F$ be a twice-differentiable neural network, $W$ be its parameters, and (\textbf{x}, \textbf{y}) be the (private) training data and the corresponding (private) labels. The attacker creates dummy \textbf{x'}, \textbf{y'} (e.g.\ by sampling from a Gaussian distribution). The dummy data are passed through $F$, and backpropagation taking the derivatives w.r.t.\ $W$ yields the dummy gradients $\nabla W'$:
\begin{equation}\label{dlg_w}
\nabla W' = \frac{\partial \ell(F(\mathbf{x'}, W), \mathbf{y'}) }{\partial W}
\end{equation}
The private training data can be recovered by optimising
\begin{equation}\label{dlg_obj}
\mathbf{x}'^*, \mathbf{y}'^* = \argmin_{\mathbf{x}', \mathbf{y}'} \|\nabla W' - \nabla W\|^2
\end{equation}
More specifically, the attacker takes the difference $\|\nabla W' - \nabla W\|^2$, which is differentiable w.r.t.\ (\textbf{x'}, \textbf{y'}). Therefore, \textbf{x'}, \textbf{y'} are optimised by
\begin{equation}\label{dlg_update}
\mathbf{x}', \mathbf{y}' = \mathbf{x}' - \eta \nabla_{\mathbf{x}'} \|\nabla W' - \nabla W\|^2,\; \mathbf{y}' - \eta \nabla_{\mathbf{y}'} \|\nabla W' - \nabla W\|^2
\end{equation}
\noindent where $\eta$ is the learning rate (usually a value in the range $(0, 1]$).
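A minimal sketch of the DLG loop follows. We use L-BFGS, as in the reference implementation, in place of the plain update of Equation \ref{dlg_update}; the model, label handling and hyperparameters are illustrative assumptions:
\begin{verbatim}
import torch

def dlg_attack(model, true_grads, x_shape, num_classes,
               steps=1000, lr=0.1):
    x_dummy = torch.randn(x_shape, requires_grad=True)
    y_dummy = torch.randn(x_shape[0], num_classes, requires_grad=True)
    opt = torch.optim.LBFGS([x_dummy, y_dummy], lr=lr)

    for _ in range(steps):
        def closure():
            opt.zero_grad()
            pred = model(x_dummy)
            # cross-entropy against the softmaxed dummy labels
            loss = -(y_dummy.softmax(-1) * pred.log_softmax(-1)).sum(1).mean()
            dummy_grads = torch.autograd.grad(
                loss, model.parameters(), create_graph=True)
            # Euclidean matching term between dummy and received gradients
            diff = sum(((dg - tg) ** 2).sum()
                       for dg, tg in zip(dummy_grads, true_grads))
            diff.backward()
            return diff
        opt.step(closure)
    return x_dummy.detach(), y_dummy.detach()
\end{verbatim}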
\paragraph{Inverting gradients (IGA\xspace)} In this attack, \citet{geiping-etal:2020:NeurIPS} note that the cost function in Equation \ref{dlg_obj} optimises a Euclidean matching term, and that the magnitude of a gradient holds information regarding the stage of the training (the gradients tend to be smaller for trained networks). The direction of the gradients can also capture important information, and therefore the authors change Equation \ref{dlg_obj} to a function based on angles by adopting the cosine distance:
\begin{equation}\label{invgr_obj}
\mathbf{x}'^*, \mathbf{y}'^* = \argmin_{\mathbf{x}' \in [0, 1]^n} 1 - \frac{ \langle \nabla W', \nabla W \rangle }{\|\nabla W'\| \|\nabla W\|} + \alpha \, TV(\mathbf{x}')
\end{equation}
\noindent with the additional constraint that the values in the input data must be normalised to fit the space of $[0, 1]$.
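For concreteness, a sketch of this objective; the $\alpha$ weight, the anisotropic form of the total variation term, and the tensor shapes are illustrative assumptions:
\begin{verbatim}
import torch

def iga_loss(dummy_grads, true_grads, x_dummy, alpha=1e-4):
    # Cosine distance between the two gradient collections.
    dot = sum((dg * tg).sum() for dg, tg in zip(dummy_grads, true_grads))
    n1 = torch.sqrt(sum((dg ** 2).sum() for dg in dummy_grads))
    n2 = torch.sqrt(sum((tg ** 2).sum() for tg in true_grads))
    cos_dist = 1.0 - dot / (n1 * n2)
    # Total variation prior on the dummy image (last two axes are H, W).
    tv = (x_dummy[..., :, 1:] - x_dummy[..., :, :-1]).abs().sum() \
       + (x_dummy[..., 1:, :] - x_dummy[..., :-1, :]).abs().sum()
    return cos_dist + alpha * tv
\end{verbatim}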
We evaluate how to defend against both attacks using Gaussian and von Mises-Fisher noise in untrained models. When injecting Gaussian noise, we set $\mu = 0$ and $\sigma$ according to each of the $\epsilon$ values used for the utility experiments described in \S\ref{sec:utility}. When defending using the von Mises-Fisher noise, we simply substitute the VMF mechanism from Defn~\ref{d1526} for the Gaussian.
\subsection{Evaluation Measures}
We evaluate \textsc{DirDP-SGD}\xspace in terms of utility and privacy. For utility, we measure the impact that different DP strategies have on the accuracy of different neural networks architectures on classification tasks over different datasets. For the level of privacy achieved, we assess how \textsc{DirDP-SGD}\xspace performs in defending against reconstruction attacks on gradients compared to baselines. The metrics we use are the following:
\begin{itemize}
\item \textbf{Accuracy} on classification tasks: we evaluate the impact of the privacy mechanism on model performance according to different values of $\epsilon$ and the absence of privacy guarantees. This is in line with previous works \citep{abadi-etal:2016:CCS, li-etal:2022:ICLR}.
As we observed earlier, the LFW dataset and associated task are particularly challenging relative to the other two, due to the higher number of classes and a smaller number of instances. Therefore, we might expect low accuracies, which the added noise might reduce to near-zero levels, obscuring differences among different types and levels of noise. For this dataset, then, in addition to standard accuracy we also report the Top-5 and Top-10 accuracy rates, where Top-$k$ accuracy deems a classification as correct if the true label is amongst any of the model's top $k$ classes with the highest confidence rate (so standard accuracy is the same as Top-1 accuracy).
\item \textbf{Structural similarity index measure (SSIM)} \citep{1284395} compares any two signals and returns a value in the range $\left[ -1, 1 \right]$. It compares pixel intensities that have been normalised for luminance and contrast; the work that proposed it demonstrated that it correlates well with human judgements of reconstruction quality. We use it to measure how close the reconstructed images are to the originals; it has previously been used in this way for the specific quantitative evaluation of gradient-based reconstructions \citep{wei-etal:2020:ESORICS} and (so far non-DP) defences against them \citep{scheliga-etal:2022:WACV}.
While there are some complexities in interpreting SSIM scores \citep{nilsson-akenine:2020}, identical images score 1, completely dissimilar images score 0, and negative scores occur rarely and only in unusual contexts.
\item \textbf{Mean Squared Error (MSE)} measures the distance between a reconstructed image and its original counterpart by averaging the squares of the pixel-wise differences between the two images. We also use it to measure how similar the reconstructed images are to the original ones. It has likewise been used along with SSIM in quantitative evaluations of gradient-based reconstruction attacks and their defences; a minimal computation sketch for both metrics follows this list.
\end{itemize}
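The following sketch shows how both metrics can be computed with \texttt{scikit-image}; the \texttt{channel\_axis} argument assumes version 0.19 or later, and images are assumed to be floats in $[0,1]$:
\begin{verbatim}
import numpy as np
from skimage.metrics import structural_similarity, mean_squared_error

def reconstruction_scores(original: np.ndarray, reconstructed: np.ndarray):
    # channel_axis handles colour images and is None for greyscale.
    ssim = structural_similarity(
        original, reconstructed, data_range=1.0,
        channel_axis=-1 if original.ndim == 3 else None)
    mse = mean_squared_error(original, reconstructed)
    return ssim, mse
\end{verbatim}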
\section{Results}
\label{sec:results}
In this section, we present results for utility and privacy experiments. More details about hyperparameters and computing environments are described in Appendix \ref{app:hyperparameters}.
\subsection{Classification Results}\label{sec:utility}
We compare the models after they are trained with and without DP guarantees. Table \ref{tab:utility_acc} shows the accuracy for each setting for MNIST, CIFAR10 and CIFAR100; Table~\ref{tab:top5_acc_lfw} shows accuracies for LFW.
Overall, in terms of non-private models, relative performance on the datasets and tasks is as expected: MNIST is the easiest dataset and has the highest accuracies for both LeNet and MLP; CIFAR10 and then CIFAR100 are next; LFW is the most challenging.
Both baselines (top two lines in Table \ref{tab:utility_acc}) work almost perfectly on the MNIST dataset, which contains only greyscale images spread over 10 classes. For the remaining datasets, which are coloured, LeNet's accuracy falls sharply, but MLP maintains high accuracy for CIFAR10 and CIFAR100.
In general, as expected, adding noise reduces performance, and more noise corresponds to a greater performance reduction: considering $\epsilon_G$ and $\epsilon_V$ as privacy budgets, the higher they are, the less privacy should be retained, thus increasing the accuracy. We see this consistently for the CIFAR datasets, and for all datasets for VMF (we discuss the exceptions below).
Coming to our first core comparison, of our VMF mechanism against the Gaussian, we see that for the CIFAR datasets, across all selected ranges of $\epsilon_G$ and $\epsilon_V$, VMF noise leads to much smaller reductions in utility for both LeNet and MLP, with differences on the order of $10$--$20$ percentage points; even the essentially non-private $\epsilon_G = 80$ does not reach the accuracy of the smallest VMF $\epsilon_V$. For LFW, as expected, (Top-1) accuracies are mostly near-zero when noise is added; comparing Gaussian and VMF noise under Top-5 and Top-10, they are fairly similar, until $\epsilon_V = 500$, where VMF rebounds to very strong accuracies. Gaussian noise, however, works better for MNIST, although it behaves oddly there.
We note a counter-intuitive behaviour for the LeNet trained with the Gaussian mechanism on MNIST. Even though the accuracy is kept high, it slowly decreases as $\epsilon$ increases. The same happens for LFW in Table \ref{tab:top5_acc_lfw}. As we used the standard DP-SGD implementation for this, and it behaved as expected on the CIFAR datasets and on the MLP architecture for all datasets, it is not clear what the cause is, and it warrants further investigation.
As a second exception to the general rule that noise leads to a reduction in accuracy, when LeNet is trained with \textsc{DirDP-SGD}\xspace on CIFAR10 and LFW, its accuracy for a large or very large value of $\epsilon$ is even marginally higher than the baseline without any DP guarantees. In this case, the noise is so weak that we hypothesise that instead of preserving privacy, it acts as a regularisation factor that prevents overfitting, which can explain the very modest performance gain on the test set. In fact, the use of noise as a regularisation technique has been studied by \citet{li-liu:2020, li-liu:2021} with Gaussian noise.
\begin{table}
\centering
\resizebox{\columnwidth}{!}{%
\begin{tabular}{lccccc}\hline
& & & MNIST & CIFAR10 & CIFAR100 \\ \hline
\textbf{Model} & \textbf{Noise} & \textbf{$\epsilon_G$, $\epsilon_V$} & \multicolumn{3}{c}{\textbf{Accuracy}} \\
\hline
LeNet & -- & -- & 96.2 & 57.0 & 28.3 \\
MLP & -- & -- & 97.0 & 47.5 & 18.5 \\\hline
\multirow{5}{*}{LeNet} & \multirow{5}{*}{Gauss} & 0.8 & 88.7 & 33.3 & 4.2 \\
& & 1.0 & 88.0 & 34.9 & 5.2 \\
& & 2.0 & 86.5 & 39.1 & 9.8 \\
& & 3.0 & 84.8 & 41.1 & 11.3\\
& & 8.0 & 83.3 & 46.1 & 14.9 \\
& & 80.0 & 81.4 & 50.1& 19.9 \\\hline
\multirow{5}{*}{LeNet} & \multirow{5}{*}{VMF} & 5 & 50.2 & 51.9 & 23.3 \\
& & 10 & 58.5 & 53.5 & 24.5 \\
& & 50 & 65.7 & 54.9 & 27.1 \\
& & 500 & 76.7 & 57.0 & 29.1 \\
& & 300k & 81.8 & 57.2 & 29.5 \\\hline
\multirow{5}{*}{MLP} & \multirow{5}{*}{Gauss} & 0.8 & 91.9 & 35.9 & 9.0 \\
& & 1.0 & 92.0 & 36.6 & 9.4 \\
& & 2.0 & 92.1 & 38.3 & 10.9 \\
& & 3.0 & 91.9 & 40.2 & 11.6 \\
& & 8.0 & 91.8 & 47.3 & 12.3 \\
& & 80.0 & 91.6 & 21.3& 2.5 \\\hline
\multirow{5}{*}{MLP} & \multirow{5}{*}{VMF} & 5.0 & 89.7 & 45.3 & 15.2 \\
& & 10.0 & 89.9 & 45.4 & 15.7 \\
& & 50.0 & 90.2 &46.0& 16.6\\
& & 500.0 & 91.1 & 47.3 & 18.2 \\
& & 300k & 91.8 & 47.9 & 18.6 \\\hline
\end{tabular}%
}
\caption{\textmd{Accuracy under different settings. We emphasise that epsilons for different noise distributions are not directly comparable: the Gaussian $\epsilon_G$ and von Mises-Fisher $\epsilon_V$ parameters appear numerically very different but are appropriate for their respective abilities to prevent reconstruction. VMF stands for von Mises-Fisher.} }
\label{tab:utility_acc}
\end{table}
\begin{table}
\centering
\resizebox{\columnwidth}{!}{%
\begin{tabular}{lccccc}\hline
\textbf{Model} & \textbf{Noise} & \textbf{$\epsilon_G$, $\epsilon_V$} & \textbf{Top-1}& \textbf{Top-5}& \textbf{Top-10} \\
\hline
LeNet & -- & -- & 5.4 & 19.4 & 31.0\\
MLP & -- & -- & 14.0 & 32.1 & 41.5\\\hline
\multirow{5}{*}{LeNet} & \multirow{5}{*}{Gauss} & 0.8 & 0.3 & 2.7 &6.2\\
& & 1.0 & 0.3 & 2.7 & 5.1\\
& & 2.0 & 0.3 & 2.7 &4.9 \\
& & 3.0 & 0.3 & 2.2 & 4.3\\
& & 8.0 & 0.3 & 1.6 & 3.0\\
& & 80 & 0.3 & 0.8 & 3.5 \\\hline
\multirow{5}{*}{LeNet} & \multirow{5}{*}{VMF} & 5 & 0.0 & 2.2 &4.9\\
& & 10 &0.0 & 2.4 & 4.6\\
& & 50 & 0.0& 2.7 & 5.1\\
& & 500 & 8.4& 24.5 & 39.4\\
& & 300k &10.8 & 27.8 &42.0\\\hline
\multirow{5}{*}{MLP} & \multirow{5}{*}{Gauss} & 0.8 & 0.3 & 5.1&9.4\\
& & 1.0 & 0.3 & 5.1 &9.4\\
& & 2.0 & 0.3 & 5.1 & 8.1\\
& & 3.0 & 0.3 & 3.5 &6.7\\
& & 8.0 & 0.3 & 0.8 &4.6\\
& & 80 & 0.5 & 1.9 & 5.1\\\hline
\multirow{5}{*}{MLP} & \multirow{5}{*}{VMF} & 5 & 1.9& 8.9 &15.4\\
& & 10 &2.2 & 9.7 &14.8\\
& & 50 & 3.0& 9.7 &18.9\\
& & 500 & 3.5& 15.6 &23.2\\
& & 300k &8.6 & 24.8 &39.4\\\hline
\end{tabular}%
}
\caption{\textmd{Top-$k$ accuracy rates for the LFW dataset under different privacy settings. }}
\label{tab:top5_acc_lfw}
\end{table}
Next, we analyse whether the possible loss of utility is compensated by protection against data reconstruction attacks, leading up to an analysis of how to match up VMF with Gaussian noise effects.
\subsection{Reconstruction Attacks}
Tables \ref{tab:privacy_attack_dlg} and \ref{tab:privacy_attack_invgrad} show the SSIM and MSE scores after the reconstruction attack is performed by DLG and IGA\xspace respectively. The metrics are calculated by comparing the recovered image against its ground truth counterpart. For each attack, 50 images are taken from the training set of their respective dataset. The values in the table are the average for SSIM and the median for MSE. Higher SSIM values and lower MSE values indicate that the reconstructed image is closer to the original.
\begin{table}
\centering
\resizebox{\columnwidth}{!}{%
\begin{tabular}{ccccccccc}\hline
& & & \multicolumn{3}{c}{SSIM} & \multicolumn{3}{c}{MSE} \\ \hline
\textbf{Model} & \textbf{Noise} & \textbf{$\epsilon_G / \epsilon_V$} & \textbf{MNIST} & \textbf{CIFAR} & \textbf{LFW} & \textbf{MNIST} & \textbf{CIFAR} &\textbf{LFW} \\
\hline
LeNet & -- & -- & 0.820 & 0.840 & 0.939 & 0.000& 0.000 & 0.000 \\
MLP & -- & -- & 1.000 & 0.760 & 0.820 & 0.000 & 0.000 & 0.000 \\\hline
\multirow{5}{*}{LeNet} & \multirow{5}{*}{Gauss} & 0.8& 0.000 & 0.000 & 0.000 & 3,636& 787.6 & 991.5 \\
& & 1.0 & 0.000 & 0.000& 0.000 & 4,948 & $\sim$102e5 & $\sim$169e6\\
& & 2.0 & 0.000 & 0.000& 0.000 & $\sim$109e8& $\sim$154e8 & $\sim$143e7 \\
& & 3.0 & 0.000 & 0.000 & 0.000 & $\sim$401e7& $\sim$102e7 & $\sim$84e7 \\
& & 8.0 & 0.000 & 0.000& 0.000 & $\sim$384e7& $\sim$119e7 & $\sim$89e7 \\
& & 80 & 0.000 & 0.000 & 0.000 & $\sim$142e11 & $\sim$177e8 & $\sim$1906e7\\\hline
\multirow{5}{*}{LeNet} & \multirow{5}{*}{VMF} & 5& 0.000 & 0.000 & 0.000 & $\sim$242e11& $\sim$317e13 & $\sim$658e13 \\
& & 10 & 0.000 & 0.000 & 0.000 & $\sim$163e9& 1,076 & 1,132 \\
& & 50 & 0.000 & 0.000 & 0.000 & $\sim$248e6& 429.9 & 412.5 \\
& & 500 & 0.000 & 0.001 & 0.001 & 2,566& 92.2 & 137.9 \\
& & 300k & 0.801 & 0.688 & 0.745 & 0.001& 0.009 & 0.009 \\\hline
\multirow{5}{*}{MLP} & \multirow{5}{*}{Gauss} & 0.8& 0.000 & 0.000 & 0.000 & $\sim$229e8 & $\sim$178e8 & $\sim$147e8 \\
& & 1.0 & 0.000 & 0.000 & 0.000 & $\sim$211e8 & $\sim$817e7 & $\sim$256e8 \\
& & 2.0 & 0.000 & 0.000 & 0.000 & $\sim$332e8 & $\sim$838e7 & $\sim$66e8 \\
& & 3.0 & 0.000 & 0.000 & 0.000 & $\sim$381e8 & $\sim$135e8 & $\sim$164e8 \\
& & 8.0 & 0.000 & 0.000 & 0.000 & $\sim$17e11 & $\sim$973e8 & $\sim$736e9 \\
& & 80 & 0.000 & 0.000 & 0.000 & $\sim$421e9& $\sim$439e9 & $\sim$296e10 \\\hline
\multirow{5}{*}{MLP} & \multirow{5}{*}{VMF} & 5& 0.000 & 0.000 & 0.000 & $\sim$119e8 & $\sim$69e12 & $\sim$17e13 \\
& & 10 & 0.000 & 0.000 & 0.000 & $\sim$223e8 & $\sim$80e14 & $\sim$529e11 \\
& & 50 & 0.000 & 0.000 & 0.000 & $\sim$728e8 & $\sim$216e4 & $\sim$158e6 \\
& & 500 & 0.000 & 0.000 & 0.000 & $\sim$211e6 & 547 & $\sim$197e5 \\
& & 300k & 0.767 & 0.596 & 0.644 & 0.006 & 0.008 & 0.008 \\\hline
\end{tabular}%
}
\caption{DLG \textmd{reconstruction attack metrics against LeNet and MLP. CIFAR10 and CIFAR100 contain the same images. For the attacks, we used the labels from CIFAR100. We report the average SSIM and the median MSE. MSE is unbounded, so its average is sensitive to high values.}}
\label{tab:privacy_attack_dlg}
\end{table}
In the absence of noise, DLG achieves very high SSIM values, indicating that the recovered images are close to their original counterparts. This is consistent with the samples in Figure \ref{fig:dlg}, in which the recovered images are very similar, if not identical, to the originals.
The IGA\xspace attack is generally less successful in the absence of noise.
Its performance against LeNet in particular decreases sharply: none of the three sample images that were successfully attacked by DLG could be reconstructed, as shown in Figure \ref{fig:invgr}.
MLP yields more moderate results on the CIFAR and LFW datasets, but poor metrics on MNIST. These metrics are corroborated by the images in Figure \ref{fig:invgr}: in the absence of noise, even though the telephone is well reconstructed, the face is barely recognisable and the number five is of poor quality.
The presence of noise disrupts almost all attacks, where the SSIM score goes to 0, indicating good empirical protection against a reconstruction attack, in terms of this metric that correlates with human perception. The one exception to effectively zero SSIMs is for VMF with a very large value of $\epsilon_V$ (which, as noted in \S\ref{sec:exper-eps}, is not intended to be a realistic privacy parameter value, but just to explore what happens when gradients are perturbed by a small amount). Figure \ref{fig:dlg} shows that images are almost retrieved when $\epsilon_V = 300k$ for the VMF noise, even though not perfectly (especially for image (g) in Figure \ref{fig:dlg} from MNIST, in which the general shape of the number 5 is reconstructed, but the background is blurred). Better reconstruction under this large $\epsilon_V$ is expected, since less noise is added.
Figure \ref{fig:dlg} shows examples of how the DLG attack performs to recover the images. We also show reconstructions for IGA\xspace against LeNet and MLP in Figure \ref{fig:invgr}.
\begin{table}
\centering
\resizebox{\columnwidth}{!}{%
\begin{tabular}{lllllllll}\hline
& & & \multicolumn{3}{c}{SSIM} & \multicolumn{3}{c}{MSE} \\ \hline
\textbf{Model} & \textbf{Noise} & \textbf{$\epsilon_G / \epsilon_V$} & \textbf{MNIST} & \textbf{CIFAR} &\textbf{LFW} & \textbf{MNIST} & \textbf{CIFAR}& \textbf{LFW} \\
\hline
LeNet & -- & -- & 0.065 & 0.015 & 0.188 & 0.778 & 1.015 & 0.086 \\
MLP & -- & -- & 0.321 & 0.479 & 0.528 & 0.445 & 0.076 & 0.036 \\\hline
\multirow{5}{*}{LeNet} & \multirow{5}{*}{Gauss} & 0.8& 0.000 & 0.001 & 0.000 & 3.583 & 2.218 & 2.341\\
& & 1.0 & -0.004 & 0.001 & 0.000 & 3.345 & 2.275 & 2.402\\
& & 2.0 & -0.002 & 0.000 & 0.000 & 3.283 & 2.265 & 2.469\\
& & 3.0 & -0.005 & 0.000 & 0.001 & 3.277 & 2.365 & 2.487\\
& & 8.0 & -0.004 & -0.001 & 0.001 & 3.315 & 2.433 & 2.515\\
& & 80 & -0.001 & 0.000 & -0.001 & 3.303 & 2.433 & 1.916\\\hline
\multirow{5}{*}{LeNet} & \multirow{5}{*}{VMF} & 5& -0.001 & 0.000 & 0.000 & 3.397 & 2.797 & 2.935\\
& & 10 & -0.002 & 0.000 & 0.001 & 3.356 & 2.806 & 2.872 \\
& & 50 & 0.008 & 0.001 & 0.000 & 3.138 & 2.659 & 2.417\\
& & 500 & 0.049 & 0.003 & 0.006 & 1.380 & 1.724 & 0.626\\
& & 300k & 0.075 & 0.014 & 0.225 & 0.740 & 1.042 & 0.076\\\hline
\multirow{5}{*}{MLP} & \multirow{5}{*}{Gauss} & 0.8 & 0.003 & -0.001 & 0.001 & 3.813 & 3.890 & 3.677 \\
& & 1.0 & 0.002 & -0.001 & 0.000 & 3.845 & 3.848 & 3.657 \\
& & 2.0 & 0.004 & -0.001 & 0.000 & 3.853 & 3.873 & 3.687 \\
& & 3.0 & -0.001 & -0.001 & 0.001 & 3.839 & 3.831 & 3.694 \\
& & 8.0 & -0.002 & -0.001 & 0.000 & 3.933 & 3.883 & 3.707 \\
& & 80 & 0.003 & 0.000 & 0.000 & 3.909 & 3.869 & 3.610\\\hline
\multirow{5}{*}{MLP} & \multirow{5}{*}{VMF} & 5& 0.003 & 0.001 & 0.000 & 3.845 & 3.802 & 3.613 \\
& & 10 & 0.009 & 0.002 & 0.000 & 3.926 & 3.845 & 3.595 \\
& & 50 & 0.003 & 0.000 & 0.000 & 3.677 & 3.749 & 3.575 \\
& & 500 & 0.007 & 0.000 & 0.000 & 3.487 & 3.650 & 3.362 \\
& & 300k & 0.138 & 0.000 & 0.046 & 0.879 & 1.985 & 0.703 \\\hline
\end{tabular}%
}
\caption{IGA\xspace \textmd{reconstruction attack metrics. CIFAR10 and CIFAR100 contain the same images. For the attacks, we used the labels from CIFAR100. We report the average SSIM and the median MSE. MSE is unbounded, so its average is sensitive to high values.}}
\label{tab:privacy_attack_invgrad}
\end{table}
\begin{figure*}
\begin{tabular}{ccccccc}
\subfloat[Original MNIST]{\includegraphics[width=0.105\textwidth]{images/DLG/MNIST/original_0.png}} &
\subfloat[No noise]{\includegraphics[width=0.105\textwidth]{images/DLG/MNIST/rec_noise_0.png}} &
\subfloat[Gauss $\sigma=0.8$]{\includegraphics[width=0.105\textwidth]{images/DLG/MNIST/rec_noise_g08.png}} &
\subfloat[Gauss $\sigma=80$]{\includegraphics[width=0.105\textwidth]{images/DLG/MNIST/lenet_mnist_dlg_eps80.png}} &
\subfloat[VMF $\epsilon_V=10$]{\includegraphics[width=0.105\textwidth]{images/DLG/MNIST/rec_noise_v10.png}} &
\subfloat[VMF $\epsilon_V=500$]{\includegraphics[width=0.105\textwidth]{images/DLG/MNIST/rec_noise_v500.png}} &
\subfloat[VMF $\epsilon_V=300k$]{\includegraphics[width=0.105\textwidth]{images/DLG/MNIST/rec_noise_v300k.png}} \\
\subfloat[Original CIFAR]{\includegraphics[width=0.105\textwidth]{images/DLG/CIFAR/original_5.png}} &
\subfloat[No noise]{\includegraphics[width=0.105\textwidth]{images/DLG/CIFAR/rec_no_noise.png}} &
\subfloat[Gauss $\sigma=0.8$]{\includegraphics[width=0.105\textwidth]{images/DLG/CIFAR/rec_noise_g08.png}} &
\subfloat[Gauss $\sigma=80$]{\includegraphics[width=0.105\textwidth]{images/DLG/CIFAR/lenet_cifar_dlg_eps80.png}} &
\subfloat[VMF $\epsilon_V=10$]{\includegraphics[width=0.105\textwidth]{images/DLG/CIFAR/rec_noise_v10.png}} &
\subfloat[VMF $\epsilon_V=500$]{\includegraphics[width=0.105\textwidth]{images/DLG/CIFAR/rec_noise_v500.png}} &
\subfloat[VMF $\epsilon_V=300k$]{\includegraphics[width=0.105\textwidth]{images/DLG/CIFAR/rec_noise_v300k.png}} \\
\subfloat[Original LFW]{\includegraphics[width=0.105\textwidth]{images/DLG/LFW/original_40.png}} &
\subfloat[No noise]{\includegraphics[width=0.105\textwidth]{images/DLG/LFW/rec_noise_0.png}} &
\subfloat[Gauss $\sigma=0.8$]{\includegraphics[width=0.105\textwidth]{images/DLG/LFW/rec_noise_g08.png}} &
\subfloat[Gauss $\sigma=80$]{\includegraphics[width=0.105\textwidth]{images/DLG/LFW/lenet_lfw_dlg_eps80.png}} &
\subfloat[VMF $\epsilon_V=10$]{\includegraphics[width=0.105\textwidth]{images/DLG/LFW/rec_noise_v10.png}} &
\subfloat[VMF $\epsilon_V=500$]{\includegraphics[width=0.105\textwidth]{images/DLG/LFW/rec_noise_v500.png}} &
\subfloat[VMF $\epsilon_V=300k$]{\includegraphics[width=0.105\textwidth]{images/DLG/LFW/rec_noise_v300k.png}} \\
\hline
\subfloat[Original MNIST]{\includegraphics[width=0.105\textwidth]{images/DLG/MNIST/original_0.png}} &
\subfloat[No noise]{\includegraphics[width=0.105\textwidth]{images/DLG/MNIST/mrec_noise_0.png}} &
\subfloat[Gauss $\sigma=0.8$]{\includegraphics[width=0.105\textwidth]{images/DLG/MNIST/mrec_noise_g08.png}} &
\subfloat[Gauss $\sigma=80$]{\includegraphics[width=0.105\textwidth]{images/DLG/MNIST/mlp_mnist_dlg_eps80.png}} &
\subfloat[VMF $\epsilon_V=10$]{\includegraphics[width=0.105\textwidth]{images/DLG/MNIST/mrec_noise_v10.png}} &
\subfloat[VMF $\epsilon_V=500$]{\includegraphics[width=0.105\textwidth]{images/DLG/MNIST/mrec_noise_v500.png}} &
\subfloat[VMF $\epsilon_V=300k$]{\includegraphics[width=0.105\textwidth]{images/DLG/MNIST/mrec_noise_v300k.png}} \\
\subfloat[Original CIFAR]{\includegraphics[width=0.105\textwidth]{images/DLG/CIFAR/original_5.png}} &
\subfloat[No noise]{\includegraphics[width=0.105\textwidth]{images/DLG/CIFAR/mrec_no_noise.png}} &
\subfloat[Gauss $\sigma=0.8$]{\includegraphics[width=0.105\textwidth]{images/DLG/CIFAR/mrec_noise_g08.png}} &
\subfloat[Gauss $\sigma=80$]{\includegraphics[width=0.105\textwidth]{images/DLG/CIFAR/mlp_cifar_dlg_eps80.png}} &
\subfloat[VMF $\epsilon_V=10$]{\includegraphics[width=0.105\textwidth]{images/DLG/CIFAR/mrec_noise_v10.png}} &
\subfloat[VMF $\epsilon_V=500$]{\includegraphics[width=0.105\textwidth]{images/DLG/CIFAR/mrec_noise_v500.png}} &
\subfloat[VMF $\epsilon_V=300k$]{\includegraphics[width=0.105\textwidth]{images/DLG/CIFAR/mrec_noise_v300k.png}} \\
\subfloat[Original LFW]{\includegraphics[width=0.105\textwidth]{images/DLG/LFW/original_40.png}} &
\subfloat[No noise]{\includegraphics[width=0.105\textwidth]{images/DLG/LFW/mrec_noise_0.png}} &
\subfloat[Gauss $\sigma=0.8$]{\includegraphics[width=0.105\textwidth]{images/DLG/LFW/mrec_noise_g08.png}} &
\subfloat[Gauss $\sigma=80$]{\includegraphics[width=0.105\textwidth]{images/DLG/LFW/mlp_lfw_dlg_eps80.png}} &
\subfloat[VMF $\epsilon_V=10$]{\includegraphics[width=0.105\textwidth]{images/DLG/LFW/mrec_noise_v10.png}} &
\subfloat[VMF $\epsilon_V=500$]{\includegraphics[width=0.105\textwidth]{images/DLG/LFW/mrec_noise_v500.png}} &
\subfloat[VMF $\epsilon_V=300k$]{\includegraphics[width=0.105\textwidth]{images/DLG/LFW/mrec_noise_v300k.png}} \\
\end{tabular}
\caption{\textmd{Reconstructed images after DLG attack against LeNet and MLP for 1000 iterations.}}\label{fig:dlg}
\end{figure*}
\begin{figure*}
\begin{tabular}{ccccccc}
\subfloat[Original MNIST]{\includegraphics[width=0.105\textwidth]{images/INVGR/MNIST/noise_original_images_0_5-five.jpg}} &
\subfloat[No noise]{\includegraphics[width=0.105\textwidth]{images/INVGR/MNIST/noise_rec_none0_0_images_0_5-five.jpg}} &
\subfloat[Gauss $\sigma=0.8$]{\includegraphics[width=0.105\textwidth]{images/INVGR/MNIST/noise_rec_gaussian_noaccountant0_8_images_0_5-five.jpg}} &
\subfloat[Gauss $\sigma=80$]{\includegraphics[width=0.105\textwidth]{images/INVGR/MNIST/lenet_mnist_iga_eps80.png}} &
\subfloat[VMF $\epsilon_V=10$]{\includegraphics[width=0.105\textwidth]{images/INVGR/MNIST/noise_rec_vonmises10_0_images_0_5-five.jpg}} &
\subfloat[VMF $\epsilon_V=500$]{\includegraphics[width=0.105\textwidth]{images/INVGR/MNIST/noise_rec_vonmises500_0_images_0_5-five.jpg}} &
\subfloat[VMF $\epsilon_V=300k$]{\includegraphics[width=0.105\textwidth]{images/INVGR/MNIST/noise_rec_vonmises300000_0_images_0_5-five.jpg}} \\
\subfloat[Original CIFAR]{\includegraphics[width=0.105\textwidth]{images/INVGR/CIFAR/noise_original_images_5_telephone.jpg}} &
\subfloat[No noise]{\includegraphics[width=0.105\textwidth]{images/INVGR/CIFAR/noise_rec_none0_0_images_5_telephone.jpg}} &
\subfloat[Gauss $\sigma=0.8$]{\includegraphics[width=0.105\textwidth]{images/INVGR/CIFAR/noise_rec_gaussian_noaccountant0_8_images_5_telephone.jpg}} &
\subfloat[Gauss $\sigma=80$]{\includegraphics[width=0.105\textwidth]{images/INVGR/CIFAR/lenet_cifar_iga_eps80.png}} &
\subfloat[VMF $\epsilon_V=10$]{\includegraphics[width=0.105\textwidth]{images/INVGR/CIFAR/noise_rec_vonmises5_0_images_5_telephone.jpg}} &
\subfloat[VMF $\epsilon_V=500$]{\includegraphics[width=0.105\textwidth]{images/INVGR/CIFAR/noise_rec_vonmises500_0_images_5_telephone.jpg}} &
\subfloat[VMF $\epsilon_V=300k$]{\includegraphics[width=0.105\textwidth]{images/INVGR/CIFAR/noise_rec_vonmises300000_0_images_5_telephone.jpg}} \\
\subfloat[Original LFW]{\includegraphics[width=0.105\textwidth]{images/INVGR/LFW/noise_original_images_40_Igor_Ivanov.jpg}} &
\subfloat[No noise]{\includegraphics[width=0.105\textwidth]{images/INVGR/LFW/noise_rec_none0_0_images_40_Igor_Ivanov.jpg}} &
\subfloat[Gauss $\sigma=0.8$]{\includegraphics[width=0.105\textwidth]{images/INVGR/LFW/noise_rec_gaussian_noaccountant0_8_images_40_Igor_Ivanov.jpg}} &
\subfloat[Gauss $\sigma=80$]{\includegraphics[width=0.105\textwidth]{images/INVGR/LFW/lenet_lfw_iga_eps80.png}} &
\subfloat[VMF $\epsilon_V=10$]{\includegraphics[width=0.105\textwidth]{images/INVGR/LFW/noise_rec_vonmises10_0_images_40_Igor_Ivanov.jpg}} &
\subfloat[VMF $\epsilon_V=500$]{\includegraphics[width=0.105\textwidth]{images/INVGR/LFW/noise_rec_vonmises500_0_images_40_Igor_Ivanov.jpg}} &
\subfloat[VMF $\epsilon_V=300k$]{\includegraphics[width=0.105\textwidth]{images/INVGR/LFW/noise_rec_vonmises300000_0_images_40_Igor_Ivanov.jpg}} \\
\hline
\subfloat[Original MNIST]{\includegraphics[width=0.105\textwidth]{images/INVGR/MNIST/mnoise_original_images_0_5-five.jpg}} &
\subfloat[No noise]{\includegraphics[width=0.105\textwidth]{images/INVGR/MNIST/mnoise_rec_none0_0_images_0_5-five.jpg}} &
\subfloat[Gauss $\sigma=0.8$]{\includegraphics[width=0.105\textwidth]{images/INVGR/MNIST/mnoise_rec_gaussian_noaccountant0_8_images_0_5-five.jpg}} &
\subfloat[Gauss $\sigma=80$]{\includegraphics[width=0.105\textwidth]{images/INVGR/MNIST/mlp_mnist_iga_eps80.png}} &
\subfloat[VMF $\epsilon_V=10$]{\includegraphics[width=0.105\textwidth]{images/INVGR/MNIST/mnoise_rec_vonmises10_0_images_0_5-five.jpg}} &
\subfloat[VMF $\epsilon_V=500$]{\includegraphics[width=0.105\textwidth]{images/INVGR/MNIST/mnoise_rec_vonmises500_0_images_0_5-five.jpg}} &
\subfloat[VMF $\epsilon_V=300k$]{\includegraphics[width=0.105\textwidth]{images/INVGR/MNIST/mnoise_rec_vonmises300000_0_images_0_5-five.jpg}} \\
\subfloat[Original CIFAR]{\includegraphics[width=0.105\textwidth]{images/INVGR/CIFAR/mnoise_original_images_5_telephone.jpg}} &
\subfloat[No noise]{\includegraphics[width=0.105\textwidth]{images/INVGR/CIFAR/mnoise_rec_none0_0_images_5_telephone.jpg}} &
\subfloat[Gauss $\sigma=0.8$]{\includegraphics[width=0.105\textwidth]{images/INVGR/CIFAR/mnoise_rec_gaussian_noaccountant0_8_images_5_telephone.jpg}} &
\subfloat[Gauss $\sigma=80$]{\includegraphics[width=0.105\textwidth]{images/INVGR/CIFAR/mlp_cifar_iga_eps80.png}} &
\subfloat[VMF $\epsilon_V=10$]{\includegraphics[width=0.105\textwidth]{images/INVGR/CIFAR/mnoise_rec_vonmises5_0_images_5_telephone.jpg}} &
\subfloat[VMF $\epsilon_V=500$]{\includegraphics[width=0.105\textwidth]{images/INVGR/CIFAR/mnoise_rec_vonmises500_0_images_5_telephone.jpg}} &
\subfloat[VMF $\epsilon_V=300k$]{\includegraphics[width=0.105\textwidth]{images/INVGR/CIFAR/mnoise_rec_vonmises300000_0_images_5_telephone.jpg}} \\
\subfloat[Original LFW]{\includegraphics[width=0.105\textwidth]{images/INVGR/LFW/noise_original_images_40_Igor_Ivanov.jpg}} &
\subfloat[No noise]{\includegraphics[width=0.105\textwidth]{images/INVGR/LFW/mnoise_rec_none0_0_images_40_Igor_Ivanov.jpg}} &
\subfloat[Gauss $\sigma=0.8$]{\includegraphics[width=0.105\textwidth]{images/INVGR/LFW/mnoise_rec_gaussian_noaccountant0_8_images_40_Igor_Ivanov.jpg}} &
\subfloat[Gauss $\sigma=80$]{\includegraphics[width=0.105\textwidth]{images/INVGR/LFW/mlp_lfw_iga_eps80.png}} &
\subfloat[VMF $\epsilon_V=10$]{\includegraphics[width=0.105\textwidth]{images/INVGR/LFW/mnoise_rec_vonmises10_0_images_40_Igor_Ivanov.jpg}} &
\subfloat[VMF $\epsilon_V=500$]{\includegraphics[width=0.105\textwidth]{images/INVGR/LFW/mnoise_rec_vonmises500_0_images_40_Igor_Ivanov.jpg}} &
\subfloat[VMF $\epsilon_V=300k$]{\includegraphics[width=0.105\textwidth]{images/INVGR/LFW/mnoise_rec_vonmises300000_0_images_40_Igor_Ivanov.jpg}} \\
\end{tabular}
\caption{\textmd{Reconstructed images by Inverting Gradients against LeNet (top three rows) and MLP (bottom three rows) after 1000 iterations.}}\label{fig:invgr}
\end{figure*}
\subsection{Calibrating Accuracy vs Reconstruction Defence}
Although SSIM values around zero are positive in terms of protection against reconstruction, the SSIM metric does not allow any distinguishing among DP mechanisms, architectures or datasets. Looking at MSE, we see that the two attacks do still behave very differently in the face of noise. While the DLG attack achieves MSEs of 0 in the non-private case, adding noise leads to massive MSEs (as large as $10^{13}$): noise dramatically degrades the performance of the DLG attack. The IGA\xspace attack, by contrast, is much more successful in the face of noise, with relatively low and fairly constant MSEs (in the range $1-3$). Together, these let us draw a few inferences.
For DLG, Gaussian noise leads to MSE scores that stay large, and any movement is not in a consistent direction: for both LeNet and MLP, they go up and down. The VMF noise, by contrast, degrades smoothly and fairly consistently. For our $\epsilon_V = 300k$, which is not aiming for privacy, it is as expected barely above the zero of the non-private models; for $\epsilon_V = 500$, while MSE scores like 92.2 do not have any inherent privacy interpretation, they are orders of magnitude larger than the MSEs for IGA\xspace, including for the Gaussian $\epsilon_G$s that do have recognised degrees of privacy. Similarly, for IGA\xspace, the MSEs for the range of $\epsilon_G$ values are broadly comparable to the VMF $5 \leq \epsilon_V \leq 500$ (sometimes higher, sometimes lower); for LeNet on CIFAR, $\epsilon_V = 500$ is a bit of an edge case, with a lower MSE, but still noticeably above the non-private baseline (and still well-obscured visually: Figure~\ref{fig:invgr} (ab,ai,ap)).
Relating this to the discussion of \S\ref{sec:utility}, we can say that for comparable ranges of $\epsilon_G$ and $\epsilon_V$, the accuracies of the noisy models are much higher for the CIFAR datasets, and in LFW similar for lower values of $\epsilon_G$ and $\epsilon_V$ but much higher for $\epsilon_V = 500$.
%
These same equivalences also hold for MNIST, with Gaussian noise outperforming VMF for LeNet but similar to VMF for MLP; but as observed above, MNIST behaves rather oddly under LeNet Gaussian noise here.
\subsection{Qualitative Analysis}
In broad terms, the attacks successfully reconstruct images in the absence of noise. When noise is added, both attacks struggle to recover the training data, as shown in all images (Figures~\ref{fig:dlg} and \ref{fig:invgr}) with Gaussian noise and VMF noise (the latter for lower $\epsilon_V$). For fully trained networks, it is harder to reconstruct the images (see Appendix \ref{appendix:attack_train}). However, in a real scenario, an attacker can simply use a model with dummy weights instead of a trained network.
Particularly for the IGA\xspace attack with VMF against LeNet, we find a good privacy/utility balance at $\epsilon_V$ = 500. In this setting, most images cannot be reconstructed (at most, the black background starts to be recovered in contrast to a white centre, as in image (f) of Figure \ref{fig:invgr}), and at the same time the model retains high accuracy rates compared to the non-private baseline across all datasets.
We note that the addition of noise helps to prevent reconstruction in both attacks. However, we also observe that when the noise is too small (as in the case of VMF with $\epsilon_V$ set to 300k), images can be reconstructed more often, as seen in images (n), (u) and (ap) in Figure \ref{fig:dlg}. Even though it offers little protection against the attacks, such small noise can help the model in its classification task, as shown in Table \ref{tab:utility_acc} for both models with VMF noise in the CIFAR datasets and in Table \ref{tab:top5_acc_lfw} for Top-10 accuracy with LeNet. Also, in the context of face reconstruction in images (u) and (ap), even the negligible VMF noise of $\epsilon_V = 300k$ could be useful in practical applications in obscuring details of the face, in ways not captured by the present metrics.
\subsection{Further Note}
Overall we found that VMF noise is able to protect against gradient-based reconstruction attacks and to offer good levels of utility in image classification over different datasets. This is particularly illustrated by the LFW dataset, which is based on face images where one might wish to hide their identity.
However, VMF can be computationally expensive for gradients with high dimensionality, and future work involves optimising it. This is in line with previous studies that enhanced vanilla approaches for DP in deep learning, such as backpropagation improvements like ghost clipping from \citet{li-etal:2022:ICLR} and the efficient per-sample gradient computation from \citet{yousefpour2021opacus}.
\section{Conclusions}
\label{sec:conclusions}
We defined \textsc{DirDP-SGD}\xspace for directional privacy in deep learning by adding noise to the gradients during training. This problem is particularly relevant because several studies have shown that private training data can be discovered under certain machine learning training settings, such as sharing gradients.
Our mechanism provides an $\epsilon d$-privacy guarantee instead of $(\epsilon, \delta)$-DP. Experiments showed that \textsc{DirDP-SGD}\xspace can protect against gradient-based reconstruction attacks while also retaining utility for classification tasks across different datasets.
\textsc{DirDP-SGD}\xspace is based on the VMF distribution, which can be computationally expensive for high-dimensional data. Future work includes optimising the mechanism. Moreover, our experiments were restricted to image datasets; we plan to explore the feasibility of \textsc{DirDP-SGD}\xspace for other domains, such as natural language processing.
\section{Implementation details}\label{app:hyperparameters}
All experiments were run on a virtual machine hosted by Oracle Cloud running Ubuntu 22.04 with an Intel Xeon Platinum 8167M, 12 cores at 2.00GHz, 180GB RAM, and two NVIDIA Tesla V100 SXM2 GPUs with 16GB RAM each.
\begin{table}[h]
\resizebox{\columnwidth}{!}{%
\begin{tabular}{cccccccc}
\hline
\textbf{Model} &
\textbf{Dataset} &
\textbf{Batch size} &
\textbf{LR} &
\textbf{Epochs} &
\textbf{Optimiser} &
\textbf{Momentum} &
\textbf{Decay} \\ \hline
LeNet & MNIST & 200 & 0.1 &30 & SGD & 0.0 & 0.0 \\
LeNet & CIFAR & 512 & 0.001 &90 & AdamW & 0.0 & 0.0 \\
LeNet & LFW & 128 & 0.001 & 100 & AdamW & 0.0 & 0.0 \\
MLP & MNIST & 128 & 0.1 &30 & SGD & 0.0 & 0.0 \\
MLP & CIFAR & 128 & 0.1 &25 & AdamW & 0.0 & 0.0 \\
MLP & LFW & 128 & 0.001 & 30 & AdamW & 0.0 & 0.0 \\
\hline
\end{tabular}
}
\caption{\textmd{Hyperparameters used for each set of experiments.}}
\label{tab:app_hyperparametes}
\end{table}
Table \ref{tab:app_hyperparametes} shows the hyperparameters used to train the victim networks across the experiments from \S\ref{sec:exper}. The settings for CIFAR were used for both CIFAR10 and CIFAR100.
\section{Attacking trained networks}
\label{appendix:attack_train}
We study how simply training a neural network, without any noise involved, can protect against gradient-based attacks. Since an attacker does not need to use a fully trained network, this scenario cannot be considered typical. However, it can help to empirically understand how the gradients carry training-data information.
We attack the networks trained for experiments in Section \ref{sec:utility} with DLG and Inverting Gradients methods. We see from the images in Figure \ref{fig:trained_dlg} that, even though training does not fully prevent the attacks, it affects their efficiency.
The same conclusion is reached by looking at the numbers in Table \ref{tab:privacy_attack_trained}. In almost all cases, the SSIM values are sharply lower for trained models. Higher MSE scores are also observed in trained models.
One intuitive explanation is that gradients are used to update the model's weights. For trained networks, there is little to update, so the gradients at the beginning of the training are more prone to leak information.
\begin{table}[h!]
\centering
\resizebox{\columnwidth}{!}{%
\begin{tabular}{llllllll}\hline
& & \multicolumn{3}{c}{SSIM} & \multicolumn{3}{c}{MSE} \\ \hline
\textbf{Model} & \textbf{Weights} & \textbf{MNIST} & \textbf{CIFAR} &\textbf{LFW} & \textbf{MNIST} & \textbf{CIFAR}& \textbf{LFW} \\
\hline
LeNet & Dummy & 0.820 & 0.840 & 0.939 & 0.000& 0.000 & 0.000 \\
LeNet & Trained & 0.717 & 0.501 & 0.216 & 0.000 & 0.017 & $\sim$107e4 \\
MLP & Dummy & 1.000 & 0.760 & 0.820 & 0.000 & 0.000 & 0.000 \\
MLP & Trained & 0.040 & 0.000 & 0.216 & $\sim$836e5 & $\sim$19e5 & $\sim$ 107e4 \\
\hline
LeNet & Dummy & 0.065 & 0.015 & 0.188 & 0.778 & 1.015 & 0.086 \\
LeNet & Trained & 0.019 & 0.054 & 0.197 & 1.211 & 1.117 & 0.079 \\
MLP & Dummy & 0.321 & 0.479 & 0.528 & 0.445 & 0.076 & 0.036 \\
MLP & Trained & 0.128 & 0.024 & 0.564 & 0.944 & 1.586 & 0.037\\\hline
\end{tabular}%
}
\caption{\textmd{Reconstruction attack metrics amongst trained and untrained models. The top four rows refer to the DLG attack, and the bottom four refer to Inverting Gradients.}}
\label{tab:privacy_attack_trained}
\end{table}
\begin{figure*}[h]
\begin{tabular}{ccccc}
\subfloat[Original MNIST]{\includegraphics[width=0.105\textwidth]{images/DLG/MNIST/original_0.png}} &
\subfloat[LeNet]{\includegraphics[width=0.105\textwidth]{images/DLG/MNIST/rec_noise_0.png}} &
\subfloat[Trained LeNet]{\includegraphics[width=0.105\textwidth]{images/DLG/MNIST/trained_rec_0.png}} &
\subfloat[MLP]{\includegraphics[width=0.105\textwidth]{images/DLG/MNIST/mrec_noise_0.png}} &
\subfloat[Trained MLP]{\includegraphics[width=0.105\textwidth]{images/DLG/MNIST/m_trainedrec_0.png}} \\
\subfloat[Original CIFAR]{\includegraphics[width=0.105\textwidth]{images/DLG/CIFAR/original_5.png}} &
\subfloat[LeNet]{\includegraphics[width=0.105\textwidth]{images/DLG/CIFAR/rec_no_noise.png}} &
\subfloat[Trained LeNet]{\includegraphics[width=0.105\textwidth]{images/DLG/CIFAR/trainedrec_5.png}} &
\subfloat[MLP]{\includegraphics[width=0.105\textwidth]{images/DLG/CIFAR/mrec_no_noise.png}} &
\subfloat[Trained MLP]{\includegraphics[width=0.105\textwidth]{images/DLG/CIFAR/mtrainedrec_5.png}} \\
\subfloat[Original LFW]{\includegraphics[width=0.105\textwidth]{images/DLG/LFW/original_40.png}} &
\subfloat[LeNet]{\includegraphics[width=0.105\textwidth]{images/DLG/LFW/rec_noise_0.png}} &
\subfloat[Trained LeNet]{\includegraphics[width=0.105\textwidth]{images/DLG/LFW/trained_rec_40.png}} &
\subfloat[MLP]{\includegraphics[width=0.105\textwidth]{images/DLG/LFW/mrec_noise_0.png}} &
\subfloat[Trained MLP]{\includegraphics[width=0.105\textwidth]{images/DLG/LFW/m_trainedrec_40.png}} \\
\hline
\subfloat[Original MNIST]{\includegraphics[width=0.105\textwidth]{images/INVGR/MNIST/noise_original_images_0_5-five.jpg}} &
\subfloat[LeNet]{\includegraphics[width=0.105\textwidth]{images/INVGR/MNIST/noise_rec_none0_0_images_0_5-five.jpg}} &
\subfloat[Trained LeNet]{\includegraphics[width=0.105\textwidth]{images/INVGR/MNIST/trained_noise_rec_none0_0_images_0_5-five.jpg}} &
\subfloat[MLP]{\includegraphics[width=0.105\textwidth]{images/INVGR/MNIST/mnoise_rec_none0_0_images_0_5-five.jpg}} &
\subfloat[Trained MLP]{\includegraphics[width=0.105\textwidth]{images/INVGR/MNIST/mtrained_noise_rec_none0_0_images_0_5-five.jpg}} \\
\subfloat[Original CIFAR]{\includegraphics[width=0.105\textwidth]{images/INVGR/CIFAR/noise_original_images_5_telephone.jpg}} &
\subfloat[LeNet]{\includegraphics[width=0.105\textwidth]{images/INVGR/CIFAR/noise_rec_none0_0_images_5_telephone.jpg}} &
\subfloat[Trained LeNet]{\includegraphics[width=0.105\textwidth]{images/INVGR/CIFAR/trainednoise_rec_none0.0_images_5_telephone.jpg}} &
\subfloat[MLP]{\includegraphics[width=0.105\textwidth]{images/INVGR/CIFAR/mnoise_rec_none0_0_images_5_telephone.jpg}} &
\subfloat[Trained MLP]{\includegraphics[width=0.105\textwidth]{images/INVGR/CIFAR/mtrained_noise_rec_none0_0_images_5_telephone.jpg}} \\
\subfloat[Original LFW]{\includegraphics[width=0.105\textwidth]{images/INVGR/LFW/noise_original_images_40_Igor_Ivanov.jpg}} &
\subfloat[LeNet]{\includegraphics[width=0.105\textwidth]{images/INVGR/LFW/noise_rec_none0_0_images_40_Igor_Ivanov.jpg}} &
\subfloat[Trained LeNet]{\includegraphics[width=0.105\textwidth]{images/INVGR/LFW/trained_noise_rec_none0_0_images_40_Igor_Ivanov.jpg}} &
\subfloat[MLP]{\includegraphics[width=0.105\textwidth]{images/INVGR/LFW/mnoise_rec_none0_0_images_40_Igor_Ivanov.jpg}} &
\subfloat[Trained MLP]{\includegraphics[width=0.105\textwidth]{images/INVGR/LFW/mtrainnoise_rec_none0_0_images_40_Igor_Ivanov.jpg}} \\
\end{tabular}
\caption{\textmd{Reconstructed images after DLG (top three rows) and Inverting Gradients (bottom three rows) after 1000 iterations.}}\label{fig:trained_dlg}
\end{figure*}
\section*{Acknowledgements}
\bibliographystyle{ACM-Reference-Format}
Protostellar disks play a crucial role in both star and planet formation, but details of how these disks form remain unclear. Historically, disk formation was understood as a simple consequence of angular momentum conservation \citep{bod95}. However, the picture becomes more complicated when the magnetic field is considered. Ideal magnetohydrodynamics (MHD) simulations have shown that as a rotating magnetized dense core collapses, the infalling material drags the magnetic field inward, pinching the magnetic field lines and thereby greatly increasing magnetic tension within the protostellar envelope. This allows the magnetic field to transport a significant amount of angular momentum outward, known as magnetic braking, and suppress the formation of a disk \citep{all03,gal06,mel08}.
Several ideal MHD simulations of collapse of dense cores have suggested that inclusion of misalignment between the magnetic field and rotational axis of a dense core can greatly alter the final configuration of the magnetic field and reduce the efficiency of magnetic braking \citep{hen09,joo12,li13}. Some simulations have also suggested that the misalignment can instead increase the efficiency of magnetic braking \citep{mat04,tsu18}. However, \citet{hir20} demonstrated that this is likely because they simulate the very early accretion phase. Alternatively, non-ideal MHD simulations suggest that non-ideal MHD effects, namely ohmic dissipation, ambipolar diffusion, and the Hall effect, can greatly reduce the accumulation of magnetic flux in the inner region and thus, also allow the formation of a rotationally supported disk around a protostar \citep{inu10,mac14,mas16,wur19,hir20}. \citet{hir20} further show in their simulations with non-ideal MHD effects that the misalignment promotes the formation of larger disks in the later phase.
Observationally, the role of this misalignment on angular momentum transfer is not yet well understood.
For a sample of $\sim20$ protostars, \citet{gal20} compared the misalignments between the magnetic fields in the protostellar envelopes at a few thousand au scale and the outflow axes with the magnitudes of the velocity gradients in the protostellar envelopes at a $\sim5,000$ au scale, where the outflow axes were adopted as a proxy for the rotational axes of the protostellar sources. They found a positive correlation between the misalignment and the velocity gradient, which could suggest that a larger misalignment reduces the efficiency of magnetic braking. On the other hand, \citet{yen21arxiv} compared the sizes and fluxes of a sample of $\sim50$ protostellar disks observed in the 0.87 mm continuum emission with misalignment between their rotational axes and core-scale magnetic fields and found no significant correlations. This could suggest that misalignment does not play a crucial role in disk formation.
To investigate how dynamically important misalignment between the magnetic field and rotational axis in a dense core is in the star formation process, we studied $\sim1,000$ au envelope-scale kinematics in synergy with $\sim4,000$ au core-scale magnetic field orientations, for a sample of 32 Class 0 and I protostars in the Perseus cloud. The gas kinematics was analysed using C$^{18}$O (2--1) data at a resolution of $\sim$600~au taken by the Mass Assembly of Stellar Systems and their Evolution with the Submillimeter Array (SMA) survey (MASSES) \citep{ste19}, as described in Section \ref{sec:kinmatics}. The MASSES survey also measured outflow orientations in a subset of their sample \citep{ste17}, which can be taken as a proxy for the rotational axes of these systems \citep[e.g.,][]{cia10}. We compared the outflow orientations with the magnetic field orientations inferred from the $850~\micron$ polarization data at a resolution of $\sim3500$~au taken by the B-fields In STar-forming Region Observations (BISTRO) survey with the James Clerk Maxwell Telescope (JCMT) \citep{war17,cou19,doi20} as well as regular projects, as described in Section \ref{sec:field}. Comparisons between the gas kinematics and the magnetic field morphology, before and after accounting for projection and measurement uncertainties, are discussed in Sections \ref{sec:InAnal} and \ref{sec:FiAnal}, respectively. Finally, Section \ref{sec:discussion} discusses possible physical interpretations of our key findings, and Section \ref{sec:conclusion} concludes this paper.
\section{Data}
\subsection{Sample Selection}
\label{sec:sample}
The sample of this study is selected from the SMA MASSES survey \citep{ste19}. This survey observed 1.3 mm and 850 $\micron$ continuum emission and several molecular lines towards 74 known Class 0 and I protostars in the Perseus molecular cloud. In a subset of 57 sources, CO outflows were detected, and the outflow orientations were measured \citep{ste17}. The Perseus molecular cloud has also been observed with JCMT \citep{war17,cou19,doi20} to trace magnetic field structures with polarized submillimeter continuum emission. We selected sources with detections of outflows in CO and protostellar envelopes in C$^{18}$O with the SMA and polarized 850 $\mu$m continuum emission within a radius of $4,000$~au with JCMT. These criteria led to a sample of 32 sources.
\subsection{Gas Kinematics} \label{sec:kinmatics}
C$^{18}$O can trace protostellar envelopes \citep{oha97,gau20}. In this study we use C$^{18}$O ($2-1$) emission line data taken as part of the MASSES survey \citep{ste19} to analyse the gas kinematics in the protostellar envelopes at a $\sim1,000$~au scale. These data sets have a spectral resolution of $\sim 0.2$~km~s$^{-1}$, a spatial resolution of $\sim 2\arcsec$ ($\sim600~$au), and a maximum recoverable angular scale of $\sim 24\arcsec$ ($\sim7,000~$au). The details of the observations and the noise levels of the C$^{18}$O data are described in \citet{ste19}. The same data have also been used to study the morphology, flux, and velocity gradient of the C$^{18}$O emission in the protostellar envelopes at a larger scale by \citet{hem21}.
Using these data cubes, we constructed integrated intensity (moment 0) and intensity-weighted mean velocity (moment 1) maps as shown in Figure \ref{fig:maps1} and Figure \ref{fig:maps2}. To quantify the overall velocity gradients in the protostellar envelopes, we fitted the moment 1 maps with a two-dimensional linear model \citep{goo93}. We note that the velocity gradient in a protostellar envelope could change as a function of spatial scale. For a uniform comparison of the gas kinematics in our sample, all the velocity gradients were measured in the central regions within a radius of $1,000$~au in the sample sources, which is more than three times larger than the spatial resolutions.
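For illustration, a minimal sketch of this fit; the least-squares formulation follows the plane fit of \citet{goo93}, while the grid conventions and the position angle returned here are our assumptions:
\begin{verbatim}
import numpy as np

def fit_velocity_gradient(v_map, x_au, y_au, r_max=1000.0):
    # Least-squares fit of v = v0 + a*x + b*y to the moment 1 map,
    # using only pixels within r_max (au) of the protostar.
    xx, yy = np.meshgrid(x_au, y_au)
    sel = np.isfinite(v_map) & (np.hypot(xx, yy) <= r_max)
    A = np.column_stack([np.ones(sel.sum()), xx[sel], yy[sel]])
    (v0, a, b), *_ = np.linalg.lstsq(A, v_map[sel], rcond=None)
    grad = np.hypot(a, b)              # gradient magnitude, km/s per au
    pa = np.degrees(np.arctan2(a, b))  # direction of the gradient
    return grad, pa
\end{verbatim}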
The measured overall velocity gradients could trace a combination of rotational, infalling, and turbulent motions and even outflows \citep{gau20}. Assuming that a protostellar envelope is axisymmetric and the associated outflow is parallel to its rotational axis, the infalling and rotational motions tend to induce velocity gradients along and perpendicular to the outflow axis in the protostellar envelope, respectively \citep{yen13,pin19}.
Although a bipolar outflow also exhibits a velocity gradient along the outflow axis, the outflow is expected to have a different velocity structure from the infalling envelope, where the outflow and infalling velocities tend to decrease and increase with decreasing radii, respectively \citep{arc07}.
ALMA observations at higher resolutions of $\sim$100 au toward several protostars show that the C$^{18}$O and CO emission lines trace different velocity structures at a 1,000 au scale, and the C$^{18}$O emission is more sensitive to infalling and rotating envelopes \citep{aso15,yen17,lee19}.
Indeed we also found distinct velocity structures along the outflow axes in the C$^{18}$O ($2-1$) and CO ($3-2$) position-velocity (PV) diagrams using the SMA MASSES data (Figure \ref{fig:pv2}, Appendix \ref{sec:PVs}).
Thus, we extracted velocity profiles along and perpendicular to the outflow axes from the moment 1 maps and measured velocity gradients, to assess infalling and rotational motions in the protostellar envelopes.
Our measured velocity gradients along and perpendicular to the outflow axis agree with the velocity structures seen in the PV diagrams (Figure \ref{fig:pv}, Appendix \ref{sec:PVs}).
\subsection{Magnetic Field} \label{sec:field}
We used the JCMT polarimetric data at 850 $\mu$m to measure magnetic field structures in the dense cores in the Perseus molecular cloud.
The JCMT polarimetric data were taken using the polarimeter POL-2 with the large program BISTRO survey \citep[M16AL004 and M17BL011;][]{war17,cou19,doi20} and the regular projects (M17AP073 and M17BP058; PI: W.~Kwon). The angular resolution of JCMT at 850 $\mu$m is $\sim14\arcsec$, corresponding to $\sim3500$~au.
We obtained the catalog of the Stokes Q and U intensities of the polarization detections above $3\sigma$ in the Perseus molecular cloud from \citet{yen20}. The polarization data reduction was done following the procedures in \citet{pat17}. The pixel size of one detection is 12\arcsec. Polarization angles were calculated as $0.5\arctan(U/Q)$,
where $U$ and $Q$ are the Stokes parameters. These position angles were further rotated by $90^{\circ}$ to infer magnetic field orientations.
MHD simulations \citep{joo12,li13,hir20} studied angular momentum transportation during the collapse of dense cores with different initial magnetic field orientations. Unlike the envelope-scale magnetic field, which likely gets significantly deformed by the collapse \citep{gir06,mau18,kwo19}, the core-scale magnetic field structures are expected to remain relatively unaffected. Therefore, these larger-scale magnetic field orientations, traced with the JCMT observations, are suitable to compare with those in the simulations.
Using the magnetic field information (as shown in Figure \ref{fig:maps1} and Figure \ref{fig:maps2}), we measured core-scale field orientations and angular dispersions. \citet{cur10} analysed dense cores in the Perseus region using 850 $\mu$m SCUBA maps and found the typical core radius to be $\sim0.02$~pc or $\sim4,000$~au.
Thus, for each source in our sample, we first computed the mean Stokes Q and U intensities from the detections within a radius of 4,000 au from the protostar and then estimated the overall orientation of the magnetic field. The standard deviation of the position angles of the individual field orientations was taken as the angular dispersion. According to the Davis–Chandrasekhar–Fermi method, this dispersion is proportional to the ratio of the turbulence to the plane-of-sky magnetic field strength \citep{dav51,cha53}, and thus, it can be considered a proxy for the field strength, which is another important parameter in the MHD simulations. Errors in both of these quantities were estimated with error propagation. The typical error is $\lesssim 5\arcdeg$.
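A minimal Python sketch of these steps, with function names of our own choosing, could look as follows:
\begin{verbatim}
# Minimal sketch (our own illustration): magnetic field angles from Stokes Q
# and U, the overall core-scale orientation from the mean Stokes parameters,
# and the angular dispersion of the individual detections.
import numpy as np

def field_angle_deg(Q, U):
    pol = 0.5 * np.degrees(np.arctan2(U, Q))   # polarization position angle
    return (pol + 90.0) % 180.0                # rotate by 90 deg -> B field

def core_field(Q, U):
    """Q, U: Stokes intensities of the detections within 4,000 au."""
    mean_pa = field_angle_deg(np.mean(Q), np.mean(U))   # overall orientation
    pa = field_angle_deg(np.asarray(Q), np.asarray(U))  # individual segments
    # wrap the 180-deg-ambiguous angles to within +/-90 deg of the mean
    dev = (pa - mean_pa + 90.0) % 180.0 - 90.0
    return mean_pa, dev.std()                  # orientation, angular dispersion
\end{verbatim}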
All the measurements of the velocity gradients in the protostellar envelopes and the orientations and angular dispersions of the magnetic fields in the dense cores analyzed in the present paper are presented in Appendix \ref{sec:measurements}.
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.8]{figures/smallwidthobjs.png}
\caption{Magnetic field orientations observed with JCMT (red segments) overlaid on the C$^{18}$O moment 1 (color) and 0 (contours) maps obtained with the SMA observations.
The minimum separation between the red segments is 12$\arcsec$, comparable to the JCMT resolution of 14$\arcsec$ at 850~$\mu$m.
Purple stars represent locations of the protostars and black arrows originating from them show outflow orientations.
Black segments in the bottom-left corners depict a length scale of 1,000 au. Brown ellipses in the bottom-right corners depict beam sizes of the C$^{18}$O data.
In each panel, the outermost contour represents the $3\sigma$ noise level in the moment 0 map and subsequent inner contour levels increase by a factor of two.}
\label{fig:maps1}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.8]{figures/bigwidthobjs.png}
\caption{Same as Fig.~\ref{fig:maps1} but with a different color scale for the moment 1 maps.}
\label{fig:maps2}
\end{figure}
\section{Analysis and Results}
\subsection{Observed velocity gradients vs magnetic field structures}
\label{sec:InAnal}
We inspected relations between the velocity gradients at the envelope scale (Section \ref{sec:kinmatics}) and the misalignment between the core-scale magnetic field orientations (Section \ref{sec:field}) and the outflows. We assume the rotational axes of the protostellar envelopes to be the same as the outflow axes, and the MHD simulations suggest that they are indeed mostly parallel \citep{cia10,mac20}. We also note that the rotational axis of the central protostar-disk system, where outflows are launched, in a protostellar source might not be perfectly aligned with the rotational axis of its protostellar envelope. Observations have indeed found misaligned disks and protostellar envelopes around Class 0 and I protostars \citep{lee19,sai20}, which could suggest misaligned rotational axes of the disk and the envelope. Nevertheless, the observed misalignment angles between the disks and the protostellar envelopes are typically small and less than 10\arcdeg--20\arcdeg, which is comparable to the uncertainty in the outflow directions \citep{ste17}.
Three different velocity gradients in the protostellar envelopes are discussed in the present paper, namely (1) {\it overall velocity gradient}, which traces overall envelope-scale kinematics (hereafter overall gradient), (2) {\it velocity gradient perpendicular to the outflow}, which is expected to be proportional to rotational motion (hereafter rotational gradient), and (3) {\it velocity gradient parallel to the outflow}, which is expected to be proportional to infalling motion. In addition, we also computed normalized rotational gradients by dividing the rotational gradients by the velocity gradients parallel to the outflow axes, which are taken as a proxy for strength of the rotational motion for the given infalling motion in a protostellar envelope. We note that the velocity gradients perpendicular to and along the outflow axis might not completely trace rotational and infalling motions in a protostellar envelope \citep[e.g.,][]{tob12_grad}. Nevertheless, these velocity gradients can still be considered as upper limits of rotational and infalling velocities because faster rotational and infalling motions are expected to induce larger velocity gradients perpendicular to and along the outflow axis in a protostellar envelope \citep{yen13,pin19,gau20}.
Figure \ref{fig:corr1} shows the overall, rotational, and normalized rotational gradients in the protostellar envelopes as a function of the misalignment.
We use Spearman rank correlation analysis to study the correlation between the velocity gradients and misalignment angles. The Spearman correlation coefficients and corresponding p-values, as computed using the \textit{scipy} package in Python, are 0.12 and 0.52 for the overall gradient, 0.22 and 0.23 for the rotational gradient, and 0.35 and 0.05 for the normalized rotational gradient, respectively.
P-values here refer to the probability of getting these correlation coefficients from a random uncorrelated sample, and a low p-value of $\lesssim 0.05$ suggests that the observed correlation is significant.
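For concreteness, a minimal sketch of this test is shown below; the arrays are random placeholders standing in for the 32 measured values rather than our actual measurements.
\begin{verbatim}
# Minimal sketch of the correlation test, with placeholder arrays standing in
# for the 32 measured misalignment angles and normalized rotational gradients.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
misalignment = rng.uniform(0.0, 90.0, 32)        # placeholder data (deg)
norm_rot_grad = rng.lognormal(0.0, 1.0, 32)      # placeholder data

rho, pval = stats.spearmanr(misalignment, norm_rot_grad)
print(f"Spearman rho = {rho:.2f}, p = {pval:.2f}")
\end{verbatim}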
The coefficients and p-values for the overall and rotational gradients suggest that the gas kinematics in the protostellar envelopes at a 1,000 au scale does not strongly depend on the misalignment. In contrast, with a p-value of $0.05$, the normalized rotational gradient likely has a significant dependence on the magnetic field orientation. This can also be seen in Figure \ref{fig:corr1}, where unlike the overall (top panel) and rotational gradients (middle panel), the normalized rotational gradient (bottom panel) shows an increase from $\sim0.1$--$1$ for the projected magnetic fields roughly parallel to the outflow axes to $\sim1$--$10$ for nearly orthogonal configurations.
It is worth emphasizing that this correlation -- which emerges only once the ratio is formed between the perpendicular and the parallel velocity gradients -- involves a transition from ratios below one to ratios above one. As such, Figure \ref{fig:corr1}c reveals two different regimes: a more infall-dominated regime with the magnetic field closely aligned with the outflow axis within $\sim 30^{\circ}$, and a more rotation-dominated regime where the ratios grow to larger than one, likely enabled by the larger misalignment angles. These different regimes go unnoticed in Figure \ref{fig:corr1}b, where only the absolute magnitude of the rotational velocity gradient is considered. This demonstrates that for such studies it is crucial to form physically motivated quantities that can capture the dynamics in sources that differ in mass and velocity gradient. Possible physical interpretations of these observed trends are discussed in more detail in Section \ref{sec:MisVSVel}.
We also analysed these velocity gradients at a $1,000$~au scale as functions of the angular dispersion of the magnetic fields within $4,000$~au of the protostars, as shown in Figure \ref{fig:corr2}. The Spearman correlation coefficients and p-values for these relations are 0.06 and 0.74 for the overall gradient, 0.06 and 0.75 for the rotational gradient, and -0.07 and 0.72 for the normalized rotational gradient, respectively. Since all the p-values are $>0.7$, we did not find any significant dependence of the gas kinematics in the protostellar envelopes on the angular dispersion of the magnetic fields. In addition, we also found that there is no clear dependence of the velocity gradients and the orientations and angular dispersions of the magnetic fields on the bolometric temperatures of the protostars, which can be an evolutionary indicator for protostellar sources \citep{che95}.
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.8]{figures/UncorrectedCorrMis.png}
\caption{Overall velocity gradient (top panel), velocity gradient perpendicular to the outflow (middle panel), and velocity gradient perpendicular to the outflow divided by that along the outflow axis (bottom panel) in the protostellar envelopes at a 1,000 au scale, as functions of the misalignment between the magnetic field and outflow axis in dense cores. Marker colours represent bolometric temperatures of the corresponding protostars.}
\label{fig:corr1}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.8]{figures/UncorrectedCorrDisp.png}
\caption{Overall velocity gradient (top panel), velocity gradient perpendicular to the outflow (middle panel), and velocity gradient perpendicular to the outflow divided by that along the outflow axis (bottom panel) in the protostellar envelopes at a 1,000 au scale, as functions of the angular dispersion of the magnetic field. Marker colours represent bolometric temperatures of the corresponding protostars.}
\label{fig:corr2}
\end{figure}
\subsection{Corrections for projection effects}
\label{sec:corrs}
The simple correlation coefficient analysis in Section \ref{sec:InAnal} does not account for uncertainties in the measurements of both the velocity gradients and the angles between the outflows and the magnetic fields.
Besides the measurement uncertainties, the angles measured between the outflows and the magnetic field orientations are angles projected on the plane of the sky (POS), and the actual misalignment in three-dimensional (3D) space might differ significantly. \citet{gal20} demonstrated that the misalignment is likely underestimated due to the projection effect, and this effect is especially prominent for smaller misalignment angles ($\lesssim40^{\circ}$). As derived in Appendix \ref{sec:angles}, for a given projected angle, the actual angle in 3D space can be determined if the inclinations of the magnetic field and the outflow relative to POS are known, as
\begin{equation}
\label{equ:angle}
\cos \theta = \cos \alpha \times \cos \beta \times \cos\lambda + \sin\alpha\times \sin\beta,
\end{equation}
where $\theta$ denotes the actual angle in 3D, $\lambda$ is the angle projected on POS, $\alpha$ denotes the inclination of the outflow with respect to POS, and $\beta$ denotes the inclination of the magnetic field with respect to POS. Because $\alpha$ and $\beta$, i.e. the 3D orientations of the magnetic fields and the outflows in our sample, are not known, we assume a probability distribution of these angles and estimate the underlying probability distribution of the actual angle ($\theta$) from the observed angle ($\lambda$).
The outflow axis and the magnetic field are more likely to be closer to POS in our sample. This is because (1) for a uniform distribution of unit vectors in 3D space, more vectors will lie around the equator (parallel to POS) than near the pole (along the line of sight) and (2) we use observational results of magnetic field and outflow components projected on POS, inheriting a bias against vectors perpendicular to POS. For simplicity, we generated the distributions of $\alpha$ and $\beta$ assuming that the outflow and magnetic field orientations are uniformly distributed in 3D space. For this, $\alpha$ and $\beta$ should follow a cosine distribution, i.e., $P(\beta) \propto \cos{\beta}$, where $P(\beta)$ is the probability of the inclination of the magnetic field being equal to $\beta$. Nevertheless, the actual distributions of these angles are not known and can be different from our assumptions. We also repeated our analysis with differently assumed distributions and the final results do not change significantly, as quantified in Appendix \ref{sec:alphabeta}.
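A minimal sketch of this sampling and deprojection, assuming the cosine distribution above and inverse-transform sampling, could look as follows:
\begin{verbatim}
# Minimal sketch: inverse-transform sampling of inclinations with
# P(angle) ~ cos(angle) on [0, 90] deg, and deprojection of a projected
# angle lambda to 3D via the deprojection equation above.
import numpy as np

def sample_cosine_inclination(rng, n):
    # The CDF of P(a) = cos(a) on [0, pi/2] is sin(a), so a = arcsin(u)
    return np.arcsin(rng.uniform(0.0, 1.0, n))

def deproject_angle(lam_deg, alpha, beta):
    lam = np.radians(lam_deg)
    ct = (np.cos(alpha) * np.cos(beta) * np.cos(lam)
          + np.sin(alpha) * np.sin(beta))
    return np.degrees(np.arccos(np.clip(ct, -1.0, 1.0)))

rng = np.random.default_rng(1)
alpha = sample_cosine_inclination(rng, 10000)    # outflow inclination
beta = sample_cosine_inclination(rng, 10000)     # field inclination
theta = deproject_angle(30.0, alpha, beta)       # 3D angles for lambda = 30 deg
\end{verbatim}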
Similarly, we corrected the projection effects on the rotational gradients with the assumption of the probability distributions of $\alpha$. When a rotational axis is inclined with respect to POS ($\alpha>0^{\circ}$), the rotational gradient tends to be underestimated with observations. The difference between the actual and observed gradients increases when the protostellar envelope of a source is more face on. For a given $\alpha$, the actual rotational gradient (${\rm VG}_{true}$) can be estimated as,
\begin{equation}
\label{equ:rot}
\text{VG}_{true} = \text{VG}_{obs}/\cos{\alpha},
\end{equation}
where $\text{VG}_{obs}$ is the rotational gradient observed in the moment 1 maps. We did not apply this correction to the normalized rotational gradient because the projection effects ($\cos{\alpha}$) on the velocity gradients parallel and perpendicular to the outflow axis cancel out.
The Perseus region has also been observed in the continuum emission at 8 mm, 1 cm, 4 cm, and 6.6 cm with the VLA Nascent Disk and Multiplicity (VANDAM) survey with the Karl G.\ Jansky Very Large Array (VLA), with angular resolutions down to $\sim 0.06\arcsec$ \citep{tob15}. For ten sources in our sample, their disks were resolved with the VANDAM survey, and their inclination angles were measured \citep{seg18}. For these sources, instead of using the assumed distributions, the disk inclination angles were converted to $\alpha$ values and used to constrain the probability distributions of $\theta$ and deprojected rotational gradients. We note that the disks and the envelopes could be misaligned. Nevertheless, adopting the disk inclinations would still provide better constraints than our simply assumed distribution because of the typically small misalignment angle (if present) of $<$10\arcdeg--20$\arcdeg$ between disks and envelopes \citep[e.g.,][]{lee19,sai20}.
\subsection{Deprojected velocity gradients and misalignment angles}
\label{sec:FiAnal}
In order to account for the measurement and systematic uncertainties discussed in Section \ref{sec:corrs}, we simulated expected probability distributions of deprojected velocity gradients and misalignment angles from our observational measurements. Firstly, for each measurement, we generated 10,000 simulated data points following a normal distribution with the observed values as means and their measurement uncertainties as standard deviations. Then we corrected the projection effects on these simulated data points.
The probability distribution of deprojected misalignment angles ($\theta$) is estimated from the observed angles ($\lambda$) with Equation \ref{equ:angle} on the assumption of the probability distributions of $\alpha$ and $\beta$, as discussed in Section \ref{sec:corrs}. Similarly, the distributions of deprojected rotational gradients are also estimated from the distributions of the observed rotational gradients following Equation \ref{equ:rot}.
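For a single source, a minimal sketch of this Monte Carlo procedure, reusing the two helper functions from the previous sketch and with placeholder measurement values, could look as follows:
\begin{verbatim}
# Minimal sketch for one source, reusing sample_cosine_inclination and
# deproject_angle from the sketch above: perturb the measurements with
# Gaussian noise, deproject the angle, and deproject the gradient.
import numpy as np

rng = np.random.default_rng(2)
n = 10000
lam_obs, lam_err = 40.0, 5.0       # projected misalignment (deg), placeholder
vg_obs, vg_err = 30.0, 6.0         # rotational gradient (km/s/pc), placeholder

lam = rng.normal(lam_obs, lam_err, n)
vg = rng.normal(vg_obs, vg_err, n)
alpha = sample_cosine_inclination(rng, n)
beta = sample_cosine_inclination(rng, n)

theta = deproject_angle(lam, alpha, beta)   # deprojected misalignment angles
vg_true = vg / np.cos(alpha)                # deprojected rotational gradients
\end{verbatim}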
Finally, we have 10,000 simulated values of the overall velocity gradient, deprojected rotational gradient, normalized rotational gradient, and deprojected misalignment for each of the 32 sources. Figure \ref{fig:corr_deproj} shows the inferred (or simulated) probability distributions of the velocity gradients with respect to the angles between the magnetic fields and the rotational axes in 3D space. The overall gradients (top panel) do not show any clear trend with respect to the misalignment. The rotational gradients (middle panel) seem to narrow down slightly to higher values with increasing misalignment, as there are almost no small rotational gradients ($<10$~km~s$^{-1}$~pc$^{-1}$) when misalignment angles are $>50\arcdeg$. However, any overall trend is still not obvious, and the lack of data points in the bottom right corner could just be due to our limited sample size. The normalized rotational gradient (bottom panel) seems to display a more prominent positive trend with respect to the misalignment, with the typical ratio smoothly increasing from $<1$ for smaller misalignment angles to $>1$ for larger angles.
Quantitatively, the correlation coefficient analysis is not straightforward for these simulated distributions: the number of data points is artificially inflated, so p-values estimated with standard methods would come out misleadingly small. Instead, we made groups of 32 simulated data points, where for each group we randomly picked one simulated data point corresponding to each source in our sample of 32 sources. We made 10,000 such groups, exhausting all the simulated data points. For each of these groups, we calculated Spearman correlation coefficients. The mean values of the correlation coefficients were taken as the representative values of our simulated distributions and the corresponding standard deviations as the uncertainties. The correlation coefficients were then estimated to be $0.11 \pm 0.14$ for the overall gradient, $0.14 \pm 0.14$ for the rotational gradient, and $0.22 \pm 0.14$ for the normalized gradient. The fractions of the groups showing positive correlation coefficients are 0.77 for the overall gradient, 0.85 for the rotational gradient, and 0.94 for the normalized rotational gradient.
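A minimal sketch of this grouping procedure could look as follows, where the simulated values are assumed to be stored in arrays of shape (32 sources, 10,000 draws); shuffling each source's draws independently and reading off columns exhausts all draws in 10,000 groups:
\begin{verbatim}
# Minimal sketch of the grouping procedure: one Spearman coefficient per
# group of 32 simulated values (one draw per source).
import numpy as np
from scipy import stats

def group_spearman(theta_sim, vg_sim, rng):
    """theta_sim, vg_sim: arrays of shape (n_sources, n_draws)."""
    th = rng.permuted(theta_sim, axis=1)     # independent shuffle per source
    vg = rng.permuted(vg_sim, axis=1)
    return np.array([stats.spearmanr(th[:, k], vg[:, k])[0]
                     for k in range(th.shape[1])])

# Usage: rhos = group_spearman(theta_sim, vg_sim, np.random.default_rng(3))
#        print(rhos.mean(), rhos.std(), (rhos > 0).mean())
\end{verbatim}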
To test the significance of these correlations, we generated artificial uncorrelated samples.
We randomly permuted our simulated velocity gradients with respect to our misalignment angles, and we repeated the same process as discussed above to obtain the distributions of the correlation coefficients for this random uncorrelated artificial data. We found that for the random data, the probabilities of obtaining correlation coefficients greater than or equal to the mean correlation coefficients of our observed sample are $0.27$ for the overall gradient, $0.22$ for the rotational gradient, and $0.12$ for the normalized gradient.
These values suggest that the normalized rotational gradient is most strongly correlated with the misalignment, followed by the rotational gradient and then the overall gradient.
This is in agreement with the trends observed in Figure \ref{fig:corr_deproj} and results from the original analysis (Section \ref{sec:InAnal}).
Our results suggest that the normalized rotational gradient is possibly correlated with the misalignment with a Spearman correlation of 0.22 and a confidence level of 88\% after considering the projection effect. Nevertheless, a larger sample is needed to have a more robust constraint on the correlation coefficient.
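A minimal sketch of this permutation test, reusing \texttt{group\_spearman} from the previous sketch, could look as follows:
\begin{verbatim}
# Minimal sketch of the significance test: permuting the sources' gradients
# breaks any real association; the fraction of null coefficients at or above
# the observed mean plays the role of the quoted probabilities (e.g., 0.12
# for the normalized gradient).
import numpy as np

def permutation_probability(theta_sim, vg_sim, rho_obs_mean, rng):
    vg_null = vg_sim[rng.permutation(vg_sim.shape[0]), :]  # scramble sources
    rhos_null = group_spearman(theta_sim, vg_null, rng)
    return np.mean(rhos_null >= rho_obs_mean)
\end{verbatim}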
Moreover, we can see a characteristic range in the simulated probability distributions (innermost contours) for all the quantities. The distribution of the rotational gradients and misalignment angles, as shown in Figure \ref{fig:dist}, peaks around $\sim30$~km~s$^{-1}$~pc$^{-1}$ and $45 \arcdeg$, respectively. The distributions shown in Figure \ref{fig:dist} also include an additional 22 sources with velocity gradient measurements but without magnetic field information and 7 more sources with magnetic field information but without velocity gradient measurements.
Not including these additional sources does not significantly change the final results. The characteristic range of the rotational gradients could be due to the underlying probability distribution of the angular momentum in the protostellar envelopes ($\sim1,000$~au), as discussed further in Section \ref{sec: VelDist}.
Our polarization data are a subset of a larger sample of 62 sources from \citet{yen20}. \citet{yen20} identified dense cores in the JCMT $850~\micron$ maps using the clump identification algorithm \textit{clumpfind} \citep{wil94} and calculated misalignment angles within these detected cores. For different assumed distributions of misalignment angles in 3D space, they simulated distributions of projected angles and compared them with the observed distribution. Despite this slightly different characterization of the core-scale magnetic field and handling of projection effects, \citet{yen20} inferred a similar distribution of deprojected angles, with more sources having intermediate misalignment angles ($\sim 30\arcdeg \textup{--} 60\arcdeg$). Also, our distribution of the deprojected angles depends on the assumed distributions of $\alpha$ and $\beta$ (Section \ref{sec:corrs}). Assuming other distributions of $\alpha$ and $\beta$ tends to shift the peak of the distribution of the misalignment angles towards larger values,
as quantified in Appendix \ref{sec:alphabeta}.
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.8]{figures/CorrectedCorr.png}
\caption{Simulated probability distributions of overall gradient (top panel), deprojected rotational gradient (middle panel), and normalized rotational gradient (bottom panel) in the protostellar envelopes with respect to deprojected misalignment angles between the core-scale magnetic fields and outflow axes, as discussed in Section \ref{sec:FiAnal}. Red circles represent the original measurements. Contours represent probability levels: 0.91, 0.68, 0.52 for the top panel, 0.89, 0.61, 0.38 for the middle panel, and 0.86, 0.60, 0.27 for the bottom panel.
}
\label{fig:corr_deproj}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.8]{figures/ParamsDist.png}
\caption{Histograms of the simulated deprojected rotational gradients in the protostellar envelopes (left panel, in a logarithmic scale) and deprojected misalignment angles between the core-scale magnetic fields and the outflow axes (right panel). These simulated probability distributions account for the measurement and projection uncertainties, as discussed in Section \ref{sec:FiAnal}.}
\label{fig:dist}
\end{figure}
\section{Discussions} \label{sec:discussion}
\subsection{Misalignment and angular momentum transportation} \label{sec:MisVSVel}
Ideal MHD simulations of the collapse of dense cores with their rotational axes aligned with the magnetic fields suggest that realistic levels of the magnetic field can greatly suppress rotation in the inner protostellar envelopes \citep{all03,gal06,mel08}. Further studies incorporated misalignment between the magnetic field and rotational axis of a dense core, and found that the misalignment can reduce the efficiency of magnetic braking and thus allow inward angular momentum transportation to be more efficient in a collapsing dense core \citep{hen09,joo12,li13}. If the magnetic field orientation indeed significantly affects gas kinematics as suggested by these simulations, we expect to see stronger velocity gradients, particularly rotational gradients, for systems with larger misalignments.
However, as discussed in Section \ref{sec:InAnal}, we do not find any significant correlation between the overall or rotational gradients with respect to the misalignment. These results do not change even after accounting for the projection effects and the measurement uncertainties (Section \ref{sec:FiAnal}). This suggests that misalignment between the magnetic field and rotational axis in a dense core is not a dominant factor driving the gas kinematics or the amount of angular momentum at the envelope scale.
This is different from the results of \citet{gal20}. For a sample of $\sim$20 protostars, \citet{gal20} measured magnetic field orientations in the protostellar envelopes on a scale of a few thousand au with SMA. They compared the misalignments between the magnetic fields and the outflow axes with the magnitudes of the overall velocity gradients at a $\sim5,000$~au scale, where some were taken from the literature \citep{wis01,sai99,tob11,tan11,gau20} and others were derived by \citet{gal20} using the published data \citep{mat08,hua13,tob18}.
They primarily used the N$_{2}$H$^+$ emission to trace the gas kinematics.
They found a positive correlation between them with a Pearson correlation coefficient of 0.68, which could suggest greater misalignment results in greater angular momentum in protostellar envelopes.
A subset of nine sources from \cite{gal20} is also in our sample. For these sources, the misalignment angles in \cite{gal20} are generally consistent with those derived from our data within the error bars, with a median difference of $11\arcdeg$. However, we found that our measured rotational gradients at a $1,000$~au scale are only weakly correlated with the velocity gradients in \citet{gal20}, which are at a larger scale, with a Spearman correlation coefficient of 0.41 and a corresponding p-value of 0.24.
Our velocity gradient measurements are typically greater by a factor of $\sim5$.
In addition, \citet{hem21} also used the same C$^{18}$O data from the MASSES survey to study the gas kinematics in the protostellar envelopes in our sample. They measured overall velocity gradients on variable scales ($\sim$1,000--3,000~au), depending on the sizes of the envelopes. We compared our overall velocity gradients to their measurements and found a strong correlation with a Spearman correlation coefficient of 0.59 and a p-value of 0.0003. Our velocity gradients measured at a 1,000 au scale are typically greater than their velocity gradients measured at a larger scale.
Thus, the discrepancy between the correlation observed by \citet{gal20} and no similar correlation found in our study could be due to the different spatial scales and underlying gas motions of the measured velocity gradients. Other observations have also found that the magnitudes and directions of velocity gradients in protostellar envelopes could change from large to small scales \citep{gau20}. In order to investigate this further, the correlations should be tested with a larger sample of sources with velocity gradient measurements at multiple scales.
Another factor influencing the amount of the angular momentum in a protostellar envelope can be the mass already accreted in its protostar-disk system. In the classical picture of collapse of a dense core,
the internal distribution of the specific angular momentum is an increasing function of radius, and consequently,
the disk size increases with the enclosed mass of the central protostar-disk system \citep{ter84,bas98}. Similar trends have also been seen in non-ideal MHD simulations \citep{hen16,zha16,zha18}.
The infalling motion in a protostellar envelope around a more massive protostar-disk system is expected to be faster because of its deeper gravitational potential,
which could induce a larger velocity gradient along the outflow axis in the protostellar envelope \citep{yen13}.
Therefore, in order to delineate the role of the enclosed mass, we also normalized our rotational gradient by the velocity gradient along the outflow axis, and we found a significant correlation ($p\sim0.05$) between the normalized rotational gradient and the misalignment.
This observed correlation could suggest that for similar enclosed masses, more angular momentum is transported to protostellar envelopes in systems with greater misalignment. In other words, misalignment indeed could promote the amount of angular momentum transported to protostellar envelopes. However, it is not a dominant factor, and other parameters, like mass accretion in protostellar sources, also play an important role.
This is also in agreement with the non-ideal MHD simulations by \cite{hir20}. Along with misalignment angles, \cite{hir20} also varied the ratio of thermal-to-gravitational energy, taking it as a proxy for gravitational instability of the initial core. They found that the systems with smaller ratios form larger disks because the dense cores collapse more rapidly and gas is quickly advected to the disks. Moreover, they found that for systems with similar thermal-to-gravitational energy ratios, a more misaligned magnetic field is conducive to forming larger disks.
\subsection{Distribution of Velocity Gradients} \label{sec: VelDist}
Figure \ref{fig:dist} (left panel) shows the distribution of the simulated deprojected rotational gradients for all the 54 sources. The median rotational gradient is $\sim 29$~km~s$^{-1}$~pc$^{-1}$ which at a radius of $1,000$~au corresponds to a specific angular momentum of $\sim 6.8\times 10^{-4}$~km~s$^{-1}$~pc. This value is in agreement with the mean specific angular momentum of $\sim 6\times 10^{-4}$~km~s$^{-1}$~pc at $<1600$~au scales for a sample of 12 protostars, inferred by \citet{gau20}.
Specific angular momenta of 17 Class 0 and I sources at $<1500$~au scales estimated by \citet{yen15} are also of the order of $\sim 10^{-4}$~km~s$^{-1}$~pc.
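Assuming solid-body-like rotation so that the rotational velocity at the measured radius is ${\rm VG} \times r$, the specific angular momentum is $j = {\rm VG}\,r^{2}$; the following one-line check reproduces the value quoted above:
\begin{verbatim}
# Minimal sketch: specific angular momentum implied by a rotational gradient
# VG at radius r, j = (VG * r) * r, reproducing the ~6.8e-4 km/s pc above.
AU_PER_PC = 206264.8
r = 1000.0 / AU_PER_PC          # 1,000 au in pc
vg = 29.0                       # median rotational gradient, km/s/pc
print(f"j = {vg * r**2:.1e} km/s pc")   # -> j = 6.8e-04 km/s pc
\end{verbatim}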
Assuming a given angular momentum is efficiently transported from this 1,000 au scale to the edge of a Keplerian disk with negligible mass compared to the stellar mass in an axisymmetric system without magnetic braking, the resultant disk radius can be estimated as \citep{ulr76,ter84,bas98},
\begin{equation}
\label{equ:disk}
R_{d} = \frac{l^{2}}{GM_{*}},
\end{equation}
where $R_{d}$ is the radius of the Keplerian disk, $l$ is the specific angular momentum, $G$ is the gravitational constant, and $M_{*}$ is mass of the central protostar.
\citet{yen17} inferred a time-dependent mass accretion rate using bolometric luminosities and protostellar masses of a sample of 18 Class 0 and I protostars. We integrated this mass accretion rate over the typical lifetime for Class 0 sources of $0.26$~Myr \citep{dun15}. This gives a typical protostellar mass of $0.25$~M$_\odot$. Nevertheless, the masses of Class 0 protostars can still differ by two orders of magnitude \citep{yen17}.
Assuming the mass accretion rate is proportional to the protostellar mass, we can assume the distribution of masses of these young protostars to be similar to the mass distribution of main-sequence stars, i.e., the initial mass function (IMF). For low mass stars ($M_{*}\lesssim 1$~M$_\odot$), the IMF can be approximated as a log-normal distribution with a characteristic mass of $0.22$~M$_\odot$ and a dispersion of $\sim 0.57$~dex \citep{cha03}. As our sample Class 0 and I protostars continue to acquire more mass, we normalized this log-normal distribution of the IMF to have a mean mass of $0.25$~M$_\odot$, instead of the original mean mass of $\sim 0.6$~M$_\odot$, and adopted this normalized distribution as the mass distribution of the young protostars.
By adopting the distributions of the deprojected rotational gradients and protostellar masses, we inferred the expected distribution of disk radii with Equation \ref{equ:disk}. We found the median disk radius to be $\sim 107$~au, comparable to the geometric mean ($10^{\mu(\log{R_d})}$, where $\mu$ is the simple mean) of $\sim 92$~au. The logarithmic variance ($\sigma(\log{R_d})$, where $\sigma$ is the simple variance) of the distribution is $\sim 1.4$.
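A minimal sketch of this Monte Carlo estimate could look as follows; the spreads assumed below are illustrative placeholders rather than the distributions actually used:
\begin{verbatim}
# Minimal sketch of the disk-radius estimate via R_d = l^2 / (G M*). The
# gradient and mass spreads are placeholders, not our fitted distributions.
import numpy as np

G = 6.674e-20                       # km^3 kg^-1 s^-2
MSUN = 1.989e30                     # kg
KM_PER_PC = 3.0857e13
KM_PER_AU = 1.496e8

rng = np.random.default_rng(5)
n = 10000
r_pc = 1000.0 / 206264.8
vg = rng.lognormal(np.log(29.0), 1.0, n)                 # km/s/pc (placeholder)
l = vg * r_pc**2 * KM_PER_PC                             # ang. mom., km^2/s
mass = rng.lognormal(np.log(0.22), 0.57 * np.log(10), n) # Msun, IMF-like
mass *= 0.25 / mass.mean()                               # rescale mean to 0.25
r_disk_au = l**2 / (G * mass * MSUN) / KM_PER_AU
print(f"median R_d = {np.median(r_disk_au):.0f} au")
\end{verbatim}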
\citet{tob20} observed the 0.87~mm continuum emission around 328 protostars in Orion clouds at a resolution of $\sim 40$~au. They reported a median dust disk radius of $\sim 48$~au for Class 0 sources and $\sim 38$~au for Class I sources. Among these only one source, B5-IRS1 has a disk radius $\gtrsim 100$~au. Similarly, \citet{enc21} surveyed the 0.87~mm continuum emission around 31 protostars in Ophiuchus at a resolution of $\sim 21$~au. They found the mean disk radius to be $\sim 24$~au for Class I sources and $\sim 17$~au for flat-spectrum sources. In Perseus, \citet{seg18} observed the $8$~mm continuum emission around 82 class 0 and I sources. With a resolution of $\sim 12$~au, they identified disk-like structures only around $22\%$ of the sources. However, \citet{seg18} also pointed out that the $8$~mm continuum emission traces large dust grains that could radially drift inwards and thus, these disk sizes are likely lower limits. As also shown in numerical simulations \citep{aso20}, continuum observations may not reliably trace entire Keplerian disks and can underestimate the disk size by a factor of $\sim2$--$3$. \citet{aso20} suggested that this is because as a disk grows, the density and temperature in the outer disk drop and the continuum emission becomes much fainter.
For a small sample of young protostars, disks with radii of a few tens of au have been observed in molecular lines \citep{hsi19,tob12,tob20disk,rey21}.
\citet{mar20} used CO line observations at angular resolutions of $\sim0\farcs7$ to search for disks towards 16 nearby ($<500$~pc) Class 0 sources. They found clear Keplerian disks with radii $>50$~au in only two sources: L1448-C with a disk radius of $200$~au and L1527 with a disk radius of $90$~au.
Although the molecular-line measurements of Class 0 and I disks are still scarce, the results from \citet{mar20} suggest that only $\sim 13\%$ of disks have radii $\gtrsim100$~au. This ratio is much smaller compared to the disk radius distribution derived using our measurements of the angular momentum in the protostellar envelopes at a $1,000$~au scale, where $\sim50\%$ of disks are expected to be larger than $100$~au.
One key assumption in deriving our expected distribution of disk radii is that the angular momentum is conserved within $\lesssim 1,000$~au scales.
Therefore, the apparent discrepancy between the derived and observed disk radii distributions is likely because the angular momentum is lost at $< 1,000$~au scales. This could be due to efficient magnetic braking in inner protostellar envelopes as a result of pinched field lines in this region \citep[e.g.,][]{li13}. A similar scenario has been observed in the Class I system HH 111, where the angular momentum in the envelope was observed to drop by a factor of $\sim3$ from 2000 au to 100 au scales \citep{chi16}. However, we note that protostellar envelopes with a relatively conserved angular momentum have also been observed \citep{aso15,aso17}.
Using measurements of the gas kinematics at $\sim1600$--$100$~au scales for 11 Class 0 protostars, \citet{gau20} identified eight sources with relatively flat (conserved) angular momentum profiles within this radial range.
Assuming that the angular momentum they measured at a $100$~au scale remains conserved till the edge of the disks, they estimated expected disk radii for these sources (using Equation \ref{equ:disk}). They found the estimated radii to be in agreement with the disk radii in those sources estimated with the continuum emission. However, it is important to note that \citet{gau20} compared the disk radii with the angular momentum at a $100$~au scale, different from our study with the angular momentum estimated at a $1,000$~au scale. Although the angular momentum profiles of these eight sources were flatter than those of the other sources in their sample, for most of their sample sources the angular momentum was still observed to decrease from $1,000$ to $100$ au scales. Together with our results, the observations could suggest that the angular momentum is likely lost in protostellar envelopes at radii between $\sim1,000$--$100$~au, possibly due to magnetic braking. In order to better characterize the scale-dependency of magnetic braking, a larger sample of young sources with resolved velocity profiles from envelopes to disks is needed.
\section{Conclusions} \label{sec:conclusion}
For a sample of 32 Class 0 and I protostars, we diagnosed the gas kinematics in the protostellar envelopes at a $1,000$~au scale using the C$^{18}$O data from the SMA MASSES Survey \citep{ste19} and the magnetic fields at the core scale of 4,000~au using the 850~$\micron$ polarimetric data from the JCMT BISTRO survey \citep{war17} and archive.
We assessed the overall, rotational, and infalling motions in the protostellar envelopes with the 2D velocity gradients, velocity gradients perpendicular to the outflows, and velocity gradients along the outflow axes in the C$^{18}$O moment 1 maps, respectively.
We studied the dependence of the gas kinematics in the protostellar envelopes on the magnetic field structures in the dense cores, namely angular dispersion of the magnetic field and misalignment between the magnetic field and outflow axis (taken as a proxy for the rotational axis). Furthermore, we inferred an expected distribution of disk radii using the observed distribution of the rotational velocity gradients in the protostellar envelopes.
Our main results are:
\begin{enumerate}
\item We did not find any significant correlation between the angles between the magnetic field and outflow axis in the dense cores at a 4,000~au scale, and the overall or rotational velocity gradients in the protostellar envelopes at a $1,000$~au scale. We also did not find any correlation between the angular dispersions of the magnetic fields and the velocity gradients. These results could suggest that the misalignment between the magnetic field and rotational axis and the ratio of the turbulence to the magnetic field strength in a dense core are not dominant factors in determining gas kinematics or angular momentum in its protostellar envelope.
\item We found a significant correlation between the rotational velocity gradients normalized by the infalling velocity gradients in the protostellar envelope and the misalignment angles between the magnetic fields and outflows in the dense cores.
In particular, these normalized values transition from below one to above one, suggesting the presence of an infall-dominated regime with small misalignment angles and a rotation-dominated regime with larger misalignment.
The Spearman correlation coefficient was calculated to be 0.35 with a p-value of 0.05. After considering projection effects, the Spearman correlation coefficient becomes 0.22 with a confidence level of 88\%, which are related to the assumed probability distributions of 3D orientations of the magnetic field and outflows. Assuming that the infalling velocity is proportional to the mass of a central protostar-disk system, our results could suggest that for similar central masses, more angular momentum is transported to protostellar envelopes in systems with greater misalignment. This hints that misalignment between the magnetic field and rotational axis in a dense core could promote angular momentum transportation from large to small scales, although it is not a dominant factor.
\item Assuming our estimated angular momentum in the protostellar envelopes at a $1,000$~au scale is efficiently transported to disk-forming regions, the median disk radius is expected to be $\sim100$~au. However, molecular-line observations like in \citet{mar20} show that disks with radii $\gtrsim100$~au are not common for Class 0 and I sources. Thus, this suggests that the angular momentum is likely lost in protostellar envelopes at radii between $\sim1,000$--$100$~au, possibly due to magnetic braking.
\end{enumerate}
\begin{acknowledgments}
The Submillimeter Array is a joint project between the Smithsonian Astrophysical Observatory and the Academia Sinica Institute of Astronomy and Astrophysics and is funded by the Smithsonian Institution and the Academia Sinica.
The James Clerk Maxwell Telescope is operated by the East Asian Observatory on behalf of The National Astronomical Observatory of Japan; Academia Sinica Institute of Astronomy and Astrophysics; the Korea Astronomy and Space Science Institute; the National Astronomical Research Institute of Thailand; Center for Astronomical Mega-Science (as well as the National Key R\&D Program of China with No.~2017YFA0402700). Additional funding support is provided by the Science and Technology Facilities Council of the United Kingdom and participating universities and organizations in the United Kingdom and Canada.
H.-W.Y.\ acknowledges support from the Ministry of Science and Technology (MOST) in Taiwan through grant MOST 108-2112-M-001-003-MY2 and MOST 110-2628-M-001-003-MY3 and support from an Academia Sinica Career Development Award.
P.M.K.\ is supported by the Ministry of Science and Technology (MoST) through grants MoST 109-2112-M-001-022 and MoST 110-2112-M-001-057.
E.J.C.\ was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No.~NRF-2019R1I1A1A01042480).
J.K.\ is supported by JSPS KAKENHI grant No.~19K14775.
W.K.\ was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (NRF-2021R1F1A1061794).
C.W.L.\ is supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (NRF-2019R1A2C1010851).
K.P.\ is a Royal Society University Research Fellow, supported by grant number URF\textbackslash R1\textbackslash211322.
M.T.\ is supported by JSPS KAKENHI grant Nos.~18H05442, 15H02063, and 22000005.
\end{acknowledgments}
\section{Introduction}
More than forty years ago, Frank Stillinger~\cite{FS73} argued that the liquid water interface adjacent to an extended non-polar hydrophobic substrate is universal and shares the same microscopic features as a liquid-vapor interface. Subsequent theory and molecular simulations support this idea~\cite{LCW,DC05,PT08}. On the other hand, some simulation data show that interfaces between liquid water and physically realistic hydrophobic substrates lack discernible vapor (or dewetting) regions. Instead, water density profiles extending from realistic models of typical hydrophobic substrates can exhibit molecular layering, the details of which depend on the chemical identity of the substrate~\cite{MEP01,DC02,BJB04,MP05,LM07,SG09,SG11,NC12}. Here, we demonstrate that this substrate dependence reflects significant sensitivity of the position of the average interface to weak adhesive forces. At the same time, the fluctuations of the interface, and the mean molecular structure relative to the instantaneous interface~\cite{APW10}, are both insensitive to weak adhesive forces. We refer to this dynamic frame of reference, in which molecular structure is resolved in terms of distances from the time-varying position of the instantaneous interface, as the \textit{intrinsic} interface. For water adjacent to an extended hydrophobic surface, the structure of an intrinsic interface exhibits almost no substrate dependence, and this intrinsic interface is quantitatively similar to that of the water-vapor interface. This commonality is not present at the interfaces between water and hydrophilic substrates, which we also demonstrate in this paper. The results presented in this paper thus constitute explicit confirmation of Stillinger's 1973 hypothesis, and they reconcile the apparent inconsistency presented by some numerical studies.
In our classical molecular dynamics simulations, the general system consists of a slab of liquid water in contact with a model substrate and simultaneously in equilibrium with its vapor phase. See Fig.~\ref{fig:system}a. An adjustable water-substrate attraction controls the hydrophobicity of the substrate. The water-substrate attractive interactions are isotropic with respect to molecular orientation and weak compared to hydrogen bonding. They thus influence the positions of interfacial molecules without necessarily disrupting the hydrogen bonding structure of the liquid interface. The methodology we employ involves separating the collective fluctuations of soft liquid interfaces from the intrinsic molecular structure. This separation requires identifying the time-varying position of the liquid-water instantaneous interface, which we accomplish following the algorithm described in Ref.~\cite{APW10}. We use the instantaneous interface as a dynamic frame of reference, performing a spatial transformation that defines the vertical position of each water molecule relative to the local instantaneous interface rather than a fixed Cartesian plane. By doing so the spatial deformations of the liquid phase boundary are projected out. The degrees of freedom that remain after this transformation constitute our definition of the intrinsic interface.
Our methods are described in the next section. After that, we present and discuss our results.
\section{Methods}
\subsection{Simulation Details}
The model system consists of a slab of 2261 SPC/E water molecules~\cite{SPCE} in a periodically replicated simulation cell measuring $5 \times 5 \times 10~\mathrm{nm}^3$ in the $x$-, $y$- and $z$-directions, respectively. It is propagated in time using standard molecular dynamics at a temperature of 298K. A rendering of the simulation cell is shown in Fig.~\ref{fig:system}. The simulation cell is long enough in the $z$-dimension so that the water condenses against the substrate and forms both a water-substrate and a water-vapor interface. Although the overall simulation cell is held at constant volume, the presence of the free water-vapor interface acts as a natural barostat to the liquid. At the bottom of the simulation cell, extending across the $x$-$y$ plane, is a planar hydrophobic substrate whose interactions with individual water molecules are of the form,
\begin{equation}
w_\lambda(z_i) = w_0(z_i) + \lambda w_1(z_i),
\label{eq:wca_pot}
\end{equation}
where $z_i$ is the position of the center of the oxygen atom of the $i$th water molecule, and the functions $w_0(z)$ and $w_1(z)$ are the repulsive WCA potential~\cite{WCA} and the attractive branch of the Lennard-Jones potential, respectively. The two parts of the water-substrate potential are given by,
$$
w_0(z) = \left\{ \begin{array}{ll}
4 \epsilon_\mathrm{s} \left [ (\sigma_\mathrm{s}/z)^{12} - (\sigma_\mathrm{s}/z)^6 + 1/4 \right ] , & \quad z \leq 2^{1/6} \sigma_\mathrm{s}, \\
0, & \quad z > 2^{1/6} \sigma_\mathrm{s}, \end{array} \right.
$$
and
$$
w_1(z) = \left\{ \begin{array}{ll}
-\epsilon_\mathrm{s}, & \quad z \leq 2^{1/6} \sigma_\mathrm{s}, \\
4 \epsilon_\mathrm{s} \left [ (\sigma_\mathrm{s}/z)^{12} - (\sigma_\mathrm{s}/z)^6\right ], & \quad z > 2^{1/6} \sigma_\mathrm{s}, \end{array} \right.
$$
where $\sigma_\mathrm{s}=5\,\mathrm{\AA}$, $\epsilon_\mathrm{s} = 1.825\,k_\mathrm{B}T$, and the quantity $\lambda$ tunes the strength of the water-substrate attraction. We consider a range of values for $\lambda$ between 0.1 and 0.5, with $\lambda=0.3$ yielding approximately the effective potential between water and a surface composed of alkane chains. Averages were generated using 6000 snapshots equally spaced over a $750\,\mathrm{ps}$ simulation.
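For reference, the potential above can be transcribed directly into a short Python snippet (our own transcription, with lengths in $\mathrm{\AA}$ and energies in units of $k_\mathrm{B}T$):
\begin{verbatim}
# Minimal sketch: substrate-water potential as a function of oxygen height z.
import numpy as np

SIGMA_S = 5.0                           # Angstrom
EPS_S = 1.825                           # in units of kBT
Z_MIN = 2.0 ** (1.0 / 6.0) * SIGMA_S    # WCA cutoff

def lj(z):
    return 4.0 * EPS_S * ((SIGMA_S / z) ** 12 - (SIGMA_S / z) ** 6)

def w0(z):
    return np.where(z <= Z_MIN, lj(z) + EPS_S, 0.0)   # repulsive WCA branch

def w1(z):
    return np.where(z <= Z_MIN, -EPS_S, lj(z))        # attractive branch

def w_lambda(z, lam):
    return w0(z) + lam * w1(z)

z = np.linspace(3.0, 15.0, 5)
print(w_lambda(z, 0.3))    # hydrophobicity comparable to an alkane surface
\end{verbatim}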
To measure contact angles, 1387 SPC/E water molecules were placed inside a much larger simulation cell (see Fig.~\ref{fig:system}b inset), one that measured $15 \times 15 \times 15~\mathrm{nm}^3$. In this larger cell the number of water molecules is insufficient to bridge the periodic boundaries, and the water instead forms a liquid droplet. Contact angles were estimated by extrapolation of the mean solvent density profile, corrected for the droplet's horizontal center of mass.
\begin{figure*}
\centering
\includegraphics[width=6.69in]{fig1.pdf}
\caption{(a) A snapshot of the simulation system. Water molecules are rendered in red and white and the hydrophobic interface is rendered in green. The position of the instantaneous interface is represented by a solid blue line. (b) The dependence of contact angle, computed using simulation data, on the substrate-water attractive parameter $\lambda$. (c) Schematic illustration of the coordinate system for the standard and intrinsic interface. The vertical position of a molecule $j$ relative to the standard interface is $a^{(\mathrm{S})}_j$ and the vertical position of a molecule $i$ relative to the intrinsic interface is $a^{(\mathrm{I})}_i$. }
\label{fig:system}
\end{figure*}
To generate a model hydrophilic substrate, a plane was drawn through a slab of liquid water that had been equilibrated at 298K. The positions of all the molecules whose oxygen atoms reside on one side of the plane were frozen in space to produce the hydrophilic substrate~\cite{AJP10}.
\subsection{Instantaneous interface and relative coordinates}
We refer to the ``standard'' interface to indicate a Cartesian frame of reference and the ``intrinsic'' interface to indicate an instantaneous interface frame of reference. To generate the latter we utilize the construction presented in Ref.~\cite{APW10} for identifying the time-varying position of the instantaneous water interface. The procedure associates a Gaussian density function with the discrete position of each water molecule in the system. The width of the Gaussian serves as a coarse-graining length. Here, we use 2.4 $\mathrm{\AA}$ as the width, which is approximately the molecular diameter. The coarse-grained density field is the sum of the Gaussian density functions. For an individual snapshot of the system, the position of the instantaneous interface is the set of points at which the coarse-grained density field equals a value intermediate between the average densities of the bulk liquid and the bulk vapor. Here, we take the intermediate value to be one-half that of the bulk liquid, $\rho_\ell$. Any choice of coarse-graining length between 2.2 $\mathrm{\AA}$ and 3.5 $\mathrm{\AA}$, and any choice of intermediate density between 0.3 $\rho_\ell$ and 0.7 $\rho_\ell$, yields results essentially identical to those presented below.
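As a rough illustration, the following one-dimensional Python sketch mimics the construction for a single $(x,y)$ column; the actual procedure uses three-dimensional Gaussians evaluated on a 3D grid, so the normalization below is only schematic:
\begin{verbatim}
# Minimal 1D sketch of the instantaneous-interface construction: a Gaussian
# of width 2.4 A per oxygen, summed into a coarse-grained field, with the
# interface height taken where the field first crosses half the bulk density.
import numpy as np

XI = 2.4           # coarse-graining width, Angstrom
RHO_HALF = 0.5     # threshold, in units of the bulk liquid density

def coarse_density(grid_z, oxy_z):
    dz = grid_z[:, None] - oxy_z[None, :]
    g = np.exp(-dz**2 / (2.0 * XI**2)) / np.sqrt(2.0 * np.pi * XI**2)
    return g.sum(axis=1)

def interface_height(grid_z, rho_over_rhol):
    """grid_z runs from the vapor side into the liquid; linear interpolation
    at the first crossing (assumed not to fall on the first grid point)."""
    i = np.argmax(rho_over_rhol >= RHO_HALF)
    z0, z1 = grid_z[i - 1], grid_z[i]
    r0, r1 = rho_over_rhol[i - 1], rho_over_rhol[i]
    return z0 + (RHO_HALF - r0) * (z1 - z0) / (r1 - r0)
\end{verbatim}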
We utilize two specific measures of molecular structure in order to characterize the interface between water and a variety of hydrophobic or hydrophilic substrates. One is the mean solvent density projected along an axis perpendicular to the substrate surface, given by
\begin{equation}
\rho^{(\alpha)}(z) = \frac{1}{A}\left \langle \sum_{i=1}^{N_\mathrm{w}} \delta(a_i^{(\alpha)}-z) \right \rangle,
\label{eq:1}
\end{equation}
where the superscript $\alpha$ indicates the relative coordinate system ($\alpha=\mathrm{S}$ for the standard interface and $\alpha=\mathrm{I}$ for the intrinsic interface), $A$ is the substrate surface area, the summation is over all $N_\mathrm{w}$ water molecules and $\delta(x)$ is Dirac's delta function. As illustrated in Fig.~\ref{fig:system}c, $a_i^{(\mathrm{S})}$ and $a_i^{(\mathrm{I})}$ denote the distances of the oxygen atom of molecule $i$ from the substrate surface and instantaneous interface, respectively.
A complementary measure of interfacial structure is provided by the water density fluctuations, given by,
\begin{equation}
\left \langle \left(\delta N^{(\alpha)}(z)\right)^2 \right \rangle = \left \langle \left (N^{(\alpha)}(z) - \langle N^{(\alpha)}(z) \rangle \right )^2 \right \rangle,
\end{equation}
where $N^{(\alpha)}(z)$ is the number of water molecules in a spherical probe volume with radius $\sigma_\mathrm{p}=3\,\mathrm{\AA}$ whose center is a distance $z$ from the $\alpha$th interface ($\alpha$ being either S or I), i.e.,
\begin{equation}
N^{(\alpha)}(z) = \sum_{i=1}^{N_\mathrm{w}} \Theta \left( \sigma_\mathrm{p}-\sqrt{x_i^2 + y_i^2 + (a_i^{(\alpha)}-z)^2} \right),
\end{equation}
where $\Theta(d)$ is the Heaviside function, equal to 1 if $d\ge0$ and 0 if $d < 0$, and $x_i$ and $y_i$ are the Cartesian coordinates of molecule $i$.
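A minimal sketch of how these probe-volume statistics could be accumulated from simulation snapshots is given below; the data layout is our own assumption:
\begin{verbatim}
# Minimal sketch: mean and variance of the number of oxygens inside a 3 A
# spherical probe centered a distance z from the chosen interface, averaged
# over snapshots. Each frame holds (x_i, y_i, a_i) per molecule, with a_i the
# height relative to the standard or the intrinsic interface.
import numpy as np

SIGMA_P = 3.0    # probe radius, Angstrom

def probe_statistics(frames, x0, y0, z):
    counts = []
    for xya in frames:
        d2 = ((xya[:, 0] - x0) ** 2 + (xya[:, 1] - y0) ** 2
              + (xya[:, 2] - z) ** 2)
        counts.append(np.count_nonzero(d2 <= SIGMA_P ** 2))
    n = np.asarray(counts, dtype=float)
    return n.mean(), n.var()     # <N(z)> and <(dN(z))^2>
\end{verbatim}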
In the next two subsections we analyze $\rho^{(\alpha)}(z)$ and $\left \langle (\delta N^{(\alpha)}(z))^2 \right\rangle$ for the full and intrinsic interfaces at model hydrophobic and hydrophilic substrates.
\section{Results and Discussion}
\subsection{Hydrophobic Substrates}
The parameter $\lambda$ sets the strength of substrate-water attractions and thus controls the hydrophobicity of the substrate. Figure~\ref{fig:system}(b) shows how a water-drop contact angle, $\theta$, reflects changes in $\lambda$. According to our simulations, over the range of $\lambda$ considered, $\cos(\theta)$ is approximately a linear function of this attractive-interaction parameter.
Figure~\ref{fig:denprof} shows the $\lambda$ dependence of the water density profile, $\rho^{(\alpha)}(z)$, computed for both the standard and the intrinsic interface. For comparison, the average density profile of the free liquid-vapor interface is also shown. The density profile for the standard interface, Fig.~\ref{fig:denprof}a, is notably sensitive to the relative hydrophobicity, exhibiting behavior ranging from a sigmoidal liquid-vapor-like profile at $\lambda=0.1$ to an oscillating profile indicative of molecular layering at $\lambda=0.5$. This sensitivity reflects the fact that the instantaneous interface is a soft collective variable. The fluctuations of the soft interface obscure the universal behavior of the intrinsic interface, whether in contact with a hydrophobic surface or a vapor phase. Projecting out the spatial fluctuations of the soft liquid interface by focusing on the intrinsic interface, Fig.~\ref{fig:denprof}b shows a collapse of $\rho^{(\mathrm{I})}(z)$ onto a single curve. The intrinsic hydrophobic density profile exhibits pronounced molecular layering, and it is essentially indistinguishable from the density profile of the intrinsic liquid-vapor interface.
\begin{figure}
\includegraphics[width=3.37in]{denprof_z.pdf}
\includegraphics[width=3.37in]{denprof_a.pdf}
\caption{The mean interfacial water density profile, $\rho^{(\alpha)}(z)$ is plotted for the standard interface ($\alpha=\mathrm{S}$) in the top panel and the intrinsic interface ($\alpha=\mathrm{I}$) in the bottom panel. Densities are normalized by the bulk liquid density $\rho_\mathrm{b}$.}
\label{fig:denprof}
\end{figure}
Fluctuations from the mean density profile illustrate the same point. Specifically, Fig.~\ref{fig:fluctprof} shows our simulation results for $\left \langle (\delta N^{(\alpha)}(z))^2 \right \rangle$ for both $\alpha =$ S and $\alpha =$ I. For the standard substrate-water interface, density fluctuations are much larger near the substrate than in the bulk liquid. The magnitude and spatial variation of these fluctuations depend sensitively on $\lambda$. In contrast, these fluctuations in reference to the instantaneous interface are at most weakly dependent on $\lambda$, and closely resemble those of the liquid-vapor interface.
\begin{figure}
\includegraphics[width=3.37in]{fluct_prof_z.pdf}
\includegraphics[width=3.37in]{fluct_prof_a.pdf}
\caption{The interfacial water density fluctuations in a $3\mathrm{\AA}$ spherical probe volume whose center is a distance $z$ from the position of the substrate surface, i.e. the standard interface (top), or the instantaneous liquid phase boundary, i.e. the intrinsic interface (bottom). The quantity in the denominator, $\left \langle N^{(\alpha)}(z) \right \rangle$ is the average number of water molecules in the same probe volume ($\alpha=\mathrm{S}$ or $\alpha=\mathrm{I}$ for the standard and intrinsic interface respectively). The horizontal line indicates the bulk value of the water density fluctuations.}
\label{fig:fluctprof}
\end{figure}
The $\lambda$ dependences of $\rho^{(\alpha)}(z)$ and $\left \langle (\delta N^{(\alpha)}(z))^2 \right \rangle$ indicate that the hydrophobic interface of water is indeed a liquid-vapor-like interface that is pinned to an attractive substrate. The attractions are weak in comparison to water-water hydrogen bonding, but strong enough to compete with the entropically driven capillary-wave motions of the phase boundary. The molecular structure of the standard interface therefore represents a convolution of the universal intrinsic molecular structure with a position distribution for the instantaneous liquid interface. The latter is substrate dependent and accounts for the observed sensitivity of interfacial structure on substrate hydrophobicity (such as that seen in Fig.~\ref{fig:denprof}a and Fig.~\ref{fig:fluctprof}a).
The distance between the instantaneous interface and the substrate -- the instantaneous interface height, $h$, defined in Fig.~\ref{fig:system}c -- has a distribution of values. This distribution, $P(h)$, provides another perspective on the story summarized in the previous paragraph. Bear in mind that $P(h)$ is system-size dependent because of the relationship between capillary wave amplitude and wavelength~\cite{BW84,AJP11}. Nonetheless, for a series of identically sized systems, $P(h)$ provides qualitative insight into the statistics governing spatial fluctuations of water-substrate interfaces. Figure~\ref{fig:pofh} shows that $P(h)$ depends strongly on substrate identity. For a liquid-vapor interface, $P(h)$ is broad and roughly Gaussian, consistent with expectations from capillary wave theory. For the hydrophobic substrates $P(h)$ is narrower than in the liquid-vapor case and asymmetric about the mean. The tails of $P(h)$ are truncated for fluctuations in the direction of the substrate ($h<\bar{h}$), which is a manifestation of substrate excluded volume. In contrast, the tails are exaggerated for fluctuations of the interface into the bulk ($h>\bar{h}$). Those non-Gaussian fat tails are a signature of transient collective detachments of segments of the liquid interface from the weakly attractive substrate~\cite{DC00,APW09b,AJP10,AJP12} and hence are more pronounced for increasingly hydrophobic surfaces. Sensibly, therefore, the fat tails are most pronounced when $\lambda=0.1$ and become less so monotonically as $\lambda$ increases.
\begin{figure}
\includegraphics[width=3.37in]{pofh.pdf}
\caption{The probability distribution governing height fluctuations of the instantaneous liquid interface. The distributions here are plotted relative to the average height of the interface, $\bar{h}$.}
\label{fig:pofh}
\end{figure}
\subsection{Hydrophilic Substrate}
Unlike the picture we have drawn for hydrophobic surfaces, the behavior of water interfaces adjacent to hydrophilic surfaces is not liquid-vapor-like. To see this fact, we follow the protocol of the previous subsection by computing $\rho^{(\mathrm{I})}(z)$ and $\left \langle (\delta N^{(\mathrm{I})}(z))^2 \right \rangle$ for the intrinsic interface between water and a model hydrophilic substrate (see Methods section for substrate details). The model hydrophilic substrate is locally polar and capable of forming favorable hydrogen bonds with water molecules in the liquid. As shown in Fig.~\ref{fig:denphilic}, unlike the hydrophobic case, at the intrinsic hydrophilic interface neither $\rho^{(\mathrm{I})}(z)$ nor $\left \langle (\delta N^{(\mathrm{I})}(z))^2 \right\rangle$ resembles its liquid-vapor counterpart. The solvent density $\rho^{(\mathrm{I})}(z)$ still exhibits molecular layering, but with peak positions and relative heights that are qualitatively different from those of a liquid-vapor interface. The $z$-dependence of $\left \langle (\delta N^{(\mathrm{I})}(z))^2 \right \rangle$ exhibits qualitatively similar but quantitatively different behavior from that of a hydrophobic interface, indicating that the solvation environment at a hydrophilic interface is fundamentally different from that at a hydrophobic interface. In accordance with expectations that the liquid water interface interacts strongly with the hydrophilic substrate, the distribution of interface heights, $P(h)$, is both narrow and symmetric.
\begin{figure}
\includegraphics[width=3.37in]{den_prof_alt.pdf}
\includegraphics[width=3.37in]{fluct_prof_alt.pdf}
\caption{(top) The mean interfacial density profiles $\rho^{(\mathrm{I})}(z)$ for the intrinsic hydrophilic and liquid-vapor interfaces are plotted with solid and dashed lines, respectively. (bottom) Water density fluctuations $\left \langle (\delta N^{(\mathrm{I})}(z))^2 \right \rangle$ for the intrinsic hydrophilic and liquid-vapor interfaces are plotted with solid and dashed lines, respectively.}
\label{fig:denphilic}
\end{figure}
\section{Acknowledgments}
Thanks to Shekhar Garde, David Limmer, Amish Patel, and Patrick Varilly for useful discussion. This research was enabled by the Helios Solar Energy Research Center and the CPIMS program, which are supported by the Director, Office of Science, Office of Basic Energy Sciences of the U.S. Department of Energy under Contract No. DE-AC02-05CH1123.
\nocite{*}
\section{Introduction}
Cloud radio access network (C-RAN) has been regarded as a promising architecture for the next-generation wireless networks \cite{Simeone-et-al:JCN16}.
The C-RAN enables centralized signal processing by means of fronthaul links connecting central processors (CPs) and access points (APs). Due to the limited capacity of practical fronthaul channels, transmission strategies of the APs should be jointly designed along with the fronthaul interaction methods, i.e., the fronthaul quantization policies \cite{Peng-et-al:WC15}. There have been intensive studies on optimizing the performance of C-RAN systems by iterative algorithms, e.g., transceiver design \cite{Park-et-al:TSP13,Yu-et-al:WCL19} and AP clustering \cite{Guo-et-al:JSAC16}. These traditional schemes, however, are difficult to implement in practice due to the high computational complexity of their iterative calculations.
Recent progress on deep learning (DL) techniques has opened new research directions for developing low-complexity optimization methods in wireless networks \cite{Sun-et-al:TSP18, Zhang-et-al:TWC20, Hao-et-al:Access18, Kim-et-al:WCL20}. The basic idea is to replace optimization modules with deep neural networks (DNNs) which are trained in advance to optimize the system performance. The complexity of trained DNNs is much lower than that of conventional iterative algorithms since DNN computations are carried out by simple matrix multiplications. Power control problems in interfering networks are investigated in \cite{Sun-et-al:TSP18}. Supervised learning approaches are presented which train DNNs to memorize solutions generated by the existing weighted minimum mean squared error (WMMSE) algorithm. The authors in \cite{Zhang-et-al:TWC20} address multi-antenna beamforming optimization tasks through the supervised DL technique. DNNs are designed to learn the computations of handcrafted beamforming optimization algorithms by exploiting the known optimal solutions. Although the time complexity can be reduced by the DNNs, the training step needs numerous samples of the optimal solutions obtained from the iterative algorithms, thereby increasing the training difficulty.
To address this issue, recent works \cite{Hao-et-al:Access18,Kim-et-al:WCL20} have investigated unsupervised DL techniques which can identify efficient optimization strategies without any labels, i.e., the solutions of conventional algorithms. DNNs are trained to yield beamforming vectors that maximize the sum-rate performance under a sum transmit power constraint. It has been reported that, without prior information on the optimal solutions, the unsupervised DL-based beamforming schemes achieve performance almost identical to that of existing locally optimal algorithms with much reduced complexity.
This letter proposes an unsupervised DL approach for C-RAN systems by handling the joint optimization task of transmit beamforming and fronthaul quantization. Compared to existing DL studies \cite{Hao-et-al:Access18,Kim-et-al:WCL20} focusing on conventional cellular systems with a sum power constraint, the special nature of C-RANs imposes the per-AP power budget, the fronthaul capacity constraints, and additional optimization variables regarding the fronthaul quantization. These pose nontrivial challenges in designing an efficient DNN structure suitable for the C-RAN architecture. Therefore, conventional DL-based beamforming optimization methods cannot be straightforwardly applied to our scenario.
To this end, we develop a structural learning process which constructs a DNN to always provide feasible beamforming vectors and fronthaul quantization policies. The proposed DNN generates intermediate variables that optimally recover the beamforming vectors. The quantization strategy is then determined by the learned beamforming solutions. As a result, the DNN can be trained in an unsupervised manner without the information of the optimal solutions. Numerical results validate the advantages of the proposed DL method.
The remainder of this letter is organized as follows. In Sec. \ref{sec:System-Model}, we describe the downlink C-RAN system and formulate the joint beamforming and fronthaul quantization problem under per-AP power and fronthaul capacity constraints. The proposed DL method is detailed in Sec. \ref{sec:Proposed-DL}. The advantages of the proposed DL method are then validated via numerical results in Sec. \ref{sec:Numerical-Results}. Finally, we conclude this letter with a discussion of future works in Sec. \ref{sec:Conclusion}.
\section{System Model and Problem Definition\label{sec:System-Model}}
Consider a downlink C-RAN in which a CP communicates with $K$ single-antenna user equipments (UEs) by controlling $M$ single-antenna APs. Let $\mathcal{M} \triangleq \{1,\ldots,M\}$ and $\mathcal{K} \triangleq \{1,\ldots,K\}$ be the sets of APs' and UEs' indices, respectively. Each AP $i\in\mathcal{M}$ is connected to the CP through a fronthaul link of capacity $C$ in bit/symbol.
The received signal of UE $k\in\mathcal{K}$ is written as
\begin{align}
y_k = \mathbf{h}_k^H \mathbf{x} + z_k, \label{eq:received-signal-downlink}
\end{align}
where $\mathbf{h}_k\in\mathbb{C}^{M}$ denotes the channel from APs to UE $k$, $\mathbf{x}\in\mathbb{C}^{M}$ represents the signal vector transmitted by all APs, and $z_k\sim\mathcal{CN}(0,1)$ is the additive noise at UE $k$. The transmitted signal $\mathbf{x}$ is subject to per-AP power constraints expressed as
\begin{align}
\mathtt{E}\left[|x_i|^2\right] \leq P, \, i\in\mathcal{M}, \label{eq:per-AP-power-constraint}
\end{align}
where $x_i$ is the $i$th element of $\mathbf{x}$ representing the signal radiated by AP $i$ and $P$ stands for the power budget at each~AP.
The CP generates the transmit signal vector $\mathbf{x}$ by employing a cooperative linear beamforming followed by fronthaul quantization \cite{Park-et-al:TSP13}. The transmitted signal $\mathbf{x}$ is then modeled~as
\begin{align}
\mathbf{x} = \sum\nolimits_{k\in\mathcal{K}} \mathbf{v}_k s_k + \mathbf{q},\label{eq:quantized-signal}
\end{align}
where $s_k\sim\mathcal{CN}(0,1)$ and $\mathbf{v}_k\in\mathbb{C}^{M}$ denote the data signal and beamforming vector for UE $k$, respectively, and $\mathbf{q}\sim\mathcal{CN}(\mathbf{0},\boldsymbol{\Omega})$ with covariance matrix $\boldsymbol{\Omega}\in\mathbb{C}^{M\times M}$ models the quantization noise vector, independent of the precoded signal $\sum_{k\in\mathcal{K}}\mathbf{v}_k s_k$, under the Gaussian test channel. We employ an independent fronthaul quantization scheme where each signal $x_{i}$ is individually compressed across APs $i\in\mathcal{M}$. Then, $\boldsymbol{\Omega}$ is given by a diagonal matrix. Let $\omega_{i}\geq0$ be the $i$th diagonal element of $\boldsymbol{\Omega}$, i.e., $\boldsymbol{\Omega} = \text{diag}(\{\omega_i\}_{i\in\mathcal{M}})$, which represents the quantization noise power for the fronthaul link toward AP $i$. Due to the limited fronthaul capacity $C$, the following constraint should be satisfied for successful decompression of $x_i$ at AP~$i$~\cite{Gamal-et-al:Book2011}:
\begin{align}
\log_2\Big( 1 + \Big(\sum\nolimits_{k\in\mathcal{K}} |v_{k,i}|^2 \Big)/ \omega_i \Big) \leq C, \label{eq:fronthaul-capacity-constraint}
\end{align}
where $v_{k,i}$ indicates the $i$th element of $\mathbf{v}_k$.
Defining $\mathbf{v}\triangleq \{\mathbf{v}_k\}_{k\in\mathcal{K}}$ and $\boldsymbol{\omega}\triangleq \{\omega_i\}_{i\in\mathcal{M}}$, the achievable rate of UE $k$, $f_{k}(\mathbf{v},\boldsymbol{\omega})$, can be written as
\begin{align}
\!\!f_k\!\left(\mathbf{v}, \boldsymbol{\omega}\right)
\!= \log_2\!\!\left(\!\!1 + \frac{|\mathbf{h}_k^H \mathbf{v}_k|^2}{ 1 + \mathbf{h}_k^H \boldsymbol{\Omega}\mathbf{h}_k \!\! + \!\sum_{l\in\mathcal{K}\setminus\{k\}} \! |\mathbf{h}_k^H \mathbf{v}_l|^2 }\!\!\right)\!. \label{eq:achievable-rate-UE-k}
\end{align}
We jointly optimize the beamforming vectors $\mathbf{v}$ and quantization noise powers $\boldsymbol{\omega}$ for maximizing the sum-rate performance $f(\mathbf{v},\boldsymbol{\omega})\triangleq\sum_{k\in\mathcal{K}}f_k(\mathbf{v},\boldsymbol{\omega})$ while satisfying the transmit power budget (\ref{eq:per-AP-power-constraint}) and fronthaul capacity constraints (\ref{eq:fronthaul-capacity-constraint}).
In addition to the CSI $\mathbf{h}\triangleq\{\mathbf{h}_k\}_{k\in\mathcal{K}}$, the constraints $P$ and $C$ are regarded as important system parameters that may vary at each transmission, thereby affecting the optimization procedure. The corresponding problem is formulated as
\begin{subequations}\label{eq:problem}
\begin{align}
\underset{\mathbf{v},\boldsymbol{\omega}}{\mathrm{max}}\,\,\,&f(\mathbf{v},\boldsymbol{\omega})\label{eq:problem-objective}\\
\mathrm{s.t.}\,\,\,\,\, & \sum\nolimits_{k\in\mathcal{K}} |v_{k,i}|^2+\omega_i\leq P,\,\,\,\,
i\in\mathcal{M},\forall P, \forall C,\label{eq:problem-power}%
\\ \,\,\,\,\, & \sum\nolimits_{k\in\mathcal{K}}|v_{k,i}|^2\leq \beta\omega_i,\,\,\,\,i\in\mathcal{M},\forall P, \forall C,\label{eq:problem-fronthaul}
\end{align}
\end{subequations}
where the per-AP power constraint (\ref{eq:problem-power}) is obtained by substituting \eqref{eq:quantized-signal} into (\ref{eq:per-AP-power-constraint}), and \eqref{eq:problem-fronthaul} comes from \eqref{eq:fronthaul-capacity-constraint} with the weight $\beta\triangleq2^{C}-1$ applied to the transmit power consumed by $\boldsymbol{\omega}$ at each AP. Both constraints should be satisfied for any given $P$ and $C$ so that the resulting beamformer $\mathbf{v}$ and quantization strategy $\boldsymbol{\omega}$ remain feasible for arbitrary system configurations. It is not easy to find the globally optimal solution to \eqref{eq:problem} due to the nonconvex objective function \eqref{eq:problem-objective}. A locally optimal solution can be obtained by the WMMSE algorithm \cite{Yu-et-al:WCL19}, but its iterative nature results in a high computational burden for practical C-RAN systems.
To this end, we propose a low-complexity solution to \eqref{eq:problem} using DL techniques. Due to the absence of the optimal solution, instead of employing supervised learning approaches \cite{Sun-et-al:TSP18,Zhang-et-al:TWC20}, our focus is on identifying an unsupervised DL framework, which can be implemented without the knowledge of the optimal solution of problem \eqref{eq:problem}. DL-based beamforming schemes have recently been presented in \cite{Zhang-et-al:TWC20, Hao-et-al:Access18, Kim-et-al:WCL20} for conventional cellular networks with co-located antennas. Due to the implicit assumption of infinite fronthaul capacity $C=\infty$, the fronthaul quantization issue has not been addressed in designing the DNN architecture and its training strategy. In the following sections, we develop a new DL method which tackles the intrinsic properties of the C-RAN systems, i.e., the per-AP power constraints and fronthaul capacity limitations.
\section{Proposed Deep Learning Method\label{sec:Proposed-DL}}
We first recast the original problem \eqref{eq:problem} into a {\em functional optimization} formulation \cite{DLiu:20} suitable for learning that generalizes over the environment state $\{\mathbf{h},P,C\}$. It transforms the target of the optimization from solution variables into a function representing the optimization procedure. Any formulation with specified inputs and outputs can be recast as a functional optimization task. Problem \eqref{eq:problem} can be viewed as a procedure that identifies solutions $\mathbf{v}$ and $\boldsymbol{\omega}$ for arbitrary given channel $\mathbf{h}$ and system parameters $P$ and $C$. Such an input-output relationship can be captured by a functional operator $\{\mathbf{v}, \boldsymbol{\omega}\} = \mathcal{V}(\mathbf{h}, P, C)$. The operator $\mathcal{V}(\cdot)$ will be modeled by a proper DNN. Substituting this into \eqref{eq:problem} yields the functional optimization expressed by
\begin{subequations} \label{eq:problem-stochastic}
\begin{align}
&\underset{\mathcal{V}(\cdot)}{\mathrm{max}}\ \mathtt{E}_{\mathbf{h}, P, C}[f(\mathcal{V}(\mathbf{h}, P, C))], \label{eq:problem-mapping-objective}\\
&\mathrm{s.t.}\ \eqref{eq:problem-power}\ \text{and}\ \eqref{eq:problem-fronthaul},\label{eq:problem-mapping}
\end{align}
\end{subequations}
where $\mathtt{E}_{X}[\cdot]$ accounts for the expectation operator over a random variable $X$. The equivalence between \eqref{eq:problem} and \eqref{eq:problem-stochastic} is mathematically verified in \cite{DLiu:20} and the references therein. Unlike the original problem \eqref{eq:problem} which focuses on identifying the solution variables $\mathbf{v}$ and $\boldsymbol{\omega}$ for a certain $\{\mathbf{h},P,C\}$, the functional optimization in \eqref{eq:problem-stochastic} addresses the expected sum-rate maximization rather than its instantaneous value. Consequently, by solving \eqref{eq:problem-stochastic}, a generic mapping rule $\mathcal{V}(\cdot)$ for arbitrarily given input $\{\mathbf{h},P,C\}$ can be obtained.
The remaining work is to design a proper DNN that successfully approximates the intractable operator $\mathcal{V}(\cdot)$. A straightforward approach is to construct a DNN taking $\{\mathbf{h}, P, C\}$ and $\{\mathbf{v},\boldsymbol{\omega}\}$ as input and output, respectively. We refer to this scheme as the direct learning (DiLearn) method. The DNN can be readily trained to maximize the average sum-rate through the standard stochastic gradient descent (SGD) algorithm. However, the performance of the DiLearn approach has been shown to be poor in various beamforming optimization tasks \cite{Zhang-et-al:TWC20, Kim-et-al:WCL20}, even without the fronthaul constraint. This mainly stems from the difficulty of training a DNN with a large number of output variables and the absence of expert knowledge assisting the design of the DNN. In our case, DiLearn needs to find $2MK+M$ real-valued output variables, which is quite large particularly when both $M$ and $K$ increase. This motivates us to investigate an appropriate DNN structure with a much reduced output dimension for addressing \eqref{eq:problem-stochastic} efficiently.
\subsection{Optimal Solution Structure} \label{sub:optimal-structure}
To design an efficient DL architecture, this subsection studies special properties of the optimal beamforming and quantization noise power. The following proposition states that the optimal $\boldsymbol{\omega}$ can be retrieved from the beamforming $\mathbf{v}$.
\begin{prop} \label{prop:virtual-power-constraint}
The solutions $\mathbf{v}$ and $\boldsymbol{\omega}$ are feasible for
\eqref{eq:problem} if
\begin{subequations}\label{eq:prop1}
\begin{align}
&\omega_i=\frac{1}{\beta} \sum\nolimits_{k\in\mathcal{K}} |v_{k,i}|^2, i\in\mathcal{M}, \label{eq:optimal-quantization-noise-power-for-given-beamformer}\\
&\sum\nolimits_{k\in\mathcal{K}} |v_{k,i}|^2 \leq \frac{P}{1+1/\beta}, i\in\mathcal{M}. \label{eq:virtual-power-constraint-feasible}
\end{align}
\end{subequations}
\end{prop}
\begin{proof}
We will show that $\mathbf{v}$ and $\boldsymbol{\omega}$ satisfying \eqref{eq:prop1} are indeed feasible for \eqref{eq:problem}. By substituting \eqref{eq:optimal-quantization-noise-power-for-given-beamformer} into \eqref{eq:problem-fronthaul}, it is easy to see that \eqref{eq:problem-fronthaul} is satisfied with equality. Also, combining \eqref{eq:optimal-quantization-noise-power-for-given-beamformer} and \eqref{eq:virtual-power-constraint-feasible} yields $\omega_{i}\leq\frac{P}{1+\beta}$, resulting in
\begin{align}
\sum\nolimits_{k\in\mathcal{K}}|v_{k,i}|^2+\omega_{i}\leq\frac{P}{1+1/\beta}+\frac{P}{1+\beta}=P.
\end{align}
We thus attain \eqref{eq:problem-power}. This completes the proof.
\end{proof}
Notice that, for a given $\mathbf{v}$, $\omega_i$ in \eqref{eq:optimal-quantization-noise-power-for-given-beamformer} is indeed optimal for \eqref{eq:problem} since the individual rate $f_{k}(\mathbf{v},\boldsymbol{\omega})$ in (\ref{eq:achievable-rate-UE-k}) is monotonically decreasing in each $\omega_{i}$. Therefore, the optimal $\omega_{i}$ is readily obtained from \eqref{eq:optimal-quantization-noise-power-for-given-beamformer} once the beamforming solution $\mathbf{v}$ is optimized. This implies that the corresponding DNN architecture can be designed to produce $\mathbf{v}$ only.
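As a minimal illustration, the map from a learned beamformer to the optimal quantization noise powers in \eqref{eq:optimal-quantization-noise-power-for-given-beamformer} is a one-line computation. The Python sketch below is our own illustration with an assumed $(M,K)$ array layout for the beamformers; it is not part of any released implementation.
\begin{verbatim}
import numpy as np

def quantization_noise(V, beta):
    """V[i, k] = v_{k,i}; returns omega_i = (1/beta) sum_k |v_{k,i}|^2,
    the optimal quantization noise power of Proposition 1."""
    return (np.abs(V) ** 2).sum(axis=1) / beta
\end{verbatim}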
With the optimal $\boldsymbol{\omega}$ at hand, \eqref{eq:problem-power} and \eqref{eq:problem-fronthaul} can be combined into the sole constraint \eqref{eq:virtual-power-constraint-feasible}. As will be explained, this leads to a simple implementation of the proposed DNN. In addition, the left-hand side of (\ref{eq:virtual-power-constraint-feasible}) measures the beamforming power consumed at AP $i$. Therefore, (\ref{eq:virtual-power-constraint-feasible}) can be regarded as a virtual power constraint at AP $i$ compensating for the finite fronthaul capacity $C$. Based on this intuition, we present the following proposition which shows the optimal beamforming structure under the per-AP power constraints for arbitrary given fronthaul quantization processes.
\begin{prop} \label{prop:optimal-beamforming-structure}
Under the per-AP transmit power constraints (\ref{eq:virtual-power-constraint-feasible}), the optimal beamforming structure for a given $\boldsymbol{\omega}$
can be written as $\mathbf{v}_k=\sqrt{p_k}\mathbf{u}_k$, $k\in\mathcal{K}$, where $p_k$ and $\mathbf{u}_k \in\mathbb{C}^{M}$ with $\|\mathbf{u}_k\|^2 = 1$ stand for the transmit power and the beam direction for UE $k$, respectively. Here, $\mathbf{u}_k$ can be parameterized by $K+M$ nonnegative real numbers $\boldsymbol{\lambda}=\{\lambda_k\}_{k\in\mathcal{K}}$ and $\boldsymbol{\mu}=\{\mu_i\}_{i\in\mathcal{M}}$~as
\begin{equation}
\mathbf{u}_k=\frac{(\sum_{l\in\mathcal{K}}\lambda_l\mathbf{h}_l\mathbf{h}_l^H+\mathrm{diag}(\mathbf{\boldsymbol{\mu}}))^{-1}\mathbf{h}_k}{\|(\sum_{l\in\mathcal{K}}\lambda_l\mathbf{h}_l\mathbf{h}_l^H+\mathrm{diag}(\mathbf{\boldsymbol{\mu}}))^{-1}\mathbf{h}_k\|}, k\in\mathcal{K}.\label{eq:optimal-beamforming-structure}
\end{equation}
\end{prop}
\begin{proof}
The proof follows a procedure similar to that in \cite{Yu-Lan:TSP07}. For any given $\boldsymbol{\omega}$, the optimal $\mathbf{v}$ of problem \eqref{eq:problem} can be obtained by solving the following problem:
\begin{subequations}\label{eq:prop2-2}
\begin{align}
\underset{\mathbf{v}}{\mathrm{max}}\,\,\,&\sum_{k\in\mathcal{K}}\log_2\left(1+\frac{|\tilde{\mathbf{h}}_k^H\mathbf{v}_k|^2}{1+\sum_{l\in\mathcal{K}\backslash\{k\}}|\tilde{\mathbf{h}}_k^H\mathbf{v}_l|^2}\right)\\
\mathrm{s.t.}\,\,\,\,\, & \sum_{k\in\mathcal{K}} |v_{k,i}|^2\leq \tilde{P},\,\,\,\,
i\in\mathcal{M},\label{eq:problem-power_2-3-1}%
\end{align}
\end{subequations}
where $\tilde{\mathbf{h}}_k=\mathbf{h}_k/\sigma_k$, $\sigma_k^2=1+\mathbf{h}_k^H\boldsymbol{\Omega}\mathbf{h}_k$, and $\tilde{P}=P/(1+1/\beta)$. Problem (\ref{eq:prop2-2}) can be interpreted as the sum-rate maximization problem for a multi-user downlink system with per-antenna power constraints and constant noise power across users, which was addressed in \cite{Yu-Lan:TSP07}. According to \cite{Zhang-et-al:TWC20, Yu-Lan:TSP07}, the optimal beamforming solution for problem (\ref{eq:prop2-2}) has the structure
\begin{align}
\mathbf{v}_k=\sqrt{p_k}\mathbf{u}_k,
\end{align}
where $p_k\geq0$ is the power allocated to UE $k$, and $\mathbf{u}_k$ is the beamforming direction for UE $k$ given as
\begin{align}\label{eq:beam_structure1}
\mathbf{u}_k=\frac{(\sum_{l\in\mathcal{K}}\tilde{\lambda}_l\tilde{\mathbf{h}}_l\tilde{\mathbf{h}}_l^H+\mathrm{diag}(\boldsymbol{\mu}))^{-1}\tilde{\mathbf{h}}_k}{\|(\sum_{l\in\mathcal{K}}\tilde{\lambda}_l\tilde{\mathbf{h}}_l\tilde{\mathbf{h}}_l^H+\mathrm{diag}(\boldsymbol{\mu}))^{-1}\tilde{\mathbf{h}}_k\|},\,\,k\in\mathcal{K},
\end{align}
with nonnegative real variables $\tilde{\boldsymbol{\lambda}}$ and $\boldsymbol{\mu}$.
Substituting $\lambda_k=\tilde{\lambda}_k/\sigma_k^2$ and $\tilde{\mathbf{h}}_k=\mathbf{h}_k/\sigma_k$ into the direction vector in (\ref{eq:beam_structure1}), we obtain \eqref{eq:optimal-beamforming-structure}. This completes the proof.
\end{proof}
Proposition \ref{prop:optimal-beamforming-structure} identifies a low-dimensional representation of the optimal beamforming $\mathbf{v}$. It reveals that, for a given $\boldsymbol{\omega}$, the beamforming vectors can be efficiently retrieved from $2K+M$ real-valued parameters $\mathbf{p}\triangleq\{p_k\}_{k\in\mathcal{K}}$, $\boldsymbol{\lambda}$, and $\boldsymbol{\mu}$. Thus, we can further reduce the size of the DNN such that it outputs only $2K+M$ nonnegative variables $\{\mathbf{p}, \boldsymbol{\lambda}, \boldsymbol{\mu}\}$. Combining this with Proposition \ref{prop:virtual-power-constraint}, the optimal quantization noise variance $\boldsymbol{\omega}$ can also be recovered from $\{\mathbf{p}, \boldsymbol{\lambda}, \boldsymbol{\mu}\}$ by using \eqref{eq:optimal-quantization-noise-power-for-given-beamformer}. Therefore, compared to the DiLearn method, the number of output variables of the DNN is reduced from $2MK\!+\!M$ to~$2K\!+\!M$.
Proposition \ref{prop:optimal-beamforming-structure} only provides an alternative parameterization of the optimal solutions, not a procedure for determining the intermediate variables $\{\mathbf{p}, \boldsymbol{\lambda}, \boldsymbol{\mu}\}$. Classical optimization techniques cannot be straightforwardly applied to identify these parameters due to their highly coupled structure in \eqref{eq:optimal-beamforming-structure}. We address this issue by exploiting data-driven DL techniques.
\subsection{Proposed DL Methods} \label{eq:proposed-structure-training}
\begin{figure}
\centering\includegraphics[width=0.7\linewidth]{park_WCL2021-0388_fig1.eps}
\caption{{\label{fig:DL-structure}Proposed DL architecture}}
\end{figure}
Fig. \ref{fig:DL-structure} presents the proposed DL architecture, which consists of two consecutive modules: the DNN $\mathcal{V}_{\Theta}(\cdot)$ with trainable parameter set $\Theta$ and the solution recovery module $\mathcal{F}(\cdot)$. The training dataset contains numerous realizations of the three-tuple $\{\mathbf{h},P,C\}$. The DNN accepts an input feature $\{\mathbf{h},P,C\}$ sampled from the training set and computes an output $\{\mathbf{p}, \boldsymbol{\lambda}, \boldsymbol{\mu}\}$, i.e., $\{\mathbf{p}, \boldsymbol{\lambda}, \boldsymbol{\mu}\}=\mathcal{V}_{\Theta}(\mathbf{h},P,C)$. For $l\in\mathcal{L}\triangleq\{1,\cdots,L\}$, the computation of layer $l$ is given as
\begin{align}
\mathbf{d}_l = g_l\left( \mathrm{BN}\left( \mathbf{W}_l \mathbf{d}_{l-1} + \mathbf{b}_l \right)\right), \forall l\in\mathcal{L}, \label{eq:operation-hidden-layer}
\end{align}
where $g_l(\cdot)$ indicates the activation function for layer $l$, $\mathbf{d}_0$ denotes the input vector stacking $\{\mathbf{h},P,C\}$, and $\mathbf{W}_l\in\mathbb{R}^{S_{l}\times S_{l-1}}$ and $\mathbf{b}_{l}\in\mathbb{R}^{S_{l}}$ are the weight matrix and bias vector, respectively, which collectively form the trainable parameter set $\Theta = \{\mathbf{W}_l, \mathbf{b}_l\}_{l\in\mathcal{L}}$. The batch normalization operation \cite{Ioffe-et-al:arXiv15}, denoted by $\mathrm{BN}(\cdot)$, is included to accelerate the training step.
The final output of the DNN $\mathbf{d}_{L}$ of length $S_{L}=2K+M$ is represented by $\mathbf{d}_{L}=\{\mathbf{p}, \boldsymbol{\lambda}, \boldsymbol{\mu}\}$. The sequential calculations \eqref{eq:operation-hidden-layer} define the DNN mapping $\{\mathbf{p}, \boldsymbol{\lambda}, \boldsymbol{\mu}\}=\mathcal{V}_{\Theta}(\mathbf{h}, P, C)$.
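For illustration, a network of the form \eqref{eq:operation-hidden-layer} can be sketched in PyTorch as follows. The helper name, the use of a softplus output layer to enforce nonnegativity, and the default depth and width (set to the values used in Sec. \ref{sec:Numerical-Results}) are our own assumptions rather than released code.
\begin{verbatim}
import torch.nn as nn

def make_dnn(in_dim, K, M, width=480, n_hidden=10):
    """Hidden layers: Linear -> BatchNorm -> LReLU(0.3); the output
    layer has size 2K+M with softplus to keep {p, lambda, mu} >= 0."""
    layers, d = [], in_dim
    for _ in range(n_hidden):
        layers += [nn.Linear(d, width), nn.BatchNorm1d(width),
                   nn.LeakyReLU(0.3)]
        d = width
    layers += [nn.Linear(d, 2 * K + M), nn.Softplus()]
    return nn.Sequential(*layers)
\end{verbatim}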
The recovery module $\mathcal{F}(\cdot)$ further processes the DNN output $\{\mathbf{p}, \boldsymbol{\lambda}, \boldsymbol{\mu}\}$ to retrieve feasible solutions $\mathbf{v}$ and $\boldsymbol{\omega}$ as $\{\mathbf{v},\boldsymbol{\omega}\}=\mathcal{F}(\mathbf{p}, \boldsymbol{\lambda}, \boldsymbol{\mu})$. As illustrated in Fig. \ref{fig:DL-structure}, the beam direction vectors $\mathbf{u}\triangleq\{\mathbf{u}_{k}\}_{k\in\mathcal{K}}$ are first obtained from the optimal structure \eqref{eq:optimal-beamforming-structure}, followed by the pairwise multiplication $\mathbf{v}_{k}=\sqrt{p_{k}}\mathbf{u}_{k}$. To guarantee the feasibility of $\mathbf{v}$, we perform a simple scaling inspired by our analysis in (\ref{eq:virtual-power-constraint-feasible}):
\begin{align}
\mathbf{v} \leftarrow \frac{\sqrt{P/(1 + 1/\beta)}}{\sqrt{\max_{i\in\mathcal{M}}\sum_{k\in\mathcal{K}} |v_{k,i}|^2}} \mathbf{v}. \label{eq:scaling}
\end{align}
As discussed in Proposition \ref{prop:virtual-power-constraint}, the resulting $\mathbf{v}$ from \eqref{eq:scaling} becomes feasible to the original formulation \eqref{eq:problem}. The optimal quantization noise variance $\boldsymbol{\omega}$ is then computed according to (\ref{eq:optimal-quantization-noise-power-for-given-beamformer}). Finally, the proposed DL structure models the optimization function $\mathcal{V}(\cdot)$ in \eqref{eq:problem-mapping-objective} as
\begin{align}
\mathcal{V}(\mathbf{h},P,C)=\mathcal{F}(\mathcal{V}_{\Theta}(\mathbf{h},P,C)).\label{eq:DNN}
\end{align}
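To make the recovery module concrete, a hedged PyTorch sketch of $\mathcal{F}(\cdot)$ is given below: it rebuilds the directions of \eqref{eq:optimal-beamforming-structure}, applies the scaling \eqref{eq:scaling}, and sets $\boldsymbol{\omega}$ via \eqref{eq:optimal-quantization-noise-power-for-given-beamformer}. The $(M,K)$ tensor layout and the function signature are our own choices.
\begin{verbatim}
import torch

def recover_solution(h, p, lam, mu, P, beta):
    """h: (M, K) complex channels (columns h_k); p, lam: (K,) and
    mu: (M,) nonnegative DNN outputs; P, beta: scalars."""
    A = (h * lam) @ h.conj().T + torch.diag(mu).to(h.dtype)
    U = torch.linalg.solve(A, h)              # columns A^{-1} h_k
    U = U / U.norm(dim=0, keepdim=True)       # unit-norm directions u_k
    V = U * torch.sqrt(p)                     # v_k = sqrt(p_k) u_k
    per_ap = (V.abs() ** 2).sum(dim=1)        # sum_k |v_{k,i}|^2
    V = V * torch.sqrt(P / (1.0 + 1.0 / beta) / per_ap.max())
    omega = (V.abs() ** 2).sum(dim=1) / beta  # Proposition 1
    return V, omega
\end{verbatim}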
Plugging \eqref{eq:DNN} into \eqref{eq:problem-mapping-objective} results in the training problem
\begin{align}
\underset{\Theta}{\mathrm{max}}\ \mathtt{E}_{\mathbf{h}, P, C}\big[f\big(\mathcal{F}(\mathcal{V}_{\Theta}(\mathbf{h},P,C))\big)\big].\label{eq:training}
\end{align}
Thanks to the scaling \eqref{eq:scaling}, both the transmit power and fronthaul capacity constraints in \eqref{eq:problem-mapping} can be dropped in \eqref{eq:training}. The training problem \eqref{eq:training} can be readily addressed by the mini-batch SGD method, e.g., the Adam algorithm \cite{Kingma-et-al:ICLR15}, which iteratively updates the DNN parameter $\Theta$ by using the sample gradient evaluated over a mini-batch set $\mathcal{B}$ randomly sampled from the training dataset. Since \eqref{eq:training} is a maximization, the DNN parameter $\Theta^{[n]}$ obtained at the $n$th iteration follows the gradient-ascent update
\begin{align}
\Theta^{[n]} \! = \! \Theta^{[n-1]} \! + \! \gamma\mathtt{E}_\mathcal{B}[\bigtriangledown_{\Theta^{[n-1]}} f(\mathcal{F}(\mathcal{V}_{\Theta^{[n-1]}}(\mathbf{h},P,C)))],
\label{eq:update-rule}
\end{align}
where $\gamma>0$ denotes the learning rate.
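One unsupervised update of \eqref{eq:update-rule} may then be realized as below, reusing \texttt{recover\_solution} from the previous sketch. The \texttt{features} helper that stacks $\{\mathbf{h},P,C\}$ into the real-valued DNN input is hypothetical, and the ascent step is delegated to a standard optimizer such as Adam.
\begin{verbatim}
import torch

def sum_rate(h, V, omega):
    """Sum-rate f(v, omega) of Sec. II for one channel realization."""
    G = (h.conj().T @ V).abs() ** 2           # G[k, l] = |h_k^H v_l|^2
    noise = 1.0 + ((h.abs() ** 2).T * omega).sum(dim=1)
    interference = G.sum(dim=1) - G.diagonal()
    return torch.log2(1.0 + G.diagonal() / (noise + interference)).sum()

def train_step(net, opt, batch):
    """batch: list of (h, P, beta) draws; features() (hypothetical)
    stacks Re/Im of h with P and C into the DNN input vector."""
    opt.zero_grad()
    x = torch.stack([features(h, P, beta) for h, P, beta in batch])
    out = net(x)                              # (B, 2K+M), nonnegative
    loss = 0.0
    for (h, P, beta), o in zip(batch, out):
        K = h.shape[1]
        p, lam, mu = o[:K], o[K:2 * K], o[2 * K:]
        V, omega = recover_solution(h, p, lam, mu, P, beta)
        loss = loss - sum_rate(h, V, omega)   # ascent on the sum-rate
    (loss / len(batch)).backward()
    opt.step()
\end{verbatim}
In practice the loop over the batch would be vectorized; the scalar form is kept here only to make the correspondence with \eqref{eq:update-rule} explicit.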
Unlike the supervised DL-based beamforming DNN \cite{Zhang-et-al:TWC20}, which relies on the optimal solutions generated by iterative algorithms, the proposed training policy \eqref{eq:update-rule} does not require any prior knowledge of the nonconvex problem \eqref{eq:problem}, i.e., its optimal $\mathbf{v}$ and $\boldsymbol{\omega}$. Thus, the proposed DL approach can be carried out in a fully unsupervised manner, resulting in a simple implementation of the training step. Notice that the training step is carried out offline before the real-time C-RAN deployment. Once the DNN is trained, the CP exploits the optimized parameter set $\Theta$ to calculate the solutions $\mathbf{v}$ and $\boldsymbol{\omega}$ from \eqref{eq:DNN} for new channel inputs.
\subsection{Complexity Analysis}\label{sub:complexity-analysis}
The proposed DL structure \eqref{eq:DNN} consists of the matrix multiplications \eqref{eq:operation-hidden-layer} and the beamforming recovery operation \eqref{eq:optimal-beamforming-structure}. We have found that about $13MK$ hidden neurons are sufficient for achieving a good performance-complexity trade-off. In this case, the overall time complexity of the DNN is given by $\mathcal{O}(M^2K^2+M^3)$. The WMMSE algorithm requires solving a convex semidefinite program repeatedly. Assuming $L_\text{WMMSE}$ iterations, the complexity of the WMMSE algorithm becomes $\mathcal{O}(L_\text{WMMSE}(MK+M)^{4.5})$, which is much higher than that of the proposed DL method. The complexity comparison is numerically shown in Sec. \ref{sec:Numerical-Results}.
\section{Numerical Results\label{sec:Numerical-Results}}
This section provides numerical results validating the effectiveness of the proposed DL method. We consider $M=6$ APs and $K=6$ UEs uniformly distributed within a cell of radius 100 m. The one-ring channel model \cite{Yin-et-al:JSTSP14} is assumed, where single-scattering paths are scattered by $N$ scatterers positioned on a disk-shaped scattering ring centered on the UE. The channel vector of UE $k$ is then modeled as $\mathbf{h}_k=\sum_{n\in\{1,...,N\}}\mathbf{h}_{k,n}/{\sqrt{N}}$, where $\mathbf{h}_{k,n}=[\sqrt{\beta_{k,n,1}}e^{-j2\pi\frac{d_{k,n,1}+r}{\lambda_\text{c}}}\,\,...\,\,\sqrt{\beta_{k,n,M}}e^{-j2\pi\frac{d_{k,n,M}+r}{\lambda_\text{c}}}]^Te^{j\rho_{k,n}}$. Here, $\beta_{k,n,i}$ is the path-loss between AP $i$ and scatterer $n$ of UE $k$, $d_{k,n,i}$ is the distance between AP $i$ and scatterer $n$ of UE $k$, $r$ is the radius of the scattering ring, $\rho_{k,n}$ is a common phase shift, and $\lambda_\text{c}$ is the carrier wavelength. For all scattering paths, the path-loss is defined as $\beta_{k,n,i}=1/(1+((d_{k,n,i}+r)/d_0)^\eta)$ with reference distance $d_0$ and path-loss exponent $\eta$. For the simulations, we set the parameters as $d_0=30\,\mathrm{m}$, $r=5\,\mathrm{m}$, $\eta=3$, $N=2$, $\lambda_\text{c}=0.15\,\mathrm{m}$, and $\rho_{k,n}\sim\mathcal{U}(0,2\pi)$ for all $k$ and $n$.
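For reproducibility, one draw of the channel for a single UE can be sketched as follows; this is our own helper, and the AP-to-scatterer distances $d_{k,n,i}$ are assumed to be precomputed from the random cell geometry.
\begin{verbatim}
import math
import torch

def one_ring_channel(d, r=5.0, lam_c=0.15, d0=30.0, eta=3.0):
    """d: (N, M) tensor of distances d_{k,n,i} for one UE k; returns
    the length-M channel h_k averaged over the N scattering paths."""
    beta = 1.0 / (1.0 + ((d + r) / d0) ** eta)        # path-loss
    phase = -2.0 * math.pi * (d + r) / lam_c
    rho = 2.0 * math.pi * torch.rand(d.shape[0], 1)   # common shifts
    paths = torch.sqrt(beta) * torch.exp(1j * (phase + rho))
    return paths.sum(dim=0) / math.sqrt(d.shape[0])
\end{verbatim}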
Since the additive noises have unit variance, the signal-to-noise ratio (SNR) is equal to $P$. A DNN is constructed with $L=11$ layers in which each hidden layer has $S_l = 480$ neurons. For the hidden layers, we adopt the leaky rectified linear unit (LReLU) activation, given by $\mathrm{LReLU}(z)=z$ for $z\geq0$ and $\mathrm{LReLU}(z)=0.3z$ otherwise. To produce the nonnegative output $\{\mathbf{p}, \boldsymbol{\lambda}, \boldsymbol{\mu}\}$, the output layer employs the softplus activation $\mathrm{SoftPlus}(z)=\log(1+e^{z})$. The Adam optimizer \cite{Kingma-et-al:ICLR15} with mini-batch size $B=10^4$ is employed as the SGD algorithm. The training step \eqref{eq:update-rule} proceeds until the validation performance saturates. The trained DNN is evaluated with $100$ test samples.
\subsection{Dataset Generation}
The training samples $\{\mathbf{h}, P, C\}$ are randomly generated according to given distributions. As described above, the channel vectors $\mathbf{h}$ follow the one-ring channel model, and the constraint factors $P$ and $C$ are sampled from uniform distributions as $10\log_{10}P\sim\mathcal{U}( 10\log_{10}P_{\min}, 10\log_{10}P_{\max} )$ and $C \sim \mathcal{U}(C_{\min}, C_{\max})$, where the bounding parameters are $(P_{\min}, P_{\max})= (1, 10^3)$ and $(C_{\min}, C_{\max})= (2, 10)$.
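One training sample of the constraint factors can then be drawn as in the following illustrative sketch.
\begin{verbatim}
import math
import torch

def sample_constraints(Pmin=1.0, Pmax=1e3, Cmin=2.0, Cmax=10.0):
    """P is uniform in dB over [10log10(Pmin), 10log10(Pmax)];
    C is uniform over [Cmin, Cmax]; beta = 2^C - 1."""
    P_dB = torch.empty(1).uniform_(10 * math.log10(Pmin),
                                   10 * math.log10(Pmax))
    C = torch.empty(1).uniform_(Cmin, Cmax)
    return 10.0 ** (P_dB / 10.0), C, 2.0 ** C - 1.0
\end{verbatim}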
\vspace{-3mm}
\subsection{Results}
We compare the performance of the proposed DL approach with the following benchmark schemes: \textit{i)} WMMSE algorithm: A locally optimal solution to problem (\ref{eq:problem}) is found using the iterative WMMSE algorithm \cite{Yu-et-al:WCL19}; \textit{ii)} DiLearn: A DNN is designed to yield the beamforming vectors $\mathbf{v}$ directly. It is followed by the scaling operation in (\ref{eq:scaling}) and the computation of $\boldsymbol{\omega}$ in (\ref{eq:optimal-quantization-noise-power-for-given-beamformer}).
\begin{figure}
\centering
\begin{subfigure}[b]{0.8\textwidth}
\includegraphics[width=\textwidth]{park_WCL2021-0388_fig2_a.eps}
\caption{Average sum-rate versus the SNR}\label{fig:vsSNR_MK66}
\end{subfigure}
\begin{subfigure}[b]{0.8\textwidth}
\includegraphics[width=\textwidth]{park_WCL2021-0388_fig2_b.eps}
\caption{Average sum-rate versus the fronthaul capacity $C$}\label{fig:vsC_MK66}
\end{subfigure}
\caption{Comparison of average sum-rate for $M=K=6$}\label{fig:MK66}
\end{figure}
In Fig. \ref{fig:MK66}, we evaluate the average sum-rate performance in the C-RAN with $M=K=6$. Fig. \ref{fig:vsSNR_MK66} depicts the average sum-rate by varying the SNR for $C\in\{2,\, 6,\, 10\}$. The proposed DL provides performance close to the WMMSE algorithm, whereas the DiLearn scheme exhibits a severe performance loss. Similar observations can be made from Fig. \ref{fig:vsC_MK66}, which presents the average sum-rate with respect to the fronthaul capacity $C$ for $P\in\{0\,\,\mathrm{dB},10\,\,\mathrm{dB},20\,\,\mathrm{dB},30\,\,\mathrm{dB}\}$. The performance gap between the WMMSE and the DiLearn gets larger as $C$ and the SNR grow. On the other hand, the proposed scheme shows only a slight loss compared to the WMMSE algorithm. This means that the DNN of the proposed scheme, which outputs only $2K+M=18$ variables, can be trained more efficiently than that of the DiLearn scheme, whose output has $2MK=72$ variables.
\begin{table}[]
\caption{Average CPU run-time {[}sec{]} for $M=K=6$ with $C=10$}
\label{tab:CPU-time}
\hspace{+2mm}
\centering\begin{tabular}{|c|c|c|c|c|c|}
\hline
\multicolumn{4}{|c|}{WMMSE} & \multirow{2}{*}{proposed DL} & \multirow{2}{*}{DiLearn} \\ \cline{1-4}
0 dB & 10 dB & 20 dB & 30 dB & & \\ \hline
43.91 & 64.50 & 200.08 & 878.46 & 5.49$\times 10^{-3}$ & 5.29$\times 10^{-3}$ \\ \hline
\end{tabular}
\end{table}
Table \ref{tab:CPU-time} examines the advantage of the proposed DL scheme over the WMMSE algorithm in terms of the average CPU run-time at $C=10$. For the evaluations, both the trained DNNs and the WMMSE algorithm are implemented on a PC with an Intel i9-10900K CPU and 128 GB RAM using MATLAB R2020a. For $M=K=6$, the time complexity of the DL-based schemes is significantly lower than that of the WMMSE algorithm. Specifically, the gap between the WMMSE and DL-based schemes increases with the SNR. This is because the WMMSE algorithm requires a larger number of iterations for convergence in the high SNR regime, while the DL-based schemes show the same complexity regardless of the SNR as long as the DNN structures remain unchanged. The DiLearn scheme operates faster than the proposed scheme, since it does not require the matrix inversion in (\ref{eq:optimal-beamforming-structure}). However, the proposed scheme is more competitive considering the trade-off between performance and complexity.
\section{Conclusions\label{sec:Conclusion}}
This letter has proposed DL methods for joint design of beamforming and fronthaul quantization strategies in C-RANs. The key idea is to design an efficient DNN architecture based on inherent relationships between optimal beamforming and quantization noise statistics.
Numerical results demonstrate that the proposed DL-based scheme achieves the best trade-off between the sum-rate performance and time complexity in comparison to baseline schemes.
As future work, a more generalized framework can be considered by including a channel learning process in the learning structure or by generalizing over the numbers of UEs and APs.
\section{Introduction}
\label{sec:introduction}
\input{introduction_v4}
\section{Method}
\input{method_v6}
\section{Related work}
\label{sec:related_work}
\input{related_work_v2}
\section{Experiments}
\input{experiments_v5}
\section{Discussion}
\input{discussion_v3}
\section{Conclusion}
\input{conclusion_v2}
\subsection{Image Synthesis}
As shown quantitatively in Table \ref{tbl:FID}, HA-GAN achieves lower FID and MMD, implying that our model generates more realistic images. This is further confirmed by the synthetic images shown in Fig. \ref{fig:result_synthesis} and Fig. \ref{fig:result_reconstruction}, where HA-GAN generates sharper images compared to other methods. We found that our method outperforms baseline methods at both the resolution of $128^3$ and $256^3$, but the lead is larger at $256^3$ than at $128^3$.
Based on the results, we believe that the sharp generation results come from both the model itself and its ability to directly generate images at $256^3$ without interpolation upsampling.
For the baseline models, we found that $\alpha$-GAN and WGAN have similar performance, and VAE-GAN tends to generate blurry images. WGAN is essentially $\alpha$-GAN without the encoder. Based on the qualitative examples shown in Fig.~\ref{fig:result_synthesis} and the supplementary material, it can generate sharper images compared to $\alpha$-GAN and Progressive GAN. However, it also generates more artifacts.
According to the quantitative analysis comparing 1,024 generated images from each model, shown in Table \ref{tbl:FID}, the overall generation quality of $\alpha$-GAN is comparable with that of WGAN.
Although Progressive GAN and $\alpha$-GAN also generate realistic-looking images, they cannot directly generate images of size $256^3$ because they are less memory-efficient. Therefore, we have to first generate $128^3$ images and then interpolate them to $256^3$ for a fair comparison. The interpolated images are usually blurry and lack high-resolution details. In contrast, HA-GAN is memory-efficient and can directly generate images of size $256^3$. Therefore, it can generate sharper images with more details.
For the ablation studies, we first found that adding a low-resolution branch helps improve the results, likely because the low-resolution branch helps the model learn the global structure.
Second, we observe in Table \ref{tbl:ablation} that HA-GAN with an encoder outperforms the version without an encoder in terms of image synthesis quality. This is consistent with the observation in~\cite{rosca2017variational} that introducing an encoder to a GAN improves the quality of synthetic images. When an encoder is introduced to a GAN, the reconstruction loss in the objective function ensures that the reconstructed images are voxel-wise consistent with the original images. This term encourages the generator to represent all data and not collapse, improving the performance of the generator in terms of image synthesis. Finally, we also see that randomly sampled $r$ outperforms deterministic $r$. We think this is because using deterministic $r$ leads to deterministic sub-volume locations, so the generation of junction regions between sub-volumes is not learned well. In comparison, using randomly selected $r$ leads to randomly selected sub-volume locations. In this way, the junctions between sub-volumes are better covered.
The embeddings shown in Fig.~\ref{fig:result_pca_copd} and Fig.~\ref{fig:result_pca_gsp} reveal that the distribution of the synthetic images by HA-GAN is more consistent with that of the real images, compared to all baselines.
The scatters of VAE-GAN appear clustered and lie outside of the real data distribution. The clustering pattern suggests mode collapse, which is a common form of GAN failure. It occurs when the generator can only generate a few modes of the data distribution while ignoring the majority of them, leading to reduced diversity of the generated samples. From Fig.~\ref{fig:result_synthesis} we can also see that samples generated by VAE-GAN are blurry compared to real images, confirming that they are outliers in the latent feature space.
The scatters of WGAN/$\alpha$-GAN show compressed support of the real data distribution, which suggests that samples of WGAN/$\alpha$-GAN have lower diversity than the real images.
We think one possible reason is that the models only learn a few attributes of the samples in the dataset. To be more specific, the models learn an overly simplified distribution in which multiple inputs are mapped to a single output regardless of variations in the latent code, so the generated images are of lower diversity. The proposed HA-GAN model has an encoder module, which encourages different latent codes to map to different outputs, improving the diversity of generated samples~\cite{rosca2017variational}.
However, we observe in Fig.~\ref{fig:result_pca_gsp} that on the GSP dataset, the variance of synthetic images by HA-GAN is smaller than that of the real images, implying that the generated images are of lower diversity than the real data. We plan to improve synthetic diversity by introducing a mini-batch discrimination scheme \cite{salimans2016improved} or using multiple discriminators \cite{durugkar2016generative} in future work.
\subsection{Clinical Applications}
In this study, we demonstrate the clinical value of HA-GAN in three applications: data augmentation, image super-resolution, and clinically relevant feature extraction. For data augmentation, the results in Table~\ref{tbl:results_aug} show that samples generated by HA-GAN can help the training of the classification model. While samples generated by $\alpha$-GAN can also help the training, the performance gain is smaller. We think one reason is that samples generated by HA-GAN are more realistic, as also shown in Table~\ref{tbl:FID}. The GAN model can learn a rich prior from existing medical imaging datasets, and the generated samples can help classifiers achieve better performance on other smaller datasets for which few labeled samples exist. We leave data augmentation with HA-GAN for image segmentation to future work.
In the image super-resolution experiment, we show that our proposed HA-GAN-SR outperforms the patch-based GAN-CIRCLE in Table~\ref{tbl:recon}.
As shown in Fig.~\ref{fig:result_reconstruction} and Fig.~\ref{fig:result_reconstruction_detail}, the SR images generated by GAN-CIRCLE are smooth and the noise is suppressed, but the images generated by HA-GAN-SR are sharper and retain more details.
One possible reason is that our proposed HA-GAN-SR does not require dividing the whole volume into separate patches, so it is easier to learn the global structure and mitigate patchy artifacts.
We would like to note that the COPDGene dataset and the GSP dataset used in this study contain little noise. If the user needs to train HA-GAN on noisy data, we would recommend adding a total variation loss during training, which can serve as a constraint to ensure smoothness and suppress noise in generated images, as is also done in other GAN papers~\cite{rose2015noise,you2019ct}.
To make further progress, we may try to incorporate other components in HA-GAN-SR to improve performance on image super-resolution, including cycle-consistency~\cite{zhu2017unpaired,you2019ct} and Wasserstein constraint~\cite{gulrajani2017improved,lei2020wasserstein}.
For the experiment on clinically relevant feature extraction, we encode the full image into a 1D variable to extract a meaningful and compact feature representation for downstream clinical feature prediction.
Table \ref{tbl:r2} shows that HA-GAN can better extract clinically relevant features from the images, compared to VAE-GAN and $\alpha$-GAN. Some clinically relevant information might be hidden in specific details of the medical images and can only be observed at high resolution. VAE-GAN and $\alpha$-GAN can only process lower-resolution images of $128^3$. We speculate that the high-resolution information leveraged by HA-GAN helps it learn representations that are more predictive of the clinically relevant measurements.
Although our HA-GAN is capable of extracting clinically relevant features, it is an unsupervised method. Research~\cite{singla2018subject2vec} shows that if some clinically relevant supervision is utilized during training, the model better explains the clinically relevant measurements. To achieve this, we plan to incorporate clinically relevant measurements by maximizing the mutual information between the latent representation and the provided measurements in future work.
\subsection{Exploring Latent Space}
Fig.~\ref{fig:exploring} and Fig.~\ref{fig:exploring_brain} show that certain directions in the latent space learned by HA-GAN have semantic meanings. We can identify the directions in the latent space that correspond to each semantic meaning of interest. However, we realize that along the identified direction of latent space, some other factors of variation that are irrelevant to the semantic meaning of interest also change. For example, in Fig.~\ref{fig:exploring}, the size of the lung increases as the amount of emphysema increases; in Fig.~\ref{fig:exploring_brain}, as the brain size increases, the shape and orientation of the brain also change. This is because the latent representation learned is entangled \cite{bengio2013representation}, such that a change in the latent space corresponds to changes in multiple factors of variation.
For example, the latent direction we learned for ventricle size may also contain a portion for brain size. It is challenging to learn a representation disentangled for ventricle size and brain size in an unsupervised way. In real data, these two variables are associated.
To make sure the model learns disentangled representations, we would need to introduce additional regularization terms in the objective function that make the latent variables independent of each other, as introduced in~\cite{higgins2016beta, chen2018isolating}.
\subsection{Memory Efficiency}
Because of its hierarchical structure, HA-GAN processes only one sub-volume of the high-dimensional image rather than the entire image in each iteration during training. This makes HA-GAN more efficient than the baselines in terms of memory usage during training, as shown in Fig.~\ref{fig:result_memory}.
In addition, we found that as the output resolution increases, the total number of model parameters does not increase much; this is expected because only a few more convolutional layers are needed. But as the multiplier factor increases, the memory usage increases drastically. Based on the experimental results, we believe that the memory efficiency mainly comes from the sub-volume scheme rather than from the model parameters. In addition, here we only use one level of sub-volume selection and downsampling because it already drastically reduces the memory cost; we leave incorporating multiple levels of sub-volumes and downsampling to future work.
\subsection{Datasets}
The experiments are conducted on two large-scale medical datasets, including the COPDGene dataset~\cite{regan2011genetic} and the GSP dataset~\cite{holmes2015brain}. Both datasets are publicly available and details about image acquisition can be found in Supplementary Material.
{\vspace{2mm} \par \bf \noindent COPDGene Dataset: \hspace{2mm}}
We use 3D thorax computerized tomography (CT) images of 9,276 subjects from the COPDGene dataset in our study. Only full-inspiration scans are used. We trim blank slices with all-zero values and resize the images to $256^3$. The Hounsfield units (HU) of the CT images have been calibrated and air density correction has been applied. The HU values are then mapped to the intensity window of $[-1024,600]$ and normalized to $[-1,1]$.
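The intensity mapping described above amounts to a clip-and-rescale; the following Python snippet is a minimal sketch with our own helper name.
\begin{verbatim}
import numpy as np

def normalize_ct(hu):
    """Clip to the [-1024, 600] HU window, then map linearly
    to [-1, 1]."""
    hu = np.clip(hu, -1024.0, 600.0)
    return 2.0 * (hu + 1024.0) / (600.0 + 1024.0) - 1.0
\end{verbatim}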
{\vspace{2mm} \par \bf \noindent GSP Dataset: \hspace{2mm} }
We use 3D brain magnetic resonance images (MRIs) of 3,538 subjects from the Brain Genomics Superstruct Project (GSP)~\cite{holmes2015brain} in our experiments. The FreeSurfer package~\cite{fischl2012freesurfer} is used for removal of the non-brain regions in the images, bias-field correction, intensity normalization, affine registration to Talairach space, and resampling to $1\,\mathrm{mm}^3$ isotropic resolution. We trim the blank slices with all-zero values and rescale the images to $256^3$. The intensity values are normalized to $[-1,1]$.
\subsection{Image Synthesis}
We examine whether the synthetic images are realistic-looking quantitatively and qualitatively, where synthetic images are generated by feeding random noise into the generator.
\subsubsection{Quantitative Evaluation}
If the synthetic images are realistic-looking, then their distribution should be indistinguishable from that of the real images. Therefore, we can quantitatively evaluate the quality of the synthetic images by computing the Fréchet Inception Distance (FID)~\cite{heusel2017gans} and Maximum Mean Discrepancy (MMD)~\cite{gretton2012kernel} between the distributions of real and synthetic images. Lower values of these quantities indicate that the distributions are more similar, implying more realistic-looking synthetic images. We evaluate the synthesis quality at two resolutions: $128^3$ and $256^3$. Due to memory limitations, the baseline models can only be trained at sizes up to $128^3$. To make a fair comparison with our model (HA-GAN), we apply trilinear interpolation to upsample the synthetic images of the baseline models to $256^3$.
We adopt a 3D ResNet model pre-trained on 3D medical images~\cite{chen2019med3d} to extract features for computing the FID. Note that the scale of the FID depends on the feature extraction model; thus our FID values are not comparable to FID values calculated on 2D images, which are based on features extracted using a model pre-trained on ImageNet.
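For reference, given feature matrices extracted by the 3D backbone, the FID can be computed with the standard Fréchet formula between two fitted Gaussians. The sketch below is our own minimal implementation, not the evaluation code used here.
\begin{verbatim}
import numpy as np
from scipy.linalg import sqrtm

def fid(feat_real, feat_fake):
    """Frechet distance between Gaussians fitted to the two feature
    matrices (rows = samples)."""
    m1, m2 = feat_real.mean(axis=0), feat_fake.mean(axis=0)
    S1 = np.cov(feat_real, rowvar=False)
    S2 = np.cov(feat_fake, rowvar=False)
    covmean = sqrtm(S1 @ S2).real   # drop small imaginary residue
    diff = m1 - m2
    return float(diff @ diff + np.trace(S1 + S2 - 2.0 * covmean))
\end{verbatim}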
As shown in Table \ref{tbl:FID}, HA-GAN achieves lower FID and MMD than the baselines, which implies that HA-GAN generates more realistic images. We found that at the resolution of $128^3$, HA-GAN still outperforms the baseline models, but the margin is smaller than at the resolution of $256^3$.
In addition, we performed statistical tests on the evaluation results at $256^3$ resolution between methods. More specifically, we performed two-sample $t$-tests (one-tailed) between HA-GAN and each of the baseline methods. The results are shown in Table~\ref{tbl:FID_p}. At a significance level of 0.05, HA-GAN achieves significantly higher performance than baseline methods for both datasets.
\begin{table*}[htp]
\centering
\caption{ Evaluation for image synthesis}
\begin{adjustbox}{max width=\textwidth}
\label{tbl:FID}
\begin{tabular}{lcccc|cccc}
\toprule
Dataset&\multicolumn{4}{c|}{COPDGene (Lung)} & \multicolumn{4}{c}{GSP (Brain)}\\
\toprule
Resolution&\multicolumn{2}{c}{$128^3$}&\multicolumn{2}{c|}{$256^3$} & \multicolumn{2}{c}{$128^3$}&\multicolumn{2}{c}{$256^3$}\\
\toprule
& FID$\downarrow$& MMD$\downarrow$& FID$\downarrow$& MMD$\downarrow$ & FID$\downarrow$& MMD$\downarrow$& FID$\downarrow$& MMD$\downarrow$ \\
\midrule
WGAN& $0.012_{\pm.011}$&$0.092_{\pm.059}$&$0.161_{\pm.044}$&$0.471_{\pm.110}$&
$0.006_{\pm.002}$&$0.406_{\pm.143}$&$0.025_{\pm.013}$&$0.328_{\pm.139}$ \\
VAE-GAN& $0.139_{\pm.002}$&$1.065_{\pm.008}$&$0.328_{\pm.007}$&$1.028_{\pm.008}$&
$0.075_{\pm.004}$&$0.667_{\pm.026}$&$0.635_{\pm.040}$&$0.702_{\pm.028}$ \\
$\alpha$-GAN& $0.010_{\pm.004}$&$0.089_{\pm.056}$&$0.043_{\pm.094}$&$0.323_{\pm.080}$&
$0.010_{\pm.007}$&$0.606_{\pm.204}$&$0.029_{\pm.016}$&$0.428_{\pm.141}$ \\
Progressive GAN&$0.015_{\pm.007}$&$0.150_{\pm.072}$&$0.107_{\pm.037}$&$0.287_{\pm.123}$&
$0.017_{\pm.008}$&$0.818_{\pm.217}$&$0.127_{\pm.055}$&$1.041_{\pm.239}$\\
\midrule
HA-GAN & \bm{$0.005_{\pm.003}$}&\bm{$0.038_{\pm.020}$}&\bm{$0.008_{\pm.003}$}&\bm{$0.022_{\pm.010}$}&
\bm{$0.002_{\pm.001}$}&\bm{$0.129_{\pm.026}$}&\bm{$0.004_{\pm.001}$}&\bm{$0.086_{\pm.029}$} \\
\bottomrule
\end{tabular}
\vspace{-3mm}
\end{adjustbox}
\end{table*}
\begin{table}[htp]
\centering
\vspace{-3mm}
\caption{ Statistical test for the comparison of image synthesis. Null hypothesis: the mean distance metric (FID or MMD) of HA-GAN is equal to or higher than that of the baseline method.}
\begin{adjustbox}{max width=0.48\textwidth}
\label{tbl:FID_p}
\begin{tabular}{lcccc}
\toprule
Dataset&\multicolumn{2}{c}{COPDGene (Lung)} & \multicolumn{2}{c}{GSP (Brain)}\\
\toprule
p-value & FID& MMD& FID& MMD \\
\midrule
WGAN& $\num{2e-3}$&$<\num{1e-3}$&
$0.01$&$<\num{1e-3}$ \\
VAE-GAN& $<\num{1e-6}$ &$<\num{1e-6}$&
$<\num{1e-6}$&$<\num{1e-6}$ \\
$\alpha$-GAN& $\num{2e-3}$&$\num{1e-3}$&
$0.02$&$\num{9e-3}$ \\
Progressive GAN&$<\num{1e-3}$&$<\num{1e-3}$&
$\num{3e-3}$&$<\num{1e-3}$\\
\bottomrule
\end{tabular}
\vspace{-3mm}
\end{adjustbox}
\end{table}
\subsubsection{Ablation Study}
We perform three ablation studies to validate the contribution of each of the proposed components. The experiments are performed at $256^3$ resolution. As shown in Table~\ref{tbl:ablation}, we found that adding a low-resolution branch helps improve the results, since it helps the model learn the global structure. Adding an encoder also improves performance, since it helps stabilize the training. For the deterministic-$r$ experiments, we make the sub-volume selector use a set of deterministic, equally spaced values of $r$ rather than the randomly sampled $r$ used by default. From the results, we can see that randomly sampled $r$ outperforms deterministic $r$.
\begin{table*}[htp]
\centering
\vspace{-3mm}
\caption{ Results of ablation study}
\begin{adjustbox}{max width=\textwidth}
\label{tbl:ablation}
\begin{tabular}{lcccc}
\toprule
Dataset&\multicolumn{2}{c}{COPDGene (Lung)} & \multicolumn{2}{c}{GSP (Brain)}\\
\toprule
& FID$\downarrow$& MMD$\downarrow$& FID$\downarrow$& MMD$\downarrow$ \\
\midrule
HA-GAN w/o low-res branch& $0.030_{\pm.018}$&$0.071_{\pm.039}$&
$0.118_{\pm.078}$&$0.876_{\pm.182}$ \\
HA-GAN w/o Encoder& $0.010_{\pm.003}$&$0.034_{\pm.006}$&
$0.006_{\pm.003}$&$0.099_{\pm.028}$ \\
HA-GAN w/ deterministic $r$ & $0.014_{\pm.003}$&$0.035_{\pm.007}$&
$0.061_{\pm.016}$&$0.612_{\pm.157}$ \\
HA-GAN & \bm{$0.008_{\pm.003}$}&\bm{$0.022_{\pm.010}$}&
\bm{$0.004_{\pm.001}$}&\bm{$0.086_{\pm.029}$} \\
\bottomrule
\end{tabular}
\vspace{-3mm}
\end{adjustbox}
\end{table*}
\subsubsection{Qualitative Evaluation}
To qualitatively analyze the results, we show some samples of synthetic images in Fig.~\ref{fig:result_synthesis}. The figure illustrates that HA-GAN generates sharper images than the baselines. More high-resolution samples and latent space interpolation are provided in \textbf{Supplementary Material}.
To illustrate whether the synthetic images look similar to the real ones, we embed the synthetic and real images into the same space. If the synthetic images are indistinguishable from the real images, then we expect the synthetic and real images to occupy the same region of the embedding space. Following the practice of~\cite{kwon2019generation}, we first use a pretrained 3D medical ResNet model~\cite{chen2019med3d} to extract features for 512 synthetic images from each method. As a reference, we also extract features for the real image samples using the same ResNet model. Then we conduct multidimensional scaling (MDS) to embed the extracted features into 2-dimensional space for both the COPDGene and GSP datasets. The results are visualized in Fig. \ref{fig:result_pca_copd} and \ref{fig:result_pca_gsp}, respectively. In both figures, we fit an ellipse to the embedding of each model with least squares. We observe that the synthetic images by HA-GAN overlap better with the real images than those of the baselines. This implies that HA-GAN generates more realistic-looking images than the baselines. We also provide PCA and tSNE visualization results in the Supplementary Material.
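The embedding step can be reproduced along the following lines; the helper below is a minimal sketch with an assumed feature-matrix layout (rows are samples) and a fixed random seed for repeatability.
\begin{verbatim}
import numpy as np
from sklearn.manifold import MDS

def embed_features(feat_real, feat_fake, seed=0):
    """Joint 2-D MDS embedding of real and synthetic features."""
    X = np.vstack([feat_real, feat_fake])
    emb = MDS(n_components=2, random_state=seed).fit_transform(X)
    return emb[:len(feat_real)], emb[len(feat_real):]
\end{verbatim}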
\begin{figure*}[t!]
\centering
\includegraphics[width = \textwidth]
{figures/result_synthesis_v3.pdf}
\caption[Caption]{ \emph{Randomly generated} images from noise by different models and the real images. The figure illustrates that HA-GAN generates sharper images than the baselines.}
\label{fig:result_synthesis}
\vspace{5mm}
\includegraphics[width = 0.67\textwidth]
{figures/sr_full.pdf}
\vspace{-1mm}
\caption{ Visual comparison of super-resolution results.
First two rows: Coronal and axial view of visual comparison on GSP dataset.
Third row: Axial view of comparison on COPDGene dataset. The display window is [-1024, 600] HU.
Fourth row: Coronal view of comparison on COPDGene dataset. The display window is [-1024, -250] HU to illustrate lung details.
We show the zoom-in airway region marked by the red rectangle in Fig.~\ref{fig:result_reconstruction_detail}. The figure illustrates that HA-GAN-SR generates sharper images than GAN-CIRCLE.}
\vspace{-4mm}
\label{fig:result_reconstruction}
\end{figure*}
\begin{figure}[t]
\centering
\begin{subfigure}{.35\textwidth}
\includegraphics[width = 1.\textwidth]
{figures/MDS_COPD.pdf}
\caption[Caption]{ MDS visualization on COPDGene dataset.}
\label{fig:result_pca_copd}
\end{subfigure}
\\
\begin{subfigure}{.35\textwidth}
\includegraphics[width = 1.\textwidth]
{figures/MDS_GSP.pdf}
\caption[Caption]{ MDS visualization on GSP dataset. }
\label{fig:result_pca_gsp}
\end{subfigure}
\caption{Comparison of the embeddings of different models. We embed the features extracted from synthesized images into 2-dimensional space with MDS. The ellipses are fitted to the scatter of each model for better visualization. The figures show that the embedding region of HA-GAN overlaps most with that of the real images, compared to the baselines.}
\end{figure}
\subsection{Data Augmentation for Supervised Learning}
In this experiment, we used the synthesized samples from HA-GAN to augment the training dataset for a supervised learning task.
Previous work~\cite{frid2018gan} has shown that GAN-generated samples improve the diversity of the training dataset, resulting in better discriminative performance of the classifier.
Motivated by their results, we designed our experiment with the following three steps: First, we extended our HA-GAN architecture to enable conditional image generation and trained a class-conditional variant of HA-GAN. Next, we used the trained HA-GAN to generate new images with class labels. Finally, we combined the original training dataset and the GAN-generated images to train a multi-class classifier, and evaluated its performance on the test set.
We demonstrate our experiment on the COPDGene dataset using the GOLD score as a multi-class label. The GOLD score is a 5-class categorical variable ranging from 0 to 4.
We made two modifications to the original HA-GAN architecture to enable class-conditional image generation: 1) We updated the generator module $G^A(Z;c)$ to take a one-hot code $c\sim p_c$ as input, along with the latent variable $Z \sim \mathcal{N}( \mathbf{0}, \mathbf{I})$. $c$ represents the target class for the conditional image generation.
2) We updated the discriminator to output two probability distributions, one over the binary real/fake classification (same
as original HA-GAN), and another over the multi-class classification of class labels $P(C|X)$. Thus, the discriminator also acts as an auxiliary classifier for the class labels ~\cite{odena2017conditional}.
A schematic of the modified model can be found in Supplementary Material. In addition, two new terms are added to the original HA-GAN loss function for conditional generation:
\begin{equation}
\label{Eq:loss_cgan}
\footnotesize
\begin{aligned}
\mathcal{L}^H_{class}(G^H,G^A,D^H) = \mathbb{E}[\log P(C=c|X_r^H)] + \mathbb{E}[\log P(C=c|\widehat{X}_r^H)]
\\
\mathcal{L}^L_{class}(G^L,G^A,D^L) = \mathbb{E}[\log P(C=c|X^L)] + \mathbb{E}[\log P(C=c|\widehat{X}^L)]
\end{aligned}
\end{equation}
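A minimal PyTorch-style sketch of these two modifications is given below; the module names and feature dimensions are our assumptions for illustration, not the released implementation:
\begin{verbatim}
import torch
import torch.nn as nn

class ConditionalInput(nn.Module):
    # Our sketch of G^A(Z; c): fuse noise Z with a one-hot class code c.
    def __init__(self, latent_dim=1024, num_classes=5):
        super().__init__()
        self.fc = nn.Linear(latent_dim + num_classes, latent_dim)

    def forward(self, z, c_onehot):
        return self.fc(torch.cat([z, c_onehot], dim=1))

class AuxiliaryHead(nn.Module):
    # Discriminator head with a real/fake logit and class logits P(C|X).
    def __init__(self, feat_dim=128, num_classes=5):
        super().__init__()
        self.adv = nn.Linear(feat_dim, 1)
        self.cls = nn.Linear(feat_dim, num_classes)

    def forward(self, feat):
        return self.adv(feat), self.cls(feat)
\end{verbatim}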
For comparison, we trained a class-conditional variant of $\alpha$-GAN on the COPDGene dataset. The same two modifications discussed above are incorporated into the original $\alpha$-GAN model for conditional generation. We use a 3D CNN (implementation details are included in Table VIII of the Supplementary Material) as the classification model.
We randomly sampled 80\% of the subjects as the training set, while the rest are used as the test set. We use an image size of $128^3$ for this experiment.
For creating the augmented training set, we combine randomly generated images from class-conditioned GAN (20\%) with the real images in the training set (80\%).
The proportion of different GOLD classes for generated images is the same as the original dataset.
We train two classifiers, one on the original training set and one on the GAN-augmented training set, for 20 epochs each, and evaluate their performance on a held-out test set of real images.
Table~\ref{tbl:results_aug} shows the results on the COPDGene dataset. The classifier trained with GAN-augmented data performed better than the baseline model trained only on real images. Augmentation with HA-GAN further improves performance compared to $\alpha$-GAN.
\begin{table}[htp]
\caption{Evaluation result for GAN-based data augmentation}
\centering
\begin{adjustbox}{max width=\textwidth}
\centering
\label{tbl:results_aug}
\begin{tabular}{lc}
\toprule
& Accuracy(\%) \\
\midrule
Baseline&$59.7$\\
Augmented with $\alpha$-GAN & $61.7$ \\
Augmented with HA-GAN & \bm{$62.9$} \\
\bottomrule
\end{tabular}
\vspace{-3mm}
\end{adjustbox}
\end{table}
\subsection{Image Super-resolution}
In this section, we use HA-GAN for super-resolution (SR) image restoration from noisy LR input images.
In order to achieve optimal performance on the image super-resolution task,
we implemented a variant of HA-GAN for SR (HA-GAN-SR) with the following changes:
(1) Increased the dimensionality of the feature representation learned by the encoder (i.e., the bottleneck features), which helps preserve details from the input; (2) Added a residual connection between the input image and the output prediction; GAN-CIRCLE~\cite{you2019ct} suggested this change, as it makes the super-resolution task easier by turning high-resolution image prediction into residual prediction; (3) Added skip-connections between blocks, as commonly used in encoder-decoder models such as U-Net~\cite{ronneberger2015u}, which improves information flow; (4) Updated the training procedure to remove the overhead of random image synthesis from Gaussian noise input; (5) Applied the sub-volume selector to the input image to ensure memory efficiency. Despite these changes, the core idea of HA-GAN is retained: training with randomly selected sub-volumes for memory efficiency, and testing with the full volume to preserve the global structure.
The detailed architecture is shown in Fig.~\ref{fig:unet_arch}.
\begin{figure*}[htp]
\centering
\includegraphics[width=0.67\textwidth]{figures/Unet_sr.pdf}
\caption{Schematic of the generator of HA-GAN-SR for image super-resolution. At the training time, instead of directly generating high-resolution full volume, a selector is used to randomly select a sub-volume from the input low-resolution image. The sub-volume is fed through the network to generate corresponding sub-volume of high-resolution image.
At the inference time, since there is no need for gradient storage, the memory demand is lower and sub-volume selection is no longer needed. We directly forward the low-resolution volume to generate full corresponding high-resolution volume. The number beneath each block indicates the number of feature maps, the number on the side indicates relative image size. }
\label{fig:unet_arch}
\end{figure*}
During training, the selector $S(\cdot,\cdot)$ is applied to randomly select a sub-volume consisting of adjacent slices from the input low-resolution (LR) image.
The selected sub-volume is then passed through the encoder-decoder to create a transformed high-resolution (HR) sub-volume.
The discriminator takes both the input LR and HR sub-volumes as input, and distinguishes the real HR sub-volume from the fake ones ($\mathcal{L}_{GAN}$). There is also an $\ell_1$ loss for the reconstruction of the ground-truth HR sub-volume ($\mathcal{L}_{Recon}$). The overall loss function is defined as:
\begin{equation}
\begin{aligned}
\mathcal{L} & =
\mathcal{L}_{GAN}(G,D) +
\lambda \mathcal{L}_{Recon}(G)
\\
& = \min_G\max_D[\log D(S(X^{L},r),S(X^{H},r))
\\
& + \log(1- D(S(X^{L},r),G(S(X^{L},r))))]
\\
&+\lambda \min_G\left\lVert S(X^{H},r) - G(S(X^{L},r))\right\rVert_{1}.
\end{aligned}
\end{equation}
At inference, there is no need for gradient storage and the memory demand is low; hence, we directly pass the full LR volume through the generator to produce the corresponding full HR volume.
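A sketch of one HA-GAN-SR training step implementing this loss is given below; it is our illustration, assuming \texttt{D} takes the (LR, HR) pair directly and outputs one logit per sample:
\begin{verbatim}
import torch
import torch.nn.functional as F

def sr_training_step(G, D, x_lr, x_hr, lr_depth=16, scale=2, lam=1.0):
    # Select aligned sub-volumes of adjacent slices: S(X^L, r), S(X^H, r).
    max_start = x_lr.shape[2] - lr_depth
    r = torch.randint(0, max_start + 1, (1,)).item()
    lr_sub = x_lr[:, :, r:r + lr_depth]
    hr_sub = x_hr[:, :, scale * r:scale * (r + lr_depth)]
    fake_hr = G(lr_sub)
    ones = torch.ones(x_lr.size(0), 1, device=x_lr.device)
    zeros = torch.zeros_like(ones)
    # Discriminator sees the (LR, HR) pair, as in the loss above.
    loss_d = F.binary_cross_entropy_with_logits(D(lr_sub, hr_sub), ones) \
           + F.binary_cross_entropy_with_logits(
                 D(lr_sub, fake_hr.detach()), zeros)
    # Generator: adversarial term plus lambda-weighted l1 reconstruction.
    loss_g = F.binary_cross_entropy_with_logits(D(lr_sub, fake_hr), ones) \
           + lam * F.l1_loss(fake_hr, hr_sub)
    return loss_d, loss_g
\end{verbatim}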
We perform experiments on the COPDGene and GSP datasets. On the COPDGene dataset, we crop a bounding box of $256^3$ over the lung region. We followed the practice in~\cite{jiang2018super,you2019ct} to generate LR images by adding noise to the original images and then lowering the spatial resolution by a factor of 2. We randomly sampled 80\% of the subjects as the training set, and the rest are used as the test set. We set the learning rates to $0.0001$ and $0.0004$ for the generator and discriminator, respectively. We use the Adam optimizer, and the batch size is set to 4. $\lambda$ is set to 1. Trilinear interpolation is used for upsampling. The model is trained for 30000 steps.
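As an illustration of this degradation protocol, a sketch is given below; the Gaussian noise model and its magnitude are our assumptions, since only the general procedure of~\cite{jiang2018super,you2019ct} is specified:
\begin{verbatim}
import torch
import torch.nn.functional as F

def make_lr(x_hr, sigma=0.01):
    # Simulate a noisy low-resolution input: add noise (assumed Gaussian
    # here), then lower the spatial resolution by a factor of 2.
    noisy = x_hr + sigma * torch.randn_like(x_hr)
    return F.interpolate(noisy, scale_factor=0.5, mode='trilinear',
                         align_corners=False)
\end{verbatim}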
For comparison, we also train GAN-CIRCLE on the two datasets. We only change the convolutional kernels from 2D to 3D to adapt to the 3D data, and use the recommended hyperparameters and optimizer. Due to memory constraints, it can only operate on patches of size $64^3$, so we divide the original $256^3$ images into non-overlapping $64^3$ patches. At test time, the predicted patches are aggregated into the full image. The results are shown in Fig.~\ref{fig:result_reconstruction} and Table~\ref{tbl:recon}. Our proposed method performs better than the baseline model both qualitatively and quantitatively.
\begin{table*}[htp]
\caption{Evaluation for image super-resolution}
\centering
\begin{adjustbox}{max width=\textwidth}
\centering
\label{tbl:recon}
\begin{tabular}{lcccccc}
\toprule
Dataset&\multicolumn{3}{c}{COPDGene (Lung)}&\multicolumn{3}{c}{GSP (Brain)}\\
\toprule
& SSIM $\uparrow$ & NMSE(\%) $\downarrow$ & PSNR $\uparrow$ & SSIM $\uparrow$ & NMSE(\%) $\downarrow$ & PSNR $\uparrow$ \\
\midrule
GAN-CIRCLE&$0.856_{\pm.030}$&$0.520_{\pm.266}$&$29.2_{\pm.8}$&$0.838_{\pm.008}$&$0.596_{\pm.060}$&$27.8_{\pm.4}$\\
HA-GAN-SR & \bm{$0.886_{\pm.030}$}& \bm{$0.266_{\pm.203}$}&\bm{$31.9_{\pm.9}$}&\bm{$0.879_{\pm.007}$}&\bm{$0.333_{\pm.031}$}&\bm{$30.1_{\pm.3}$}\\
\bottomrule
\end{tabular}
\vspace{-3mm}
\end{adjustbox}
\end{table*}
\begin{figure*}[t!]
\centering
\includegraphics[width = 0.67\textwidth]
{figures/sr_detail2.pdf}
\vspace{-1mm}
\caption{ Zoom-in airway region. While GAN-CIRCLE generates smooth images, HA-GAN-SR generates sharper images that preserve more details. The display window is [-1024, 600] HU.}
\vspace{-4mm}
\label{fig:result_reconstruction_detail}
\end{figure*}
In addition, we perform a structure-wise evaluation of the image super-resolution results on the GSP dataset. The goal is to study whether the generated high-resolution (HR) images are consistent with the original HR images at the structure level. We use FastSurfer~\cite{henschel2020fastsurfer} to segment a representative subset of 10 brain ROIs from the real and generated HR brain images, including cerebral white matter (WM), lateral ventricle (LV), cerebellar white matter (CW), thalamus (TH), caudate (CA), putamen (PU), pallidum (PA), brainstem (BS), hippocampus (HP), and amygdala (AM). The Dice score is used to evaluate the overlap between structures segmented from the real HR image and those segmented from the generated HR image. Fig.~\ref{fig:result_sr_dice} shows that HA-GAN-SR achieves a mean Dice score of 0.8 or higher on seven structures while outperforming GAN-CIRCLE on eight structures.
\begin{figure}[htp]
\centering
\includegraphics[width = .5\textwidth]
{figures/sr_dice.pdf}
\caption{Structure-wise Dice scores on the image super-resolution results (higher is better). The figure quantitatively measures how much the brain structures in the generated HR images overlap with those in the original HR images, and shows that HA-GAN achieves higher Dice scores for nearly all of the brain structures.}
\label{fig:result_sr_dice}
\end{figure}
\subsection{Clinically Relevant Feature Extraction}
In this section, we evaluate whether the latent variables encoded from real images can predict clinically relevant measurements. This task evaluates how much information about disease severity is preserved in the encoded latent features.
\begin{table}[t]
\centering
\caption{$R^2$ for predicting clinically relevant measurements}
\label{tbl:r2}
\begin{tabular}{lccc}
\toprule
Method
& $\log$ \texttt{FEV1pp}
& $\log$ \texttt{$\text{FEV}_1 / \text{FVC}$}
& $\log$ \texttt{$\%$Emphysema}
\\
\midrule
VAE-GAN
& 0.215 & 0.315 & 0.375
\\
$\alpha$-GAN
& 0.512 & 0.622 & 0.738
\\
\midrule
HA-GAN
& \bf 0.555 & \bf 0.657 & \bf 0.746
\\
\bottomrule
\multicolumn{4}{p{.45\textwidth}}{We do not include the results of WGAN and Progressive GAN, because they do not incorporate an encoder.}
\end{tabular}
\vspace{-4mm}
\end{table}
We select two respiratory measurements and one CT-based measurement of emphysema to quantify disease severity. For the respiratory measurements, we use the percent predicted value of Forced Expiratory Volume in one second (\texttt{FEV1pp}) and its ratio with Forced Vital Capacity (FVC) (\texttt{$\text{FEV}_1 / \text{FVC}$}).
Given extracted features, we train a Ridge regression model with $\lambda = 1\times10^{-4}$ to predict the {\it logarithm} of each of the measurements. We report the $R^2$ scores on held-out test data. Table~\ref{tbl:r2} shows that HA-GAN achieves higher $R^2$ than the baselines.
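Concretely, this evaluation can be reproduced with scikit-learn as sketched below; the variable names are ours, with \texttt{Z} holding the encoded latent features and \texttt{y} one of the measurements:
\begin{verbatim}
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score

def evaluate_features(Z_train, y_train, Z_test, y_test):
    # Ridge regression with lambda = 1e-4, predicting the log-measurement.
    model = Ridge(alpha=1e-4)
    model.fit(Z_train, np.log(y_train))
    return r2_score(np.log(y_test), model.predict(Z_test))
\end{verbatim}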
The results imply that HA-GAN preserves more information about the disease severity than baselines.
\subsection{Exploring the Latent Space}
This section investigates whether changes along certain directions in the latent space correspond to semantic meanings.
We segment the lung regions in the thorax CT images using Chest Image Platform (CIP)\cite{san2015chest}, and segment the fat tissues\cite{lee2018fully} and bone tissues via thresholding. The detailed thresholding criteria can be found in Supplementary Material.
For emphysema regions \cite{wang2013optimal}, we apply thresholding below $-950$ HU within the lung volume.
We segment the brain, hippocampus, and lateral ventricle for the synthetic brain MRIs with the FreeSurfer package~\cite{fischl2012freesurfer}.
Next, we train linear regression models that predict the total volume of the different tissues/regions from the encoded latent representation $Z$ of each image, optimizing with least squares. The learned parameter vector for each class represents the latent direction.
Then, we manipulate the latent variable along the direction given by the learned parameters of the linear models and generate images by feeding the resulting latent representations into the generator. More specifically, a reference latent variable is first randomly sampled; the latent variable is then moved along the learned latent direction until the target volume, as predicted by the linear regression model, is reached. As shown in Fig.~\ref{fig:exploring}, for thorax CT images, we identify directions in latent space corresponding to the volumes of lung, fat, bone, and emphysema, respectively. As shown in Fig.~\ref{fig:exploring_brain}, for brain MRIs, we identify directions in latent space corresponding to the volumes of the brain, hippocampus, and lateral ventricles, respectively. When we move along these directions in latent space, we observe the corresponding changes in the volumes of these tissues.
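This traversal can be sketched as follows; \texttt{volume\_model} stands for the fitted linear regression model, and the step size and stopping rule are our assumptions:
\begin{verbatim}
import numpy as np

def traverse(z_ref, direction, volume_model, target_volume,
             step=0.1, max_steps=200):
    # Move the latent code along the learned direction until the linear
    # model predicts the target tissue volume, then return the code.
    z = z_ref.copy()
    unit = direction / np.linalg.norm(direction)
    for _ in range(max_steps):
        if volume_model.predict(z[None])[0] >= target_volume:
            break
        z = z + step * unit
    return z  # feed into the generator to synthesize the image
\end{verbatim}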
\begin{figure}[t]
\centering
\includegraphics[width=.48\textwidth ]{figures/explore_lung_v4.pdf}
\caption{ Latent space exploration on thorax CT images.
The figure reports synthetic images generated by changing the latent code in four different directions, corresponding to the lung, fat, and bone volumes as well as the proportion of emphysema. The number shown below each slice indicates the percentage of the lung region of the synthetic image occupied by the volume of interest. The segmentation masks are plotted in green.}
\label{fig:exploring}
\end{figure}
\begin{figure}[t!]
\centering
\includegraphics[trim=20 50 40 50, clip, width=.48\textwidth ]{figures/explore_brain_v4.pdf}
\caption{ Latent space exploration on brain MRIs.
The figure reports synthetic images generated by changing the latent code in three different directions, corresponding to the volumes of the brain, hippocampus, and lateral ventricles. The number shown below each slice indicates the percentage of the synthetic image occupied by the volume of interest. The segmentation masks are plotted in red.}
\label{fig:exploring_brain}
\end{figure}
\subsection{Memory Efficiency}
In this section, we compare the memory efficiency of HA-GAN with the baselines. We measure the GPU memory usage at training time for all models under different resolutions, including $32^3$, $64^3$, $128^3$, and $256^3$. The results are shown in Fig.~\ref{fig:result_memory}. Note that the experiments are performed on the same GPU (Tesla V100 with 16GB memory), and we set the batch size to 2. HA-GAN consumes much less memory than the baseline models at all resolutions. In addition, HA-GAN is the only model that can generate images of size $256^3$; all other models exhaust the entire GPU memory, so their memory demand cannot be measured.
In order to investigate where the memory efficiency comes from, we report the number of parameters of HA-GAN at different resolutions in Table~\ref{tbl:parameter}. We found that as the resolution increases, the number of parameters only increases marginally, which is expected since the model only requires a few more layers as the resolution grows. We also performed experiments testing the impact of the multiplier factor of the sub-volume selector. More specifically, we measured GPU memory usage under different sub-volume sizes (e.g., 1/8 or 1/4 of the full volume) by controlling the multiplier factor of the sub-volume selector. The results are shown in Table~\ref{tbl:multiplier}. We found that as the multiplier factor increases, memory usage increases drastically.
In addition, we empirically find that HA-GAN is more computationally efficient than the baseline models at the $128^3$ resolution, see Supplementary Material for more details.
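For reference, the peak-memory measurement can be done with PyTorch's built-in counters, as in the sketch below (\texttt{train\_step} stands for one full training iteration of the model under test):
\begin{verbatim}
import torch

def peak_memory_mb(train_step, *args):
    # Measure the peak GPU memory of one training step.
    torch.cuda.reset_peak_memory_stats()
    train_step(*args)
    return torch.cuda.max_memory_allocated() / 1024**2
\end{verbatim}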
\begin{figure}[t]
\centering
\includegraphics[width = 0.5\textwidth]
{figures/result_memory.pdf}
\caption[Caption]{ Results of memory usage test.
Note that only HA-GAN can generate images of size $256^3$ without memory overflow.
}
\label{fig:result_memory}
\end{figure}
\begin{table}[htp]
\caption{Evaluation of model parameters}
\centering
\begin{adjustbox}{max width=\textwidth}
\centering
\label{tbl:parameter}
\begin{tabular}{lcc}
\toprule
Output Resolution&Memory Usage (MB)&\#Parameters\\
\midrule
$32^3$&2573&74.7M\\
$64^3$&2665&78.7M\\
$128^3$&3167&79.6M\\
$256^3$&5961&79.7M\\
\bottomrule
\end{tabular}
\vspace{-3mm}
\end{adjustbox}
\end{table}
\begin{table}[htp]
\caption{Evaluation of the impact of sub-volume multiplier }
\centering
\begin{adjustbox}{max width=\textwidth}
\centering
\label{tbl:multiplier}
\begin{tabular}{lc}
\toprule
Multiplier Factor&Memory Usage (MB)\\
\midrule
$1 / 8$&5961\\
$1 / 4$&10689\\
$1 /2 $&13185\\
\bottomrule
\end{tabular}
\vspace{-3mm}
\end{adjustbox}
\end{table}
\subsection{Background}
\label{sec:background}
Generative Adversarial Networks (GANs)~\cite{goodfellow2014generative} are widely used to generate realistic-looking images. The training procedure of a GAN corresponds to a two-player game involving a generator $G$ and a discriminator $D$: while $G$ aims to generate realistic-looking images, $D$ tries to discriminate real images from the images synthesized by $G$, so $D$ and $G$ compete with each other.
Let $P_X$ denote the underlying data distribution, and $P_Z$ denote the distribution of the random noise $Z$. The objective of a GAN is then formulated as:
\begin{IEEEeqnarray}{c}
\min_G\max_D\underset{X \sim P_{X}}{\mathbb{E}} [\log D(X)]+\underset{Z\sim P_Z}{\mathbb{E}} [\log(1- D(G(Z)))].
\label{Eq:gan}
\end{IEEEeqnarray}
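For concreteness, one alternating update of this objective can be sketched as follows; this is a minimal sketch using the non-saturating generator loss that is common in practice:
\begin{verbatim}
import torch
import torch.nn.functional as F

def gan_step(G, D, x_real, opt_g, opt_d, latent_dim=1024):
    z = torch.randn(x_real.size(0), latent_dim, device=x_real.device)
    # Discriminator update: maximize log D(X) + log(1 - D(G(Z))).
    d_real, d_fake = D(x_real), D(G(z).detach())
    loss_d = F.binary_cross_entropy_with_logits(
                 d_real, torch.ones_like(d_real)) \
           + F.binary_cross_entropy_with_logits(
                 d_fake, torch.zeros_like(d_fake))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    # Generator update (non-saturating form of min log(1 - D(G(Z)))).
    d_fake = D(G(z))
    loss_g = F.binary_cross_entropy_with_logits(
                 d_fake, torch.ones_like(d_fake))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
\end{verbatim}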
\begin{figure*}[t]
\centering
\includegraphics[width = \textwidth]{figures/gan_loss_v9_yk.pdf}
\caption{ \textbf{Left:} The schematic of the model (the encoder is hidden here to improve clarity). At training time, instead of directly generating the high-resolution full volume, our generator contains two branches for high-resolution sub-volume and low-resolution full-volume generation, respectively. The two branches share the common block $G^A$. A sub-volume selector is used to select a part of the intermediate features for the sub-volume generation.
\textbf{Right:} The schematic of the hierarchical encoder trained with two reconstruction losses, one at the high-resolution sub-volume level (upper right) and another at the low-resolution full-volume level (lower right). The meanings of the notations used can be found in Table~\ref{tbl:notation}. The model adopts a 3D architecture, with details presented in \textbf{Supplementary Material}.}
\label{fig:GAN_loss}
\vspace{-3mm}
\end{figure*}
\begin{figure}[t]
\centering
\includegraphics[width = .45\textwidth]{figures/inference_v3.pdf}
\caption{ Inference with the hierarchical generator and encoder. Since there is no need for gradient storage at inference time, the memory demand is lower and sub-volume selection is no longer needed. We directly forward input through the high-res branch for high-res full image generation and encoding.}
\label{fig:inference}
\vspace{-3mm}
\end{figure}
\subsection{The Hierarchical Structure}
\label{sec:hierarchical}
{\vspace{2mm} \noindent \bf Generator \hspace{2mm}} Our generator has two branches that generate the low-resolution image $\widehat{X}^L$ and a randomly selected sub-volume of the high-resolution image $\widehat{X}^H_r$, where $r$ represents the index of the starting slice of the sub-volume. The two branches share the initial layers $G^A$ and branch off afterwards:
\begin{IEEEeqnarray}{ll}
\widehat{X}^L &= G^L (\ \underbrace{ G^A(Z) }_{A} \ ),\\
\widehat{X}^H_r &= G^H( \ \underbrace{ S^L(G^A(Z);r) }_{A_r} \ ),
\label{equ:A_r}
\end{IEEEeqnarray}
where $G^A(\cdot)$, $G^L(\cdot)$ and $G^H(\cdot)$ denote the common, low-resolution and high-resolution blocks of the generator, respectively. $S^L(\cdot, r)$ is a selector function that returns the sub-volume of the input image starting at slice $r$, where the superscript $L$ indicates that the selection is made at low resolution. The output of this function is fed into $G^H(\cdot)$, which lifts the input to the high resolution. We use $A$ and $A_r$ as short-hand notation for $G^A(Z)$ and $S^L(G^A(Z);r)$, respectively. We let $Z \sim \mathcal{N}( \mathbf{0}, \mathbf{I})$ be the input random noise vector. We let $r$ be the randomly selected index for the starting slice, drawn from a uniform distribution and denoted as $r \sim \mathcal{U}$; i.e., each slice is selected with the same probability. The schematic of the proposed method is shown in Fig.~\ref{fig:GAN_loss}. Note that $\widehat{X}^H_r$ depends on a corresponding sub-volume of $A$, which is $A_r$. Therefore, we feed $A_r$ rather than the complete $A$ into $G^H$ during training, making the model memory-efficient.
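To make the branching concrete, the following sketch (ours, not the released code) shows one training-time forward pass; with $A$ of depth 64 and a $32\times256^2$ high-resolution sub-volume, the slab of $A$ has depth $64 \times 32/256 = 8$, and the matching real sub-volume is cropped at slice $4r$ since the scale factor between $A$ and $X^H$ is $256/64 = 4$:
\begin{verbatim}
import torch

def generator_forward_train(G_A, G_L, G_H, z, sub_depth=8):
    A = G_A(z)                       # shared features, e.g. (B,64,64,64,64)
    x_low = G_L(A)                   # low-resolution full volume (64^3)
    max_start = A.shape[2] - sub_depth
    r = torch.randint(0, max_start + 1, (1,)).item()
    A_r = A[:, :, r:r + sub_depth]   # S^L(G^A(z); r)
    x_high_sub = G_H(A_r)            # high-resolution sub-volume (32x256^2)
    # The matching real sub-volume is S^H(X^H; 4r), keeping both
    # selectors aligned at the same percentile of slices.
    return x_low, x_high_sub, r
\end{verbatim}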
{\vspace{2mm} \noindent \bf Discriminator \hspace{2mm}} Similarly, we define two discriminators $D^H$ and $D^L$ to distinguish a real high-resolution sub-volume $X^H_r$ and a real low-resolution image $X^L$ from the fake ones, respectively. $D^H$ makes sure that the local details in the high-resolution sub-volume look realistic, while $D^L$ ensures that the proper global structure is preserved. Since we feed a sub-volume $S^H(X^H;r)$ rather than the entire image $X^H$ into $D^H$, the memory cost of the model is reduced.
There are two GAN losses, $\mathcal{L}^H_{GAN}$ and $\mathcal{L}^L_{GAN}$, for the high and low resolutions, respectively:
\begin{equation}
\footnotesize
\begin{aligned}
\mathcal{L}^H_{GAN}(G^A,G^H,D^H)
= & \underset{G^H,G^A}{\min} \underset{D^H}{\max} \underset{r\sim \mathcal{U}}{\mathbb{E}}
\left[ \underset{X \sim P_{X}}{\mathbb{E}} [\log D^H( S^H(X^H;r) )]
\right.
\\
& +
\left.
\underset{Z\sim P_Z}{\mathbb{E}} [\log(1- D^H( \widehat{X}^H_r ))]
\right] ,
\end{aligned}
\vspace{-5mm}
\label{equ:L_H}
\end{equation}
\begin{equation}
\footnotesize
\begin{aligned}
\mathcal{L}^L_{GAN}(G^L,G^A,D^L)
= &
\underset{G^L,G^A}{\min} \underset{D^L}{\max}
\underset{X \sim P_{X}}{\mathbb{E}}[\log D^L(X^L)]
\\
& +\underset{Z\sim P_Z}{\mathbb{E}} [\log(1- D^L(\widehat{X}^L))].
\end{aligned}
\end{equation}
Note that the sampler $S^L(\cdot;r)$ in Equation (\ref{equ:A_r}) and the sampler $S^H(\cdot;r)$ in Equation (\ref{equ:L_H}) are synchronized, such that $r$ corresponds to the same percentile of slices in the high- and low-resolution volumes.
{\vspace{2mm} \noindent \bf Inference \hspace{2mm}}
The memory needed to store gradients is the main bottleneck for 3D GAN models; however, gradients are not needed during inference. Therefore, we can directly generate the high-resolution image by feeding $Z$ into $G^A$ and $G^H$ sequentially, \ie $\widehat{X}^H(Z) = G^H( G^A(Z) )$. Note that to generate the entire image during inference, we directly feed the complete feature maps $A = G^A(Z)$, rather than a sub-volume $A_r$, into the convolutional network $G^H$. The idea is illustrated at the top of Fig.~\ref{fig:inference}.
\subsection{Incorporating the Encoder}
\label{sec:encoder}
We also adopt a hierarchical structure for the encoder, defining two encoders $E^H(\cdot)$ and $E^G(\cdot)$ that encode the high-resolution sub-volumes and the entire image, respectively. We partition the high-resolution image $X^H$ into a set of $V$ \emph{non-overlapping} sub-volumes,
\ie $X^H = \texttt{concat}(\{ S^H(X^H, T_v) \}_{v=1}^V)$, where $\texttt{concat}$ represents concatenation, $S^H(\cdot)$ represents the selector function that returns a sub-volume of a high-resolution image, and $T_v$ represents the corresponding starting indices of the non-overlapping partition.
We use $\widehat{A}_v$ to denote the sub-volume-level feature maps for the $v$-th sub-volume, i.e., $\widehat{A}_v = E^H( S^H(X^H; T_v))$. To generate the image-level representation $\widehat{Z}$, we first summarize all sub-volume representations of the image through concatenation, such that $\widehat{A} = \texttt{concat}(\{\widehat{A}_{v}\}_{v=1}^V)$. Then we feed $\widehat{A}$ into the encoder $E^G(\cdot)$ to generate the image-level representation $\widehat{Z}$, i.e.,
$\widehat{Z} = E^G(\widehat{A})$.
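The partition-and-summarize step can be sketched as follows (our illustration; with $256^3$ images and $32\times256^2$ slabs, $V = 8$):
\begin{verbatim}
import torch

def encode(E_H, E_G, x_hr, num_parts=8):
    # Split X^H into V non-overlapping slabs along the slice axis,
    # encode each slab with E^H, concatenate the feature maps along
    # the same axis, then summarize with E^G.
    slabs = torch.chunk(x_hr, num_parts, dim=2)   # {S^H(X^H, T_v)}
    feats = [E_H(s) for s in slabs]               # {A_hat_v}
    A_hat = torch.cat(feats, dim=2)               # concat along depth
    return E_G(A_hat)                             # Z_hat
\end{verbatim}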
In order to obtain optimal $E^H$ and $E^G$, we introduce the following objective functions:
\begin{equation}
\label{Eq:recon_H}
\small
\mathcal{L}^H_{recon}(E^H)=
\min_{E^H}
\underset{X \sim P_{X}, r \sim \mathcal{U} } {\mathbb{E}} \left\lVert S^H(X^H;r) - G^H( \widehat{A}_r )\right\rVert_{1},
\end{equation}
\begin{equation}
\label{Eq:recon_G}
\resizebox{.9\linewidth}{!}{$
\begin{aligned}
\mathcal{L}^G_{recon}(E^G) = & \underset{E^G}{\min}
\underset{X \sim P_{X}}{\mathbb{E}}
\left[
\left\lVert X^L - G^L(G^A(\widehat{Z}))\right\rVert_{1}
\right.
\\
& + \left.
\underset{r \sim \mathcal{U}}{\mathbb{E}}
\left[
\left\lVert S^H(X^H;r)
- G^H(S^L(G^A(\widehat{Z});r) )\right\rVert_{1}
\right]
\right].
\end{aligned}
$}
\end{equation}
Equation~(\ref{Eq:recon_H}) ensures that a randomly selected high-resolution sub-volume $S^H(X^H;r)$ can be reconstructed. Equation~(\ref{Eq:recon_G}) enforces that both the low-resolution image $X^L$ and a randomly selected $S^H(X^H;r)$ can be reconstructed given $\widehat{Z}$. Note that in Equation~(\ref{Eq:recon_H}), the sub-volume is reconstructed from the intermediate feature maps $\widehat{A}_v$, while in the second term of Equation (\ref{Eq:recon_G}), the sub-volume is reconstructed from the latent representation $\widehat{Z}$. In these equations, we use the $\ell_1$ loss for reconstruction because it tends to generate sharper results than the $\ell_2$ loss~\cite{zhu2017unpaired}. The structure of the encoders is illustrated in Fig.~\ref{fig:GAN_loss}.
When optimizing for Equation~(\ref{Eq:recon_H}), we only update $E^H$ while keeping all other parameters fixed. Similarly, when optimizing for Equation~(\ref{Eq:recon_G}), we only update $E^G$. We empirically find that this optimization strategy is memory-efficient and leads to better performance.
{\vspace{2mm} \noindent \bf Inference \hspace{2mm}} In the inference phase, we can get the latent code $\widehat{Z}$ by feeding the sub-volumes of $X^H$ into $E^H$, concatenating the output sub-volume feature maps into $\widehat{A}$ and then feeding the results into $E^G$, \ie
$\widehat{Z} = E^G ( \texttt{concat} ( \{ E^H( S^H(X^H; T_v) ) \}_{v=1}^V ) ) $. The idea is illustrated at the bottom of Fig.~\ref{fig:inference}.
\subsection{Overall Model}
\label{sec:overall}
The model is trained in an end-to-end fashion. The overall loss function is defined as:
\begin{equation}
\begin{aligned}
\mathcal{L} & =
\mathcal{L}^H_{GAN}(G^H,G^A,D^H) +
\mathcal{L}^L_{GAN}(G^L,G^A,D^L)
\\
& + \lambda_1 \mathcal{L}^H_{recon}(E^H) +
\lambda_2 \mathcal{L}^G_{recon}(E^G),
\end{aligned}
\end{equation}
where $\lambda_1$ and $\lambda_2$ control the trade-off between the GAN losses and the reconstruction losses. The optimizations of the generator ($G^H$, $G^L$ and $G^A$), the discriminators ($D^H$, $D^L$), and the encoders ($E^H$, $E^G$) alternate per iteration.
During training, we sample a batch of real images and pass it through the encoder, followed by the generator, to create reconstructed images for minimizing the reconstruction losses. We also sample random noise from a Gaussian distribution and pass it through the generator to create randomly synthesized images for minimizing the GAN adversarial losses. Our overall optimization balances these losses to learn the parameters of the encoder, generator, and discriminator in end-to-end training.
At the inference stage, different kinds of input are used for different tasks. For instance, if the goal is random image synthesis, a random variable sampled from a Gaussian distribution is used as the generator input; if the goal is feature extraction, a real image is used as the encoder input.
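The alternation can be sketched as follows; the three loss callables are placeholders standing for the losses defined above, not functions from any released code:
\begin{verbatim}
def train_iteration(batch, d_loss_fn, g_loss_fn, e_loss_fn,
                    opt_d, opt_g, opt_e):
    # One HA-GAN iteration: each parameter group is updated in turn
    # while the other groups stay fixed.
    for loss_fn, opt in ((d_loss_fn, opt_d),
                         (g_loss_fn, opt_g),
                         (e_loss_fn, opt_e)):
        opt.zero_grad()
        loss = loss_fn(batch)  # L_GAN for D and G, L_recon for E
        loss.backward()
        opt.step()
\end{verbatim}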
\subsection{Implementation Details}
\label{sec:implementation}
We train the proposed HA-GAN for ten epochs; the training and validation curves can be found in the Supplementary Material. We set the learning rates for the generator, encoder, and discriminator to $1\times10^{-4}$, $1\times10^{-4}$, and $4\times10^{-4}$, respectively. We employ $\beta_1 = 0$ and $\beta_2 = 0.999$ in the Adam optimizer. The batch size is set to 4. We set the size of $X^L$ to $64^3$. The size of the randomly selected sub-volume $S^H(X^H;r)$ is defined to be $32 \times 256^2$, where $r$ is randomly selected at the batch level. We let the feature maps $A$ have $64$ channels with a size of $64^3$. The dimension of the latent variable $Z$ is chosen to be 1,024. The trade-off hyper-parameters $\lambda_1$ and $\lambda_2$ are both set to 5. The experiments are performed on two NVIDIA Titan Xp GPUs, each with 12GB of GPU memory. The detailed architecture can be found in \textbf{Supplementary Material}.
\subsection{ GANs for Medical Imaging }
In recent years, researchers have developed GAN-based models for medical images. These models are applied to solve various problems, including image synthesis~\cite{baur2018generating, kitchen2017deep, chuquicusma2018fool}, data augmentation~\cite{frid2018synthetic, shin2018medical},
image reconstruction~\cite{lei2020wasserstein,murugesan2019recon},
modality/style transformation~\cite{zhao2017synthesizing, wolterink2017deep}, segmentation~\cite{ li2017brain, joyce2018deep}, and model explanation ~\cite{singla2019explanation}. However, most of these methods concentrate on generating 2D medical images. In this paper, we focus on solving a more challenging problem, i.e., generating 3D images.
With the prevalence of 3D imaging in medical applications, 3D GAN models have become a popular research topic. Shan et al. \cite{shan20183} proposed a 3D conditional GAN model for low-dose CT denoising. Kudo et al. \cite{simo2019virtual} proposed a 3D GAN model for CT image super-resolution. Jin et al. \cite{jin2019applying} proposed an auto-encoding GAN for generating 3D brain MRI images. Cirillo et al. \cite{cirillo2020vox2vox} proposed a 3D model conditioned on multi-channel 3D brain MR images to generate tumor masks for segmentation. While these methods can generate realistic-looking 3D MRI or CT images, the generated images are limited to sizes of $128\times128\times128$ or below, due to insufficient memory during training.
In contrast, our HA-GAN is a memory-efficient model and can generate 3D images with a size of $256\times256\times256$.
\subsection{Memory-Efficient GANs}
Several works have been proposed to reduce the memory demand of high-resolution 3D image generation. To address the memory challenge, some works adopt a slice-wise~\cite{lei2019mri} or patch-wise~\cite{yu20183d} generation approach. Unfortunately, these methods may introduce artifacts at the intersections between patches/slices because they are generated independently. To remedy this problem, Uzunova et al.~\cite{uzunova2019multi} propose a multi-scale approach that first uses a GAN model to generate a low-resolution version of the image. An additional GAN model is then used to generate higher-resolution image patches conditioned on the previously generated patches of the lower-resolution image. However, this method is still patch-based; the generation of local patches is unaware of the global structure, potentially leading to spatial inconsistency. In addition, the model is not trained in an end-to-end manner, which makes it challenging to incorporate an encoder that learns latent representations of the entire images. In comparison, our proposed HA-GAN is aware of the global structure and can be trained end-to-end, which allows HA-GAN to be paired with an encoder.
\subsection{Representation Learning in Generative Models}
Several existing generative models are fused with an encoder~\cite{diederik2014auto,rosca2017variational,donahue2016adversarial}, which learns meaningful representations of images. These methods are based on the belief that a good generative model that reconstructs realistic data will automatically learn a meaningful representation of it~\cite{chen2016infogan}. A generative model with an encoder can be regarded as a compression algorithm \cite{Townsend2020HiLLoC}. Hence, the model is less likely to suffer from mode collapse, because the decoder is required to reconstruct all samples in the dataset, which is impossible under mode collapse, where only a limited variety of samples is generated~\cite{rosca2017variational}.
Variational autoencoder (VAE) \cite{diederik2014auto} uses an encoder to compress data into a latent space, and a decoder is used to reconstruct the data using the encoded representation. BiGAN \cite{donahue2016adversarial} learns a bidirectional mapping between data space and latent space.
$\alpha$-GAN \cite{rosca2017variational} introduces not only an encoder to the GAN model, but also learns a disentangled representation by implementing a code discriminator, which forces the distribution of the code to be indistinguishable from that of random noise. Variational auto-encoder GAN (VAE-GAN) \cite{larsen2015autoencoding} adds an adversarial loss to the variational evidence lower bound objective.
Despite their success, the methods mentioned above can only analyze 2D images or low-resolution 3D images, for which training an encoder is less memory intensive. In contrast, our proposed HA-GAN is memory efficient and can encode and generate high-resolution 3D images during inference.
\section{Network architecture}
In the tables below, we show the detailed architecture of network modules of HA-GAN, including $G^A$, $G^L$, $G^H$, $D^L$, $D^H$, $E^H$ and $E^G$.
\begin{table}[H]
\centering
\caption{Architecture of the $G^A$ Network}
\label{tbl:arch_ga}
\begin{tabular}{lcc}
\toprule
Layer
&Filter size, stride
&Output size$(C,D,H,W)$
\\
\midrule
Input
& - & 1 $\times$ 1024
\\
Dense
& - & 512$\times$4$\times$4$\times$4
\\
\midrule
Conv3D
& 3$\times$3$\times$3, 1 & 512$\times$4$\times$4$\times$4
\\
GroupNorm+ReLU
& - & 512$\times$4$\times$4$\times$4
\\
Interpolation
& - & 512$\times$8$\times$8$\times$8
\\
\midrule
Conv3D
& 3$\times$3$\times$3, 1 & 512$\times$8$\times$8$\times$8
\\
GroupNorm+ReLU
& - & 512$\times$8$\times$8$\times$8
\\
Interpolation
& - & 512$\times$16$\times$16$\times$16
\\
\midrule
Conv3D
& 3$\times$3$\times$3, 1 & 256$\times$16$\times$16$\times$16
\\
GroupNorm+ReLU
& - & 256$\times$16$\times$16$\times$16
\\
Interpolation
& - & 256$\times$32$\times$32$\times$32
\\
\midrule
Conv3D
& 3$\times$3$\times$3, 1 & 128$\times$32$\times$32$\times$32
\\
GroupNorm+ReLU
& - & 128$\times$32$\times$32$\times$32
\\
Interpolation
& - & 128$\times$64$\times$64$\times$64
\\
\midrule
Conv3D
& 3$\times$3$\times$3, 1 & 64$\times$64$\times$64$\times$64
\\
GroupNorm+ReLU
& - & 64$\times$64$\times$64$\times$64
\\
\bottomrule
\end{tabular}
\end{table}
\begin{table}[H]
\centering
\caption{Architecture of the $G^L$ Network}
\label{tbl:arch_gl}
\begin{tabular}{lcc}
\toprule
Layer
&Filter size, stride
&Output size$(C,D,H,W)$
\\
\midrule
Input
& - & 64$\times$64$\times$64$\times$64
\\
\midrule
Conv3D
& 3$\times$3$\times$3, 1 & 32$\times$64$\times$64$\times$64
\\
GroupNorm+ReLU
& - & 32$\times$64$\times$64$\times$64
\\
\midrule
Conv3D
& 3$\times$3$\times$3, 1 & 16$\times$64$\times$64$\times$64
\\
GroupNorm+ReLU
& - & 16$\times$64$\times$64$\times$64
\\
\midrule
Conv3D
& 3$\times$3$\times$3, 1 & 1$\times$64$\times$64$\times$64
\\
Tanh
& - & 1$\times$64$\times$64$\times$64
\\
\bottomrule
\end{tabular}
\end{table}
\begin{table}[H]
\centering
\caption{Architecture of the $G^H$ Network}
\label{tbl:arch_gh}
\begin{tabular}{lcc}
\toprule
Layer
&Filter size, stride
&Output size$(C,D,H,W)$
\\
\midrule
Input
& - & 64$\times$64$\times$64$\times$64
\\
Interpolation
& - & 64$\times$128$\times$128$\times$128
\\
\midrule
Conv3D
& 3$\times$3$\times$3, 1 & 32$\times$128$\times$128$\times$128
\\
GroupNorm+ReLU
& - & 32$\times$128$\times$128$\times$128
\\
Interpolation
& - & 32$\times$256$\times$256$\times$256
\\
\midrule
Conv3D
& 3$\times$3$\times$3, 1 & 1$\times$256$\times$256$\times$256
\\
Tanh
& - & 1$\times$256$\times$256$\times$256
\\
\bottomrule
\end{tabular}
\end{table}
\begin{table}[H]
\centering
\caption{Architecture of the $E^H$ Network}
\label{tbl:arch_eh}
\begin{tabular}{lcc}
\toprule
Layer
&Filter size, stride
&Output size$(C,D,H,W)$
\\
\midrule
Input
& - & 1$\times$32$\times$256$\times$256
\\
\midrule
Conv3D
& 4$\times$4$\times$4, 2 & 32$\times$16$\times$128$\times$128
\\
GroupNorm+ReLU
& - & 32$\times$16$\times$128$\times$128
\\
\midrule
Conv3D
& 3$\times$3$\times$3, 1 & 32$\times$16$\times$128$\times$128
\\
GroupNorm+ReLU
& - & 32$\times$16$\times$128$\times$128
\\
\midrule
Conv3D
& 4$\times$4$\times$4, 2 & 64$\times$8$\times$64$\times$64
\\
GroupNorm+ReLU
& - & 64$\times$8$\times$64$\times$64
\\
\bottomrule
\end{tabular}
\end{table}
\begin{table}[H]
\centering
\caption{Architecture of the $E^G$ Network}
\label{tbl:arch_el}
\begin{tabular}{lcc}
\toprule
Layer
&Filter size, stride
&Output size$(C,D,H,W)$
\\
\midrule
Input
& - & 64$\times$64$\times$64$\times$64
\\
\midrule
Conv3D
& 4$\times$4$\times$4, 2 & 32$\times$32$\times$32$\times$32
\\
GroupNorm+ReLU
& - & 32$\times$32$\times$32$\times$32
\\
\midrule
Conv3D
& 4$\times$4$\times$4, 2 & 64$\times$16$\times$16$\times$16
\\
GroupNorm+ReLU
& - & 64$\times$16$\times$16$\times$16
\\
\midrule
Conv3D
& 4$\times$4$\times$4, 2 & 128$\times$8$\times$8$\times$8
\\
GroupNorm+ReLU
& - & 128$\times$8$\times$8$\times$8
\\
\midrule
Conv3D
& 4$\times$4$\times$4, 2 & 256$\times$4$\times$4$\times$4
\\
GroupNorm+ReLU
& - & 256$\times$4$\times$4$\times$4
\\
\midrule
Conv3D
& 4$\times$4$\times$4, 1 & 1024$\times$1$\times$1$\times$1
\\
\bottomrule
\end{tabular}
\end{table}
\begin{table}[H]
\centering
\caption{Architecture of the $D^L$ Network}
\label{tbl:arch_dl}
\begin{adjustbox}{max width=0.48\textwidth}
\begin{tabular}{lcc}
\toprule
Layer
&Filter size, stride
&Output size$(C,D,H,W)$
\\
\midrule
Input
& - & 1$\times$64$\times$64$\times$64
\\
\midrule
Conv3D
& 4$\times$4$\times$4, 2 & 32$\times$32$\times$32$\times$32
\\
SpectralNorm+LeakyReLU
& - & 32$\times$32$\times$32$\times$32
\\
\midrule
Conv3D
& 4$\times$4$\times$4, 2 & 64$\times$16$\times$16$\times$16
\\
SpectralNorm+LeakyReLU
& - & 64$\times$16$\times$16$\times$16
\\
\midrule
Conv3D
& 4$\times$4$\times$4, 2 & 128$\times$8$\times$8$\times$8
\\
SpectralNorm+LeakyReLU
& - & 128$\times$8$\times$8$\times$8
\\
\midrule
Conv3D
& 4$\times$4$\times$4, 2 & 256$\times$4$\times$4$\times$4
\\
SpectralNorm+LeakyReLU
& - & 256$\times$4$\times$4$\times$4
\\
\midrule
Conv3D
& 4$\times$4$\times$4, 1 & 1$\times$1$\times$1$\times$1
\\
Reshape
& - & 1
\\
\bottomrule
\end{tabular}
\end{adjustbox}
\end{table}
\begin{table}[H]
\centering
\caption{Architecture of the $D^H$ Network}
\label{tbl:arch_dh}
\begin{adjustbox}{max width=0.48\textwidth}
\begin{tabular}{lcc}
\toprule
Layer
&Filter size, stride
&Output size$(C,D,H,W)$
\\
\midrule
Input
& - & 1$\times$32$\times$256$\times$256
\\
\midrule
Conv3D
& 4$\times$4$\times$4, 2 & 16$\times$16$\times$128$\times$128
\\
SpectralNorm+LeakyReLU
& - & 16$\times$16$\times$128$\times$128
\\
\midrule
Conv3D
& 4$\times$4$\times$4, 2 & 32$\times$8$\times$64$\times$64
\\
SpectralNorm+LeakyReLU
& - & 32$\times$8$\times$64$\times$64
\\
\midrule
Conv3D
& 4$\times$4$\times$4, 2 & 64$\times$4$\times$32$\times$32
\\
SpectralNorm+LeakyReLU
& - & 64$\times$4$\times$32$\times$32
\\
\midrule
Conv3D
& 2$\times$4$\times$4, 2 & 128$\times$2$\times$16$\times$16
\\
SpectralNorm+LeakyReLU
& - & 128$\times$2$\times$16$\times$16
\\
\midrule
Conv3D
& 2$\times$4$\times$4, 1 & 256$\times$1$\times$8$\times$8
\\
SpectralNorm+LeakyReLU
& - & 256$\times$1$\times$8$\times$8
\\
\midrule
Conv3D
& 1$\times$4$\times$4, 1 & 512$\times$1$\times$4$\times$4
\\
SpectralNorm+LeakyReLU
& - & 512$\times$1$\times$4$\times$4
\\
\midrule
Conv3D
& 1$\times$4$\times$4, 1 & 128$\times$1$\times$1$\times$1
\\
SpectralNorm+LeakyReLU
& - & 128$\times$1$\times$1$\times$1
\\
\midrule
Dense
& - & 64
\\
SpectralNorm+LeakyReLU
& - & 64
\\
\midrule
Dense
& - & 32
\\
SpectralNorm+LeakyReLU
& - & 32
\\
\midrule
Dense
& - & 1
\\
\bottomrule
\end{tabular}
\end{adjustbox}
\end{table}
\section{Datasets and Training Details}
For the COPDGene dataset, CT scans are acquired using multi-detector CT scanners (at least 16 detector channels).
Volumetric CT acquisitions are obtained on full inspiration (200mAs). The initial voxel size is typically $0.68mm\times0.68mm\times0.54mm$, and the final voxel size is $1mm^3$. The dataset is available at \url{https://www.ncbi.nlm.nih.gov/projects/gap/cgi-bin/study.cgi?study_id=phs000179.v6.p2}.
For the GSP dataset, all imaging data were collected on matched 3T Tim Trio scanners (Siemens Healthcare, Erlangen, Germany) at Harvard University and Massachusetts General Hospital using the vendor-supplied 12-channel phased-array head coil. Structural data included a high-resolution (1.2mm isotropic) multi-echo T1-weighted magnetization-prepared gradient-echo image. The initial voxel size is $1.2mm^3$, and the final voxel size is $1mm^3$. The dataset is available at \url{https://dataverse.harvard.edu/dataverse/GSP}.
For a fair comparison between the different models, we use the same optimizer (Adam) and the same optimizer hyperparameters (beta and gamma). For the networks, we use the same activation function (ReLU). The networks are trained on the same dataset for the same number of iterations, with learning rates of the same order of magnitude. We follow the settings of~\cite{kwon2019generation} when selecting the hyperparameters for VAE-GAN, WGAN and $\alpha$-GAN.
\section{Segmentation Details}
For region of interest (ROI) segmentation on the CT images in the COPDGene dataset, the fat tissues are thresholded between $-274$ and $-49$ HU. The bone tissues are segmented by thresholding above $300$ HU. The emphysema regions are segmented by thresholding below $-950$ HU within the lung mask. For region of interest segmentation on the MRIs in the GSP dataset, we use FastSurfer~\cite{henschel2020fastsurfer} to segment a representative subset of 10 brain ROIs from the real and generated HR brain images, including cerebral white matter (WM), lateral ventricle (LV), cerebellar white matter (CW), thalamus (TH), caudate (CA), putamen (PU), pallidum (PA), brainstem (BS), hippocampus (HP), and amygdala (AM). Fig. 8 in the main text shows the results.
\section{Training Curve}
In this section, we provide training and validation curves, shown in Figure~\ref{fig:curve_g} and Figure~\ref{fig:curve_fid}.
\begin{figure}[htp]
\centering
\includegraphics[width=0.48\textwidth]{figures/curve_G.pdf}
\caption{Training loss curve of the generator.}
\label{fig:curve_g}
\end{figure}
\begin{figure}[htp]
\centering
\includegraphics[width=0.48\textwidth]{figures/curve_FID_extend_v3.pdf}
\caption{Validation curve during training. FID score is used as the validation metric. The images cropped on left upper lobe shown above demonstrate the quality of random image synthesis during training.}
\label{fig:curve_fid}
\end{figure}
\section{Data Augmentation for Supervised Learning}
In this section, we provide more details about our experiment on data augmentation for supervised learning. In Fig.~\ref{fig:arch_acgan}, we show the architecture of the conditional HA-GAN for data augmentation. Two modifications are made to the original HA-GAN architecture: 1) Besides the latent variable $Z$ sampled from a Gaussian distribution, the generator module $G^A(Z;c)$ also takes a 5-class one-hot code variable $c\sim p_c$ as input, which indicates the class of the generated images. 2) The discriminator outputs both a probability distribution over the real/fake classification (same as the original HA-GAN) and a probability distribution over the class labels $P(C|X)$. In this way, the discriminator also serves as an auxiliary classifier for the GOLD score. The network architecture of the 3D CNN used for supervised training is shown in Table~\ref{tbl:arch_3dgan}.
\begin{figure}[htp]
\centering
\includegraphics[width=0.48\textwidth]{figures/supp/gan_loss_v8_yk_acgan.pdf}
\caption{Architecture of conditional HA-GAN for data augmentation (encoder is hidden here to improve clarity).}
\label{fig:arch_acgan}
\end{figure}
\begin{table}[H]
\centering
\caption{Architecture of the 3D CNN Network for Supervised Training}
\label{tbl:arch_3dgan}
\begin{adjustbox}{max width=0.48\textwidth}
\begin{tabular}{lcc}
\toprule
Layer
&Filter size, stride
&Output size$(C,D,H,W)$
\\
\midrule
Input
& - & 1$\times$128$\times$128$\times$128
\\
\midrule
Conv3D
& 3$\times$3$\times$3, 1 & 8$\times$128$\times$128$\times$128
\\
BatchNorm+ELU
& - & 8$\times$128$\times$128$\times$128
\\
\midrule
Conv3D
& 3$\times$3$\times$3, 2 & 8$\times$64$\times$64$\times$64
\\
BatchNorm+ELU
& - & 8$\times$64$\times$64$\times$64
\\
\midrule
Conv3D
& 3$\times$3$\times$3, 1 & 16$\times$64$\times$64$\times$64
\\
BatchNorm+ELU
& - & 16$\times$64$\times$64$\times$64
\\
Conv3D
& 3$\times$3$\times$3, 1 & 16$\times$64$\times$64$\times$64
\\
BatchNorm+ELU
& - & 16$\times$64$\times$64$\times$64
\\
Conv3D
& 3$\times$3$\times$3, 2 & 16$\times$32$\times$32$\times$32
\\
BatchNorm+ELU
& - & 16$\times$32$\times$32$\times$32
\\
\midrule
Conv3D
& 3$\times$3$\times$3, 1 & 32$\times$32$\times$32$\times$32
\\
BatchNorm+ELU
& - & 32$\times$32$\times$32$\times$32
\\
Conv3D
& 3$\times$3$\times$3, 1 & 32$\times$32$\times$32$\times$32
\\
BatchNorm+ELU
& - & 32$\times$32$\times$32$\times$32
\\
Conv3D
& 3$\times$3$\times$3, 2 & 32$\times$16$\times$16$\times$16
\\
BatchNorm+ELU
& - & 32$\times$16$\times$16$\times$16
\\
\midrule
Conv3D
& 3$\times$3$\times$3, 1 & 64$\times$16$\times$16$\times$16
\\
BatchNorm+ELU
& - & 64$\times$16$\times$16$\times$16
\\
Conv3D
& 3$\times$3$\times$3, 1 & 64$\times$16$\times$16$\times$16
\\
BatchNorm+ELU
& - & 64$\times$16$\times$16$\times$16
\\
Conv3D
& 3$\times$3$\times$3, 2 & 64$\times$8$\times$8$\times$8
\\
BatchNorm+ELU
& - & 64$\times$8$\times$8$\times$8
\\
\midrule
Conv3D
& 3$\times$3$\times$3, 1 & 128$\times$8$\times$8$\times$8
\\
BatchNorm+ELU
& - & 128$\times$8$\times$8$\times$8
\\
\midrule
Conv3D
& 3$\times$3$\times$3, 2 & 128$\times$4$\times$4$\times$4
\\
BatchNorm+ELU
& - & 128$\times$4$\times$4$\times$4
\\
\midrule
AvgPool
& - & 128$\times$1$\times$1$\times$1
\\
Reshape
& - & 128
\\
\midrule
Dense
& - & 5
\\
\bottomrule
\end{tabular}
\end{adjustbox}
\end{table}
\section{HU value calibration}
In this section, we evaluate the HU value calibration quality of HA-GAN trained on the COPDGene dataset. We randomly sampled 100 synthesized images and 100 real images, and plot the distribution of HU values using histograms. The results are shown in Fig.~\ref{fig:copd_calibration}. The HU value distributions of the generated and real images are very similar, which validates the calibration quality. In addition, we perform a two-sample Kolmogorov–Smirnov test between the HU values of the randomly synthesized images and those of the real images. The null hypothesis is that the HU values of the generated and real images are drawn from the same distribution. The resulting p-value is 0.47, so we cannot reject the null hypothesis,
which also validates the HU value calibration quality.
For the \%emph measures, we compare the distributions of LAA-950 between generated and real images with a statistical test. LAA-950 is a widely used clinical descriptor for measuring lung emphysema.
Specifically, we perform a two-sample Kolmogorov–Smirnov test between the LAA-950 values of the randomly synthesized images and those of the real images. The null hypothesis is that the LAA-950 values of the generated and real images are drawn from the same distribution. The resulting p-value is 0.17, so we cannot reject the null hypothesis,
which validates that the \%emph of images generated by HA-GAN is similar to that of real images.
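For reference, the test can be run with SciPy as sketched below; the two arrays are assumed to hold the voxel HU values pooled over the 100 sampled images of each kind, and the file paths are placeholders:
\begin{verbatim}
import numpy as np
from scipy.stats import ks_2samp

hu_real = np.load('hu_real.npy')  # pooled voxel HU values, real images
hu_fake = np.load('hu_fake.npy')  # pooled voxel HU values, generated
stat, p_value = ks_2samp(hu_real, hu_fake)
print(f'KS statistic = {stat:.3f}, p-value = {p_value:.2f}')
\end{verbatim}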
\begin{figure}[htp]
\centering
\includegraphics[width=0.48\textwidth]{figures/COPD_calibration_combined_50.pdf}
\caption{Evaluation for HU value calibration quality for HA-GAN trained on COPDGene dataset. The blue and brown histograms show HU value distribution for generated images and real images respectively. The good overlapping between the two histograms validates the HU value calibration quality.}
\label{fig:copd_calibration}
\end{figure}
\section{Image Super-resolution}
In this section, we provide more evaluation of the image super-resolution experiment. On the COPDGene dataset, we perform a new experiment to evaluate performance on regions of interest (ROIs) rather than the whole field of view. Specifically, we use lungmask~\cite{hofmanninger2020automatic} to segment the lung region from the real CT scans, and calculate the metrics only within the ROI. More experimental details can be found in the Image Super-resolution section of the main text. The results can be found in Table~\ref{tbl:recon_roi} and show that our method achieves competitive results when evaluated on the ROI.
\begin{table}[htp]
\caption{Evaluation for image super-resolution on lung ROI}
\centering
\begin{adjustbox}{max width=\textwidth}
\centering
\label{tbl:recon_roi}
\begin{tabular}{lccc}
\toprule
& SSIM $\uparrow$ & NMSE(\%) $\downarrow$ & PSNR $\uparrow$ \\
\midrule
GAN-CIRCLE&$0.817_{\pm.027}$&$7.23_{\pm2.24}$&$28.5_{\pm.8}$\\
HA-GAN-SR & \bm{$0.838_{\pm.025}$}& \bm{$3.20_{\pm1.09}$}&\bm{$31.8_{\pm.9}$}\\
\bottomrule
\end{tabular}
\vspace{-3mm}
\end{adjustbox}
\end{table}
\section{Embedding Visualization}
To illustrate whether the synthetic images look similar to the real ones, we embed the synthetic and real images into the same space. If the synthetic images are indistinguishable from the real images, then we expect the synthetic and real images to occupy the same region in the embedding space. Following the practice of~\cite{kwon2019generation}, we first use a pretrained 3D medical ResNet model~\cite{chen2019med3d} to extract features for 512 synthetic images from each method. As a reference, we also extract features for the real image samples using the same ResNet model. Then we conduct PCA and tSNE to embed the extracted features into 2-dimensional space for both the COPDGene and GSP datasets.
The PCA results are shown in Fig.~\ref{fig:pca_copd} and Fig.~\ref{fig:pca_gsp}.
The tSNE results are shown in Fig.~\ref{fig:tsne_copd} and Fig.~\ref{fig:tsne_gsp}.
\begin{figure}[t]
\centering
\begin{subfigure}{.35\textwidth}
\includegraphics[width = 1.\textwidth]
{figures/PCA_COPD_v2.pdf}
\caption[Caption]{ PCA visualization on COPDGene dataset.}
\label{fig:pca_copd}
\end{subfigure}
\\
\begin{subfigure}{.35\textwidth}
\includegraphics[width = 1.\textwidth]
{figures/PCA_GSP_v2.pdf}
\caption[Caption]{ PCA visualization on GSP dataset. }
\label{fig:pca_gsp}
\end{subfigure}
\caption{Comparison of the embeddings of different models. We embed the features extracted from synthesized images into 2-dimensional space with PCA. The ellipses are fitted to the scatter of each model for better visualization. The figures show that the embedding region of HA-GAN overlaps most with that of the real images, compared to the baselines.}
\end{figure}
\begin{figure}[htp]
\centering
\begin{subfigure}{0.35\textwidth}
\centering
\includegraphics[width = 1.\textwidth]
{figures/TSNE_COPD.pdf}
\caption[Caption]{tSNE visualization on COPDGene dataset.}
\label{fig:tsne_copd}
\end{subfigure}
\begin{subfigure}{0.35\textwidth}
\centering
\includegraphics[width = 1.\textwidth]
{figures/TSNE_GSP.pdf}
\caption[Caption]{tSNE visualization on GSP dataset.}
\label{fig:tsne_gsp}
\end{subfigure}
\caption{Comparison of the embeddings of different models. We embed the features extracted from synthesized images into 2-dimensional space with tSNE. The ellipses are fitted to the scatter of each model for better visualization. The figures show that the embedding region of HA-GAN overlaps most with that of the real images, compared to the baselines.
}
\end{figure}
\section{Computational Efficiency}
We also compare the computational efficiency of our HA-GAN model with the baseline models. More specifically, we measure the number of iterations per second during training. The results are shown in Table~\ref{tbl:computation}. For small image sizes ($64^3$ or lower), the proposed HA-GAN is comparable with the baseline models in terms of computational efficiency. For large image sizes ($128^3$ or higher), HA-GAN is more computationally efficient than the baselines.
\begin{table}[H]
\centering
\caption{Training speed (iter/s) for different models}
\label{tbl:computation}
\begin{tabular}{lccccc}
\toprule
Size
&WGAN&VAE-GAN&PGGAN&$\alpha$-GAN&HA-GAN
\\
\midrule
$32^3$
& \textbf{21.0} & 14.5 & 5.6 & 8.2 & 14.0
\\
$64^3$
& \textbf{9.5} & 4.5 & 2.5 & 5.1 & 8.3
\\
$128^3$
& 2.0 & 1.0 & 1.3 & 1.6 & \textbf{3.8}
\\
$256^3$
& NA & NA & NA & NA & \textbf{1.1}
\\
\bottomrule
\multicolumn{6}{p{.45\textwidth}}{NA indicates that the model cannot be trained at the corresponding resolution due to the memory limit.}
\end{tabular}
\end{table}
\section{Samples of generated images}
In Fig.~\ref{fig:result_synthesis_copd}, we show samples of randomly generated images on COPDGene dataset. In Fig.~\ref{fig:result_synthesis_gsp}, we show samples of randomly generated images on GSP dataset. In Fig.~\ref{fig:result_synthesis_slice}, we show slices of randomly generated images. In Fig.~\ref{fig:result_interpolation}, we show results of images generated by latent variable interpolation along a random direction.
\begin{figure*}[t]
\centering
\includegraphics[width = .78\textwidth]
{figures/supp/synthesis_copd.png}
\vspace{-1mm}
\caption{ Comparison of the quality of the \emph{randomly generated} images on COPDGene dataset. The first two columns are axial and coronal slices. The last two columns are zoom-in regions. The first three columns use the HU range of $[-1024,600]$, the last column uses $[-1024,-250]$ to highlight the lung details. }
\vspace{-1mm}
\label{fig:result_synthesis_copd}
\end{figure*}
\begin{figure*}[t]
\centering
\includegraphics[width = .8\textwidth]
{figures/supp/synthesis_brain.png}
\caption[Caption]{ Comparison of the quality of the \emph{randomly generated} images on the GSP dataset. The first three columns are axial, sagittal and coronal slices, respectively. The last column shows the zoom-in region. }
\vspace{1mm}
\label{fig:result_synthesis_gsp}
\end{figure*}
\begin{figure*}[t]
\centering
\includegraphics[width = .82\textwidth]
{figures/supp/synthesis_slice_v2.png}
\caption[Caption]{ Slices of \emph{randomly generated} images. }
\label{fig:result_synthesis_slice}
\end{figure*}
\begin{figure*}[t]
\centering
\includegraphics[width = .8\textwidth]
{figures/supp/interpolation_v2.png}
\caption{ Samples of latent-space interpolation on the COPDGene and GSP datasets. Each row corresponds to a linear interpolation along a random direction in the latent space.}
\label{fig:result_interpolation}
\end{figure*}
\clearpage
\newpage
|
2,869,038,153,976 | arxiv | \section{Introduction}
Generalized planning, where a single plan works for a set of similar planning problems, has received continual attention in the AI community \cite{2008Learning,2011Generalized,2018Features,AguasCJ19,2019Illanes}. For example, the general solution ``while the block $A$ is not clear, pick the top block above $A$ and place it on the table'' achieves the goal $clear(A)$ no matter how many blocks the tower has. Computing general solutions with correctness guarantees has long been a key issue in generalized planning. However, so far only limited correctness results have been obtained, mainly for
the so-called one-dimensional problems \cite{HuL10} and extended LL-domains \cite{SrivastavaIZ11}.
Abstractions are widely used to solve generalized planning problems.
The idea is to develop an abstract model of the problem, which is easier to solve, and exploit a solution in the abstract model to find a solution in the concrete model. A popular kind of abstract model for generalized planning is the qualitative numerical planning (QNP) problem, introduced by \cite{2011Qualitative}: they showed that QNPs are decidable and proposed a generate-and-test method to solve QNPs.
\citeauthor{2018Features} \shortcite{2018Features} proposed solving a generalized classical planning problem by abstracting it into a QNP; they showed that if the abstraction is sound, then a solution to the QNP is also a solution to the original problem. Here, soundness of the abstraction is the key to guaranteeing correctness of solutions to the original problem.
The automatic generation of sound abstractions for generalized planning problems has attracted the attention of researchers in recent years. \citeauthor{BonetFG19} \shortcite{BonetFG19} proposed learning a QNP abstraction for a generalized planning problem from a sample set of instances; however, the abstraction is guaranteed to be sound only for the sample instances.
Further, \citeauthor{Bonet19} \shortcite{Bonet19} showed how to obtain first-order formulas that define a set of instances on which the abstraction is guaranteed to be sound.
\citeauthor{2019Illanes} \shortcite{2019Illanes} considered solving a class of generalized planning problems by automatically deriving a sound QNP abstraction from an instance of the problem, introducing a counter for each set of indistinguishable objects; however, this class of problems is rather restricted.
Recently, \citeauthor{2017Abstraction} \shortcite{2017Abstraction} proposed an agent abstraction framework
based on the situation calculus \cite{Reiter01} and Golog \cite{1997GOLOG}. They related a high-level action theory to a low-level action theory by the notion of a refinement mapping, which specifies
how each high-level action is implemented by a low-level
Golog program and how each high-level fluent can
be translated into a low-level formula.
Based on their work,
\citeauthor{Cui21} \shortcite{Cui21} proposed a uniform abstraction framework for generalized planning.
They formalized a generalized planning problem as a triple of a basic action theory, a trajectory constraint, and a goal.
They gave model-theoretic definitions of sound/complete
abstractions of a generalized planning problem,
and showed that solutions to a
generalized planning problem are nicely related to those of its
sound/complete abstractions.
In particular, the refinement of any solution to a sound abstraction is a solution to the original problem.
The significance of the research line initiated by \citeauthor{2018Features} \shortcite{2018Features}
is that abstract problems are easier to solve, and soundness of abstractions together with correctness of high-level solutions guarantees correctness of low-level solutions.
In this paper, based on Cui et al.'s work, we explore automatic verification of sound abstractions for generalized planning.
First of all, we give a proof-theoretic characterization of sound abstractions for generalized planning in the situation calculus. Based on the characterization, we give a sufficient condition for sound abstractions which is first-order verifiable.
To verify the condition with first-order theorem provers, we exploit universal and existential extensions of regression, and develop methods to handle counting and transitive closure.
Using the SMT solver Z3, we implemented a verification system and experimented with 7 domains: 5 from the literature and 2 of our own. Experimental results showed that our system was able to successfully verify soundness of abstractions for all 7 domains within seconds.
\section{Preliminaries}
\subsection{Situation Calculus and Golog}
The situation calculus \cite{Reiter01} is a many-sorted first-order language with some second-order ingredients suitable for describing dynamic worlds. There are three disjoint sorts: $action$ for actions, $situation$ for situations, and $object$ for everything else. The language also has the following components: a situation constant $S_{0}$ denoting the initial situation; a binary function $do(a,s)$ denoting the successor situation to $s$ resulting from performing action $a$; a binary relation $Poss(a,s)$ indicating that action $a$ is possible in situation $s$; a binary relation $s\sqsubseteq s'$, meaning situation $s$ is a sub-history of $s'$; a set of relational (functional) fluents, {\it i.e.}, predicates (functions) taking a situation term as their last argument. A formula is uniform in $s$ if it does not mention any situation term other than $s$. We call a formula with all situation arguments eliminated a situation-suppressed formula. For a situation-suppressed formula $\phi$, we use $\phi[s]$ to denote the formula obtained from $\phi$ by restoring $s$ as the situation arguments to all fluents. A situation $s$ is executable if it is possible to perform the actions in $s$ one by one: $Exec(s)\doteq \forall a,s'.do(a,s')\sqsubseteq s\supset Poss(a,s')$.
In the situation calculus, a particular domain of application can be specified by a basic action theory (BAT) of the form
\begin{center}
$\mathcal{D}=\Sigma \cup \mathcal{D}_{ap} \cup \mathcal{D}_{ss}\cup \mathcal{D}_{una}\cup \mathcal{D}_{S_{0}}$,
\end{center}
where $\Sigma$ is the set of foundational axioms for situations; $\mathcal{D}_{ap}$ is a set of action precondition axioms, one for each action function $A$, of the form $Poss(A(\vec{x}),s)\equiv \Pi_A(\vec{x},s)$, where $\Pi_A(\vec{x},s)$ is uniform in $s$; $\mathcal{D}_{ss}$ is a set of successor state axioms (SSAs), one for each relational fluent symbol $P$, of the form $P(\vec{x},do(a,s))\equiv \phi_{P}(\vec{x},a,s)$, and one for each functional fluent symbol $f$, of the form $f(\vec{x},do(a,s))=y\equiv \psi_{f}(\vec{x},y,a,s)$, where $\phi_{P}(\vec{x},a,s)$ and $\psi_{f}(\vec{x},y,a,s)$ are uniform in $s$; $\mathcal{D}_{una}$ is the set of unique name axioms for actions; and $\mathcal{D}_{S_{0}}$ is the initial knowledge base, stating facts about $S_{0}$.
\citeauthor{1997GOLOG} \shortcite{1997GOLOG} introduced a high-level programming language Golog with the following syntax:
\begin{center}
$\delta ::= \alpha |\ \varphi ?\ |\ \delta_{1};\delta_{2}\ |\ \delta_{1}|\delta_{2}\ |\ \pi x.\delta\ |\ \delta^{*}$,
\end{center}
where $\alpha$ is an action term; $\varphi$ is a situation-suppressed formula and $\varphi ?$ tests whether $\varphi$ holds; program $\delta_{1};\delta_{2}$ represents the sequential execution of $\delta_{1}$ and $\delta_{2}$; program $\delta_{1}|\delta_{2}$ denotes the non-deterministic choice between $\delta_{1}$ and $\delta_{2}$; program $\pi x.\delta$ denotes the non-deterministic choice of a value for parameter $x$ in $\delta$; program $\delta^{*}$ means executing program $\delta$ for a non-deterministic number of times.
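As a small illustration (a sketch only, written with the blocks-world actions $unstack(x,y)$ and $mt(x)$ of Example~\ref{eg-clearing a block} below), the informal plan ``while the block $A$ is not clear, pick the top block above $A$ and place it on the table'' can be expressed as the Golog program
\begin{center}
$\delta_{clear}\doteq (\neg clear(A)?;\pi x.\pi y.(unstack(x,y);mt(x)))^{*};clear(A)?$.
\end{center}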
Golog has two kinds of semantics \cite{ConGOLOG}: transition semantics and evaluation semantics. In transition semantics, a configuration of a Golog program is a pair $(\delta, s)$ of a situation $s$ and a program $\delta$ remaining to be executed. The predicate $Trans(\delta,s,\delta', s')$ means that there is a transition from configuration $(\delta,s)$ to $(\delta', s')$ in one elementary step. The predicate $Final(\delta,s)$ means that the configuration $(\delta,s)$ is a final one, which holds if program $\delta$ may legally terminate in situation $s$. In evaluation semantics, the predicate $Do(\delta, s, s')$ means that executing the program $\delta$ in situation $s$ will terminate in a situation $s'$. Do can be defined with $Trans$ and $Final$ as follows:
$ Do(\delta,s,s')\doteq \exists \delta'. Trans^{*}(\delta,s,\delta',s')\land Final(\delta',s'),$
where $Trans^{*}$ denotes the transitive closure of $Trans$.
\subsection{Regression and Its Extensions}
Regression is an important computational mechanism for reasoning about deterministic actions and their effects in the situation calculus \cite{Reiter01}. The following is the definition of a one-step regression operator:
\begin{definition}\label{def_reg}\rm
Given a BAT $\mathcal{D}$ and a formula $\phi$, we use $\mathcal{R}_{\mathcal{D}}[\phi]$ to denote the formula obtained from $\phi$ by the following steps: for each term $f(\vec{t}, do(\alpha, \sigma))$, replace $\phi$ with $\exists y. \psi_{f}(\vec{t},y,\alpha,\sigma)\land \phi[f(\vec{t}, do(\alpha,\sigma))/y]$\footnote{$\phi[t'/t]$ denotes the formula obtained from $\phi$ by replacing all occurrences of $t'$ in $\phi$ by $t$.}; replace each atom $P(\vec{t}, do(\alpha, \sigma))$ with $\phi_P(\vec{t}, \alpha, \sigma)$; replace each precondition atom $Poss(A(\vec{t}), \sigma)$ with $\Pi_{A}(\vec{t}, \sigma)$; and further simplify the result with $\mathcal{D}_{una}$.
\end{definition}
\begin{prop}
\label{prop2} $\mathcal{D} \models \phi \equiv \mathcal{R}_{\mathcal{D}}[\phi]$.
\end{prop}
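For instance, anticipating the blocks world of Example~\ref{eg-clearing a block} below (a minimal illustration based on its SSA for $on$), one regression step gives
\begin{center}
$\mathcal{R}_{\mathcal{D}}[on(A,B,do(mt(C),s))]=on(A,B,s)\land \neg\, mt(C)=unstack(A,B)$,
\end{center}
and since $\mathcal{D}_{una}$ entails $mt(C)\neq unstack(A,B)$, the result simplifies to $on(A,B,s)$.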
Luo et al. \shortcite{2020Forgetting} presented the existentially extended regression: the notation $\mathcal{R}^{E}[\phi(s),\delta]$ denotes a state formula expressing that some execution of program $\delta$ starting from $s$ makes $\phi$ hold.
\begin{definition}\rm\label{existentially-extended-regression}
Given a situation-suppressed formula $\phi$ and a Golog program $\delta$, the extended regression $\mathcal{R}^{E}[\phi(s),\delta]$ can be inductively defined as follows:
\begin{itemize}
\item $\mathcal{R}^{E}[\phi(s),\alpha]= \mathcal{R}_{\mathcal{D}}[Poss(\alpha,s)\land \phi(do(\alpha,s))]$,
\item $\mathcal{R}^{E}[\phi(s),\psi?] = \psi[s]\land \phi(s)$,
\item $\mathcal{R}^{E}[\phi(s),\delta_{1};\delta_{2}] = \mathcal{R}^{E}[\mathcal{R}^{E}[\phi(s),\delta_{2}],\delta_{1}]$,
\item $\mathcal{R}^{E}[\phi(s),\delta_{1}|\delta_{2}] = \mathcal{R}^{E}[\phi(s),\delta_{1}]\lor \mathcal{R}^{E}[\phi(s),\delta_{2}]$,
\item $\mathcal{R}^{E}[\phi(s),(\pi x)\delta(x)] = (\exists x)\mathcal{R}^{E}[\phi(s),\delta(x)]$.
\end{itemize}
\end{definition}
\begin{prop} \label{prop_Eregression}
Given a basic action theory $\mathcal{D}$, a Golog program $\delta$ and a situation-suppressed formula $\phi$, we have:
\begin{center}
$\mathcal{D} \models \mathcal{R}^{E}[\phi(s), \delta] \equiv \exists s'.Do(\delta, s, s') \wedge \phi[s'].$
\end{center}
\end{prop}
Li and Liu \shortcite{LiL15a} presented the universally extended regression: the notation $\mathcal{R}^{U}[\phi(s),\delta]$ denotes a state formula expressing that every execution of program $\delta$ starting from $s$ makes $\phi$ hold. To get the definition of universally extended regression, one can replace the symbols $\mathcal{R}^{E}$, $\land$, $\lor$, and $\exists$ in Definition \ref{existentially-extended-regression} with $\mathcal{R}^{U}$, $\supset$, $\land$, and $\forall$, respectively.
\begin{prop} \label{prop_Uregression}
Given a basic action theory $\mathcal{D}$, a Golog program $\delta$ and a situation-suppressed formula $\phi$, we have:
\begin{center}
$\mathcal{D} \models \mathcal{R}^{U}[\phi(s),\delta]\equiv \forall s'. [Do(\delta, s, s')\supset \phi(s')].$
\end{center}
\end{prop}
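To contrast the two extensions, consider the program $\pi x.\,mt(x)$ and the goal $ontable(A)$ from the blocks world of Example~\ref{eg-clearing a block} below (again only a sketch). By the definitions,
\begin{center}
$\mathcal{R}^{E}[ontable(A,s),\pi x.\,mt(x)]=\exists x.\mathcal{R}_{\mathcal{D}}[Poss(mt(x),s)\land ontable(A,do(mt(x),s))]$,
\end{center}
which holds iff some executable choice of $x$ achieves the goal, whereas $\mathcal{R}^{U}[ontable(A,s),\pi x.\,mt(x)]=\forall x.\mathcal{R}_{\mathcal{D}}[Poss(mt(x),s)\supset ontable(A,do(mt(x),s))]$, which holds iff every executable choice of $x$ achieves it.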
\subsection{Generalized Planning Abstraction Framework}
The situation calculus cannot express termination, counting, transitive closure, or non-deterministic actions. \citeauthor{Cui21} \shortcite{Cui21} extended the situation calculus in these four aspects: to represent the property of termination, following \cite{2004Representing}, they use the situation calculus with infinite histories; to represent planning with non-deterministic actions, they treat a non-deterministic action as a non-deterministic program; to extend the situation calculus with counting, following the logic FOCN \cite{Kuske2017First}, they introduce counting terms of the form $\# \bar{y}.\varphi$, meaning the number of tuples $\bar{y}$ satisfying formula $\varphi$; since transitive closure is often used to define counting terms, following transitive closure logic \cite{transclosure}, they introduce the notation $[TC_{\bar{x},\bar{y}} \varphi](\bar{u},\bar{v})$, where $\varphi(\bar{x},\bar{y})$ is a formula with $2k$ free variables and $\bar{u}$ and $\bar{v}$ are two $k$-tuples of terms, which says that the pair $(\bar{u},\bar{v})$ is contained in the reflexive transitive closure of the binary relation on $k$-tuples defined by $\varphi$.
In the following, we introduce the generalized planning abstraction framework proposed in \cite{Cui21}.
\begin{definition} \rm
A generalized planning problem is a triple $\mathcal{G}=\langle \mathcal{D},C,G \rangle$, where $\mathcal{D}$ is a BAT, $C$ is a trajectory constraint, i.e., a situation calculus formula with a free variable ranging over infinite histories, and $G$ is a goal condition.
\end{definition}
In the presence of non-deterministic actions, solutions to planning problems are programs whose executions under the given trajectory constraint are guaranteed to terminate and achieve the goal.
\begin{example}\rm \label{eg-clearing a block}
In the blocks world, an agent can perform two kinds of actions: $mt(x)$ (move $x$ to the table, provided $x$ is being held) and $unstack(x,y)$ (unstack $x$ from $y$, provided $x$ is clear and $x$ is on $y$). There are four fluents: $clear(x)$, $ontable(x)$, $on(x,y)$, and $holding(x)$. In this problem, the trajectory constraint $C$ is $\top$, the goal condition $G$ is $clear(A)$, and some example axioms from $\mathcal{D}$ are as follows:\\
\textbf{Precondition Axioms}:\\
\vspace{1mm}
\hspace*{0em}$Poss(mt(x),s) \equiv clear(x,s) \wedge \neg ontable(x,s) $.\\
\textbf{Successor State Axioms}:\\
\vspace{1mm}
\hspace*{0em} $on(x,y,do(a,s)) \equiv on(x,y,s) \wedge \neg a=unstack(x, y)$.\\
\textbf{Initial Situation Axiom}:\\
\hspace*{0.55em}$\exists x.on^{+}(x,A)\land ontable(A)\land \neg holding(x),$ where $on^{+}(x,A)$ is a concise notation for a transitive closure formula meaning that block $x$ is above block $A$.
\end{example}
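To preview the abstraction machinery introduced next (a sketch only, not part of the formal development), this problem admits a natural QNP-style abstraction: introduce a HL numerical fluent $n$ intended to count the blocks above $A$ and refine it by the counting term $\# x.\,on^{+}(x,A)$, together with a HL action whose refinement unstacks the topmost block above $A$ and moves it to the table, thereby decreasing $n$ by one; the HL goal $n=0$ then maps to $\# x.\,on^{+}(x,A)=0$, which entails $clear(A)$.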
Abstractions for generalized planning problems are specified by the following notion of refinement mapping:
\begin{definition}\rm \label{Ext-Refinement-Mapping}
A function $m$ is a refinement mapping from $\mathcal{G}_{h}=\langle \mathcal{D}_{h},C_{h},G_{h} \rangle$ to $\mathcal{G}_{l}=\langle \mathcal{D}_{l},C_{l},G_{l} \rangle$ if
for each HL deterministic or non-deterministic action type $A$, $m(A(\vec{x}))=\delta_{A}(\vec{x})$, where $\delta_{A}(\vec{x})$ is a LL program;
for each HL relational fluent $P$, $m(P(\vec{x}))=\phi_{P}(\vec{x})$, where $\phi_{P}(\vec{x})$ is a LL situation-suppressed formula;
for each HL functional fluent $f$, $m(f(\vec{x}))=\tau_f(\vec{x})$, where $\tau_f(\vec{x})$ is a LL (counting) term.
\end{definition}
Given a refinement mapping $m$, they introduced an isomorphism relation, called $m\text{-}isomorphic$, between a HL and a LL situation as follows:
\begin{definition}\rm\label{df-homomorphic}
Given a refinement mapping $m$, a situation $s_{h}$ of a HL model $M_{h}$ is $m$-isomorphic to a situation $s_{l}$ in a LL model $M_{l}$, written $s_{h}\sim_{m}s_{l}$, if: for any HL relational fluent $P$, and variable assignment $v$, we have
$M_{h}, v[s/s_{h}] \models P(\vec{x},s)$ iff $M_{l}, v[ s/s_{l}] \models m(P)(\vec{x}, s)$;
for any HL functional fluent $f$, variable assignment $v$, we have
$M_{h}, v[s/s_{h}] \models f(\vec{x},s)=y$ iff $M_{l}, v[s/s_{l}] \models m(f)(\vec{x},s)=y$.
\end{definition}
To relate HL and LL models, they defined two relations: $m$-simulation and $m$-back-simulation. Intuitively, simulation means that whenever a refinement of a HL action can occur, so can the HL action; back-simulation means the other direction. Here we only present the definition of the $m$-simulation relation; the $m$-back-simulation relation can be defined symmetrically. In the following, $\Delta^{M}_S$ denotes the situation domain of $M$; $S_0^{M}$ stands for the initial situation of $M$; the notation $\textit{Term}(\delta,s,C)$ means that, starting in situation $s$, program $\delta$ terminates under constraint $C$:
\begin{equation*}
\setlength{\abovedisplayskip}{1mm}
\setlength{\belowdisplayskip}{1mm}
\hspace{0.35em}\textit{Term}(\delta,s,C)\doteq \neg \exists h.C(h)\land \forall s' \sqsubset h\exists \delta'.Trans^{*}(\delta, s, \delta', s').
\end{equation*}
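For instance, for the program $\delta_{clear}$ sketched after the Golog syntax in the Preliminaries, $\textit{Term}(\delta_{clear},s,\top)$ holds in every executable situation of Example~\ref{eg-clearing a block}: by the SSA for $on$, no action ever makes $on$ true, so each loop iteration, which executes some $unstack(x,y)$, strictly decreases the counting term $\# x,y.\,on(x,y)$, and hence no infinite execution exists (a standard well-foundedness argument, given here only for intuition).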
\begin{definition}\rm
A relation $B\subseteq \Delta^{M_{h}}_{S}\times \Delta^{M_{l}}_{S}$ is an $m$-simulation relation between $M_{h}$ and $M_{l}$, if $\langle S^{M_{h}}_{0}, S^{M_{l}}_{0} \rangle \in B$ and the following hold: (1) $\langle s_{h}, s_{l}\rangle \in B$ implies that: $s_{h}\sim_{m}s_{l}$; for any HL action type $A$, and variable assignment $v$, $M_{l},v[s/s_{l}]\models \textit{Term}(m(A(\vec{x})),s,C_l)$, and if there is a situation $s'_{l}$ s.t. $M_{l},v[s/s_{l},s'/s'_{l}]\models Do(m(A(\vec{x})),s,s')$, then there is a situation $s'_{h}$ s.t. $M_{h},v[s/s_{h},s'/s'_{h}]\models Do(A(\vec{x}),s,s')$ and $\langle s'_{h},s'_{l}\rangle \in B$. (2) For any infinite HL action sequence $\sigma$, if there is an infinite history in $M_l$ generated by $m(\sigma)$ and satisfying $C_l$, then there is an infinite history in $M_h$ generated by $\sigma$ and satisfying $C_h$.
\end{definition}
Based on the notions above, they defined sound/complete abstractions on the model and theory levels. At both levels, sound abstraction means that HL behavior entails LL behavior, and complete abstraction means the other direction.
\begin{definition}\rm \label{model-level-sound-abstraction}
$M_{h}$ is a {\em sound $m$-abstraction} of $M_{l}$, if
there is an $m$-back-simulation relation $B$ between $M_{h}$ and $M_{l}$.
\end{definition}
\begin{definition} \rm \label{model-level-complete-abstraction}
$M_{h}$ is a {\em complete $m$-abstraction} of $M_{l}$, if there is an $m$-simulation relation $B$ between $M_{h}$ and $M_{l}$.
\end{definition}
On the theory level, for sound abstraction, they defined two versions (complete abstractions can be defined symmetrically and are omitted here):
\begin{definition} \rm \label{weak-sound}
$\mathcal{G}_{h}$ is a weak sound $m$-abstraction of $\mathcal{G}_{l}$, if for any model $M_{l}$ of $\mathcal{D}_{l}$, there is a model $M_{h}$ of $\mathcal{D}_{h}$ such that: (1) $M_{h}$ is a sound $m$-abstraction of $M_{l}$ via $B$; (2) for any situations $s_h$ in $M_{h}$ and $s_l$ in $M_{l}$, if $\langle s_{h}, s_{l}\rangle \in B$ and $M_h,v[s_h/s] \models G_h[s]$, then $M_l,v[s_l/s] \models G_l[s]$.
\end{definition}
\begin{definition} \rm \label{soundabs_TL}
$\mathcal{G}_{h}$ is a sound $m$-abstraction of $\mathcal{G}_{l}$, if it is a weak sound $m$-abstraction of $\mathcal{G}_{l}$, and
for any model $M_{l}$ of $\mathcal{D}_{l}$, there is a model $M_{h}$ of $\mathcal{D}_{h}$ such that: (1) $M_{h}$ is a complete $m$-abstraction of $M_{l}$ via $B$; (2) for any situations $s_h$ in $M_{h}$ and $s_l$ in $M_{l}$, if $\langle s_{h}, s_{l}\rangle \in B$ and $M_h,v[s_h/s] \models G_h[s]$, then $M_l,v[s_l/s] \models G_l[s]$.
\end{definition}
\section{Proof-Theoretic Characterization}
In this section, we give a proof-theoretic characterization for sound abstractions for generalized planning (g-planning).
First of all, we introduce some notations and conventions. We define the program of doing any HL action sequence
and its refinement as follows:
\begin{equation*}
\setlength{\abovedisplayskip}{1mm}
\setlength{\belowdisplayskip}{1mm}
anyhlas\doteq (|_{A\in \mathcal{A}_{h} }\ \pi \vec{x}.A(\vec{x}))^{*}, \thinspace anyllps\doteq m(anyhlas).
\end{equation*}
We call a situation $s$ such that $Do(anyllps, S_0, s)$ holds an executable refinement of a HL situation.
The notation $\textit{Infexe}(\delta, h, C)$ means $h$ is an infinite execution of program $\delta$ satisfying trajectory constraint $C$:
\begin{equation*}
\setlength{\abovedisplayskip}{1mm}
\setlength{\belowdisplayskip}{1mm}
\textit{Infexe}(\delta,h,C)\doteq C(h)\land \forall s' \sqsubset h\exists \delta'.\ Trans^{*}(\delta, S_0, \delta', s').
\end{equation*}
We introduce an abbreviation $R(s,s')$, which means that situations $s$ and $s'$ result from executing the refinement of the same HL action sequence:
\begingroup
\addtolength{\jot}{-1mm}
\begin{flalign*}
\setlength{\abovedisplayskip}{1mm}
\setlength{\belowdisplayskip}{1mm}
&\hspace{0.35em} R(s,s')\doteq \forall P. \big[P(S_{0},S_{0})\land \\
&\hspace*{1.35em} \mathsmaller{\bigwedge}_{A\in \mathcal{A}_{h}} \forall \vec{x},s_{1},s_{2},s'_{1},s'_{2}. (P(s_{1},s_{2})\land Do(m(A(\vec{x})),s_{1},s'_{1}) \\
&\hspace*{4.35em}\land Do(m(A(\vec{x})),s_{2},s'_{2})\supset P(s'_{1},s'_{2}))\big]\supset P(s,s').
\end{flalign*}
\endgroup
Let $\phi$ be a HL formula uniform in a situation. We use $m(\phi)$ to denote the formula resulting from replacing each high-level symbol in $\phi$ with its LL definition. We now define $m(C)$ for a HL trajectory constraint $C$. For this purpose, we first define a normal form for trajectory constraints.
\begin{definition} \rm
Let $C$ be a trajectory constraint. We say that $C$ is in normal form if $C$ contains no occurrence of action variables or $Poss$, and any appearance of $do$ must be in the form of $s'=do(A(\vec{t}),s)$, where $s$ and $s'$ are variables.
\end{definition}
It is easy to prove that any trajectory constraint can be converted to an equivalent one in normal form.
\begin{definition} \rm \label{refinement_of_TC}
Let $C$ be a HL trajectory constraint. We first convert it into an equivalent one $C'$ in normal form. We let $m(C)$ denote the LL constraint obtained from $C'$ as follows: first replace any appearance of $\exists s$ with $\exists s. Do(anyllps, S_0, s)$, then replace any appearance of $s'=do(A(\vec{t}),s)$ with $Do(m(A(\vec{t})), s, s')$, and finally replace each high-level symbol with its LL definition.
\end{definition}
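As a simple illustration (a sketch, reading ``any appearance of $\exists s$'' as applying to every situation quantifier), the HL constraint $\exists s,s'.\ s'=do(A(\vec{t}),s)$ is already in normal form, and Definition \ref{refinement_of_TC} maps it to
\begin{center}
$\exists s.\,Do(anyllps,S_{0},s)\land \exists s'.\,Do(anyllps,S_{0},s')\land Do(m(A(\vec{t})),s,s')$.
\end{center}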
We now extend the $m$-simulation or $m$-back-simulation relation $B$ to infinite histories, and show that if a HL infinite history $h_h$ and a LL infinite history $h_l$ are $B$-related, then $h_h$ satisfies a constraint $C$ iff $h_l$ satisfies $m(C)$.
\begin{definition} \rm
Let $M_h$ be a sound or complete abstraction of $M_l$ via $B$.
Let $h_h$ and $h_l$ be infinite histories of $M_h$ and $M_l$, respectively. We write $\langle h_{h}, h_{l}\rangle \in B$, if for any $s_h \sqsubset h_h$, there exists $s_l \sqsubset h_l$ such that $\langle s_{h}, s_{l}\rangle \in B$.
\end{definition}
By induction on the structure of a normal form trajectory constraint, we can prove:
\begin{prop}\label{simulation between infinite histories}
Let $M_h$ be a sound and complete abstraction of $M_l$. Let $C$ be a HL trajectory constraint. Then
$M_{h}\models C$ iff $M_{l}\models m(C)$.
\end{prop}
\begin{proof}
We prove the claim by structural induction on $C$:
\vspace{2mm}
\noindent \textbf{Induction base}:
\begin{itemize}
\item $C\doteq s=s'$: Let $s_{h}$ and $s'_{h}$ be two situations in $M_{h}$, and let $C$ be the atom $s_{h}=s'_{h}$. On one hand, if we have $M_{h},v[s/s_{h},s'/s'_{h}]\models s=s'$, then there exists a HL grounded action sequence $\alpha$ such that
$M_{h}, v[s/s_{h},s'/s'_{h}]\models s=do(\alpha,S_{0})\land s'=do(\alpha,S_{0})$. Based on the definition of sound abstraction on the model level, there exist two situations $s_{l}$ and $s'_{l}$ such that $M_{l}, v[s/s_{l},s'/s'_{l}]\models Do(m(\alpha),S_{0},s)\land Do(m(\alpha),S_{0},s')$. Given Definition \ref{refinement_of_TC}, we get $M_{l}\models m(s_{h}=s'_{h})$.
On the other hand, if we have $M_{l}\models m(s_{h}=s'_{h})$, then for the HL action sequence $\alpha$, based on Definition \ref{refinement_of_TC}, we have $M_{l}, v[s/s_{l},s'/s'_{l}]\models Do(m(\alpha),S_{0},s)\land Do(m(\alpha),S_{0},s')$.
By the definition of complete abstraction on the model level, there exist two situations $s_{h}$ and $s'_{h}$ such that $M_{h}, v[s/s_{h},s'/s'_{h}]\models s=do(\alpha,S_{0})\land s'=do(\alpha,S_{0})$. Then we get $M_{h}\models s_{h}=s'_{h}$.
\item $C\doteq s\sqsubseteq s'$: Let $s_{h}$ and $s'_{h}$ be two situations in $M_{h}$, and let $C$ be the atom $s_{h}\sqsubseteq s'_{h}$. On one hand, if $M_{h}\models s_{h}\sqsubseteq s'_{h}$, then there exist two HL grounded action sequences $\alpha$ and $\alpha'$, such that $\alpha$ is a prefix of $\alpha'$, and
$M_{h},v[s/s_{h},s'/s'_{h}] \models s=do(\alpha, S_{0})\land s'=do(\alpha',S_{0})\land s\sqsubseteq s'$.
Based on the definition of sound abstraction on the model level, there exist two situations $s_{l}$ and $s'_{l}$ such that
$M_{l}, v[s/s_{l},s'/s'_{l}]\models Do(m(\alpha),S_{0},s)\land Do(m(\alpha'),S_{0},s')\land s\sqsubseteq s'$.
Given Definition \ref{refinement_of_TC}, we get $M_{l}\models m(s_{h}\sqsubseteq s'_{h})$.
On the other hand, if we have $M_{l}\models m(s_{h}\sqsubseteq s'_{h})$, then for the HL action sequences $\alpha$ and $\alpha'$, based on Definition \ref{refinement_of_TC}, we have
$M_{l}, v[s/s_{l},s'/s'_{l}]\models Do(m(\alpha),S_{0},s)\land Do(m(\alpha'),S_{0},s')\land s\sqsubseteq s'$.
By the definition of complete abstraction on the model level, there exist two situations $s_{h}$ and $s'_{h}$ such that
$M_{h}, v[s/s_{h},s'/s'_{h}]\models s=do(\alpha,S_{0})\land s'=do(\alpha',S_{0}) \land s\sqsubseteq s'$.
Then we get $M_{h}\models s_{h}\sqsubseteq s'_{h}$.
\item $C\doteq s\sqsubset h$: Let $s_{h}$ be a situation in $M_{h}$, let $h_{h}$ be an infinite history in $M_{h}$, and let $C$ be the atom $s_{h}\sqsubset h_{h}$. On one hand, if $M_{h}\models s_{h}\sqsubset h_{h}$, then there exist an infinite HL grounded action sequence $\sigma$ and a finite prefix $\alpha$ of $\sigma$ such that
$M_{h},v[s/s_{h},h/h_{h}] \models s=do(\alpha,S_{0})\land s\sqsubset h$.
Using the action sequence $\sigma$ and the definition of sound abstraction on the model level, we can construct an infinite history $h_{l}$ in $M_{l}$ corresponding to $h_{h}$, and there also exists a situation $s_{l}\sqsubset h_{l}$ such that
$M_{l},v[s/s_{l},h/h_{l}] \models Do(m(\alpha),S_{0},s)\land s\sqsubset h$.
Given Definition \ref{refinement_of_TC}, we get $M_{l}\models m(s_{h}\sqsubset h_{h})$.
On the other hand, if we have $M_{l}\models m(s_{h}\sqsubset h_{h})$, then for the HL action sequences $\sigma$ and $\alpha$, based on Definition \ref{refinement_of_TC}, we have
$M_{l},v[s/s_{l},h/h_{l}] \models Do(m(\alpha),S_{0},s)\land s\sqsubset h$.
By the definition of complete abstraction on the model level, there exist a situation $s_{h}$ and an infinite history $h_{h}$ generated by $\sigma$ such that
$M_{h},v[s/s_{h},h/h_{h}] \models s=do(\alpha,S_{0})\land s\sqsubset h$.
Then we get $M_{h}\models s_{h}\sqsubset h_{h}$.
\item $C\doteq a=a'$: Let $a$ and $a'$ be two HL primitive deterministic actions, and let $C$ be the atom $a=a'$. Then based on the definition of refinement mapping, we have that
$M_{h} \models a=a' \Leftrightarrow M_{l}\models m(a)=m(a')\Leftrightarrow M_{l}\models m(a = a')$.
\item $C\doteq P(\vec{x},s)$: Let $s_{h}$ be a situation in $M_{h}$, let $P$ be a HL relational fluent, and let $C$ be the atom $P(\vec{x},s)$. Then based on the definition of $m$-isomorphism, there exists a situation $s_{l}$ in $M_{l}$ such that
$M_{h},v[s/s_{h}] \models P(\vec{x},s) \Leftrightarrow M_{l},v[s/s_{l}]\models m(P)(\vec{x},s)$.
\item $C\doteq f(\vec{x},s)=y$: Let $s_{h}$ be a situation in $M_{h}$, let $f$ be a HL functional fluent, and let $C$ be the atom $f(\vec{x},s)=y$. Then based on the definitions of the simulation relations and $m$-isomorphism, there exists a situation $s_{l}$ in $M_{l}$ such that
$M_{h},v[s/s_{h}] \models f(\vec{x},s)=y \Leftrightarrow M_{l},v[s/s_{l}]\models m(f)(\vec{x}, s)=y$.
\end{itemize}
\vspace{2mm}
\noindent \textbf{Induction step}: Let $C_{1}$ and $C_{2}$ be two HL trajectory constraints, and suppose that
\begin{center}
$M_{h}\models C_{1}$ iff $M_{l}\models m(C_{1})$,
$M_{h}\models C_{2}$ iff $M_{l}\models m(C_{2})$.
\end{center}
Then we prove the following three cases:
\begin{itemize}
\item $C\doteq \neg C_{1}$: If we have $M_{h}\models \neg C_{1}$, then we can get $M_{h}\nvDash C_{1}$. Based on the inductive assumption, we can get $M_{l}\nvDash m(C_{1})$, which means $M_{l}\models \neg m( C_{1})$. Thus, we have $M_{l}\models m(\neg C_{1})$.
\item $C\doteq C_{1}\land C_{2}$: If we have $M_{h}\models C_{1}\land C_{2}$, then we can get $M_{h}\models C_{1}$ and $M_{h}\models C_{2}$. Based on the inductive assumption, we can get $M_{l}\models m(C_{1})$ and $M_{l}\models m(C_{2})$, which means $M_{l}\models m(C_{1})\land m(C_{2})$. Thus, we have $M_{l}\models m(C_{1}\land C_{2})$.
\item $C\doteq \exists s. C_{1}(s)$: If we have $M_{h}\models \exists s. C_{1}(s)$, then there exists a situation $s_{h}$ in $M_{h}$, reached from the initial situation $S_{0}$ via executing an action sequence $\alpha$, such that $M_{h}, v[s/s_{h}]\models C_{1}(s)$. Based on the definition of sound abstraction on the model level and the inductive assumption, there exists a situation $s_{l}$ in $M_{l}$ such that $M_{l},v[s/s_{l}]\models Do(m(\alpha),S_{0},s)\land m(C_{1})(s)$, which means $M_{l}\models m(\exists s. C_{1}(s))$.
\end{itemize}
The converse directions of the three cases above can be proved symmetrically.
\end{proof}
Non-deterministic actions in \cite{Cui21} are treated as non-deterministic programs. In particular, each non-deterministic action $A$ has a definition in the form $A(\vec{x})\doteq \pi \vec{u}. A_d(\vec{x},\vec{u})$, where $A_d$ is a deterministic action.
We let $\Pi_A(\vec{x},s)$ denote $\exists \vec{u}. \Pi_{A_d}(\vec{x},\vec{u},s)$, let
$\phi_{P,A_d}(\vec{y},\vec{x},\vec{u}, s)$ denote $\phi_P(\vec{y},A_d(\vec{x},\vec{u}),s)$ simplified by using $\mathcal{D}_{una}$, and let
$\psi_{f,A_{d}}(\vec{y},z,\vec{x},\vec{u},s))$ denote $\psi_f(\vec{y},z,A_d(\vec{x},\vec{u}),s)$ simplified by using $\mathcal{D}_{una}$.
We now introduce the following abbreviations:
\vspace*{1mm}
\noindent$\psi_T \doteq \bigwedge_{A\in \mathcal{A}_{h}}\forall \vec{x}. \textit{Term}(m(A(\vec{x})),s,C_l)$,
\vspace*{1mm}
\noindent$\xi_P \doteq \bigwedge_{A\in \mathcal{A}_{h}}\forall \vec{x},s'. Do(m(A(\vec{x})),s,s') \supset \exists \vec{u}.$\\
\hspace*{1em}$\bigwedge_{P\in \mathcal{P}_{h}}[\forall \vec{y}. m(P(\vec{y},s'))\equiv m(\phi_{P,A_{d}}(\vec{y},\vec{x},\vec{u}, s))]$,
\vspace*{1mm}
\noindent$\xi_f \doteq \bigwedge_{A\in \mathcal{A}_{h}}\forall \vec{x},s'. Do(m(A(\vec{x})),s,s') \supset \exists \vec{u}.$\\
\hspace*{1em}$\bigwedge_{f\in \mathcal{F}_{h}}[\forall \vec{y}, z. m(f(\vec{y},s')=z)\equiv m(\psi_{f,A_{d}}(\vec{y},z,\vec{x},\vec{u},s))]$,
\vspace*{1mm}
\noindent where $\psi_T$ says that the refinement of any HL action terminates in $s$ under $C_l$; $\xi_P$ and $\xi_f$ say that for any HL action $A(\vec{x})$, if its refinement transforms situation $s$ to $s'$, then there is $\vec{u}$ s.t. the mappings of all SSAs instantiated with $A_d(\vec{x},\vec{u})$ hold for $s$ and $s'$.
The following theorem gives a proof-theoretic characterization of sound abstractions, where Items 1 and 8 are easy to understand; Item 2 says that $\mathcal{D}_{l}$ entails that for any executable refinement of a HL situation, the executability of the refinement of any HL action implies its mapped precondition; Item 3 says that $\mathcal{D}_{l}$ entails that for any executable refinement of a HL situation, the mapped precondition of any HL action implies that the executability of its refinement holds in some $R$-related situation; and Item 7 says that $\mathcal{D}_{l}$ entails that the existence of an infinite execution of $anyllps$ satisfying the LL constraint is equivalent to the existence of one satisfying the mapped HL constraint.
\begin{theorem}(Sound abstraction) \label{s-abs-nd-case}
Given a generalized planning problem $\mathcal{G}_{l}$ and its abstraction $\mathcal{G}_{h}$, $\mathcal{G}_{h}$ is a sound $m$-abstraction of $\mathcal{G}_{l}$ iff the following hold:
\begin{enumerate}
\item $\mathcal{D}_{l}^{S_{0}}\models m(\phi)$, where $\phi\in \mathcal{D}_{h}^{S_{0}}$;
\item $\mathcal{D}_{l}\models \forall s. Do(anyllps,S_{0},s)\supset \\
\hspace*{1em}\bigwedge_{A\in \mathcal{A}_{h}}\forall \vec{x},s'. Do(m(A(\vec{x})),s,s')\supset m(\Pi_A(\vec{x},s))$;
\item $\mathcal{D}_{l}\models \forall s. Do(anyllps,S_{0},s)\supset \\
\hspace*{1em}\bigwedge_{A\in \mathcal{A}_{h}}\forall \vec{x}. m(\Pi_A(\vec{x},s))\supset\\
\hspace*{2em} \exists s',s''.R(s,s')\land Do(m(A(\vec{x})),s',s'')$;
\item $\mathcal{D}_{l}\models \forall s. Do(anyllps,S_{0},s)\supset \\
\hspace*{1em}\bigwedge_{A\in \mathcal{A}_{h}}\forall \vec{x},s'. Do(m(A(\vec{x})),s,s') \supset \psi_T$;
\item $\mathcal{D}_{l}\models \forall s. Do(anyllps,S_{0},s)\supset \xi_P$;
\item $\mathcal{D}_{l}\models \forall s. Do(anyllps,S_{0},s)\supset \xi_f$;
\item $\mathcal{D}_{l}\models \exists h_{l}.\textit{Infexe}(anyllps,h_{l},C_{l})\\
\hspace*{1em}\equiv \exists h_{l}. \textit{Infexe}(anyllps,h_{l},m(C_{h}))$;
\item $\mathcal{D}_{l}\models \forall s. Do(anyllps,S_{0},s)\land m(G_{h})[s]\supset G_{l}[s]$.
\end{enumerate}
\end{theorem}
\begin{proof}
Firstly, we prove the \textbf{only-if direction}:
\begin{enumerate}
\item[1.] For any $M^{S_0}_{l}\models \mathcal{D}^{S_{0}}_{l}$, we can extend it to a model $M_{l}$ of $\mathcal{D}_{l}$. Then there exists a model $M_{h}$ of $\mathcal{D}_{h}$, which is a sound $m$-abstraction of $M_{l}$, and we have that the initial situation $S^{M_{h}}_{0}$ of $M_{h}$ is $m$-isomorphic to the initial situation $S^{M_{l}}_{0}$ of $M_{l}$. Given $\phi\in \mathcal{D}^{S_{0}}_{h}$, since $S^{M_{h}}_{0}$ satisfies $\phi$, $S^{M_{l}}_{0}$ satisfies $m(\phi)$. Therefore, $\mathcal{D}^{S_{0}}_{l}\models m(\phi)$ for $\phi\in \mathcal{D}^{S_{0}}_{h}$;
\end{enumerate}
\begin{enumerate}
\item[2.] Let $M_{l}$ be a model of $\mathcal{D}_{l}$; then there exists a model $M_{h}$ of $\mathcal{D}_{h}$ which is a complete $m$-abstraction of $M_{l}$ via an $m$-simulation relation $B_{1}$. Let $s_{l}$ be a situation of $M_{l}$ which satisfies $Do(anyllps, S_{0},s)$; then based on the definition of the $m$-simulation relation, there is a situation $s_{h}$ of $M_{h}$ such that $\langle s_{h},s_{l} \rangle \in B_{1}$. Thus, for any HL action $A(\vec{x})$, $s_{h}$ satisfies $\forall \vec{x}.\Pi_{A}(\vec{x},s)$ iff $s_{l}$ satisfies $\forall \vec{x}. m(\Pi_{A}(\vec{x},s))$. Furthermore, if $M_{l},v[s/s_{l}]\models \forall \vec{x},\exists s'_{l}. Do(m(A(\vec{x})),s,s'_{l})$, then we have $M_{h},v[s/s_{h}]\models \forall \vec{x},\exists s'_{h}. Do(A(\vec{x}),s,s'_{h})$, which implies $M_{h},v[s/s_{h}]\models \forall \vec{x}. \Pi_{A}(\vec{x},s)$. Therefore, $M_{l},v[s/s_{l}]\models \forall \vec{x}. m(\Pi_A(\vec{x},s))$.
\end{enumerate}
\begin{enumerate}
\item[3.] Let $M_{l}$ be a model of $\mathcal{D}_{l}$; then there exists a model $M_{h}$ of $\mathcal{D}_{h}$ such that $M_{h}$ is a complete abstraction of $M_{l}$ via an $m$-simulation relation $B_{1}$, and a sound abstraction of $M_{l}$ via an $m$-back-simulation relation $B_{2}$. Let $s_{l}$ be a situation of $M_{l}$ which satisfies $Do(anyllps, S_{0},s)$; then based on the definitions of the $m$-simulation and $m$-back-simulation relations, there is a situation $s_{h}$ of $M_{h}$ such that $\langle s_{h},s_{l} \rangle \in B_{1}$, and a situation $s'_{l}$ of $M_{l}$ such that $\langle s_{h},s'_{l} \rangle \in B_{2}$. Thus, $s_{l}$ and $s'_{l}$ are $R$-related. Furthermore, for any HL action $A(\vec{x})$, if $M_{l},v[s/s_{l}]\models \forall \vec{x}. m(\Pi_{A}(\vec{x},s))$, then $M_{h},v[s/s_{h}]\models \forall \vec{x}. \Pi_{A}(\vec{x},s)$, and hence, by the definition of the $m$-back-simulation relation, the refinement $m(A(\vec{x}))$ is executable in $s'_{l}$. Therefore, we have $M_{l},v[s/s_{l}]\models \forall \vec{x}, \exists s', s''.R(s,s') \land Do(m(A(\vec{x})),s',s'')$.
\end{enumerate}
\begin{enumerate}
\item[4.] Let $M_{l}$ be a model of $\mathcal{D}_{l}$; then there exists a model $M_{h}$ of $\mathcal{D}_{h}$ such that $M_{h}$ is a complete abstraction of $M_{l}$ via an $m$-simulation relation $B_{1}$. Let $s_{l}$ be a situation of $M_{l}$ which satisfies $Do(anyllps, S_{0},s)$; then based on the definition of the $m$-simulation relation, there is a situation $s_{h}$ of $M_{h}$ such that $\langle s_{h},s_{l} \rangle \in B_{1}$, and $M_{l},v[s/s_{l}]\models \forall \vec{x}. Term(m(A(\vec{x})),s,C_{l})$ for any HL action $A(\vec{x})$.
\end{enumerate}
\begin{enumerate}
\item[5.] Let $M_{l}$ be a model of $\mathcal{D}_{l}$; then there exists a model $M_{h}$ of $\mathcal{D}_{h}$ which is a complete $m$-abstraction of $M_{l}$ via an $m$-simulation relation $B_{1}$. Let $s_{l}$ be a situation of $M_{l}$ which satisfies $Do(anyllps, S_{0},s)$; then based on the definition of the $m$-simulation relation, there is a situation $s_{h}$ of $M_{h}$ such that $\langle s_{h},s_{l} \rangle \in B_{1}$. For each HL action $A(\vec{x})$ and any situation $s'_{l}$, if $M_{l},v[s/s_{l}, s'/s'_{l}]\models \forall \vec{x}. Do(m(A(\vec{x})),s,s')$, then there exist an execution $A_{d}(\vec{x}, \vec{u})$ of $A(\vec{x})$ and a situation $s'_{h}$ such that $M_{h},v[s/s_{h},s'/s'_{h}]\models \forall \vec{x}, \exists \vec{u}. Do(A_{d}(\vec{x},\vec{u}),s,s')$, and $\langle s'_{h},s'_{l} \rangle \in B_{1}$. Then we have that $s_{l}$ satisfies $\forall \vec{x},\vec{y},\exists \vec{u}. m(\phi_{P,A_{d}}(\vec{y},\vec{x},\vec{u},s))$ iff $s_{h}$ satisfies $\forall \vec{x},\vec{y},\exists \vec{u}.\phi_{P,A_{d}}(\vec{y},\vec{x},\vec{u},s)$ iff $s'_{h}$ satisfies $\forall \vec{y}. P(\vec{y},s')$ iff $s'_{l}$ satisfies $\forall \vec{y}. m(P(\vec{y},s'))$.
\end{enumerate}
\begin{enumerate}
\item[6.] The proof of item 6 is similar to item 5.
\end{enumerate}
\begin{enumerate}
\item[7.] Let $M_{l}$ be a model of $\mathcal{D}_{l}$; then there exists a model $M_{h}$ of $\mathcal{D}_{h}$ such that $M_{h}$ is a complete abstraction of $M_{l}$ via an $m$-simulation relation $B_{1}$, and a sound abstraction of $M_{l}$ via an $m$-back-simulation relation $B_{2}$.
\begin{itemize}
\item[(i)] On one hand, let $h_{l}$ be a LL infinite history generated by the refinement of an infinite ground HL action sequence
$A_{1}, A_{2}, \cdots, A_{i},\cdots$.
Let $\sigma_{i} \doteq A_{1}, A_{2}, \cdots, A_{i}$, and
consider the infinite LL situation sequence
$S^{M_{l}}_{0}, s^{1}_{l}, s^{2}_{l}, \cdots, s^{i}_{l}, \cdots$,
where $s^{i}_{l}$ satisfies the formula
$s^{i}_{l}\sqsubset h_{l}\land Do(m(\sigma_{i}), S_{0}, s^{i}_{l})$. Based on the $m$-simulation relation $B_{1}$, we can construct a HL infinite history $h_{h}$ such that $\langle h_{h},h_{l} \rangle \in B_{1}$. Furthermore, if $M_{l},v[h/h_{l}]\models Infexe(anyllps, h_{l}, C_{l})$, then $M_{h},v[h/h_{h}]\models Infexe(anyhlas, h_{h}, C_{h})$. Then, based on Proposition \ref{simulation between infinite histories}, we get $M_{l}, v[h/h_{l}]\models Infexe(anyllps,h_{l},m(C_{h}))$.
\item[(ii)] On the other hand, given a LL infinite history $h_{l}$, we can also construct a HL infinite history $h_{h}$ based on the $m$-simulation relation $B_{1}$ such that $\langle h_{h},h_{l} \rangle \in B_{1}$. If $h_{l}$ satisfies $M_{l},v[h/h_{l}]\models Infexe(anyllps, h_{l}, m(C_{h}))$, then according to Proposition \ref{simulation between infinite histories}, we get $M_{h},v[h/h_{h}] \models Infexe(anyhlas, h_{h}, C_{h})$. Given the definition of the $m$-back-simulation relation, there then exists a low-level infinite history $h'_{l}$ which satisfies $M_{l}, v[h/h'_{l}]\models Infexe(anyllps, h'_{l}, C_{l})$.
\end{itemize}
\end{enumerate}
\begin{enumerate}
\item[8.] Let $M_{l}$ be a model of $\mathcal{D}_{l}$; then there exists a model $M_{h}$ of $\mathcal{D}_{h}$ which is a complete $m$-abstraction of $M_{l}$ via an $m$-simulation relation $B_{1}$. Let $s_{l}$ be a situation of $M_{l}$ which satisfies $Do(anyllps, S_{0},s)$; then based on the definition of the $m$-simulation relation, there is a situation $s_{h}$ of $M_{h}$ such that $\langle s_{h},s_{l} \rangle \in B_{1}$. Suppose $s_{l}$ satisfies $m(G_{h})$; then $s_{h}$ satisfies $G_{h}$, and according to the definition of weak sound abstraction on the theory level, we get that $s_{l}$ satisfies $G_{l}$.
\end{enumerate}
Secondly, we prove the \textbf{if direction}: For any model $M_{l}$ of $\mathcal{D}_{l}$, we construct a model $M_{h}$ of $\mathcal{D}_{h}$ as follows: $M_{h}$ interprets the HL relational and functional fluents at the initial situation $S^{M_{h}}_{0}$ according to the initial situation $S^{M_{l}}_{0}$ of $M_{l}$ and the refinement mapping. We complete $M_{h}$ by using the action precondition axioms and the successor state axioms.
For $M_{l}$ and $M_{h}$, we first prove that $M_{h}$ is a complete abstraction of $M_{l}$. In the following, we construct an $m$-simulation relation $B_{1}$ between the situation and infinite history domains of $M_{h}$ and $M_{l}$:
\begin{itemize}
\item[1.] $\langle S^{M_{h}}_{0}, S^{M_{l}}_{0}\rangle \in B_{1}$, i.e., $S^{M_{h}}_{0}$ is $m$-isomorphic to $S^{M_{l}}_{0}$;
\item[2.] Let $\langle s_{h},s_{l}\rangle \in B_{1}$, where $s_{l}$ is a situation reached from $S^{M_{l}}_{0}$ via executing $anyllps$; then $s_{h}$ is $m$-isomorphic to $s_{l}$. Let $A(\vec{x})$ be an arbitrary HL action.
By item 4, if $m(A(\vec{x}))$ is executable in the situation $s_{l}$, then we have $M_{l},v[s/s_{l}]\models \forall \vec{x}. Term(m(A(\vec{x})),s,C_{l})$. By item 2, if $m(A(\vec{x}))$ is executable in $s_{l}$, then $A(\vec{x})$ is executable in $s_{h}$. For each situation $s'_{l}$ that satisfies $Do(m(A(\vec{x})),s_{l},s'_{l})$ and all the situations $s'_{h}$ that satisfy $Do(A(\vec{x}),s_{h},s'_{h})$, we add all the pairs $\langle s'_{h}, s'_{l}\rangle$ to $B_{1}$. By item 5 (the case of item 6 can be discussed similarly), for an execution $A_{d}(\vec{x},\vec{u})$ of $A(\vec{x})$, $s'_{l}$ satisfies $\forall \vec{y}. m(P(\vec{y},s'_{l}))$ iff $s_{l}$ satisfies $\forall \vec{x},\vec{y},\exists \vec{u}. m(\phi_{P,A_{d}}(\vec{y},\vec{x},\vec{u},s_{l}))$ iff $s_{h}$ satisfies $\forall \vec{x},\vec{y},\exists \vec{u}.\phi_{P,A_{d}}(\vec{y},\vec{x},\vec{u},s_{h})$. For the situations $s'_{h}$ that do not satisfy $\forall \vec{y}. P(\vec{y},s'_{h})$, we then delete all the pairs $\langle s'_{h},s'_{l}\rangle$ from $B_{1}$.
\item[3.] That $B_{1}$ is an $m$-simulation relation follows from its construction together with item 7.
\end{itemize}
Furthermore, if $\langle s_{h}, s_{l}\rangle \in B_{1}$ and $s_{h}$ satisfies $G_{h}$, then $s_{l}$ satisfies $G_{l}$: since $s_{h}$ and $s_{l}$ are $m$-isomorphic, $s_{l}$ satisfies $m(G_{h})$, and by Item 8, $s_{l}$ satisfies $G_{l}$. Therefore, $M_{h}$ is a complete abstraction of $M_{l}$.
\vspace{2mm}
For $M_{l}$ and $M_{h}$, we then prove that $M_{h}$ is a sound abstraction of $M_{l}$. In the following, we construct an $m$-back-simulation relation $B_{2}$ between the situation and infinite history domains of $M_{h}$ and $M_{l}$:
\begin{itemize}
\item[1.] $\langle S^{M_{h}}_{0}, S^{M_{l}}_{0}\rangle \in B_{2}$, i.e., $S^{M_{h}}_{0}$ is $m$-isomorphic to $S^{M_{l}}_{0}$;
\item[2.] Let $\langle s_{h},s_{l}\rangle \in B_{2}$, where $s_{l}$ is a situation reached from $S^{M_{l}}_{0}$ via executing $anyllps$; then $s_{h}$ is $m$-isomorphic to $s_{l}$. Let $A(\vec{x})$ be an arbitrary HL action. By item 4, if $m(A(\vec{x}))$ is executable in the situation $s_{l}$, then we have $M_{l},v[s/s_{l}]\models \forall \vec{x}. Term(m(A(\vec{x})),s,C_{l})$. By item 3, if $s_{l}$ satisfies $\forall \vec{x}. m(\Pi_{A}(\vec{x},s_{l}))$, then there exists a situation $s'_{l}$ $R$-related to $s_{l}$ that satisfies $\forall \vec{x},\exists s''. Do(m(A(\vec{x})),s',s'')$.
Then $s_{h}$ and $s'_{l}$ are $m$-isomorphic, and we replace $\langle s_{h},s_{l} \rangle$ in $B_{2}$ by $\langle s_{h},s'_{l}\rangle$.
In addition, for each situation $s'_{h}$ that satisfies $Do(A(\vec{x}),s_{h},s'_{h})$ and all the situations $s''_{l}$ that satisfy $Do(m(A(\vec{x})),s'_{l},s''_{l})$, we add all the pairs $\langle s'_{h},s''_{l}\rangle$ to $B_{2}$.
By item 5 (the case of item 6 can be discussed similarly), for any execution $A_{d}(\vec{x},\vec{u})$ of $A(\vec{x})$, $s''_{l}$ satisfies $\forall \vec{y}. m(P(\vec{y},s''_{l}))$ iff $s_{l}$ satisfies $\forall \vec{x},\vec{y},\exists \vec{u}. m(\phi_{P,A_{d}}(\vec{y},\vec{x},\vec{u},s_{l}))$ iff $s_{h}$ satisfies $\forall \vec{x},\vec{y},\exists \vec{u}.\phi_{P,A_{d}}(\vec{y},\vec{x},\vec{u},s_{h})$. For the situations $s'_{h}$ that do not satisfy $\forall \vec{y}. P(\vec{y},s'_{h})$, we then delete all the pairs $\langle s'_{h},s''_{l}\rangle$ from $B_{2}$.
\item[3.] That $B_{2}$ is an $m$-back-simulation relation follows from its construction together with item 7.
\end{itemize}
Furthermore, if $\langle s_{h},s_{l}\rangle \in B_{2}$ and $s_{h}$ satisfies $G_{h}$, then $s_{l}$ satisfies $G_{l}$: since $s_{h}$ and $s_{l}$ are $m$-isomorphic, $s_{l}$ satisfies $m(G_{h})$, and by Item 8, $s_{l}$ satisfies $G_{l}$.
In conclusion, according to the definition of sound abstraction on the theory level, $\mathcal{G}_{h}$ is a sound abstraction of $\mathcal{G}_{l}$.
\end{proof}
Unfortunately, we are not able to give a proof-theoretic characterization of complete abstractions. Nonetheless, we give the following characterization, where Items 2-6 say that $M_l^1$ satisfies the following: for any executable refinement of a HL situation, the executability of the refinement of any HL action implies its mapped precondition, $\psi_T$, $\xi_P$, and $\xi_f$ hold, and the LL goal implies the mapped HL goal; Items 9-13 say that $M_l^2$ satisfies the following: there exists a set $P$ of situations including the initial situation such that for any $P$-situation, $\psi_T$, $\xi_P$, and $\xi_f$ hold, the LL goal implies the mapped HL goal, and the mapped precondition of any HL action implies that its refinement is executable and leads to a $P$-situation.
\begin{theorem}
$\mathcal{G}_{h}$ is a complete $m$-abstraction of $\mathcal{G}_{l}$ iff
for any model $M_{h}$ of $\mathcal{D}_{h}$, there exists a model $M^{1}_{l}$ of $\mathcal{D}_{l}$ such that:
\begin{enumerate}
\item $S^{M_{h}}_{0}\sim_{m} S^{M^{1}_{l}}_{0}$;
\item $M^{1}_{l}\models \forall s. Do(anyllps,S_{0},s)\supset \\
\hspace*{1.35em}\bigwedge_{A\in \mathcal{A}_{h}} \forall \vec{x},s'. Do(m(A(\vec{x})),s,s')\supset m(\Pi_A(\vec{x},s))$;
\item $M^{1}_{l}\models \forall s. Do(anyllps,S_{0},s)\supset \\
\hspace*{1.35em}\bigwedge_{A\in \mathcal{A}_{h}}\forall \vec{x},s'. Do(m(A(\vec{x})),s,s') \supset \\
\hspace*{2.7em}\bigwedge_{A\in \mathcal{A}_{h}}\forall \vec{x}. \textit{Term}(m(A(\vec{x})),s,C_l)$;
\item $M^{1}_{l}\models \forall s. Do(anyllps,S_{0},s)\supset \xi_P$;
\item $M^{1}_{l}\models \forall s. Do(anyllps,S_{0},s)\supset \xi_f$;
\item $M^{1}_{l}\models \forall s. Do(anyllps,S_{0},s)\supset G_{l}[s]\supset m(G_{h})[s]$;
\item if $M^{1}_{l}\models \exists h_{l}.\textit{Infexe}(anyllps, h_{l}, C_{l})$, then $ M_h \models \exists h_{h}.\textit{Infexe}(anyhlas, h_{h}, C_{h})$,
\end{enumerate}
and there is another model $M^{2}_{l}$ of $\mathcal{D}_{l}$ and a situation set $P$ of $M^{2}_{l}$ such that:
\begin{enumerate}
\item[8] $S^{M_{h}}_{0}\sim_{m} S^{M^{2}_{l}}_{0}$;
\item[9] $M^{2}_{l}\models P(S_0) \land \forall s.P(s)\supset \\
\hspace*{1.35em}\bigwedge_{A\in \mathcal{A}_{h}}\forall \vec{x}.m(\Pi_A(\vec{x},s)) \supset \\
\hspace*{2.7em}\exists s'. Do(m(A(\vec{x})),s,s')\wedge P(s')$;
\item[10] $M^{2}_{l}\models P(S_0) \land \forall s.P(s)\supset \\
\hspace*{1.35em}\bigwedge_{A\in \mathcal{A}_{h}}\forall \vec{x},s'. Do(m(A(\vec{x})),s,s')\land P(s') \supset \psi_T$;
\item[11] $M^{2}_{l}\models P(S_0) \land \forall s.P(s)\supset \xi_P$;
\item[12] $M^{2}_{l}\models P(S_0) \land \forall s.P(s)\supset \xi_f$;
\item[13] $M^{2}_{l}\models P(S_0) \land \forall s.P(s)\supset G_{l}[s] \supset m(G_{h})[s]$;
\item[14] if $ M_h \models \exists h_{h}.\textit{Infexe}(anyhlas, h_{h}, C_{h})$, then $M^{2}_{l}\models \exists h_{l}.\textit{Infexe}(anyllps, h_{l}, C_{l})$.
\end{enumerate}
\end{theorem}
\begin{proof}
Firstly, we prove the \textbf{only-if direction}:
\begin{enumerate}
\item[1.] According to the definition of complete abstraction on the theory level, we know that for each high-level model $M_{h}$ of $\mathcal{D}_{h}$, there exists a LL model $M^{1}_{l}$ such that $M_{h}$ is a complete abstraction of $M^{1}_{l}$ via an $m$-simulation relation $B_{1}$. Then, we have $S^{M_{h}}_{0}\sim_{m} S^{M^{1}_{l}}_{0}$;
\end{enumerate}
\begin{enumerate}
\item[2.] Let $s_{l}$ be a situation of $M^{1}_{l}$ which satisfies the formula $Do(anyllps, S_{0},s)$; then based on the definition of the $m$-simulation relation, there is a situation $s_{h}$ of $M_{h}$ such that $\langle s_{h},s_{l} \rangle \in B_{1}$. Thus, for any HL action $A(\vec{x})$, $s_{h}$ satisfies $\forall \vec{x}. \Pi_{A}(\vec{x},s)$ iff $s_{l}$ satisfies $\forall \vec{x}. m(\Pi_{A}(\vec{x},s))$. Furthermore, if $M^{1}_{l},v[s/s_{l}]\models \forall \vec{x}. \exists s'_{l}. Do(m(A(\vec{x})),s,s'_{l})$, then $M_{h},v[s/s_{h}]\models \forall \vec{x}. \exists s'_{h}. Do(A(\vec{x}),s,s'_{h})$, which means that $M_{h},v[s/s_{h}]\models \forall \vec{x}. \Pi_{A}(\vec{x},s)$. Therefore, $M^{1}_{l},v[s/s_{l}]\models \forall \vec{x}. m(\Pi_A(\vec{x},s))$.
\end{enumerate}
\begin{enumerate}
\item[3.] Let $s_{l}$ be a situation of $M^{1}_{l}$ which satisfies the formula $Do(anyllps, S_{0},s)$; then based on the definition of the $m$-simulation relation, there is a situation $s_{h}$ of $M_{h}$ such that $\langle s_{h},s_{l} \rangle \in B_{1}$, and $M^{1}_{l},v[s/s_{l}]\models \forall \vec{x}. Term(m(A(\vec{x})),s,C_{l})$ for any HL action $A(\vec{x})$.
\end{enumerate}
\begin{enumerate}
\item[4.] Let $s_{l}$ be a situation of $M^{1}_{l}$ which satisfies the formula $Do(anyllps, S_{0},s)$; then based on the definition of the $m$-simulation relation, there is a situation $s_{h}$ of $M_{h}$ such that $\langle s_{h},s_{l} \rangle \in B_{1}$. For each HL action $A(\vec{x})$ and any situation $s'_{l}$, if $M^{1}_{l},v[s/s_{l}, s'/s'_{l}]\models \forall \vec{x}. Do(m(A(\vec{x})),s,s')$, then there exist an execution $A_{d}(\vec{x}, \vec{u})$ of $A(\vec{x})$ and a situation $s'_{h}$ such that $M_{h},v[s/s_{h},s'/s'_{h}]\models \forall \vec{x}, \exists \vec u. Do(A_{d}(\vec{x}, \vec{u}),s,s')$, and $\langle s'_{h},s'_{l} \rangle \in B_{1}$. Thus, $s_{l}$ satisfies $\forall \vec{x},\vec{y},\exists \vec{u}. m(\phi_{P,A_{d}}(\vec{y},\vec{x},\vec{u},s))$ iff $s_{h}$ satisfies $\forall \vec{x},\vec{y},\exists \vec{u}.\phi_{P,A_{d}}(\vec{y},\vec{x},\vec{u},s)$ iff $s'_{h}$ satisfies $\forall \vec{y}. P(\vec{y},s')$ iff $s'_{l}$ satisfies $\forall \vec{y}. m(P(\vec{y},s'))$.
\end{enumerate}
\begin{enumerate}
\item[5.] The proof of item 5 is similar to item 4.
\end{enumerate}
\begin{enumerate}
\item[6.] Let $s_{l}$ be a situation of $M^{1}_{l}$ which satisfies the formula $Do(anyllps, S_{0},s)$; then based on the definition of the $m$-simulation relation, there is a situation $s_{h}$ of $M_{h}$ such that $\langle s_{h},s_{l} \rangle \in B_{1}$. Suppose $s_{l}$ satisfies $G_{l}$; then $s_{h}$ satisfies $G_{h}$ by the definition of weak complete abstraction on the theory level. Thus, we have that $s_{l}$ satisfies $m(G_{h})$.
\end{enumerate}
\begin{enumerate}
\item[7.] According to the definitions of complete abstraction and $m$-simulation, if there exists a LL infinite history $h_{l}$ such that $M^{1}_{l},v[h/h_{l}]\models Infexe(anyllps, h_{l}, C_{l})$, we can construct a HL infinite history $h_{h}$ such that $\langle h_{h},h_{l} \rangle \in B_{1}$. Then we get $M_{h},v[h/h_{h}]\models Infexe(anyhlas, h_{h}, C_{h})$.
\end{enumerate}
For items 8-14, we first construct a situation set $P$ of $M^{2}_{l}$ as follows: (i) $S^{M^{2}_{l}}_{0}\in P$; (ii) for any situation $s_{l}$ that satisfies $Do(anyllps, S^{M^{2}_{l}}_{0},s_{l})$, we let $s_{l}\in P$. Now we prove items 8-14.
\vspace{2mm}
\begin{enumerate}
\item[8.] According to the definition of complete abstraction on the theory level, we know that for each high-level model $M_{h}$ of $\mathcal{D}_{h}$, there exists a LL model $M^{2}_{l}$ such that $M_{h}$ is a sound abstraction of $M^{2}_{l}$ via an $m$-back-simulation relation $B_{2}$. Then, we have $S^{M_{h}}_{0}\sim_{m} S^{M^{2}_{l}}_{0}$;
\end{enumerate}
\begin{enumerate}
\item[9.] Let $s_{h}$ be a reachable situation of $M_{h}$ via executing $anyhlas$; then according to the construction of $P$, we can find a situation $s_{l}\in P$ of $M^{2}_{l}$ such that $\langle s_{h},s_{l} \rangle \in B_{2}$. Furthermore, for any HL action $A(\vec{x})$, if $s_{h}$ satisfies $\forall \vec{x}. \Pi_{A}(\vec{x},s)$, then there exists a situation $s'_{l}\in P$ such that $s_{l}$ satisfies $\forall \vec{x}. Do(m(A(\vec{x})),s_{l},s'_{l})$.
\end{enumerate}
\begin{enumerate}
\item[10.] Let $s_{h}$ be a reachable situation of $M_{h}$ via executing $anyhlas$; then according to the construction of $P$, there exists a situation $s_{l}\in P$ of $M^{2}_{l}$ such that $\langle s_{h},s_{l} \rangle \in B_{2}$. Then for any HL action $A(\vec{x})$, based on the definition of $m$-back-simulation, we get that $M^{2}_{l},v[s/s_{l}]\models \forall \vec{x}. Term(m(A(\vec{x})),s,C_{l})$.
\end{enumerate}
\begin{enumerate}
\item[11.] Let $s_{h}$ be a reachable situation of $M_{h}$ via executing $anyhlas$; then according to the construction of $P$, there exists a situation $s_{l}\in P$ of $M^{2}_{l}$ such that $\langle s_{h},s_{l} \rangle \in B_{2}$. For each HL action $A(\vec{x})$, if there exist an execution $A_{d}(\vec{x},\vec{u})$ of $A(\vec{x})$ and a situation $s'_{h}$ such that $M_{h},v[s/s_{h},s'/s'_{h}]\models \forall \vec{x}, \exists \vec u. Do(A_{d}(\vec{x},\vec{u}),s,s')$, then based on the definition of $m$-back-simulation and the construction of $P$, there exists a situation $s'_{l}$ such that $M^{2}_{l},v[s/s_{l}, s'/s'_{l}]\models \forall \vec{x}. Do(m(A(\vec{x})),s,s')\land P(s')$, and $\langle s'_{h}, s'_{l} \rangle \in B_{2}$. Then, $s_{l}$ satisfies $\forall \vec{x},\vec{y},\exists \vec{u}. m(\phi_{P,A_{d}}(\vec{y},\vec{x},\vec{u},s))$ iff $s_{h}$ satisfies $\forall \vec{x},\vec{y},\exists \vec{u}. \phi_{P,A_{d}}(\vec{y},\vec{x},\vec{u},s)$ iff $s'_{h}$ satisfies $\forall \vec{y}. P(\vec{y},s')$ iff $s'_{l}$ satisfies $\forall \vec{y}. m(P(\vec{y},s'))$.
\end{enumerate}
\begin{enumerate}
\item[12.] The proof of item 12 is similar to item 11.
\end{enumerate}
\begin{enumerate}
\item[13.] Let $s_{h}$ be a reachable situation of $M_{h}$ via executing $anyhlas$; then based on the construction of $P$, there is a situation $s_{l}$ of $M^{2}_{l}$ such that $\langle s_{h},s_{l} \rangle \in B_{2}$. Suppose $s_{l}$ satisfies $G_{l}$; then according to condition (2) of the definition of complete abstraction on the theory level, $s_{h}$ satisfies $G_{h}$. Thus, $s_{l}$ satisfies $m(G_{h})$.
\end{enumerate}
\begin{enumerate}
\item[14.] According to the definitions of sound abstraction and $m$-back-simulation, if there exists a HL infinite history $h_{h}$ such that $M_{h},v[h/h_{h}]\models Infexe(anyhlas, h_{h}, C_{h})$, then we can construct an LL infinite history $h_{l}$ such that $\langle h_{h},h_{l} \rangle \in B_{2}$. Then we can get that $M^{2}_{l},v[h/h_{l}]\models Infexe(anyllps, h_{l}, C_{l})$.
\end{enumerate}
Secondly, we prove the \textbf{if direction}:
\vspace{2mm}
For $M^{1}_{l}$ and $M_{h}$, we first prove that $M_{h}$ is a complete abstraction of $M^{1}_{l}$. For this purpose, we construct an $m$-simulation relation $B_{1}$ between the situation and infinite-history domains of $M_{h}$ and $M^{1}_{l}$ as follows:
\begin{itemize}
\item[1.] $\langle S^{M_{h}}_{0}, S^{M^{1}_{l}}_{0}\rangle \in B_{1}$, i.e., $S^{M_{h}}_{0}$ is $m$-isomorphic to $S^{M^{1}_{l}}_{0}$;
\item[2.] Let $\langle s_{h},s_{l}\rangle \in B_{1}$, where $s_{l}$ is a situation reached from $S^{M^{1}_{l}}_{0}$ via executing $anyllps$; then $s_{h}$ is $m$-isomorphic to $s_{l}$. Let $A(\vec{x})$ be an arbitrary HL action.
By item 3, if $m(A(\vec{x}))$ is executable in the situation $s_{l}$, then $M^{1}_{l},v[s/s_{l}]\models \forall \vec{x}. Term(m(A(\vec{x})),s,C_{l})$. By item 2, if $m(A(\vec{x}))$ is executable in $s_{l}$, then $A(\vec{x})$ is executable in $s_{h}$. For each situation $s'_{l}$ that satisfies $\forall \vec{x}.Do(m(A(\vec{x})),s_{l},s'_{l})$ and all situations $s'_{h}$ that satisfy $\forall \vec{x}. Do(A(\vec{x}),s_{h},s'_{h})$, we add all the pairs $\langle s'_{h},s'_{l} \rangle$ to $B_{1}$. By item 4 (the case of item 5 can be discussed similarly), for any execution $A_{d}(\vec{x},\vec{u})$ of $A(\vec{x})$, $s'_{l}$ satisfies $\forall \vec{y}. m(P(\vec{y},s'_{l}))$ iff $s_{l}$ satisfies $\forall \vec{x},\vec{y},\exists \vec{u}. m(\Pi_{A_{d}}(\vec{x},\vec{y},\vec{u},s_{l}))$ iff $s_{h}$ satisfies $\forall \vec{x},\vec{y},\exists \vec{u}. \Pi_{A_{d}}(\vec{x},\vec{y},\vec{u},s_{h})$. For the situations $s'_{h}$ that do not satisfy $\forall \vec{y}. P(\vec{y},s'_{h})$, we then delete all the pairs $\langle s'_{h},s'_{l}\rangle$ from $B_{1}$.
\item[3.] That $B_{1}$ is an $m$-simulation relation follows from its construction coupled with item 7.
\end{itemize}
Furthermore, if $\langle s_{h},s_{l}\rangle \in B_{1}$ and $s_{l}$ satisfies $G_{l}$, then $s_{h}$ satisfies $G_{h}$: since $s_{h}$ and $s_{l}$ are $m$-isomorphic and $s_{l}$ satisfies $G_{l}$, by item 6, $s_{l}$ satisfies $m(G_{h})$; thus $s_{h}$ satisfies $G_{h}$.
\vspace{2mm}
For $M^{2}_{l}$ and $M_{h}$, we prove that $M_{h}$ is a sound abstraction of $M^{2}_{l}$. For this purpose, we construct below an $m$-back-simulation relation $B_{2}$ between the situation and infinite-history domains of $M_{h}$ and $M^{2}_{l}$; in particular, we use the situation set $P$ of $M^{2}_{l}$ constructed above.
\begin{itemize}
\item[1.] $\langle S^{M_{h}}_{0}, S^{M^{2}_{l}}_{0}\rangle \in B_{2}$, i.e., $S^{M_{h}}_{0}$ is $m$-isomorphic to $S^{M^{2}_{l}}_{0}$, where $S^{M^{2}_{l}}_{0}\in P$;
\item[2.] Let $\langle s_{h},s_{l}\rangle \in B_{2}$, where $s_{l}\in P$ is a situation reached from $S^{M^{2}_{l}}_{0}$ via executing $anyllps$; then $s_{h}$ is $m$-isomorphic to $s_{l}$. Let $A(\vec{x})$ be an arbitrary HL action.
By item 10, if $m(A(\vec{x}))$ is executable in the situation $s_{l}$, then $M^{2}_{l},v[s/s_{l}]\models Term(m(A(\vec{x})),s,C_{l})$. By item 9, if $A(\vec{x})$ is executable in $s_{h}$, then $m(A(\vec{x}))$ is executable in $s_{l}$. For each situation $s'_{h}$ that satisfies $\forall \vec{x}. Do(A(\vec{x}),s_{h},s'_{h})$ and all situations $s'_{l}\in P$ that satisfy $\forall \vec{x}. Do(m(A(\vec{x})),s_{l},s'_{l})$, we add all the pairs $\langle s'_{h},s'_{l}\rangle$ to $B_{2}$. By item 11 (the case of item 12 can be discussed similarly), for any execution $A_{d}(\vec{x},\vec{u})$ of $A(\vec{x})$, $s'_{l}$ satisfies $\forall \vec{y}. m(P(\vec{y},s'_{l}))$ iff $s_{l}$ satisfies $\forall \vec{x},\vec{y},\exists \vec{u}. m(\Pi_{A_{d}}(\vec{x},\vec{y},\vec{u},s_{l}))$ iff $s_{h}$ satisfies $\forall \vec{x},\vec{y},\exists \vec{u}.\Pi_{A_{d}}(\vec{x},\vec{y},\vec{u},s_{h})$. For the situations $s'_{h}$ that do not satisfy $\forall \vec{y}. P(\vec{y},s'_{h})$, we then delete all the pairs $\langle s'_{h},s'_{l}\rangle$ from $B_{2}$.
\item[3.] That $B_{2}$ is an $m$-back-simulation relation follows from its construction coupled with item 14.
\end{itemize}
Furthermore, if $\langle s_{h},s_{l}\rangle \in B_{2}$ and $s_{l}$ satisfies $G_{l}$, then $s_{h}$ satisfies $G_{h}$: since $s_{h}$ and $s_{l}$ are $m$-isomorphic and $s_{l}$ satisfies $G_{l}$, by item 13, $s_{l}$ satisfies $m(G_{h})$; thus $s_{h}$ satisfies $G_{h}$.
\end{proof}
\section{Implementation Theory}
In this section, by introducing some restrictions in Theorem \ref{s-abs-nd-case}, we first give a sufficient condition for sound abstractions that is first-order verifiable. Then we discuss how to verify the sufficient condition with a first-order theorem prover when the g-planning problem abstractions are QNPs.
We first discuss how to obtain a sufficient condition for sound abstractions from Theorem 1.
In the following,
we use $\models_{fo}$ to denote classical first-order entailment:
(1). Situations that satisfy $Do(anyllps, S_{0}, s)$ are all executable. State constraints are formulas that hold in all executable situations: given a BAT $\mathcal{D}$ and a formula $\phi(s)$, $\phi(s)$ is a state constraint for $\mathcal{D}$ if $\mathcal{D}\models \forall s.Exec(s)\supset \phi(s)$. Thus, to obtain a sufficient condition for sound abstractions, we replace the condition $Do(anyllps, S_{0}, s)$ by providing LL state constraints in tasks 2, 3, 4 and 6 of Theorem 1. We use $\mathcal{D}_{sc}$ to denote a set of state constraints, and abuse notation by also writing $\mathcal{D}_{sc}$ for the conjunction of its elements.
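For instance, in a blocks-world domain, a typical LL state constraint in $\mathcal{D}_{sc}$ (shown here purely for illustration) states that a block rests on at most one object:
\vspace{1mm}

\noindent \hspace*{3mm}$\forall x,y,z. on(x,y,s)\land on(x,z,s)\supset y=z.$
\vspace{1mm}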
(2). Determining whether a given program terminates is the quintessential undecidable problem. We assume that no HL action refinement involves iteration; then the formula $\psi_{T}$ in task 4 is trivially true.
(3). We require each HL action to be deterministic; thus we can ignore trajectory constraints, and hence task 5.
(4). Since task 2 involves reasoning about LL programs, we use existentially extended regression to compute executability conditions of Golog programs. Given a program $\delta$ and a situation $s$, the executability condition $pre(\delta,s)$ of $\delta$ in the situation $s$ can be computed as $\mathcal{R}^{E}[\top(s), \delta]$. To save space, we use the following abbreviation:
\vspace{1mm}
$\xi_{A} \doteq \bigwedge_{A\in \mathcal{A}_{h}}\forall \vec{x}.pre(m(A(\vec{x})),s)\equiv m(\Pi_{A}(\vec{x},s)).$
\vspace{1mm}
\noindent Again, to obtain a sufficient condition for sound abstractions, we verify the following stronger task, which implies both task 2 and task 3: $\models_{fo} \forall s. \mathcal{D}_{sc}(s) \supset \xi_{A}$.
\vspace{1mm}
(5). For further discussion, we introduce two more abbreviations as follows:
\vspace{1mm}
\noindent
$ \zeta_P \doteq \bigwedge_{A\in \mathcal{A}_{h}} \forall \vec{x}. pre(m(A(\vec{x})),s) $\\
\hspace*{2.2em} $\supset \bigwedge_{P\in \mathcal{P}_{h}} \forall \vec{y}.[\mathcal{R}^{E}[ m(P(\vec{y}))[s], m(A(\vec{x}))] $\\
\hspace*{2.2em} $\supset m(\phi_{P,A}(\vec{x}, \vec{y}, s))] \land [m(\phi_{P,A}(\vec{x}, \vec{y}, s)) $ \\
\hspace*{2.2em} $\supset\mathcal{R}^{U}[ m(P(\vec{y}))[s],m(A(\vec{x}))]].$
\vspace{1mm}
\noindent
$\zeta_f \doteq \bigwedge_{A\in \mathcal{A}_{h}}\forall \vec{x}. pre(m(A(\vec{x})),s)$\\
\hspace*{2.2em} $\supset \bigwedge_{f\in \mathcal{F}_{h}}\forall \vec{y}, z.[\mathcal{R}^{E}[ m(f(\vec{y})=z)[s], m(A(\vec{x}))] $\\
\hspace*{2.2em} $\supset m(\psi_{f,A}(\vec{x},\vec{y},z,s))] \land [m(\psi_{f,A}(\vec{x},\vec{y},z,s))$ \\
\hspace*{2.2em} $\supset \mathcal{R}^{U}[m(f(\vec{y})=z)[s],m(A(\vec{x}))]].$
\vspace{1mm}
\noindent Based on (1) and the two extended regression definitions, we can get that task 4 is equivalent to:
\begin{equation*}
\setlength{\abovedisplayskip}{1mm}
\setlength{\belowdisplayskip}{1mm}
\models_{fo} \forall s. \mathcal{D}_{sc}(s) \supset \zeta_P \land \zeta_f.
\end{equation*}
We have the following result based on the analysis above:
\begin{corollary}\label{implementation_theory}
Given a g-planning problem $\mathcal{G}_{l}$ and its abstraction $\mathcal{G}_{h}$, suppose that all the HL actions are deterministic and that their refinements do not involve iteration. Then $\mathcal{G}_{h}$ is a sound abstraction of $\mathcal{G}_{l}$ if:
\begin{enumerate}[leftmargin=*]
\item $\mathcal{D}_{l}^{S_{0}}\models_{fo} m(\phi)$, where $\phi\in \mathcal{D}_{h}^{S_{0}}$;
\item $\models_{fo} \forall s. \mathcal{D}_{sc}(s)\supset \xi_A$;
\item $\models_{fo} \forall s. \mathcal{D}_{sc}(s) \supset \zeta_P \land \zeta_f$;
\item $\models_{fo} \forall s. \mathcal{D}_{sc}(s)\land m(G_{h})[s]\supset G_{l}[s]$.
\end{enumerate}
\end{corollary}
The soundness of abstractions can be verified by feeding all 4 tasks above into a theorem prover. However, existing first-order theorem provers do not support counting or transitive closure, while QNPs, which are popular abstraction models for g-planning problems, involve both. We now turn to discuss how to verify QNP abstractions; in particular, we develop methods to handle counting and transitive closure.
QNPs are extensions of classical planning problems. Compared to classical planning problems, the state variables of QNPs also include non-negative numerical variables $n$. These variables introduce the non-propositional atoms $n = 0$ and their negations $n>0$, which can appear in the initial situations, action preconditions, and goals of QNPs. The effects of actions on a numerical variable $n$ can be increments or decrements, denoted by $n\uparrow$ and $n\downarrow$, respectively.
\vspace{+1mm}
\noindent {\bf Example \ref{eg-clearing a block} cont'd.}
An abstraction for the g-planning problem $ClearA$ is a QNP $Q=\langle F, V, I, O, G \rangle $, where $F=\{H\}$ contains a propositional variable $H$ meaning that the agent holds a block, and $V=\{n\}$ contains a numerical variable $n$ counting the blocks above $A$. The initial state $I$ is $n>0\land \neg H$, the goal state $G$ is $n=0$, and the actions $O=\{pickabove, putaside\}$ are defined as follows:
\vspace{1mm}
\noindent \hspace*{3mm}$pickabove:\langle \neg H\land n>0;H,n\downarrow \rangle;\ putaside:\langle H;\neg H \rangle.$
\vspace{1mm}
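\noindent As an illustration, the QNP above admits a direct machine-readable encoding. The following sketch mirrors the tuple $Q=\langle F, V, I, O, G \rangle$ in Python; the field names are our own choice and do not follow any particular QNP toolkit:
\begin{verbatim}
# Illustrative encoding of the ClearA QNP
# Q = <F, V, I, O, G>; "n--" denotes the
# decrement effect on n.
clear_a_qnp = {
    "F": ["H"],        # propositional vars
    "V": ["n"],        # numerical vars
    "I": ["n>0", "-H"],
    "G": ["n=0"],
    "O": {             # action: (pre, eff)
        "pickabove": (["-H", "n>0"],
                      ["H", "n--"]),
        "putaside":  (["H"], ["-H"]),
    },
}
\end{verbatim}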
In the QNP abstraction case, the verification of task 3 in Corollary \ref{implementation_theory} can be made more specific. For each HL relational fluent $p\in\mathcal{P}_{h}$ and HL action $A\in \mathcal{A}_{h}$, that $A$ makes $p$ true (the false case can be discussed similarly) means that, at the low level, all executions of $m(A)$ make $m(p)$ true; this yields task $3^{*}$:
\vspace{1mm}
\noindent \hspace*{0.5em}$\models_{fo} \forall s. \mathcal{D}_{sc}(s) \land $\\
\hspace*{1.5em}$ \forall \vec{x}. pre(m(A(\vec{x})),s)\supset \mathcal{R}^{U}[m(p(s)),m(A(\vec{x}))].$
\vspace{1mm}
For the verification tasks involving functional fluents, we only discuss a restricted case, namely, that the effects of actions on functional fluents are $\pm 1$. Given a functional fluent $f\in\mathcal{F}_{h}$ and an action $A\in \mathcal{A}_{h}$, if $A$ increases $f$ by 1 (the decrease case can be discussed similarly), we prove task $4^{*}$:
\vspace{1mm}
\noindent \hspace*{0.5em} $\models_{fo} \forall s. \mathcal{D}_{sc}(s) \land $\\
\hspace*{1.5em}$\forall \vec{x}. pre(m(A(\vec{x})),s)\supset\forall s'. Do(m(A(\vec{x})),s,s')\supset $\\
\hspace*{1.5em}$\forall \vec{y},z. m(f(\vec{y},s')=k+1)\equiv m(\psi_{f,A}(\vec{x},\vec{y},k,s)).$
\vspace{1mm}
\noindent which means that if $m(A)$ is executable in an LL situation $s$, then for any situation $s'$ that can be reached by executing $m(A)$ from $s$, we have $m(n)[s']=m(n)[s]+1$. Assuming that $m(n)=\#x.\phi(x)$, the following formula $\Psi$ holds:
\vspace{1mm}
\noindent $\hspace*{1em}[\exists x. \phi(x,s')\land \neg \phi(x,s)]\land[\forall x. \phi(x,s)\supset \phi(x,s')]\land$\\
$\hspace*{1.5em}[\forall x,y. \phi(x, s')\land\neg \phi(x,s)\land \phi(y,s')\land \neg \phi(y,s)\supset x=y].$
\vspace{1mm}
\noindent This formula says that there exists one and only one object that makes $\phi(x)$ change from false to true after a program execution, while no object changes from true to false. Then, based on Proposition \ref{prop_Uregression}, we obtain the following task 4$^{\#}$, which is equivalent to task $4^{*}$:
\vspace{1mm}
\noindent \hspace*{1em}$\models_{fo} \forall s. \mathcal{D}_{sc}(s)\land $\\
\hspace*{1.5em}$\forall \vec{x}. pre(m(A(\vec{x})),s) \supset \mathcal{R}^{U}[\Psi(s),m(A(\vec{x}))].$
\vspace{1mm}
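\noindent As a sanity check of the intended semantics of $\Psi$, the following self-contained Python sketch (our own illustration, not part of the verification system) confirms on finite interpretations that $\Psi$ holds exactly when the extension of $\phi$ gains one object and loses none, i.e., when the count $\#x.\phi(x)$ increases by exactly 1:
\begin{verbatim}
# Finite-model check of Psi: given the
# extensions of phi in s and s', Psi holds
# iff nothing turns false and exactly one
# object turns true.
def psi_holds(phi_s, phi_sp):
    phi_s, phi_sp = set(phi_s), set(phi_sp)
    gained = phi_sp - phi_s
    lost = phi_s - phi_sp
    return len(gained) == 1 and not lost

assert psi_holds({"b1"}, {"b1", "b2"})
assert not psi_holds({"b1"}, {"b2", "b3"})
assert not psi_holds(set(), {"b1", "b2"})
\end{verbatim}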
Transitive closure formulas are often used to define counting terms. Thus, task $4^{\#}$ may involve the regression of transitive closure formulas. In fact, the definition of the one-step regression of transitive closure formulas is the same as in Definition \ref{def_reg}.
\noindent {\bf Example \ref{eg-clearing a block} cont'd.}
Given the successor state axiom of fluent $on(x,y,s)$ in Example \ref{eg-clearing a block}, and a transitive closure formula $\phi$:
\begin{equation*}
\setlength{\abovedisplayskip}{1mm}
\setlength{\belowdisplayskip}{0mm}
[TC_{x,y}\, on(x,y,do(unstack(A,B),s))](x,C),
\end{equation*}
\vspace{1mm}
\noindent the regression result $\mathcal{R}_{\mathcal{D}}[\phi]$ of $\phi$ with respect to the concrete action $unstack(A,B)$ is as follows:
\begin{equation*}
\setlength{\abovedisplayskip}{1mm}
\setlength{\belowdisplayskip}{0mm}
[TC_{x,y} on(x,y,s)\land (x\neq A\lor y\neq B)](x,C).
\end{equation*}
\vspace{1mm}
To handle the regression of transitive closure formulas with existing theorem provers, for a given transitive closure formula $\phi(\vec{x})$, we first compute the regression result $\mathcal{R}_{\mathcal{D}}[\phi(\vec{x})]$ of $\phi(\vec{x})$. Then we introduce a new relation $P(\vec{x})$ defined as $\mathcal{R}_{\mathcal{D}}[\phi(\vec{x})]$ and endow it with transitivity and minimality.
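\noindent A minimal sketch of this encoding with the Z3 Python API is given below (the sort and predicate names are hypothetical). Since minimality of the transitive closure is not first-order axiomatizable, in practice it is approximated, e.g., by also asserting the unfolding direction $P(x,y)\supset on(x,y)\lor \exists z.\, on(x,z)\land P(z,y)$:
\begin{verbatim}
from z3 import (DeclareSort, Function,
    BoolSort, Const, ForAll, Implies,
    And, Solver)

Obj = DeclareSort("Obj")
# base relation: the regression result
on = Function("on", Obj, Obj, BoolSort())
# fresh relation for its transitive closure
P = Function("P", Obj, Obj, BoolSort())

x = Const("x", Obj)
y = Const("y", Obj)
z = Const("z", Obj)
s = Solver()
# base inclusion: on is contained in P
s.add(ForAll([x, y],
    Implies(on(x, y), P(x, y))))
# transitivity of P
s.add(ForAll([x, y, z],
    Implies(And(P(x, y), P(y, z)),
            P(x, z))))
\end{verbatim}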
\section{Verification System and Experiment Results}
Based on the discussion in Section 4, we designed a sound-abstraction verification system for g-planning. The inputs of our system are a g-planning problem coupled with state constraints, a refinement mapping, and an abstraction problem. The output of our system is $True$, $False$, or $Unknown$. The g-planning problems in our system take the form of STRIPS-like problems; their only extension relative to STRIPS problems is that the initial states can be first-order formulas with transitive closure (see Example \ref{eg-clearing a block}). The formalization of QNP abstractions in our system is the same as that in \cite{BonetG20}.
The workflow of our verification system is as follows:
\vspace*{+0.5mm}
\par\noindent \textbf{Step 1:} Given the input g-planning problem, the system automatically generates the LL BAT;
\vspace*{+0.5mm}
\par\noindent \textbf{Step 2:} Based on the input abstraction problem, refinement mapping, the generated LL BAT, and LL state constraints, the system generates the verification tasks that we mention in Section 4. Concretely, the system generates task 1 for the HL and the LL initial states; task 2 for each HL action; task 3$^{*}$ for each HL relational fluent; task 4$^{\#}$ for each HL functional fluent; and task 5 for the HL and the LL goal states.
\par\noindent \textbf{Step 3:} The system feeds all the tasks above into the Z3 solver (version 4.8.10.0) \cite{smt} for verification. If all the tasks are verified, the system returns $True$; if some task fails, it returns $False$; and if the theorem prover times out, it returns $Unknown$.
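\noindent Each task has the form $\models_{fo} \Phi$, and a standard way to discharge it with Z3 is to assert the negation of $\Phi$ and test for unsatisfiability. The sketch below is schematic: the two Boolean constants are mere placeholders for the actual first-order verification conditions, so with real inputs \texttt{unsat} means the task holds:
\begin{verbatim}
from z3 import (Bools, Implies, Not,
    Solver, sat, unsat)

# placeholders for D_sc and xi_A
d_sc, xi_a = Bools("d_sc xi_a")
task = Implies(d_sc, xi_a)

solver = Solver()
solver.set(timeout=10000)  # 10 s, in ms
solver.add(Not(task))      # refute task
result = solver.check()
if result == unsat:
    print("True")      # task verified
elif result == sat:
    print("False")     # counterexample
else:
    print("Unknown")   # e.g. timeout
\end{verbatim}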
\vspace{1mm}
\noindent {\bf Example \ref{eg-clearing a block} cont'd.} The refinement mappings for the HL relational fluent $H$ and the functional fluent $n$ are as follows:
\vspace{1mm}
\indent $m(H)=\exists x.holding(x)$; $m(n)=\# x. on^{+}(x,A)$.
\noindent Task 1 generated by our system is as follows:
$\exists x. on^{+}(x,A)\land ontable(A)\land \neg holding(x) $\\
\indent \hspace*{4em}$\supset \neg \exists x. holding(x)\land \exists x. on^{+}(x,A).$
Our verification system was tested on 7 domains: $ClearA$ is our running example; $OnAB$ is about achieving the goal $On(A,B)$ in Blocksworld instances where the gripper is initially empty and the blocks $A$ and $B$ are in different towers with blocks above them; $Logistics$ involves a vehicle whose goal is to load goods at the original location and transport them to the target location; $Gripper$ involves a robot with two grippers whose goal is to move all balls from room $A$ to room $B$; $GetLast$ and $FindA$ are both linked-list domain problems with the same predicate and action sets: the goal of $GetLast$ is to traverse all the elements in a linked list, while $FindA$ aims at finding the element $A$ in a linked list; $Corner$ contains instances in which an agent needs to navigate in a rectangular grid and arrive at the point $(0,0)$ from any other point $(x,y)$. The abstractions of the problems $ClearA$, $Gripper$, and $OnAB$ come from \cite{BonetFG19}, and the abstractions of the problems $Logistics$, $GetLast$, $FindA$, and $Corner$ are hand-crafted.
Our experiments were run on a Windows machine with a 3.7GHz CPU and 16GB RAM; the default time limit for each subtask was 10s. We summarize the experimental results in Table 1. \textit{\#A} is the number of HL actions, \textit{\#F} is the number of HL functional fluents, and \textit{\#P} is the number of HL relational fluents. $T$ shows the total time cost of all verification tasks. The results show that all abstractions are sound.
\begin{table}
\renewcommand{\arraystretch}{1.3}
\caption{Experimental Results}
\label{table_example}
\centering
\begin{tabular}{cccccc}
\hline
\bfseries \textit{Domain} & \bfseries \textit{\#A}
& \bfseries \textit{\#F} & \bfseries \textit{\#P} & \bfseries \textit{T(s)} &\bfseries \textit{Result} \\
\hline
ClearA & 2 & 1 & 2 & 4.1687 & True \\
Gripper & 4 & 5 & 2 & 7.0014 & True \\
Logistics & 4 & 3 & 2 & 6.0052 & True \\
OnAB & 4 & 3 & 8 & 10.2191 & True \\
GetLast & 2 & 1 & 1 & 3.5302 & True \\
FindA & 2 & 1 & 1 & 3.5369 & True \\
Corner & 2 & 2 & 0 & 3.6378 & True \\
\hline
\end{tabular}
\end{table}
\section{Conclusion}
In g-planning, solutions of sound abstractions are those with correctness guarantees for the original problems. In this paper, based on Cui et al.'s work, we explored automatic verification of sound abstractions for g-planning. We gave a proof-theoretic characterization of sound abstractions for g-planning in the situation calculus. Then, we derived a sufficient condition for sound abstractions that is first-order verifiable. To implement it, we exploited regression extensions and presented methods to handle counting and transitive closure. In the future, we are interested in automatic verification concerning trajectory constraints for non-deterministic abstractions, such as FOND. We are also interested in automatically learning abstractions and in abstraction revision based on the verification of sound abstractions.
\section*{Acknowledgments}
We acknowledge support from the Natural Science Foundation of China under Grant No. 62076261.
\bibliographystyle{named}
|
2,869,038,153,977 | arxiv | \section{\boldmath Introduction}
The hadron spectrum was successfully categorized based on the quark model as early as the 1960s~\cite{quarkmodel}. For a long time, all known hadrons could be classified as mesons or baryons with components of a quark-antiquark pair ($q \bar{q}$) or three quarks ($qqq$), respectively. However, Quantum Chromodynamics (QCD) also allows the existence of more complex structures,
such as the tetraquark, pentaquark, or glueball, which possess properties that are forbidden for conventional hadrons. The states that do not fit into the ordinary $q\bar{q}$ or $qqq$ scheme in the quark model are referred to as exotic states.
The experimental discovery of exotic states began in 2003 with the observation of the $X(3872)$~\cite{3872-1}. This new state did not fit any ordinary $c\bar{c}$ quarkonia in the quark model. After that, the $X(3872)$ was observed in multiple decay modes and confirmed by various experiments~\cite{3872-2,3872-3,3872-4}. Many different theoretical interpretations of this state have been proposed, such as meson molecule, tetraquark, and conventional bound state~\cite{3872-theory-1,3872-theory-2,3872-theory-3,3872-theory-4,3872-theory-5}.
During the past two decades, there has been considerable world-wide activity in exotic state research using various processes, such as $e^+e^-$ annihilation (e.g., at $\tau$-charm facilities and B-factories), hadron collisions (e.g., at the Tevatron and the LHC), or photo- and leptoproduction (e.g., at the SPS, HERA or at Jefferson Lab),
and many exotic state candidates were observed~\cite{xyz-yuan,xyz-shen}.
In searches for exotic states, a clear feature that helps distinguish exotic from ordinary hadrons would be a nonzero electric charge in a state which contains a heavy quark-antiquark pair of the same flavor. Such a state must contain at least one more quark-antiquark pair, and is thus not a conventional quark-antiquark meson.
Furthermore, a state with a pair of identical heavy-flavor quarks (for example, $cc$) has even more pronounced features of an exotic state.
Very recently, the LHCb experiment announced the observation of an open-double-charm state $T_{cc}^+$ in the $D^{0}D^{0}\pi^{+}$ mass spectrum near threshold~\cite{tcc-lhcb-1,tcc-lhcb-2}.
It contains two charm quarks and two light quarks, and thus constitutes clear evidence for an exotic state. On the theoretical side, in addition to tetraquark models based on a heavy quark pair and two light quarks, double-heavy tetraquark states are studied using QCD sum rules~\cite{the_qcd}, quark models~\cite{the_qm1,the_qm2}, and lattice QCD computations~\cite{the_lattqcd}. In addition, a QCD-inspired chiral quark model predicts tetraquark states, denoted $X_{cc\bar{s}\bar{s}}$, with electric charge $+2$ in the spin-parity channels $J^{P} = 0^{+}$ and $2^{+}$, which are expected to be found in the $D_{s}^{+}D_{s}^{+}$ and $D_{s}^{*+}D_{s}^{*+}$ final states~\cite{the_qqss2}. The predicted masses and widths of these resonances are listed in Table~\ref{tab:theoritical_predict}. Among the three predicted resonances in the $D_{s}^{*+}D_{s}^{*+}$ final state, the narrowest one is the most likely to be observed.
\begin{table}[h]
\begin{center}
\caption{\label{tab:theoritical_predict} Predicted masses and widths for the $X_{cc\bar{s}\bar{s}}$ resonances in $D_{s}^{+}D_{s}^{+}$ and $D_{s}^{*+} D_{s}^{*+}$ final states~\cite{the_qqss2}. }
\renewcommand\arraystretch{1.3}
\begin{tabular}{cccc}
\hline
\hline
Mode & $~~IJ^{P}$~~ & ~~Mass~~ & ~~Width~~ \\
& & ~~(MeV/$c^2$)~~ & ~~(MeV)~~ \\
\hline
$X_{cc\bar{s}\bar{s}} \to D_{s}^{+}D_{s}^{+}$ & 00$^{+}$ & 4902 & 3.54 \\
$X_{cc\bar{s}\bar{s}} \to D_{s}^{*+}D_{s}^{*+}$ & 02$^{+}$ & 4821 & 5.58 \\
& 02$^{+}$ & 4846 & 10.68 \\
& 02$^{+}$ & 4775 & 23.26 \\
\hline
\hline
\end{tabular}
\end{center}
\end{table}
In this paper, we present a search for double-heavy tetraquark candidates using the $D_{s}^{+}D_{s}^{+}$ and $D_{s}^{*+} D_{s}^{*+}$ final states in $\Upsilon(1S,2S)$ inclusive decays, and $e^+e^- \to D_{s}^{+}D_{s}^{+}(D_{s}^{*+}D_{s}^{*+}) + anything$ processes at $\sqrt{s}$ = 10.52, 10.58, and 10.867~GeV. The $D_{s}^{*+}$ candidates are reconstructed in decays to $D_s^+\gamma$, while the $D_s^+$ candidates are reconstructed in the $D_s^+\to \phi(\to K^{+}K^{-})\pi^+$ and $\bar{K}^{*}(892)^{0}(\to K^{-}\pi^+)K^{+}$ decays. Inclusion of charge-conjugate modes is implied throughout this analysis.
\section{\boldmath The data sample and the belle detector}
The data samples used in this analysis include: a 5.74 fb$^{-1}$ data sample collected
at the $\Upsilon(1S)$ peak (102 million $\Upsilon(1S)$ events); a 24.7 fb$^{-1}$ data sample collected
at the $\Upsilon(2S)$ peak (158 million $\Upsilon(2S)$ events); an 89.5 fb$^{-1}$ data sample collected at $\sqrt{s}$ = 10.52 GeV; a 711 fb$^{-1}$ data sample collected at $\sqrt{s}$ = 10.58 GeV, and a
121.4 fb$^{-1}$ data sample collected at $\sqrt{s}$ = 10.867 GeV, where $s$ is the
center-of-mass energy squared.
All the data were collected with the Belle detector, which is described in detail in Ref.~\cite{detector}, operating at the KEKB asymmetric-energy $e^+e^-$
collider~\cite{collider}.
It is a large-solid-angle magnetic spectrometer consisting of a silicon vertex detector,
a 50-layer central drift chamber (CDC), an array of aerogel threshold Cherenkov counters (ACC),
a barrel-like arrangement of time-of-flight scintillation counters (TOF), and an electromagnetic
calorimeter comprising CsI(Tl) crystals (ECL) located inside a superconducting solenoid coil that
provides a $1.5~\hbox{T}$ magnetic field. An iron flux return comprising resistive plate chambers
placed outside the coil was instrumented to detect $K^{0}_{L}$ mesons and to identify muons.
Monte Carlo (MC) signal events are generated with {\sc EvtGen}~\cite{evtgen} and processed through a full simulation of the Belle detector based on {\sc GEANT3}~\cite{geant}.
Initial-state radiation (ISR) is taken into account assuming that the
cross sections follow a $1/s$ dependence in $e^+e^- \to X_{cc\bar{s}\bar{s}} + anything$ reactions.
The processes $\Upsilon(1S,2S) \to D_{s}^{+}D_{s}^{+}(D_{s}^{*+}D_{s}^{*+}) + anything$ and $e^+e^- \to D_{s}^{+}D_{s}^{+}(D_{s}^{*+}D_{s}^{*+}) + anything$ at $\sqrt{s}$ = 10.52~GeV, 10.58~GeV, and 10.867~GeV are simulated, where the $D_{s}^{*+}$ decays into $D_{s}^{+} \gamma$ using a $P$-wave model, and the $D_{s}^{+}$ decays to $K^+K^-\pi^{+}$ final states using the Dalitz-plot decay model of Ref.~\cite{cleo-dalitz}. The mass of the $X_{cc\bar{s}\bar{s}}$ is chosen in the interval from 4882~MeV/$c^{2}$ to 4922~MeV/$c^{2}$ (4801~MeV/$c^{2}$ to 4841~MeV/$c^{2}$)
in steps of 5~MeV/$c^{2}$, with a width varying from 0.54~MeV to 6.54~MeV (2.58~MeV to 8.58~MeV) in steps of 1~MeV for $X_{cc\bar{s}\bar{s}} \to D_{s}^{+}D_{s}^{+}$ ($D_{s}^{*+}D_{s}^{*+}$).
Inclusive MC samples of $\Upsilon(1S,2S)$ decays, $\Upsilon(4S) \to B^{+}B^{-}/B^{0}\bar{B}^{0}$, $\Upsilon(5S) \to B_{s}^{(*)} \bar{B}_{s}^{(*)}$, and $e^+e^- \to q\bar{q}$ $(q=u, d, s, c)$ at $\sqrt{s}$ = 10.52~GeV, 10.58~GeV, and 10.867~GeV corresponding to four times the integrated luminosity of data are used to study possible peaking backgrounds.
\section{\boldmath Common Event selection criteria}
For reconstructed charged tracks,
the impact parameters perpendicular to and along the beam direction with respect to the interaction point (IP)
are required to be less than 0.2~cm and 1.5~cm, respectively, and the transverse momentum in the
laboratory frame is required to be larger than 0.1~GeV/$c$.
For the particle identification (PID) of a well-reconstructed charged track,
information from different detector subsystems, including specific ionization in the CDC,
time measurement in the TOF, and the response of the ACC, is combined to form a likelihood
${\mathcal L}_i$~\cite{pidcode} for particle species $i$, where $i$ = $\pi$ or $K$.
Tracks with $R_{K}=\mathcal{L}_{\textrm{K}}/(\mathcal{L}_K+\mathcal{L}_\pi)<0.4$ are identified as
pions with an efficiency of 96\%, while 5\% of kaons are misidentified as pions; tracks
with $R_{K}>0.6$ are identified as kaons with an efficiency of 95\%, while 4\% of pions are
misidentified as kaons.
An ECL cluster is taken as a photon candidate if it does not match the extrapolation of any charged tracks.
The energy of the photon candidate from the $D_{s}^{*+}$ decay is required to be greater than 50 MeV.
For $D_{s}^{+}$ candidates, vertex and mass-constrained fits are performed, and then $\chi^{2}_{\textrm{vertex}}/n.d.f < 20$ is required ($> 97\%$ selection efficiency according to MC simulation).
For $D_{s}^{*+}$ candidates, a mass-constrained fit is performed to improve its momentum resolution.
For each $D_{s}^{+}$ candidate, the $D_{s}^{*+}$ candidate with the smallest $\chi^{2}$ from the $D_{s}^{*+}$ mass-constrained fit is kept to suppress the combinatorial background.
The signal mass windows for $\bar{K}^{*}(892)^{0}$, $\phi$, $D_s^+$, and $D_{s}^{*+}$ candidates have been optimized by maximizing the Punzi parameter $S/(3/2+\sqrt{B})$~\cite{fom},
where $S$ is the number of selected signal events, obtained by fitting the $X_{cc\bar{s}\bar{s}}$ invariant-mass spectrum in the simulated signal process, and $B$ is the number of background events, estimated from the normalized $M_{D_{s}^{+}D_{s}^{+}}$ sidebands in the inclusive MC samples.
The optimized mass window requirements are $|M_{K^{+}K^{-}} - m_{\phi}| < 8$ MeV/$c^{2}$, $|M_{\phi \pi^+} - m_{D_{s}^{+}}| < 7$ MeV/$c^{2}$, $|M_{K^{-}\pi^+} - m_{\bar{K}^{*}(892)^{0}}| < 50$ MeV/$c^{2}$, $|M_{\bar{K}^{*}(892)^{0} K^{+}} - m_{D_{s}^{+}}| < 7$ MeV/$c^{2}$, and $|M_{\gamma D_{s}^{+}} - m_{D_{s}^{*+}}| < 14$ MeV/$c^{2}$, where $m_{\phi}$, $m_{\bar{K}^{*}(892)^{0}}$, $m_{{D}_{s}^{+}}$, and $m_{{D}_{s}^{*+}}$ are the nominal masses of $\phi$, $\bar{K}^{*}(892)^{0}$, ${D}_{s}^{+}$, and ${D}_{s}^{*+}$~\cite{PDG}.
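For illustration, this optimization can be phrased as a simple scan over window half-widths; the sketch below uses made-up yield functions in place of the signal-MC fit and the normalized sideband count:
\begin{verbatim}
import math

# Punzi figure of merit S / (3/2 + sqrt(B))
def punzi_fom(S, B):
    return S / (1.5 + math.sqrt(B))

def scan_window(sig, bkg, widths):
    return max(widths, key=lambda w:
               punzi_fom(sig(w), bkg(w)))

# toy yields: signal saturates, background
# grows linearly with the half-width w
best = scan_window(
    lambda w: 100.0 * (1 - math.exp(-w / 5)),
    lambda w: 40.0 * w,
    range(1, 21))        # w in MeV/c^2
print(best)
\end{verbatim}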
After applying all the selections, no events with multiple candidates remain in either the $D_{s}^{+}D_{s}^{+}$ or the $D_{s}^{*+}D_{s}^{*+}$ case.
Figure~\ref{DD_mass_data_2D} shows the scatter plots of $D_{s}^{+}$ versus $D_{s}^{+}$ invariant masses from the selected
$e^+e^- \to X_{cc\bar{s}\bar{s}} (\to D_{s}^{+}D_{s}^{+}(D_{s}^{*+}D_{s}^{*+})) + anything$ candidates in data at $\sqrt{s}$ = 10.58~GeV as an example. Here we define two-dimensional $D_{s}^{+}D_{s}^{+}$ sidebands: the normalized contribution from the $D_{s}^{+}$ sidebands is estimated as 25\% of the number of events
in the blue dashed boxes minus 6.25\% of the number of events in the red dash-dotted boxes.
\begin{figure*}[htbp]
\begin{center}
\includegraphics[height=3.75cm,width=5cm]{fig1a.eps}
\includegraphics[height=3.75cm,width=5cm]{fig1b.eps}
\includegraphics[height=3.75cm,width=5cm]{fig1c.eps}\\
\includegraphics[height=3.75cm,width=5cm]{fig1d.eps}
\includegraphics[height=3.75cm,width=5cm]{fig1e.eps}
\includegraphics[height=3.75cm,width=5cm]{fig1f.eps}
\caption{The top (bottom) plots show the distribution of $M_{D_{s}^{+}}$ vs $M_{D_{s}^{+}}$ from the selected $e^+e^- \to X_{cc\bar{s}\bar{s}} \to D_{s}^{+}D_{s}^{+}~ (D_{s}^{*+}D_{s}^{*+}) + anything$ candidates from data at $\sqrt{s}$ = 10.58~GeV, where the $D_{s}^{+}$ is reconstructed from $\phi \pi^+$ or $\bar{K}^{*}(892)^{0} K^{+}$. The central solid boxes define the signal regions, and the red dash-dotted and blue dashed boxes show the $M_{D_{s}^{+}}$ sideband regions described in the text.}\label{DD_mass_data_2D}
\end{center}
\end{figure*}
\section{\boldmath Invariant-mass spectra}
\begin{figure*}[htbp]
\begin{center}
\includegraphics[height=3.75cm,width=5cm]{fig2a.eps}
\includegraphics[height=3.75cm,width=5cm]{fig2b.eps}
\put(-250,90){\bf (a)} \put(-105,90){\bf (b)}
\\
\includegraphics[height=3.75cm,width=5cm]{fig2c.eps}
\includegraphics[height=3.75cm,width=5cm]{fig2d.eps}
\includegraphics[height=3.75cm,width=5cm]{fig2e.eps}
\put(-395,90){\bf (c)} \put(-250,90){\bf (d)} \put(-105,90){\bf (e)}
\caption{Distributions of $M_{D_{s}^{+}D_{s}^{+}}$ from data for processes (a) $\Upsilon(1S) \to X_{cc\bar{s}\bar{s}} (\to D_{s}^{+}D_{s}^{+}) + anything$, (b) $\Upsilon(2S) \to X_{cc\bar{s}\bar{s}} (\to D_{s}^{+}D_{s}^{+}) + anything$, and $e^+e^- \to X_{cc\bar{s}\bar{s}} (\to D_{s}^{+}D_{s}^{+}) + anything$ at (c) $\sqrt{s}$ = 10.52~GeV, (d) $\sqrt{s}$ = 10.58~GeV, (e) $\sqrt{s}$ = 10.867~GeV. The cyan shaded histograms are from the normalized $M_{D_{s}^{+}D_{s}^{+}}$ sideband events.}\label{DD_mass_data_all}
\end{center}
\end{figure*}
\begin{figure*}[htbp]
\begin{center}
\includegraphics[height=3.75cm,width=5cm]{fig3a.eps}
\includegraphics[height=3.75cm,width=5cm]{fig3b.eps}
\put(-250,90){\bf (a)} \put(-105,90){\bf (b)}
\\
\includegraphics[height=3.75cm,width=5cm]{fig3c.eps}
\includegraphics[height=3.75cm,width=5cm]{fig3d.eps}
\includegraphics[height=3.75cm,width=5cm]{fig3e.eps}
\put(-395,90){\bf (c)} \put(-250,90){\bf (d)} \put(-105,90){\bf (e)}
\caption{Distributions of $M_{D_{s}^{*+}D_{s}^{*+}}$ from data for processes (a) $\Upsilon(1S) \to X_{cc\bar{s}\bar{s}} (\to D_{s}^{*+}D_{s}^{*+}) + anything$, (b) $\Upsilon(2S) \to X_{cc\bar{s}\bar{s}} (\to D_{s}^{*+}D_{s}^{*+}) + anything$, and $e^+e^- \to X_{cc\bar{s}\bar{s}} (\to D_{s}^{*+}D_{s}^{*+}) + anything$ at (c) $\sqrt{s}$ = 10.52~GeV, (d) $\sqrt{s}$ = 10.58~GeV, (e) $\sqrt{s}$ = 10.867~GeV. The cyan shaded histograms are from the normalized $M_{D_{s}^{+}D_{s}^{+}}$ sideband events.}\label{DsDs_mass_data_all}
\end{center}
\end{figure*}
\begin{figure*}[htbp]
\begin{center}
\includegraphics[height=3.75cm,width=5cm]{fig4a.eps}
\includegraphics[height=3.75cm,width=5cm]{fig4b.eps}
\put(-250,90){\bf (a)} \put(-105,90){\bf (b)}
\\
\includegraphics[height=3.75cm,width=5cm]{fig4c.eps}
\includegraphics[height=3.75cm,width=5cm]{fig4d.eps}
\includegraphics[height=3.75cm,width=5cm]{fig4e.eps}
\put(-395,90){\bf (c)} \put(-250,90){\bf (d)} \put(-105,90){\bf (e)}
\caption{Distributions of $M_{D_{s}^{+}D_{s}^{+}}$ from data in the theoretically predicted mass region for processes (a) $\Upsilon(1S) \to X_{cc\bar{s}\bar{s}} (\to D_{s}^{+}D_{s}^{+}) + anything$, (b) $\Upsilon(2S) \to X_{cc\bar{s}\bar{s}} (\to D_{s}^{+}D_{s}^{+}) + anything$, and $e^+e^- \to X_{cc\bar{s}\bar{s}} (\to D_{s}^{+}D_{s}^{+}) + anything$ at (c) $\sqrt{s}$ = 10.52~GeV, (d) $\sqrt{s}$ = 10.58~GeV, (e) $\sqrt{s}$ = 10.867~GeV. The cyan shaded histograms are from the normalized $M_{D_{s}^{+}D_{s}^{+}}$ sideband events.}\label{DD_mass_data}
\end{center}
\end{figure*}
\begin{figure*}[htbp]
\begin{center}
\includegraphics[height=3.75cm,width=5cm]{fig5a.eps}
\includegraphics[height=3.75cm,width=5cm]{fig5b.eps}
\put(-250,90){\bf (a)} \put(-105,90){\bf (b)}
\\
\includegraphics[height=3.75cm,width=5cm]{fig5c.eps}
\includegraphics[height=3.75cm,width=5cm]{fig5d.eps}
\includegraphics[height=3.75cm,width=5cm]{fig5e.eps}
\put(-395,90){\bf (c)} \put(-250,90){\bf (d)} \put(-105,90){\bf (e)}
\caption{Distributions of $M_{D_{s}^{*+}D_{s}^{*+}}$ from data in the theoretically predicted mass region for processes (a) $\Upsilon(1S) \to X_{cc\bar{s}\bar{s}} (\to D_{s}^{*+}D_{s}^{*+}) + anything$, (b) $\Upsilon(2S) \to X_{cc\bar{s}\bar{s}} (\to D_{s}^{*+}D_{s}^{*+}) + anything$, and $e^+e^- \to X_{cc\bar{s}\bar{s}} (\to D_{s}^{*+}D_{s}^{*+}) + anything$ at (c) $\sqrt{s}$ = 10.52~GeV, (d) $\sqrt{s}$ = 10.58~GeV, (e) $\sqrt{s}$ = 10.867~GeV. The cyan shaded histograms are from the normalized $M_{D_{s}^{+}D_{s}^{+}}$ sideband events.}\label{DsDs_mass_data}
\end{center}
\end{figure*}
The $D_{s}^{+}D_{s}^{+}$ and $D_{s}^{*+} D_{s}^{*+}$ invariant-mass distributions of the selected events from the data samples in the kinematically allowed region are shown in Figs.~\ref{DD_mass_data_all} and~\ref{DsDs_mass_data_all},
together with the backgrounds estimated from the normalized $D_{s}^{+}D_{s}^{+}$ sideband events.
No peaking backgrounds are found in the normalized sideband events in either the $D_{s}^{+}D_{s}^{+}$ or the $D_{s}^{*+}D_{s}^{*+}$ invariant-mass distributions from data,
nor in the $D_{s}^{+}D_{s}^{+}$ and $D_{s}^{*+}D_{s}^{*+}$ mass spectra from the inclusive MC samples~\cite{topoo}. Thus, in the following, we focus only on
the mass spectra in the theoretically predicted regions for the $X_{cc\bar{s}\bar{s}}$~\cite{the_qqss2},
which are shown in Figs.~\ref{DD_mass_data} and~\ref{DsDs_mass_data}.
Since no clear signals are observed in the invariant-mass spectra, 90\% confidence level (C.L.) upper limits on the numbers of signal events
are set. The upper limit is calculated with the frequentist approach~\cite{pole-1} implemented in the POLE (Poissonian limit estimator) program~\cite{pole-2}, where the mass window is chosen to give 95\% acceptance for the corresponding simulated signal events, the number of signal candidate events is counted directly, and the number of expected background events is estimated from the normalized mass sidebands.
Possible non-resonant contributions in the $D_{s}^{+}D_{s}^{+}$ and $D_{s}^{*+}D_{s}^{*+}$ invariant-mass spectra are not subtracted but are
treated as potential signal, in order to set more conservative upper limits.
The upper limit calculation is repeated with $M_{X_{cc\bar{s}\bar{s}}}$ varying from 4882 MeV/$c^2$ to 4922 MeV/$c^2$ in steps of 5~MeV/$c^2$ and $\Gamma_{X_{cc\bar{s}\bar{s}}}$ varying
from 0.54~MeV to 6.54~MeV in steps of 1.0~MeV for the $M_{D_{s}^{+}D_{s}^{+}}$ distribution, and with $M_{X_{cc\bar{s}\bar{s}}}$ varying from 4801 MeV/$c^2$ to 4841 MeV/$c^2$ in steps of 5 MeV/$c^2$
and $\Gamma_{X_{cc\bar{s}\bar{s}}}$ varying from 2.58~MeV to 8.58~MeV in steps of 1.0 MeV for the $M_{D_{s}^{*+}D_{s}^{*+}}$ distribution.
\section{\boldmath Systematic Uncertainties }
There are several sources of systematic uncertainties on the branching fraction and Born cross section measurements,
which can be divided into multiplicative and additive systematic uncertainties.
The multiplicative systematic uncertainties include detection-efficiency-related (DER) sources (tracking efficiency,
PID, and photon reconstruction),
the statistical uncertainty of the MC efficiency, branching fractions of intermediate states, the total numbers of $\Upsilon(1S)$ and $\Upsilon(2S)$
events, and the integrated luminosities at $\sqrt{s}$ = 10.52~GeV, 10.58~GeV, and 10.867~GeV.
The systematic uncertainties related to detection efficiency ($\sigma_{\textrm{DER}}$) include the tracking efficiency (0.35\% per track, estimated using partially reconstructed $D^{\ast}$ decays in $D^{*+} \to \pi^{+} D^0, D^0 \to K_S^0 \pi^{+} \pi^{-}$), PID efficiency ($2.2\%$ per kaon and $1.8\%$ per pion, estimated using $D^{*+} \to D^{0}\pi^{+}$, $D^{0} \to K^{-} \pi^{+}$ samples), and photon reconstruction (2.0\% per photon, estimated using a radiative Bhabha sample).
The statistical uncertainty in the signal MC simulation efficiency can be calculated as $\Delta \varepsilon$ = $\sqrt{\varepsilon(1-\varepsilon)/N}$, where $\varepsilon$ is the reconstruction efficiency after all event selections, and $N$ is the total number of generated events.
Its relative uncertainty $\sigma_{\textrm{MC stat.}} = \Delta\varepsilon/\varepsilon$ is at most at the 1.0\% level.
Changing the $s$ dependence of the cross sections of $e^+e^- \to X_{cc\bar{s}\bar{s}} (\to D_{s}^{*+}D_{s}^{*+}) + anything$ from $1/s$ to $1/s^{4}$, the product of efficiency and radiative correction factor $\epsilon(1+\delta)_{\textrm{ISR}}$ changes by less than 0.3\% ($\sigma_{\textrm{ISR}}$).
The relative uncertainties of branching fractions for $D_{s}^{*+} \to \gamma D_{s}^{+}$, $D_{s}^+ \to \phi(\to K^{+}K^{-})\pi^+$, and $D_{s}^+ \to \bar{K}^{*}(892)^{0}(\to K^{-}\pi^+)K^{+}$ are 0.75\%, 3.52\%, and 3.45\%~\cite{PDG}, respectively.
The total uncertainties are calculated using $\sigma_{{\cal B}} = \frac{\sqrt{\Sigma{(\varepsilon_{i} \times {\cal B}_i \times \sigma_{{\cal B}_{i}})^2}}}{\Sigma{(\varepsilon_{i} \times {\cal B}_{i}})}$, where $\varepsilon_{i}$ is the efficiency, $\sigma_{{\cal B}_i}$ is the relative uncertainty of intermediate states' branching fractions, and ${\cal B}_i$
is the product of branching fractions of the intermediate states for each reconstructed mode $i$.
The total numbers of $\Upsilon(1S)$ and $\Upsilon(2S)$ events are estimated to be $(102 \pm 2) \times 10^6$ and $(157.8 \pm 3.6) \times 10^6$, respectively, as determined by counting the numbers of inclusive hadrons.
The uncertainties are mainly due to imperfect simulations of the charged multiplicity distributions from inclusive hadronic MC events ($\sigma_{\textrm{N}_{\Upsilon(1S,2S)}}$).
Belle measures luminosity with 1.4\% precision using wide angle Bhabha events ($\sigma_{{\cal L}}$).
All the multiplicative uncertainties are summarized in Table~\ref{tab:DsDs_err} for the measurements of $\Upsilon(1S,2S) \to X_{cc\bar{s}\bar{s}} + anything$ and $e^+e^- \to X_{cc\bar{s}\bar{s}} + anything$ at $\sqrt{s} $ = 10.52~GeV, 10.58~GeV, and 10.867 GeV, respectively. The total multiplicative uncertainty is calculated by adding all sources of multiplicative uncertainty in quadrature: $$\sigma_{\textrm{syst.}} = \sqrt{\sigma_{\textrm{DER}}^2 + \sigma_{\textrm{MC stat.}}^2 + \sigma_{\textrm{ISR}}^2 + \sigma_{{\cal B}}^2 + \sigma_{\textrm{N}_{\Upsilon(1S,2S) / {\cal L}}}^2}.$$
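For concreteness, both combinations can be reproduced with a few lines of Python (the inputs below are placeholders rather than the values entering Table~\ref{tab:DsDs_err}):
\begin{verbatim}
import math

# quadrature sum of relative
# uncertainties, all in percent
def total_mult(*u):
    return math.sqrt(sum(x * x for x in u))

# efficiency-weighted branching-fraction
# uncertainty over reconstructed modes i
def sigma_bf(eps, bf, sig):
    num = math.sqrt(sum((e * b * s) ** 2
        for e, b, s in zip(eps, bf, sig)))
    return num / sum(e * b
        for e, b in zip(eps, bf))

# DER, MC stat., ISR, B, luminosity terms
print(total_mult(6.1, 1.0, 0.3, 3.0, 1.4))
# -> about 7.0, cf. the table below
\end{verbatim}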
The additive uncertainty due to the number of expected background events is estimated by counting the normalized background distributions directly, and by fitting them with a constant and with a first-order polynomial.
\begin{table*}[htbp]
\caption{\label{tab:DsDs_err} Summary of the multiplicative systematic uncertainties (\%) on the branching fraction measurements for $\Upsilon(1S,2S) \to X_{cc\bar{s}\bar{s}}(\to D_{s}^{+}D_{s}^{+}(D_{s}^{*+}D_{s}^{*+})) + anything$ and on the Born cross section measurements for $e^+e^- \to X_{cc\bar{s}\bar{s}}(\to D_{s}^{+}D_{s}^{+}(D_{s}^{*+}D_{s}^{*+})) + anything$ at $\sqrt{s} $ = 10.52~GeV, 10.58~GeV, and 10.867~GeV.}
\small
\begin{tabular}{ccccccc}
\hline
\hline
\tabincell{c}{$M_{D_{s}^{+}D_{s}^{+}}$ ($M_{D_{s}^{*+}D_{s}^{*+}}$) mode} & DER & MC stat. & \tabincell{c}{\textrm{ISR}} & ${\cal B}$ & \tabincell{c}{ [$\textrm{N}_{\Upsilon(1S)}/\textrm{N}_{\Upsilon(2S)}/{\cal L} $]}& \tabincell{c}{\textrm{Sum}} \\
\hline
$\Upsilon(1S) \to X_{cc\bar{s}\bar{s}} + anything$ & 6.1 (7.3) & 1.0 &--- & 3.0 & 2.0 & 7.2 (8.2) \\
$\Upsilon(2S) \to X_{cc\bar{s}\bar{s}} + anything$ & 6.1 (7.3) & 1.0 &--- & 3.0 & 2.3 & 7.2 (8.3) \\
$e^+e^- \to X_{cc\bar{s}\bar{s}} + anything$ at $\sqrt{s}$ = 10.52 GeV & 6.1 (7.3) & 1.0 &0.3 & 3.0 & 1.4 & 7.0 (8.2) \\
$e^+e^- \to X_{cc\bar{s}\bar{s}} + anything$ at $\sqrt{s}$ = 10.58 GeV & 6.1 (7.3) & 1.0 &0.3 & 3.0 & 1.4 & 7.0 (8.2) \\
$e^+e^- \to X_{cc\bar{s}\bar{s}} + anything$ at $\sqrt{s}$ = 10.867 GeV & 6.1 (7.3) & 1.0 &0.3 & 3.0 & 1.4 & 7.0 (8.2) \\
\hline
\hline
\end{tabular}
\end{table*}
\section{Statistical interpretation of upper limit setting}
Since no signal is observed in the $D_{s}^{+}D_{s}^{+}$ or $D_{s}^{*+} D_{s}^{*+}$ distributions from data at any energy point,
the 90\% C.L.\ upper limits on the numbers of signal events ($N^{\textrm{UP}}$) are determined.
To take into account the additive and multiplicative uncertainties, we first study the additive systematic uncertainty and take the most conservative case, and then use the total multiplicative systematic uncertainty as an input parameter to the POLE program.
Since few events are observed in the data sample at $\sqrt{s}$ = 10.52 GeV, the continuum contributions are neglected for the $\Upsilon(1S,2S)$ decays.
The conservative upper limits on the product branching fractions in $\Upsilon(1S,2S)$ decays, ${\cal B}^{\textrm{UP}}(\Upsilon(1S,2S) \to X_{cc\bar{s}\bar{s}} + anything) \times {\cal B}(X_{cc\bar{s}\bar{s}} \to D_{s}^{+}D_{s}^{+}(D_{s}^{*+} D_{s}^{*+}))$, are obtained from the following formula:
$$\frac{N^{\textrm{UP}}}{N_{\Upsilon(1S,2S)} \times \sum_{i}\varepsilon_{i}{\cal B}_{i} },$$
where $N^{\textrm{UP}}$ is the 90\% C.L.\ upper limit on the number of signal events in data, including the systematic uncertainties mentioned above on the other variables in this expression, $N_{\Upsilon(1S,2S)}$ is the total number of $\Upsilon(1S,2S)$ events,
$\varepsilon_{i}$ is the corresponding detection efficiency, and ${\cal B}_{i}$ is the product of all secondary branching fractions for each reconstructed channel.
The conservative upper limits on the product of the Born cross section and branching fraction, $\sigma^{\textrm{UP}}(e^+e^- \to X_{cc\bar{s}\bar{s}} + anything) \times {\cal B}(X_{cc\bar{s}\bar{s}} \to D_{s}^{+}D_{s}^{+}(D_{s}^{*+} D_{s}^{*+}))$, are calculated with the following formula:
$$\frac{N^{\textrm{UP}} \times |1-\Pi|^{2}}{{\cal L} \times \sum_{i}\varepsilon_{i}{\cal B}_{i}
\times (1+\delta)_{\textrm{ISR}}},$$
where $N^{\textrm{UP}}$ is the 90\% C.L.\ upper limit on the number of signal events in data, including the systematic uncertainties mentioned above on the other variables in this expression, $|1-\Pi|^{2}$ is the vacuum polarization factor, ${\cal L}$ is the integrated luminosity, $\varepsilon_{i}$ is the corresponding detection efficiency, ${\cal B}_{i}$ is the product of all secondary branching fractions for each reconstructed channel, and $(1+\delta)_{\textrm{ISR}}$ is the radiative correction factor.
The values of $|1-\Pi|^{2}$ are 0.931, 0.930, and 0.929 for $\sqrt{s}$ = 10.52~GeV, 10.58~GeV, and 10.867~GeV~\cite{vacuum}, and the uncertainty is calculated to be less than 0.1\%, which is negligible.
The radiative correction factors $(1+\delta)_{\textrm{ISR}}$ are 0.686, 0.694, and 0.738, as calculated using the formula given in Ref.~\cite{ISR} for $\sqrt{s}$ = 10.52~GeV, 10.58~GeV, and 10.867~GeV, respectively, where we assume that the dependence of cross sections on $s$ is $1/s$.
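These two conversions can be summarized compactly; the following sketch is purely illustrative, with invented values for $N^{\textrm{UP}}$ and $\sum_{i}\varepsilon_{i}{\cal B}_{i}$ and the $\sqrt{s}$ = 10.58~GeV constants quoted above:
\begin{verbatim}
# B^UP = N_up / (N_Y * sum_i eps_i B_i)
def bf_ul(n_up, n_y, eff_bf):
    return n_up / (n_y * eff_bf)

# sigma^UP = N_up |1 - Pi|^2 /
#   (L * sum_i eps_i B_i * (1 + d)_ISR),
# with L in fb^-1, result in fb
def xsec_ul(n_up, vp, lum, eff_bf, isr):
    return n_up * vp / (lum * eff_bf * isr)

print(bf_ul(10.0, 102e6, 0.01))
print(xsec_ul(10.0, 0.930, 711.0,
              0.01, 0.694))
\end{verbatim}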
The calculated 90\% C.L. upper limits on the product branching fractions of $\Upsilon(1S,2S) \to X_{cc\bar{s}\bar{s}} + anything$ and the product values of Born cross section and branching fraction of $e^+e^- \to X_{cc\bar{s}\bar{s}} + anything$ at $\sqrt{s}$ = 10.52~GeV, 10.58~GeV, and 10.867~GeV for the mode $X_{cc\bar{s}\bar{s}} \to D_{s}^{+}D_{s}^{+}$ ($X_{cc\bar{s}\bar{s}} \to D_{s}^{*+}D_{s}^{*+}$)
are displayed in Fig.~\ref{DD_Xup} (\ref{DsDs_Xup}).
Numerical values for the mode $X_{cc\bar{s}\bar{s}} \to D_{s}^{+}D_{s}^{+}$ can be found in Tables~\ref{tab:Ds_Ds_Xup_part1} and~\ref{tab:Ds_Ds_Xup_part2}, while those for the mode $X_{cc\bar{s}\bar{s}} \to D_{s}^{*+}D_{s}^{*+}$ are shown in Tables~\ref{tab:Dstar Dstar_Xup_part1} and~\ref{tab:Dstar Dstar_Xup_part2}.
\begin{figure*}[!htbp]
\begin{center}
\includegraphics[height=5.4cm,width=5.5cm]{fig6a.eps}
\includegraphics[height=5.4cm,width=5.5cm]{fig6b.eps}
\\
\includegraphics[height=5.4cm,width=5.5cm]{fig6c.eps}
\includegraphics[height=5.4cm,width=5.5cm]{fig6d.eps}
\includegraphics[height=5.4cm,width=5.5cm]{fig6e.eps}
\caption{The 90\% C.L. upper limits on the product branching fractions of $\Upsilon(1S,2S) \to X_{cc\bar{s}\bar{s}}(\to D_{s}^{+}D_{s}^{+}) + anything$ and the Born cross sections of $e^+e^- \to X_{cc\bar{s}\bar{s}} + anything$ at $\sqrt{s}$ = 10.52~GeV, 10.58~GeV, and 10.867~GeV with $M_{X_{cc\bar{s}\bar{s}}}$ varying from 4882~MeV/$c^2$ to 4922~MeV/$c^2$ in steps of 5~MeV/$c^2$ and $\Gamma_{X_{cc\bar{s}\bar{s}}}$ varying from 0.54~MeV to 6.54~MeV in steps of 1.0~MeV.}\label{DD_Xup}
\end{center}
\end{figure*}
\begin{table*}[htpb]\scriptsize
\begin{center}
\caption{\label{tab:Ds_Ds_Xup_part1} Summary of 90\% C.L. upper limits with the systematic uncertainties included on the product branching fractions of $\Upsilon(1S)/\Upsilon(2S) \to X_{cc\bar{s}\bar{s}} (\to D_{s}^{+}D_{s}^{+}) + anything$. }
\renewcommand\arraystretch{1.3}
\begin{tabular}{cccccccc}
\hline
\hline
\multicolumn{8}{c}{${\cal B}(\Upsilon(1S) / \Upsilon(2S) \to X_{cc\bar{s}\bar{s}} + anything) \times {\cal B}(X_{cc\bar{s}\bar{s}} \to D_{s}^{+}D_{s}^{+})$ ($\times10^{-4}$)} \\
\hline
\multicolumn{1}{c}{\textbf{$M_{X_{cc\bar{s}\bar{s}}}$ (MeV/$c^{2}$)}} &\multicolumn{7}{c}{\textbf{$\Gamma_{X_{cc\bar{s}\bar{s}}}$ (MeV)}} \\
& \multicolumn{1}{c}{\textbf{$0.54$}} & \multicolumn{1}{c}{\textbf{$1.54$}} & \multicolumn{1}{c}{\textbf{$2.54$}} & \multicolumn{1}{c}{\textbf{$3.54$}} & \multicolumn{1}{c}{\textbf{$4.54$}} & \multicolumn{1}{c}{\textbf{$5.54$}} & \multicolumn{1}{c}{\textbf{$6.54$}} \\
\hline
$4882$ & 1.7/1.2 & 1.7/1.2 & 1.8/1.2 & 1.8/1.3 & 1.8/1.2 & 1.9/2.5 & 1.9/2.5\\
$4887$ & 1.7/1.2 & 1.7/1.2 & 1.8/1.2 & 1.8/1.2 & 1.8/1.2 & 1.9/1.3 & 1.8/1.3\\
$4892$ & 1.7/1.2 & 1.7/1.2 & 1.8/1.2 & 1.8/1.2 & 1.8/1.2 & 1.9/1.3 & 1.8/1.3\\
$4897$ & 1.7/1.2 & 1.7/1.2 & 1.8/1.2 & 1.8/1.2 & 1.8/1.2 & 1.9/1.3 & 1.8/2.5\\
$4902$ & 1.7/1.2 & 1.8/1.1 & 1.8/2.2 & 1.8/2.3 & 1.8/2.2 & 1.9/2.4 & 1.9/2.4\\
$4907$ & 1.7/2.2 & 1.7/2.2 & 1.8/2.3 & 1.8/2.3 & 1.8/2.3 & 1.9/1.9 & 1.8/1.9\\
$4912$ & 1.7/2.2 & 1.7/2.2 & 1.8/1.8 & 1.8/1.8 & 1.8/1.8 & 1.9/1.9 & 3.4/1.9\\
$4917$ & 1.7/1.9 & 1.7/1.8 & 3.3/1.9 & 3.4/1.8 & 3.4/1.8 & 3.5/1.8 & 3.4/1.8\\
$4922$ & 3.3/0.9 & 3.3/0.9 & 3.4/0.9 & 3.5/1.8 & 3.5/1.8 & 3.6/1.9 & 3.5/1.7\\
\hline
\hline
\end{tabular}
\end{center}
\end{table*}
\begin{table*}[htpb]\scriptsize
\begin{center}
\caption{\label{tab:Ds_Ds_Xup_part2} Summary of 90\% C.L. upper limits with the systematic uncertainties included on the cross sections of $e^+e^- \to X_{cc\bar{s}\bar{s}} (\to D_{s}^{+}D_{s}^{+}) + anything$ at $\sqrt{s}$ = 10.52~GeV / 10.58~GeV / 10.867~GeV. }
\renewcommand\arraystretch{1.3}
\begin{tabular}{cccccccc}
\hline
\hline
\multicolumn{8}{c}{$\sigma(e^+e^- \to X_{cc\bar{s}\bar{s}} + anything) \times {\cal B}(X_{cc\bar{s}\bar{s}} \to D_{s}^{+}D_{s}^{+})$ ($\times10^{2}$~fb)} \\
\hline
\multicolumn{1}{c}{\textbf{$M_{X_{cc\bar{s}\bar{s}}}$ (MeV/$c^{2}$)}} &\multicolumn{7}{c}{\textbf{$\Gamma_{X_{cc\bar{s}\bar{s}}}$ (MeV)}} \\
& \multicolumn{1}{c}{\textbf{$0.54$}} & \multicolumn{1}{c}{\textbf{$1.54$}} & \multicolumn{1}{c}{\textbf{$2.54$}} & \multicolumn{1}{c}{\textbf{$3.54$}} & \multicolumn{1}{c}{\textbf{$4.54$}} & \multicolumn{1}{c}{\textbf{$5.54$}} & \multicolumn{1}{c}{\textbf{$6.54$}} \\
\hline
$4882$ & 4.8/2.5/6.0 & 5.0/2.6/6.6 & 5.1/3.2/6.7 & 4.1/3.2/10.3 & 4.1/4.1/10.9 & 3.9/4.8/10.4 & 5.7/5.1/10.6\\
$4887$ & 1.9/3.4/4.0 & 2.0/3.5/5.4 & 4.2/3.6/5.5 & 4.1/3.8/5.6 & 6.2/4.3/5.9 & 8.0/4.6/6.8 & 8.0/4.5/7.6\\
$4892$ & 6.4/3.1/4.0 & 6.5/3.4/4.2 & 6.7/3.4/5.1 & 7.0/3.9/5.0 & 6.1/4.0/5.1 & 6.2/4.3/6.1 & 6.1/5.1/5.9\\
$4897$ & 5.9/1.9/2.7 & 6.1/2.6/3.8 & 6.0/3.3/3.9 & 6.2/3.7/5.0 & 6.1/3.7/6.3 & 6.2/4.2/6.0 & 7.2/4.8/7.3\\
$4902$ & 6.0/1.9/4.0 & 6.1/1.8/3.8 & 6.1/2.3/5.1 & 6.3/2.9/5.0 & 6.1/2.9/5.1 & 6.2/3.7/6.1 & 6.2/3.8/6.2\\
$4907$ & 2.6/1.8/5.1 & 4.9/1.8/5.3 & 5.1/1.8/5.1 & 5.2/1.9/5.0 & 7.1/2.3/5.1 & 6.2/2.8/4.7 & 6.1/2.9/7.5\\
$4912$ & 2.6/1.6/4.0 & 2.6/1.6/4.1 & 2.7/1.6/6.6 & 2.8/1.6/6.7 & 2.9/1.9/7.8 & 5.4/2.5/7.3 & 5.4/3.0/9.6\\
$4917$ & 2.6/1.2/5.2 & 2.6/1.6/6.6 & 2.7/1.6/9.0 & 2.8/2.2/9.1 & 5.4/2.2/9.0 & 5.4/3.2/8.6 & 5.4/3.2/8.9\\
$4922$ & 4.9/1.1/6.2 & 5.0/1.2/6.5 & 5.2/1.8/6.6 & 5.4/2.3/7.9 & 5.4/2.7/8.3 & 5.5/2.9/9.0 & 5.5/3.1/9.2\\
\hline
\hline
\end{tabular}
\end{center}
\end{table*}
\begin{figure*}[htbp]
\begin{center}
\includegraphics[height=5.4cm,width=5.5cm]{fig7a.eps}
\includegraphics[height=5.4cm,width=5.5cm]{fig7b.eps}
\\
\includegraphics[height=5.4cm,width=5.5cm]{fig7c.eps}
\includegraphics[height=5.4cm,width=5.5cm]{fig7d.eps}
\includegraphics[height=5.4cm,width=5.5cm]{fig7e.eps}
\caption{The 90\% C.L. upper limits on the product branching fractions of $\Upsilon(1S,2S) \to X_{cc\bar{s}\bar{s}}(\to D_{s}^{*+}D_{s}^{*+}) + anything$ and the Born cross sections of $e^+e^- \to X_{cc\bar{s}\bar{s}} + anything$ at $\sqrt{s}$ = 10.52~GeV, 10.58~GeV, and 10.867~GeV with $M_{X_{cc\bar{s}\bar{s}}}$ varying from 4801~MeV/$c^2$ to 4841~MeV/$c^2$ in steps of 5~MeV/$c^2$ and $\Gamma_{X_{cc\bar{s}\bar{s}}}$ varying from 2.58~MeV to 8.58~MeV in steps of 1.0~MeV.}\label{DsDs_Xup}
\end{center}
\end{figure*}
\begin{table*}[htpb]\scriptsize
\begin{center}
\caption{\label{tab:Dstar Dstar_Xup_part1} Summary of 90\% C.L. upper limits with the systematic uncertainties included on the product branching fractions of $\Upsilon(1S) \to X_{cc\bar{s}\bar{s}} (\to D_{s}^{*+}D_{s}^{*+}) + anything$ / $\Upsilon(2S) \to X_{cc\bar{s}\bar{s}} (\to D_{s}^{*+}D_{s}^{*+}) + anything$.}
\renewcommand\arraystretch{1.3}
\begin{tabular}{cccccccc}
\hline
\hline
\multicolumn{8}{c}{${\cal B}(\Upsilon(1S) / \Upsilon(2S) \to X_{cc\bar{s}\bar{s}} + anything) \times {\cal B}(X_{cc\bar{s}\bar{s}} \to D_{s}^{*+}D_{s}^{*+})$ ($\times10^{-4}$)} \\
\hline
\multicolumn{1}{c}{\textbf{$M_{X_{cc\bar{s}\bar{s}}}$ (MeV/$c^{2}$)}} &\multicolumn{7}{c}{\textbf{$\Gamma_{X_{cc\bar{s}\bar{s}}}$ (MeV)}} \\
& \multicolumn{1}{c}{\textbf{$2.58$}} & \multicolumn{1}{c}{\textbf{$3.58$}} & \multicolumn{1}{c}{\textbf{$4.58$}} & \multicolumn{1}{c}{\textbf{$5.58$}} & \multicolumn{1}{c}{\textbf{$6.58$}} & \multicolumn{1}{c}{\textbf{$7.58$}} & \multicolumn{1}{c}{\textbf{$8.58$}} \\
\hline
$4801$ & 7.5/6.0 & 7.9/7.1 & 7.9/5.7 & 7.6/6.1 & 7.4/6.1 & 7.6/4.4 & 7.7/4.8\\
$4806$ & 7.6/6.1 & 8.1/7.3 & 8.1/5.8 & 7.8/6.2 & 7.5/6.2 & 7.8/6.1 & 7.9/4.9\\
$4811$ & 7.8/6.2 & 8.3/7.4 & 8.2/5.9 & 7.9/6.3 & 7.7/6.3 & 7.9/6.2 & 8.0/6.8\\
$4816$ & 7.6/6.3 & 8.1/7.5 & 8.1/6.0 & 7.8/6.4 & 7.6/6.4 & 7.8/6.3 & 7.9/6.9\\
$4821$ & 7.5/6.3 & 8.0/7.6 & 7.9/6.0 & 7.7/6.5 & 7.4/6.5 & 7.7/6.4 & 7.8/7.0\\
$4826$ & 7.4/6.3 & 7.8/7.5 & 7.8/6.0 & 7.6/6.4 & 7.3/6.4 & 7.6/4.7 & 7.6/5.1\\
$4831$ & 7.3/6.2 & 7.7/7.4 & 7.7/5.9 & 7.5/4.7 & 7.2/4.7 & 7.5/6.2 & 7.5/6.8\\
$4836$ & 7.5/6.2 & 7.9/7.5 & 7.9/5.9 & 7.6/6.4 & 7.4/6.4 & 7.6/4.6 & 7.7/5.1\\
$4841$ & 7.6/6.3 & 8.1/7.6 & 8.1/4.4 & 7.8/4.8 & 7.6/4.8 & 7.8/4.7 & 7.9/5.1\\
\hline
\hline
\end{tabular}
\end{center}
\end{table*}
\begin{table*}[htpb]\scriptsize
\begin{center}
\caption{\label{tab:Dstar Dstar_Xup_part2} Summary of 90\% C.L. upper limits with the systematic uncertainties included on the cross sections of $e^+e^- \to X_{cc\bar{s}\bar{s}} (\to D_{s}^{*+}D_{s}^{*+}) + anything$ at $\sqrt{s}$ = 10.52~GeV / 10.58~GeV / 10.867~GeV. }
\renewcommand\arraystretch{1.3}
\begin{tabular}{cccccccc}
\hline
\hline
\multicolumn{8}{c}{$\sigma(e^+e^- \to X_{cc\bar{s}\bar{s}} + anything) \times {\cal B}(X_{cc\bar{s}\bar{s}} \to D_{s}^{*+}D_{s}^{*+})$ ($\times10^{2}$~fb)} \\
\hline
\multicolumn{1}{c}{\textbf{$M_{X_{cc\bar{s}\bar{s}}}$ (MeV/$c^{2}$)}} &\multicolumn{7}{c}{\textbf{$\Gamma_{X_{cc\bar{s}\bar{s}}}$ (MeV)}} \\
& \multicolumn{1}{c}{\textbf{$2.58$}} & \multicolumn{1}{c}{\textbf{$3.58$}} & \multicolumn{1}{c}{\textbf{$4.58$}} & \multicolumn{1}{c}{\textbf{$5.58$}} & \multicolumn{1}{c}{\textbf{$6.58$}} & \multicolumn{1}{c}{\textbf{$7.58$}} & \multicolumn{1}{c}{\textbf{$8.58$}} \\
\hline
$4801$ & 14.5/8.5/16.2 & 14.1/8.5/20.7 & 13.4/10.8/20.4 & 14.1/10.7/23.7 & 10.3/10.9/23.8 & 11.5/11.7/23.2 & 11.1/12.7/24.1\\
$4806$ & 14.5/6.1/21.2 & 14.2/6.2/18.3 & 13.5/8.3/17.3 & 14.1/7.7/18.2 & 14.0/10.4/18.3 & 15.5/12.6/17.8 & 11.1/13.6/23.7\\
$4811$ & 14.5/3.8/21.0 & 14.2/6.3/20.2 & 13.5/7.8/19.9 & 14.1/7.8/20.9 & 26.2/10.4/23.2 & 23.7/12.7/22.6 & 23.0/13.9/23.4\\
$4816$ & 14.1/4.7/16.3 & 13.8/6.8/20.8 & 24.6/6.6/26.3 & 25.8/9.1/27.6 & 25.6/9.5/27.8 & 28.3/12.4/23.3 & 27.5/13.0/24.2\\
$4821$ & 25.8/6.7/16.9 & 25.2/7.5/16.2 & 24.1/7.5/21.2 & 25.1/9.0/22.3 & 24.9/9.2/19.3 & 27.6/9.5/30.2 & 26.8/11.0/31.4\\
$4826$ & 26.4/8.6/16.4 & 25.8/9.3/15.8 & 24.6/9.1/15.6 & 25.7/9.1/18.6 & 25.5/10.2/18.7 & 28.3/11.2/23.4 & 27.5/11.4/24.3\\
$4831$ & 27.1/7.0/21.1 & 26.5/8.6/20.3 & 25.2/11.0/20.1 & 26.4/11.2/21.0 & 34.7/11.5/23.4 & 38.5/12.0/22.8 & 37.4/12.5/23.6\\
$4836$ & 13.8/6.6/16.2 & 13.5/7.5/15.6 & 32.0/9.7/23.3 & 33.4/9.4/23.7 & 33.1/9.6/23.8 & 36.6/12.2/23.2 & 35.6/13.8/24.1\\
$4841$ & 24.7/6.9/21.9 & 24.2/6.7/18.1 & 23.1/7.2/17.9 & 24.1/8.9/18.8 & 23.9/9.9/24.3 & 34.9/12.0/29.6 & 34.0/13.4/30.8\\
\hline
\hline
\end{tabular}
\end{center}
\end{table*}
\section{\boldmath conclusion}
Using the data samples of 102 million $\Upsilon(1S)$ events, 158 million $\Upsilon(2S)$ events,
and data samples at $\sqrt{s}$ = 10.52~GeV, 10.58~GeV, and 10.867~GeV corresponding to integrated luminosities
of 89.5~fb$^{-1}$, 711.0~fb$^{-1}$, and 121.4~fb$^{-1}$, respectively, we search for the double-heavy
tetraquark states $X_{cc\bar{s}\bar{s}}$ in the processes of $\Upsilon(1S,2S) \to D_{s}^{+}D_{s}^{+}(D_{s}^{*+}D_{s}^{*+}) + anything$ and
$e^+e^- \to D_{s}^{+}D_{s}^{+}(D_{s}^{*+}D_{s}^{*+}) + anything$ at $\sqrt{s}$ = 10.52~GeV, 10.58~GeV, and 10.867 GeV.
No peaking structures are observed in the $M_{D_{s}^{+}D_{s}^{+}}$ and $M_{D_{s}^{*+}D_{s}^{*+}}$ distributions from data.
The 90\% C.L. upper limits on the product branching fractions in $\Upsilon(1S,2S)$ inclusive decays
[${\cal B}(\Upsilon(1S,2S) \to X_{cc\bar{s}\bar{s}} + anything) \times {\cal B}(X_{cc\bar{s}\bar{s}} \to D_{s}^{+}D_{s}^{+}(D_{s}^{*+}D_{s}^{*+}))$] and
the product values of Born cross section and branching fraction for $e^+e^- \to X_{cc\bar{s}\bar{s}} + anything$
[$\sigma(e^+e^- \to X_{cc\bar{s}\bar{s}} + anything) \times {\cal B}(X_{cc\bar{s}\bar{s}} \to D_{s}^{+}D_{s}^{+}(D_{s}^{*+}D_{s}^{*+}))$] at $\sqrt{s}$
= 10.52~GeV, 10.58~GeV, and 10.867~GeV as functions of various assumed $X_{cc\bar{s}\bar{s}}$ masses and widths are determined.
\section*{\boldmath ACKNOWLEDGMENTS}
We thank the KEKB group for the excellent operation of the
accelerator; the KEK cryogenics group for the efficient
operation of the solenoid; and the KEK computer group, and the Pacific Northwest National
Laboratory (PNNL) Environmental Molecular Sciences Laboratory (EMSL)
computing group for strong computing support; and the National
Institute of Informatics, and Science Information NETwork 5 (SINET5) for
valuable network support. We acknowledge support from
the Ministry of Education, Culture, Sports, Science, and
Technology (MEXT) of Japan, the Japan Society for the
Promotion of Science (JSPS), and the Tau-Lepton Physics
Research Center of Nagoya University;
the Australian Research Council including grants
DP180102629,
DP170102389,
DP170102204,
DP150103061,
FT130100303;
Austrian Federal Ministry of Education, Science and Research (FWF) and
FWF Austrian Science Fund No.~P~31361-N36;
the National Natural Science Foundation of China under Contracts
No.~11675166,
No.~11705209,
No.~11975076,
No.~12135005,
No.~12175041,
and No.~12161141008;
Key Research Program of Frontier Sciences, Chinese Academy of Sciences (CAS), Grant No.~QYZDJ-SSW-SLH011;
the Shanghai Science and Technology Committee (STCSM) under Grant No.~19ZR1403000;
the Ministry of Education, Youth and Sports of the Czech
Republic under Contract No.~LTT17020;
Horizon 2020 ERC Advanced Grant No.~884719 and ERC Starting Grant No.~947006 ``InterLeptons'' (European Union);
the Carl Zeiss Foundation, the Deutsche Forschungsgemeinschaft, the
Excellence Cluster Universe, and the VolkswagenStiftung;
the Department of Atomic Energy (Project Identification No. RTI 4002) and the Department of Science and Technology of India;
the Istituto Nazionale di Fisica Nucleare of Italy;
National Research Foundation (NRF) of Korea Grant
Nos.~2016R1\-D1A1B\-01010135, 2016R1\-D1A1B\-02012900, 2018R1\-A2B\-3003643,
2018R1\-A6A1A\-06024970, 2019K1\-A3A7A\-09033840,
2019R1\-I1A3A\-01058933, 2021R1\-A6A1A\-03043957,
2021R1\-F1A\-1060423, 2021R1\-F1A\-1064008;
Radiation Science Research Institute, Foreign Large-size Research Facility Application Supporting project, the Global Science Experimental Data Hub Center of the Korea Institute of Science and Technology Information and KREONET/GLORIAD;
the Polish Ministry of Science and Higher Education and
the National Science Center;
the Ministry of Science and Higher Education of the Russian Federation, Agreement 14.W03.31.0026,
and the HSE University Basic Research Program, Moscow;
University of Tabuk research grants
S-1440-0321, S-0256-1438, and S-0280-1439 (Saudi Arabia);
the Slovenian Research Agency Grant Nos. J1-9124 and P1-0135;
Ikerbasque, Basque Foundation for Science, Spain;
the Swiss National Science Foundation;
the Ministry of Education and the Ministry of Science and Technology of Taiwan;
and the United States Department of Energy and the National Science Foundation.
\section{Introduction}
This paper complements and generalizes the results obtained in
\cite{alessio} on the microscopic foundations of the
optics of materials. The main new result of that paper was a proof
of the existence of polaritons in ionic crystals,
that was obtained by calculating the normal modes and
exhibiting the explicit form of the dispersion curves.
Apparently, the existence
of polaritons, whose qualitative importance is evident since it explains
why crystals are transparent to visible light, was
previously understood only at a phenomenological level,
in terms of a macroscopic
polarization field (see for example \cite{gpp}, page 239).
An interesting point is that the microscopic proof was
obtained in \cite{alessio} in a completely classical framework,
on the basis of
the system of Newton's equations for each charge,
in which the full electrodynamic forces are taken into account,
both the mutual retarded ones and the individual radiation reaction
forces. For example it is just retardation that makes the appearance
of the new polaritonic branches possible, and
Born and Huang \cite{bh} could not obtain this result precisely because
they did not fully take the role of retardation into account.
The result
was obtained in \cite{alessio} by previously reducing
the original electrodynamic model to a Hamiltonian conservative
one. This in turn was made possible by exploiting two
global properties of the original microscopic electrodynamic system, namely,
the Wheeler-Feynman identity \cite{wf} and the Ewald--Oseen resummation of
the far fields (see \cite{ewald}\cite{oseen2} and \cite{bw}, page 101),
which, jointly used, provide both a cancellation of the
radiation reaction force acting on each charge, and an elimination
of the problems related to delay. Both properties
were proven in \cite{alessio} (following \cite{cg} and \cite{mcg}),
for the case of ionic crystals.
It is then natural to ask whether such a result concerning the
dispersion curves may be complemented by providing a microscopic
expression for the electric susceptibility of the system, which
would allow one to determine the expected
absorption and emission spectra. Moreover one might also look for
an extension of the methods,
formulating the theory in such a general frame that it can apply to
disordered dielectric systems such as glasses, or even liquids or gases.
In the present paper we show how a microscopic expression for
susceptibility is obtained for ordered systems, and how the
result can be extended,
at least partly, to cover the case of disordered systems.
Indeed we will show how, if
the two mentioned global properties hold (so that the system can be
reduced to a conservative Hamiltonian one), then
the statistical mechanical methods of
Green--Kubo type \cite{gc}\cite{gc2} can be used to provide a microscopic
expression for macroscopic polarization, and so for susceptibility.
In particular, it will be explicitly exhibited that
the phenomena of absorption and
emission are not related, at least in any direct way, to the
radiation reaction force, and can in fact be understood as symmetrical
features of a time reversible dynamics.
In order to obtain such results, we have to overcome a
difficulty which arises if one tries to imitate in a strict way the
Green--Kubo type methods generally used in the quantum case.
Indeed, the available procedure makes use, in an apparently
essential way, of the Gibbs measure in phase space, whereas Gibbs' measure
does not even exist in the classical case, due to the divergence
induced by the attractive Coulomb potentials. We however show how
susceptibility can actually be proven to exist, obtaining for it an
expression in terms of time correlations. Then we
study its properties, and in particular deduce the $f$--sum rule,
the essentially
classical character of which was already pointed out by Van Vleck and
Huber \cite{vanpaper}.
The existence of susceptibility and its general expression are
completely independent of the qualitative nature of the motions of the system.
It is instead the form of the spectrum that depends on the
stability properties of the motions. We show how
a pure line spectrum occurs for stable (almost periodic) motions
of the system, while a broadening of the lines or even a
continuous spectrum occur when chaoticity sets in. We also
discuss the relevance that some quite recent
results in the theory of dynamical systems have in this connection, in particular the results
that made it possible to extend to systems of interest
for statistical mechanics the methods of perturbation
theory \cite{andrea}\cite{maiocchi}\cite{fpu}, which allow
one to estimate when a
transition from ordered to chaotic motions should occur
(see the numerical works \cite{cggp}\cite{plasmi2}).
For what concerns the extension to disordered systems, all depends on
proving the two mentioned global electrodynamic properties.
For the Wheeler-Feynman identity, we do here more than required,
because we give a proof
which applies to completely general
systems, and not just to dielectrics.
In fact, the identity is shown to be equivalent to
a general form of causality of
electrodynamics, which is reminiscent of a general property
assumed in quantum
electrodynamics. The properties related to the Ewald
resummation methods are instead assumed to hold for dielectrics,
just by analogy with the case of ordered systems.
In section \ref{2} it is recalled how a first step in passing
from microscopic to
macroscopic electromagnetism consists in performing a local
space--average.
Our treatment is standard, apart from a minor point.
In section \ref{3} the second step is performed, which involves a
phase space (or ensemble) average, and leads to a
Green--Kubo type formula for macroscopic susceptibility,
in a completely symmetrical
way for absorption and emission. The proof
is obtained without using the Gibbs measure.
Preliminarily, it is recalled how the reduction to a
conservative Hamiltonian system is obtained through the Ewald
resummation methods, making use of the
Wheeler--Feynman identity.
In section \ref{4} the analyticity properties of susceptibility are
recalled, and the $f$--sum rule is proven. In section \ref{5} it is
shown how under quite general conditions susceptibility is
expressed in terms of equilibrium time--correlation functions between
positions and velocities of the charges.
In section \ref{6} it is discussed how the spectrum depends on the
qualitative stability properties of the motions of the system, and in
particular how a pure spectrum arises in the presence of suitable
stability properties (almost periodicity) of the
orbits. Instead, a broadening of the lines, or even a continuous spectrum
are expected to occur as chaoticity sets in. In section \ref{7}
this is illustrated by studying the particular case of
ionic crystals.
Some final comments are added in Section \ref{8}.
An Appendix is devoted to a proof of the
Wheeler--Feynman identity (and of the consequent cancellation of the
radiation reaction forces), which applies in a completely general
situation, irrespective of the ordered or disordered structure of the
system.
\section{From microscopic to
macroscopic electromagnetism. First step:
local space--averages and the microscopic polarization field}\label{2}
As we know,
\cite{drude}\cite{lorentz}\cite{born}\cite{vanbook}\cite{degroot}\cite{degroot2}
macroscopic electromagnetism is characterized by four fields:
the electric field $ {\boldsymbol{\mathcal{E}}} $,
the magnetic induction field $ {\boldsymbol{\mathcal{B}}} $, the electric induction
field $\boldsymbol{\mathcal{D}}$
and the magnetic field $\boldsymbol{\mathcal{H}}$.
Since the times of Lorentz, the first two
are thought of as local space--averages of corresponding microscopic fields
$\vett{E}$, $\vett{B}$, while the latter ones are defined as
$\boldsymbol{\mathcal{D}}= {\boldsymbol{\mathcal{E}}} +4\pi {\boldsymbol{\mathcal{P}}} $ and $\boldsymbol{\mathcal{H}}
= {\boldsymbol{\mathcal{B}}} -4\pi {\boldsymbol{\mathcal{M}}} $, where the polarization vector $ {\boldsymbol{\mathcal{P}}} $ and
the magnetization vector $ {\boldsymbol{\mathcal{M}}} $ are the response
of a material body to the presence of an external electric or
magnetic field. In the macroscopic treatments one assumes
that there hold the constitutive relations
$\boldsymbol{\mathcal{D}}=\varepsilon {\boldsymbol{\mathcal{E}}} $ and
$\boldsymbol{\mathcal{H}}=\mu {\boldsymbol{\mathcal{B}}} $,
or rather that analogous relations hold frequency by frequency,
i.e., that one has
$$
\hat{\boldsymbol{\mathcal{D}}} (\vett x,\omega)=\varepsilon(\omega)\hat {\boldsymbol{\mathcal{E}}} (\vett
x,\omega) \ , \quad
\hat{\boldsymbol{\mathcal{H}}} (\vett x,\omega)= \mu(\omega)
\hat {\boldsymbol{\mathcal{B}}} (\vett x,\omega) \ ,
$$
where $\hat {\boldsymbol{\mathcal{E}}} $, $\hat{\boldsymbol{\mathcal{D}}}$, $\hat {\boldsymbol{\mathcal{B}}} $ and
$\hat{\boldsymbol{\mathcal{H}}}$, are the time Fourier transforms of
the corresponding fields. In this section we recall how, in order to obtain
a macroscopic expression for polarization, a first step
is accomplished through a local space--averaging
procedure. This is a completely standard passage, and only a minor modification
to the familiar procedure will be introduced.
Consider a dielectric body, thought of as
microscopically constituted of a certain number $N$ of
neutral molecules or atoms, each containing a stable aggregate of point
charges. In such a case the microscopic Maxwell equations
read
\begin{equation*}
\begin{split}
\Div \vett E & = 4\pi \sum_{k=1}^N \sum_{j=0}^{n_k}
e_j\delta(\vett x-\vett {x}_{j,k}) \\
\Rot \vett E & = -\, \frac {1}c\partial_t \vett B \\
\Div \vett B & = 0 \\
\Rot \vett B & = \frac {4\pi}c \sum_{k=1}^N \sum_{j=0}^{n_k} e_j
{\dot{\vett x}}_{j,k} \delta(\vett x - \vett{x}_{j,k}) +
\frac {1}c\partial_t \vett E \ ,
\end{split}
\end{equation*}
where $\vett{x}_{j,k}$ is the position of the $j$--th particle (of charge
$e_j$) in the $k$--th molecule or atom.
\subsection*{The local space--averaging procedure. Space--averaged fields
and sources}
Now, following Lorentz, the
macroscopic fields $ {\boldsymbol{\mathcal{E}}} $ and
$ {\boldsymbol{\mathcal{B}}} $ are defined as local space--averages
of the values the microscopic fields take in
what is sometimes called
a ``physically infinitesimal domain''\cite{degroot}, or a ``physically
small volume element'' \cite{kirk}, of volume $\Delta V$
located about the considered
point $\vett x$. Think for example of a cubic volume element with a side
of $100$~\AA{}, which, in a solid or in a liquid, under ordinary
conditions contains about one million molecules.
Due to the linearity of the Maxwell equations, the
space--averaged fields are expected to be solutions of those
same equations, having as sources
the averaged charge and current densities.
This becomes a rather simple theorem if
the space--averaging procedure at $\vett x$ is mathematically implemented
through a convolution with a suitable smooth
($C^\infty$ class) function $N(\cdot )$ centered at $\vett x$, which
essentially vanishes outside the chosen volume element, while
having inside it essentially a constant normalizing
value, namely, $1/\Delta V $.
The macroscopic fields are thus defined as
\begin{equation*}
\begin{split}
{\boldsymbol{\mathcal{E}}} (\vett x,t) &= N\ast \vett E\;(\vett x,t) \stackrel {\mathrm{def}}{=}
\int_{\mathbb{R}^3} \mathrm{d}\vett y N(\vett x -\vett y)\vett E(\vett y,t) \\
{\boldsymbol{\mathcal{B}}} (\vett x,t) &= N\ast \vett B\;(\vett x,t) \stackrel {\mathrm{def}}{=}
\int_{\mathbb{R}^3} \mathrm{d}\vett y N(\vett x -\vett y)\vett B(\vett y,t) \ .
\end{split}
\end{equation*}
As the microscopic fields are distributions
(because $\delta$ functions occur in the sources), it turns out
that the differential operators commute with the
convolution, i.e., one has
\begin{equation*}
\begin{split}
\Div {\boldsymbol{\mathcal{E}}} &= N \ast \Div \vett E \ , \quad
\Rot {\boldsymbol{\mathcal{E}}} = N \ast \Rot \vett E \\
\Div {\boldsymbol{\mathcal{B}}} &= N \ast \Div \vett B \ , \quad
\Rot {\boldsymbol{\mathcal{B}}} = N \ast \Rot \vett B \ ,
\end{split}
\end{equation*}
exactly as it would occur if the fields were smooth.
Thus,
multiplying the Maxwell equations by $N(\vett x-\vett y)$ and
integrating, due to the linearity of the equations
the macroscopic fields are found, as expected, to satisfy the
Maxwell equations
with charge density $\rho$ and current density
$\vett j(\vett x, t)$ which now are smooth fields rather than
distributions, and are obtained by averaging with the same procedure.
So the macroscopic fields satisfy the equations
\begin{equation*}
\begin{split}
\Div {\boldsymbol{\mathcal{E}}} &= 4\pi \rho \\
\Rot {\boldsymbol{\mathcal{E}}} &= - \frac {1}c\partial_t {\boldsymbol{\mathcal{B}}} \\
\Div {\boldsymbol{\mathcal{B}}} &= 0 \\
\Rot {\boldsymbol{\mathcal{B}}} &= \frac {4\pi}c \vett j
+ \frac {1}c\partial_t {\boldsymbol{\mathcal{E}}} \ .
\end{split}
\end{equation*}
which involve the space--averaged sources
\begin{equation}\label{eq:carica}
\rho (\vett x,t) \stackrel {\mathrm{def}}{=} \sum_{k=1}^N \sum_{j=0}^{n_k} e_j N(\vett x -
\vett x_{j,k})
\end{equation}
\begin{equation}\label{eq:corrente}
\vett j(\vett x, t) \stackrel {\mathrm{def}}{=} \sum_{k=1}^N \sum_{j=0}^{n_k} e_j
\dot{\vett x}_{j,k} N(\vett x-\vett x_{j,k})\ .
\end{equation}
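As a purely illustrative aside (not part of the original treatment),
the following one--dimensional Python sketch shows the effect of the
convolution with a normalized smooth kernel: a sum of point charges
becomes a smooth averaged density, and the total (here vanishing)
charge is preserved. The Gaussian--like kernel and all numerical
values are arbitrary stand--ins for $N(\cdot)$ and for the actual
microscopic data.
\begin{verbatim}
import numpy as np

# 1D caricature of the local space-average: point charges become a
# smooth density under convolution with a normalized kernel N, and
# the total (here vanishing) charge is preserved.
L = 1.0                                  # "physically small" length
x = np.linspace(-10.0, 10.0, 4001)

def N(u):
    # smooth normalized bump, standing in for the averaging function
    return np.exp(-u**2 / (2 * L**2)) / (np.sqrt(2 * np.pi) * L)

e = 1.0
charges = [(+e, 0.3), (-e, -0.3), (+e, 5.2), (-e, 4.9)]

rho = np.zeros_like(x)
for q, x0 in charges:
    rho += q * N(x - x0)        # delta(x - x0) convolved with N

print(np.trapz(rho, x))         # ~ 0: neutrality is preserved
\end{verbatim}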
\subsection*{The microscopic polarization field}
We now show how the space--averaged charge density $\rho$ can be
written as the divergence of a field, which should be interpreted as a still
microscopic form of the polarization field. This is obtained by expanding
the positions of the charges entering the function $N(\cdot)$,
about the centers of mass of
their molecules or atoms.
This makes the single microscopic dipoles come in.
Denote by $\vett{x}_k^0$ the position of the center of mass of the $k$--th
molecule or atom,
\footnote{In the case of crystals
the formulas are simplified if one even thinks of $\vett{x}_k^0$
as a fixed position of a cell, for example a given corner.}
and by $\vett{q}_{j,k} \stackrel {\mathrm{def}}{=} \vett x_{j,k}-\vett{x}_k^0$ the corresponding displacements
(which are assumed to be bounded)
of the charges. We have now to find
which expression the space--averaged charge density $\rho$ takes, as a
function of the displacements $\vett{q}_{j,k} $. Here the familiar
procedure consists in introducing a multipole expansion and a
truncation, through which $\rho$ is shown to be the divergence of a
vector field.
We obtain this result, perhaps in a simpler and
more rigorous way, by
making use of the finite--increment Lagrange
formula, according to which for a smooth function $f$ one has
$$
f(\vett x+\vett h)- f(\vett x)=\int_0^1d\zeta\, \frac{d~}{d\zeta}
f(\vett x+\zeta \vett h)\ .
$$
Indeed one then has
\begin{equation*}
\begin{split}
N(\vett x & -\vett x_{j,k}) = N(\vett x-\vett{x}_k^0) + \int_0^1 \mathrm{d} \zeta
\frac{d~}{d\zeta}
\, N(\vett x-\vett{x}_k^0 - \zeta \vett{q}_{j,k} ) \\
& = N(\vett x-\vett{x}_k^0) - \int_0^1 \mathrm{d} \zeta \; \vett{q}_{j,k} \cdot \nabla N(\vett x-\vett{x}_k^0 -
\zeta \vett{q}_{j,k} ) \\
& = N(\vett x-\vett{x}_k^0) - \Div \Big( \vett{q}_{j,k} \int_0^1 \mathrm{d}
\zeta \, N(\vett x-\vett{x}_k^0 - \zeta \vett{q}_{j,k} ) \Big)\ .
\end{split}
\end{equation*}
Thus, substituting this formula in the expression (\ref{eq:carica}) for the
space--averaged charge density $\rho$, and
recalling that the
molecules are neutral so that
$$
\sum_{j=0}^{n_k} e_j N(\vett x-\vett{x}_k^0) = 0 \ ,
$$
one finds
\begin{equation}\label{divergenza}
\rho = - \Div \vett P \ ,
\end{equation}
where the field $\vett P$ is given by
\begin{equation}\label{eq:defpol}
\vett P(\vett x) \stackrel {\mathrm{def}}{=} \sum_{k=1}^N \sum_{j=0}^{n_k} e_j \Big( \vett{q}_{j,k} \int_0^1 \mathrm{d}
\zeta \, N(\vett x-\vett{x}_k^0 - \zeta \vett{q}_{j,k} ) \Big)\ .
\end{equation}
Without much error this can be written
in the simplified form
\begin{equation}\label{eq:polarizzazione}
\vett P(\vett x) = \frac 1{\Delta V} \sum_{\vett{x}_k^0\in\Delta V } \sum_{j=0}^{n_k} e_j
\vett{q}_{j,k} \ ,
\end{equation}
i.e., as the sum of the dipole moments of the single molecules or atoms with
respect to their centers of mass, as one might have expected.
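That the simplified expression behaves as a well--defined macroscopic
quantity can be illustrated numerically. In the following Python
sketch (all numbers invented: one million neutral two--charge
``molecules'' with small random displacements plus a tiny systematic
offset) the dipole sum fluctuates little about its mean value, in
agreement with the central--limit remark made below.
\begin{verbatim}
import numpy as np

# Toy volume element: n_mol neutral "molecules", each a +e/-e pair
# with small random displacements plus a tiny systematic offset.
rng = np.random.default_rng(0)
e, n_mol, dV = 1.0, 10**6, 1.0

q_plus  = rng.normal(0.0, 1e-3, size=(n_mol, 3)) \
          + np.array([1e-4, 0.0, 0.0])
q_minus = rng.normal(0.0, 1e-3, size=(n_mol, 3))

# sum of the molecular dipole moments over the volume element
P = (e * q_plus - e * q_minus).sum(axis=0) / dV
print(P)   # ~ (100, 0, 0): mean n_mol*e*1e-4, small fluctuations
\end{verbatim}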
On the other hand we
know that, in a dielectric,
the macroscopic charge density is expressed as the divergence of
polarization.
So one might be tempted to altogether
identify $\vett P$ with the macroscopic polarization $ {\boldsymbol{\mathcal{P}}} $
itself. This however is not correct. The reason is that the field
$\vett P(\vett x)$ still is a dynamical variable, by which we mean a
function defined on the global ``mechanical phase space'' of the
charges, a point of which, call it $z$, is identified through
the positions and the momenta of all
charges. Now, $\vett P(\vett x)$ evidently depends on the phase point,
and thus
may be called the \emph{microscopic polarization field }.
The microscopic magnetization field could be given
along similar lines. However we don't need it for our aims, because
with good approximation in dielectrics one can put $\mu=1$, unless one
is just interested in magneto--optical phenomena.
\subsection*{Need for an ensemble average}
As usual in statistical mechanics, a macroscopic quantity is
defined as the average over phase space of a microscopic quantity
(a function of $z$), with respect to a given measure.
Denoting such an averaging in the mechanical phase space
by $\langle\cdot \rangle$,
the macroscopic polarization field will then be defined by
$$
{\boldsymbol{\mathcal{P}}} (\vett x) = \langle \vett P(\vett x)\rangle \ ,
$$
i.e., by
\begin{equation*}
{\boldsymbol{\mathcal{P}}} (\vett x) = \frac 1{\Delta V} \big\langle
\sum_{\vett{x}_k^0\in\Delta V } \sum_{j=0}^{n_k} e_j \vett{q}_{j,k} \big\rangle \ .
\end{equation*}
Now, the microscopic polarization, being itself a space--mean over many
molecules, should already satisfy some central limit theorem and so should not
fluctuate very much as the phase space point $z$ is varied.
In such a case the ensemble average just provides a ``typical value'',
so that the use of a further ensemble average
may appear to be redundant. This is not so, because it is
just by performing ensemble averages that analytical
manipulations can be performed which lead to significant results.
One such result, as we will see,
is the existence itself of electric susceptibility, namely, the fact that
polarization responds linearly to an external perturbation even if the
unperturbed system presents highly nonlinear motions. This is obtained
by Green-Kubo methods in phase space, just because of the
linearity of the equation of motion for the probability density.
A further result is the proof
of the $f$--sum rule.
However, it is not at all obvious
how phase space methods can be used in a microscopic model
which involves both retarded forces
and dissipative ones. How to do this, and how to use Hamiltonian
techniques in phase space will be shown in the next section.
\section{Ensemble average and Green--Kubo theorem for
polarization. Role of the Wheeler--Feynman identity and of the
Ewald resummation methods}\label{3}
\subsection*{Reduction to the mechanical phase space
(Wheeler--Feynman and Ewald--Oseen)}
The reduction of the original electrodynamic problem to a purely
mechanical one in the mechanical phase space is quite hard a task.
First of all, the original problem is different from those usually studied
in statistical mechanics because, due to the finite
propagation speed of the
electromagnetic interactions among the charges, the equations of motion for the
displacements $\vett{q}_{j,k} $ of the charges
turn out to be differential equations with delay. Notice that the delay
cannot be neglected, as it
produces qualitatively essential features. For example, in the case of
ionic crystals it is just retardation that produces the two new
branches of the
dispersion relation which correspond to polaritons (see formula (15)
of \cite{alessio}),
thus explaining why visible light can propagate inside them.
Thus, in the original electrodynamic problem, having to deal with
equations with delay, we know nothing about the properties of the
corresponding dynamical system,
not even how to correctly frame a Cauchy problem. Neither do we
know which is the phase space suited to the system,
nor can we know which measure should be used to define the averages.
Finally, the system is not a conservative one, at least not in any obvious
way, inasmuch as the charges should radiate energy away during their
necessarily accelerated motions.
From a heuristic point of view such problems can be overcome in the
following way. Due to the long range character of the field produced by any
single charge (a range much longer than the purely Coulomb one), in order to
determine the force acting on any charge and produced by all the other
ones, one necessarily has to perform a ``resummation'' of the forces.
This can be done in an exact way in the case of crystals (through the
so called Ewald method, as implemented for
example in \cite{alessio})
by suitably splitting the field into two contributions. The first one
essentially comes from the near (in a microscopic sense) charges,
and can thus be considered to all effects as being instantaneous,
while the second one is essentially due to the far charges.
In turn, the contribution of the far charges too can be divided into
two parts. One of them exactly cancels the radiation reaction
force (which necessarily is nonvanishing, because of the accelerated
motions of the charges). This indeed
is the so called Wheeler--Feynman identity, which was postulated by
those authors in their paper of the year 1945 and was proven, in the case
of ionic crystals, in \cite{alessio}, following \cite{cg} and \cite{mcg}.
The second part of the contribution of the far charges
enters in the same way as an external electromagnetic
field, which propagates inside matter with a suitable refractive
index (see the first term in the force entering formula (15)
of \cite{alessio}), notwithstanding the fact that the microscopic far
fields entering the original problem
do propagate with the speed of light in vacuum (this is the so--called
Ewald--Oseen cancellation
property). So we have to deal
both with the Wheeler--Feynman property (or identity) and with
the Ewald--Oseen resummation properties.
In the case of ionic crystals both properties were proved to hold,
so that the original electrodynamic equations of motion
for the charges could be consistently dealt with
as a system of non dissipative differential equations
(possibly depending on time), of the form
$$
m_j\ddqjk = \sum_{\vett{x}_{k'}^0 \in U} \sum_{j'}\vett F_{j,j'}(\vett{q}_{j,k} - \vett{q}_{j',k'} ) + e_j
{\boldsymbol{\mathcal{E}}^c} (\vett{x}_k^0,t)
$$
where $U$ is a microscopic (namely, much smaller than $\Delta V $) neighborhood
of $\vett{x}_k^0$, while the field $ {\boldsymbol{\mathcal{E}}^c} $ is what Ewald calls the
``exciting'' electric field (\emph{``erregende Feld''} in his words,
see \cite{ewald}, page 7). This is the field produced by the far
charges that actually enters the equations of motion
as if it were an external field, propagating with a macroscopic
refractive index.
Analogous proofs should be provided here for the disordered case of
interest for dielectrics.
For what concerns the Wheeler--Feynman
identity, we here do more than required, because we give in an Appendix
a proof which holds in any situation, and
actually shows the deep significance of the identity, as corresponding
to some general form of causality.
Instead, the Ewald--Oseen property is not proven here for the case of
disordered systems, and its validity is assumed to hold by analogy with the
case of crystals. We are confident that a proof may be provided on
another occasion.
\subsection*{The macroscopic polarization through a Green--Kubo
type theorem. General expression of the response function for an
absorption process }
So our phase space can be taken to be the usual one of
statistical mechanics, namely, the space having as coordinates
the positions $\vett{q}_{j,k} $ and the momenta $\vett{p}_{j,k} \stackrel {\mathrm{def}}{=} m_j\dqjk$
of all the charges of the system, and our aim is now to obtain an
expression for the electric susceptibility following the standard methods
of Green--Kubo type of quantum statistical mechanics.
Here however a difficulty arises.
Indeed the analogous methods transported to the classical case
amount to studying the Liouville equation
for the probability density in phase space, looking for its time
evolution under the action of a perturbation. However, in the quantum
case it is first of
all assumed that an unperturbed (or equilibrium) solution exists,
given exactly by the
Gibbs ensemble. Now, if one looks
at the procedures used in the proofs, one might have the impression
that the role of the Gibbs density is essential, and that the proof
couldn't be obtained without using it.
On the other hand we
have to deal with Coulomb attractive interactions,
which have the effect that the Gibbs measure does not even exist, in
the classical case.
We show here how any reference to the equilibrium Gibbs measure can be
avoided, and even in a rather simple way.
Indeed in this section the existence of susceptibility is proven, and a quite
general
expression for it is provided, essentially without introducing
any requirement at all on the equilibrium measure. Then in section
\ref{5} it will be shown how
susceptibility is expressed in terms of time--correlation functions,
if an assumption of a quite general character for the measure is introduced
(validity of the large deviation principle for momenta).
So we only assume that an equilibrium probability density exists, which
will be denoted by $\rho_0$ (no confusion
with the space--averaged charge density should occur), and
its form will not need be specified.
In other terms, $\rho_0$ is only assumed to be invariant under the
flow determined by the equations of motion, i.e.,
to be a stationary solution of the continuity equation
$$
\partial_t\rho + \vett v\cdot\nabla\rho=0 \ ,
$$
where $\vett v$ is the vector field defined by the equations of
motion in phase space for the isolated system.\footnote{For the sake
of
simplicity
we are admitting that the vector field $\vett v$ has vanishing
divergence. Nothing should change in the general case.}
Consider now the case in which there is an external electromagnetic
field $\vett E^{in}$
(for example a monochromatic wave of frequency $\omega$) which impinges
on the body, with an intensity that at first increases slowly and
then reaches a stationary
value (the so--called case of an adiabatically switched--on perturbation).
Then a change, say $\delta {\boldsymbol{\mathcal{E}}^c} (\vett x,t)$, will be induced on the
Ewald exciting field, which is
the one actually entering the
equations of motion for the charges. The change is due
both to the presence itself of the incoming external
field, and to the fact that the far charges which are responsible for
that field are now
moving in a modified way.
For the sake of consistency, the relation
between $\delta {\boldsymbol{\mathcal{E}}^c} $ and the incoming external
field $\vett E^{in}$ should be determined, and to this end
the validity of the Lorentz--Lorenz relation should be established. This is in
any case a necessary step, if macroscopic optics should be deduced at
all. This problem will not be dealt with in the present paper.
Under the perturbation induced by the external field,
the density $\rho$ will evolve according to the
equation
\begin{equation}\label{rho1}
\partial_t \rho + \vett v\cdot\nabla\rho +\sum_{k,j}
e_j \delta {\boldsymbol{\mathcal{E}}^c} (\vett{x}_k^0,t) \frac {\partial\rho}{\partial \vett{p}_{j,k} } =0 \ ,
\end{equation}
inasmuch as the equation of motion for $\vett{q}_{j,k} $ contains the further force
term $e_j \delta {\boldsymbol{\mathcal{E}}^c} (\vett{x}_k^0,t)$.
As $\delta {\boldsymbol{\mathcal{E}}^c} $ is assumed to be a small perturbation, one can look
for the solution as a series expansion
$$
\rho = \rho_0 + \rho_1 + \ldots \ ,
$$
and the first order term $\rho_1$ is immediately seen to satisfy
the equation
\begin{equation}\label{xxx}
\partial_t \rho_1 = - \vett v\cdot\nabla\rho_1 - \sum_{k,j}
e_j \delta {\boldsymbol{\mathcal{E}}^c} (\vett{x}_k^0,t) \frac {\partial\rho_0}{\partial \vett{p}_{j,k} } \ .
\end{equation}
Clearly the suitable ``initial'' condition is the asymptotic one
\begin{equation}\label{incoming}
\rho_1\to 0\quad \mathrm{for}
\quad t\to-\infty\ ,
\end{equation}
and the corresponding well known solution is then
\begin{equation}
\rho_1(z,t) = - \int_{-\infty}^t\mathrm{d} s \sum_{k,j}
e_j \delta {\boldsymbol{\mathcal{E}}^c} (\vett{x}_k^0,s) \frac {\partial\rho_0}{\partial
\vett{p}_{j,k} }\Big(\Phi^{s-t} z \Big) \ ,
\end{equation}
where $\Phi^t z$ is the flow
relative to the \emph{unperturbed} equations of motion.
The macroscopic polarization $ {\boldsymbol{\mathcal{P}}} (\vett x,t)$ can now be computed
to first order, as the average of the microscopic polarization
$\vett P(\vett x,t)$ with respect to the density
$\rho_0 +\rho_1$. Assuming that $ {\boldsymbol{\mathcal{P}}} $
vanishes at equilibrium (absence of ferroelectricity),
one remains with the contribution of $\rho_1$ only, which gives
\begin{equation}\label{eq:polarizza}
\begin{split}
{\boldsymbol{\mathcal{P}}} (\vett x,t) = &
- \int \mathrm{d} z\, \vett P(\vett x,t)
\int_{-\infty}^t\mathrm{d} s\\ & \sum_{k,j} e_j \delta
{\boldsymbol{\mathcal{E}}^c} (\vett{x}_k^0,s)
\frac {\partial\rho_0}{\partial\vett{p}_{j,k} }\Big(\Phi^{s-t} z \Big)\ .
\end{split}
\end{equation}
One has now to insert the expression (\ref{eq:polarizzazione}) for
the microscopic polarization
$\vett P(\vett x,t)$. Then, first of all one performs
two elementary transformations
(namely, interchange of the integration orders of $s$ and $z$, an change of
variable
$z\to\Phi^{t-s}z$ -- taking into account that the modulus of the jacobian
determinant of $\Phi^t z$ is unitary\footnote{Because the unperturbed
vector field has vanishing divergence.}).
Moreover, one uses the fact that
$\delta {\boldsymbol{\mathcal{E}}^c} (\vett{x}_k^0,s)$, being a
macroscopic field, takes on essentially the same
value $\delta {\boldsymbol{\mathcal{E}}^c} (\vett x,s)$ at all points of the volume element $\Delta V $.
This eventually produces the result that macroscopic
polarization depends linearly on the exciting field. So the
macroscopic polarization can be written in
the familiar form of linear response theory, namely as
\begin{equation}\label{eq:rispostalineare}
{\boldsymbol{\mathcal{P}}} (\vett x,t) = \int_{-\infty}^t\mathrm{d} s \; \delta {\boldsymbol{\mathcal{E}}^c} (\vett x ,s)
\tilde\chi(t-s) \ ,
\end{equation}
in terms of a dielectric response function
$\Tilde\chi(t)$, which is given by
\begin{equation}\label{eq:suscettivita}
\Tilde\chi(t) \stackrel {\mathrm{def}}{=} - \frac 1{\Delta V}
\int \mathrm{d} z\, \sum_{\vett{x}_k^0,\vett{x}_{k'}^0 \in\Delta V }\sum_{j,j'}
e_j e_{j'}\, \vett{q}_{j',k'} (t)\, \frac {\partial\rho_0}{\partial\vett{p}_{j,k} } \ .
\end{equation}
Actually, in this expression for the response function
we have introduced one more simplification. This consists in the
fact that, when the expression (\ref{eq:polarizzazione})
for the microscopic polarization
$\vett P$ is introduced into formula (\ref{eq:polarizza}),
one has two sums over $k$
and $k'$, corresponding to two volume elements, whereas now
the first sum was restricted to just the molecules
that belong to the volume element entering the second sum.
This amounts to presuming that the microscopic dynamics in
two different macroscopic volume elements are totally
uncorrelated. This point will be discussed later.
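Although no simulation is performed in the present paper, the
structure of the result can be illustrated on the simplest toy model:
a single harmonic charge with Maxwellian momenta, for which
(anticipating the time--correlation form of section \ref{5}, with
$\sigma_p^2=kT$ and $\Delta V =1$) the Green--Kubo average reproduces the
textbook oscillator response $e^2\sin(\omega_0 t)/(m\omega_0)$. The
following Python sketch, with invented parameter values, performs
this check.
\begin{verbatim}
import numpy as np

# Toy check of the Green-Kubo expression on one harmonic charge
# (mass m, charge e, frequency w0), with Maxwellian momenta at
# "temperature" kT, so that sigma_p^2 = kT.
rng = np.random.default_rng(1)
m, e, w0, kT, n = 1.0, 1.0, 2.0, 0.5, 50000

q0 = rng.normal(0.0, np.sqrt(kT / (m * w0**2)), n)  # positions
p0 = rng.normal(0.0, np.sqrt(m * kT), n)            # momenta

t = np.linspace(0.0, 6.0, 61)
# q(t) along the unperturbed flow, for every ensemble member
qt = (q0[None, :] * np.cos(w0 * t[:, None])
      + (p0[None, :] / (m * w0)) * np.sin(w0 * t[:, None]))

chi_gk = (e**2 / (m * kT)) * (qt * p0[None, :]).mean(axis=1)
chi_exact = (e**2 / (m * w0)) * np.sin(w0 * t)

print(np.max(np.abs(chi_gk - chi_exact)))  # small sampling error
\end{verbatim}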
We add now some comments.
The first one concerns the fact that in the deduction of the
formula for the dielectric response function
no reference at all was made to nonconservative forces. Indeed,
it was explicitly assumed that in the equation of motion for each
charge the radiation reaction force be canceled by a
part of the retarded forces due to the ``far'' charges of the
dielectric body.\footnote{Curiously enough, the radiation reaction
force is still taken into consideration in the paper of Callen and
Welton \cite{callen} which is usually considered to be the first
modern work on the fluctuation--dissipation theorem.}
The first scientist who realized that this
cancellation occurs (already
in the year 1916) was the
Swedish physicist Oseen \cite{oseen}. However, his
result was ignored, having even been qualified as wrong
(\emph{``irrig''}; see \cite{jaffe}, page 266), just as the
work of Wheeler and Feynman, in which the same
property was conjectured to hold quite in general, was also essentially ignored.
So we are dealing with a time--reversible dynamical system.
An asymmetry in the proof was however introduced above through the
choice of the incoming
external field ${\vett E}^{in}$ (which was adiabatically switched on), and through the
corresponding choice (\ref{incoming}) for the ''initial'' (or rather,
asymptotic in the past) condition needed to solve the continuity equation for
the probability density (vanishing of $\rho_1$ as $t\to -\infty$).
Clearly, these are the choices which are responsible for the fact
that the formula just found corresponds to
an absorption process. How an emission process can be analogously
described in the present time--reversible frame,
will be shown in the next subsection.
The second remark is that the proof shows how
the existence of a linear response to the
external field is quite independent of the nature of the unperturbed
motions, which may have either an ordered or a disordered
character. The linearity of the response is inherited from that of the
Liouville equation, under the only assumption that the higher order
corrections (beyond the first one) to the equilibrium solution be
negligible. This fact is characteristic of linear response theory,
and so also occurs in its present classical formulation in phase
space. The situation
was quite different with the older approaches. In the oldest one,
typically described in Drude's book \cite{drude} but still somehow
surviving
in the Born--Wolf book \cite{bw}, to each observed spectral line was
associated the motion of a material oscillator, which was supposed to
perform linear oscillations, forced by the incident
field. For example, in the words of Kronig \cite{kronig}, in that
approach one is dealing with
\emph{``an electric charge,
elastically bound to an equilibrium position, having} -- as he even
adds -- \emph{a damping proportional to its velocity''}.
A different attitude was taken by Van Vleck \cite{van24} who,
working in the spirit of Bohr's approach, thought it appropriate to
formulate a theory of susceptibility by assuming that the
unperturbed system performs quasi periodic motions.
Here, instead, essentially no property is required
for the unperturbed motions.
\subsection*{Emission process}
The proof of the existence of a linear response was given above
in a way suited to describe an absorption process. However, the proof was
formulated in the general frame of a time-reversible
dynamics, in such a way that different types of nonequilibrium processes
can be looked upon as determined by an asymmetry of the asymptotic
conditions. So an emission process
should be described by the same equations previously considered,
just choosing a suitable asymptotic condition and a suitable external field
(see \cite{dirac}).
The suitable asymptotic condition can be inferred in the following way.
Recall how the absorption process was described.
For $t\to-\infty$ we have a stationary state described
by an equilibrium probability density $\rho_0$, in the presence of
a well defined exciting field $ {\boldsymbol{\mathcal{E}}^c} $. A perturbation is then introduced
through a ``free'' field ${\vett E}^{in}$, incoming
from infinity. During the process, one has a density
$\rho_0+\rho_1$ and a corresponding exciting field $ {\boldsymbol{\mathcal{E}}^c} +\delta {\boldsymbol{\mathcal{E}}^c} $, and one
presumes that eventually, for $t\to+\infty$,
one will have a new equilibrium (at a higher energy), with a density
$\rho_0'=\lim\big(\rho_0+\rho_1\big)$, together
with a new exciting field ${ {\boldsymbol{\mathcal{E}}^c} }'$ and a new free field $\vett
E^{out}$. Moreover, one should have $\vett E^{out}\simeq 0$, as
the whole incoming field is supposed to have been absorbed.
Let us now consider the inverse process, namely, the one which is
obtained with the interchanges $t \to -t$ and
$\vett{p}_{j,k} \to -\vett{p}_{j,k} $ (the Hamiltonian being assumed to be even in the
momenta). So one starts up with a density
$\rho_0'$ at $t=-\infty$, and asymptotically when
$t\to +\infty$ one gets a density $\rho_0$, whereas the electric field
is now the sum of the exciting field $ {\boldsymbol{\mathcal{E}}^c} $ and of the free field $\vett
E^{in}$. This means that the field $\vett E^{in}$ was emitted from the body,
in passing from the state $\rho_0'$ to the state $\rho_0$.
Mathematically, the process is still described through the perturbed
continuity equation
(\ref{rho1}),
provided the asymptotic condition
\begin{equation*}
\rho\to \rho_0\quad \mathrm{for}
\quad t\to +\infty\ ,
\end{equation*}
be assumed. If, as in the case of the absorption process, we look for
the solution in the form of a series,
the first correction $\rho_1$ has to satisfy the same equation
(\ref{xxx}) as before,
but now with the ``final'' condition
\begin{equation}\label{outcoming}
\rho_1\to 0\quad \mathrm{for}
\quad t\to+\infty\ .
\end{equation}
So the solution now has the form
\begin{equation}
\rho_1(z,t) = \int_t^{+\infty}\mathrm{d} s \sum_{k,j}
e_j \delta {\boldsymbol{\mathcal{E}}^c} (\vett{x}_k^0,s) \frac {\partial\rho_0}{\partial
\vett{p}_{j,k} }\Big(\Phi^{s-t} z \Big) \ ,
\end{equation}
and thus, in the same hypotheses as before, the final polarization can be
written as
\begin{equation}\label{eq:polarizza2}
{\boldsymbol{\mathcal{P}}} (\vett x,t) = \int_t^{+\infty}\mathrm{d} s \; \delta {\boldsymbol{\mathcal{E}}^c} (\vett x ,s)
\tilde \chi (t-s)\ ,
\end{equation}
with $\tilde \chi$ given exactly by the expression
(\ref{eq:suscettivita})
that occurs in the absorption process.
\section{ Susceptibilities for absorption and for emission. Analyticity
properties, and the $f$--sum rule}\label{4}
\subsection*{Susceptibilities}
Susceptibilities are defined as responses to forcings of given
frequencies, and thus are obtained from the formulas
(\ref{eq:rispostalineare}) and
(\ref{eq:polarizza2})
if the latter are expressed in the form of convolutions,
namely, with integrals over the whole real axis ${\mathbb{R}}$.
Thus we introduce the functions
\begin{equation}
\chi^{abs}(t) \stackrel {\mathrm{def}}{=}
\left\{
\begin{array}{cc}
\tilde\chi(t)&\quad\mbox{for}\quad t > 0\ , \\
0&\quad\mbox{for}\quad t\le 0 \\
\end{array}
\right.
\end{equation}
\begin{equation}
\chi^{em}(t) \stackrel {\mathrm{def}}{=}
\left\{
\begin{array}{cc}
0&\quad\mbox{for}\quad t>0 \\
-\tilde\chi(t)&\quad\mbox{for}\quad t\le 0 \\
\end{array}
\right.
\end{equation}
so that
through the change of variables $s\to t-s$ formulas
(\ref{eq:rispostalineare}) and (\ref{eq:polarizza2})
for the polarizations
in an absorption or an emission process take the form
$$
{\boldsymbol{\mathcal{P}}} (\vett x,t) = \int_{\mathbb{R}}\mathrm{d} s \; \delta {\boldsymbol{\mathcal{E}}^c} (\vett x ,t-s)
\chi^{abs}(s) \ ,
$$
$$
{\boldsymbol{\mathcal{P}}} (\vett x,t) = \int_{\mathbb{R}}\mathrm{d} s \; \delta {\boldsymbol{\mathcal{E}}^c} (\vett x ,t-s)
\chi^{em}(s) \ ,
$$
namely, of convolutions between the change of exciting field and the function
$\chi^{abs}(t)$ or $\chi^{em}(t)$ respectively.
Now, as the Fourier transform of a convolution
is the product of the Fourier transforms (which we denote by a hat),
the relations between polarization and
exciting field can be written in the familiar form
\begin{equation}\label{eq:rispostalinearetrasf}
\begin{split}
\hat{ {\boldsymbol{\mathcal{P}}} }(\vett x,\omega)= & \hat{\chi}^{abs}(\omega)\, {\delta\hat {\boldsymbol{\mathcal{E}}^c} }
(\vett x,\omega)\\
\hat{ {\boldsymbol{\mathcal{P}}} }(\vett x,\omega)= & \hat{\chi}^{em}(\omega)\, {\delta\hat {\boldsymbol{\mathcal{E}}^c} }
(\vett x,\omega)
\end{split}
\end{equation}
where
\begin{equation}
\begin{split}
\hat{\chi}^{abs}(\omega) = & - \int_{-\infty}^0\mathrm{d} t\ \tilde\chi(t)
e^{i\omega t} \\
\hat{\chi}^{em}(\omega) = & \int_0^{+\infty} \mathrm{d} t \ \tilde\chi(t)
e^{i\omega t}\ .
\end{split}
\end{equation}
As $\tilde \chi$ is odd (see below), by the change of variable $t\to -t$
in the second integral one gets that
$\hat{\chi}^{em}$ is the complex conjugate of $\hat{\chi}^{abs}$. So the
emission and the absorption spectra coincide.
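This conjugation property is easily checked numerically from the
definitions. In the Python sketch below the odd kernel is chosen
arbitrarily, with an exponential damping whose only purpose is to
make the integrals converge on a finite grid.
\begin{verbatim}
import numpy as np

# chi_em(omega) equals the complex conjugate of chi_abs(omega)
# for an odd kernel chi(t); damping only for numerical convergence.
t = np.linspace(-60.0, 60.0, 240001)
dt = t[1] - t[0]
chi = np.sin(2.0 * t) * np.exp(-0.1 * np.abs(t))   # odd in t

w = 2.0
chi_abs = -np.trapz(chi[t <= 0] * np.exp(1j * w * t[t <= 0]), dx=dt)
chi_em  =  np.trapz(chi[t >= 0] * np.exp(1j * w * t[t >= 0]), dx=dt)

print(chi_abs, np.conj(chi_em))   # the two numbers agree
\end{verbatim}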
To show that $\tilde\chi(t)$ is an odd function,
we notice that, from the definition, one has
$$
\tilde\chi(-t)= - \int \mathrm{d} z \frac 1{\Delta V} \sum_{\vett{x}_k^0,\vett{x}_{k'}^0 \in\Delta V }
\sum_{j,j'} e_j e_{j'} \vett{q}_{j',k'} (-t)
\frac {\partial\rho_0}{\partial\vett{p}_{j,k} } \ ,
$$
so that, performing in the integral the substitution $\vett{p}_{j,k} \to -\vett{p}_{j,k} $,
one finds
$$
\tilde\chi(-t)= \int \mathrm{d} z \frac 1{\Delta V}
\sum_{\vett{x}_k^0,\vett{x}_{k'}^0 \in\Delta V }\sum_{j,j'} e_j e_{j'} \vett{q}_{j',k'} (t)
\frac {\partial\rho_0}{\partial\vett{p}_{j,k} } = - \tilde\chi(t) \
$$
(indeed, as $\rho_0$ is even, its derivatives are odd, whereas, by changing
sign to the momenta, $\vett{q}_{j',k'} (-t)$ goes into $\vett{q}_{j',k'} (t)$).
\subsection*{Analyticity properties. The Kramers--Kronig relations }
It is well known that, as the function $\chi^{abs}(t)$ vanishes for $t<0$,
then its Fourier transform enjoys two relevant properties:
\begin{itemize}
\item It is analytic in the half plane $\IM \omega > 0$;
\item The Kramers--Kronig relations hold
\begin{align}\label{eq:KK}
\RE \hat \chi^{abs}(\omega) &= \frac 1\pi \int_{\mathbb{R}} \mathrm{d} \Omega \, \frac{\IM
\hat \chi^{abs}(\Omega)}{\Omega-\omega} \nonumber \\
\IM \hat \chi^{abs}(\omega) &= -\frac 1\pi \int_{\mathbb{R}} \mathrm{d} \Omega \, \frac{\RE
\hat \chi^{abs}(\Omega)}{\Omega-\omega} \ .
\end{align}
\end{itemize}
From a conceptual point of view the Kramers--Kronig
relations are often interpreted as expressing the causality
principle, the latter being meant in the sense that the effect (here,
polarization) cannot precede the cause (the exciting field).
On the other hand, analogous relations obviously hold also
for the function $\hat \chi^{em}(\omega)$, which clearly is not causal
in that sense, as $\chi^{em}(t)$ vanishes after the field is
applied.
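As an illustration, the first of the relations (\ref{eq:KK}) can be
verified numerically on a model susceptibility of Lorentzian form,
which is analytic in the upper half plane; the principal value is
taken by deleting a small symmetric neighborhood of the singularity.
All parameter values in the Python sketch below are arbitrary.
\begin{verbatim}
import numpy as np

# Kramers-Kronig check on chi(w) = f / (w0^2 - w^2 - i*g*w).
f, w0, g = 1.0, 1.0, 0.2
W = np.linspace(-200.0, 200.0, 400001)    # grid for the PV integral
chi = f / (w0**2 - W**2 - 1j * g * W)

def kk_real(w):
    # (1/pi) * PV integral of Im chi(W) / (W - w); the principal
    # value is taken by deleting a symmetric hole around W = w
    den = W - w
    mask = np.abs(den) > 1.5e-3
    return np.trapz(chi.imag[mask] / den[mask], W[mask]) / np.pi

for w in (0.0, 0.5, 1.5):
    i = np.argmin(np.abs(W - w))
    print(chi.real[i], kk_real(w))        # the two columns agree
\end{verbatim}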
A second remark concerns the poles of the two
susceptibilities. Since the original work of
Kramers, the emission was attributed to the presence of the radiation
reaction force (proportional to the time derivative of acceleration)
in the equations of motion.
In such a way, however, in the expression for the susceptibility,
calculated
by considering a single damped and forced oscillator, there
appeared a pole in the wrong half--plane, and Kramers himself had to
patch the expression in some suitable way. Instead,
with the full electrodynamic treatment considered here,
in virtue of the Wheeler--Feynman cancellation
the radiation reaction forces entering
the original equations of motion eventually disappear,
and the expressions of the
susceptibilities have poles in the correct half--plane.
\subsection*{The $f$--sum rule}
We finally come to the $f$--sum rule. The reason for the name will be
recalled in the next section.
For the sake of concreteness we here concentrate on the case of the
absorption susceptibility, because the formulas for the case of
emission are simply obtained by passing to the conjugate complex. In
order to have simpler notations, we also omit the superscript
${abs}$, i.e., we let $\hat{\chi}^{abs}\equiv \hat \chi$.
The $f$--sum rule states that
\begin{equation}\label{eq:fsumrule}
\int_{\mathbb{R}} \omega\IM\hat\chi(\omega)\mathrm{d}\omega= \frac \pi{\Delta V}
\sum_{\vett{x}_k^0\in\Delta V } \sum_{j} \frac {e_j^2}{m_j} \ ,
\end{equation}
so that it essentially relates the total absorption
to the electron charge density. Indeed one should
take into account that for nuclei the ratio
$e_j^2/m_j$ is negligible with respect to that of the
electrons, so that the sum at the right hand side can be restricted
to the electrons present in the considered volume. Thus, denoting by
$e$ and $m$ the charge and the mass of the electron, the r.h.s.
just reduces to $\pi e^2/m$ times the electron density (number of
electrons per unit volume).
The next part of this section is devoted to a proof of the $f$--sum
rule (\ref{eq:fsumrule}). We start by noting that for a smooth
function $f(t)$ one has
$$
\int_{\mathbb{R}} -i\omega \hat f(\omega)\mathrm{d}\omega= 2\pi\dot f(0) \ .
$$
Indeed, on the one hand the Fourier transform of $\dot f(t)$ is given by
$-i\omega \hat f(\omega)$, as one immediately checks by an integration by
parts. On the other hand the inversion theorem for the Fourier
transform gives
$$
\int_{\mathbb{R}} -i\omega \hat f(\omega)e^{-i\omega t}\mathrm{d}\omega= 2\pi \dot
f(t) \ .
$$
So the thesis should follow by simply taking $t=0$. However, in our case
$\dot \chi(t)$ presents a discontinuity of the first kind at
$t=0$, as it vanishes for $t>0$,
while being equal to $\dot{\tilde\chi}(t)$ for $t<0$. Now, the inversion
theorem tells us that at a discontinuity point
the integral equals the semi--sum of the right and the left limits, so
that eventually one has
$$
\int_{\mathbb{R}} -i\omega\hat\chi(\omega)\mathrm{d}\omega= \pi
\dot{\tilde\chi}(0) \ .
$$
However, as is easily checked,\footnote{Indeed, one has
$$
\RE \hat \chi(\omega) = - \int_{-\infty}^0 \tilde\chi(t)\cos(\omega t)\mathrm{d}
t \ ,
$$
so that, changing $\omega$ into $-\omega$, the value of the integral
does not change.} $\RE \hat\chi(\omega)$ is an even
function of $\omega$,
so that one has
$$
\int_{\mathbb{R}} -i\omega\hat\chi(\omega)\mathrm{d}\omega= \int_{\mathbb{R}} \omega
\IM \hat\chi(\omega)\mathrm{d}\omega = \pi \dot{\tilde\chi}(0) \ .
$$
Now it turns out that $\dot{\tilde\chi}(0)$ can be evaluated
exactly and, as will be seen in a moment, one has
$$
\dot{\tilde\chi}(0)= \frac 1{\Delta V}
\sum_{\vett{x}_k^0\in\Delta V } \sum_{j} \frac {e_j^2}{m_j} \ ,
$$
which indeed proves the $f$--sum rule (\ref{eq:fsumrule}).
In order to show the latter relation, we differentiate the expression
(\ref{eq:suscettivita}) for $\tilde\chi(t)$.
Exchanging derivative and integral one gets
\begin{equation*}
\begin{split}
\dot{\tilde\chi}(0) &= - \int \mathrm{d}
z \frac 1{\Delta V} \sum_{\vett{x}_k^0,\vett{x}_{k'}^0 \in\Delta V }\sum_{j,j'} e_j e_{j'} \dqjkp(0)
\frac {\partial\rho_0}{\partial\vett{p}_{j,k} } =\\
&= - \int \mathrm{d}
z \frac 1{\Delta V} \sum_{\vett{x}_k^0,\vett{x}_{k'}^0 \in\Delta V }\sum_{j,j'} \frac {e_j
e_{j'}}{m_{j'}} \vett{p}_{j',k'} (0)
\frac {\partial\rho_0}{\partial\vett{p}_{j,k} } \ ,
\end{split}
\end{equation*}
where in the second line use was made of $\dqjkp(0)=\vett{p}_{j',k'} /m_{j'}$.
Now there just remains to integrate by parts. The boundary term
vanishes (due to the vanishing of the probability for a particle to have
an infinite momentum), so that
\begin{equation*}
\begin{split}
\dot{\tilde\chi}(0) &= \int \mathrm{d}
z \frac 1{\Delta V} \sum_{\vett{x}_k^0,\vett{x}_{k'}^0 \in\Delta V }\sum_{j,j'} \frac {e_j e_{j'}}{m_{j'}}
\frac {\partial\vett{p}_{j',k'} }{\partial\vett{p}_{j,k} } \rho_0 =\\
&= \int \mathrm{d}
z \frac 1{\Delta V} \sum_{\vett{x}_k^0\in\Delta V }\sum_{j} \frac {e_j^2}{m_{j}}
\rho_0 = \frac 1{\Delta V} \sum_{\vett{x}_k^0\in\Delta V }\sum_{j} \frac {e_j^2}{m_{j}} \ ,
\end{split}
\end{equation*}
inasmuch as $ \frac
{\partial\vett{p}_{j',k'} }{\partial\vett{p}_{j,k} }=\delta_{k,k'}\delta_{j,j'}$, whereas
the density $\rho_0$ is assumed to be normalized to $1$.
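As a concrete check of (\ref{eq:fsumrule}), one may use the familiar
damped--oscillator test function $\IM\hat\chi(\omega)=
f\gamma\omega/\big[(\omega_0^2-\omega^2)^2+\gamma^2\omega^2\big]$,
for which $\int_{\mathbb{R}}\omega\,\IM\hat\chi(\omega)\,\mathrm{d}\omega=\pi f$
independently of $\omega_0$ and $\gamma$; here $f$ plays the role of
$e^2/m$ times the charge density. The Python sketch below (arbitrary
values, one charge per unit volume) verifies this numerically.
\begin{verbatim}
import numpy as np

# f-sum rule on the damped-oscillator test function:
# Im chi(w) = f*g*w / ((w0^2 - w^2)^2 + g^2*w^2),  f = e^2/m.
e, m, w0, g = 1.0, 1.0, 1.0, 0.1
f = e**2 / m                    # one charge per unit volume

w = np.linspace(-500.0, 500.0, 2000001)
im_chi = f * g * w / ((w0**2 - w**2)**2 + g**2 * w**2)

print(np.trapz(w * im_chi, w), np.pi * f)   # both ~ 3.141
\end{verbatim}
Note that the Lorentzian is used here only as a convenient
closed--form test function obeying the sum rule; no damping is
present in the model considered in this paper.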
\section{Response functions and susceptibilities
in terms of correlation functions}\label{5}
After the detour on the analyticity properties of the dielectric
response functions and susceptibilities, which were based on the
general expression (\ref{eq:suscettivita}),
we show here how more transparent expressions are obtained if
a further property of a quite general character is introduced for
the equilibrium density $\rho_0$. The point is that formula
(\ref{eq:suscettivita}) involves sums of integrals of the type
\begin{equation}\label{integrale}
\mathcal{I}_{k,j,k',j'} = \int \mathrm{d} z \, \vett{q}_{j',k'} (t-s) \frac
{\partial\rho_0}{\partial\vett{p}_{j,k} } \ ,
\end{equation}
the computation of which requires having
available a definite expression for the
derivative of $\rho_0$ with respect to $\vett{p}_{j,k} $.
Now, if we were allowed to take for
$ \rho_0$ the Gibbs form, the above quantity would be proportional to
$\vett{p}_{j,k} \, \rho_0$. On the other hand, essentially the same result is
guaranteed under much milder conditions, essentially under conditions
which allow for a large deviation principle to
hold with respect to the momenta only, irrespective of the
positions (which, through the attractive
Coulomb potential, introduce divergences in the classical form of
Gibbs' measure).
Indeed this allows one to get
\begin{equation}\label{ld}
\frac {\partial\rho_0}{\partial\vett{p}_{j,k} }= -\frac {1}{m_{j}\, \sigma^2_p}\,
\vett{p}_{j,k} \, \, \rho_0\ ,
\end{equation}
where the constant $\sigma_p^2$
is nothing but the mean square deviation
of momentum, which would just reduce to temperature if the density
were the Gibbs one. For the large deviation argument one can see the
classical book of Khinchin \cite{khin}.
So we have
\begin{equation*}\begin{split}
\mathcal{I}_{k,j,k',j'}& = \frac {-1}{m_{j}\,\sigma^2_p}\int \mathrm{d} z
\vett{q}_{j',k'} (t-s) \vett{p}_{j,k} \rho_0(z)\\
& = \frac {-1}{m_{j}\,\sigma^2_p} \langle \vett{q}_{j',k'} (t-s) \vett{p}_{j,k} (0) \rangle \ ,
\end{split}
\end{equation*}
namely, the integrals (\ref{integrale}) are just equilibrium
time--corre\-la\-tions
between position and momentum of each charge.
This fact, by the way, makes reasonable a property that was assumed in
the last part of section \ref{3}, when passing from
(\ref{eq:polarizza}) to (\ref{eq:suscettivita}). Namely, the property that the
integrals (\ref{integrale}) should present a fast decay
with respect to spatial separation of the charges, i.e., that one should have
$$
\mathcal{I}_{k,j,k',j'} =0
$$
if the molecules $\vett{x}_k^0$ and $\vett{x}_{k'}^0 $ belong to different
volume elements.
In conclusion, the
expression (\ref{eq:suscettivita}) for the dielectric response
function can be rewritten in the form
\begin{equation}\label{eq:suscettivita2}
\tilde\chi(t) = \frac 1{\sigma_p^2\,\Delta V }
\sum_{\vett{x}_k^0,\vett{x}_{k'}^0 \in\Delta V }\sum_{j,j'} \frac {e_j e_{j'}}{m_j} \langle
\vett{q}_{j',k'} (t) \vett{p}_{j,k} (0)\rangle \ ,
\end{equation}
which involves equilibrium time--correlations of momenta and positions of
the charges.
Now there remains the problem that we have to compute
phase averages with respect to the equilibrium probability density
$\rho_0$, the
form of which is still essentially undetermined. A great step
forward is accomplished by making use of a general principle
of statistical mechanics according to which, under extremely mild
conditions, the phase space equilibrium averages can be computed as
corresponding time averages (see for example \cite{khin}, page
63).
So we estimate the required phase space integrals
as time averages, i.e., as
\begin{equation}\label{eq:corrpq_bis}
\begin{split}
\langle &\vett{q}_{j',k'} (t) \vett{p}_{j,k} (0)\rangle=\\
= &\lim_{T\to+\infty}\frac 1{2T}\int_{-T}^T
\vett{q}_{j',k'} (t+s) \cdot \vett{p}_{j,k} (s) \mathrm{d} s \ .
\end{split}
\end{equation}
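The time--average prescription can be illustrated on a single
harmonic orbit, for which the limit can also be computed in closed
form by the prosthaphaeresis formulas used in the next section: for
$q(s)=A\cos(\omega_0 s+\varphi)$ and $p=m\dot q$ one finds
$\langle q(t)\,p(0)\rangle = (m\omega_0 A^2/2)\sin \omega_0 t$. The
Python sketch below (arbitrary values; such an orbit is of course
periodic rather than generic) compares the numerical time average
with this value.
\begin{verbatim}
import numpy as np

# Time-average estimate of <q(t) p(0)> on one harmonic orbit
# q(s) = A cos(w0 s + phi), p = m dq/ds.
m, w0, A, phi = 1.0, 2.0, 0.8, 0.3
s = np.arange(0.0, 5000.0, 0.01)

for t in (0.3, 0.7, 1.5):
    qs = A * np.cos(w0 * (t + s) + phi)
    ps = -m * w0 * A * np.sin(w0 * s + phi)
    print(t, np.mean(qs * ps), m * w0 * A**2 / 2 * np.sin(w0 * t))
\end{verbatim}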
\section{Line spectrum and the ``virtual orchestra''}\label{6}
Here we show how it can at all happen
that a conservative Hamiltonian system (to which our original
electrodynamic system has been reduced) presents a line spectrum.
This depends on the qualitative properties of the dynamical orbits (or
motions) of the system, because it turns out that a discrete spectrum occurs
if the motion of the representative point in phase space is, informally
speaking, ``non chaotic''. Indeed in
dynamical systems theory the property of presenting a
continuous spectrum is sometimes even assumed to be the
characteristic property for an orbit to be
chaotic.
More precisely, one certainly has a pure line spectrum if
the motion is assumed to be ``almost periodic'' in the sense
introduced in the year 1924 by Harald Bohr, the brother of Niels Bohr.
\footnote{For an introduction to almost periodic functions see for example
\cite{nem}, Part II, Chapter 5, where in particular the relations
between almost periodicity and Liapunov stability of an orbit
are discussed.}
\subsection*{Pure line spectrum for almost periodic motions}
Almost periodicity can be defined in several equivalent
ways.
However, the following characteristic property (which thus can be taken as
a definition) is more significant for our purposes:
if an orbit, say the motion $\vett{q}_{j,k} (t)$ of a particle, is almost
periodic, then it can be represented by a generalized Fourier expansion
\begin{equation}\label{eq:sviluppoq}
\vett{q}_{j,k} (t) = \sum_n \big[\cjkn \cos(\omega_n t) + \djkn \sin(\omega_n t)\big]
\end{equation}
where the sequence $\{\omega_n\}$ of \emph{positive} frequencies is
determined in the following way. Having defined the
functions\footnote{For almost periodic functions these limits are
proven to exist. See for example the classical text \cite{besi}.}
$\vett c_{j,k}(\omega)$ and $\vett d_{j,k}(\omega)$ by
$$
\vett c_{j,k}(\omega)=\lim_{t\to+\infty} \frac 1{2t}\int_{-t}^t
\vett{q}_{j,k} (s) \cos(\omega s)\mathrm{d} s \ ,
$$
$$
\vett d_{j,k}(\omega)=\lim_{t\to+\infty} \frac 1{2t}\int_{-t}^t
\vett{q}_{j,k} (s) \sin(\omega s)\mathrm{d} s \ ,
$$
then these functions turn out to vanish for all frequencies
but for a discrete set of frequencies $\{\omega_n\}$. This
determines the frequencies.
Then the coefficients of the expansion simply are twice the values
of the above functions at $\omega_n$,
i.e., one has
$$
\cjkn = 2\,\vett c_{j,k}(\omega_n) \ , \quad
\djkn = 2\,\vett d_{j,k}(\omega_n) \ .
$$
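The vanishing of $\vett c_{j,k}(\omega)$ and $\vett d_{j,k}(\omega)$
away from the frequencies actually present in the signal is easily
visualized numerically. In the following Python sketch a scalar
quasi--periodic signal with two incommensurate frequencies plays the
role of $\vett{q}_{j,k} (t)$; twice the averaged projections return
the amplitudes at $\omega_1$, $\omega_2$, and (up to a remainder of
order $1/T$) zero elsewhere.
\begin{verbatim}
import numpy as np

# Bohr coefficients of an almost periodic signal by long-time
# averaging; twice their value returns the amplitudes.
w1, w2 = 1.0, np.sqrt(2.0)          # incommensurate frequencies
t = np.arange(-20000.0, 20000.0, 0.01)
q = 0.5 * np.cos(w1 * t) + 0.2 * np.sin(w2 * t)

c = lambda w: np.mean(q * np.cos(w * t))
d = lambda w: np.mean(q * np.sin(w * t))

for w in (w1, w2, 1.2):
    print(w, 2 * c(w), 2 * d(w))
# -> (w1: ~0.5, ~0), (w2: ~0, ~0.2), (1.2: ~0, ~0)
\end{verbatim}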
Corresponding to the expansion (\ref{eq:sviluppoq}) for the position
as a function of time, one also has an analogous expansion
for the momenta, namely,
\begin{equation}\label{eq:sviluppop}
\vett{p}_{j,k} (t) = m_j \sum_n \omega_n \big[ -\cjkn \sin(\omega_n t) + \djkn
\cos(\omega_n t) \big] \ ,
\end{equation}
which is obviously obtained by differentiating with respect to time
the expansion for $\vett{q}_{j,k} (t)$.
One thus obtains
\begin{equation}\label{eq:corrpq}
\begin{split}
& \langle \vett{q}_{j',k'} (t) \vett{p}_{j,k} (0)\rangle =\\
&= m_j \sum_n \omega_n\Big[
\frac {\cjkn\cdot\cjknp +\djkn\cdot\djknp }2 \sin \omega_n t \\
&+
\frac {\djkn\cdot\cjknp - \cjkn\cdot\djknp}2\cos \omega_n t\Big]
\ .
\end{split}
\end{equation}
This relation is obtained by evaluating the integrals through the
familiar prosthaphaeresis formulas, recalling that the time
average of any non-constant trigonometric function vanishes.
The result is the following one.
Defining
\begin{equation*}
\begin{split}
I_{sc}& \stackrel {\mathrm{def}}{=} \lim_{T\to+\infty}\frac 1{2T}\int_{-T}^T
\sin\omega s\cos\omega'(t+s)\mathrm{d} s\\
I_{ss}& \stackrel {\mathrm{def}}{=} \lim_{T\to+\infty}\frac 1{2T}\int_{-T}^T
\sin\omega s\sin\omega'(t+s)\mathrm{d} s \\
I_{cc}& \stackrel {\mathrm{def}}{=} \lim_{T\to+\infty}\frac 1{2T}\int_{-T}^T
\cos\omega s\cos\omega'(t+s)\mathrm{d} s \\
I_{cs}& \stackrel {\mathrm{def}}{=} \lim_{T\to+\infty}\frac 1{2T}\int_{-T}^T
\cos\omega s\sin\omega'(t+s)\mathrm{d} s\ ,
\end{split}
\end{equation*}
one finds that all the $I$'s vanish for $\omega\ne\omega'$, whereas for
$\omega=\omega'$
one has
$$
I_{sc}= -I_{cs} = -\frac 12 \sin\omega t\ ,
\quad I_{ss}=I_{cc}= \frac 12 \cos\omega t\ .
$$
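These averages are immediate to check numerically, e.g. as follows
(Python; the values of $\omega$ and $t$ are arbitrary):
\begin{verbatim}
import numpy as np

w, t, T = 1.3, 0.7, 2.0e4
s = np.linspace(-T, T, 2_000_000)
avg = lambda f: np.trapz(f, s) / (2 * T)

I_sc = avg(np.sin(w * s) * np.cos(w * (t + s)))
I_ss = avg(np.sin(w * s) * np.sin(w * (t + s)))
I_cc = avg(np.cos(w * s) * np.cos(w * (t + s)))
I_cs = avg(np.cos(w * s) * np.sin(w * (t + s)))

print(I_sc, -0.5 * np.sin(w * t))   # I_sc -> -(1/2) sin wt
print(I_cs, +0.5 * np.sin(w * t))   # I_cs -> +(1/2) sin wt
print(I_ss, +0.5 * np.cos(w * t))   # I_ss -> +(1/2) cos wt
print(I_cc, +0.5 * np.cos(w * t))   # I_cc -> +(1/2) cos wt
\end{verbatim}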
\subsection*{Form of susceptibility for
almost periodic motions}
Now, substitute into formula (\ref{eq:suscettivita2})
the expression (\ref{eq:corrpq}) just found for the
correlations. Remarking that,
due to the antisymmetry with respect to the interchange
$k,j \leftrightarrow k',j'$ of the terms occurring in the sum,
one has
$$
\sum_{\vett{x}_k^0,\vett{x}_{k'}^0 \in\Delta V }\sum_{j,j'} \frac {e_j e_{j'}}{m_j}
\frac {\cjkn\cdot\djknp - \djkn\cdot\cjknp }2 = 0 \ ,
$$
one obtains
\begin{equation*}
\begin{split}
&\tilde\chi(t) = \frac 1{\sigma_p^2} \, \sum_n \omega_n \sin \omega_n
t\, \cdot\\
&\cdot\,
\sum_{\vett{x}_k^0,\vett{x}_{k'}^0 \in\Delta V }\sum_{j,j'} \frac {e_j e_{j'}}{m_j}\,
\frac {\cjkn\cdot\cjknp +\djkn\cdot\djknp }2 \ .
\end{split}
\end{equation*}
In order to find the susceptibility it only remains to compute the
Fourier transform of $\tilde\chi(t)$. A straightforward computation
shows that one has
$$
\int_{-\infty}^0 \sin \omega_n t \,e^{i\omega t}\mathrm{d} t = \frac
{-\omega_n}{\omega_n^2 -\omega^2} +\frac {i\pi}2\Big(
\delta(\omega-\omega_n) -\delta(\omega+\omega_n)\Big) \ .
$$
Thus, defining
\begin{equation}\label{eq:forzaosc}
f_n \stackrel {\mathrm{def}}{=} \omega_n^2\left[ \sum_{\vett{x}_k^0,\vett{x}_{k'}^0 \in\Delta V }\sum_{j,j'}
\frac {e_j e_{j'}}{m_j}
\frac {\cjkn\cdot\cjknp +\djkn\cdot\djknp }2 \right]\ ,
\end{equation}
for the real and the imaginary parts of susceptibility one finds the
expressions
\begin{equation}\label{orchestra}
\begin{split}
\RE \chi(\omega) & = \sum \frac {f_n}{\omega_n^2 - \omega^2} \\
\IM \chi(\omega) & = \pi \sum \frac {f_n}{2\omega_n} \Big(
\delta(\omega-\omega_n)
-\delta(\omega+\omega_n)\Big) \ .
\end{split}
\end{equation}
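For a concrete feeling of formula (\ref{orchestra}), the following
sketch evaluates the susceptibility of a hypothetical three--line
``orchestra'' (Python; the data $\{\omega_n, f_n\}$ are invented, and
the delta functions are represented by narrow Lorentzian approximants),
checking also the $f$--sum rule discussed below:
\begin{verbatim}
import numpy as np

w_n = np.array([0.8, 1.5, 2.4])   # hypothetical line frequencies
f_n = np.array([0.5, 0.3, 0.2])   # hypothetical "forces"

def re_chi(w):
    return np.sum(f_n / (w_n**2 - w**2))

def im_chi(w, eps=1e-2):          # delta(x) ~ (eps/pi)/(x^2 + eps^2)
    lor = lambda x: (eps / np.pi) / (x**2 + eps**2)
    return np.pi * np.sum(f_n / (2 * w_n) * (lor(w - w_n) - lor(w + w_n)))

print(re_chi(0.5), im_chi(0.8))
# f-sum rule: int w Im chi dw = pi * sum f_n
w = np.linspace(-10, 10, 200_001)
print(np.trapz(w * np.vectorize(im_chi)(w), w), np.pi * f_n.sum())
\end{verbatim}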
\subsection*{The ``virtual orchestra'' of Bohr, Kramers and Slater}
Due to the delta functions appearing in
the imaginary part of susceptibility,
formula (\ref{orchestra}) shows that the spectrum of a macroscopic
dielectric body performing almost periodic motions
presents infinitely sharp absorption
lines at the frequencies
$\omega_n$. This is the way in which, in the spectrum of a
macroscopic dielectric body, ``lines'' show up without necessarily
having to make reference to energy levels of the single molecule or atom.
This result is exactly the property of a spectrum which, before the
advent of quantum mechanics, (starting from Lorentz \cite{lorentz} and
Drude up to Kronig \cite{kronig} and even Born and Wolf
\cite{bw}), was interpreted in microscopic terms by thinking
that each line should be attributed to the motion of a material
harmonic ``resonator'', of exactly that frequency.
Analogously the molecules were thought of as
constituted of charges with mutual elastic
bonds. So there would exist corresponding normal modes,
which could be
equivalently described as harmonic oscillators with characteristic
frequencies $\omega_n$ (which were introduced from outside, in
correspondence with the
observed ones).
However, as the lines are infinite in number, one was meeting with
the absurd situation that any atom or molecule had to be composed of
an infinite number of oscillating charges.
For this reason such oscillators were denoted as ``virtual'' i.e., as
somehow non physical (see \cite{bks}), and each of them was
weighted with a suitable weight (usually called ``force'') $f_n$.
In the year 1925 the ``$f$--sum rule'' was empirically discovered,
according to which the ``forces'' of the virtual oscillators were not
arbitrary, but had to satisfy the rule
\begin{equation}\label{eq:fsumrulequan}
\sum_{n} f_n = \frac 1{\Delta V} \sum_{\vett{x}_k^0\in\Delta V } \sum_{j} \frac
{e_j^2}{m_j} \ .
\end{equation}
Namely, the sum of the ``forces'' of the oscillators just equals
the number of electrons per unit volume,
times the factor $e^2/m_e$
(indeed, as already explained, the contribution of the nuclei is
negligible).
One of the big triumphs of quantum mechanics was to ``explain'' the
$f$--sum rule in terms of the quantum commutation rules.
On the other hand, such a rule holds in the classical case too. Indeed an
explicit computation gives
$$
\int_{\mathbb{R}} \omega\IM \chi(\omega)\mathrm{d} \omega = \pi \sum f_n \ ,
$$
which, using the general formula (\ref{eq:fsumrule}), actually gives
the $f$--sum rule (\ref{eq:fsumrulequan}).
\section{Broadening and chaoticity: the case of ionic crystals}\label{7}
So, a pure line spectrum occurs for stable (almost periodic) motions, whereas
a broadening of the lines and even a continuous spectrum
are expected to occur when chaoticity of the motions sets in.
This connection between optical properties of the system
and qualitative properties (order or chaos, or their coexistence) of
the corresponding orbits can be
illustrated in a particularly clear way in the case of ionic crystals.
If one is interested in the infrared spectrum,
in the expression (\ref{eq:suscettivita2}) for the dielectric response
function it is sufficient to limit oneself to
the motions of the ions only.
In such a case it is convenient to choose
as a reference point $\vett{x}_k^0$ (with respect to which the displacements
$\vett{q}_{j,k} $ are computed), an arbitrary fixed point inside each cell of
the lattice. In such a way the index $k$ is now labeling also the cells.
Following \cite{alessio} one can pass to the normal modes
of the lattice, which we here denote by $A_{ {\boldsymbol{\xi}} ,l}(t)$ and are defined by
$$
\vett{q}_{j,k} (t) = \sum_l \int_{\mathcal{B}} \vett u_l(j, {\boldsymbol{\xi}} )
A_{ {\boldsymbol{\xi}} ,l}(t) e^{i {\boldsymbol{\xi}} \cdot(\vett{x}_k^0+\vett \tau_j) } \mathrm{d} {\boldsymbol{\xi}} \ .
$$
Here, the integration is performed over the Brillouin zone $\mathcal{B}$,
the vectors $\vett u_l(j, {\boldsymbol{\xi}} )$ are the eigenvectors of the dynamical
matrix of the crystal, while the vector $\vett \tau_j$ specifies the
equilibrium position of the $j$--th atom inside the cell $k$.
The index $l$ is now a label for the
different branches of the dispersion relation.\footnote{ We recall that, while
in the purely mechanical case the number of branches is $3N$
($N$ being the number of ions in the fundamental cell), instead,
when the interaction with the field of the far ions is taken into
account, the number of branches can vary, and polaritonic
branches can appear.}
So, one gets the relation
$$
\sum_{\vett{x}_k^0\in \Delta V} \vett{q}_{j,k} (t) \simeq (2\pi)^3
\sum_l \vett u_l(j,0) A_{0,l}(t) \ ,
$$
because, in summing over a volume element, one has
$$
\sum_{\vett{x}_k^0\in \Delta V} e^{i {\boldsymbol{\xi}} \cdot\vett{x}_k^0} \simeq
(2\pi)^3 \delta( {\boldsymbol{\xi}} ) \ .
$$
Thus, in the case of an ionic crystal the dielectric response function for
the ions can be written as
\begin{equation*}
\begin{split}
\tilde\chi(t) = & \frac 1{\sigma_p^2}
\sum_{l,l'} \Big( \sum_{j,j'} e_j e_{j'} \vett u_l(j,0) \cdot
\vett u_l(j',0) \Big) \\
&\langle A_{0,l}(t) \dot A_{0,l'}(0) \rangle \ ,
\end{split}
\end{equation*}
so that the relevant quantities now are the time correlations of the modes
$A_{0,l}(t)$.
In the harmonic approximation, each mode
performs a periodic motion with frequency $\omega_l$, so that one has
$$
\langle A_{0,l}(t) \dot A_{0,l'}(0) \rangle = C_l \delta_{ll'}
\sin(\omega_l t)\ ,
$$
$\delta_{ll'}$ being the Kronecker delta,
and one ends up with a formula of the type (\ref{orchestra}), now however
with only a finite number of terms, each corresponding to a
branch of the dispersion relation (omitting the
``acoustic'' branches, for which $A_{0,l}=0$).
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.8\textwidth]{fig_corr}
\end{center}
\caption{\label{fig:due} The response function $\chi$ versus time. Solid
line refers to the system at low temperature, while broken line
refers to the system at high temperature.}
\end{figure}
On the other hand, if the nonlinear terms are taken into account
the motion is no more integrable, and the previous analysis
has to be changed. In the case of a ``small''
nonlinearity, the behavior of the correlations over some
(large) time--scale does not change with respect to the unperturbed
(i.e., linear) case, whereas over a larger time scale
the correlations are expected to decay to zero, so that one should have
$$
\langle A_{0,l}(t) \dot A_{0,l}(0) \rangle = C_l e^{-\sigma_l t}
\sin(\omega_l t)\ .
$$
In conclusion, passing to the Fourier transform, one can presume that
in the case of a small nonlinearity one should get
$$
\int_0^{+\infty} e^{i\omega t} \langle A_{0,l}(t) \dot A_{0,l}(0)
\rangle \,\mathrm{d} t = \frac {f_l}{(\omega_l^2 - \omega^2 +\sigma_l^2) - 2i\sigma_l\omega} \ ,
$$
i.e., the classical expression of Lorentz and Drude
\cite{lorentz,drude}, which these authors interpreted in terms of
motions of material damped ``resonators''.
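Numerically, the passage from the delta peaks to the Lorentz--Drude
profile is easy to exhibit: the Fourier transform of the damped
correlation reproduces the broadened line. A sketch (Python; $C_l$,
$\omega_l$, $\sigma_l$ are hypothetical, with $f_l = C_l\omega_l$):
\begin{verbatim}
import numpy as np

C, w0, sig = 1.0, 1.0, 0.05       # hypothetical mode data
t = np.linspace(0.0, 400.0, 400_000)
corr = C * np.exp(-sig * t) * np.sin(w0 * t)

for w in (0.8, 1.0, 1.2):
    num = np.trapz(corr * np.exp(1j * w * t), t)
    ana = C * w0 / ((w0**2 + sig**2 - w**2) - 2j * sig * w)
    print(w, num, ana)            # broadened (Lorentzian) line at w0
\end{verbatim}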
Thus the line broadening corresponds to a decay of the time correlations
which is induced by the nonlinearity and the
presumably associated chaoticity (or rather partial chaoticity)
of the motions.
Here no damping is active, neither the linear one which was heuristically
introduced by Lorentz and Drude, nor that of the radiation reaction,
which was always taken into consideration by Van Vleck, Planck and many
others. Indeed the radiation reaction, although being actually
present in the original full electrodynamic model, turns out to
be eliminated by the electrodynamic action of the far charges,
through the Wheeler--Feynman mechanism.
So much for the case of a small nonlinearity, i.e., for the case of
what may be called the ``perturbation regime'' (with respect to the
linear one). Instead, in the case of a large nonlinearity
the motion is expected to be completely chaotic, displaying time
correlations completely different from those of the linear case.
In particular the spectrum should be now a continuous one, with no
peaks anymore.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.75\textwidth]{fig_spettro}
\end{center}
\caption{\label{fig:tre} Plot of $|\chi(\omega)|$ versus
$\omega$ for two different temperatures. Circles refer to the system
at a high temperature (no peak), while the triangles, which
exhibit a peak for $\omega\simeq\omega_0$, refer to the system
at a low temperature. In the inset, which concerns the system
at low temperature, the plot of $|\chi(\omega)|$
is reported for $\omega$ near $\omega_0$,
together with the best--fit Lorentzian curve (solid
line). Here, $\omega_0$ is the frequency of the optical branch.}
\end{figure}
On the other hand, when in statistical mechanics one makes reference to
the qualitative properties of the motions with respect to their ordered
(stable)
or chaotic character, it is usually presumed
that in the thermodynamic limit the motions should always be chaotic.
This has the consequence that, in our case, which is concerned with
macroscopic dielectric systems dealt with in a classical frame,
one would always meet with a
continuous spectrum. Now, in the domain of the theory of
classical dynamical systems, particularly in connection with
the so called Fermi--Pasta--Ulam problem, a long
debate is going on about this point, and the results of
numerical computations have so far not been conclusive.
However, rather recently it was analytically proven \cite{andrea}
that in the perturbation regimes significant stability properties do
persist in the thermodynamic limit, and indeed in a form
suited for applications to statistical
mechanics. In particular, in the works \cite{fpu} and \cite{maiocchi}
it was proven that in the FPU and in related models the normal
mode energies remain correlated
for long times also in the thermodynamic limit (see also the numerical
work \cite{cggp}, or the work \cite{plasmi2} concerning plasmas).
Thus one can conclude that the conjecture that
macroscopic systems should perform chaotic motions
is, at least, not always appropriate,
and should be checked in any particular case.
Just in order to give an example which should exhibit
in a qualitative way the features described above, we report here the
results of a numerical computation performed on the classical
one--dimensional alternating
masses model (with 1024 particles), introduced already in the year
1912 by Born and von K\'arm\'an.
Through a numerical simulation of the dynamics we computed
the response function $\chi(t)$, defined
by (\ref{eq:suscettivita2}) with the sum extended over all
particles of the crystal, and then the corresponding spectrum.
We considered two cases relative to a low
temperature and to a larger one. The response functions for the two
temperatures are reported in
Fig.~\ref{fig:due}, whereas
the corresponding spectra (computed as the
discrete Fourier transforms) are reported in Fig.~\ref{fig:tre}.
In the case of low temperature the response function presents a quite
distinct profile, apparently not very dissimilar from a
periodic one. However, a decay occurs at much longer times, as
witnessed by the broadened form of the spectrum (shown in the inset of
Fig.~\ref{fig:tre}). Further results not
reported here show that with increasing temperature the broadening,
and a shift too, become larger and larger. Finally, at some high
temperature, the results reported in the figures show that
the response function presents a decay at a short time, and the
corresponding spectrum is essentially a continuum. For an analogous
phenomenon occurring in a model of interest for plasma
confinement, in which a transition from an ordered to a chaotic
motion is witnessed by the form of the spectrum,
see \cite{plasmi2}.
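For the reader wishing to reproduce qualitatively this kind of
computation, we sketch a minimal version of such a simulation (Python).
All parameters (size, masses, charges, force constants, initial energy)
are hypothetical and much reduced with respect to the actual
computation; the charge--weighted velocity sum plays the role of the
quantity whose correlations enter the response function.
\begin{verbatim}
import numpy as np

N = 256                                          # 1024 in the text
m = np.where(np.arange(N) % 2 == 0, 1.0, 2.0)    # alternating masses
e = np.where(np.arange(N) % 2 == 0, 1.0, -1.0)   # alternating charges
k2, k4 = 1.0, 0.5                                # harmonic / quartic constants
dt, nsteps = 0.05, 200_000

rng = np.random.default_rng(0)
q = np.zeros(N)
p = np.sqrt(0.05 * m) * rng.standard_normal(N)   # the "temperature" knob

def force(q):                                    # periodic chain of springs
    d = np.roll(q, -1) - q
    f = k2 * d + k4 * d**3
    return f - np.roll(f, 1)

J = np.empty(nsteps)                             # dipole current sum_j e_j v_j
f = force(q)
for n in range(nsteps):                          # velocity Verlet
    p += 0.5 * dt * f
    q += dt * p / m
    f = force(q)
    p += 0.5 * dt * f
    J[n] = np.sum(e * p / m)

spec = np.abs(np.fft.rfft(J - J.mean()))**2
omega = 2 * np.pi * np.fft.rfftfreq(nsteps, d=dt)
# At small initial momenta 'spec' shows a sharp optical peak; increasing
# them broadens the peak and finally yields an essentially flat spectrum.
\end{verbatim}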
We leave for a future work the numerical study of the
spectrum for a realistic three--dimensional model of an ionic crystal
involving the microscopic electrodynamic forces,
already considered in \cite{alessio}
in connection with the dispersion curves.
\section{Final comments}\label{8}
So we have complemented the result obtained in \cite{alessio}, by
showing how electric susceptibility can be consistently pro\-ven to
exist for a
dielectric macroscopic body, in a classical microscopic theory in which
the full electrodynamic interactions
among the charges are taken into account. Preliminarily we had to make
essential use
of two global properties of the electrodynamic interactions, i.e.,
the Wheeler--Feynman identity and the Ewald--Oseen resummation
properties. The former was proved here for a general system in the
thermodynamic limit, whereas the latter were proven in \cite{alessio}
for crystals, their proof for a general system being still lacking.
Thus our result is at the moment proven only for crystals, although we
are firmly convinced that it can be extended to cover the case of a
generic dielectric body.
On the basis of such global electrodynamic properties, the dynamical system
can be dealt with as if it were a Hamiltonian one, and in particular the
radiation reaction forces are completely eliminated, so that
absorption and emission appear as completely
symmetrical phenomena of a time--reversal invariant system.
Susceptibility turns out to be
expressed in terms of the time correlation functions of
positions and velocities of the charges, calculated for
motions of the system at equilibrium, i.e.,
in the absence of an external field. Notice however that the system
still contains a trace of the electrodynamic interactions, because the
equations of motion of the charges, that have to be solved in order that
the time correlation functions may be computed, still contain
the force of the ``exciting field'', namely, the field originated by
the far charges, that propagates
in the body as an external field, having however
the correct refractive index.
Having reduced the original electrodynamic system to a Hamiltonian one,
susceptibility was proven to exist through methods of
Green--Kubo type. However, this required overcoming
the difficulties of working in the absence of a Gibbs
measure, which does not exist for systems with attractive Coulomb
interactions.
As for the spectrum, which is the same
for absorption and for emission, we have illustrated how it
reflects the stability properties of the unperturbed
equilibrium motions of the system. For stable (almost periodic) motions, as
occurs with a crystal in the linear approximation, one
has a pure line spectrum. So, the susceptibility
presents the standard form that,
since the first work of Lorentz of the year 1872, was
explained by thinking of the system as if it were composed of
single linear material oscillators with proper frequencies equal to
the observed ones (see the booklet
\cite{pauli} of Pauli).
When chaoticity sets in, as occurs in a
crystal in the presence of nonlinearities, one might conjecture that the
motions are completely chaotic, so that
the lines completely disappear, and a continuous spectrum occurs.
We have however pointed out that the most recent analytical results
appear to support the conjecture that, at least in the case of
crystals, partially ordered motions
persist in the thermodynamic limit (i.e., for a macroscopic system).
Thus the time correlations in
general should decay only after a sufficiently long time,
with the consequence that the lines are
in general broadened. In such a case the spectrum has the form that would
occur if the system were composed of
single linear material oscillators with the observed frequencies,
having in addition suitable linear
dissipative forces. However, no physical dissipative force is
actually present in our system, because, in virtue of the
Wheeler--Feynman identity, the radiation reaction forces
are canceled by the
electrodynamic forces due to the far charges. So, the decay
of correlations occurs in
the familiar dynamical way which characterizes autonomous Hamiltonian
systems that are (at least partially) chaotic,
and has nothing to do with the radiation
reaction force, to which for example Planck, Van Vleck and many others
were thinking. Correspondingly, the poles of susceptibility
in the complex frequency plane quite naturally do lie
in the correct half plane.
In any case, while in the
theory of dynamical systems the presence of a continuous or partly
continuous spectrum is sometimes used as a tool to
qualify the ordered or chaotic character of motions, here the
situation is reversed, and it is the
spectrum itself, in its original physical optical connotation, that is a pure
line spectrum in the case of ordered motions, while presenting broadened lines
or a fully continuous aspect in the case of partly or fully chaotic motions.
\section{Appendix. {Proof of the Wheeler--Feynman identity}}\label{9}
\subsection*{Proof of the identity}
The Wheeler--Feynman identity deals with the classical problem of
the solutions of the inhomogeneous wave equation
\begin{equation*}
\dale A^{\nu} =j^{\nu}(t,\vett x) \ ,
\end{equation*}
for the four--potential $A^{\nu}$, with a given four--current
$j^{\nu}(t,\vett x)$, and states that, possibly under suitable
conditions,
the advanced potential coincides with the retarded one; more
precisely, since their difference is a regular function,
one has
$$
A_{ret}^\nu - A_{adv}^\nu=0\ .
$$
Clearly this is not true for an arbitrary current, and the authors, on
the basis of four arguments, advanced the conjecture that the identity
should hold if the problem is considered as a global one involving, as
they said, all charges ``of the universe''. A much more innocuous
setting in which the problem can be framed is the standard one of
statistical mechanics, where reference is made to the ``thermodynamic
limit''. So we consider the microscopic current inside a domain of
volume $V$ (i.e., the ``truncated'' function $j_V$ which coincides
with $j$ inside the domain and vanishes outside), and take the limit
in which both the volume and the number of elementary charges
constituting the current tend to infinity, in such a way that the
charge density (number of charges per unit volume) remains constant.
Such a framing of the problem is immediately reflected in a deep
mathematical property of the current, because for the current density
one clearly has to give up any property of decrease at infinity, and
one should assume for example only the property $j^\nu\in
L^\infty(\mathbb{R}^3)$, i.e., that the density $j^\nu(t,\vett x)$ be merely
bounded, and hence only locally integrable.
As a possible substitute for the global integrability condition there
is one that quite naturally comes to one's mind for its physical
significance. Moreover, it is somehow analogous to what is sometimes called the
locality condition of quantum field theory, although it rather appears
to express a kind of causality condition. Precisely, we start by
defining the autocorrelation of the current density $j^\nu$ by
\begin{equation}\label{eq:defcorr}
\mathcal{C}_{j^\nu}(s,t,\vett x) \stackrel {\mathrm{def}}{=} \lim_{V\to\mathbb{R}^3} \frac 1{V}
\int_V j^\nu(s,\vett y) j^\nu(s+t,\vett y- \vett x) \mathrm{d}\vett y \ ,
\end{equation}
where the symbol $V$ denotes both the space region of integration and
its (Lebesgue) measure. It is implicitly assumed that the average of
$j^\nu(t,\vett x)$ over the whole space--time vanishes.
Now our global hypothesis reads as follows.
\begin{definition}[Causality Condition]\label{hyp:1}
A source $j(t,\vett x)$ satisfies the Causality Condition iff: 1)
$j\in L^\infty(\mathbb{R}^3)$, 2) the correlation $\mathcal{C}_{j}(s,t,\vett x) $
exists for all $s$, $t$, $\vett x$, and 3) for all $s$ one has
\begin{equation}\label{eq:2}
\mathcal{C}_{j}(s,t,\vett x) = 0 \quad \mbox{for} \qquad c^2t^2 - \vett
x\cdot \vett x \le 0 \ .
\end{equation}
\end{definition}
In other terms we are assuming that there exists no correlation
between space--separated points of space--time.
This requirement is natural from the
physical point of view, because one should assume that the interactions
cannot propagate faster than light, so that it seems
natural to assume that space separated events be
independent.\footnote{We do not discuss here whether this is active or
passive locality in the sense of Nelson \cite{nelson}.}
We now show that the above ``causality condition'' is sufficient to
guarantee the validity of the identity. Indeed the
following Theorem~\ref{teo:main} holds:
\begin{theorem}\label{teo:main}
Consider the wave equation
\begin{equation}\label{eq:onde}
\dale A =j(t,\vett x) \ ,
\end{equation}
having as source a current $j(t,\vett x)$ satisfying the Causality
Condition~\ref{hyp:1}. Let $A_{ret}$ and $A_{adv}$ be the retarded and the
advanced solutions respectively. Then for all $t$ one has
\begin{equation}\label{eq:WF}
\lim_{V\to\infty} \frac 1V \int_V
\Big(A_{ret}(t,\vett x)-A_{adv}(t,\vett x)\Big)^2 \mathrm{d}\vett x = 0 \ .
\end{equation}
\end{theorem}
This theorem states that for causal currents the retarded and advanced
fields are almost equal, i.e., they differ at most on a set having zero
relative measure.
To prove the theorem, let us start by defining the ``truncated''
current $j_V(t,\vett x)$, i.e., the function coinciding with
$j(t,\vett x)$ inside $V$ and vanishing outside it. The wave equation
(\ref{eq:onde}) can be written in Fourier space (with respect to the
spatial coordinates) as
$$
\ddot A_{\vett k} + \omega_k^2 A_{\vett k}= \hat j_V(t,\vett k) \ ,
$$
where $\omega_k=c|\vett k|$, whereas $\hat j_V(t,\vett k)$ is the
space Fourier transform of the truncated current. The retarded and advanced
solutions are then given by
\begin{equation*}
\begin{split}
A^{ret}_{\vett k} & = \int^t_{-\infty}\frac
{\sin\omega_k(t-s)}{\omega_k} \hat j_V(s,\vett k) \mathrm{d} s
\\ A^{adv}_{\vett k} & = - \int_t^{\infty}\frac
{\sin\omega_k(t-s)}{\omega_k} \hat j_V(s,\vett k) \mathrm{d} s \ ,\\
\end{split}
\end{equation*}
so that one gets
\begin{equation*}
A^{ret}_{\vett k}-A^{adv}_{\vett k} = \frac 1{2i\omega_k}\Big(
e^{i\omega_k t} \hat j_V(-\omega_k,\vett k) - e^{-i\omega_k t} \hat
j_V(\omega_k,\vett k) \Big) \ ,
\end{equation*}
where $\hat j_V(\omega,\vett k)$ is the Fourier transform, with
respect to time, of $\hat j_V(t,\vett k)$. Now one uses the
Plancherel theorem, which states
\begin{equation}\label{yyy}
\int_{\mathbb{R}^3} \Big| A^{ret}(t,\vett x)-A^{adv}(t,\vett x)\Big|^2
\mathrm{d} \vett x = \int_{\mathbb{R}^3} \Big|A^{ret}_{\vett k}-A^{adv}_{\vett
k} \Big|^2 \mathrm{d} \vett k \ ,
\end{equation}
to get (use $2|\vett a\cdot \vett b|\le a^2+b^2$)
\begin{equation}\label{eq:diff}
\begin{split}
\int_{\mathbb{R}^3} \Big| & A^{ret}(t,\vett x)-A^{adv}(t,\vett
x)\Big|^2 \mathrm{d} \vett x \\
& \le \int_{\mathbb{R}^3} \frac 1{2\omega_k^2}
\Big(|\hat j_V(-\omega_k,\vett k)|^2 + |\hat j_V(\omega_k,\vett k)|^2
\Big)\mathrm{d} \vett k \\
&= \frac 1{2c^2} \int \Big(|\hat j_V(-ck,\vett k)|^2 + |\hat j_V(ck,\vett k)|^2
\Big)\mathrm{d} k\mathrm{d}\Omega \ ,
\end{split}
\end{equation}
where $\mathrm{d}\Omega$ is the surface element on the unit
sphere in the $\vett k$ space.
We now use the causal property of the current. In fact one has the
following theorem, which will be proven below:
\begin{theorem}\label{teo:due}
If $j(t,\vett x)$ is a causal current in the sense of
Definition~\ref{hyp:1}, then one has
\begin{equation}\label{eq:CCm}
\lim_{V\to+\infty} \frac 1V \int_{\mathcal{C}} |\hat {j}_V(\omega,
\vett k)|^2 \mathrm{d} \Omega\mathrm{d} R =0 \ ,
\end{equation}
on each circular cone $\mathcal{C} \stackrel {\mathrm{def}}{=} \{ |\omega| = \alpha|\vett k|\}$,
with $\alpha\ge c$, where $\mathrm{d}\Omega$ is the surface element on the unit
sphere in the $\vett k$ space, while $\mathrm{d} R$ runs along the cone
generatrix.
\end{theorem}
So,
dividing relation (\ref{eq:diff}) by $V$, using (\ref{eq:CCm}) with
$\alpha=c$ and taking the limit, one gets
(\ref{eq:WF}).
As a comment, one may add that from (\ref{yyy}) it is rather easily seen
that the validity almost everywhere of the Wheeler--Feynman identity implies the
vanishing of the ``spectrum of the current'', i.e. of the limit of
$|\hat {j}_V(\omega,\vett k)|^2/V$, on almost the whole
light cone $\omega^2=c^2\vett k\cdot\vett k$.
So, the problem of proving the Wheeler--Feynman identity
is reduced to proving formula
(\ref{eq:CCm}) of theorem~\ref{teo:due}.
To this end, we start by defining the function
\begin{equation}\label{eq:defK}
K_V(t,\vett x) \stackrel {\mathrm{def}}{=} \int j_V(s,\vett y)j_V(s+t,\vett y +\vett x) \mathrm{d}
s\mathrm{d} \vett y \ ,
\end{equation}
which, apart from the factor $1/V$, is nothing but the correlation
of the truncated current, integrated over $s$, as one would naturally
do in defining correlations for functions having domain in
space--time. One then immediately sees that:
\begin{itemize}
\item one has
\begin{equation}\label{eq:CCK}
\lim_{V\to+\infty}\frac 1V \, K_V(t,\vett x) = 0 \ , \quad
\mbox{if}\quad c^2t^2-\vett x\cdot\vett x \le 0 \;
\end{equation}
\item the Fourier transform $\hat
K_V(\omega,\vett k)$ of $K_V(t,\vett x)$ coincides with $|\hat
j_V(\omega,\vett k)|^2$.
\end{itemize}
Indeed the first property is just a translation of the fact that
$j_V(t,\vett x)$ is causal, i.e., that (\ref{eq:2}) holds, whereas
the second one is nothing but
the ``faltung'' theorem on the Fourier transform of a convolution.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=0.6\textwidth]{WF_mod}
\end{center}
\caption{\label{fig:uno} The domain $\mathcal{D}$ of integration in
formula (\ref{eq:6}).}
\end{figure}
Now, considering the spherical mean of the spectrum
$|\hat j_V(\omega,\vett k)|^2$, one gets
\begin{equation}\label{eq:4}
\begin{split}
\int_{S_2} &| \hat j_V(\omega,\vett k)|^2 \mathrm{d}\Omega = \frac 1{\pi^2}
\int \mathrm{d} t\mathrm{d}\vett x \,K_V(t,\vett x) \int_{S_2} e^{i(\omega t+\vett
k\cdot\vett x)} \mathrm{d} \Omega \\ &= \frac 2{\pi} \int \mathrm{d} t\mathrm{d}\vett x
\,K_V(t,\vett x)\int_{0}^\pi e^{i(\omega t + k r\cos \theta)} \sin
\theta \mathrm{d} \theta \\ &= \frac 2{\pi^2} \int \mathrm{d} t\mathrm{d} r \, r \frac
{e^{i(\omega t + k r)} - e^{i(\omega t - k r)}}{ik}
\int_{S_2} K_V(t,\vett x) \mathrm{d} \Omega\\ &=\frac 2{\pi} \int
\mathrm{d} t\mathrm{d} r\, r \tilde K_V(t,r) \frac { e^{i(\omega t + k r)}
- e^{i(\omega t - k r)}}{ik} \ ,
\end{split}
\end{equation}
where $\tilde K_V(t,r)$ is the spherical mean of
$K_V(t,\vett x)$. Now, if one makes use of the parity property of
the correlation $K_V(t,\vett x)=K_V(-t,-\vett x)$, which easily follows
from the very definition \eqref{eq:defK}, one finds that the spherical
mean $\tilde K_V(t,r)$ is an even function of time, so that the
imaginary part of the integral in the last line of \eqref{eq:4}
vanishes, and one gets
\begin{equation}\label{eq:5}
\begin{split}
\int_{S_2} | \hat j_V(\omega,\vett k)|^2 \mathrm{d}\Omega & =\frac 2{\pi} \int
\mathrm{d} t\mathrm{d} r \, r \tilde K_V(t,r) \\
&\Big[ \frac { \sin(\omega t + k r)}k - \frac
{\sin(\omega t - k r)}k \Big] \ .
\end{split}
\end{equation}
Consider now ``a ray'' in the momentum $(\omega,\vett k)$ space, i.e. all
vectors of the form $(R\omega_0,R{\vett k}_0)$, $R>0$, and integrate relation
\eqref{eq:5} along this ray: one gets
\begin{equation*}
\begin{split}
&\int_0^{\infty }\mathrm{d} R\, \int_{S_2} | \hat j_V(R\omega_0,R\vett
k_0)|^2 \mathrm{d}\Omega =\frac 2{\pi} \int \mathrm{d} t\mathrm{d} r \, r \tilde
K_V(t,r) \frac 1{k_0} \\ &\Big[ \int_0^{\infty }\mathrm{d} R\,\frac {
\sin\big(R(\omega_0 t + k_0 r)\big)}R - \int_0^{\infty }\mathrm{d}
R\,\frac {\sin\big(R(\omega_0 t - k_0 r)\big)}{R} \Big] \ .
\end{split}
\end{equation*}
Now using the relation
\begin{equation*}
\int_0^{\infty }\mathrm{d} R\,\,\frac { \sin \alpha R } R = \left\{
\begin{split}
\frac {\pi}2 \quad &\text{if} \quad \alpha >0 \\ 0\quad
&\text{if} \quad \alpha =0 \\ -\frac {\pi}2\quad &\text{if}
\quad \alpha <0 \ ,
\end{split}
\right.
\end{equation*}
one gets
\begin{equation} \label{eq:6}
\begin{split}
\int_0^{\infty }\mathrm{d} R\, \int_{S_2} | \hat j_V(R\omega_0,R\vett
k_0)|^2 & \mathrm{d}\Omega = \\
& 2 \int_{\mathcal{D}(\omega_0,k_0)} \mathrm{d} t\mathrm{d} r\, r
\tilde K_V(t,r)
\end{split}
\end{equation}
where the domain $\mathcal{D}(\omega_0,k_0)$ (depicted in
figure~\ref{fig:uno}) is the domain in the half--plane $r>0$, bounded
by the two half--lines $\omega_0 t \pm k_0r=0$. Now, dividing by
$V$ and taking the limit, the integral is seen to vanish if
$\omega_0^2-c^2k_0^2\ge0$. In fact, by the causality
property (\ref{eq:CCK}), in that limit $\tilde K_V(t,r)/V$
vanishes for all points
inside the region bounded by the lines $ct\pm r=0$, i.e., in
particular, for all points of $\mathcal{D}(\omega_0,k_0)$. So
(\ref{eq:CCm}) holds and Theorem \ref{teo:main} is proven.
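Incidentally, the sign--function integral used above is immediately
checked numerically, the truncated integral being the sine integral
$\mathrm{Si}(|\alpha| L)\to\pi/2$ as $L\to+\infty$ (Python sketch, with
arbitrary values of $\alpha$):
\begin{verbatim}
import numpy as np
from scipy.special import sici

L = 1.0e4
for a in (2.0, 0.0, -0.5):             # hypothetical values of alpha
    si = sici(abs(a) * L)[0]           # Si(x) = int_0^x sin(u)/u du
    print(a, np.sign(a) * si, np.pi / 2 * np.sign(a))
\end{verbatim}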
\subsection*{Use of the identity in canceling the radiation reaction
forces}
In their paper \cite{wf}, Wheeler and Feynman showed how the condition
$$
A_{ret}^\mu-A_{adv}^\mu=0\
$$
implies the vanishing of the radiation reaction force
(or self force) acting on each charge. One starts from the
relativistic equation of motion for the charge
$$
m\ddot q^\mu = f_{mec}^\mu + \tilde F^{\mu\nu}_{ret}\,\dot q_\nu +\frac
{2e^2}{3c^3}\Big( \dddot q^{\mu} +\ddot q^\nu \ddot q_\nu \dot q^\mu \Big) \ ,
$$
where $e$ and $m$ are the charge and the mass of the particle, dots
represent derivatives with respect to proper time, repeated
index means summation (Einstein convention), $ f_{mec}^\mu$ is a
four--force of mechanical (non electromagnetic) type, while
$\tilde F^{\mu\nu}_{ret}$ is the retarded electromagnetic field due to
all other charges,
evaluated at the four--position $q^\mu$ of the considered charge,
and finally the term $\frac
{2e^2}{3c^3}\Big( \dddot q^{\mu} +\ddot q^\nu \ddot q_\nu \dot q^\mu
\Big)$ is the relativistic expression for the radiation reaction
force.
The electromagnetic field $\tilde F^{\mu\nu}_{ret}$, or
rather the field $\tilde F_{ret,\mu\nu}$, is defined as
$$
\tilde F_{ret,\mu\nu} = \sum \Big(\partial_\mu A_{ret,\nu}^k
- \partial_\nu A_{ret,\mu}^k \Big) \ ,
$$
where $ A_{ret,\nu}^k$ is the retarded field produced by the $k$--th
charge, and the summation is extended over all charges but the
considered one. The field $\tilde F_{ret,\mu\nu}$ can be rewritten in
a more useful form as
\begin{equation*}
\begin{split}
\tilde F_{ret,\mu\nu} &= \sum \Big(\partial_\mu \frac {A_{ret,\nu}^k
+A_{adv,\nu}^k}2
- \partial_\nu \frac {A_{ret,\mu}^k+A_{adv,\mu}^k}2\Big) \\
&+ \sum \Big(\partial_\mu \frac {A_{ret,\nu}^k - A_{adv,\nu}^k}2
- \partial_\nu \frac {A_{ret,\mu}^k- A_{adv,\mu}^k}2\Big)
\ ,
\end{split}
\end{equation*}
because, as we will show below, the Wheeler--Feynman identity implies that
\begin{equation}\label{eq:canc}
\begin{split}
\sum &\Big(\partial_\mu \frac {A_{ret,\nu}^k - A_{adv,\nu}^k}2
- \partial_\nu \frac {A_{ret,\mu}^k- A_{adv,\mu}^k}2\Big)\dot q^\nu = \\
& - \frac{2e^2}{3c^3}\Big( \dddot q^{\mu} +\ddot q^\nu \ddot q_\nu \dot q^\mu
\Big)\ ,
\end{split}
\end{equation}
so that the equations of motion, at the end, can be written as
$$
m\ddot q^\mu = f_{mec}^\mu + \frac {\tilde F^{\mu\nu}_{ret} +
\tilde F^{\mu\nu}_{adv}}2 \dot q_\nu
$$
with
\begin{equation*}
\begin{split}
\frac {\tilde F_{ret,\mu\nu} + \tilde F_{adv,\mu\nu}}2 &= \\
\sum \Big( \partial_\mu &\frac {A_{ret,\nu}^k + A_{adv,\nu}^k}2 -
\partial_\nu \frac {A_{ret,\mu}^k+A_{adv,\mu}^k}2\Big) \ .
\end{split}
\end{equation*}
The new form of the equations of motion clearly shows that they are
indeed reversible and the radiation reaction has disappeared. So, such
a force cannot be held responsible for the emission.
To show how relation (\ref{eq:canc})
follows from the Wheeler--Feynman identity, one first has to notice that
such an identity states that one has
$$
A_{\mu,ret}-A_{\mu,adv} = \sum_{\mbox{all}} \left({A_{ret,\mu}^k-
A_{adv,\mu}^k}\right)=0\ ,
$$
where the sum is extended to all charges. Thus, at all points
$x^\mu\ne q^\mu$ (i.e., at all points different from the
four--position of the considered charge) one has
\begin{equation}\label{eq:zerotot}
\sum_{\mbox{all}} \Big(\partial_\mu \frac {A_{ret,\nu}^k
- A_{adv,\nu}^k}2
- \partial_\nu \frac {A_{ret,\mu}^k- A_{adv,\mu}^k}2\Big) = 0 \ ,
\end{equation}
because the vanishing of the potentials implies the vanishing of their
derivatives.
Now, it was shown by Dirac (see \cite{dirac}) that for the
field $ \frac {A_{ret,\mu}^j- A_{adv,\mu}^j}2$ created by the particle
$q^\mu$ itself one has
\begin{equation*}
\begin{split}
\lim_{x^\mu\to q^\mu} \Big(\partial_\mu \frac {A_{ret,\nu}^j - A_{adv,\nu}^j}2
- &\partial_\nu \frac {A_{ret,\mu}^j- A_{adv,\mu}^j}2\Big) \dot q^{\nu} = \\
&\, \frac {2e^2}{3c^3}\Big( \dddot q^{\mu} +\ddot q^\nu \ddot q_\nu \dot q^\mu
\Big)\ ,
\end{split}
\end{equation*}
while, on the other hand, the remaining
fields are regular at $q^\mu$. So taking the limit of
the previous relation (\ref{eq:zerotot}) for $x^\mu\to q^\mu$, one
gets (\ref{eq:canc}).
Cellular Automata (CAs for short) are both discrete dynamical systems and a model of computation. They
were introduced in the late 1940s independently by John von~Neumann and Stanislaw Ulam to
study, respectively, self-replicating systems and the growth of quasi-crystals.
A $d$-dimensional CA consists of cells aligned on~\ensuremath{{\mathbb{Z}^d}}\xspace that may be in a finite number of states, and are updated synchronously with a local rule, \ie depending only on a finite neighborhood.
All cells operate under the same local rule. The state of all cells at some time step is called
a configuration. CAs are very well known for being simple systems that may exhibit complicated
behavior.
A $d$-dimensional subshift of finite type (SFT for short) is a set of colorings of \ensuremath{{\mathbb{Z}^d}}\xspace by a finite number of
colors containing no pattern from a finite family of forbidden patterns. Most proofs of undecidability
concerning CAs involve the use of SFTs, so both topics are very
intertwined~\cite{Kar1990,Kar1992,Kar1994,Mey2008,Kar2011}. A recent trend in
the study of SFTs has been to give computational characterizations of dynamical properties, which
has been followed by the study of their computational structure and in particular the comparison
with the computational structure of effectively closed sets, which are the subsets of \ensuremath{\left\{0,1\right\}^\NN}\xspace on
which some Turing machine does not halt. It is quite easy to see that SFTs are such sets.
In this paper, we follow this trend and study the limit set $\limitset{\ca A}$ of a CA $\ca A$, which consists of all the
configurations of the CA that can occur after arbitrarily long computations. They were introduced
by \citet{CPY1989} in order to classify CAs. It has
been proved that non-trivial properties on these sets are undecidable
by \citet{Kar1994b,GR2010} for CAs of all dimensions. Limit sets of CAs are subshifts, and
the question of which subshifts may be limit sets of CA has been a thriving topic, see
\cite{Hur1987,Hur1990b,Hur1990,Maa1995,FK2007,DiLM2009,BGK2011}. However, most of these results
are on the language of the limit set or on simple limit sets. Our aim here is to study the
configurations themselves.
In dimension~$1$, limit sets are effectively closed sets, so it is quite
natural to compare them from a computational point of view. The natural measure of
complexity for effectively closed sets is the Medvedev degree \cite{Sim2011a}, which,
informally, is a measure of the complexity of the simplest points of the set. As limit
sets always contain a uniform configuration (wherein all cells are in the
same state), they always contain a computable point and
have Medvedev degree
{\ensuremath{\mathbf{0}}}. Thus, if we want to study their computable structure, we need a finer measure; in
this sense, the set of Turing degrees is appropriate.
It turns out that for SFTs, there is a
characterization of the sets of Turing degrees found by \citet{JeandelV2013:turdeg}, which states
that one may construct SFTs with the same Turing degrees as any effectively closed set
containing a computable point. In the case of limit sets, such a characterization would
be perfect, as limit sets always contain a computable point\footnote{Note that this is not the case
for subshifts: there exist non-empty subshifts containing only non-computable points.}. This is
exactly what we achieve in this article:
\begin{maintheorem}\label{mainthm}
For any effectively closed set $S$, there exists a cellular automaton~$\ca A$ such that
\[
{\ensuremath{\deg_T}}\limitset{\ca A}={\ensuremath{\deg_T}}{S}\cup\{{\ensuremath{\mathbf{0}}}\}\text{.}
\]
\end{maintheorem}
In the way to achieve this theorem, we introduce a new construction which gives us some
control over the limit set. We hope that this construction will lead to other unrelated results
on limit sets of CAs, as it was the case for the construction in \cite{JeandelV2013:turdeg}, see
\cite{JeandelV2013}.
The paper is organized as follows. In Section~\ref{prelim} we recall the usual definitions concerning
CAs and Turing degrees. In Section~\ref{requirements} we give the reasons for each trait of the
construction which allows us to prove Theorem~\ref{mainthm}. In Section~\ref{construction} we
give the actual
construction. We end the paper by a discussion, in Section~\ref{CB}, on the Cantor-Bendixson ranks of the
limit sets of CAs. The choice has been made to have colored figures, which are best viewed on screen.
\section{\label{prelim}Preliminary definitions}
A ($1$-dimensional) \emph{cellular automaton} is a triple $\ca A = (Q, r, \delta)$, where $Q$ is the finite set of \emph{states},
$r > 0$ is the \emph{radius} and $\delta : Q^{2r + 1}\to Q$ the \emph{local transition function}.
An element of~$i\in\ensuremath{\mathbb{Z}}$ is called a \emph{cell}, and the set $\inter{i - r}{i + r}$ is the
\emph{neighborhood} of~$i$ (the elements of which are the \emph{neighbors} of~$i$). A
\emph{configuration} is a function $\cacf c : \ensuremath{\mathbb{Z}}\to Q$. The local transition function induces a
\emph{global transition function} (that can be regarded as the automaton itself, hence the notation),
which associates to any configuration~$\cacf c$ its \emph{successor}:
\[
\ca A(\cacf c) : \left\{\begin{array}{ccl}
\ensuremath{\mathbb{Z}} &\to& Q\\
i &\mapsto& \delta(\cacf c(i - r), \dots, \cacf c(i - 1), \cacf c(i), \cacf c(i + 1), \dots, \cacf c(i + r))\text{.}
\end{array}\right.
\]
In other words, all cells are finite automata that update their states in parallel, according to the same local transition rule,
transforming a configuration into its successor.
If we draw some configuration as a horizontal bi-infinite line of cells, then add its successor above it, then the successor of
the latter and so on, we obtain a \emph{space-time diagram}, which is a two-dimensional representation of some
computation performed by~$\ca A$.
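As an illustration of these definitions, here is a minimal simulation of
a one-dimensional CA on a finite window with periodic borders (a Python
sketch; elementary rule~$110$, with $Q=\{0,1\}$ and $r=1$, serves as a
hypothetical example):
\begin{verbatim}
import numpy as np

def step(c, rule=110):
    """One application of the global map (periodic borders)."""
    l, r = np.roll(c, 1), np.roll(c, -1)
    idx = 4 * l + 2 * c + r                  # the neighborhood read by delta
    table = (rule >> np.arange(8)) & 1       # the local transition function
    return table[idx]

c = np.random.default_rng(1).integers(0, 2, 80)
diagram = [c]                                # space-time diagram, bottom-up
for _ in range(40):
    c = step(c)
    diagram.append(c)
\end{verbatim}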
A \emph{site} $(i, t)\in\ensuremath{\mathbb{Z}}^2$ is a cell~$i$ at a certain time step~$t$ of the computation we consider
(hereinafter there will never be any ambiguity on the
automaton nor on the computation considered).
The \emph{limit set} of~$\ca A$, denoted by $\limitset{\ca A}$, is the set of all the configurations that can
appear after arbitrarily many computation steps:
\[
\limitset{\ca A} = \bigcap_{k\in\ensuremath{\mathbb{N}}}\ca A^k(Q^\ensuremath{\mathbb{Z}})\text{.}
\]
For surjective CAs, the limit set is the set of all possible configurations $Q^\ensuremath{\mathbb{Z}}$, while
for non-surjective CAs, it is the set of all configurations containing no
orphan of any order, see \cite{Hur1990b}. An \emph{orphan of order~$n$} is a finite word~$w$
which has no preimage by $\ca A^n_{|Q^{|w|}}$.
An \emph{effectively closed set}, or \emph{\ensuremath{\Pi^0_1}\xspace class}, is a subset~$S$ of \ensuremath{\left\{0,1\right\}^\NN}\xspace for which there
exists a Turing machine that, given any $x\in\ensuremath{\left\{0,1\right\}^\NN}\xspace$, halts if and only if $x\not\in S$.
Equivalently, a class $S\subseteq\ensuremath{\left\{0,1\right\}^\NN}\xspace$ is \ensuremath{\Pi^0_1}\xspace if there exists a computable set~$L$
such that $x\in S$ if and only if no prefix of~$x$ is in~$L$.
It is then quite easy to see that limit sets of CAs are \ensuremath{\Pi^0_1}\xspace classes: for any limit set, the set of forbidden patterns is the set of all orphans of all orders, which form a recursively enumerable set, since it is
decidable whether a finite word is an orphan of a given order.
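Checking orphan-hood of a given order indeed amounts to a finite search
over preimages. The following sketch (Python, again with rule~$110$,
which is not surjective since its rule table is not balanced) finds the
shortest orphans of order~$1$:
\begin{verbatim}
from itertools import product

def local(l, c, r, rule=110):
    return (rule >> (4 * l + 2 * c + r)) & 1

def image(word):                    # image of a finite word (radius 1)
    return tuple(local(*word[i:i + 3]) for i in range(len(word) - 2))

def orphans(n):                     # orphans of order 1 and length n
    reachable = {image(w) for w in product((0, 1), repeat=n + 2)}
    return [w for w in product((0, 1), repeat=n) if w not in reachable]

n = 2
while not orphans(n):
    n += 1
print(n, orphans(n)[0])             # a shortest orphan of order 1
# Replacing 'image' by its k-fold iterate yields orphans of order k.
\end{verbatim}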
For $x, y\in\ensuremath{\left\{0,1\right\}^\NN}\xspace$, we say that $x{\ensuremath{\leq_T}} y$ if $x$ is computable by a Turing machine using $y$
as an oracle. If $x{\ensuremath{\leq_T}} y$ and $x{\ensuremath{\geq_T}} y$, $x$ and~$y$ are said to be Turing-equivalent,
which is noted $x{\ensuremath{\equiv_T}} y$. The \emph{Turing degree} of~$x$, noted ${\ensuremath{\deg_T}} x$, is its equivalence
class under relation~${\ensuremath{\equiv_T}}$. The Turing degrees form a lattice whose bottom is {\ensuremath{\mathbf{0}}},
the Turing degree of computable sequences.
Effectively closed sets are quite well understood from a computational point of view, and
there has been numerous contributions concerning their Turing degrees, see the book of
\citet{CR1998} for a survey. One of the most interesting results may be that there exist
\ensuremath{\Pi^0_1}\xspace classes whose members are two-by-two Turing incomparable \cite{JS1972}.
\section{\label{requirements}Requirements of the construction}
The idea to prove Theorem~\ref{mainthm} is to make a construction that embeds computations of a Turing
machine that will check a read-only oracle tape containing a member of the
\ensuremath{\Pi^0_1}\xspace class $S$ that will have to appear ``non-deterministically''. The following constraints have to be addressed.
\begin{itemize}
\item Since CAs are intrinsically deterministic, this non-determinism will
have to come from the ``past'', \ie
from the ``limit'' of the preimages.
\item The oracle tape, the element of \ensuremath{\left\{0,1\right\}^\NN}\xspace that needs to be checked,
needs to appear entirely on at least one
configuration of the limit set.
\item Each configuration of the limit set containing the oracle tape
needs to have exactly one head of the Turing machine, in order
to ensure that there really is a computation going on in the associated space-time diagram.
\item The construction, without any computation, needs to have a very simple limit set,
\ie it needs to be computable, and in particular countable; this to ensure that no complexity
overhead will be added to any configuration containing the oracle tape, and that ``unuseful''
configurations of the limit set --~the configurations that do not appear in a
space-time diagram corresponding to a computation~-- will be computable.
\item The computation of the embedded Turing machine needs to go backwards, this to ensure
that we can have the non-determinism. And an error in the computation must ensure
that there is no infinite sequence of preimages.
\item The computation needs to have a beginning (also to ensure the presence of a head),
so the construction needs some marked beginning, and the representation of the oracle and work tapes in the construction need to disappear at this point, otherwise by compactness the part without any computation could
be extended bi-infinitely to contain any member of \ensuremath{\left\{0,1\right\}^\NN}\xspace, thus leading to the full set
of Turing degrees.
\end{itemize}
There are other constraints that we will discuss during the construction, as they arise.
In order to make a construction complying to all these constraints, we reuse, with heavy modifications,
an idea of~\citet{JeandelV2013:turdeg}, which is to construct a sparse grid.
However, their construction, being meant for subshifts, requires to be completely
rethought in order to work for CAs. In particular, there was no determinism in this construction,
and the oracle tape did not need to appear on a single column/row, since their result was on two-dimensional subshifts.
\section{\label{construction}The construction}
\subsection{\label{sparsegrid}A self-vanishing sparse grid}
In order to have space-time diagrams that constitute sparse grids, the idea is to have columns
of squares, each of these columns containing fewer and fewer squares as we move to the left, see
fig.~\ref{butterfly:baselayer}. The CA
has three categories of states:
\begin{itemize}
\item a \emph{killer state}, which is a spreading state that erases anything on its path;
\item a \emph{quiescent state}, represented in white in the figures; its sole purpose is to mark the spaces that are ``outside'' the construction;
\item some \emph{construction states}, which will be constituted of signals and background colors.
\end{itemize}
In order to ensure that just with the signals themselves it is not possible to encode anything
non-computable in the limit set, all signals will need to have, at all points, at any time, different
colors on their left
and right, otherwise the local rule will have a killer state arise. Here are the main signals.
\begin{itemize}
\item Vertical lines: serve as boundaries between columns of squares and
form the left/right sides of the squares.
\item SW-NE and SE-NW diagonals: used to mark the corners of the squares, they are
signals of respective speeds $1$ and $-1$. Each time
they collide with a vertical line (except for the last square of the row), they bounce
and start the converse diagonal of the next square.
\item Counting signal: will count the number of squares inside a column; every time
it crosses the SW-NE diagonal of a square it will shift to the left. When it is superimposed
to a vertical line, it means that the square is the last of its column, so when it crosses
the next SE-NW diagonal, it vanishes and with it the vertical line.
\item Starting signals: used to start the next column to the left, at the bottom of one
column. Here is how they work.
\begin{itemize}
\item The bottommost signal, of speed~$-\frac 14$, is at the boundary between the empty part of the
space-time diagram and the construction. It is started $4$~time steps after the collision
with the signal of speed~$-\frac 13$.
\item The signal of speed~$-\frac 13$ is started just after the vertical line sees the incoming SE-NW diagonal of the first square
of the row on the right, at distance~$3$\footnote{That can be done, provided the radius of the CA is large enough.} (the diagonal will collide with the vertical line $2$~time steps after the start of that signal).
\item At the same time as the signal of speed~$-\frac 13$ is created, a signal of speed~$-\frac 12$ is
generated. When this signal collides with the bottommost signal, it bounces into
a signal of speed~$\frac 14$ that will create the first SE-NW diagonal of the first square of
the row of squares of the left, $4$~time steps after it will collide with the vertical line.
\end{itemize}
\end{itemize}
On top of the construction states, except on the vertical lines, we add a parity layer $\{0, 1\}$: on a
configuration, two neighboring cells of the construction must have different parity bits,
otherwise a killer state appears. On the left of a vertical line there has to be parity~$1$ and
on the right parity~$0$, otherwise the killer state pops up again. This is to ensure that the columns
will always contain an even number of squares.
\begin{figure}[ht!]
\centering
\scalebox{.75}{\includepicture{butterfly-0.mps}}
\caption{\label{butterfly:baselayer} The sparse grid construction: it is based on columns containing
a finite number of squares, whose number decreases when we go left. Note that the figure is
squeezed vertically.}
\end{figure}
The following lemmas address which types of configurations may occur in the limit set of this CA. First
note that any configuration wherein the construction states do not appear in the right order has no preimage.
\begin{lemma}\label{limitlem:squares}
The sequence of preimages of a segment
ended by consecutive vertical lines
(and containing none) is a slice of a column of squares of even side.
\end{lemma}
\begin{proof}
Suppose a configuration contains two vertical-line symbols, then to be in the limit set, in between
these two symbols there need to be two diagonal symbols, one for the SE-NW diagonal and one for the SW-NE one,
a symbol for the counting signal, and in between these signals there need to be the appropriate
colors: there is only one possibility for each of them. If this is not the case, then the configuration
has no preimage.
Also, the distance between the first vertical line and the SE-NW diagonal needs to be the
same as the distance between the second vertical line and the SW-NE diagonal, otherwise the signals
at the bottom --~the ones starting a column, that are the only preimages of the first diagonals~-- would have, in one case, created a vertical line in between, and in the other case, not started at the same time on
the right vertical.
The side of the squares is even, otherwise the parity layer has no preimage.
\end{proof}
\begin{lemma}\label{limitlem:distances}
A configuration of the limit set containing at least three vertical-line symbols needs to
verify, for any three consecutive symbols, that if the distance between the first one and the second one is~$k$, then the distance between the second one and the third one needs to be $(k + 2)$.
\end{lemma}
\begin{proof}
Let us take a configuration containing at least three vertical-line symbols, take three consecutive ones.
The states between them have to be of the right form as we said above. Suppose the first of these
symbols is at distance~$k_1$ of the second one, which is at distance~$k_2$ of the third one. This means
that the first (resp. second) segment defines a column of squares of side~$k_1$
(resp.~$k_2$). It is clear that the second column of squares cannot end before the first one.
Now let~$i$ be the position of the counting signal of the first column and~$j$ the
distance between the SW-NE diagonal and the left vertical line. The preimage of the first segment
ends $(k_1i + j)$ (resp. $(k_1(i - 1) + j)$) steps before if the counting signal is on the left (resp. right)
of the SW-NE diagonal. Then, the preimages of the left and right vertical lines of this column are the
creating signals. Before the signal created on the right bounces on the one of speed~$-\frac 14$ created on the
left, it collides with the one of speed~$-\frac 13$, thus determining the height of the squares on the right
column of squares. So $k_1 = k_2 - 2$.
\end{proof}
\begin{lemma}\label{limitlem}
A configuration having two vertical-line symbols pertaining to the limit set needs to
verify one of the following statements.
\begin{itemize}
\item It is constituted of a finite number of vertical lines.
\item It appears in the space-time diagram of fig.~\ref{butterfly:baselayer}.
\item It is constituted of an infinite number of vertical lines, then starting
from some position it is equal on the right to some (shifted) line of
fig.~\ref{butterfly:baselayer}.
\end{itemize}
\end{lemma}
\begin{proof}
We place ourselves in the case of a configuration of the limit set.
Because of lemma~\ref{limitlem:squares}, two consecutive vertical lines at distance~$k$ from each other
define a column of squares. In a space-time diagram they belong to, on their left there
necessarily is another column of squares, because of the starting signal
generated at the beginning of the left vertical line, except when $k = 3$, in which case there is
nothing on the left. In this column, the vertical lines are at distance~$(k - 2)$,
see lemma~\ref{limitlem:distances}. So, if there is an infinite number of vertical lines,
either it is of the form of fig.~\ref{butterfly:baselayer}, or there is some
killer state coming from infinity on the left and ``eating'' the construction.
\end{proof}
\subsection{\label{computationingrid}Backward computation inside the grid}
We now wish to embed the computation of a reversible Turing machine inside the
aforementioned sparse grid, which for this purpose is better seen as a lattice. The fact
the TM is reversible allows us to embed it backwards in the CA. We will below denote by
\emph{TM time} (resp. \emph{CA time}) the time going forward for the Turing machine (resp.
the CA); on a space-time diagram, TM time goes from top to bottom, while CA time goes from
bottom to top (\cf arrows in fig.~\ref{computation:inbutterfly}). That way, the beginning
of the computation of the TM will occur in the first (topmost) square of the first
(leftmost) column of squares.
We have to ensure that any computation of the TM is possible, and in particular ensure that
such a computation is consistent over time; the idea is that at the first TM time step,
\ie the moment the sparse grid disappears, the tape is on each of the vertical line
symbols, but since these all disappear a finite number of CA steps before, we have to
compel all tape cells to shift to the right regularly as TM time increases.
Moreover, we want to force the presence of exactly one head (there could be none if
it were, for instance, infinitely far right). To do that, the grid is divided into three
parts that must appear in this order (from left to right): the left of the head, the right
of the head (together referred to as the computation zone) and the unreachable zone (where
no computation can ever be performed), resp. in blue, yellow and green in
fig.~\ref{computation:inbutterfly}.
The vertices of our lattice are the top left corners of the squares, each one marked by the
rebound of a SE-NW diagonal on a vertical line, while the top right corners will just serve
as intermediate points for signals. More precisely, if we choose (arbitrarily) the top left
corner of the first square of the first column to appear at site~$(0, 0)$, then for any $i,
j\in\ensuremath{\mathbb{N}}$, the respective sites for the top left and top right corners of $s_{i, j}$, the
$(j + 1)$-th square of the $(i + 1)$-th column, are the following
(cf.~fig.~\ref{computation:inbutterfly}):
\[
\left\{\hspace{-1mm}
\begin{array}{l@{\;}l@{\;}l}
s^\ell_{i, j} &=& (i(i + 1), -2(i + 1)j)\\
s^r_{i, j} &=& ((i + 1)(i + 2), -2(i + 1)j)\text{.}
\end{array}\right.
\]
Fig.~\ref{computation:mt} illustrates a computation by the TM, with the three
aforementioned zones, as it would be embedded the usual way (but with reverse time) into a
CA, with site~$(i, -t)$ corresponding to the content of the tape at $i\in\ensuremath{\mathbb{N}}$ and TM
time~$t\in\ensuremath{\mathbb{N}}$.
Fig.~\ref{computation:inter} represents another, still simple, embedding, which is a
distortion of the previous one: the head moves every even time step within a tape
that is shifted every odd time step, so that instead of site~$(i, -t)$, we have
two sites, $(i + t, -2t)$ and $(i + t,
-2t - 1)$, resp. the \emph{computation site} (big circle on fig.~\ref{computation:inter})
and the \emph{shifting site} (small circle on fig.~\ref{computation:inter}). The head only
reads the content of the tape when it lies on a computation site. This type of embedding
can easily be realized forwards or backwards (provided the TM is reversible).
Our embedding, derived from the latter, is drawn on fig.~\ref{computation:inbutterfly}. The
``only'' difference is the replacement of sites $(i + t, -2t)$ and $(i + t, -2t - 1)$ by
sites $s^\ell_{i, t}$ and $s^\ell_{i, t + 1}$. Notice that as the number of squares in a
column is always finite, each square can ``know'' whether its top left corner is a
computation or a shifting site with a parity bit. More precisely, the $j$-th square (from
bottom to top) of a column has a computation site on its top left if and only if $j$ is
even.
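As a quick illustration, the following small script (our own sketch, not part
of the construction) computes the sites $s^\ell_{i, j}$ and $s^r_{i, j}$ from
the formulas above and classifies the top left corners according to the parity
rule just stated.
\begin{verbatim}
# Sites of the sparse-grid lattice: s_l / s_r are the top left and top
# right corners of the (j+1)-th square of the (i+1)-th column.  Column i
# contains 2*(i+1) squares, and a top left corner is a computation site
# if and only if j is even (otherwise it is a shifting site).

def s_l(i, j):
    return (i * (i + 1), -2 * (i + 1) * j)

def s_r(i, j):
    return ((i + 1) * (i + 2), -2 * (i + 1) * j)

def site_kind(j):
    return "computation" if j % 2 == 0 else "shifting"

for i in range(3):                  # the first three columns
    for j in range(2 * (i + 1)):    # the 2*(i+1) squares of column i
        print(i, j, s_l(i, j), s_r(i, j), site_kind(j))
\end{verbatim}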
Let $s_{i, j}$ be a square of our construction. $s^\ell_{i, j}$ is either a computation
site or a shifting site. In the latter case, it is supposed to receive the content of a
cell of the TM tape with an incoming signal of speed~$-1$. All it has to do is to send it
to $s^\ell_{i, j - 1}$ (at speed~$0$), which is a computation site. In the former case,
however, things a slightly more complicated. The content of the tape has to be transmitted
to $s^\ell_{i - 1, j - 1}$ (which is a shifting site). To do that, a signal of speed~$0$ is
sent and waits for site~$s^r_{i - 1, j}$, which sends the content to $s^\ell_{i - 1, j -
1}$ with a signal of speed~$-1$ along the SE-NW diagonal. The problem is to recognize which
$s^r$~site is the correct one. Fortunately, there are only two possibilities: it is either
the first or the second $s^r$~site to appear after (in CA time, of course) $s^\ell_{i, j}$
on the vertical line. The first case corresponds exactly to the unreachable zone (where
$j\leq i$), so the correct site can be recognized, provided the three zones are marked. That
there are no other cases is due to the number of squares in a column, which is only $2(i + 1)$.
Another issue is the superposition of such signals. Here again, there are only two cases:
in the unreachable zone there is none, whereas in the computation zone a signal of
speed~$0$ from a computation site can be superimposed on the signal of speed~$0$ sent by
the shifting site just above it. As aforesaid, there is no other case because of the
limited number of $s_i$ squares. Thus, there is no problem in keeping the number of states of
the CA finite, since the number of signals going through the same cell is limited to two at
any given time.
While the two parts of the computation zone are separated by the presence of a head,
the unreachable zone lies to the right of a signal that is sent from any computation site that
has two diagonals (one from the left and one from the right) below it (indicated as circles
on fig.~\ref{butterfly:baselayer}), goes at speed~$0$ until the next $s^r$~site, then at
speed~$1$ (along SW-NE diagonals) to the second next shifting site, and finally at
speed~$0$ again, to the next computation site (cf.~fig.~\ref{computation:inbutterfly}),
which also has two diagonals below it if the grid contains no error. Another way to detect
the unreachable zone is to detect that the counting signal crossed the SW-NE diagonal
exactly two CA time steps after it has crossed the SE-NW diagonal. This means that the
unreachable zone is structurally coded in the construction.
Now only the movements of the head remain to be described (in black on
fig.~\ref{computation:inbutterfly}); a short summary sketch follows the list below.
Let $s^\ell_{i, j}$ be a computation site containing the head.
\begin{itemize}
\item If the previous move of the head (previous because we are
in CA time, that is, in reverse TM time) was to the left, the next computation site
is the one just above, that is, $s^\ell_{i, j - 2}$. The head is thus transferred by a
simple signal of speed~$0$.
\item If the previous move was to stand still, the next
computation site is $s^\ell_{i - 1, j - 2}$. It can be reached by a signal of
speed~$0$ until the second next $s^r$~site, from which a signal of speed~$-1$
(along a SE-NW diagonal) is launched, to be replaced by another signal of speed~$0$
from $s^\ell_{i - 1, j - 1}$ on.
\item If the previous move was to the right, the
next computation site is $s^\ell_{i - 2, j - 2}$. It can be reached by a signal
of speed~$0$ until the second next $s^r$~site, from which a signal of speed~$-1$
(along a SE-NW diagonal) is launched, to be replaced by another signal of
speed~$0$ from $s^\ell_{i - 1, j - 1}$ on, which itself waits for the next
$s^r$~site (which is $s^r_{i - 2, j}$) to start another signal of speed~$1$
(along a SW-NE diagonal), which is finally followed by a last signal of
speed~$0$ from $s^\ell_{i - 2, j - 1}$ on.
\end{itemize}
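The three cases can be condensed into the following sketch (ours, for
illustration only), giving the computation site to which the head is
transferred as a function of the TM move being undone.
\begin{verbatim}
# Target computation site for the head at s_l(i, j), depending on the
# previous TM move (previous because TM time runs backwards in CA time).

def next_computation_site(i, j, previous_move):
    if previous_move == "left":     # straight up: one signal of speed 0
        return (i, j - 2)
    if previous_move == "stay":     # one column to the left
        return (i - 1, j - 2)
    if previous_move == "right":    # two columns to the left
        return (i - 2, j - 2)
    raise ValueError(previous_move)
\end{verbatim}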
\begin{figure}[htp!] \centering \begin{minipage}{.66\linewidth}
\hspace{-.25cm}\subfloat[]{\label{computation:inbutterfly}\scalebox{.6}{\includepicture{butterfly-1.mps}}}
\end{minipage} \hspace{.25cm}\begin{minipage}{.3\linewidth}
\subfloat[]{\label{computation:mt}\scalebox{.6}{\includepicture{butterfly-2.mps}}}\\
\subfloat[]{\label{computation:inter}\scalebox{.6}{\includepicture{butterfly-3.mps}}}
\end{minipage} \caption{\label{butterfly:computation}The embedding of a Turing machine
computation in the sparse grid (\ref{computation:inbutterfly}), compared to the usual
embedding (\ref{computation:mt}) and a slightly distorted one (\ref{computation:inter}).
The paths followed by the content of each cell of the tape are in red and orange (two
colors just to keep track of the signals), while the one of the head is in black. The
arrows indicate the next move of the head (for TM time, going towards the bottom). The
green background denotes the zone the head cannot reach, while the computation zone is in
blue on the left of the head and in yellow on its right.} \end{figure}
\subsection{\label{hooper}The computation itself}
As we said before, the computation will take place on the computation sites, which will
contain two kinds of tape cells: one for the oracle and one for the work. In the unreachable
zone there are only oracle cells, which do not change over time except for the shifting.
Now we want to eliminate all space-time diagrams corresponding to rejecting computations
of some Turing machine $M$. \citet{Ben1973} has proved that for any Turing machine, we
can construct a reversible one computing the same function. So a first idea would just
be to encode this reversible Turing machine in the sparse grid; however there is no way
to guarantee that the work tape that was non-deterministically inherited from the
past corresponds to a valid configuration, and by the time the Turing machine ``realizes''
this it will be too late: there will already exist configurations containing some oracle
that we would otherwise have rejected.
The solution to this problem is to use a robust Turing machine in the sense of \citet{Hoo1966},
that is to say a Turing machine that regularly rechecks its whole computation. \citet{KO2008}
have constructed reversible such machines. In those constructions the machines
work on a bi-infinite tape, which has the drawback that some infinite
side of the tape might never be checked; this is not the case here, so we can modify the machine
so that on an infinite computation it visits all cells of the tape (we omit the details for
brevity's sake).
In terms of limit sets, this means that if some oracle is rejected by the machine, then it
must have been rejected an infinite number of times in the past (CA time). So, only oracles
pertaining to the desired class may appear in the limit set.
Furthermore, even if some killer state coming from the right eats the grid, at some point in
the past of the CA it will be in the unreachable zone and stay there forever, so the
computation from that moment on even ensures that the oracle computed is correct. However,
this does not matter, because in this case the configurations of the corresponding space-time
diagram that are in the limit set are uniform both on the right and on the left except for
a finite part in the middle, and are hence computable.
\section{\label{CB}Cantor-Bendixson rank of limit sets}
The \emph{Cantor-Bendixson derivative} of some set $S\subseteq \Sigma^\ensuremath{\mathbb{Z}}$, with $\Sigma$ finite,
is denoted by $\CBd{S}$ and consists of all configurations of~$S$ except the isolated ones. A
configuration $\cacf c$ is said to be \emph{isolated} if there exists a pattern~$P$ such that
$\cacf c$ is the only configuration of~$S$ containing $P$ (up to a shift). For any
ordinal $\lambda$ we can define $\CBdn{S}{\lambda}$, the Cantor-Bendixson derivative of
rank $\lambda$, inductively:
\[
\begin{array}{l@{\;\;}c@{\;\;\;}ll}
\CBdn{S}{0} &=& S\\
\CBdn{S}{\lambda + 1} &=& \CBd{\CBdn{S}{\lambda}}\\
\CBdn{S}{\lambda} &=& \displaystyle{\bigcap_{\gamma<\lambda}}\CBdn{S}{\gamma}
& \text{for $\lambda$ a limit ordinal.}
\end{array}
\]
The \emph{Cantor-Bendixson rank} of $S$, denoted by $\CB{S}$, is defined as the first ordinal~$\lambda$ such that $\CBdn{S}{\lambda + 1} = \CBdn{S}{\lambda}$. In particular, when $S$ is countable,
$\CBdn{S}{\CB{S}}$ is empty. An element~$s$ is of rank~$\lambda$ in~$S$ if $\lambda$ is the
least ordinal such that $s\notin\CBdn{S}{\lambda}$. For more information about Cantor-Bendixson
rank, one may skim~\cite{Kechris}.
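As a toy example (ours, not from the original text), take
$S\subseteq\{0, 1\}^\ensuremath{\mathbb{Z}}$ to be the set consisting of the
all-zero configuration together with all configurations containing exactly one
symbol~$1$. Each configuration with a single~$1$ is isolated, as it is, up to
a shift, the only element of~$S$ containing the pattern~$1$; hence $\CBd{S}$
contains only the all-zero configuration, $\CBdn{S}{2}=\emptyset$, and
$\CB{S} = 2$.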
The Cantor-Bendixson rank corresponds to
the height of a configuration with respect to a preorder on patterns, as noted by \citet{BDJ2008}.
Thus, it gives some information on the way the limit set is structured pattern-wise. A
straightforward corollary of the construction above is the following.
\begin{corollary}\label{CBrank}
There exists a constant $c\leq 10$ such that for any \ensuremath{\Pi^0_1}\xspace class $S$, there exists a
CA $\ca A$ such that
\[
\CB{\limitset{\ca A}} = \CB{S}+c\text{.}
\]
\end{corollary}
Here the constant corresponds to the pattern overhead brought by the sparse-grid construction.
\section*{Acknowledgments}\label{sec:Acknowledgments}
This work was sponsored by grants EQINOCS ANR 11 BS02 004 03 and TARMAC ANR 12 BS02 007 01.
The authors would like to thank Nicolas Ollinger and Bastien Le Gloannec for some useful discussions.
\printbibliography
\end{document}
\noindent
In this work, we establish a connection between two seemingly
disparate topics and techniques: mock modular forms (holomorphic parts
of harmonic Maass forms) and
non-critical
values of $L$-functions of cusp forms. To describe this connection, we
first outline each of these topics and some of the corresponding questions
that arise.
A very fruitful technique that has recently emerged in the broader area
of
automorphic forms and its arithmetic applications is based on
``completing"
a holomorphic but not quite automorphic form into a harmonic Maass
form by addition of a suitable non-holomorphic function. This method
originates in its modern form in Zwegers' PhD thesis \cite{Zw}.
Zwegers completed all of the mock theta
functions introduced by Ramanujan in
his famous last letter to Hardy \cite{Ra}, including
\begin{equation*}
f(q):=1+\sum_{n=1}^{\infty}\frac{q^{n^2}}{(1+q)^2(1+q^2)^2\cdots (1+q^n)^2}.
\end{equation*}
To be more precise, Zwegers found a (purely) non-holomorphic
function
\begin{equation} \label{nonholint}
N_f(z):= \int_{- \overline{z}}^{i\infty} \frac{\Theta_f(w)}{\sqrt{z+w}}dw,
\end{equation}
where $\Theta_f$ is some explicit weight $\frac32$ cuspidal theta
function, so
that $$f(q)+N_f(z)$$
transforms like an automorphic form of weight ``dual"
to that of $f$, i.e., of weight $\frac12$ in our case (throughout we write $q:=e^{2 \pi i z}$).
Such completions proved to be useful in obtaining information for
the original function ($f$ in our context), including exact
formulas for Fourier coefficients, made use of, e.g., in the proof in \cite{BO1} of the
Andrews-Dragonette Conjecture \cite{An,Dr}.
On the other hand, one can also reverse
the question and start with a
modular form, define
an integral $N$ resembling the one in (\ref{nonholint}) and find a
holomorphic function $F$ such that
$N+F$ transforms like a modular form. Such
``lifts" were constructed for cusp forms of weight $\frac12$ in terms of
combinatorial series by the first author, Folsom, and Ono \cite{BFO} and
by the first
author and Ono for general cusp forms \cite{BO4}. Recently, also lifts for
non-cusp forms were found \cite{DIT}.
Obstructions to modularity arising from functions like $f$
may also be viewed in terms of critical values of $L$-functions
\cite{BGKO} in a way
we will describe later.
We next introduce the second topic, non-critical values of
$L$-functions. We will first outline the background concerning general
values of $L$-functions and critical values. Let
$f$ be an element of $S_k$, the space of cusp forms of weight $k \in
2\mathbb N$ for $\mathrm{SL}_2(\mathbb Z)$,
and
let $L_f(s)$ denote its $L$-function. Special values of $L$-functions
have
been the focus of intense research in arithmetic algebraic geometry and
analytic number theory, because they provide deep insight to $f$ and
associated arithmetic
and geometric objects. Several of the outstanding conjectures in number
theory are related to special values of $L$-functions, e.g. the
ones posed by
Birch-Swinnerton-Dyer, Beilinson and Bloch-Kato
(see, for example, \cite{KZ}). In particular, they are commonly interpreted as regulators in $K$-theory \cite{Sch88}.
Among the special values, more is known about the \textit{critical} values
which, for our purposes, are $L_f(1), L_f(2), \dots ,L_f(k-1)$
(see \cite{De, KZ} for an intrinsic characterization). For instance,
Manin's Periods Theorem \cite{M} implies that, when $f$ is an eigenform of
the Hecke operators, its critical values are algebraic linear combinations
of two constants depending only on $f$. This result was established by
incorporating a ``generating function" of the critical values into a
cohomology which has a rational structure.
The generating
function is the \textit{period polynomial}
$$
r_f(X):=\int_0^{i\infty} f(w)(w-X)^{k-2} dw
\text{,}
$$
and each of its coefficients is an explicit multiple of a critical value
of $L_f(s)$ (see Lemma \ref{r_f} for the precise statement).
The period polynomial of $f$ satisfies the \textit{Eichler-Shimura
relations}:
$$
r_f|_{2-k}(1+S)=r_f\Big|_{2-k}\left(1+
U+U^2\right)=0 \qquad \text{
with
$S:=\left(\begin{smallmatrix} 0 & -1\\ 1 & 0\end{smallmatrix}\right)$,
$U:=\left(\begin{smallmatrix} 1 & -1\\ 1 & 0\end{smallmatrix}\right)$}
$$
in terms of the action $|_m$ on $G: \HH \to \C$ defined for each $m \in 2
\mathbb Z$ by
\begin{equation*}\label{|}
G|_m\g(X):=G(\g X) (cX+d)^{-m} \qquad \text{for $\g=\left (
\begin{smallmatrix} * & * \\
c & d \end{smallmatrix}\right ) \in \mathrm{SL}_2(\R)$.}
\end{equation*}
Because of the importance of these Eichler-Shimura relations, the
subspace of $V_{k-2}$, the space of all polynomials of degree at most $k-2$,
consisting of the polynomials satisfying them has been
studied independently. It is called the \textit{space of period polynomials}
and is denoted by $W_{k-2}$.
Non-critical values are much less understood and there are
even some ``negative" results such as that of Koblitz \cite{Ko}, asserting
that, in a strong sense, there can not be a Period Theorem for
non-critical values. In any case, it is generally expected that the
algebraic
structure of such values is more complicated than that of critical values.
Nevertheless, in \cite{CD} it is shown that it is possible to
define ``generating series" of non-critical values, which can further be
incorporated into a cohomology similar to the Eichler
cohomology. This fits into the philosophy of Manin's
\cite{M2} and Goldfeld's \cite{Go} cohomological interpretation of
values and derivatives of $L$-functions, respectively.
The generating series is a function $r_{f, 2}$ on the
Poincar\'e upper-half plane $\HH$ given by
$$
r_{f, 2}(z):=\int_0^{i\infty} \frac{F_f(w)}{(wz-1)^k} dw,
$$
where $F_f$ is the \textit{Eichler integral}
associated to $f$
$$
F_f(z):=\int_{z}^{i\infty} f(w)(w-z)^{k-2} dw.
$$
The function $r_{f, 2}$ is the direct counterpart of the period
polynomial $r_f$ associated to critical values.
The non-critical values are obtained from $r_{f, 2}$ as ``Taylor
coefficients" of
$r_{f, 2}$ (see Lemma \ref{r_{f,2}}), just as critical values are
retrieved as coefficients of the period polynomial $r_f$. The
ambient space of functions consists of harmonic functions rather
than
polynomials and the action is $|_k$ instead of $|_{2-k}.$
The \textit{first} link between the aforementioned two topics emerges as we
use techniques from the theory of mock modular forms to
intrinsically
interpret the constructions that were associated to
non-critical values in \cite{CD}. Those constructions
were in some respects ad hoc and
not as intrinsic as those relating to critical values.
For example, whereas the period polynomial is
expressed as a constant multiple of
$$
F_f|_{2-k}(S-1),
$$
the generating function $r_{f, 2}(z)$ has an analogous expression
only up
to an explicit ``correction term". That problem would seem to be
insurmountable, because $r_{f, 2}(z)$ is not invariant under $S$.
However, in this paper we show that it is
exactly thanks to
the ``correction term" that our generating
function $r_{f, 2}$ can be completed into a function which belongs
to a natural analogue of the space of period polynomials $W_{k-2}$.
We
show that an appropriate counterpart of
$$
W_{k-2}:=\{P
\in V_{k-2}; P|_{2-k}(1+S)=P|_{2-k}\left(1+U+U^2\right)=0\}$$
is
$$
W_{k, 2}:=
\left\{\mathcal{P}: \mathfrak H \to \C;\,
\xi_k(\mathcal{P}) \in V_{k-2}; \,
\mathcal{P}|_k(1+S)=\mathcal{P}|_k\left(1+U+U^2\right)=0\right\}.
$$
Here, $\xi_{k}$ is a key operator in the theory of mock modular
forms defined, for $y:=$Im$(z)$ by
\[
\xi_{k}:=2iy^{k}\overline{\frac{d}{d\overline{z}}}.
\]
Our first main result then is
\begin{theorem}\label{W_k,2intro}
Let $k \in 2 \mathbb N$ and
let $f$ be a weight $k$ cusp form. Then the function
\begin{equation*}\label{rf2}
\widehat r_{f, 2}(z):=r_{f, 2}(z)-
\int_{-\overline z}^{i \infty} \frac{r_f(w)}{(w+z)^k}dw
\end{equation*}
belongs to the space $W_{k, 2}$.
\end{theorem}
Theorem \ref{W_k,2intro} suggests the name \textit{mock period
function}
for $r_{f, 2}$ (see Definition \ref{definemock}).
The completion of $r_{f, 2}$ by a purely non-holomorphic
term does not
cause us to lose
information about non-critical values, because it only introduces
critical values (see Lemma \ref{tilde2rf}),
which from our viewpoint can be thought of as understood.
The \textit{second} link between the two main subjects of the
paper amounts to a technique that allows us to encode information about
the
mock period function of $f \in S_k$ into a certain ``higher order" version
of
harmonic Maass forms. This is the direct analogue of a recent
result
proved for
critical values by the first author, Guerzhoy, Kent, and Ono (Theorem 1.1
of \cite{BGKO}) and in a different guise earlier in \cite{Fay}:
\begin{theorem}\label{periodPoincareintro1} (\cite{Fay, BGKO})
For each $f \in S_k$, there is a harmonic
Maass form $M_f$
with holomorphic part $M_f^+$, such that
\begin{equation*}\label{moddef}
r_f(-z)=M_f^+|_{2-k}(1-S).
\end{equation*}
\end{theorem}
The authors further use similar techniques to establish a
structure theorem
for $W_{k-2}$ (Theorem 1.2 of \cite{BGKO}).
The first step of our approach towards establishing the counterpart of
Theorem \ref{periodPoincareintro1} for non-critical values is to identify
the objects taking the role played by harmonic Maass forms in \cite{BGKO}.
The class of these objects is formed by
\textit{sesquiharmonic Maass forms} (see Definition \ref{sesquiharm}).
Sesquiharmonic Maass form are natural higher
order versions of harmonic Maass forms, the first example of which has
appeared in a
different context \cite{DI, DIT}. (See also \cite{Br1, Br2, Br3} for an
earlier application
of the underlying method).
The main difference of sesquiharmonic to harmonic Maass forms is that the
latter are annihilated by
the weight $k$ Laplace-operator
\[
\Delta_k:=-y^2\left(\frac{\partial^2}{\partial
x^2}+\frac{\partial^2}{\partial
y^2}\right)+iky\left(\frac{\partial}{\partial x}+i\frac{\partial}{\partial
y}\right)
\text{,}
\]
whereas sesquiharmonic Maass forms are annihilated by
\[
\Delta_{k, 2}:=\Delta_{2-k}\circ
\xi_k=-\xi_k\circ\xi_{2-k}\circ\xi_k=\xi_k\circ\Delta_k.
\]
In Section \ref{Sesquisection}, we will show that we can isolate a
``harmonic" piece from each sesquiharmonic Maass form, paralleling the way
we can isolate a ``holomorphic" piece from each harmonic Maass form.
This construction allows us to formulate and prove the analogue of
Theorem \ref{periodPoincareintro1}:
\begin{theorem}\label{periodPoincareintro}
For each $f \in S_k$, there is a sesquiharmonic
Maass form $M_{f, 2}$
with harmonic part $M_{f, 2}^{+-}$, such that
\[
\widehat{r}_{f, 2}(z)= M_{f, 2}^{+-}(z)\Big|_k(S-1).
\]
\end{theorem}
The above two techniques we just described can be considered as a
new version of the ``completion" method, this time applied to the level of
$1$-cohomology.
The \textit{third} main result and technique of this paper is a
mock Eichler-Shimura isomorphism for $W_{k, 2}.$ The
classical Eichler-Shimura isomorphism ``parametrizes" $W_{k-2}$
in terms of cusp forms. It can be summarized as:
\begin{theorem}\label{ESisom1} (e.g., \cite{KoZ}) Every $P \in W_{k-2}$ can
be written as
$$
P(X)=r_f(X)+r_g(-X)+a|_{2-k}(S-1)
$$
for unique $f, g \in S_k$ and $a \in \C$.
\end{theorem}
In Section \ref{ESchar},
we show that $W_{k, 2}$ can be
``parametrised" by cusp forms in a very similar fashion:
\begin{theorem}\label{ESisom2} Every $P \in W_{k, 2}$ can
be written as
$$P=\widehat r_{f, 2}+\widehat r^*_{g, 2}+a F|_k(S-1)
$$
for unique $f, g \in S_k$ and $a \in \C$.
Here, $F$ is an element of an appropriate space of functions on $\HH$
and $\widehat r^*_{g, 2}$ is a period function associated to $r_g(-X)$. (They
will be defined precisely in Section \ref{ESchar}.)
\end{theorem}
The construction of $\widehat r^*_{g, 2}$ is of independent
interest and involves (regularized) integrals (see Section \ref{ESchar}).
Some of the
techniques are related to the
theory of periods of weakly holomorphic forms as studied by Fricke \cite{Fr}.
It is surprising that
pairs of cusp forms suffice for this Mock Eichler-Shimura
isomorphism just as they suffice for the classical
Eichler-Shimura isomorphism. A priori, the spaces $W_{k-2}$ and
$W_{k, 2}$ appear to be very different, especially since, as shown here,
they are associated with critical and non-critical values respectively,
which are expected to have completely different behaviour.
In the final section we interpret our two first main results
cohomologically (Theorem \ref{periodPoincare'}) in order to highlight the essential
similarity of the construction we associate here to non-critical
values with the corresponding setting for critical values. Since we
have an entirely analogous reformulation (see \eqref{ESBGKO}) of the
Eichler-Shimura theory and the
results of \cite{BGKO}, Theorem \ref{periodPoincare'} justifies the claim
that our constructions form the non-critical value counterpart of the
corresponding results in the case of \textit{critical} values of
$L$-functions.
A suggestive comparison of this cohomological interpretation with
Hida's evidence for a possible description of non-critical values in
terms of non-top degree cohomology (cf. \cite{Hi}) might also be made. We
intend to
return to possible explicit connections with Hida's construction in a
future work.
\vspace{0.5em}\noindent
\textit{Acknowledgments}: To be entered after the referee's report is received.
\section{Cusp forms and periods associated to their $L$-values}
\label{periods}
Set $\G:= {\mathrm SL}_{2}(\Z)$.
Let $f(z)=\sum_{n=1}^{\infty} a(n) q^n$
($q=e^{2 \pi i z}$)
be a cusp form of weight $k$ for $\Gamma$.
Further let $L_f(s)$ be the entire function obtained by analytic continuation of
the series $L_f(s)=\sum_{n=1}^{\infty} a(n)/n^s$ originally defined in an appropriate
right half plane.
In the Eichler-Shimura-Manin
theory one associates to $f$ an Eichler integral $F_f: \HH \to \C$
and a period
polynomial $r_f: \C \to \C$
as follows:
\begin{eqnarray*}
F_f(z)&:=&\int_{z}^{i\infty} f(w)(w-z)^{k-2} dw,\\
r_f(z)&:=&\int_0^{i\infty} f(w)(w-z)^{k-2}dw.
\end{eqnarray*}
These objects are connected to each other and intimately related to
critical values of $L_f(s)$ (see e.g. \cite{KoZ}, Section 1.1):
$L_f(1), \dots, L_f(k-1)$.
\begin{lemma} \label{r_f} For every $f \in S_k,$ we have
\begin{eqnarray*}
F_f|_{2-k}(1-S)&=&r_f, \\
r_f(z)&=& -\frac{(k-2)!}{(2 \pi i)^{k-1}}
\sum_{n=0}^{k-2}
\frac{L_f(n+1)}{(k-2-n)!}(2 \pi i z)^{k-2-n}.
\end{eqnarray*}
\end{lemma}
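As a concrete illustration, the following numerical sketch (ours, not from the
original; it assumes the Python library \texttt{mpmath} is available) checks
Lemma \ref{r_f} for $f=\Delta$, the discriminant form of weight $k=12$. To
avoid evaluating the truncated $q$-expansion near the real line, both
integrals are folded onto $[i, i\infty)$ using $f(-1/w)=w^k f(w)$, which gives
$$
r_f(z)=\int_i^{i\infty} f(w)\left((w-z)^{k-2}-(1+zw)^{k-2}\right)dw, \qquad
\frac{\Gamma(s)L_f(s)}{(2\pi)^s}=\int_1^{\infty} f(it)\left(t^{s-1}+t^{k-1-s}\right)dt,
$$
the second identity holding with this sign because $(-1)^{k/2}=1$ for $k=12$.
\begin{verbatim}
import mpmath as mp

mp.mp.dps = 25
k, N = 12, 40

tau = [0] * (N + 1)          # q-expansion of Delta = q prod (1 - q^n)^24
tau[1] = 1
for n in range(1, N + 1):
    for _ in range(24):      # multiply in place by (1 - q^n)
        for m in range(N, n - 1, -1):
            tau[m] -= tau[m - n]

def f(t):                    # Delta(it), reliable for t >= 1
    return mp.fsum(tau[n] * mp.exp(-2 * mp.pi * n * t)
                   for n in range(1, N + 1))

def L(s):                    # L_Delta(s) via the folded completed L-function
    lam = mp.quad(lambda t: f(t) * (t**(s - 1) + t**(k - 1 - s)), [1, mp.inf])
    return (2 * mp.pi)**s / mp.gamma(s) * lam

def r(z):                    # the period polynomial r_f(z), folded
    g = lambda t: f(t) * ((1j*t - z)**(k - 2) - (1 + 1j*z*t)**(k - 2))
    return 1j * mp.quad(g, [1, mp.inf])

z = mp.mpc('0.3', '0.2')
rhs = -mp.factorial(k - 2) / (2j * mp.pi)**(k - 1) * sum(
    L(n + 1) / mp.factorial(k - 2 - n) * (2j * mp.pi * z)**(k - 2 - n)
    for n in range(k - 1))
print(abs(r(z) - rhs))       # small: both sides of the lemma agree
\end{verbatim}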
We shall consider the analogues of $F_f$ and $r_f$ yielding non-critical
values of $L_f(s)$. Set
\begin{eqnarray*}
F_{f, 2}(z)&:=&\int_{-\overline{z}}^{i\infty} \frac{F_f(w)}{(w+z)^k} dw,\\
r_{f, 2}(z)&:=&\left.\left( \int_0^{i\infty} \frac{F_f(w)}{(w+z)^k} dw \right) \right
|_k
S=\int_0^{i\infty} \frac{F_f(w)}{(wz-1)^k} dw.
\end{eqnarray*}
The function $r_{f, 2}$ is not a polynomial, but the next lemma, proved in
\cite{CD}, shows that we
can still retrieve values of $L$-functions of $f$ as its ``Taylor
coefficients at $0$". It also explains the reason for letting $S$ act on
the integral in the definition of $r_{f, 2}$ in an apparent disanalogy to
$r_f$:
\begin{lemma} \label{r_{f,2}} For every $f \in S_k$ and
$m \in \mathbb{N}$, we have
\begin{equation*} \lim_{z \to 0^+} \frac{d^m }{dz^m}\left( r_{f, 2}(z) \right)
=i^{k+m}\frac{(m+k-1)! m!}{(k-1) (2 \pi)^{m+k}}
L_f(k+m).
\end{equation*}
\end{lemma}
In \cite{CD}, it is also proved that $F_{f, 2}$ and $r_{f, 2}$ are linked
in
a
way that parallels the link between $F_f$ and $r_f$. For our purposes, we
will need a reformulation of that result:
\begin{proposition} \label{SuperM} For every $f \in S_k,$ we have
\begin{equation} \label{SuperMformula}
\left. F_{f, 2}\right|_k(S-1)=
r_{f, 2}-\widetilde r_{f, 2}
\end{equation}
with
$$
\widetilde r_{f, 2}(z):=
\int_{-\overline z}^{i\infty} \frac{r_f(w)}{(w+z)^k}dw.$$
\end{proposition}
\begin{proof}
From the proof of Theorem 3 of \cite{CD}, it follows that
$$
\left.F_{f, 2}(z)\right|_k(S-1)= r_{f, 2}(z) +
\left.\left ( \int_{-\overline z}^{0}
\frac{r_f(w)}{(w+z)^k}dw \right ) \right|_k S.
$$
The last term may now easily be simplified using that
$r_f \in W_{k-2}$.
\end{proof}
The correction term
$\widetilde r_{f, 2}$ may be explicitly expressed in terms of \emph{critical} values, and it does not affect the analogy with the relation between
$F_f$ and $r_f$.
\begin{lemma} \label{tilde2rf}
For all $f \in S_k$,
\begin{equation*} \label{tilde2rfform}
\widetilde r_{f, 2}(z)=
-(k-2)! \sum_{n=0}^{k-2}\sum_{\ell=0}^{k-2-n}
\frac{L_f(n+1)}{\ell! (k-2-n-\ell)!(1+n+\ell)}(-4 \pi i z)^{\ell} (-4 \pi
y)^{-1-n-\ell}.
\end{equation*}
\end{lemma}
\begin{remark} We note that all of the exponents of $y$ are
negative, thus $\widetilde r_{f, 2}$ is a purely non-holomorphic
function.
\end{remark}
\begin{proof}
From Lemma \ref{r_f},
\begin{align*}
\int_{-\overline z}^{i\infty} \frac{r_f(w)}{(w+z)^k}dw
&=
(k-2)!
\sum_{n=0}^{k-2} i^{-n+1}
\frac{L_f(n+1)}{(2 \pi)^{n+1}(k-2-n)!} \int_{-\overline z}^{i\infty} \frac{w^{k-2-n}}{(w+z)^k}dw.
\end{align*}
Making the change of variable $w \to w-z$ and then using the Binomial
Theorem, we obtain that
the integral equals
\begin{equation*}
\sum_{\ell=0}^{k-2-n} \binom{k-2-n}{\ell}(-z)^\ell
\frac{(2iy)^{-1-n-\ell}}{1+n+\ell}.
\end{equation*}
This implies the result.
\end{proof}
Because of Lemma \ref{tilde2rf}, it is natural to complete $r_{f,2}$ by
subtracting this ``lower-order" non-holomorphic function to
obtain
$$
\widehat r_{f, 2}:=r_{f, 2}-\widetilde r_{f, 2}.
$$ Lemma
\ref{r_{f,2}} and Proposition
\ref{SuperM} suggest, by comparison with Lemma \ref{r_f}, that
$\widehat
r_{f, 2}$ can be viewed as an analogue of the period polynomial
associated to non-critical values.
In the next section, we will show that this interpretation can be
formalized in a way that justifies the name \textit{mock
period function} for $r_{f, 2}$.
\section{Mock period functions}
One of the reasons that the theory of periods has been so successful in
proving important results about the values of $L$-functions is that they
satisfy relations that allow us to view them as elements of a space with a
rational structure. This space is, in effect, the first cohomology
group of Eichler cohomology. However, to make the relation with
$L$-functions more immediate we will use the more
concrete formulation and notation of
\cite{KoZ}.
In the last section, we will give a cohomological interpretation of our
results.
For $n \in \mathbb N$, let $V_n$ denote the space of polynomials of degree at
most $n$ acted upon by $|_{-n}$, and set
$$
W_n:=\left\{P \in V_n; P|_{-n}(1+S)=P|_{-n}\left(1+U+U^2\right)=0
\right\}.
$$
The period polynomial $r_f$ associated to $f \in S_k$ belongs to
$W_{k-2}$ (cf. \cite{KoZ}).
According to the well-known Eichler-Shimura Isomorphism (cf. \cite{KoZ}
and the references therein), the polynomials
characterize the entire space.
\begin{theorem}\label{ES} (Eichler-Shimura Isomorphism) Let $k$ be an
even positive integer. Then for each $P \in W_{k-2}$ there exists a
unique
pair $(f, g) \in S_k \times S_k$ and $c \in \C$ such that
$$
P(z)=r_f(z)+r_g(-z)+c\left(z^{k-2}-1\right).
$$
\end{theorem}
\begin{remark} Usually, the second term is written as
$\overline {r_g(\bar z)}$, that is
the polynomial obtained by replacing each coefficient of the
polynomial $r_g$ with its
conjugate. However, this may be rewritten as
\begin{equation} \label{conj}
\overline {r_g(\overline z)} =
\int_0^{i\infty}\overline{g(w)}(\overline{w} - z)^{k-2} d \overline{w}
=-
\int_0^{i\infty}\overline{g(-\overline{w})}(-w - z)^{k-2}
dw=-r_{g^c}(-z).
\end{equation}
Recall that $g^c(z):=\overline{g(-\overline{z})} \in S_k$.
\end{remark}
We will show that there is a space similar to $W_{k-2}$
within which the
completed period-like functions $\widehat r_{f, 2}$
live.
We first
recall the operator $\xi_k:=2iy^k\frac{\overline{d}}{d\overline{z}}$
($y:=$Im$(z)$).
This map satisfies $\xi_k(f|_k \g)=(\xi_k f)|_{2-k} \g$ for all
$\g \in \G$, and thus maps weight $k$ automorphic objects to
weight $2-k$ automorphic objects.
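For later use we record a standard computation (added here for the reader's
convenience): since $\frac{\partial y}{\partial \overline{z}}=\frac{i}{2}$,
$$
\xi_k\left(y^{1-k}\right)
=2iy^k\,\overline{(1-k)\,y^{-k}\,\tfrac{i}{2}}
=2iy^k\,(1-k)\,y^{-k}\left(-\tfrac{i}{2}\right)=1-k,
$$
a nonzero constant, while $\xi_k$ annihilates holomorphic functions and
$\xi_{2-k}$ annihilates constants.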
We then set
$$
W_{k, 2}:=
\left\{\mathcal{P}: \mathfrak H \to \C; \xi_k(\mathcal{P}) \in V_{k-2};
\mathcal{P}|_k\left(1+S\right)=\mathcal{P}|_k\left(1+U+U^2\right)=0 \right\}.
$$
This space consists not of polynomials but of functions which
become polynomials only after application of the
$\xi_k$-operator.
The next theorem explains in what sense $r_{f, 2}$ can be
considered a mock period function.
\begin{theorem} \label{W_k,2} Let $k \in 2 \mathbb N$ and $f
\in S_k$.
Then the function $\widehat r_{f, 2}$ is an element of
$W_{k, 2}$.
\end{theorem}
\begin{proof} The first condition follows from the identity
\begin{equation}\label{*}
\xi_k\Big(\widehat{r}_{f, 2}(z)\Big)=-2i y^k\overline{\frac{d}{d\overline{z}}\int_{-\overline{z}}^{i \infty } \frac{r_f(w)}{(w+z)^k}\
dw}
=(2i)^{1-k}r_{f^c}(z)\in V_{k-2},
\end{equation}
where for the last equality we used (\ref{conj}).
The relation
$$
\left.\widehat{r}_{f, 2} \right|_k(1+S)=0
$$
follows directly from the identity in Proposition \ref{SuperM}.
To deduce the relation for $U$ we first note that
$F_{f, 2}|_k T =F_{f, 2}$, which follows directly from
$f(w+1)=f(w)$.
Thus
$$
\left.
F_{f, 2}\right|_k(1-S)=\left.F_{f, 2}\right|_k(1-TS)=\left.F_{f, 2}\right|_k(1-U)
$$
and the claim follows from $U^3=1$.
\end{proof}
\begin{remark}\label{harmo}
It is immediate that, if $\xi_k(\mathcal{P}) \in
V_{k-2}$, then $\Delta_k(\mathcal{P})=-\xi_{2-k} \circ \xi_k(\mathcal{P})=0$, and thus
Theorem \ref{W_k,2} implies that $\widehat{r}_{f, 2}$ is harmonic.
\end{remark}
This theorem suggests the name \textit{mock period function} for
$r_{f, 2}$ as well as the more general
\noindent
\begin{definition} \label{definemock}
A holomorphic function $p_2: \HH\to\C$ is called a
\textit{mock
period function} if there exists a
$\widetilde{p}_2 \in \oplus_{j=1}^{k-1}y^{-j}V_{k-2}$
such
that
\[ p_2+\widetilde{p}_2\in W_{k, 2}.\]
\end{definition}
The Eichler-Shimura relations for $\widehat r_{f, 2}$ proved in Theorem
\ref{W_k,2}
are reflected in \textit{mock} Eichler-Shimura relations for
$r_{f, 2}$.
\begin{theorem}\label{mockperiod}
We have
\begin{align*}
r_{f, 2}(z)\Big|_k (1+S) &=\int_0^{i\infty}\frac{r_f(w)}{(w+z)^k}\ dw,\\
r_{f, 2}(z)\Big|_k \Big(1+U+U^2\Big)&=\int_{-1}^{i\infty}\frac{r_f(w)}{(w+z)^k}\
dw + \int_{-1}^{0}\frac{r_f|_{2-k}\widetilde{U}(w)}{(w+z)^k}\ dw
\end{align*}
with
$\widetilde{U}:=\left(\begin{smallmatrix}
-1 & -1 \\ 1 & 0\end{smallmatrix}\right)= SU^2S^{-1}$.
\end{theorem}
\begin{proof}
By \eqref{SuperMformula} and Theorem \ref{W_k,2} it
suffices to consider the action of $1+S$ and $1+U+U^2$
on $\widetilde{r}_{f, 2}$ only. Further, since $r_f\in W_{k-2}$,
we have
\begin{equation} \label{periodrel}
r_f\Big|_{2-k}(1+S)=r_f\Big|_{2-k}\Big(1+U+U^2\Big)=0.
\end{equation}
For the first identity we have by (\ref{periodrel})
\begin{align*}
\widetilde{r}_{f, 2}(z)\Big|_k S
&=z^{-k}\int_{\frac{1}{\overline{z}}}^{i\infty}\frac{r_f(w)}{\left(w-\frac{1}{z}\right)^k}\ dw
\\
&=\left(\int_{-\overline{z}}^{i\infty}-\int_0^{i\infty}\right)\frac{r_f|_{2-k}S(w)}{(w+z)^k}\ dw
=-\widetilde{r}_{f, 2}(z)+\int_0^{i\infty}\frac{r_f(w)}{(w+z)^k}\ dw.
\end{align*}
To prove the second identity, we observe that (\ref{periodrel}) implies that
\begin{equation} \label{periodrel2}
r_f\Big|_{2-k}\Big(1+\widetilde{U}+\widetilde{U}^2
\Big)= 0.
\end{equation}
The change of variables $w \to \widetilde{U}w$
gives
\[
\widetilde{r}_{f, 2}(z)\Big|_k U=
\int_{-\overline{z}}^0\frac{r_f|_{2-k}\widetilde{U}(w)}{(z+w)^k}\
dw.
\]
Likewise, the change of variables $w \to \widetilde{U}^2 w$
yields
\[
\widetilde{r}_{f, 2}(z)\Big|_k U^2=
\int_{-\overline{z}}^{-1}\frac{r_f|_{2-k}\widetilde{U}^2(w)}{(w+z)^k}\ dw.
\]
Thus
\begin{multline*}
\widetilde{r}_{f, 2}(z)\Big|_k\Big(1+U+U^2\Big)
=\int_{-\overline{z}}^{i\infty}\frac{r_f|_{2-k}
\left(1+\widetilde{U}+\widetilde{U}^2\right)(w)}{(w+z)^k}\ dw \\
-\int_0^{i\infty}\frac{r_f|_{2-k} \widetilde{U}(w)}{(z+w)^k}\ dw
-\int_{-1}^{i\infty}\frac{r_f|_{2-k} \widetilde{U}^2(w)}{(w+z)^k} dw.
\end{multline*}
Applying (\ref{periodrel2}) we obtain
the claim.
\end{proof}
\section{Sesquiharmonic Maass forms}\label{Sesquisection}\label{section3}
In this section, we introduce
new automorphic objects related
to non-critical values of $L$-functions.
\begin{definition}\label{sesquiharm}
A real-analytic function $\mathcal{F}: \HH\to\C$ is called a \textit{sesquiharmonic
Maass form of weight $k$} if the following conditions are satisfied:
\begin{enumerate}
\item[i)] We have for all $\gamma\in \G$ that
$\mathcal{F}|_k \gamma=\mathcal{F}$.
\item[ii)] We have that
$\Delta_{k, 2}\left(\mathcal{F}\right)=0$.
\item[iii)] The function
$\mathcal{F}$ has at most linear exponential growth at infinity.
\end{enumerate}
\end{definition}
\noindent
We denote the space of such functions by $H_{k, 2}$. The subspace
of
harmonic weak Maass forms, i.e., those sesquiharmonic forms $\mathcal{F}$ that satisfy
\[
\Delta_k(\mathcal{F})=-\xi_{2-k}\circ \xi_k(\mathcal{F})=0
\]
is denoted by $H_k$. Our definition in particular implies that
\[
\xi_k\left(H_{k, 2}\right)\subset H_{2-k}.
\]
The holomorphic differential $D := \frac{1}{2 \pi i} \frac{d}{d z}$ plays a role originating in Bol's identity. It is well-known that (see \cite{BF})
$$
\xi_{2-k}\left(H_{2-k}\right) \subset M_k^!, \qquad
D^{k-1}\left(H_{2-k}\right) \subset M_k^!.
$$
Here, $M_k^!$ denotes the space of weakly holomorphic modular
forms, i.e., those meromorphic modular forms whose poles may only lie at
the cusps. This suggests the following distinguished subspaces.
\begin{definition} \label{H^+} For $k \in 2 \mathbb N$, set
\begin{enumerate}
\item[i)] $H_{2-k}^+:=\{f \in H_{2-k}; D^{k-1}(f) \in S_k\}$ and
$H_{2-k}^-:=\{f \in H_{2-k}; \xi_{2-k}(f) \in S_k\}$,
\item[ii)] $H_{k, 2}^+:=\{f \in H_{k, 2}; \xi_{k}(f) \in H^+_{2-k}\}$.
\end{enumerate}
\end{definition}
Employing the theory of Poincar\'e series, we will prove
that the restriction of $\xi_k$ on $H_{k, 2}^{+}$ surjects onto
$H_{2-k}^+.$
In general, for functions $\varphi$ that are translation invariant,
we define the following Poincar\'e series
\begin{equation} \label{generic}
\mathcal{P}_k(\varphi; z):=\sum_{\gamma\in\Gamma_\infty\setminus
\G }\varphi\Big|_k\gamma(z)
\end{equation}
whenever this series converges absolutely.
Here, $\G_{\infty}$ is the
set of translations in $\G$. For $k >2$, the classical
Poincar\'e series, spanning $S_k$ for $m>0$, are
in this notation
\[
P_k(m; z):=\mathcal{P}_k\left(q^m; z\right).
\]
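To make this concrete, the following sketch (ours, not from the original; it
assumes \texttt{mpmath}) evaluates a truncation of $P_k(m; z)$. The cosets of
$\G_{\infty}\backslash \G$ are indexed by coprime bottom rows $(c, d)$; we
identify $(c, d)$ with $(-c, -d)$, which for even $k$ merely rescales the sum,
and we test weight-$12$ invariance under~$S$.
\begin{verbatim}
from math import gcd
import mpmath as mp

def ext_gcd(x, y):           # returns (g, u, v) with u*x + v*y = g
    if y == 0:
        return x, 1, 0
    g, u, v = ext_gcd(y, x % y)
    return g, v, u - (x // y) * v

def poincare(k, m, z, C=60):
    """Truncated P_k(m; z); since m is an integer, the choice of (a, b)
    completing the bottom row (c, d) does not affect e(m * gamma(z))."""
    z = mp.mpc(z)
    total = mp.exp(2j * mp.pi * m * z)   # the coset (c, d) = (0, 1)
    for c in range(1, C + 1):
        for d in range(-C, C + 1):
            if gcd(c, d) != 1:
                continue
            _, u, v = ext_gcd(d, c)      # u*d + v*c = 1
            a, b = u, -v                 # so a*d - b*c = 1
            gz = (a * z + b) / (c * z + d)
            total += (c * z + d)**(-k) * mp.exp(2j * mp.pi * m * gz)
    return total

z = mp.mpc(0.13, 0.8)                    # modularity check under S:
print(abs(poincare(12, 1, -1/z) * z**(-12) - poincare(12, 1, z)))
\end{verbatim}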
For all $m \in \mathbb Z \setminus\{0\}$, the \textit{Maass Poincar\'e
series} are defined by \cite{He}
\[
\field{P}_{k}(m, s; z):=\mathcal{P}_k\left(\varphi_{m, s}; z\right)
\]
with
\[
\varphi_{m, s}(z):=\mathcal{M}_s^k(4\pi my)e(mx).
\]
Here, $e(x):=e^{2 \pi i x}$ and
\[
\mathcal{M}_s^k(u):=|u|^{-\frac{k}{2}}M_{\sgn(u)\frac{k}{2},
s-\frac12}\big(|u|\big),
\]
where $M_{\nu,\mu}$ is the usual $M$-Whittaker function
with the integral representation
\begin{equation}\label{Mwhit}
M_{\mu,
\nu}(y)=y^{\nu+\frac12}e^{\frac{y}{2}}\frac{\Gamma(1+2\nu)}{\Gamma\left(\nu+\mu+\frac12\right)
\Gamma\left(\nu-\mu+\frac12\right)}\int_0^1t^{\nu+\mu-\frac12}
(1-t)^{\nu-\mu-\frac12}e^{-yt}\ dt
\end{equation}
for $\text{Re}\left(\nu\pm
\mu+\frac12\right)>0$.
Using that as $y \to 0$
\begin{eqnarray} \label{Mbound}
\mathcal{M}_s^k(y)= O \left(y^{\text{Re}(s)-\frac{k}{2}} \right),
\end{eqnarray}
we see that
the series $\field{P}_{k}(m, s; z)$ converges absolutely for
Re$(s)>1$ and satisfies
\begin{equation}\label{Laplace}
\Delta_k\left(\field{P}_{k}(m, s; z) \right)
= \left(s(1-s)+\frac14\left(k^2-2k \right)\right)\field{P}_{k}(m, s; z).
\end{equation}
In particular, the Poincar\'e series is annihilated for $s= \frac{k}{2}$ or
$s=1-\frac{k}{2}$ (depending on the range of absolute
convergence).
Moreover, for $m>0$ and $k\geq 2$,
we have
\begin{equation}\label{xipoincare}
D^{k-1}\left(\field{P}_{2-k}\left(m, \frac{k}{2} ; z\right)\right)=
-(k-1)!m^{k-1} P_k(m; z)
\end{equation}
(see, e.g. \cite{BKR}) and
\begin{equation}\label{dipoincare}
\xi_{2-k} \left(\field{P}_{2-k}\left(-m, \frac{k}{2} ; z\right)\right)=
(k-1)(4 \pi m)^{k-1} P_k(m; z)
\end{equation}
(see, e.g. Theorem 1.1 (2) of \cite{BO4}).
This implies
$$\field{P}_{2-k}\left(m, \frac{k}{2}; z\right) \in H_{2-k}^+, \qquad
\field{P}_{2-k}\left(-m, \frac{k}{2}; z\right) \in H_{2-k}^-.
$$
In fact, the Poincar\'e series span the respective spaces
$H_{2-k}^+$ and
$H_{2-k}^-$.
For the space $H_{2-k}^-$ this follows from Remark 3.10 of \cite{BF}.
For the space $H_{2-k}^+$ one
may argue analogously
by using the flipping operator \cite{BKR}, which gives a bijection between
the two spaces.
For $k>0$, we then set
\[
\field{P}_{k, 2}(m; z):= \mathcal{P}_k\left(\psi_{m}; z\right)
\]
with \[
\psi_{m}(z):=\frac{d}{ds}\left[\mathcal{M}_s^k(4\pi my)\right]_{s=\frac{k}{2}} e(mx).
\]
Differentiation in $s$ only introduces logarithms
and thus, using (\ref{Mbound}), we can easily see
that, for Re$(s)>1$ and for every $\epsilon>0$, the derivative is
$O(y^{\text{Re}(s)-\epsilon-k/2})$, and thus, as $y \to 0$, we find
$\psi_m(z)=O(y^{-\epsilon}).$
Thus for all nonzero integers $m$, and $k>0$,
$\field{P}_{k, 2}(m; z)$ is absolutely convergent.
One could further explicitly compute the Fourier expansion of
$\field{P}_{k, 2}$ but for the purposes of
this paper, this is not required.
\begin{theorem} \label{PoinTheorem}
For $m \in \mathbb N$, the function $\field{P}_{k, 2}(-m; z)$ is
an element of
$H_{k, 2}^{+}$ and satisfies:
\begin{eqnarray} \label{imagexi}
\xi_k\left(\field{P}_{k, 2}(-m; z)\right)
&=&(4\pi m)^{1-k}\field{P}_{2-k}\left(m, \frac{k}{2}; z\right), \\\label{imageD}
D^{k-1}\circ \xi_k\left(\field{P}_{k, 2}(-m;
z)\right)&=&-(k-1)! (4 \pi )^{1-k} P_k(m; z).
\end{eqnarray}
In particular, the map
\[
\xi_k \text{: } H_{k, 2}^{+}\to H_{2-k}^{+}
\]
is surjective.
\end{theorem}
\begin{proof}
Due to the absolute convergence of the series, the
transformation law is satisfied by construction.
To verify the (at most) linear exponential growth
at infinity of $\field{P}_{k, 2}(m; z)$ we recall that $M_{\mu, \nu}$ has
at most linear exponential growth as $y \to \infty$ (cf. \cite{NIST},
(13.14.20)). We further note that this also holds for its derivative in
$s$ and thus $\psi_m(z)$ too, because differentiation in $s$ only
introduces logarithms. Therefore, since
Im$(\g z) \to 0$ as $y \to \infty$ whenever $\g \ne 1$, we have
\[
\field{P}_{k, 2}(m; z) \ll |\psi_m(z)|+y^{-\frac{k}{2}}\sum_{\g \in
\G_{\infty}
\backslash \G - \{1\}} \text{Im}(\g z)^{-\epsilon+\frac{k}{2}}.
\]
This together with the well-known polynomial growth of Eisenstein
series at the cusps implies the
claim.
To prove \eqref{imagexi} and \eqref{imageD}, and thus the annihilation
under
$\Delta_{k, 2}$, we first note that $\xi_k$ commutes with the group
action of $\G$ and therefore we only have to compute
\begin{align} \label{xiaction}
\nonumber&\quad
\xi_k\left(\frac{d}{ds}\left[\mathcal{M}_s^k(-4\pi
my)e(-mx)\right]_{s=\frac{k}{2}}\right)
\\&=
y^k(4\pi
m)\overline{q}^{-m}\frac{d}{ds}\left[\frac{d}{dy}\left[\mathcal{M}_{s+\frac{k}{2}}^k(-y)e^{-\frac{y}{2}}\right]_{y=4\pi
my}\right]_{s=0}.
\end{align}
Notice that we do not need to conjugate the internal function because
upon
differentiation at $s=0$ we obtain a real function. \
The integral representation \eqref{Mwhit} implies for $y>0$
\[
\mathcal{M}_{s+\frac{k}{2}}^k(-y)
e^{-\frac{y}{2}}=\frac{y^s\Gamma(2s+k)}{\Gamma(s)\Gamma(s+k)}\int_0^1t^{s-1}(1-t)^{s+k-1}e^{-yt}\
dt
\]
which, in turn, gives that
\begin{align*}
&\quad
\frac{d}{dy}\left(\mathcal{M}_{s+\frac{k}{2}}^k(-y)
e^{-\frac{y}{2}}\right)
\\[4pt]&=
\frac{s}{y}\cdot
y^{-\frac{k}{2}}M_{-\frac{k}{2},
s+\frac{k}{2}-\frac12}(y)e^{-\frac{y}{2}}
-\frac{y^s\Gamma(2s+k)}{\Gamma(s)\Gamma(s+k)}
\int_0^1t^s(1-t)^{s+k-1}e^{-yt}\ dt
\\[4pt]&=s
y^{-\frac{k}{2}-1}M_{-\frac{k}{2},
s+\frac{k}{2}-\frac12}(y)e^{-\frac{y}{2}}
-\frac{s}{2s+k} y^{-\frac{k}{2}-\frac12}
M_{\frac12-\frac{k}{2}, s+\frac{k}{2}}(y) e^{-\frac{y}{2}}.
\end{align*}
Differentiating with respect to $s$ and setting $s=0$ gives (\cite{Sl}, (2.5.2))
\[
y^{-\frac{k}{2}-1}e^{-\frac{y}{2}}\frac1k\left(kM_{-\frac{k}{2},
\frac{k}{2}-\frac12}(y)-\sqrt{y}M_{\frac12-\frac{k}{2},
\frac{k}{2}}(y)\right)
=y^{-\frac{k}{2}-1}e^{-\frac{y}{2}}M_{1-\frac{k}{2},
\frac{k}{2}-\frac12}(y)
=e^{-\frac{y}{2}}y^{-k}\mathcal{M}_{\frac{k}{2}}^{2-k}(y).
\]
Thus
\begin{equation*}
\xi_k\left(\frac{d}{ds}\left[\mathcal{M}_s^k(-4\pi
my)e(-mx)\right]_{s=\frac{k}{2}}\right)
=(4\pi m)^{1-k}\mathcal{M}_{\frac{k}{2}}^{2-k}(4\pi my)e(mx),
\end{equation*}
which implies \eqref{imagexi}.
From \eqref{imagexi} we may also deduce that $\Delta_{k, 2}\Big(
\field{P}_{k, 2}(m; z)\Big)=0.$
Equality \eqref{xipoincare} implies (\ref{imageD}).
Since, as mentioned above, the functions $\field{P}_{2-k}(m, k/2; z)$
span $H^{+}_{2-k}$, \eqref{imagexi} implies the last assertion.
\end{proof}
Since we have a basis of $S_k$ consisting of Poincar\'e series,
Theorem \ref{PoinTheorem} implies
\begin{corollary}\label{surjective}
For $f\in S_k$ there exists $\mathcal{M}_{f,2}\in H_{k, 2}^{+}$ such
that
\[
D^{k-1}\circ \xi_k\left(\mathcal{M}_{f,2}\right)=f.
\]
\end{corollary}
To state and prove our second main theorem we analyze the Fourier
expansion of $\mathcal{F}$
in $H_{k, 2}^{+}$.
Since
$F:=\xi_k\left(\mathcal{F}\right)\in H_{2-k}^{+}$, it has a Fourier
expansion of
the form
\begin{equation*}\label{f.e.}
F(z)=
\sum_{n\geq 0}\widetilde{a}(n)q^n+\sum_{\substack{n \gg -\infty\\ n
\not=0}}\widetilde{b}(n)\Gamma(k-1, 4\pi
ny)q^{-n}
\end{equation*}
for some $\widetilde{a}(n), \widetilde{b}(n)\in \C$ and $\Gamma(s,y)$ the incomplete gamma function (see, for instance, \cite{BF}).
The first summand is called the
\textit{holomorphic part} and the
second the \textit{non-holomorphic part} of $F$, and we denote
them by $F^{+}$ and $F^{-}$, respectively.
A direct calculation implies that for some
$a(n), b(n), c(n), d(0) \in \C$
\begin{equation}\label{superexp}
\mathcal{F}(z)=\sum_{n\gg -\infty}a(n)q^n+\sum_{n>0}
b(n)\Gamma(1-k, 4\pi ny) q^{-n}
+\sum_{\substack{ n \gg - \infty \\ n \not=0 }}
c(n)\mathbf{\Gamma}_{k-1}(4\pi ny)q^n
+ d(0)y^{1-k},
\end{equation}
where for $y>0$, we define
\begin{equation*}\label{Maasssplit}
\mathbf{\Gamma}_s(y):=\int_y^\infty\Gamma(s, t) t^{-s}e^t \frac{dt}{t}.
\end{equation*}
Similarly for $y<0$, we integrate from $-\infty$ instead of $\infty$.
We call the first summand of the right hand side of \eqref{superexp} the
\textit{holomorphic part}, the
second the \textit{harmonic part}, and the third the
\textit{non-harmonic part} of $\mathcal{F}$ and we denote
them by $\mathcal{F}^{++}$, $\mathcal{F}^{+-}$, and $\mathcal{F}^{--}$ respectively.
We note that for $\mathcal{F}^{++} \not= 0, \mathcal{F}^{+-} \not= 0,$ and
$\mathcal{F}^{--} \not= 0$, we have
\begin{equation} \label{vanishxi}
\xi_{k}\left(\mathcal{F}^{++}\right)=0, \quad \xi_{k}\left(\mathcal{F}^{+-}\right)\not=0, \quad \xi_{k}\left(\mathcal{F}^{--}\right)\not=0,
\quad
\xi_k\left(y^{1-k}\right) \ne 0
\text{,}
\end{equation}
\begin{equation}\label{vanishxixi}
\xi_{2-k}\circ \xi_{k}\left(\mathcal{F}^{+-}\right)=0, \quad \xi_{2-k}\circ \xi_{k}\left(\mathcal{F}^{--}\right)\not=0,
\quad
\xi_{2 - k} \circ \xi_k \left(y^{1-k}\right) = 0
\text{,}
\end{equation}
\begin{equation}\label{vanishDxi}
D^{k-1}\circ \xi_{k}\left(\mathcal{F}^{+-}\right)\not=0, \quad
D^{k-1}\circ \xi_{k}\left(\mathcal{F}^{--}\right)=0, \quad
D^{k-1}\circ \xi_{k}\left(y^{1-k}\right)=0
.
\end{equation}
With this terminology and notation we have
\begin{theorem}\label{periodPoincare}
For $f \in S_k$, there is a $\mathcal{M}_{f, 2} \in
H_{k, 2}^{+}$ such that
$D^{k-1}\circ \xi_k\left(\mathcal{M}_{f, 2}\right)=
-
\frac{(k-2)!}{(4\pi)^{k-1}} f^c$ and
\[
\widehat{r}_{f, 2}(z)=
\mathcal{M}_{f, 2}^{+-}(z)\Big|_k(S-1).
\]
\end{theorem}
\begin{proof}
By equation \eqref{SuperMformula},
\[
\widehat{r}_{f, 2}= F_{f, 2}\Big|_k (S-1).
\]
By Corollary \ref{surjective}, there is a $\mathcal{M}_{f, 2}\in H_{k,
2}^{+}$ such that
\begin{equation} \label{Dx}
D^{k-1}\circ \xi_k\left( \mathcal{M}_{f, 2}\right)=
-
\frac{(k-2)!}{(4\pi)^{k-1}}
f^c.
\end{equation}
We claim that
\[
F_{f, 2}= \mathcal{M}_{f, 2}^{+-}.
\]
A direct computation inserting the Fourier expansion of $f$ gives that
$F_{f, 2}(z)$ has a Fourier expansion of the
form
\[
\sum_n b(n)\Gamma(1-k, 4\pi ny) q^{-n}.
\]
Next
\begin{eqnarray*}
\xi_k\left(F_{f, 2}(z)\right)
=(2i)^{1-k}F^c_f(z)
&=&(2i)^{1-k}\int_{-\overline{z}}^{i\infty}\overline{f(w)}(z+\overline{w})^{k-2}\
d\overline{w} \\
&=&-(2i)^{1-k}\int_z^{i\infty}f^c(w)(z-w)^{k-2}
dw.
\end{eqnarray*}
This implies that
\[
D^{k-1}\circ \xi_k\Big(F_{f, 2}\Big)=
-
\frac{(k-2)!}{(4\pi)^{k-1}}f^c.
\]
Thus by (\ref{Dx}),
\begin{equation*}\label{anni}
D^{k-1}\circ
\xi_k\Big(F_{f, 2}-\mathcal{M}_{f, 2}\Big)=0.
\end{equation*}
By (\ref{vanishxi}) and (\ref{vanishDxi}), non-zero expansions in
incomplete gamma functions are not in the kernel of $D^{k-1}\circ
\xi_k$. This implies that
$F_{f,2}-\mathcal{M}_{f, 2}^{+-}=0$.
\end{proof}
\section{A Mock Eichler-Shimura isomorphism}
\label{ESchar}
In this section, we will show an Eichler-Shimura type theorem for
harmonic period functions of positive weight.
We first note that
\begin{equation} \label{xi}
\xi_k(W_{k, 2}) \subset
W_{k-2}\text{,}
\end{equation}
because $\xi_k$ is compatible with the group action of $\Gamma$.
Fix $P \in W_{k, 2}.$ Then \eqref{xi} and Theorem \ref{ES}
imply that
there exist $f, g \in S_k$ and $a \in \C$ such that
\begin{equation} \label{eseq}
\xi_{k} (P(z))=r_f(z)+r_{g}(-z)+a\left(z^{k-2}-1\right).
\end{equation}
This can be viewed as a differential equation for $P$, and we will
now describe the general solution in $W_{k,2}$.
To find a preimage of the second summand we require regularized
integrals as they are defined, for instance, by Fricke in his upcoming PhD
thesis \cite{Fr}.
Consider a function $f:\field{H}\to\C$ that is continuous. Assume that there is a $c\in\R^+$ such that
\begin{equation}
f(z)=O\Big(e^{c \, \mathrm{Im}(z)}\Big)
\label{bound}
\end{equation}
uniformly in $\mathrm{Re}(z)$ as $\text{Im}(z)\to\infty$. Then, for
each $z_0 \in \mathfrak H$, the integral
\[
\int_{z_0}^{i\infty} e^{uw} f(w) \; dw
\]
(where the path of integration lies within a vertical strip) is
convergent
for $u \in \C$ with $\mathrm{Im}(u) \gg 0$. If it
has an analytic continuation to $u=0$, we define the {\it regularized
integral}
\[
R.\int_{z_0}^{i\infty}f(w) \; dw:=
\left[\int_{z_0}^{i\infty}e^{uw}f(w) \;dw\right]_{u=0}
\text{,}
\]
where the right hand side means the value at $u=0$ of the analytic continuation
of the integral.
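As a simple worked example (ours, for illustration): for a single exponential
$f(w)=e^{2\pi i n w}$ with $n<0$, which grows as $\mathrm{Im}(w)\to\infty$, one
computes, for $\mathrm{Im}(u)>2\pi|n|$,
\[
\int_{z_0}^{i\infty}e^{uw}e^{2\pi i n w} \; dw
=-\frac{e^{(u+2\pi i n)z_0}}{u+2\pi i n}
\text{,}
\]
which extends analytically to all $u\not=-2\pi i n$, so that
$R.\int_{z_0}^{i\infty}e^{2\pi i n w} \, dw=-\frac{e^{2\pi i n z_0}}{2\pi i n}$.
The Fourier expansions appearing below are regularized term by term in exactly
this way.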
Similarly, we define integrals at other cusps $\mathfrak a$.
Specifically,
suppose that $\mathfrak a=\sigma_{\mathfrak a}(i \infty)$ for a scaling
matrix $\sigma_{\mathfrak a} \in$ SL$_2(\Z)$. If
$f(\sigma_{\mathfrak a} z)$ satisfies \eqref{bound}, then we define
\[
R.\int_{z_0}^{\mathfrak a}f(w) \; dw
:=
R.\int_{\sigma_{\mathfrak a}^{-1} z_0}^{i\infty}f \big|_2\sigma_{\mathfrak a}(w) \; dw.
\]
For cusps $\ca, \cb$ we define:
\begin{equation}
\label{reg}
R.\int_{\ca}^{\cb}f(w) \; dw
:=
R.\int_{z_0}^{\cb}f(w) \; dw + R.\int_{\ca}^{z_0}f(w) \; dw
\end{equation}
for any $z_0 \in \mathfrak H.$ An easy calculation shows:
\begin{lemma}
\label{indep}
The integral $R.\int_{\ca}^{\cb}f(w) \,dw$ as defined in \eqref{reg} is
independent of $z_0 \in \mathfrak H.$
\end{lemma}
By Theorem \ref{periodPoincareintro1}, there exists a harmonic Maass
form $M_f$ such that
\begin{equation}\label{bgko}
r_f(-z)=M^+_f\Big|_{2-k}(1-S)(z).
\end{equation}
Set
\begin{align*}
\mathcal{F}_{f, 2}^\ast(z)
&:=
R.\int_{-\overline{z}}^{i\infty}\frac{M^+_f(w)}{(w+z)^k} \; dw
\text{,}
\\[4pt]
r_{f, 2}^\ast(z)
&:=
R. \int_0^{i\infty}\frac{M^+_f(w)}{(w+z)^k} \; dw \, \Big|_k S
\text{,}
\\[4pt]
\widetilde{r}_{f, 2}^\ast(z)
&:=
\int_{-\overline{z}}^{i\infty}\frac{r_f(-w)}{(w+z)^k} \; dw
\text{,}
\\[4pt]
\widehat{r}_{f, 2}^*(z)
&:=
r_{f, 2}^\ast (z)-\widetilde{r}_{f, 2}^\ast(z)
\text{.}
\end{align*}
We note that, by definition,
$$
M^+_f(z)=\sum_{n=N}^{0}a_n e^{2 \pi i n
z}+O\left(e^{-2 \pi y}\right)
\quad \text{for some $N<0$, as $y \to \infty$}.
$$
We insert the above Fourier expansion into $\mathcal{F}_{f,2}^\ast$ and integrate each of the terms separately. Terms with $n \ge 0$ do not require regularization. For terms with $n < 0$ we obtain a
linear combination of incomplete gamma functions of the form $\Gamma(\ell, z)$ ($\ell \in\Z$, $z \not=0$).
These functions can be analytically continued,
from which we may deduce that
the integrals can be extended to $u=0$. Therefore, the
regularized integrals are well-defined. The integral $r_{f, 2}^\ast$ is treated analogously.
We also note that
$\widetilde{r}_{f, 2}^\ast$
does not require regularization, since $r_f(-z)\in V_{k-2}$.
We easily compute, using (\ref{conj}), that
\begin{equation}\label{rstar}
\xi_k\left(\widehat{r}_{f, 2}^*(z)\right)=(2i)^{1-k}r_{f^c}(-z).
\end{equation}
We claim that a special solution in $W_{k, 2}$ to (\ref{eseq}) is then
given by
\begin{equation}\label{ssol}
R_{f, 2}^*(z)
:= -(2i)^{k-1} \widehat
r_{f^c, 2}(z) -
(2i)^{k-1} \widehat r^*_{g^c, 2}(z)+
\overline a(2i)^{k-1}
\left ( \int_{-\overline z}^{i \infty} \frac{dw}{(w+z)^k}\right) \Big
|_k(1-S).
\end{equation}
It is clear by \eqref{*}, \eqref{rstar} and the identity
\begin{equation}\label{third}
\xi_k\left ( \int_{-\bar z}^{i \infty} \frac{dw}{(w+z)^k}
\right)=(2i)^{1-k}
\end{equation}
that $R_{f, 2}^*$ satisfies \eqref{eseq}.
By Theorem \ref{W_k,2}, the function
$\widehat r_{f^c, 2}$ is an element of $W_{k, 2}$.
The same is true for $\widehat{r}_{f, 2}^*$:
\begin{lemma}
\label{per*}
We have
\[
\mathcal{F}_{f, 2}^\ast\Big|_k(S-1)(z)=\widehat{r}_{f, 2}^*(z).
\]
In particular, $\widehat{r}_{f, 2}^*\in W_{k,2}$.
\end{lemma}
\begin{proof}
We first note, with Lemma \ref{indep} and the definition of
regularized integrals, that
\begin{align}
\nonumber
r_{f, 2}^\ast|_kS
&=
\left [
\int_{-\bar z}^{i \infty} \frac{e^{wu}M^+_f(w) \; dw}
{(w+z)^k}
\right ]_{u =0}
-
\left [
\int_{1/\bar z}^{i \infty} \frac{e^{wu} M^+_f(-1/w) \;d(-1/w)}
{(-1/w+z)^k}
\right ]_{u=0}
\\[4pt]
&=
\left [
\int_{-\bar z}^{i \infty} \frac{e^{wu} M^+_f(w) \; dw}
{(w+z)^k}
\right ]_{u=0}
-
\left [\int_{-\bar z}^{0} \frac{e^{-u/w} M^+_f(w) \; dw}
{(w+z)^k}
\right ]_{u=0}
\text{.}
\label{r}
\end{align}
On the other hand, to compute
$\mathcal{F}_{f, 2}^\ast |_k(S-1)(z)=\mathcal F_{f,2}^*(-1/z)z^{-k}-\mathcal F_{f, 2}^*(z)$
we recall that, by definition, this is the value of $u$ at $0$ of the analytic continuation of
\begin{gather*}
\int_{1/\bar z}^{i \infty} \frac{e^{wu}M^+_f(w) \; dw}
{(wz-1)^k}
-
\int_{-\bar z}^{i \infty} \frac{e^{w u} M^+_f(w) \; dw}
{(w+z)^k}
\text{.}
\end{gather*}
For $\mathrm{Im}(u) \gg 0$, with \eqref{bgko} this equals
\begin{multline}
\int_{-\bar z}^{0} \frac{e^{-u/w}M^+_f(-1/w) \; d(-1/w)}
{(-z/w-1)^k}
-
\int_{-\bar z}^{i \infty} \frac{e^{wu} M^+_f(w) \; dw}
{(w+z)^k}
\\=
\int_{-\bar z}^{0} \frac{e^{-u/w}M^+_f(w) \; dw}
{(z+w)^k}
-
\int_{-\bar z}^{0} \frac{e^{-u/w}r_f(-w) \; dw}
{(z+w)^k}
-
\int_{-\bar z}^{i \infty} \frac{e^{wu} M^+_f(w) \; dw}
{(w+z)^k }
\text{.}
\label{compu}
\end{multline}
Because of \eqref{periodrel}, the second integral of \eqref{compu} equals
$$\int_{-1/\bar z}^{i \infty} \frac{e^{wu}r_f(-1/w) w^k \; dw}{(zw-1)^k}
=-\int_{1/\bar z}^{i \infty}
\frac{e^{wu}r_f(w) \; dw}{(zw-1)^k}
$$
This is analytic at $u=0$ with value $\widetilde{r}_{f, 2}^\ast|_kS(z)$. Therefore,
with analytic continuation
and \eqref{r}, \eqref{compu} gives
\begin{gather*}
\mathcal F_{f, 2}^*|_k(S-1)
=
-r_{f, 2}^*|_k S + \widetilde{r}_{f, 2}^* \big|_k S
=
-\widehat{r}_{f, 2}^* \big|_k S
\text{,}
\end{gather*}
which implies the result.
\end{proof}
That the third term of \eqref{ssol} is an element of $W_{k, 2}$
follows directly from \eqref{third} and the invariance of the integral
under $T$.
Therefore, the general solution of \eqref{eseq} is
\begin{equation*}\label{gensolution}
-(2i)^{k-1} \left
(\widehat
r_{f^c, 2}(z) + \widehat r^*_{g^c, 2}(z)-\overline a
\int_{-\overline z}^{i \infty} \frac{dw}{(w+z)^k} \Big
|_k(1-S)+G(z) \right),
\end{equation*}
where $G$ is a holomorphic function on $\mathfrak H$.
The last summand $G$ must be annihilated by $1+S$ and $1+U+U^2$ in terms of $|_k$, because all the others satisfy the Eichler-Shimura relations. This implies
that $G=H|_k(S-1)$ for some translation invariant holomorphic
function $H$. Indeed, this follows from $H^1(\G, \mathcal A)=0$, where
$\mathcal A$ is a the module
of holomorphic functions on $\mathfrak H$
(see equation (5.3) of \cite{Kn} citing \cite{Kr}).
Set
$$
U_{k, 2}:=\Big (
\mathcal O(\mathfrak H)+\left\{f \in \oplus_{j=1}^{k-1}y^{-j}V_{k-2};\,
\xi_k(f) \in V_{k-2} \right\}
\Big ) \cap \{f: \HH \to \C; f|_kT=f\},
$$
where $\mathcal O(\mathfrak H)$ is the space of holomorphic functions
on $\mathfrak H$.
We can then complete the proof of
\begin{theorem} The map $\phi: S_k \oplus S_k \to W_{k, 2}$
defined by
$$
\phi(f,g):=\widehat r_{f^c, 2}+\widehat r^*_{g^c, 2}
$$
induces an isomorphism
$$
\overline{\phi}:\, S_k \oplus S_k \cong_{\mathbb R} W_{k, 2}/V_{k, 2},
$$
where $V_{k, 2}:=U_{ k, 2}|_k(S-1)$.
\end{theorem}
\begin{proof} We have already shown above that
$\overline{\phi}$ is surjective.
To show that it is injective,
suppose that $(f, g) \in \mathrm{ker}\left(\overline{\phi}\right)$.
Then
\begin{equation} \label{ESlike}
\widehat r_{f^c, 2} + \widehat r^*_{g^c, 2} = A|_k(S-1)
\end{equation}
for some $A \in U_{k, 2}.$
Applying $\xi_k$ on both sides of \eqref{ESlike}, we
deduce that $r_f(z)+r_g(-z)$ is an Eichler coboundary. The classical
Eichler-Shimura isomorphism (Theorem \ref{ES}) implies that $f, g$
must vanish.
\end{proof}
\begin{remark}
Since $\left\{f \in \oplus_{j=1}^{k-1}y^{-j}V_{k-2};\,
\xi_k(f) \in V_{k-2} \right\}$ does not contain any holomorphic elements, it is isomorphic to $V_{k -2}$. The corresponding isomorphism is $\xi_k$.
\end{remark}
\section{Cohomological interpretation}\label{cohom}
Theorem \ref{periodPoincare} has a cohomological interpretation which
makes apparent the
similarity of our construction with the one associated to critical values
in \cite{BGKO}. We shall first give a cohomological interpretation of the
period polynomials in the context of the results of \cite{BGKO}.
We recall the definition of parabolic cohomology in our setting.
For $m \in \Z$ and a $\G$\nobreakdash-\hspace{0pt}submodule
$V$ of the space of functions $f: \HH \to \C$ we define
\begin{align*}
Z^1_p(\G, V)
&:=
\bigl\{g: \G\to V; g(\g\delta)=g(\g)|_m\delta+g(\delta)
\text{ and}
\\&\hspace{7em}
g(T)=h|_m(T-1) \text{ for some } h \in V \bigr\},
\end{align*}
\begin{align*}
B^1_p(\G, V)=B^1(\G, V)
&:=
\bigl\{g: \G\to V; \text{for some } h \in V,
\\&\hspace{7em}
g(\g)=h|_m(\g-1) \text{ for all }\g \in \G \bigr\},
\end{align*}
and
$$
H^1_p(\G, V):=Z^1_p(\G, V)/B^1_p(\G, V).
$$
A basic map in the theory of period polynomials is
$$
\rho: S_k \to H^1_p(\G, V_{k-2}).
$$
It assigns to $f \in S_k$ the class of a cocycle $\phi_f$ determined
by $\phi_f(T)=0$ and $\phi_f(S)=r_f(-z)$.
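As an illustration (a standard computation under the convention, consistent
with the action on $\mathcal O^*(\mathfrak H)$ below, that $\G$ acts on
$V_{k-2}$ via $|_{2-k}$), the cocycle condition already constrains
$\phi_f(S)$: since $S^2=-I$ acts trivially in weight $2-k$ for even $k$, one
checks $\phi_f(S^2)=0$, whence
$$
0=\phi_f(S^2)=\phi_f(S)|_{2-k}S+\phi_f(S)=r_f(-z)\big|_{2-k}(1+S)
\text{,}
$$
a form of the first Eichler-Shimura relation for the period polynomial.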
We further consider the $\G$-module
$\mathcal O^*(\mathfrak H)$ of holomorphic functions
$F: \mathfrak H \to \C$ of at most linear exponential
growth at the cusps.
The group $\G$ acts on $\mathcal O^*(\mathfrak H)$ via $|_{2-k}$.
Then the natural injection $i$ of $V_{k-2}$ into
$\mathcal O^*(\mathfrak H)$
induces a map
$$
i^*: H^1_p\left(\G, V_{k-2}\right) \to H^1_p\left(\G, \mathcal
O^*(\mathfrak H)\right).
$$
Theorem 1.1 of \cite{BGKO} states that $r_f(-z)$ is a constant
multiple of
$ F_f^+|_{2-k}(1-S)$ for the holomorphic part $F_f^+$ of
some harmonic Maass form $F_f$ of at most linear exponential growth at
the cusps.
This can then be reformulated as:
\begin{equation}\label{ESBGKO}i^* \circ \rho=0.
\end{equation}
To formulate the analogue of this result in our context, that of
non-critical values, we consider the following
$\G$-modules, all with respect to the action $|_k$:
\begin{enumerate}
\item[i)] $\HHH^*(\HH)$ the $\G$-module of harmonic
functions on $\HH$ of at most linear exponential growth at the
cusps.
\item[ii)]
$\mathcal V_{k, 2}:=\{f: \HH \to \C
\text{ of at most linear exponential growth at the cusps, }
\xi_k(f) \in V_{k-2} \}$.
\end{enumerate}
Because of
the compatibility of $\xi_{k}$ with the
slash action, these spaces are $\G$-invariant.
According to Theorem \ref{W_k,2}, for each $f \in S_k$, the map
$\psi_f$ such that $\psi_f(T)=0$ and $\psi_f(S)=
\widehat{r}_{f, 2}$
induces a cocycle with values in $\mathcal V_{k, 2}$.
Therefore, the assignment $f \to \psi_f$ induces a linear
map
$$
\rho': S_k \to H^1_p\left(\G, \mathcal V_{k, 2}\right).
$$
Because of Remark \ref{harmo}, there is a natural injection $i'$ from
$\mathcal V_{k, 2}$ to $\HHH^*(\HH)$, and this induces a map:
$$
i'^*:
H^1_p \left(\G, \mathcal V_{k, 2}\right) \to
H^1_p \left(\G, \HHH^*(\HH)\right).
$$
Theorem \ref{periodPoincare} then implies the following.
\begin{theorem} \label{periodPoincare'} The composition
$i'^* \circ \rho'$ is the zero map.
\end{theorem}
\section{Introduction}
Quantum fluctuations in the light field observables have been a subject of
intensive and sustained research since the appearance of quantum optics in
the early sixties, especially since the discovery of the nonclassical states
of light and the generation of squeezed light in the mid eighties \cite%
{Drummond04}. Quantum fluctuations are unavoidable as their origin lies in
the impossibility of determining two canonically conjugate observables with
a precision better than that allowed by the Heisenberg uncertainty
relations. In the case of the radiation field, quantum fluctuations manifest
in quantities such as the photon number, the field phase, the Stokes
parameters, or the field quadratures \cite%
{Loudon87,Meystre91,Korolkova02,Dodonov03,Walls94}.
Although its origin can be traced back to the early days of quantum
mechanics \cite{Dodonov03}, the modern squeezing research goes back to the
mid seventies, its foundational period being closed with the experimental
generation of squeezed light in the mid eighties \cite{Meystre91}. In a
single--mode state the two field quadratures (equivalent to the position and
momentum operators of a harmonic oscillator) constitute a canonically
conjugate pair, the product of their uncertainties being consequently
limited by the Heisenberg inequality. When the field mode is in a coherent
state, the uncertainty is the same for any field quadrature and equals that
of the vacuum state. Then, a squeezed state is that for which the
uncertainty in a particular field quadrature is smaller than that of the
quantum vacuum, a reduction achieved at the expense of an increase in the
uncertainty of the complementary field quadrature. These states can be
generated in nonlinear processes (four-wave mixing, parametric
down-conversion...), and the field quadratures can be detected in a homodyne
detection experiment in which the quantum state is mixed with a classical
(intense, coherent) local oscillator field \cite{Loudon87}.
Single-mode squeezing has been intensively and extensively investigated for
almost three decades \cite{Drummond04}. A well established result is that a
high degree of squeezing is obtained in nonlinear cavities near the
bifurcation points, and that the squeezing level degrades as the system is
brought far from them \cite{Walls94}. Multimode squeezing has also been
considered in the past. One of the most immediate cases is that of nonlinear
cavities working in several longitudinal modes, a type of system already
exhibiting noise reduction features specific to multimode systems. For
example, the quantum noise suppression on the difference of intensities
(amplitude-squeezing) of a two--mode optical parametric oscillator above
threshold \cite{Reynaud87,Reid88}, which is associated to the existence of a
continuous diffusion of the phase difference between the two modes. It is to
be remarked that squeezing appears in this case associated with a \textit{%
collective variable}, namely the intensity difference, and that, remarkably,
the noise reduction level is independent of the system proximity to a
bifurcation point.
A particularly interesting case of multimode squeezing is that of solitons.
Squeezing in optical fibre solitons (\textit{temporal} solitons) has
attracted much interest since the late eighties \cite{Drummond87,Haus90} and
is by now quite well understood, see e.g. \cite{Kozlov03} for a recent
review. In this problem the relevance of collective variables, such as the
position or the momentum of the soliton, is very clear in the sense that
these are the observables in which the behavior of quantum fluctuations is
easily detectable. As we comment below, this problem has some similarities,
but also strong differences, with the problem of cavity solitons that we
treat here. A related problem, that of the squeezing of spatial solitons in
Kerr media, has also been considered recently \cite%
{Treps00,Lantz04,Nagasako98,Mecozzi98}.
For our purposes, a most exciting connection is that existing between
pattern formation in nonlinear optical cavities and squeezing \cite%
{Lugiato99,Lugiato02}. In these systems the concepts used in the analysis of
quantum fluctuations, which has started relatively recently, must be
generalized to cover correlations at different spatial points \cite%
{Kolobov99}.
Extended nonlinear optical systems, specially nonlinear optical cavities
with large Fresnel numbers, are systems that spontaneously display
dissipative structures, which are extended patterns that form in the plane
orthogonal to the light propagation direction, through a spontaneous
symmetry breaking. One of the new concepts that appeared when the analysis
of quantum fluctuations in these systems was addressed is the quantum image
\cite{Lugiato95}, which can be described as a precursor appearing below
threshold of the pattern that the system would display above threshold. The
quantum image is not detectable at low observation frequencies as in this
case the fast dynamics of quantum fluctuations is washed out. We refer the
reader to existing reviews for a resume of the main researches in the field
\cite{Lugiato99,Lugiato02}. For our purposes, which concern the analysis of
the squeezing in dissipative structures, the work carried out by Gatti and
Lugiato \cite{Lugiato93,Gatti95} on the squeezing of extended degenerate
optical parametric oscillators (DOPO) below threshold, is particularly close
(see also \cite{Drummond05}), as well as the analysis of the role played by
the Goldstone mode in \cite{Zambrini00,Gomila02}.
In this article we investigate quantum fluctuations of dissipative
structures in the DOPO. In a first part we develop the theory for the
calculation of linearized quantum fluctuations, in particular for the
calculation of the squeezing and intensity fluctuations spectra. The theory
is general in the sense that no particular dissipative structure is assumed.
Moreover, although the model for a DOPO in the large pump detuning is used,
the derived expressions are easily generalizable to cover other nonlinear
cavity models. Then, in a second part we apply this theory to the study of
the squeezing properties of a special dissipative structure appearing above
threshold, the so--called bright cavity soliton.
In \cite{EPL} we already advanced a particular property of quantum
fluctuations which is specific to pattern formation (i.e., that is absent
when the emission is homogeneous in space), which can be put in short as
follows:\ Due to the translational invariance of the problem (the position
of the pattern in the transverse plane is not fixed when the pumping is
spatially homogeneous), there is a particular transverse mode that is free
from quantum fluctuations (in the linear approximation), a \textit{perfectly
squeezed} mode. And this occurs irrespective of the value of the
parameters of the system. The only conditions are that (i) the system be
translational invariant, and (ii)\ that the output field displays a pattern.
This particular mode corresponds to the transverse linear momentum of the
pattern.
Here we shall go beyond this result by studying the squeezing properties of
a special pattern, the bright cavity soliton. For that we shall consider the
DOPO\ model in the large pump detuning limit, as the problem is much simpler
in these conditions because there exists an explicit analytical solution for
the localized structure. As stated, the article consists of two parts. In
the first part (covering Sections II to V) we derive general expressions for
the study of quantum fluctuations (e.g., the linear squeezing spectrum)
without specializing to any particular pattern. Then, the second part
(Section VI) is devoted to the bright cavity soliton. Finally, in Section
VII we give our main conclusions.
\section{Linear theory of quantum fluctuations of optical dissipative
structures}
In this section we present the general method that will allow us to determine
quantities related to the quantum fluctuations of optical dissipative
structures (e.g. the squeezing spectrum) in the linear approximation. An
outstanding feature of the method is that it circumvents the numerical
integration of the dynamical, stochastic equations, which is always a
problematic (and terribly time consuming) task. Instead, the method exploits
the diagonalization of the linear problem, which allows simplest formal
solutions.
We shall use the DOPO\ with plane cavity mirrors as a model for presenting
the method. Any other nonlinear optical cavity with plane mirrors sustaining
dissipative structures can be studied along similar ways after
straightforward particularization of the expressions given below.
We assume that the dynamical equations of the studied system have been cast
in the form of classical-looking Langevin equations corresponding to some
coherent state representation. In particular we assume that a generalized $P$
representation \cite{Drummond80} is being used (Sec. II.A) as its normal
ordering equivalence allows a direct computation of measurable quantities
corresponding to the fields leaving the cavity. Furthermore we assume that
those Langevin equations have been linearized around a classical dissipative
structure (Sec. II.B). The method consists in separating the fluctuations in
two classes: (i) Those coming from the drift of the dissipative structure,
as it can move freely across the transverse section owing to the spatial
translation invariance of the model (Sec. II.C); and (ii) Formally solving
the equations corresponding to the remaining ("internal") fluctuations
making use of a special basis (Sec. II.D), which allows us to derive a general
expression for the linearized squeezing spectrum (Sec. III), as well as for
the spectrum of intensity fluctuations (Sec. IV). This section ends with a
general result concerning the squeezing of dissipative structures (Sec. V).
\subsection{\label{DOPOmodel}Langevin equations for a planar DOPO in the
generalized P representation}
We consider the model for a type I DOPO\ with plane cavity mirrors of \cite%
{Gatti97} pumped by a plane wave coherent field of frequency $2\omega_{%
\mathrm{s}}$ and amplitude $\mathcal{E}_{\mathrm{in}}$. An intracavity $%
\chi^{(2)}$ nonlinear crystal converts pump photons into subharmonic
photons, at frequency $\omega_{\mathrm{s}}$, and vice versa. Only two
longitudinal cavity modes, of frequencies $\omega_{0}$ (pump mode) and $%
\omega_{1}$ (signal mode), which are the closest to $2\omega_{\mathrm{s}}$
and $\omega_{\mathrm{s}}$, respectively, are assumed to be relevant. The
cavity is assumed to be single-ended, i.e., losses occur at a single cavity
mirror, where the intracavity modes are damped at rates $\gamma_{n}$, $n=0,1$%
. The above frequencies define dimensionless pump and signal detuning
parameters through $\Delta_{0}=\left( \omega_{0}-2\omega_{\mathrm{s}}\right)
/\gamma_{0}$ and $\Delta_{1}=\left( \omega_{1}-\omega_{\mathrm{s}}\right)
/\gamma_{1}$, respectively.
We denote the intracavity field envelope operators for pump and signal
modes, which propagate along the $z$ direction, by $\hat{A}_{0}\left(
\mathbf{r},t\right) $ and $\hat{A}_{1}\left( \mathbf{r},t\right) $,
respectively, where\textbf{\ }$\mathbf{r}=\left( x,y\right) $ is the
transverse position vector, obeying standard equal-time commutation relations%
\begin{equation}
\left[ \hat{A}_{m}\left( \mathbf{r},t\right) ,\hat{A}_{n}^{\dag}\left(
\mathbf{r}^{\prime},t\right) \right] =\delta_{m,n}\delta\left( \mathbf{r}-%
\mathbf{r}^{\prime}\right) . \label{commutationrel}
\end{equation}
As commented we shall use a coherent state representation in order to handle
the problem. These representations set a correspondence between the quantum
operators $\hat{A}_{m}(\mathbf{r},t)$ and $\hat{A}_{m}^{\dag }(\mathbf{r},t)$
and the c-number fields $\mathcal{A}_{m}(\mathbf{r},t)$ and $\mathcal{A}%
_{m}^{+}(\mathbf{r},t)$, respectively, which are independent except in their
(stochastic) averages, which verify $\left\langle \mathcal{A}_{m}^{+}(%
\mathbf{r},t)\right\rangle =\left\langle \mathcal{A}_{m}(\mathbf{r}%
,t)\right\rangle ^{\ast }$. The physical meaning of the stochastic average
of any function of the c-number fields depends on the representation. In
particular we shall use the generalized $P$ representation \cite{Drummond80}%
, generalized to include the spatial nature of the multimode problem here
considered \cite{Gatti97}, in which the stochastic averages correspond to
quantum expectation values computed in normal and time ordering.
As shown in Appendix A, in the large pump detuning limit ($\left\vert \Delta
_{0}\right\vert \gg 1,\left\vert \Delta _{1}\right\vert ,\gamma _{0}/\gamma
_{1}$) the DOPO dynamical (Langevin) equations in the generalized $P$
representation can be written as
\begin{subequations}
\label{AdiabaticLangevin}
\begin{gather}
\frac{\partial }{\partial t}\mathcal{A}_{1}(\mathbf{r},t)=\gamma _{1}\left(
L_{1}\mathcal{A}_{1}+\mu \mathcal{A}_{1}^{+}+i\frac{\sigma }{\kappa ^{2}}%
\mathcal{A}_{1}^{2}\mathcal{A}_{1}^{+}\right) + \notag \\
\sqrt{\gamma _{1}\left( \mu +i\frac{\sigma }{\kappa ^{2}}\mathcal{A}%
_{1}^{2}\right) }\eta (\mathbf{r},t), \\
\frac{\partial }{\partial t}\mathcal{A}_{1}^{+}(\mathbf{r},t)=\gamma
_{1}\left( L_{1}^{\ast }\mathcal{A}_{1}^{+}+\mu \mathcal{A}_{1}-i\frac{%
\sigma }{\kappa ^{2}}\mathcal{A}_{1}^{+}{}^{2}\mathcal{A}_{1}\right) +
\notag \\
\sqrt{\gamma _{1}\left( \mu -i\frac{\sigma }{\kappa ^{2}}\mathcal{A}%
_{1}^{+}{}^{2}\right) }\eta ^{+}(\mathbf{r},t),
\end{gather}%
where $L_{1}=-(1+i\Delta _{1})+il_{1}^{2}\nabla ^{2}$, $l_{1}=c/\sqrt{%
2\omega _{1}\gamma _{1}}$ is the diffraction length for the signal field, $%
\nabla ^{2}=\partial ^{2}/\partial x^{2}+\partial ^{2}/\partial y^{2}$ is
the transverse Laplacian operator, and we have introduced the real and
dimensionless parametric pump parameter $\mu $%
\end{subequations}
\begin{equation}
\mu =\frac{g\left\vert \mathcal{E}_{\mathrm{in}}\right\vert }{\gamma
_{0}\gamma _{1}\left\vert \Delta _{0}\right\vert }>0,
\end{equation}%
where $g$ is the (real) nonlinear coupling coefficient, Eq. (\ref{defg}) in
Appendix A, the normalized nonlinear coupling coefficient%
\begin{equation}
\kappa ^{-2}=\frac{g^{2}}{2\gamma _{0}\gamma _{1}\left\vert \Delta
_{0}\right\vert },
\end{equation}%
and $\sigma =\func{sign}\Delta _{0}=\pm 1$. Finally, $\eta (\mathbf{r},t)$
and $\eta ^{+}(\mathbf{r},t)$ are independent, real white Gaussian noises of
zero average and correlations given by Eqs. (\ref{noise}) in Appendix A.
Equations (\ref{AdiabaticLangevin}) are the model we shall consider
throughout this paper.
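For concreteness, the classical limit can be explored numerically. The
following is a minimal one-dimensional split-step sketch (in Python); the
grid, time step, seed and parameter values are illustrative assumptions, not
taken from the text:
\begin{verbatim}
import numpy as np

# Classical (A1+ -> A1*, noise-free) limit of Eqs. (AdiabaticLangevin) in 1D:
#   dA/dt = gamma1*[-(1+i*Delta1)*A + i*l1^2*A_xx
#                   + mu*conj(A) + i*(sigma/kappa^2)*|A|^2*A]
# Exact linear step in Fourier space, Euler step for pump + nonlinearity.
gamma1, Delta1, mu, sigma, kappa, l1 = 1.0, 0.5, 1.05, +1, 1.0, 1.0
nx, Lx, dt, nsteps = 256, 60.0, 1e-3, 50000

x = np.linspace(-Lx / 2, Lx / 2, nx, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(nx, d=Lx / nx)
lin = np.exp(dt * gamma1 * (-(1 + 1j * Delta1) - 1j * l1**2 * k**2))

A = 1.5 * np.exp(0.15j) / np.cosh(x)   # localized seed, roughly soliton-like
for _ in range(nsteps):
    A = np.fft.ifft(lin * np.fft.fft(A))          # loss, detuning, diffraction
    A += dt * gamma1 * (mu * np.conj(A)
                        + 1j * sigma / kappa**2 * np.abs(A)**2 * A)

print("peak |A1| after relaxation:", np.abs(A).max())
\end{verbatim}
With $\sigma =+1$ and $1<\mu <\sqrt{1+\Delta _{1}^{2}}$ the field should
relax, for a suitable seed, towards the bright cavity soliton discussed in
Sec. VI.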
\subsection{\label{linlan}Dynamics of quantum fluctuations: Linearized
Langevin equations around the classical dissipative structures of the DOPO}
In the classical limit ($\mathcal{A}_{i}^{+}$ being interpreted as $\mathcal{%
A}_{i}^{\ast }$, noises being ignored), Eqs. (\ref{AdiabaticLangevinClassic}%
) have the form of a parametrically driven nonlinear Schr\"{o}dinger
equation (PDNLSE), first derived for the DOPO in \cite{Longhi97,Trillo97},
see Eq. (\ref{PDNLSE}) in Appendix B. In our case $\sigma $ accounts for the
cases of self-focusing $\left( \sigma =+1\right) $ or defocusing $\left(
\sigma =-1\right) $ of the PDNLSE, which determine the kind of dissipative
structures (DS) supported by the DOPO. These DS are patterns that appear in
the transverse plane with respect to the direction of light propagation. We
denote these (steady) structures by $\mathcal{A}_{1}\left( \mathbf{r}\right)
=\mathcal{\bar{A}}_{1}\left( \mathbf{r}-\mathbf{r}_{1}\right) $, where $%
\mathbf{r}_{1}=\left( x_{1},y_{1}\right) $ is arbitrary due to the
translation invariance of the problem. They can be, e. g., periodic patterns
or localized structures \cite%
{Longhi95,Longhi97,Bondila95,Barashenkov02,deValcarcel02}. Although in the
second part of this paper we shall concentrate on a particular type of DS,
namely the bright cavity soliton, we stress here that the treatment we
present below is completely general and covers the description of quantum
fluctuations of any stationary DS.
The dynamics of the quantum fluctuations around any DS is studied by setting
\begin{subequations}
\label{ClassicalSolution}
\begin{align}
\mathcal{A}_{1}(\mathbf{r},t) & =\mathcal{\bar{A}}_{1}\left( \mathbf{r}-%
\mathbf{r}_{1}\left( t\right) \right) +a_{1}\left( \mathbf{r}-\mathbf{r}%
_{1}\left( t\right) ,t\right) , \\
\mathcal{A}_{1}^{+}(\mathbf{r},t) & =\mathcal{\bar{A}}_{1}^{\ast}\left(
\mathbf{r}-\mathbf{r}_{1}\left( t\right) \right) +a_{1}^{+}\left( \mathbf{r}-%
\mathbf{r}_{1}\left( t\right) ,t\right) ,
\end{align}
where $\mathcal{\bar{A}}_{1}\ $and $\mathcal{\bar{A}}_{1}^{\ast}$ are the
classical stationary mean values of the field corresponding to a particular
DS (i.e., the stationary solutions of Eqs. (\ref{AdiabaticLangevin}) when
the noise terms are neglected), and $a_{1}$ and $a_{1}^{+}$ are the c-number
fields accounting for the quantum fluctuations. Notice that the position of
the classical solution, $\mathbf{r}_{1}\left( t\right) =\left(
x_{1},y_{1}\right) $, is allowed to vary in time in order to properly describe
the diffusive movement of the DS, which is excited by (quantum) noise.
Linearizing the Langevin equations around the classical solution we get, to
first order in the fluctuations, the linearized equation of motion for the
quantum fluctuations, which reads
\end{subequations}
\begin{equation}
-\kappa \left( \mathbf{G}_{x}\frac{\mathrm{d}x_{1}}{\mathrm{d}t}+\mathbf{G}%
_{y}\frac{\mathrm{d}y_{1}}{\mathrm{d}t}\right) +\frac{\partial }{\partial t}%
\mathbf{a}_{1}=\gamma _{1}\mathcal{L}\mathbf{a}_{1}+\sqrt{\gamma _{1}}%
\mathbf{h}, \label{linearizedLangevin}
\end{equation}%
where
\begin{equation}
\mathbf{G}_{x(y)}=\partial _{x(y)}\left(
\begin{array}{c}
\mathcal{\bar{A}}_{1} \\
\mathcal{\bar{A}}_{1}^{\ast }%
\end{array}%
\right) , \label{goldstone}
\end{equation}%
$\mathbf{a}_{1}$ is the quantum fluctuations vector
\begin{equation}
\mathbf{a}_{1}(\mathbf{r},t)=\left(
\begin{array}{c}
a_{1}(\mathbf{r},t), \\
a_{1}^{+}(\mathbf{r},t)%
\end{array}%
\right) ,
\end{equation}%
$\mathbf{h}$ is the noise vector
\begin{equation}
\mathbf{h}(\mathbf{r},t)=\left(
\begin{array}{c}
\sqrt{\bar{\alpha}_{0}}\ \eta (\mathbf{r},t) \\
\sqrt{\bar{\alpha}_{0}^{\ast }}\ \eta ^{+}(\mathbf{r},t)%
\end{array}%
\right) ,
\end{equation}%
where%
\begin{equation}
\bar{\alpha}_{0}=\mu +i\sigma \kappa ^{-2}\mathcal{\bar{A}}_{1}^{2}.
\end{equation}%
Finally, the linear operator $\mathcal{L}$ and its adjoint $\mathcal{L}%
^{\dag }$ read
\begin{subequations}
\label{LinearOperator}
\begin{align}
\mathcal{L}& =\left(
\begin{array}{cc}
\mathcal{L}_{1} & \bar{\alpha}_{0} \\
\bar{\alpha}_{0}^{\ast } & \mathcal{L}_{1}^{\ast }%
\end{array}%
\right) ,\ \ \mathcal{L}^{\dag }=\left(
\begin{array}{cc}
\mathcal{L}_{1}^{\ast } & \bar{\alpha}_{0} \\
\bar{\alpha}_{0}^{\ast } & \mathcal{L}_{1}%
\end{array}%
\right) , \\
\mathcal{L}_{1}& =-(1+i\Delta _{1})+il_{1}^{2}\nabla ^{2}+2i\sigma \kappa
^{-2}\left\vert \mathcal{\bar{A}}_{1}(\mathbf{r})\right\vert ^{2},
\label{Laux}
\end{align}%
where we note that a typo has been corrected in Eq. (\ref{Laux}) with
respect to the corresponding expression in \cite{EPL}.
In Eq. (\ref{linearizedLangevin}) two terms ($\partial _{x}\mathbf{a}_{1}%
\mathrm{d}x_{1}/\mathrm{d}t$ and $\partial _{y}\mathbf{a}_{1}\mathrm{d}y_{1}/%
\mathrm{d}t$) have been neglected as they are of second order in the
fluctuations. Notice finally that Eq. (\ref{linearizedLangevin}) has the
standard form of a set of linearized Langevin equations except for the first
term appearing on the left hand side, which describes possible displacements
of the DS on the transverse plane.
All the information about quantum fluctuations in the linear approximation
is contained in Eq. (\ref{linearizedLangevin}). Our goal is then to find an
efficient method for solving it while extracting the relevant information in
a transparent way. This is accomplished by using the eigensystems of the
linear operators $\mathcal{L}$ and $\mathcal{L}^{\dag }$.
\subsection{Diagonalization of the linear problem. The role of the Goldstone
modes: Drift of the dissipative structure}
Our main purpose in this work is to study the properties of quantum
fluctuations around the semiclassical mean value corresponding to a DS. In
this section we solve Eqs. (\ref{linearizedLangevin}), which will allow us to
characterize the quantum fluctuations, in particular through the squeezing
spectrum. With this aim, we develop a general method for obtaining
the formal solution of Eqs. (\ref{linearizedLangevin}), valid for any
system and any classical stationary DS.
Let us assume without proof that the sets of eigenvectors of the linear
operators $\mathcal{L}$ and $\mathcal{L}^{\dag}$, Eq. (\ref{LinearOperator}%
), form a biorthonormal basis. (In the second part of this article we show
that this is indeed the case for the bright cavity soliton solution, unlike
the problem of conservative temporal solitons where the set of eigenvectors
must be completed in order to form a proper basis \cite{Kozlov03}). The
method used to solve Eq. (\ref{linearizedLangevin}) consists in expanding
the quantum fluctuations in this biorthonormal basis.
We denote the eigensystems of $\mathcal{L}$ and $\mathcal{L}^{\dag }$ by
\end{subequations}
\begin{subequations}
\label{Eigensystem}
\begin{align}
\mathcal{L}\mathbf{v}_{i}(\mathbf{r})& =\lambda _{i}\mathbf{v}_{i}(\mathbf{r}%
),\ \ \ \mathbf{v}_{i}(\mathbf{r})=%
\begin{pmatrix}
v_{i}(\mathbf{r}) \\
v_{i}^{+}(\mathbf{r})%
\end{pmatrix}%
, \label{Ldiag} \\
\mathcal{L}^{\dag }\mathbf{w}_{i}(\mathbf{r})& =\lambda _{i}^{\ast }\mathbf{w%
}_{i}(\mathbf{r}),\ \ \mathbf{w}_{i}(\mathbf{r})=%
\begin{pmatrix}
w_{i}(\mathbf{r}) \\
w_{i}^{+}(\mathbf{r})%
\end{pmatrix}%
. \label{Ldaggerdiag}
\end{align}
In the above and in the following expressions, the index $i$ represents both
the discrete and the continuous spectra as we do not want to overburden the
notation. Note also that, in the following, Kronecker deltas should be
understood as suitable Dirac deltas as well as sums should be understood as
suitable integrals when referring to the continuous part of the spectra.
We define the scalar product as usual
\end{subequations}
\begin{equation}
\left\langle \mathbf{u}|\mathbf{s}\right\rangle =\int\mathrm{d}^{2}r~\mathbf{%
u}^{\dag}(\mathbf{r})\cdot\mathbf{s}(\mathbf{r}), \label{ScalarProduct}
\end{equation}
so that the relation
\begin{equation}
\left\langle \mathbf{w}_{i}|\mathcal{L}\mathbf{s}\right\rangle =\lambda
_{i}\left\langle \mathbf{w}_{i}|\mathbf{s}\right\rangle ,
\end{equation}
holds for any $\mathbf{s}$. Finally, all eigenvectors are assumed to be
normalized as%
\begin{equation}
\left\langle \mathbf{w}_{i}|\mathbf{v}_{j}\right\rangle =\delta_{ij}.
\end{equation}
The spectra must be computed numerically in general. However, it will be
convenient for our purposes to state two general properties of the discrete
spectra. These are:
\begin{itemize}
\item Property 1: For any parameter set there exist Goldstone modes, $%
\mathbf{v}_{1x(1y)}=\mathbf{G}_{x(y)}$, with $\mathbf{G}_{x(y)}$ given by
Eq. (\ref{goldstone}), satisfying
\begin{equation}
\mathcal{L}\mathbf{v}_{1x(1y)}=0,
\end{equation}
and the associated adjoint eigenvectors, denoted by $\mathbf{w}_{1x(1y)}$,
which verify
\begin{equation}
\mathcal{L}^{\dag}\mathbf{w}_{1x(1y)}=0.
\end{equation}
This property is a consequence of the translational invariance of the
problem, as any DS can be located at any position on the transverse plane.
\item Property 2: For any parameter set there exist eigenvectors of the
adjoint problem, which we denote as $\mathbf{w}_{2x(2y)}$, verifying
\begin{equation}
\mathcal{L}^{\dag }\mathbf{w}_{2x(2y)}=-2\mathbf{w}_{2x(2y)}.
\end{equation}%
These eigenvectors are%
\begin{equation}
\mathbf{w}_{2x(2y)}(\mathbf{r})=\partial _{x(y)}\left(
\begin{array}{c}
i\mathcal{\bar{A}}_{1}(\mathbf{r}) \\
-i\mathcal{\bar{A}}_{1}^{\ast }(\mathbf{r})%
\end{array}%
\right) , \label{lofmagic}
\end{equation}%
as is straightforward to check. This property will be associated with a
perfectly squeezed mode \cite{EPL}, as we show below.
\end{itemize}
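Both properties are easy to check numerically. The following
finite-difference sketch (one-dimensional, with $\sigma =+1$ and $\kappa
=l_{1}=\gamma _{1}=1$; the bright cavity soliton of Sec. VI is used as an
explicit profile $\mathcal{\bar{A}}_{1}$, and all values are illustrative)
builds a matrix representation of $\mathcal{L}$, Eq. (\ref{LinearOperator}),
verifies Property 1 and obtains the full eigensystem:
\begin{verbatim}
import numpy as np

Delta1, mu = 0.5, 1.05
beta = np.sqrt(Delta1 + np.sqrt(mu**2 - 1))   # '+' branch, see Sec. VI
phi = 0.5 * np.arccos(1.0 / mu)

nx, Lx = 400, 60.0
x = np.linspace(-Lx / 2, Lx / 2, nx, endpoint=False)
dx = x[1] - x[0]
Abar = np.sqrt(2) * beta * np.exp(1j * phi) / np.cosh(beta * x)

I = np.eye(nx)
D2 = (np.roll(I, 1, 0) - 2 * I + np.roll(I, -1, 0)) / dx**2  # periodic d2/dx2

alpha0 = mu + 1j * Abar**2
L1 = -(1 + 1j * Delta1) * I + 1j * D2 + 2j * np.diag(np.abs(Abar)**2)
L = np.block([[L1, np.diag(alpha0)],
              [np.diag(np.conj(alpha0)), np.conj(L1)]])

# Property 1: the Goldstone mode Gx = d/dx (Abar, Abar*) is annihilated by L
Gx = np.concatenate([np.gradient(Abar, dx), np.gradient(np.conj(Abar), dx)])
print("||L Gx||/||Gx|| =", np.linalg.norm(L @ Gx) / np.linalg.norm(Gx))

lam, V = np.linalg.eig(L)              # eigensystem of L, Eq. (Ldiag)
lamd, W = np.linalg.eig(L.conj().T)    # eigensystem of L^dagger
\end{verbatim}
The residual $\Vert \mathcal{L}\mathbf{G}_{x}\Vert $ vanishes only up to
discretization error, and the normalization $\left\langle \mathbf{w}_{i}|%
\mathbf{v}_{j}\right\rangle =\delta _{ij}$ must still be enforced by pairing
and rescaling the two sets of eigenvectors.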
Now, the linearized Langevin equation (\ref{linearizedLangevin}) can be
solved by expanding the quantum fluctuations on the basis $\left\{ \mathbf{v}%
_{i}\right\} $,%
\begin{equation}
\mathbf{a}_{1}(\mathbf{r},t)=\sum\limits_{i}c_{i}(t)\mathbf{v}_{i}(\mathbf{r}%
), \label{expansion}
\end{equation}%
where the Goldstone modes have been excluded from this expansion as any
contribution of them to $\mathbf{a}_{1}(\mathbf{r},t)$ would entail a shift
of the solution, which is already accounted for, by definition, by the
(still undetermined) location of the DS.
First we project Eq. (\ref{linearizedLangevin}) onto $\mathbf{w}_{1x}$ and $%
\mathbf{w}_{1y}$, obtaining
\begin{subequations}
\label{dif}
\begin{align}
\dot{x}_{1} & =-\frac{\sqrt{\gamma_{1}}}{\kappa}\left\langle \mathbf{w}_{1x}|%
\mathbf{h}\right\rangle , \label{dif1} \\
\dot{y}_{1} & =-\frac{\sqrt{\gamma_{1}}}{\kappa}\left\langle \mathbf{w}_{1y}|%
\mathbf{h}\right\rangle , \label{dif2}
\end{align}
We see that the DS diffuses, driven by quantum fluctuations, as anticipated.
The diffusive drift of the DS through a time $t_{\mathrm{d}}$ can be
evaluated by considering the mean squared deviation of the position of the
DS along this time,
\end{subequations}
\begin{equation}
\boldsymbol{\rho}(t)=\mathbf{r}_{1}(t)-\mathbf{r}_{1}(t-t_{\mathrm{d}}).
\label{drift}
\end{equation}
The variance $\left\langle \boldsymbol{\rho}^{2}(t)\right\rangle $ can be
calculated from Eqs.(\ref{dif}), which give the time evolution for $\mathbf{r%
}_{1}=(x_{1},y_{1})$, obtaining%
\begin{equation}
x_{1}(t)=-\frac{\sqrt{\gamma_{1}}}{\kappa}\int\limits_{0}^{t}\mathrm{d}%
t^{\prime}\left\langle \mathbf{w}_{1x}\left( \mathbf{r}\right) |\mathbf{h}%
\left( \mathbf{r},t^{\prime}\right) \right\rangle , \label{x1}
\end{equation}
and an analogous expression holds for $y_{1}(t)$ when $\mathbf{w}_{1x}$ is
replaced by $\mathbf{w}_{1y}$ in (\ref{x1}). Substituting this solution into
Eq.(\ref{drift}), we reach the expression of the variance of $\boldsymbol{%
\rho }(t)$, which is linear in time \cite{Gomila02}, and reads%
\begin{equation}
\left\langle \boldsymbol{\rho}^{2}(t)\right\rangle =Dt_{\mathrm{d}},
\label{variance}
\end{equation}
where the diffusion coefficient is given by
\begin{equation}
D=2\frac{\gamma_{1}}{\kappa^{2}}\func{Re}\int\mathrm{d}^{2}r\left[
w_{1x}^{2}(\mathbf{r})+w_{1y}^{2}(\mathbf{r})\right] \bar{\alpha}_{0}^{\ast
}(\mathbf{r}). \label{diffconstant}
\end{equation}
The knowledge of the variance (\ref{variance}) allows us to evaluate the
possible effects of the DS movement on the noise detected, e.g., in a
homodyning experiment. This could be quantitatively important as a possible
noise reduction could be blurred by the diffusion of the structure \cite{EPL}%
.
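As a quick consistency check, the linear-in-time growth (\ref{variance}) is
easily reproduced by direct simulation of white-noise-driven drift equations
of the form of Eqs. (\ref{dif}); the value of $D$ below is set by hand (in an
actual computation it follows from Eq. (\ref{diffconstant})):
\begin{verbatim}
import numpy as np

# Brownian drift of the DS position: each component of r1 performs a
# random walk, and <rho^2(t)> should grow as D*t_d, Eq. (variance).
rng = np.random.default_rng(0)
D, dt, nsteps, ntraj = 0.02, 1e-2, 4000, 2000

# increments chosen so that each transverse component contributes (D/2)*t
steps = rng.normal(0.0, np.sqrt(0.5 * D * dt), size=(ntraj, nsteps, 2))
r1 = steps.cumsum(axis=1)
var = (r1[:, -1, :]**2).sum(axis=1).mean()
print("measured <rho^2> =", var, "  expected D*t_d =", D * nsteps * dt)
\end{verbatim}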
\subsection{Formal solution to the linearized Langevin equations}
Now by substituting (\ref{expansion}) into (\ref{linearizedLangevin}) and
projecting onto $\left\{ \mathbf{w}_{i}\right\} $, we obtain the evolution
equation for the expansion coefficients in Eq. (\ref{expansion}):%
\begin{equation}
\dot{c}_{i}=\gamma_{1}\lambda_{i}c_{i}+\sqrt{\gamma_{1}}\left\langle \mathbf{%
w}_{i}|\mathbf{h}\right\rangle , \label{CoefficientEvolution}
\end{equation}
where the index $i$ does not include the Goldstone modes, as commented. We
write down Eq. (\ref{CoefficientEvolution}) in the temporal-frequencies
space. By defining the Fourier transforms
\begin{subequations}
\label{FourierTransf}
\begin{align}
c_{i}(t) & =\frac{1}{2\pi}\int\mathrm{d}\omega~e^{i\omega t}\tilde{c}%
_{i}(\omega), \\
\mathbf{h}(\mathbf{r},t) & =\frac{1}{2\pi}\int\mathrm{d}\omega~e^{i\omega t}%
\mathbf{\tilde{h}}(\mathbf{r},\omega),
\end{align}
and the corresponding inverse transforms
\end{subequations}
\begin{subequations}
\label{InverseFourier}
\begin{align}
\tilde{c}_{i}(\omega) & =\int\mathrm{d}t~e^{-i\omega t}c_{i}(t), \\
\mathbf{\tilde{h}}(\mathbf{r},\omega) & =\int\mathrm{d}t~e^{-i\omega t}%
\mathbf{h}(\mathbf{r},t),
\end{align}
we find a simple expression in the\ temporal spectral domain for Eq.(\ref%
{CoefficientEvolution}), that reads
\end{subequations}
\begin{equation}
i\omega\tilde{c}_{i}(\omega)=\gamma_{1}\lambda_{i}\tilde{c}_{i}(\omega )+%
\sqrt{\gamma_{1}}\left\langle \mathbf{w}_{i}\left( \mathbf{r}\right) |%
\mathbf{\tilde{h}}\left( \mathbf{r},\omega\right) \right\rangle ,
\end{equation}
which gives
\begin{equation}
\tilde{c}_{i}(\omega)=\frac{\sqrt{\gamma_{1}}\left\langle \mathbf{w}%
_{i}\left( \mathbf{r}\right) |\mathbf{\tilde{h}}\left( \mathbf{r}%
,\omega\right) \right\rangle }{i\omega-\gamma_{1}\lambda_{i}}.
\label{CoefOmega}
\end{equation}
By using solution (\ref{CoefOmega}) in Eqs. (\ref{FourierTransf}), we
retrieve the expansion coefficients $c_{i}(t)$%
\begin{equation}
c_{i}(t)=\frac{\sqrt{\gamma_{1}}}{2\pi}\int\mathrm{d}\omega~e^{i\omega t}%
\frac{\left\langle \mathbf{w}_{i}\left( \mathbf{r}\right) |\mathbf{\tilde {h}%
}\left( \mathbf{r},\omega\right) \right\rangle }{i\omega-\gamma
_{1}\lambda_{i}}, \label{CoefficientExpression}
\end{equation}
which is the formal solution of the linearized Langevin equations, Eqs. (\ref%
{expansion}) and (\ref{CoefficientEvolution}).
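The content of this formal solution is transparent for a single mode with a
real eigenvalue $\lambda <0$: the coefficient is then an Ornstein--Uhlenbeck
process whose stationary spectrum is a Lorentzian of half-width $\gamma
_{1}\left\vert \lambda \right\vert $. A minimal Euler--Maruyama sketch
(illustrative values; $\gamma _{1}=1$ and a real white noise of strength $d$
are assumed):
\begin{verbatim}
import numpy as np

# Single-mode version of Eq. (CoefficientEvolution):
#   dc/dt = gamma1*lam*c + sqrt(gamma1)*xi,  <xi(t)xi(t')> = d*delta(t-t')
rng = np.random.default_rng(1)
gamma1, lam, d = 1.0, -0.7, 0.3
dt, n = 1e-2, 2**18

c = np.zeros(n)
for j in range(1, n):
    c[j] = c[j - 1] * (1 + dt * gamma1 * lam) \
           + np.sqrt(gamma1 * d * dt) * rng.normal()

# periodogram vs. the Lorentzian S(omega) = d/(lam^2 + (omega/gamma1)^2)
omega = 2 * np.pi * np.fft.rfftfreq(n, d=dt)
S_est = np.abs(np.fft.rfft(c) * dt)**2 / (n * dt)
S_th = d / (lam**2 + (omega / gamma1)**2)
print("S(0): estimate %.3f vs theory %.3f" % (S_est[1:100].mean(), S_th[0]))
\end{verbatim}
The low-frequency average of the periodogram should approach the Lorentzian
value $d/\lambda ^{2}$, in agreement with the modal correlation spectrum
derived in the next subsection.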
\subsection{Modal correlation spectrum}
Once the time-dependent expansion coefficients $c_{i}(t)$ are known, Eq. (%
\ref{CoefficientExpression}), we can calculate the two--time correlations
between the different modes. The knowledge of these correlations, and
specifically, of their spectra, is necessary in order to characterize some
properties of quantum fluctuations, such as the spectrum of squeezing or the
spectrum of intensity fluctuations of the quantum field exiting the
nonlinear cavity, as will be shown below.
We define the correlation spectrum between two modes labeled by indices $i$
and $j$ as usual%
\begin{equation}
S_{ij}(\omega )=\int \mathrm{d}\tau ~e^{-i\omega \tau }\left\langle
c_{i}(t+\tau ),c_{j}(t)\right\rangle , \label{ModalSpectrumDef}
\end{equation}%
where the correlation%
\begin{multline}
\left\langle c_{i}(t+\tau ),c_{j}(t)\right\rangle = \\
\left\langle c_{i}(t+\tau )c_{j}(t)\right\rangle -\left\langle
c_{i}(t+\tau )\right\rangle \left\langle c_{j}(t)\right\rangle ,
\end{multline}%
and $\left\langle \mathcal{O}\right\rangle $ is the (stochastic) average
value of $\mathcal{O}$. By using Eqs. (\ref{InverseFourier}) and\ (\ref%
{CoefOmega}), $S_{ij}(\omega )$ can be written as
\begin{multline}
S_{ij}(\omega )=\frac{\gamma _{1}}{2\pi }\int \mathrm{d}\omega ^{\prime }%
\frac{e^{i(\omega ^{\prime }+\omega )\tau }}{\left( \gamma _{1}\lambda
_{i}-i\omega \right) \left( \gamma _{1}\lambda _{j}-i\omega ^{\prime
}\right) }\times \\
\left\langle \left\langle \mathbf{w}_{i}|\mathbf{\tilde{h}}(\mathbf{r}%
,\omega )\right\rangle ,\left\langle \mathbf{w}_{j}|\mathbf{\tilde{h}}(%
\mathbf{r},\omega ^{\prime })\right\rangle \right\rangle .
\end{multline}%
Taking into account the properties of noise, Eqs. (\ref{noise}), one obtains
straightforwardly
\begin{multline}
\left\langle \left\langle \mathbf{w}_{i}|\mathbf{\tilde{h}}(\mathbf{r}%
,\omega )\right\rangle ,\left\langle \mathbf{w}_{j}|\mathbf{\tilde{h}}(%
\mathbf{r}^{\prime },\omega ^{\prime })\right\rangle \right\rangle = \\
2\pi \delta \left( \omega +\omega ^{\prime }\right) \int \mathrm{d}%
^{2}r~d_{ij}\left( \mathbf{r},\mathbf{r}\right) ,
\end{multline}%
where%
\begin{multline}
d_{ij}\left( \mathbf{r},\mathbf{r}\right) = \label{dij} \\
w_{i}^{\ast }\left( \mathbf{r}\right) w_{j}^{\ast }\left( \mathbf{r}\right)
\bar{\alpha}_{0}\left( \mathbf{r}\right) +\left[ w_{i}^{+}\left( \mathbf{r}%
\right) \right] ^{\ast }\left[ w_{j}^{+}\left( \mathbf{r}\right) \right]
^{\ast }\bar{\alpha}_{0}^{\ast }\left( \mathbf{r}\right) ,
\end{multline}%
so that the modal correlation spectrum $S_{ij}(\omega )$ can be written as%
\begin{equation}
S_{ij}(\omega )=\frac{D_{ij}}{\left( \lambda _{i}-i\omega /\gamma
_{1}\right) \left( \lambda _{j}+i\omega /\gamma _{1}\right) },
\label{ModalSpectrum}
\end{equation}%
where we have introduced the matrix $D_{ij}$, which we call modal diffusion
matrix (because of the similarity of Eq. (\ref{ModalSpectrum}) with an
spectral matrix \cite{Collet85}), defined by%
\begin{equation}
D_{ij}=\int \mathrm{d}^{2}r\ d_{ij}\left( \mathbf{r},\mathbf{r}\right) .
\label{diffusionmatrix}
\end{equation}
Note that all modal correlations can be obtained just by diagonalizing the
linear problem.
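In practice, once the eigensystem is known on a grid, Eqs. (\ref%
{ModalSpectrum}) and (\ref{diffusionmatrix}) reduce to simple array
operations. A minimal one-dimensional sketch (the arrays holding $w_{i}$,
$w_{i}^{+}$ and $\bar{\alpha}_{0}$ are assumed to be precomputed, e.g. by the
finite-difference diagonalization sketched earlier):
\begin{verbatim}
import numpy as np

def modal_diffusion(w, wp, alpha0, dx):
    """D_ij of Eq. (diffusionmatrix) on a 1D grid; w, wp have shape
    (nmodes, nx) and hold the components w_i and w_i^+; alpha0 has
    shape (nx,)."""
    d = (np.conj(w)[:, None, :] * np.conj(w)[None, :, :] * alpha0
         + np.conj(wp)[:, None, :] * np.conj(wp)[None, :, :]
           * np.conj(alpha0))
    return d.sum(axis=-1) * dx

def modal_spectrum(omega, lam, D, gamma1=1.0):
    """S_ij(omega) of Eq. (ModalSpectrum) at one frequency omega."""
    return D / ((lam[:, None] - 1j * omega / gamma1)
                * (lam[None, :] + 1j * omega / gamma1))

# toy call with random data, just to exercise the shapes
rng = np.random.default_rng(2)
nm, nx = 5, 64
w, wp = rng.normal(size=(2, nm, nx)) + 0j
D = modal_diffusion(w, wp, alpha0=np.full(nx, 1.0 + 0.2j), dx=0.1)
S0 = modal_spectrum(0.0, lam=-np.arange(1.0, nm + 1), D=D)
\end{verbatim}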
\section{Squeezing spectrum in the linear approximation}
We consider now the squeezing properties of the classical DS $\mathcal{%
\bar
{A}}_{1}$ as measured via a balanced homodyne detection experiment
\cite{Gatti95}, which allows the direct measurement of quadrature squeezing.
The quantum field exiting the nonlinear cavity, to be denoted by $\hat
{A}%
_{1,\mathrm{out}}\left( \mathbf{r},t\right) $, is combined at a beam
splitter with a local oscillator field (LOF). This LOF lies in a classical
multimode coherent state $\mathbf{\alpha}_{\mathrm{L}}\left( \mathbf{r}-%
\mathbf{r}_{\mathrm{L}}\left( t\right) \right) $ of intensity much larger
than that of\ $\hat{A}_{1,\mathrm{out}}\left( \mathbf{r},t\right) $. (The
shift $\mathbf{r}_{\mathrm{L}}\left( t\right) $ is introduced in order to
cover the case of a movable LOF for reasons that will be clear below, such
as to consider the possibility to follow the DS movement.)
The difference $\widehat{\Delta I}(t)$ between the intensities $\mathcal{%
\hat
{D}}_{+}$ and $\mathcal{\hat{D}}_{-}$ of the two output ports of the
beam splitter, with
\begin{equation}
\mathcal{\hat{D}}_{\pm}(\mathbf{r},t)=\frac{1}{\sqrt{2}}[\hat{A}_{1,\mathrm{%
out}}(\mathbf{r},t)\pm\alpha_{\mathrm{L}}\left( \mathbf{r}-\mathbf{\mathbf{r}%
}_{\mathrm{L}}\left( t\right) \right) ],
\end{equation}
is then measured, and it turns out to be given by \cite{Gatti95}
\begin{align}
\widehat{\Delta I}(t) & =\int\mathrm{d}^{2}r\left[ \mathcal{\hat{D}}%
_{+}^{\dag}(\mathbf{r},t)\mathcal{\hat{D}}_{+}(\mathbf{r},t)-\mathcal{\hat{D}%
}_{-}^{\dag}(\mathbf{r},t)\mathcal{\hat{D}}_{-}(\mathbf{r},t)\right] \notag
\\
& \equiv\sqrt{N}\hat{E}_{\mathrm{H},\mathrm{out}}(t),
\end{align}
where the projection of the output signal field $\hat{A}_{1,\mathrm{out}}(%
\mathbf{r},t)$ on the LOF has been introduced as the field $\hat
{E}_{%
\mathrm{H},\mathrm{out}}(t)$
\begin{align}
\hat{E}_{\mathrm{H},\mathrm{out}}(t) & =\hat{A}_{\mathrm{H},\mathrm{out}}(t)+%
\hat{A}_{\mathrm{H},\mathrm{out}}^{\dag}(t), \\
\hat{A}_{\mathrm{H},\mathrm{out}}(t) & \equiv\frac{1}{\sqrt{N}}\int\mathrm{d}%
^{2}r~\alpha_{\mathrm{L}}^{\ast}(\mathbf{r}-\mathbf{\mathbf{r}}_{\mathrm{L}%
}\left( t\right) )\hat{A}_{1,\mathrm{out}}(\mathbf{r},t),
\end{align}
with the LOF intensity denoted by%
\begin{equation}
N=\int\mathrm{d}^{2}r\left\vert \alpha_{\mathrm{L}}(\mathbf{r})\right\vert
^{2}. \label{LOFintens}
\end{equation}
By using (\ref{commutationrel}), the spectrum of the difference intensity
fluctuations can be written as%
\begin{multline}
V(\omega )=\int\limits_{-\infty }^{\infty }\mathrm{d}\tau ~e^{-i\omega \tau
}\left\langle \hat{E}_{\mathrm{H},\mathrm{out}}(t+\tau ),\hat{E}_{\mathrm{H},%
\mathrm{out}}(t)\right\rangle \\
=1+\int\limits_{-\infty }^{\infty }\mathrm{d}\tau ~e^{-i\omega \tau
}\left\langle :\hat{E}_{\mathrm{H},\mathrm{out}}(t+\tau ),\hat{E}_{\mathrm{H}%
,\mathrm{out}}(t):\right\rangle \\
=1+S_{\mathrm{out}}(\omega ),
\end{multline}%
which defines the squeezing spectrum $S_{\mathrm{out}}(\omega )$ of the
field exiting the cavity. When $\hat{A}_{\mathrm{out}}(\mathbf{r},t)$ is in
a coherent state, $V(\omega )=1$ and $S_{\mathrm{out}}(\omega )=0$, which
defines the standard quantum limit. Light is said to be squeezed at a
frequency $\omega _{\mathrm{c}}$ when $S_{\mathrm{out}}(\omega _{\mathrm{c}%
})<0$, and the case of complete noise reduction, or perfect squeezing, at $%
\omega _{\mathrm{c}}$ is signaled by $S_{\mathrm{out}}(\omega _{\mathrm{c}%
})=-1$ as in this case $V(\omega _{\mathrm{c}})=0$.
Now the correlations of the output fields can be written in terms of those
of the intracavity fields by using the input-output formalism \cite{Collet84}%
\begin{align}
\left\langle :\hat{A}_{1,\mathrm{out}}^{\dag}(t),\hat{A}_{1,\mathrm{out}%
}(t^{\prime}):\right\rangle & =2\gamma_{1}\left\langle :\hat{A}_{1}^{\dag
}(t),\hat{A}_{1}(t^{\prime}):\right\rangle \label{in-out} \\
& =2\gamma_{1}\left\langle \mathcal{A}_{1}^{+}(t),\mathcal{A}_{1}(t^{\prime
})\right\rangle \notag
\end{align}
and then
\begin{equation}
S_{\mathrm{out}}(\omega)=2\gamma_{1}\int\limits_{-\infty}^{\infty}\mathrm{d}%
\tau~e^{-i\omega\tau}\left\langle \delta\mathcal{E}_{\mathrm{H}%
}(t+\tau),\delta\mathcal{E}_{\mathrm{H}}(t)\right\rangle ,
\end{equation}
where $\delta\mathcal{E}_{\mathrm{H}}(t)=\mathcal{E}_{\mathrm{H}%
}(t)-\left\langle \mathcal{E}_{\mathrm{H}}(t)\right\rangle $ with
\begin{subequations}
\begin{align}
\mathcal{E}_{\mathrm{H}}(t) & =\mathcal{A}_{\mathrm{H}}(t)+\mathcal{A}_{%
\mathrm{H}}^{+}(t), \\
\mathcal{A}_{\mathrm{H}}(t) & =\frac{1}{\sqrt{N}}\int\mathrm{d}^{2}r~\alpha_{%
\mathrm{L}}^{\ast}(\mathbf{r}-\mathbf{r}_{\mathrm{L}}\left( t\right) )%
\mathcal{A}_{1}(\mathbf{r},t).
\end{align}
This expression can be written in a more compact form as
\end{subequations}
\begin{subequations}
\label{Sout}
\begin{align}
S_{\mathrm{out}}\left( \omega\right) & =\frac{2\gamma_{1}}{N}%
\int\limits_{-\infty}^{\infty}\mathrm{d}\tau e^{-i\omega\tau}\langle \delta%
\mathcal{E}_{\mathrm{H}}\left( t+\tau\right) \delta\mathcal{E}_{\mathrm{H}%
}\left( t\right) \rangle, \\
\delta\mathcal{E}_{\mathrm{H}}\left( t\right) & =\langle\boldsymbol{\alpha }%
_{\mathrm{L}}\left( \mathbf{r}+\boldsymbol{\rho}\left( t\right) \right) \mid%
\mathbf{a}_{1}\left( \mathbf{r},t\right) \rangle, \label{deltaEh}
\end{align}
where $\boldsymbol{\rho}\left( t\right) =\mathbf{r}_{1}\left(
t\right) -\mathbf{r}_{\mathrm{L}}\left( t\right) $, and the change of
variables $\mathbf{r}\rightarrow\mathbf{r}-\mathbf{r}_{1}\left( t\right) $
has been introduced. (Recall that $\mathbf{r}_{1}\left( t\right) $ describes
the position of the dissipative structure, which changes because of the
diffusion introduced by quantum fluctuations, Eqs. (\ref{dif}).)
The output squeezing spectrum (\ref{Sout}) can now be calculated in terms of
the modal correlation spectrum (\ref{ModalSpectrum}): The intracavity field
fluctuations $\mathbf{a}_{1}(\mathbf{r},t)$ can be written in terms of the
expansion (\ref{expansion}) --recall that Goldstone modes are excluded--, so
that the output squeezing spectrum (\ref{Sout}) takes the form
\end{subequations}
\begin{equation}
S_{\mathrm{out}}(\omega )=\frac{2\gamma _{1}}{N}\sum\limits_{i,j}\left%
\langle \boldsymbol{\alpha }_{\mathrm{L}}|\mathbf{v}_{i}\right\rangle
\left\langle \boldsymbol{\alpha }_{\mathrm{L}}|\mathbf{v}_{j}\right\rangle
S_{ij}(\omega ), \label{Specsum}
\end{equation}%
where
\begin{multline}
\left\langle \boldsymbol{\alpha }_{\mathrm{L}}|\mathbf{v}_{i}\right\rangle
=\int\limits_{-\infty }^{\infty }\mathrm{d}^{2}r~\alpha _{\mathrm{L}}^{\ast
}(\mathbf{r}+\boldsymbol{\rho }\left( t\right) )v_{i}(\mathbf{r}+\mathbf{r}%
_{1}\left( t\right) )+ \label{aux} \\
\int\limits_{-\infty }^{\infty }\mathrm{d}^{2}r~\alpha _{\mathrm{L}}(\mathbf{%
r}+\boldsymbol{\rho }\left( t\right) )v_{i}^{+}(\mathbf{r}+\mathbf{r}%
_{1}\left( t\right) ),
\end{multline}%
and $N$ and $S_{ij}(\omega )$ are given by Eqs. (\ref{LOFintens}) and (\ref%
{ModalSpectrum}), respectively.
The projections of the eigenvectors $\mathbf{v}_{i}$ onto the LOF,
$\left\langle \boldsymbol{\alpha}_{\mathrm{L}}|\mathbf{v}_{i}\right\rangle $,
are taken with the scalar product (\ref{ScalarProduct}).
Up to this point we have dealt with complete detection of the beam, that
is, we have considered an arbitrarily large detector covering the whole
transverse profile of the outgoing field. We may now ask about the effect of
a finite detector size, which corresponds to a more realistic description.
Thus we consider now a movable detector with finite transverse size $\Sigma $%
, which allows us to sweep the transverse profile of the outgoing field and
study the spatial distribution of squeezing. Mathematically, the use of a
finite size detector corresponds to limiting the spatial integrations in Eq. (%
\ref{Specsum}) to a domain $\Sigma $ around the "detector position" $\mathbf{%
r}_{0}$,\ so replacing $\left\langle \boldsymbol{\alpha }_{\mathrm{L}}|%
\mathbf{v}_{i}\right\rangle $, Eq. (\ref{aux}), by%
\begin{multline}
\left\langle \boldsymbol{\alpha }_{\mathrm{L}}|\mathbf{v}_{i}\right\rangle
_{\left\{ \mathbf{r}_{0},\Sigma \right\} }\equiv \label{PF} \\
\int\limits_{\left\{ \mathbf{r}_{0},\Sigma \right\} }\mathrm{d}^{2}r~\alpha
_{\mathrm{L}}^{\ast }\left( \mathbf{r}+\boldsymbol{\rho }\left( t\right)
\right) v_{i}\left( \mathbf{r}+\mathbf{r}_{1}\left( t\right) \right) + \\
\int\limits_{\left\{ \mathbf{r}_{0},\Sigma \right\} }\mathrm{d}^{2}r~\alpha
_{\mathrm{L}}\left( \mathbf{r}+\boldsymbol{\rho }\left( t\right) \right)
v_{i}^{+}\left( \mathbf{r}+\mathbf{r}_{1}\left( t\right) \right) ,
\end{multline}%
where $\left\{ \mathbf{r}_{0},\Sigma \right\} $ represents the spatial
region occupied by the detector, and $N$, Eq. (\ref{LOFintens}), by
\begin{equation}
N_{\left\{ \mathbf{r}_{0},\Sigma \right\} }\equiv \int\limits_{\left\{
\mathbf{r}_{0},\Sigma \right\} }\mathrm{d}^{2}r\left\vert \alpha _{\mathrm{L}%
}(\mathbf{r})\right\vert ^{2}. \label{NFinite}
\end{equation}%
Finally we can compute the squeezing spectrum measured when the finite
detector is placed at $\mathbf{r}_{0}$ through%
\begin{multline}
S_{\mathrm{out}}(\omega ;\mathbf{r}_{0})=\frac{2\gamma _{1}}{N_{\left\{
\mathbf{r}_{0},\Sigma \right\} }}\times \label{spatialS} \\
\sum\limits_{i,j}\left\langle \boldsymbol{\alpha }_{\mathrm{L}}|\mathbf{v}%
_{i}\right\rangle _{\left\{ \mathbf{r}_{0},\Sigma \right\} }\left\langle
\boldsymbol{\alpha }_{\mathrm{L}}|\mathbf{v}_{j}\right\rangle _{\left\{
\mathbf{r}_{0},\Sigma \right\} }S_{ij}(\omega ).
\end{multline}%
As can be noted in Eq. (\ref{spatialS}), the obtained level of squeezing
depends on both the area and the position of the detector.
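Putting the pieces together, the modal sums (\ref{Specsum}) and (\ref%
{spatialS}) also reduce to a few array operations. A minimal one-dimensional
sketch (with $\boldsymbol{\rho }=0$, i.e. a LOF that follows the DS; the
eigendata, $D_{ij}$ and the grid are assumed available from the previous
sketches):
\begin{verbatim}
import numpy as np

def lof_projection(aL, v, vp, dx, window=None):
    """<alpha_L|v_i> of Eqs. (aux)/(PF) with rho = 0; v, vp have shape
    (nmodes, nx) and hold the components v_i and v_i^+; window is a
    boolean mask selecting the detector region {r0, Sigma}."""
    if window is None:
        window = np.ones(aL.shape, dtype=bool)
    integrand = np.conj(aL)[None, :] * v + aL[None, :] * vp
    return integrand[:, window].sum(axis=-1) * dx

def squeezing_spectrum(omega, lam, D, proj, N, gamma1=1.0):
    """S_out(omega) of Eqs. (Specsum)/(spatialS); the spurious imaginary
    residue left by finite-precision eigendata is discarded."""
    S_ij = D / ((lam[:, None] - 1j * omega / gamma1)
                * (lam[None, :] + 1j * omega / gamma1))
    return (2 * gamma1 / N) * np.real(
        (proj[:, None] * proj[None, :] * S_ij).sum())
\end{verbatim}
For a finite detector, the same window must be used for $N$, Eq. (\ref%
{NFinite}).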
\section{Intensity fluctuations spectrum}
Now we apply the technique used for calculating the squeezing spectrum to
the derivation of the spectrum of intensity fluctuations. For the sake of
simplicity here we ignore the diffusive movement of the DS, i.e., we assume
that the detector can follow such motion.
The intensity fluctuations spectrum can be directly observed with a single
photodetector and is given by \cite{Glauber63}%
\begin{multline}
V_{I}\left( \omega \right) =\int\limits_{-\infty }^{\infty }\mathrm{d}%
t~e^{-i\omega t}\diint \mathrm{d}^{2}r\ \mathrm{d}^{2}r^{\prime }\times
\label{intensityspec} \\
\left\langle \delta \hat{I}_{\mathrm{out}}\left( \mathbf{r},t\right) \delta
\hat{I}_{\mathrm{out}}\left( \mathbf{r}^{\prime },0\right) \right\rangle ,
\end{multline}%
where
\begin{equation}
\delta \hat{I}_{\mathrm{out}}\left( \mathbf{r},t\right) =\hat{I}_{\mathrm{out%
}}\left( \mathbf{r},t\right) -\left\langle \hat{I}_{\mathrm{out}}\left(
\mathbf{r},t\right) \right\rangle ,
\end{equation}%
and the intensity of the outgoing field is
\begin{equation}
\hat{I}_{\mathrm{out}}\left( \mathbf{r},t\right) =\hat{A}_{\mathrm{out}%
}^{\dag }\left( \mathbf{r},t\right) \hat{A}_{\mathrm{out}}\left( \mathbf{r}%
,t\right) .
\end{equation}%
By making use of (\ref{commutationrel})\ and taking account of the
input-output relation (\ref{in-out}) one obtains \cite{Gatti95}%
\begin{equation}
V_{I}(\omega )=S_{\mathrm{SN}}\left[ 1+S_{I}(\omega )\right] ,
\end{equation}%
where the term corresponding to the shot noise reads
\begin{equation}
S_{\mathrm{SN}}=2\gamma _{1}\int \mathrm{d}^{2}r~\left\langle \hat{I}%
_{1}\left( \mathbf{r},t\right) \right\rangle ,
\end{equation}%
and
\begin{multline}
S_{I}(\omega )=\frac{4\gamma _{1}^{2}}{S_{\mathrm{SN}}}\int\limits_{-\infty
}^{\infty }\mathrm{d}t~e^{-i\omega t}\diint \mathrm{d}^{2}r\ \mathrm{d}%
^{2}r^{\prime }\times \label{intensityS} \\
\left\langle :\delta \hat{I}_{1}\left( \mathbf{r},t\right) \delta \hat{I}%
_{1}\left( \mathbf{r}^{\prime },0\right) :\right\rangle ,
\end{multline}%
where
\begin{subequations}
\begin{align}
\delta \hat{I}_{1}(\mathbf{r},t)& =\hat{I}_{1}(\mathbf{r},t)-\left\langle
\hat{I}_{1}(\mathbf{r},t)\right\rangle , \\
\hat{I}_{1}(\mathbf{r},t)& =\hat{A}_{1}^{\dag }(\mathbf{r},t)\hat{A}_{1}(%
\mathbf{r},t).
\end{align}
As stated, our interest is centered on the quantum fluctuation properties
of the DS given by (\ref{ClassicalSolution}). When this form of the field is
considered, the intensity fluctuations spectrum reads, to first order in the
fluctuations,
\end{subequations}
\begin{equation}
S_{I}(\omega)=\frac{2\gamma_{1}}{\bar{N}}\int\limits_{-\infty}^{\infty }%
\mathrm{d}\tau e^{-i\omega\tau}\langle\delta\mathcal{E}_{\mathrm{I}}\left(
t+\tau\right) \delta\mathcal{E}_{\mathrm{I}}\left( t\right) \rangle,
\label{intensitySbis}
\end{equation}
where
\begin{subequations}
\begin{align}
\delta\mathcal{E}_{\mathrm{I}}\left( t\right) & =\langle\mathbf{\bar{A}}%
_{1}\left( \mathbf{r,}t\right) \mid\mathbf{a}_{1}\left( \mathbf{r},t\right)
\rangle, \\
\bar{N} & =\int\mathrm{d}^{2}r~\left\langle \left\vert \mathcal{\bar{A}}_{1}(%
\mathbf{r})\right\vert ^{2}\right\rangle , \\
\mathbf{\bar{A}}_{1}\left( \mathbf{r}\right) & \equiv(\mathcal{\bar{A}}_{1}(%
\mathbf{r}),\mathcal{\bar{A}}_{1}^{\ast}(\mathbf{r}))^{T}.
\end{align}
Notice that $\mathbf{\bar{A}}_{1}\left( \mathbf{r}\right) $ is the vector
which corresponds to the classical DS, given by Eq.(\ref{ClassicalSolution}).
As can be noted, the intensity fluctuations spectrum, Eq.(\ref{intensitySbis}%
), has the same expression as the squeezing spectrum, Eq.(\ref{Sout}), but
with the classical DS solution acting as LOF \cite{Gatti95}. So, as it
occurred previously with the output squeezing spectrum, the intensity
fluctuations spectrum can be written in terms of the modal correlation
spectrum (\ref{ModalSpectrum})
\end{subequations}
\begin{equation}
S_{I}(\omega)=\frac{2\gamma_{1}}{\bar{N}}\sum\limits_{i,j}\left\langle
\mathbf{\bar{A}}_{1}|\mathbf{v}_{i}\right\rangle \left\langle \mathbf{\bar{A}%
}_{1}|\mathbf{v}_{j}\right\rangle S_{ij}(\omega), \label{intensityProject}
\end{equation}
where%
\begin{equation}
\left\langle \mathbf{\bar{A}}_{1}|\mathbf{v}_{i}\right\rangle =\int
\limits_{-\infty}^{\infty}\mathrm{d}^{2}r~\left[ \mathcal{\bar{A}}_{1}^{\ast
}(\mathbf{r})v_{i}(\mathbf{r})+\mathcal{\bar{A}}_{1}(\mathbf{r})v_{i}^{+}(%
\mathbf{r})\right] .
\end{equation}
Finally, and analogously to what we did with the squeezing spectrum, when a
finite detector of transverse size $\Sigma $ positioned at $\mathbf{r}_{0}$
is considered, the intensity fluctuations are given by Eq. (\ref%
{intensityProject}) after replacing $\left\langle \mathbf{\bar{A}}_{1}|%
\mathbf{v}_{i}\right\rangle $ by $\left\langle \mathbf{\bar{A}}_{1}|%
\mathbf{v}_{i}\right\rangle _{\left\{ \mathbf{r}_{0},\Sigma \right\} }$ and $%
\bar{N}$ by $\bar{N}_{\left\{ \mathbf{r}_{0},\Sigma \right\} }$, so that the
intensity fluctuations spectrum reads%
\begin{multline}
S_{I}(\omega ;\mathbf{r}_{0})=\frac{2\gamma _{1}}{\bar{N}_{\left\{ \mathbf{r}%
_{0},\Sigma \right\} }}\times \\
\sum\limits_{i,j}\left\langle \mathbf{\bar{A}}_{1}|\mathbf{v}%
_{i}\right\rangle _{\left\{ \mathbf{r}_{0},\Sigma \right\} }\left\langle
\mathbf{\bar{A}}_{1}|\mathbf{v}_{j}\right\rangle _{\left\{ \mathbf{r}%
_{0},\Sigma \right\} }S_{ij}(\omega ),
\end{multline}%
where $\left\langle \mathbf{\bar{A}}_{1}|\mathbf{v}_{i}\right\rangle
_{\left\{ \mathbf{r}_{0},\Sigma \right\} }$ and $\bar{N}_{\left\{ \mathbf{r}%
_{0},\Sigma \right\} }$ are given, respectively, by Eqs.(\ref{PF}) and (\ref%
{NFinite}) when $\boldsymbol{\alpha }_{\mathrm{L}}$ is replaced by $\mathbf{%
\bar{A}}_{1}$.
\section{A general result on the squeezing of dissipative structures}
Let us assume that we can set $\mathbf{\rho}=0$ in Eq. (\ref{aux}). This
means that the LOF can be shifted in such a way that it exactly follows the
diffusive movement of the dissipative structure whose squeezing is to
be measured. Further, let us choose the LOF $\boldsymbol{\alpha}_{\mathrm{L}%
}=\mathbf{w}_{2x}$, Eq. (\ref{lofmagic}), i.e., a LOF whose shape is $%
\alpha_{\mathrm{L}}=iG_{x}$. (We notice that the result that follows is
valid for any $\mathbf{\alpha}_{\mathrm{L}}$ that corresponds to a linear
combination of $\mathbf{w}_{2x}$ and $\mathbf{w}_{2y}$.)
By doing this it turns out that $\delta \mathcal{E}_{\mathrm{H}}\left(
t\right) =c_{2x}\left( t\right) $, see Eqs. (\ref{expansion}) and (\ref%
{deltaEh}). Standard techniques \cite{Gardiner00} applied to Eq. (\ref%
{CoefficientEvolution}) for $i=2x$%
\begin{equation}
\dot{c}_{2x}=-2\gamma _{1}c_{2x}+\sqrt{\gamma _{1}}\xi _{2x}, \label{c2x}
\end{equation}%
where $\xi _{2x}\left( t\right) =\left\langle \mathbf{w}_{2x}|\mathbf{h}%
\right\rangle $ is the noise source and $c_{2x}\left( t\right) =\left\langle
\mathbf{w}_{2x}|\mathbf{a}_{1}\right\rangle $, allow the computation of the
stochastic correlation $\langle \delta \mathcal{E}_{\mathrm{H}}\left( t+\tau
\right) \delta \mathcal{E}_{\mathrm{H}}\left( t\right) \rangle $, that turns
out to be
\begin{equation}
\langle \delta \mathcal{E}_{\mathrm{H}}\left( t+\tau \right) \delta \mathcal{%
E}_{\mathrm{H}}\left( t\right) \rangle =-\tfrac{1}{2}N_{\mathrm{H}%
}e^{-2\gamma _{1}\left\vert \tau \right\vert }.
\end{equation}%
Then, by using Eq. (\ref{Sout}) we get%
\begin{equation}
S_{\mathrm{out}}\left( \omega \right) =-\frac{1}{1+\left( \omega /2\gamma
_{1}\right) ^{2}}, \label{S2}
\end{equation}%
which is the main result in \cite{EPL}. Of course the same result is
obtained by using Eqs. (\ref{dij}), (\ref{ModalSpectrum}) and (\ref%
{diffusionmatrix}).
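Numerically, Eq. (\ref{S2}) is simply the statement that the homodyne noise
$V(\omega )=1+S_{\mathrm{out}}(\omega )$ vanishes at zero frequency,
irrespective of any parameter; a short check (illustrative $\gamma _{1}$):
\begin{verbatim}
import numpy as np

gamma1 = 1.0
omega = np.linspace(-10, 10, 401) * gamma1
S_out = -1.0 / (1.0 + (omega / (2 * gamma1))**2)   # Eq. (S2)
V = 1.0 + S_out                                    # homodyne noise spectrum
print("V at omega = 0:", V[np.abs(omega).argmin()])  # -> 0.0
\end{verbatim}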
It is to be remarked that Eq. (\ref{c2x}) is analogous to that derived in
\cite{Gatti01} for the stationary phase of the hexagonal mode appearing in
the Kerr cavity model, and that it was later interpreted in \cite{Gomila02}
as the hexagonal pattern transverse linear momentum. Then we can say that
the above result means that the transverse linear momentum of any stationary
DS appearing in the large pump detuning limit of the DOPO\ model is
perfectly squeezed in the linearized theory. This is a reasonable result as
it is immediately related to the fact that the transverse position of the DS
is completely undetermined as it diffuses with time.
Eq. (\ref{S2}) implies that $S\left( \omega=0\right) =-1$, which means that
within the validity domain of the linear theory we are using, any stationary
dissipative structure sustained by the DOPO in the large pump detuning limit
displays \textit{perfect squeezing} at $\omega=0$ when probed with the
appropriate LOF. As Eq. (\ref{S2}) is independent of the kind of dissipative
structure and of the system parameters, the result is, remarkably, universal
and independent of the existence of bifurcations. Let us
emphasize that the appropriate LOF ($\alpha_{\mathrm{L}}=iG_{x\left(
y\right) }$) is, in principle, easily implementable as it corresponds to the
$\pi/2$ phase-shifted gradient of the corresponding DS envelope which can be
easily synthesized by, e.g., Fourier filtering.
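A sketch of this synthesis in one dimension: differentiation of the DS
envelope amounts to multiplication by $ik$ in the spatial-frequency domain,
so the optimal LOF is obtained with a single Fourier filter (the
bright-soliton profile of Sec. VI and all values below are illustrative):
\begin{verbatim}
import numpy as np

nx, Lx = 512, 60.0
x = np.linspace(-Lx / 2, Lx / 2, nx, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(nx, d=Lx / nx)

mu, Delta1 = 1.05, 0.5
beta = np.sqrt(Delta1 + np.sqrt(mu**2 - 1))
Abar = np.sqrt(2) * beta * np.exp(0.5j * np.arccos(1 / mu)) / np.cosh(beta * x)

# alpha_L = i*dAbar/dx: the pi/2 phase-shifted gradient, via an ik filter
alpha_L = 1j * np.fft.ifft(1j * k * np.fft.fft(Abar))
\end{verbatim}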
In \cite{EPL} we discussed to what extent the assumption $\boldsymbol{%
\rho }=0$ is reasonable: It is, indeed, as the diffusion of the dissipative
structures is very slow because of the large number of photons they carry,
which acts as an inertial mass \cite{EPL}.
We find it important to make here a general comment on the mathematical
technique presented in the previous sections. We must note that the
linearized approach is valid, in principle, when all eigenvalues have
negative real part: in this case all fluctuations are damped and it is
reasonable to assume that they will remain small enough. Remarkably, in our
case there always exist two null eigenvalues, which are associated with the
Goldstone modes (see Property 1 in Subsection II C). Nevertheless these null
eigenvalues do not make the linear approach invalid as the undamped
fluctuations do not concern any particular field mode but the position of
the dissipative structure, which is decoupled from the rest of fluctuations
and undergoes a continuous diffusion (as it occurs, e.g., with the phase
difference in \cite{Reynaud87,Reid88}). Numerical research carried out in
vectorial Kerr cavities \cite{Zambrini00} reinforces this confidence. In
summary: in spite of the null eigenvalues, we can be confident that the
linearized description of quantum fluctuations will be reasonably accurate,
and that a nonlinear treatment \cite{Drummond05} will not lead to
dramatically different results.
\section{Squeezing properties of the DOPO bright cavity soliton}
In this second part of the article, we study in detail the squeezing
properties of the DOPO\ bright cavity soliton in the large pump detuning
limit. First, in Subsection VI.A we review the main properties of this
solution and then, in Subsection VI.B, we apply the theory developed in the
first part to it. For the sake of simplicity we shall consider in this
second part only the one--dimensional case, that is, we assume that the DOPO
works embedded in a waveguide that avoids diffraction in the $y$ transverse
dimension whilst in the $x$ transverse dimension the system aperture is
arbitrarily large. Moreover, we shall assume from now on that the LOF used
in the homodyne detection scheme can be moved, in the transverse dimension,
in such a way that its movement exactly matches the diffusive motion of the
bright cavity soliton, and thus we shall not take into account the diffusive
motion of the DS (see \cite{EPL} for a quantitative discussion in which we
show that this is a reasonable assumption).
\subsection{The DOPO bright cavity soliton}
As stated in Subsection II.A, in the limit of large pump detuning we are
considering here, the classical description of the DOPO is the PDNLSE (Eq. (%
\ref{AdiabaticLangevinClassic}) in Appendix B). It is well known that the
one--dimensional PDNLSE supports two different types of localized structures
(cavity solitons in our context): Dark cavity solitons (tanh--type localized
structures) in the self-defocusing case $\sigma =-1$, and bright\ cavity
solitons (sech--type localized structures) in the self-focusing case $\sigma
=+1$ \cite{Fauve90,Miles84,Elphick89,Barashenkov91}. We shall treat in the
following the bright cavity soliton and thus we take $\sigma =+1$.
Before going on we find it convenient to review the main solutions of the
PDNLSE. This equation has only two free parameters (the parametric pump
parameter $\mu $ and the cavity detuning $\Delta _{1}$) as the rest of the
parameters ($\gamma _{1}$, $l_{1}$, and $\kappa ^{2}$) can be easily removed
by normalizing the time and space coordinates as well as the field amplitude
(see Eq. (\ref{PDNLSE}) in Appendix B). When $\Delta _{1}<0$, the trivial
state, $\mathcal{\bar{A}}_{1}(x)=0$, undergoes a supercritical bifurcation
towards a patterned state (a roll, or stripe, pattern) at $\mu =1$.
On the contrary, when $\Delta _{1}>0$ the bifurcation affecting the trivial state
is subcritical and occurs at $\mu =\mu _{0}\equiv \sqrt{1+\Delta _{1}^{2}}$.
For $\mu >\mu _{0}$, and positive $\Delta _{1}$, dynamic patterns are found
\cite{Longhi95}.
The bright cavity soliton has the explicit expression%
\begin{equation}
\mathcal{\bar{A}}_{1,\mathrm{BS}}(x)=\sqrt{2}\beta ~e^{i\phi }\func{sech}%
(\beta x), \label{BS}
\end{equation}%
with
\begin{subequations}
\label{solitonsPGLE}
\begin{align}
\beta ^{2}& =\Delta _{1}\pm \sqrt{\mu ^{2}-1}, \label{beta} \\
\ \cos (2\phi )& =\mu ^{-1}.
\end{align}%
This solution exists for $\Delta _{1}>0$ and $1<\mu <\mu _{0}$ \cite%
{Alexeeva99}, and is stable in a wide domain of parameters, although for
large enough $\Delta _{1}$ it becomes unstable through a Hopf bifurcation,
giving rise to temporal oscillations (self-pulsing solitons) \cite{Bondila95}%
. Thus the bright cavity soliton can undergo three different bifurcations:
(i) a tangent bifurcation at $\mu =1$ (the BS does not exist for $\mu <1$),
(ii) a bifurcation at $\mu =\mu _{0}$ (the BS does not exist for $\mu >\mu
_{0}$), and (iii) a Hopf bifurcation for large enough $\Delta _{1}$ that
transforms the stationary BS into a self--pulsing BS. In Fig. 1 we represent
in the $\left( \Delta _{1},\mu \right) $ plane the domains of
existence of the different solutions just discussed.
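As an illustration of these existence conditions, the following Python
sketch (an illustrative helper of ours, not taken from any published code)
evaluates the bright-soliton profile of Eq. (\ref{BS}) and enforces
$\Delta _{1}>0$, $1<\mu <\mu _{0}$:
\begin{verbatim}
import numpy as np

def bright_soliton(x, mu, Delta1, branch=+1):
    # A(x) = sqrt(2)*beta*exp(i*phi)*sech(beta*x), with
    # beta^2 = Delta1 +/- sqrt(mu^2 - 1) and cos(2*phi) = 1/mu
    mu0 = np.sqrt(1.0 + Delta1**2)
    if not (Delta1 > 0.0 and 1.0 < mu < mu0):
        raise ValueError("BS exists only for Delta1 > 0 and 1 < mu < mu0")
    beta = np.sqrt(Delta1 + branch * np.sqrt(mu**2 - 1.0))
    phi = 0.5 * np.arccos(1.0 / mu)
    return np.sqrt(2.0) * beta * np.exp(1j * phi) / np.cosh(beta * x)

x = np.linspace(-10.0, 10.0, 1001)
A = bright_soliton(x, mu=1.2, Delta1=1.2)
\end{verbatim}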
It is convenient to briefly comment here on the similarities between the
bright cavity soliton and the temporal soliton of the nonlinear Schr\"{o}%
dinger equation (NLSE). The NLSE is a conservative equation that can be
written without free parameters. Then, there is a family of sech--type
solitons, that coexist, and for which certain quantities (such as the
energy) are conserved. This is very different from the PDNLSE which is a
nonconservative equation governed by two parameters. For given $\mu$ and $%
\Delta_{1}$, the sech--type solution is unique and can be stable or
unstable. In fact the bright cavity soliton is not a true soliton and its
name can be misleading when comparisons between these similar but actually
quite different equations (NLSE and PDNLSE) are made. In particular, these
differences manifest themselves in the spectra of $\mathcal{L}$ and$~\mathcal{L}^{\dag}$%
.
We have shown that in order to calculate the squeezing spectrum, given by
Eq. (\ref{Specsum}), one needs to evaluate the spectra (\ref{Eigensystem})
of both $\mathcal{L}$ and$~\mathcal{L}^{\dag }$ (\ref{LinearOperator}) for $%
\mathcal{\bar{A}}_{1}(\mathbf{r})=\mathcal{\bar{A}}_{1,\mathrm{BS}}(x)$, Eq.
(\ref{BS}). The spectrum of $\mathcal{L}$ in this case has been extensively
studied \cite{Barashenkov91}. It consists of a continuous spectrum, with
eigenvalues of the form
\end{subequations}
\begin{subequations}
\begin{align}
\lambda _{s}(k)& =-1+s\sqrt{\mu ^{2}-\Delta _{k}^{2}}, \\
\Delta _{k}& =\Delta _{1}+sk^{2},\text{\ \ }s=\pm 1,\ \ \ k\in \lbrack
0,\infty )
\end{align}%
and of a discrete spectrum with eigenvalues $\{\lambda _{i}\}_{i=1}^{D}$.
In general the spectra of $\mathcal{L}$ and$~\mathcal{L}^{\dag }$ must be computed
numerically, as analytical expressions cannot be derived; we have done so
by adapting the Fourier method described in \cite{Alexeeva99}.
Nevertheless, some eigenvectors of the discrete spectrum can be computed
analytically for some parameter sets. In particular, we have derived the
four eigenvectors corresponding to the bifurcation occurring at $\mu =1$.
These eigenvectors are explicitly given in Appendix B.
A crucial point is whether the set of eigenvectors of $\mathcal{L}$ and$~%
\mathcal{L}^{\dag}$ forms a (biorthonormal) basis or not. We commented above
that for the similar problem of the nonlinear Schr\"{o}dinger equation,
describing temporal fiber solitons, the set of eigenvectors of $\mathcal{L}$
and$~\mathcal{L}^{\dag}$ does not form a basis \cite{Kozlov03}. As in our case
analytical expressions are not available, our strategy has consisted in
discretizing the transverse spatial coordinate and numerically diagonalizing $%
\mathcal{L}$ and$~\mathcal{L}^{\dag}$. Our numerical results show that the
number of independent eigenvectors coincides with the dimension of the
matrices, which constitutes numerical evidence that for this particular
transverse pattern, the eigenvectors of $\mathcal{L}$ and$~\mathcal{L}%
^{\dag} $ form a (biorthonormal) basis, whereas this does not happen in the
conservative case. We see that the NLSE and the PDNLSE are very different
problems indeed.
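A minimal Python sketch of this check, assembling the discretized operator
of Appendix B with a periodic finite-difference Laplacian (parameter values
and discretization are illustrative, not those of the actual computation),
reads:
\begin{verbatim}
import numpy as np

mu, Delta1, kappa = 1.2, 1.2, 1.0
N, h = 200, 0.1
x = (np.arange(N) - N // 2) * h

beta = np.sqrt(Delta1 + np.sqrt(mu**2 - 1.0))
phi = 0.5 * np.arccos(1.0 / mu)
sech2 = 1.0 / np.cosh(beta * x)**2

# 1D Laplacian with periodic boundaries (for simplicity)
D2 = (-2.0 * np.eye(N) + np.eye(N, k=1) + np.eye(N, k=-1)) / h**2
D2[0, -1] = D2[-1, 0] = 1.0 / h**2

L1 = (-(1.0 + 1j * Delta1) * np.eye(N) + 1j * D2
      + 1j * (2.0 * beta / kappa)**2 * np.diag(sech2))
A0 = (mu * np.eye(N)
      + 1j * (2.0 * beta / kappa)**2 * np.exp(2j * phi) * np.diag(sech2))

L = np.block([[L1, A0], [A0.conj(), L1.conj()]])
lam, W = np.linalg.eig(L)
# full rank <=> the eigenvectors span the whole space
print(np.linalg.matrix_rank(W), 2 * N)
\end{verbatim}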
\subsection{Squeezing properties of the unidimensional bright cavity soliton}
Now we are in a position to study the squeezing properties of the DOPO\
bright cavity soliton. Of course the amount of squeezing to be obtained will
depend on the local oscillator field (LOF) one uses in the homodyne
measurements and, in general, on the parameter values (pump and detuning).
We shall proceed in our study as follows: First, in Subsection VI.B.1, we
shall particularize the general result obtained in \cite{EPL} and derived
above in Section III for the bright cavity soliton. Then, in Subsection
VI.B.2, we shall consider the squeezing at the bifurcation points (which
have been discussed in Subsection VI.A). Finally, in Subsection VI.B.3 we
shall consider the case of a plane--wave LOF. In this last case we discuss
first how the amount of squeezing changes with the parameters (for
arbitrarily large detectors), and then how the level of squeezing depends on
the size of the photodetectors when these have a finite transverse dimension.
\subsubsection{Squeezing of the cavity soliton linear momentum}
We showed above that there is a special mode that is perfectly
squeezed in the linear approximation, which is equivalent to saying that the
transverse linear momentum of the DS is perfectly squeezed. We saw that the
fluctuations of this mode are detected by choosing the LOF $\boldsymbol{%
\alpha }_{\mathrm{L}}(x)=\mathbf{w}_{2x}(x)$, that is, $\alpha _{\mathrm{L}}(%
\mathbf{r})=i\partial _{x}\mathcal{\bar{A}}_{1,\mathrm{BS}}\left( x\right) $%
, see Eq.(\ref{lofmagic}). In Fig. 2 we represent the amplitude of this LOF
with a dashed line together with the soliton amplitude, given by Eq. (\ref%
{BS}). (As the phase factor $\exp \left( i\phi \right) $ is space
independent, Eq. (\ref{BS}), it has been discarded in making this plot.)
The similarity between $w_{2x}(x)$ and the Gauss--Hermite function
\end{subequations}
\begin{equation}
GH_{1}(x)\equiv ie^{i\phi }xe^{-\frac{1}{2}(x/\xi )^{2}}, \label{GH}
\end{equation}%
is immediate (in Fig. 2 the Gauss--Hermite function is also plotted), which
suggests the use of this function, which is relatively easy to generate, as
a LOF. In order to calculate the level of squeezing that would be detected
by using the Gauss--Hermite LOF, Eq. (\ref{GH}), we take advantage of the
general theory presented in the first part of this article:\ We expand the
LOF on the basis of $\mathcal{L}^{\dag }$ as%
\begin{equation}
\boldsymbol{\alpha }_{\mathrm{L}}=\sum_{i}\left\langle \mathbf{v}_{i}|%
\boldsymbol{\alpha }_{\mathrm{L}}\right\rangle \mathbf{w}_{i},
\end{equation}%
excluding the Goldstone modes, and the squeezing spectrum is given by%
\begin{equation}
S_{\mathrm{out}}(\omega )=\frac{2\gamma }{N}\sum\limits_{i,j}\left\langle
\mathbf{v}_{i}|\boldsymbol{\alpha }_{\mathrm{L}}\right\rangle \left\langle
\mathbf{v}_{j}|\boldsymbol{\alpha }_{\mathrm{L}}\right\rangle S_{ij}(\omega
),
\end{equation}%
where the modal correlation matrix, Eq.(\ref{ModalSpectrum}), is evaluated
from the computed spectra of $\mathcal{L}$ and$~\mathcal{L}^{\dag }$. The
accuracy of the numerical method was checked by computing $S_{\mathrm{out}%
}(\omega )$ when $\boldsymbol{\alpha }_{\mathrm{L}}(x)=\mathbf{w}_{2}(x)$,
yielding an error less than $10^{-13}$. The influence of the Gauss--Hermite
LOF width and position was already presented in \cite{EPL}, and we reproduce
it in Fig. 3 for the sake of completeness. Notice that the level of
squeezing is quite large even when the LOF position or width are not well
matched to those of mode $\mathbf{w}_{2}(x)$.
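Schematically, once the overlaps $\left\langle \mathbf{v}_{i}|\boldsymbol{%
\alpha }_{\mathrm{L}}\right\rangle $ and the modal correlation matrix $%
S_{ij}(\omega )$ have been computed, the assembly of the detected spectrum
amounts to the quadratic form below (Python sketch with placeholder data;
all names are ours):
\begin{verbatim}
import numpy as np

def squeezing_spectrum(c, S_modal, gamma, N):
    # S_out(omega) = (2*gamma/N) * sum_{i,j} c_i c_j S_ij(omega)
    return (2.0 * gamma / N) * np.einsum('i,j,ij->', c, c, S_modal)

# placeholder illustration with random modal data
m = 6
c = np.random.randn(m)
S_modal = np.random.randn(m, m)
print(squeezing_spectrum(c, S_modal, gamma=1.0, N=1.0))
\end{verbatim}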
\subsubsection{Squeezing at the bifurcation points}
At the bifurcation points there is, at least, one null eigenvalue, apart
from that corresponding to the Goldstone mode. This implies the existence of
a mode, different from $\mathbf{w}_{2}(x)$, whose eigenvalue reaches its
minimum value. This mode is expected to be perfectly squeezed in the linear
approximation. The squeezing of these modes is the analogue of the usual
squeezing at the bifurcation points that has been repeatedly studied in a
large number of nonlinear cavities \cite{Meystre91,Walls94}.
We have seen that the bright cavity soliton can undergo three different
bifurcations at $\mu=1$, $\mu=\mu_{0}$, and $\mu=\mu_{\mathrm{HB}}$, see
Fig. 1. Let us consider first the bifurcation occurring at $\mu=1$ (the
analytic expression of the mode that has $\lambda=-2$ is given in
Appendix B, mode $\mathbf{w}_{3}(x)$). Then, by taking $\boldsymbol{\alpha}_{%
\mathrm{L}}(x)=\mathbf{w}_{3}(x)$, one obtains the squeezing spectra that we
have represented in Fig. 4 for $\Delta_{1}=1.2$ and three values of $\mu$.
Notice that $S_{\mathrm{out}}\left( \omega=0\right) =-1$ for $\mu=1$ (full
line); for $\mu=1.0001$ (dashed line) the maximum squeezing does not reach $%
-1$. Furthermore, as $\mu$ departs from unity the maximum squeezing does not
occur at $\omega=0$, but at a slightly different frequency. For $\mu=1.01$,
the maximum level of squeezing is close to $-0.75$. That is, the squeezing
degrades quickly as the system departs from this bifurcation. A similar
behavior is exhibited at the bifurcation occurring at $\mu=\mu_{0}=\sqrt{%
1+\Delta_{1}^{2}}$ and we shall not enter into quantitative details.
Let us finally consider the Hopf bifurcation. For $\mu =\mu _{\mathrm{HB}}$
there are two eigenmodes, let us denote them as $\mathbf{w}_{\mathrm{HB}+}$
and $\mathbf{w}_{\mathrm{HB}-}$, for which the eigenvalues read $\lambda
_{\pm }=-2\pm i\omega _{\mathrm{HB}}$ ($\omega _{\mathrm{HB}}$ is the
oscillation frequency at the Hopf bifurcation), i.e., there are two
maximally damped eigenmodes, which are expected to exhibit maximum squeezing. But at
the Hopf bifurcation there cannot be a LOF that matches these squeezed
modes: Recall that the LOF vector was defined as $\boldsymbol{\alpha }_{%
\mathrm{L}}=\left( \alpha _{\mathrm{L}},\alpha _{\mathrm{L}}^{\ast }\right)
^{T}$, and at the Hopf bifurcation the eigenvectors with Re $\lambda =-2$, $%
\mathbf{w}_{\mathrm{HB}\pm }=\left( w_{\mathrm{HB}\pm },w_{\mathrm{HB}\pm
}^{+}\right) ^{T}$, do not verify $w_{\mathrm{HB}\pm }^{+}=w_{\mathrm{HB}\pm
}^{\ast }$. This can be appreciated in Fig. 5 where we have represented the
imaginary parts of $w_{\mathrm{HB}+}^{\ast }$ and $w_{\mathrm{HB}+}^{+}$
(also the real parts, not shown, are quite different). This fact makes it
impossible to find a LOF such that $\left( \boldsymbol{\alpha }_{\mathrm{L}}|%
\mathbf{v}_{\mathrm{HB}}\right) =1$. Hence perfect noise reduction is never
achieved at the Hopf bifurcation.
The amount of squeezing attainable at the Hopf bifurcation is represented in
Fig. 6 where the squeezing spectrum has been represented for two choices of
the LOF: $\boldsymbol{\alpha }_{\mathrm{L}}(x)=\mathbf{w}_{\mathrm{HB}+}(x)$%
\ (dashed line) and $\boldsymbol{\alpha }_{\mathrm{L}}(x)=\mathbf{w}_{%
\mathrm{HB}+}(x)+\mathbf{w}_{\mathrm{HB}-}(x)$ (full line). Interestingly,
the maximum squeezing is larger for the latter choice. Notice also that, in
contrast to the other bifurcations and as it is well known, at the Hopf
bifurcation the maximum level of squeezing is reached at a frequency
different from $\omega =0$, which corresponds to the self-pulsing frequency
of the bright cavity soliton.
\subsubsection{Squeezing with a plane-wave local oscillator}
We consider now the particular case of the squeezing properties of the
bright cavity soliton when measured in a homodyning experiment, probed with
a plane-wave LOF, which is of obvious practical interest as this is the
simplest LOF.
In a first stage we deal, as in the preceding subsections, with complete
beam detection, that is, the detector is assumed to completely cover the
transverse extension of the outgoing quantum cavity soliton. Fig. 7 shows
the squeezing spectra obtained for five different values of the pump $\mu $
and $\Delta _{1}=1.2$. The maximum of squeezing is achieved at frequency $%
\omega =0$ irrespective of the pump value (which is obviously limited to $1<$
$\mu <$ $\mu _{0}=\sqrt{1+\Delta _{1}^{2}}$, see Fig. 1). A high degree of
squeezing, sustained for a large range of parameters setting, is obtained.
This is more clearly seen in the inset of Fig. 7, where $S(\omega =0)$ is
depicted as a function of pump for $\Delta _{1}=1.2$.
We have also calculated the influence of the detuning: In Fig. 8 we plot the
same as in Fig. 7 but for fixed $\mu =1.2$ and as a function of detuning $%
\Delta _{1}$. Again large squeezing levels are obtained in the whole domain of
existence of the cavity soliton (for detunings larger than those represented
in Fig. 8, the cavity soliton becomes Hopf unstable as already commented).
We focus now on homodyne detection with finite-size detectors.
When a detector of size $\Sigma $ positioned at $x=x_{0}$ is used,
the squeezing spectrum detected is given by Eq. (\ref{spatialS}),
particularized to the BS solution. In the particular case of a plane-wave
LOF, the calculations are considerably simplified, as $\mathbf{\alpha }_{L}$
is constant. We note, as well, that the phase of the plane-wave LOF is a
free parameter, which is allowed to vary in order to obtain the maximum
level of squeezing.
In Fig. 9, we represent the spatial distribution of squeezing, for $\mu
=\Delta _{1}=1.2$ and at $\omega =0$, when the finite size detector is
displaced across the transverse dimension. We consider three different
values of the detector size $\Sigma =\Delta x/\beta $ normalized to the BS
width $\beta $, see Eq. (\ref{beta}), as indicated in the figure. In the
three cases the phase of the plane-wave LOF (not shown) has been chosen in
order to obtain the maximum squeezing at each position of the detector in
the transverse plane. A clear conclusion can be extracted from these plots:
The smaller the detector size, the smaller the squeezing level one
attains. This is to be expected: the smaller the photodetector, the
larger the influence of high spatial-frequency modes; in other words, a
small photodetector picks up all sorts of nonsqueezed modes, whereas a
large photodetector filters out the high-frequency ones. One
more conclusion that can be extracted from the figure is that the squeezing level
is larger, for all detector sizes, at the center of the soliton ($x=0$),
i.e., the soliton is more squeezed than the vacuum (for large $x$ the
soliton amplitude tends to zero and the squeezing is due to the squeezed
vacuum).
Finally we focus on the influence of the detector size on the degree of
squeezing reached when the detector is placed at the exact center of the
soliton. Fig. 10 shows the maximum squeezing reached under these conditions at
frequency $\omega =0$ when a detector of normalized size $\Sigma $ is
considered. Several features can be appreciated in the figure. As a general
trend, the squeezing level degrades as the detector size is reduced, i.e.,
the maximum level of squeezing is found for a very large detector. But,
interestingly, for $1.5<\Sigma <3$, the squeezing decreases as the
detector size increases. Obviously, for detectors whose size falls in
this region, the detected fluctuations come
from both the BS and the trivial solution (below-threshold emission)
existing far from the soliton. In fact we see that the phase of the local
oscillator that optimizes the squeezing level (represented with a dashed
line, left vertical axis), which is almost insensitive to the detector size
for $\Sigma <1$, changes rapidly in this detector width region. Then, for
large detectors, $\Sigma >10$, the monotonic increase of squeezing with the
detector width is recovered and the optimum phase for the LOF\ is again
almost insensitive to the detector size.
\section{Conclusions}
In this article we have developed a general theory for the analysis of
linearized quantum fluctuations of optical dissipative structures generated
in wide aperture nonlinear optical cavities. Although we have done this
explicitly for the special case of the degenerate optical parametric
oscillator in the large pump detuning limit, our method can be easily
generalized to any other nonlinear cavity. The method consists, in short, in
expanding the fluctuations in the biorthonormal basis formed by the eigenvectors
of the linear deterministic operator of the linearized Langevin equations of
the system. This technique allows, in particular, the identification of a
special mode which is perfectly squeezed (in the linear approximation). The
perfect squeezing occurs irrespective of the nonlinear cavity parameter
values, and the special mode can be identified with the transverse linear
momentum of the dissipative structure that is being emitted. It must be
emphasized that the existence of this squeezed mode is a genuine transverse
effect, i.e., it is associated with the symmetry breaking introduced by the
existence of dissipative structures.
Then we have applied this theory to the study of squeezing of a particular
dissipative structure, namely, the bright cavity soliton. In particular we
have analyzed the squeezing occurring at the different bifurcations that
this dissipative structure can undergo, together with the appropriate
LOF for detecting it in each case. Then we have studied the squeezing
detected when a plane-wave LOF is used and analyzed its dependence on the
parameter values. Finally, we have also considered finite-size detectors. We
have shown that for large detectors the squeezing level is large almost
independently of the system parameters. For finite size detectors, we have
analyzed the spatial distribution of squeezing.
We gratefully acknowledge fruitful discussions with A. Gatti, K. Staliunas
and J. A. de Azc\'{a}rraga. This work has been supported by the Spanish
Ministerio de Educaci\'{o}n y Ciencia and the European Union FEDER through
Projects BFM2002-04369-C04-01, FIS2005-07931-C03-01 and -02 and Programa
Juan de la Cierva.
\section{Appendix A}
In this Appendix the DOPO model used throughout the paper is derived. The
Hamiltonian describing the DOPO in the interaction picture is given by \cite%
{Gatti97}%
\begin{equation}
\hat{H}=\hat{H}_{\mathrm{free}}+\hat{H}_{\mathrm{int}}+\hat{H}_{\mathrm{ext}%
},
\end{equation}%
where%
\begin{equation}
\hat{H}_{\mathrm{free}}=\hbar \sum\limits_{n=0,1}\gamma _{n}\int \mathrm{d}%
^{2}r~\hat{A}_{n}^{\dag }\left( \Delta _{n}-l_{n}^{2}\nabla ^{2}\right) \hat{%
A}_{n},
\end{equation}%
governs the free evolution of the intracavity fields in the paraxial
approximation,%
\begin{equation}
\hat{H}_{\mathrm{int}}=\frac{\hbar g}{2}\int \mathrm{d}^{2}r~i\left[ \hat{%
A}_{0}\left( \hat{A}_{1}^{\dag }\right) ^{2}-\hat{A}_{0}^{\dag }\left( \hat{A%
}_{1}\right) ^{2}\right] ,
\end{equation}%
describes the nonlinear interaction, and%
\begin{equation}
\hat{H}_{\mathrm{ext}}=\hbar \int \mathrm{d}^{2}r~i\left[ \mathcal{E}_{%
\mathrm{int}}\hat{A}_{0}^{\dag }-\mathcal{E}_{\mathrm{int}}^{\ast }\hat{A}%
_{0}\right] ,
\end{equation}%
accounts for the coherent driving. In the above expressions $l_{n}=c/\sqrt{%
2\omega _{n}\gamma _{n}}$ is the diffraction length for the field $\hat{A}%
_{n}$, $\Delta _{0}=\left( \omega _{0}-2\omega _{\mathrm{s}}\right) /\gamma
_{0}$ and $\Delta _{1}=\left( \omega _{1}-\omega _{\mathrm{s}}\right)
/\gamma _{1}$ are the (adimensional) pump and signal detuning parameters,
respectively, $\nabla ^{2}=\partial ^{2}/\partial x^{2}+\partial
^{2}/\partial y^{2}$ is the transverse Laplacian operator, and $g$ is the
(real) nonlinear coupling coefficient, given by
\begin{equation}
g=\frac{3\chi ^{(2)}\omega _{\mathrm{s}}}{2(2\pi n)^{3}}\sqrt{\frac{\hbar
\omega _{\mathrm{s}}}{\varepsilon _{0}L_{z}}}, \label{defg}
\end{equation}%
with $\chi ^{(2)}$ the relevant nonlinear susceptibility matrix element, $n$
the common value of the refractive index of the crystal at pump and signal
wavelengths (a type I DOPO is considered), and $L_{z}$ the thickness of the
crystal along the resonator axis.
From the above Hamiltonian one obtains the master equation governing the
evolution of the density matrix $\hat{\rho}$ of the intracavity modes,%
\begin{equation}
\frac{\partial }{\partial t}\hat{\rho}=\frac{1}{i\hbar }\left[ \hat{H},\hat{%
\rho}\right] +\widehat{\Lambda \rho }, \label{MasterEq}
\end{equation}%
where the Liouvillian term%
\begin{multline}
\widehat{\Lambda \rho }=\sum\limits_{n=0,1}\gamma _{n}\int \mathrm{d}^{2}r%
\left[ 2\hat{A}_{n}(\mathbf{r},t)\hat{\rho}\hat{A}_{n}^{\dag }(\mathbf{r}%
,t)-\right. \\
\left. \hat{\rho}\hat{A}_{n}^{\dag }(\mathbf{r},t)\hat{A}_{n}(\mathbf{r},t)-%
\hat{A}_{n}(\mathbf{r},t)\hat{A}_{n}^{\dag }(\mathbf{r},t)\hat{\rho}\right] ,
\end{multline}%
models the coupling between the system and the external reservoir through
the output mirror.
Passing to the generalized $P$ representation \cite{Drummond80} one can
transform the master equation (\ref{MasterEq}) into an equivalent
Fokker-Planck equation for a quasiprobability density (denoted by $P$),
following standard methods (see, e.g. \cite{Gatti97}), the result being:
\begin{multline}
\frac{\partial }{\partial t}P(\mathbf{A})=\left\{ \int \mathrm{d}^{2}r~%
\Big[\frac{\partial }{\partial \mathcal{A}_{0}}\left( -\gamma _{0}L_{0}%
\mathcal{A}_{0}+\frac{g}{2}\mathcal{A}_{1}^{2}-\mathcal{E}_{\mathrm{in}%
}\right) +\right. \label{Fokker-Planck} \\
\frac{\partial }{\partial \mathcal{A}_{0}^{+}}\left( -\gamma _{0}L_{0}^{\ast
}\mathcal{A}_{0}^{+}+\frac{g}{2}\mathcal{A}_{1}^{+2}-\mathcal{E}_{\mathrm{in}%
}^{\ast }\right) + \\
\frac{\partial }{\partial \mathcal{A}_{1}}\left( -\gamma _{1}L_{1}\mathcal{A}%
_{1}-g\mathcal{A}_{1}^{+}\mathcal{A}_{0}\right) + \\
\frac{\partial }{\partial \mathcal{A}_{1}^{+}}\left( -\gamma _{1}L_{1}^{\ast
}\mathcal{A}_{1}^{+}-g\mathcal{A}_{1}\mathcal{A}_{0}^{+}\right) + \\
\left. \frac{g}{2}\left( \frac{\partial ^{2}}{\partial \mathcal{A}_{1}^{2}}%
\mathcal{A}_{0}+\frac{\partial ^{2}}{\partial \mathcal{A}_{1}^{+2}}\mathcal{A%
}_{0}^{+}\right) \Big]\right\} P(\mathbf{A}).
\end{multline}%
In the above expression $\mathbf{A}=(\mathcal{A}_{0},\mathcal{A}_{0}^{+},%
\mathcal{A}_{1},\mathcal{A}_{1}^{+})$, and%
\begin{equation}
L_{j}=-(1+i\Delta _{j})+il_{j}^{2}\nabla ^{2}. \label{L}
\end{equation}
In turn, a Fokker-Planck equation, here Eq. (\ref{Fokker-Planck}), can
be transformed into an equivalent classical-looking set of stochastic
differential equations (so-called Langevin equations) via Ito rules \cite%
{Gardiner00}. In our case they read
\begin{subequations}
\label{completeLangevin}
\begin{align}
\frac{\partial }{\partial t}\mathcal{A}_{0}& =\gamma _{0}L_{0}\mathcal{A}%
_{0}-\frac{g}{2}\mathcal{A}_{1}^{2}+\mathcal{E}_{\mathrm{in}}, \\
\frac{\partial }{\partial t}\mathcal{A}_{0}^{+}& =\gamma _{0}L_{0}^{\ast }%
\mathcal{A}_{0}^{+}-\frac{g}{2}\mathcal{A}_{1}^{+}{}^{2}+\mathcal{E}_{%
\mathrm{in}}^{\ast }, \\
\frac{\partial }{\partial t}\mathcal{A}_{1}& =\gamma _{1}L_{1}\mathcal{A}%
_{1}+g\mathcal{A}_{1}^{+}\mathcal{A}_{0}+\sqrt{g\mathcal{A}_{0}}\eta (%
\mathbf{r},t), \\
\frac{\partial }{\partial t}\mathcal{A}_{1}^{+}& =\gamma _{1}L_{1}^{\ast }%
\mathcal{A}_{1}^{+}+g\mathcal{A}_{1}\mathcal{A}_{0}^{+}+\sqrt{g\mathcal{A}%
_{0}^{+}}\eta ^{+}(\mathbf{r},t),
\end{align}%
with $\eta (\mathbf{r},t)$ and $\eta ^{+}(\mathbf{r},t)$ independent, real
white Gaussian noises of zero average and correlations given by
\end{subequations}
\begin{subequations}
\label{noise}
\begin{gather}
\left\langle \eta ^{+}(\mathbf{r}^{\prime },t^{\prime }),\eta (\mathbf{r}%
,t)\right\rangle =0, \\
\left\langle \eta ^{+}(\mathbf{r},t),\eta ^{+}(\mathbf{r}^{\prime
},t^{\prime })\right\rangle =\delta (\mathbf{r}-\mathbf{r}^{\prime })\delta
(t-t^{\prime }), \\
\left\langle \eta (\mathbf{r},t),\eta (\mathbf{r}^{\prime },t^{\prime
})\right\rangle =\delta (\mathbf{r}-\mathbf{r}^{\prime })\delta (t-t^{\prime
}).
\end{gather}
Note that, if $\mathcal{A}_{i}^{+}$ is interpreted as $\mathcal{A}_{i}^{\ast
}$ and the noise terms are ignored (classical limit), Eqs. (\ref%
{completeLangevin}) coincide with the classical equations for a planar DOPO
\cite{Oppo94}.
Equations (\ref{completeLangevin}) are already set in a convenient way to
apply the method we develop in this paper. However we shall use a simpler
form of them obtained in the limit of large pump detuning, defined by $%
\left\vert \Delta _{0}\right\vert \gg 1,\gamma _{0}/\gamma _{1},\left\vert
\Delta _{1}\right\vert $, which allows the adiabatic elimination of the pump
field \cite{Longhi97,Trillo97} as
\end{subequations}
\begin{subequations}
\label{AdiabaticSol}
\begin{align}
\mathcal{A}_{0}& =\mathcal{A}_{0,\mathrm{ad}}\equiv \frac{-i}{\gamma
_{0}\Delta _{0}}\left( \mathcal{E}_{\mathrm{in}}-\frac{g}{2}\mathcal{A}%
_{1}^{2}\right) , \\
\mathcal{A}_{0}^{+}& =\mathcal{A}_{0,\mathrm{ad}}^{+}\equiv \frac{i}{\gamma
_{0}\Delta _{0}}\left( \mathcal{E}_{\mathrm{in}}^{\ast }-\frac{g}{2}\mathcal{%
A}_{1}^{+}{}^{2}\right) .
\end{align}%
We note that these expressions correct some typos appearing in the
corresponding expressions in \cite{EPL}. The remaining Langevin equations
for the signal field are obtained by substitution of the limit solution (\ref%
{AdiabaticSol})\ into (\ref{completeLangevin}). By taking, without loss of
generality, the external pump field amplitude as a purely imaginary quantity
whose imaginary part has the same sign as the pump detuning $\Delta _{0}$,
i.e. $\mathcal{E}_{\mathrm{in}}=i\sigma \left\vert \mathcal{E}_{\mathrm{in}%
}\right\vert $ with $\sigma =\func{sign}\Delta _{0}$, they become Eqs. (\ref%
{AdiabaticLangevin}) in Sec. \ref{DOPOmodel}, which is the model we shall
consider.
\section{Appendix B}
In the classical limit ($\mathcal{A}_{1}^{+}\rightarrow\mathcal{A}%
_{1}^{\ast} $, and $\eta,\eta^{+}\rightarrow0$), Eqs. (\ref%
{AdiabaticLangevin}) reduce to
\end{subequations}
\begin{equation}
\frac{\partial}{\partial t}\mathcal{A}_{1}\left( \mathbf{r},t\right)
=\gamma_{1}\left( L_{1}\mathcal{A}_{1}+\mu\mathcal{A}_{1}^{\ast}+i\frac{%
\sigma}{\kappa^{2}}\left\vert \mathcal{A}_{1}\right\vert ^{2}\mathcal{A}%
_{1}\right) , \label{AdiabaticLangevinClassic}
\end{equation}
with $L_{1}=-(1+i\Delta_{1})+il_{1}^{2}\nabla^{2}$. This is a version of the so-called
parametrically driven nonlinear Schr\"{o}dinger equation (PDNLSE),
a universal model for pattern formation in parametrically driven
systems (see, e.g., \cite{deValcarcel02} for a list of systems described by
the PDNLSE). A convenient normalization of that equation is obtained by
introducing the following dimensionless quantities: time $T=\gamma_{1}t$,
spatial coordinates $\left( X,Y\right) =\left( x/l_{1},y/l_{1}\right) $, and
field $\psi=\mathcal{A}_{1}/\kappa$, so that Eq. (\ref%
{AdiabaticLangevinClassic}) becomes%
\begin{equation}
\partial_{T}\psi=\mu\psi^{\ast}-(1+i\Delta_{1})\psi+i\left(
\partial_{X}^{2}+\partial_{Y}^{2}\right) \psi+i\sigma\left\vert
\psi\right\vert ^{2}\psi. \label{PDNLSE}
\end{equation}
For the bright cavity soliton, Eq. (\ref{BS}), operators $\mathcal{L}$ and$~%
\mathcal{L}^{\dag}$ take the form
\begin{subequations}
\begin{equation}
\mathcal{L}\mathcal{=}\left(
\begin{array}{cc}
\mathcal{L}_{1} & \mathcal{\bar{A}}_{0} \\
\mathcal{\bar{A}}_{0}^{\ast} & \mathcal{L}_{1}^{\ast}%
\end{array}
\right) ,\ \ \mathcal{L}^{\dag}\mathcal{=}\left(
\begin{array}{cc}
\mathcal{L}_{1}^{\ast} & \mathcal{\bar{A}}_{0} \\
\mathcal{\bar{A}}_{0}^{\ast} & \mathcal{L}_{1}%
\end{array}
\right) ,
\end{equation}
where
\end{subequations}
\begin{subequations}
\begin{align}
\mathcal{L}_{1} & =-(1+i\Delta_{1})+i\nabla^{2}+i\left( \frac{2\beta }{\kappa%
}\right) ^{2}\func{sech}^{2}(\beta x), \\
\mathcal{\bar{A}}_{0} & =\mu+i\left( \frac{2\beta}{\kappa}\right)
^{2}~e^{2i\phi}\func{sech}^{2}(\beta x)
\end{align}
and
\end{subequations}
\begin{subequations}
\begin{align}
\beta^{2} & =\Delta_{1}\pm\sqrt{\mu^{2}-1}, \\
\ \cos(2\phi) & =\mu^{-1}.
\end{align}
Then, for $\mu=1$ one can calculate the discrete eigenvalues analytically.
The result is that the discrete eigenvectors of $\mathcal{L}$ and$~\mathcal{L%
}^{\dag}$ with null eigenvalue are
\end{subequations}
\begin{subequations}
\begin{align}
\mathbf{v}_{1} & =-\mathcal{ST}\left(
\begin{array}{c}
e^{i\phi} \\
e^{-i\phi}%
\end{array}
\right) ,\ \ \ \\
\mathbf{w}_{1} & =-\mathcal{S}\left(
\begin{array}{c}
\left( x+i\mathcal{T}\right) e^{i\phi} \\
\left( x-i\mathcal{T}\right) e^{-i\phi}%
\end{array}
\right) , \\
\mathbf{v}_{4} & =i\beta^{-1/2}\mathcal{S}\left(
\begin{array}{c}
\left[ \beta^{2}+i\left( x\mathcal{T}-1\right) \right] e^{i\phi} \\
-\left[ \beta^{2}-i\left( x\mathcal{T}-1\right) \right] e^{-i\phi}%
\end{array}
\right) , \\
\mathbf{w}_{4} & =\beta^{1/2}\mathcal{S}\left(
\begin{array}{c}
e^{i\phi} \\
e^{-i\phi}%
\end{array}
\right) ,
\end{align}
and that the discrete eigenvectors of $\mathcal{L}$ and$~\mathcal{L}^{\dag}$
with eigenvalue $\lambda=-2$ are
\end{subequations}
\begin{subequations}
\begin{align}
\mathbf{v}_{2} & =-i\beta^{1/2}\ \mathcal{S}\left(
\begin{array}{c}
\left( x+i\mathcal{T}\right) e^{i\phi} \\
-\left( x-i\mathcal{T}\right) e^{-i\phi}%
\end{array}
\right) , \\
\mathbf{w}_{2} & =-i\beta^{-1/2}\mathcal{ST}\left(
\begin{array}{c}
e^{i\phi} \\
-e^{-i\phi}%
\end{array}
\right) , \\
\mathbf{v}_{3} & =i\beta\mathcal{S}\left(
\begin{array}{c}
e^{i\phi} \\
-e^{-i\phi}%
\end{array}
\right) , \\
\mathbf{w}_{3} & =-\beta^{-1}\mathcal{S}\left(
\begin{array}{c}
\left[ \beta^{2}+i\left( x\mathcal{T}-1\right) \right] e^{i\phi} \\
\left[ \beta^{2}-i\left( x\mathcal{T}-1\right) \right] e^{-i\phi}%
\end{array}
\right) .
\end{align}
In the above expressions $\mathbf{v}_{1}$ is the Goldstone mode, Eq. (\ref%
{goldstone}), and we have introduced the quantities
\end{subequations}
\begin{equation}
\mathcal{S}=\sqrt{\frac{\beta}{2}}\func{sech}(\beta x),\ \ \mathcal{T}%
=\beta\tanh(\beta x).
\end{equation}
Let us finally note that for $\mu=1$, it turns out that $%
\beta^{2}=\Delta_{1}$ and $\cos\left( 2\phi\right) =1$.
Real world industrial or environmental problems, {\it e.g.}, management of industrial risks, typically involve physical and chemical phenomena having a multitude of dynamically active spatial and temporal scales.
Their direct numerical modelling thus leads to prohibitive computational cost.
Introducing adaptivity can be understood in the sense that the computational effort is
concentrated at locations and time instants where it is necessary to ensure a given numerical accuracy,
while efforts may be significantly reduced elsewhere.
Adaptive methods are in many cases more competitive than schemes on regular fine grids, in
particular for solutions of nonlinear PDEs exhibiting a non-uniformly distributed regularity of the solution.
Reliable error estimators of the solution are essential ingredients of fully adaptive schemes.
They are based for example on Richardson ideas of extrapolation, adjoint problems or gradient based approaches.
For evolutionary problems, a major task is the time evolution of the grid and
its reliable prediction for the next time step. However, to become efficient, adaptive methods require a
significant effort on implementing data structures, which are typically based on graded trees,
hash-tables or multi-domains.
Moreover, the computational cost per cell is significantly increased with respect to uniform discretizations. Hence, an adaptive method is only efficient when the
data compression is large enough to compensate the additional computational cost per cell. Fortunately, for problems
exhibiting local discontinuities or steep gradients, adaptive computations are faster than fine grid computations.
Adaptive discretization methods for solving nonlinear PDEs have a long tradition and can be tracked back to the late
seventies \cite{Bra77}. Adaptive finite element methods have a long history, in particular for elliptic problems.
For chemically reacting flows with detailed chemical reactions in three dimensions, stabilized finite elements with adaptive mesh refinement were proposed in \cite{BrRi06,BrRi07}. The equations are treated fully coupled with a Newton solver, which requires the solution of large linear non-symmetric, indefinite systems, for which a parallel multigrid solver is used.
Moving grid techniques have been applied successfully to combustion problems \cite{HLP03}. A posteriori error estimators have also been studied for a long time to improve the grid, since the early work of Babuska and Rheinboldt \cite{BR78}.
However, adjoint problems have to be solved which are linear although the original PDE can be nonlinear \cite{BR01}.
Fully adaptive finite element discretizations of reaction-diffusion problems encountered in electrocardiology have been proposed in \cite{Lang95,FDELP06}. For time adaptivity a stepsize control with linearly implicit time integrators is used. In space a multilevel finite element method is combined with a posteriori local error estimators.
The main challenge is to estimate and control the error of adaptive schemes with respect to the exact solution, or at least with respect to the
same numerical scheme on an underlying uniform grid. Self adaptive methods are preferred as they automatically adjust to the
solution. The block-structured adaptive mesh refinement technique (AMR or SAMR) for hyperbolic partial differential equations
has been pioneered by Berger and Oliger~\cite{BO84}. While the first approach utilized rotated refinement
grids that required complicated conservative interpolation operations, AMR denotes today especially the simplified variant of
Berger and Colella \cite{BC88} that allows only refinement patches aligned to the coarse grid mesh. The striking efficiency
of this algorithm, in particular for 3D unsteady supersonic gas dynamics problems, has been demonstrated by Berger
{\it et al.} in \cite{BBS94}.
Recently, multiresolution (MR) techniques have become popular for hyperbolic conservation laws, going back to the seminal work
of Harten \cite{Har95} in the context of finite volume schemes and cell-average MR analysis. Starting point is a finite volume scheme
for hyperbolic conservation laws on a regular grid. Subsequently a discrete multiresolution analysis is used to avoid
expensive flux computations in smooth regions, first without reducing memory requirements, {\it e.g.} for 1D hyperbolic
conservation laws (Harten \cite{Har95}), 1D conservation laws with viscosity (Bihari \cite{Bih97}), 2D hyperbolic conservation laws (Bihari and Harten \cite{BH96}), 2D compressible Euler equations (Chiavassa and Donat \cite{CD01}), 2D hyperbolic conservation laws with curvilinear
patches (Dahmen {\it et al} \cite{DGM01}) and unstructured meshes (Abgrall and Harten \cite{AH98}, Cohen {\it et al} \cite{CDK00}). A fully adaptive version,
still in the context of 1D and 2D hyperbolic conservation laws, has been developed to reduce also memory requirements
(Gottschlich-M\"uller and M\"uller \cite{GM99}, Kaibara and Gomes \cite{KG00}, Cohen {\it et al} \cite{CKM03}). This algorithm has been extended
to the 3D case and to parabolic PDEs (Roussel {\it et al} \cite{RST03}, Roussel and Schneider \cite{RS05}), and more recently to self-adaptive global and local time-steppings (M\"uller and Stiriba \cite{MS07}, Domingues {\it et al} \cite{DGR08,DGR09,DRS09}). In this way the solution is represented and computed on a dynamically evolving, automatically adapted space-time grid. Different strategies have been proposed to evaluate the flux without requiring a full knowledge of the fine grid cell-average values.
Applications to shock waves in compressible flows addressing the issue of shock resolution have been presented in \cite{BRT06}, and extensions to the Navier--Stokes equations in the weakly compressible regime can be found in \cite{RS10}.
Adaptive MR methods with operator splitting have been proposed for multiscale reaction fronts
with stiff source terms in \cite{DMDTDLL12,Duar11}.
The numerical analysis of the above higher-order operator splitting techniques has been performed in \cite{DDDLM11,DMD11}, and it was shown that the splitting time step can be even larger than the time scales involved in the PDEs.
Adaptive MR computations using the above method with complex chemistry and including detailed transport can be found in \cite{DDDLLM14}; further applications involving various stiffness levels have been presented in \cite{DDTCM13,DBMBDD12}.
The MR approach has also been used in other contexts. For instance, the Sparse Point Representation (SPR) method was the first
fully adaptive MR scheme, introduced by Holmstr\"om \cite{Hol99} in the context of finite differences and point-value MR analysis,
leading to both CPU time and memory reduction. In the SPR method, the wavelet coefficients are used as regularity indicators
to create locally refined grids, on which the numerical solution is represented and the finite difference discretization of
the operators is performed. Applications of the SPR method have been published in \cite{DGD03,PDF07}.
Discontinuous Galerkin methods have been applied to hyperbolic conservation laws in \cite{CDG05}, using Haar wavelet indicators to decide where to refine or coarsen the meshes.
These publications reveal that the multiresolution concept has been applied by several groups
with success to different stiff problems. For comprehensive literature about the subject, we refer to the books of Cohen~\cite{Coh00} and M\"uller~\cite{Mue03}.
The objective of the paper is the extension of the adaptive multiresolution method~\cite{RST03,DGR09} to the numerical simulation of detonation waves.
As a model we use here a one-step chemical reaction involving two chemical species only.
Since the chemical source term is stiff, a time splitting has to be made between the convective and source terms, and each term needs to be computed with a different time step and a different time integration method. An error-controlled time step based on a Runge--Kutta--Fehlberg approach is used
for the source term integration.
The applications concern Chapman-Jouguet detonations in one dimension with different stiffness values, and instabilities of detonation waves due to an interaction with a pocket of partially burnt gases in two dimensions.
{These detonation problems motivate the use of the Euler equations, although extensions of the method to compute viscous flows using the Navier--Stokes equations is straightforward, as proposed in \cite{RS10}.}
The outline of the paper is the following: first, we present the set of reactive Euler equations for a simplified detonation
model. Then we describe the finite volume discretization. A Strang splitting technique is utilized to account for
temporal scales in the source term that do not influence the hydrodynamics.
We also briefly summarize the multiresolution strategy.
Finally, we show the numerical results for the test problems in one and two space dimensions and we set the conclusions together with some perspectives for future work.
\section{Governing equations}
For modelling the combustion process, we use the reactive Euler equations, as described in \cite{CF85,GR95}.
The simplest description of a chemically reacting gas flow assumes that the gas mixture is made only of two chemical species, the burnt
gas, denoted with subscript $b$ and the unburnt gas, denoted with subscript $u$. The unburnt gas is converted to burnt gas
via a single irreversible reaction. We represent the mixture state by a single scalar variable $Z$ corresponding to the mass
fraction of the unburnt gas. We also assume gases in the mixture to be ideal polytropic gases with equal specific heat ratio
$\gamma$ and specific gas constant $r$.
The system of equations in two dimensions, { which has been non-dimensionalized in a suitable way,} may be written as
\begin{equation} \label{eqn:pde}
\frac{\partial Q}{\partial t} + \frac{\partial F}{\partial x} + \frac{\partial G}{\partial y} = S,
\end{equation}
\bigskip \noindent where $Q = (\rho, \rho v_x, \rho v_y, \rho e, \rho Z)^T$ and
\begin{equation}
F = \left(
\begin{array}{c}
\rho v_x \\ \rho v_x^2+p \\ \rho v_x v_y \\ (\rho e+p)v_x \\ \rho v_x Z
\end{array}
\right) \, , \,
G = \left(
\begin{array}{c}
\rho v_y \\ \rho v_x v_y \\ \rho v_y^2+p \\ (\rho e+p)v_y \\ \rho v_y Z
\end{array}
\right) \, , \,
S = \left(
\begin{array}{c}
0 \\ 0 \\ 0 \\ 0 \\ -k(T) \rho Z
\end{array}
\right)
\end{equation}
\bigskip \noindent Here $\rho$ denotes the mixture density, $V = (v_x,v_y)^T$ the mixture velocity, $e$ the mixture total
energy per unit of mass, $p$ the pressure, $T$ the temperature and $k$ the chemical reaction rate. The two equations of state completing the model are
\begin{equation}
p = \rho r T
\end{equation}
\bigskip \noindent and
\begin{equation}
e = \frac{p}{\rho(\gamma -1)} + \frac{V^2}{2} + Q_0 Z
\end{equation}
\bigskip \noindent where $Q_0$ denotes the amount of heat per unit of mass released in the chemical reaction.
The reaction rate $k(T)$ of the irreversible chemical reaction is expressed in Arrhenius form as
\begin{equation}
k(T) = A \exp \left( -\frac{T_A}{T} \right)
\end{equation}
\bigskip \noindent where the pre-exponential coefficient $A$ and the activation temperature $T_A$ are empirical constants.
When the reaction source term is stiff, however, the reaction rate may be simplified by adopting the so-called ignition
temperature kinetic model, {\it i.e.}
\begin{equation}
k(T) = \left\{
\begin{array}{lll}
\frac{1}{\tau} & \mbox{ if } & T \geq T_i \\
0 & \mbox{ if } & T < T_i
\end{array}
\right.
\end{equation}
\bigskip \noindent where $T_i$ denotes the ignition temperature and $\tau$ the characteristic time of the chemical reaction,
which determines the stiffness of the problem.
{ This formulation has been chosen in the applications presented in the numerical results section.
However, the numerical method is not limited to this simplified model and its extension to the Arrhenius law or even more complex chemical reactions is possible.}
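As an illustration, the ignition-temperature kinetics and the corresponding
chemical source term can be coded in a few lines (Python sketch; the
function names are ours):
\begin{verbatim}
import numpy as np

def reaction_rate(T, T_i, tau):
    # k(T) = 1/tau if T >= T_i, and 0 otherwise
    return np.where(T >= T_i, 1.0 / tau, 0.0)

def chemical_source(rho, Z, T, T_i, tau):
    # source term of the partial-density equation: -k(T)*rho*Z
    return -reaction_rate(T, T_i, tau) * rho * Z
\end{verbatim}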
\section{Numerical method}
{ In this section, we first describe the classical Strang splitting and the space discretization of the convective terms.
Subsequently, the time integration is discussed: first, for the convective terms, a time step satisfying the CFL condition
is chosen; then, for the source term, we split the convective time step into error-controlled substeps using an explicit Dormand-Prince
method. A different choice has been proposed by Duarte {\it et al.} in \cite{DMDTDLL12}, based on implicit and explicit Runge-Kutta methods
and a posteriori error estimators. We also briefly recall the multiresolution method, previously published in \cite{RST03,DGR08}.}
\subsection{Strang splitting}
We denote by $C(Q)$ the operator of the convective terms. Equation (\ref{eqn:pde})
becomes
\begin{equation}
\frac{\partial Q}{\partial t} = C(Q) + S(Q).
\end{equation}
\bigskip \noindent Discretizing explicitly with first order in time, we get
\begin{equation}
Q^{n+1} = Q^{n} + \Delta t \left[ C(Q^n) + S(Q^n) \right]
\end{equation}
\bigskip \noindent where $n$ denotes the time instant and $\Delta t$ the convective time step.
The splitting relies on the separation of the convective and source term operators. To get a first-order
discretization, it reads
\begin{eqnarray}
Q^{\star} & = & Q^{n} + \Delta t \; S(Q^n) \\
Q^{n+1} & = & Q^{\star} + \Delta t \; C(Q^{\star})
\end{eqnarray}
\bigskip \noindent {\it i.e.}
\begin{equation}
Q^{n+1} = C^{o1}_{\Delta t} \; S^{o1}_{\Delta t} \; Q^n
\end{equation}
\bigskip \noindent where $S^{o1}_{\Delta t}$ denotes the first-order accurate source term operator with time
step $\Delta t$ and $C^{o1}_{\Delta t}$ the first-order accurate convection operator with time step $\Delta t$.
\bigskip \noindent To get second-order accuracy, which corresponds to the classical Strang splitting, the following procedure can be applied
\begin{equation}
Q^{n+1} = S^{o2}_{\Delta t/2} \; C^{o2}_{\Delta t} \; S^{o2}_{\Delta t/2} \; Q^n
\end{equation}
\bigskip \noindent where $S^{o2}_{\Delta t/2}$ denotes the second-order accurate source term operator with time
step $\Delta t/2$ and $C^{o2}_{\Delta t}$ the second-order accurate convection operator with time step $\Delta t$.
{ We note that the splitting time step in the above method is fixed. Techniques to introduce adaptive splitting time steps
based on a posteriori error estimators have been introduced in \cite{DDDLM11,Duar11} allowing automatic error control.}
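In pseudocode form, one splitting step reads as follows (Python sketch; the
two sub-integrators are assumed to be provided and second-order accurate):
\begin{verbatim}
def strang_step(Q, dt, source_step, convection_step):
    # Q^{n+1} = S_{dt/2} C_{dt} S_{dt/2} Q^n
    Q = source_step(Q, 0.5 * dt)    # S, half step
    Q = convection_step(Q, dt)      # C, full step
    Q = source_step(Q, 0.5 * dt)    # S, half step
    return Q
\end{verbatim}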
\subsection{Space discretization of the convective terms}
Discretization in space is made using a finite volume method, to ensure conservative flux computations.
Convective terms are discretized using the AUSM+ scheme \cite{Lio96}. In this procedure,
pressure terms are computed separately. The Euler flux $F$ writes
\begin{equation}
F = \left(
\begin{array}{c}
\rho v_x \\ \rho v_x^2+p \\ \rho v_x v_y \\ (\rho e+p)v_x \\ \rho v_x Z
\end{array}
\right) = M \; c \; \left(
\begin{array}{c}
\rho \\ \rho v_x \\ \rho v_y \\ \rho h \\ \rho Z
\end{array}
\right) + \left(
\begin{array}{c}
0 \\ p \\ 0 \\ 0 \\ 0
\end{array}
\right)
\end{equation}
\bigskip \noindent where $M$ denotes the Mach number, $c$ the speed of sound and $h$ the enthalpy per unit of mass.
Denoting by $\Phi$ the purely convective
term and by $\Pi$ the pressure term, we discretize the Euler flux $F$ in space following
\begin{equation}
F_{i+\frac{1}{2}} = M_{i+\frac{1}{2}} \; c_{i+\frac{1}{2}} \; \Phi_{i+\frac{1}{2}} + \Pi_{i+\frac{1}{2}}
\end{equation}
where integer indices refer to cell values and half-integer indices to interface values.
\bigskip \noindent The interface speed of sound is $c_{i+\frac{1}{2}} = \sqrt{c_{i} c_{i+1}}$ and the interface convective
term is
\begin{equation}
\Phi_{i+\frac{1}{2}} = \left\{
\begin{array}{ll}
\Phi_i & \mbox{ if } M_{i+\frac{1}{2}} \geq 0 \\
\Phi_{i+1} & \mbox{ otherwise}
\end{array}
\right.
\end{equation}
\bigskip \noindent The terms $M_{i+\frac{1}{2}}$ and $p_{i+\frac{1}{2}}$ follow
\begin{eqnarray}
M_{i+\frac{1}{2}} & = & M^+_i + M^-_{i+1} \\
p_{i+\frac{1}{2}} & = & P^+_i \; p_i + P^-_{i+1} \; p_{i+1}
\end{eqnarray}
\bigskip \noindent where
\begin{equation}
M_i^\pm = \left\{
\begin{array}{ll}
\frac{1}{2} \left( M_i \pm |M_i| \right) & \mbox{ if } |M_i| \geq 1 \\
\pm \frac{1}{2} \left(M_i \pm 1 \right)^2 \pm \frac{1}{8} \left( M_i^2 - 1 \right)^2 & \mbox{ otherwise}
\end{array}
\right.
\end{equation}
\bigskip \noindent and
\begin{equation}
P_i^\pm = \left\{
\begin{array}{ll}
\frac{1}{2} \left( 1 \pm \mbox{sign}(M_i) \right) & \mbox{ if } |M_i| \geq 1 \\
\frac{1}{4} \left(M_i \pm 1 \right)^2\left( 2 \mp M_i \right)
\pm \frac{3}{16} M_i \left( M_i^2 - 1 \right)^2 & \mbox{ otherwise}
\end{array}
\right.
\end{equation}
\bigskip
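The splitting functions above translate directly into code; the following
vectorized Python sketch (our own transcription, with $s=\pm 1$ selecting
the $\pm$ branch) evaluates $M^{\pm}$, $P^{\pm}$ and the interface
quantities:
\begin{verbatim}
import numpy as np

def mach_split(M, s):
    # M^+ for s=+1, M^- for s=-1
    sup = 0.5 * (M + s * np.abs(M))
    sub = s * 0.25 * (M + s)**2 + s * 0.125 * (M**2 - 1.0)**2
    return np.where(np.abs(M) >= 1.0, sup, sub)

def pressure_split(M, s):
    # P^+ for s=+1, P^- for s=-1
    sup = 0.5 * (1.0 + s * np.sign(M))
    sub = (0.25 * (M + s)**2 * (2.0 - s * M)
           + s * 0.1875 * M * (M**2 - 1.0)**2)
    return np.where(np.abs(M) >= 1.0, sup, sub)

# interface values from the left (i) and right (i+1) states:
# M_half = mach_split(M_i, +1) + mach_split(M_ip1, -1)
# p_half = pressure_split(M_i, +1)*p_i + pressure_split(M_ip1, -1)*p_ip1
\end{verbatim}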
Third-order accuracy {of the spatial discretization} far from discontinuities is obtained using a MUSCL interpolation \cite{Van79}, together with a
Koren slope limiter \cite{Kor93}. Denoting by $q$ one of the conservative quantities ({\it i.e.} density, momentum, energy,
partial mass of the unburnt gas), the corrected value via a MUSCL interpolation is
\begin{equation}
q'_{i} = q_i + \frac{1}{6}\phi \left( r_i \right) \left(q_i-q_{i-1} \right) + \frac{1}{3} \phi \left( \frac{1}{r_i} \right)
\left(q_{i+1} - q_{i} \right)
\end{equation}
\bigskip \noindent and
\begin{equation}
q'_{i+1} = q_{i+1} - \frac{1}{3}\phi \left( r_{i+1} \right) \left(q_{i+1}-q_i \right) - \frac{1}{6} \phi \left(
\frac{1}{r_{i+1}} \right) \left(q_{i+2} - q_{i+1} \right)
\end{equation}
\bigskip \noindent where
\begin{equation}
r_i =\frac{q_{i+1}-q_i}{q_i-q_{i-1}}
\end{equation}
\bigskip \noindent and
\begin{equation}
\phi(r) = \max \left[ 0, \min \left( 2r, \frac{1+2r}{3},2\right)\right]
\end{equation}
\bigskip \noindent Nevertheless, since we use a second-order accurate Strang splitting {in time}, the global accuracy of the scheme remains second-order.
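A compact Python sketch of the limiter and of the MUSCL-corrected value
$q'_{i}$ reads (the small regularization \texttt{eps}, which avoids
division by zero, is ours):
\begin{verbatim}
import numpy as np

def koren_phi(r):
    # phi(r) = max(0, min(2r, (1+2r)/3, 2))
    return np.maximum(0.0,
        np.minimum(np.minimum(2.0 * r, (1.0 + 2.0 * r) / 3.0), 2.0))

def muscl_correct(q_im1, q_i, q_ip1, eps=1.0e-12):
    # corrected value q'_i at the interface i+1/2
    r = (q_ip1 - q_i) / (q_i - q_im1 + eps)
    return (q_i + koren_phi(r) * (q_i - q_im1) / 6.0
                + koren_phi(1.0 / (r + eps)) * (q_ip1 - q_i) / 3.0)
\end{verbatim}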
\subsection{Time integration of the convective terms}
Time integration of the convective term is made using a classical third-order TVD Runge-Kutta scheme, i.e.
\begin{eqnarray}
Q^{\star} & = & Q^n + \Delta t \; C\left( Q^n \right) \\
Q^{\star\star} & = & \frac{1}{4} \left[ 3 \; Q^n + Q^\star + \Delta t \; C \left( Q^\star \right) \right] \\
Q^{n+1} & = & \frac{1}{3} \left[ Q^n + 2 \; Q^{\star\star} + 2 \; \Delta t \; C \left( Q^{\star\star} \right) \right]
\end{eqnarray}
\bigskip \noindent The corresponding Runge-Kutta tableau is given in Table \ref{table:RK3}.
\begin{table}[htbp]
\begin{center}
\begin{tabular}{c|ccc}
$0$ & ~ & ~ & ~ \\
& & & \\
$1$ & $1$ & ~ & ~ \\
& & & \\
$\frac{1}{2}$ & $\frac{1}{4}$ & $\frac{1}{4}$ & $0$ \\
& & & \\
\hline
& & & \\
~ & $\frac{1}{6}$ & $\frac{1}{6}$ & $\frac{2}{3}$
\end{tabular}
\caption{Butcher tableau corresponding to the compact TVD third-order Runge-Kutta method.}
\label{table:RK3}
\end{center}
\end{table}
\bigskip The convective time step $\Delta t$ is chosen to satisfy the Courant-Friedrichs-Lewy (CFL) condition. In the
computations, $CFL = 0.5$ is used.
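One convective time step thus takes the following form (Python sketch,
where \texttt{C} denotes the discretized convective operator):
\begin{verbatim}
def tvd_rk3_step(Q, dt, C):
    # third-order TVD Runge-Kutta step for dQ/dt = C(Q)
    Q1 = Q + dt * C(Q)
    Q2 = 0.25 * (3.0 * Q + Q1 + dt * C(Q1))
    return (Q + 2.0 * Q2 + 2.0 * dt * C(Q2)) / 3.0
\end{verbatim}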
\subsection{Time integration of the stiff source terms}
Due to the stiffness of the chemical source terms in the case of detonation computations, a high-order time integration is chosen to ensure numerical stability.
As the chemical time scale is much smaller than the convective one,
we apply a much smaller time step $\Delta t_d$ than for the convective terms.
However, this is only required in a small area of the computational domain, {\it i.e.} in the
reaction zone. In the rest of the computational domain, the source term is almost equal to zero.
For this reason, we introduce an adaptive local time step $\Delta t_d = \frac{\Delta t}{2N}$, $N$ being the number of substeps
required by the source term computation. In order to adapt this local time step in time, a fourth-fifth order embedded
Runge-Kutta method is used. The computational error between the fourth-order and the fifth-order method makes it possible to decide
whether the local time step needs to be increased or decreased, as proposed by Fehlberg \cite{Feh64}.
\begin{eqnarray}
Q^{n+1}_{o4} & = & Q^n + \Delta t_d \; \sum_{i=1}^{s} b_{i,o4} k_i \\
Q^{n+1}_{o5} & = & Q^n + \Delta t_d \; \sum_{i=1}^{s} b_{i,o5} k_i
\end{eqnarray}
\bigskip \noindent where
\begin{equation}
k_i = S \left( t^n + c_i \Delta t_d, \; Q^n + \Delta t_d \; \sum_{j=1}^{s} a_{i,j} k_j \right)
\end{equation}
\bigskip \noindent In this article, an embedded explicit Dormand-Prince method~\cite{DP80} was chosen. The coefficients are
computed in order to minimize the error of the fifth-order solution. This is the main difference with the Fehlberg method,
which was constructed so that the fourth-order solution has a small error. For this reason, the Dormand-Prince method is
more suitable when the higher-order solution is used to continue the time integration. The coefficients are given in
Table \ref{table:DP}.
\begin{table}[htbp]
\begin{center}
\begin{tabular}{c|ccccccc}
$0$ & & & & & & & \\
& & & & & & & \\
$\frac{1}{5}$ & $\frac{1}{5}$ & & & & & & \\
& & & & & & & \\
$\frac{3}{10}$ & $\frac{3}{40}$ & $\frac{9}{40}$ & & & & & \\
& & & & & & & \\
$\frac{4}{5}$ & $\frac{44}{45}$ & $-\frac{56}{15}$ & $\frac{32}{9}$ & & & & \\
& & & & & & & \\
$\frac{8}{9}$ & $\frac{19372}{6561}$ & $-\frac{25360}{2187}$ & $\frac{64448}{6561}$ & $-\frac{212}{729}$ & & & \\
& & & & & & & \\
$1$ & $\frac{9017}{3168}$ & $-\frac{355}{33}$ & $\frac{46732}{5247}$ & $\frac{49}{176}$ & $-\frac{5103}{18656}$ & & \\
& & & & & & & \\
$1$ & $\frac{35}{384}$ & $0$ & $\frac{500}{1113}$ & $\frac{125}{192}$ & $-\frac{2187}{6784}$ & $\frac{11}{84}$ & \\
& & & & & & & \\
\hline
& & & & & & & \\
& $\frac{5179}{57600}$ & $0$ & $\frac{7571}{16695}$ & $\frac{393}{640}$ & $-\frac{92097}{339200}$ & $\frac{187}{2100}$
& $\frac{1}{40}$ \\
& & & & & & & \\
& $\frac{35}{384}$ & $0$ & $\frac{500}{1113}$ & $\frac{125}{192}$ & $-\frac{2187}{6784}$ & $\frac{11}{84}$ & $0$
\end{tabular}
\caption{Butcher tableau corresponding to the Dormand-Prince method. The first row of $b$ coefficients gives the fourth-order accurate solution, and the second row yields order five.}
\label{table:DP}
\end{center}
\end{table}
The local relative error of the quantity $q$ is denoted by $e$ and the accepted tolerance by $\varepsilon$. The optimal
time step $\sigma \; \Delta t_d$ is determined by multiplying the current time step $\Delta t_d$ by the scalar $\sigma$,
which is given by
\begin{equation}
\sigma = \left( \frac{\varepsilon \; \Delta t_d}{2 \; e \; \Delta t_{max}} \right) ^{\frac{1}{4}} =
\left( \frac{\varepsilon \; \Delta t_d}{ e \; \Delta t} \right) ^{\frac{1}{4}}
\end{equation}
In practice, however, the new time step $\Delta t'_d$ is determined as a function of $\sigma$, as follows:
\begin{equation}
\Delta t'_d = \left\{
\begin{array}{ll}
2 \; \Delta t_d & \mbox{ if } \sigma > 2 \\
\frac{\Delta t_d}{2} & \mbox{ if } \sigma < 1 \\
\Delta t_d & \mbox{ otherwise}
\end{array}
\right.
\end{equation}
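This step-size control can be summarized by the following Python sketch
(function names are ours):
\begin{verbatim}
def step_factor(eps, e, dt_d, dt):
    # sigma = (eps * dt_d / (e * dt))**(1/4)
    return (eps * dt_d / (e * dt))**0.25

def next_substep(dt_d, sigma):
    # double, halve or keep the local time step
    if sigma > 2.0:
        return 2.0 * dt_d
    if sigma < 1.0:
        return 0.5 * dt_d
    return dt_d
\end{verbatim}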
\subsection{Multiresolution method}
The principle in the multiresolution (MR) setting is to represent a set of function
cell averages as values on a coarser grid plus a series of differences
at different levels of nested grids.
The differences contain the information
of the function when going from a coarse to a finer grid.
A tree data structure is an efficient way to store the reduced MR data, as it allows one to
reduce the memory requirements with respect to a finite volume (FV) scheme on the finest level.
This representation can also speed up the time evolution because it reduces the time spent searching for elements.
In the following we consider a hierarchy of regular grids in 2D, $\Omega_{\ell}$, $0\leq\ell\leq L$.
The root cell is $\Omega_{0,0,0}=\Omega$ and corresponds to a rectangle
with side lengths $h_{x}$ and $h_{y}$.
The different node cells at a level $\ell>0$ forming $\Omega_{\ell}$ are denoted by
$\Omega_{\ell,i,j}$ where $(i,j)\in\Lambda_{\ell}$.
The ensemble of indices of the existing node cells on the level $\ell$ is $\Lambda_{\ell}$.
Note that $\Omega_{\ell,i,j}$ are rectangles with side lengths
$h_{x,\ell}=2^{-\ell}h_{x}$ and $h_{y,\ell}=2^{-\ell}h_{y}$.
In the tree terminology, the refinement of a parent node cell $\Omega_{\ell,i,j}$ at level
$\ell$ produces four children nodes $\Omega_{\ell+1,2i,2j}$,
$\Omega_{\ell+1,2i,2j+1}$, $\Omega_{\ell+1,2i+1,2j}$ and $\Omega_{\ell+1,2i+1,2j+1}$
at level $\ell+1$, as illustrated in Figure \ref{cap:Dyadic-refinement}.
\begin{figure}[htbp]
\begin{center}
\includegraphics[scale=0.4]{dyadic.png}
\end{center}
\caption{\label{cap:Dyadic-refinement}Dyadic grid refinement in 2D.}
\end{figure}
The cell-average value of the quantity $u$ on the cell $\Omega_{\ell,i,j}$ is given by
$\bar{u}_{\ell,i,j}=\frac{1}{|\Omega_{\ell,i,j}|}\int_{\Omega_{\ell,i,j}}u(x,y) \; dx \; dy$.
Correspondingly, we denote the ensemble of the existing cell-average values at level $\ell$ by
$\bar{U}_{\ell}=(\bar{u}_{\ell,i,j})_{(i,j)\in\Lambda_{\ell}}$.
The projection { (or restriction)} operator
\[
P_{\ell+1\rightarrow \ell}\,:\, \bar{U}_{\ell+1}\,\mapsto\,\bar{U}_{\ell}.
\]
estimates the cell-averages of a level $\ell$ from the ones of
the level $\ell+1$.
The parent cell-average is the weighted average
of the children cell-averages
\[
\bar{u}_{\ell,i,j}=\frac{1}{4}(\bar{u}_{\ell+1,2i,2j}+\bar{u}_{\ell+1,2i,2j+1}+
\bar{u}_{\ell+1,2i+1,2j}+\bar{u}_{\ell+1,2i+1,2j+1})
\]
and thus the projection operator is exact and unique.
To predict the cell-averages of a level $\ell+1$ from the ones of
the level $\ell$, we use a prediction (or interpolation) operator
\[
P_{\ell\rightarrow \ell+1}\,:\, \bar{U}_{\ell}\,\mapsto\,\widetilde{U}_{\ell+1}.
\]
\bigskip \noindent This operator yields an approximation $\widetilde{U}_{\ell+1}$
of $\bar{U}_{\ell+1}$ at the level $\ell+1$.
In this paper, we use third order interpolation given by a tensor product approach
\cite{BH96}.
For $n,p\in\{0,1\}$, we define
\begin{eqnarray}
\widetilde{u}_{\ell+1,2i+n,2j+p} & = & \bar{u}_{\ell, i,j} + \frac{1}{8}(-1)^{n}
\left( \bar{u}_{\ell,i+1,j}-\bar{u}_{\ell,i-1,j} \right) \nonumber \\
& & + \frac{1}{8}(-1)^{p} \left( \bar{u}_{\ell,i,j+1}-\bar{u}_{\ell,i,j-1} \right) \\
& & +\frac{1}{64}(-1)^{np} \left[ \left( \bar{u}_{\ell,i+1,j+1}-\bar{u}_{\ell,i+1,j-1} \right) -
\left( \bar{u}_{\ell,i-1,j+1}-\bar{u}_{\ell,i-1,j-1} \right) \right]. \nonumber
\end{eqnarray}
%
First, this prediction is local, since it is made from the cell
average $\bar{u}_{\ell,i,j}$ and the eight nearest uncles $\bar{u}_{\ell,i\pm1,j\pm1}$.
Second, it is consistent with the projection, {\it i.e.}
$P_{\ell+1\rightarrow\ell}\circ P_{\ell\rightarrow\ell+1}=\mbox{Id}$.
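To make the stencil explicit, a direct Python transcription might look as follows; this is a sketch of our own, assuming a full 2D array of cell averages at level $\ell$ and an interior cell $(i,j)$:
\begin{verbatim}
import numpy as np

def predict_children(u, i, j):
    # third-order tensor-product prediction of the four children
    # of the parent cell (i,j); u holds the level-l cell averages
    Qx  = 0.125 * (u[i+1, j] - u[i-1, j])
    Qy  = 0.125 * (u[i, j+1] - u[i, j-1])
    Qxy = ((u[i+1, j+1] - u[i+1, j-1])
           - (u[i-1, j+1] - u[i-1, j-1])) / 64.0
    child = np.empty((2, 2))
    for n in (0, 1):
        for p in (0, 1):
            child[n, p] = (u[i, j] + (-1)**n * Qx
                           + (-1)**p * Qy + (-1)**(n*p) * Qxy)
    return child   # child[n, p] approximates cell (2i+n, 2j+p)
\end{verbatim}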
The differences between the exact and the predicted values at three of the
children cells yield the wavelet (or detail) coefficients.
Since the sum of the four details in the children cells
is equal to zero \cite{BH96}, only three of them are independent:
\begin{eqnarray}
\bar{d}_{\ell+1,2i,2j+1} & = & \bar{u}_{\ell+1,2i,2j+1}-\widetilde{u}_{\ell+1,2i,2j+1} \nonumber \\
\bar{d}_{\ell+1,2i+1,2j} & = & \bar{u}_{\ell+1,2i+1,2j}-\widetilde{u}_{\ell+1,2i+1,2j}\\
\bar{d}_{\ell+1,2i+1,2j+1} & = & \bar{u}_{\ell+1,2i+1,2j+1}-\widetilde{u}_{\ell+1,2i+1,2j+1}. \nonumber
\end{eqnarray}
This implies that the knowledge of the cell-average values on the children
$\bar{U}_{\ell+1}$ is equivalent to the knowledge of the cell-average
values on the parents $\bar{U}_{\ell}$ and the wavelet coefficients
$\bar{D}_{\ell+1}=(\bar{d}_{\ell+1,2i,2j+1},\bar{d}_{\ell+1,2i+1,2j},
\bar{d}_{\ell+1,2i+1,2j+1})_{(i,j)\in\Lambda_{\ell}}$.
The so-called multiresolution transform on the cell-average values
is obtained by repeating this operation recursively on $L$ levels~\cite{Har95},
\[
\bar{U}_{L}\longleftrightarrow(\bar{D}_{L},\,\bar{D}_{L-1},\,\ldots,\bar{D}_{1},\,\bar{U}_{0}).
\]
In conclusion, knowing the cell-average values of all the
leaves $\bar{U}_{L}$ is equivalent to knowing the cell-average
value of the root $\bar{U}_{0}$ and the details of all the other
nodes of the tree structure.
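Combining projection, prediction and details, one level of the decomposition can be sketched as follows (again an illustration of our own on full arrays, with boundary cells omitted; \texttt{predict\_children} is the routine from the previous sketch):
\begin{verbatim}
def mr_decompose_level(u_fine):
    # project: parent averages are the mean of the four children
    u_coarse = 0.25 * (u_fine[0::2, 0::2] + u_fine[0::2, 1::2]
                       + u_fine[1::2, 0::2] + u_fine[1::2, 1::2])
    # details: difference between exact and predicted children
    # (all four are stored here, although only three are independent)
    d = np.zeros_like(u_fine)
    ni, nj = u_coarse.shape
    for i in range(1, ni - 1):
        for j in range(1, nj - 1):
            pred = predict_children(u_coarse, i, j)
            d[2*i:2*i+2, 2*j:2*j+2] = \
                u_fine[2*i:2*i+2, 2*j:2*j+2] - pred
    return u_coarse, d
\end{verbatim}
Applying the routine recursively to \texttt{u\_coarse} yields the full transform $(\bar{D}_{L},\ldots,\bar{D}_{1},\bar{U}_{0})$.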
In the MR scheme, instead of using the representation on the full
uniform grid $\Omega_{L},$ the numerical solution $\bar{U}_{MR}^{n}=\bar{U}_{L,MR}^{n}$
is formed by cell averages on an adaptive sparse grid $\Gamma^{n}=\Gamma_{L}^{n}$.
Grid adaptivity in the MR scheme corresponds to an incomplete tree structure,
where cell refinement may be interrupted at intermediate scale levels.
This means that $\Gamma^{n}$ is formed by \textit{leaf cells} $\Omega_{\ell,i,j}$,
$0\leq\ell\leq L$, $(i,j) \in {\cal L}(\Lambda_{\ell})$, which are cells without children.
Here ${\cal L}(\Lambda_{\ell})$ denotes the ensemble of indices for the existing leaf cells
of the level $\ell$.
Three basic steps are undertaken to evolve the solution from $\bar{U}_{MR}^{n}$ to $\bar{U}_{MR}^{n+1}$:
\bigskip \noindent \textbf{Refinement:} ${\bar{U}}_{MR}^{n+}\leftarrow{\mathbf{R}}\bar{U}_{MR}^{n}$
\bigskip \noindent \textbf{Evolution:} $\check{\bar{U}}_{MR}^{n+1}\leftarrow\mathbf{E}_{MR}{\bar{U}}_{MR}^{n+}$
\bigskip \noindent \textbf{Coarsening:} $\bar{U}_{MR}^{n+1}\leftarrow\mathbf{T}(\epsilon)\check{\bar{U}}_{MR}^{n+1}$
\bigskip \noindent The refinement operator ${\mathbf{R}}$ is a precautionary measure
to account for possible translation or creation of finer scales in
the solution between two subsequent time steps.
As the regions of smoothness or irregularities of the solution may change with time,
the grid $\Gamma^{n}$ may not be convenient anymore at the next time
step $t^{n+1}$.
Hence, before doing the time evolution, the representation
of the solution should be extended onto a grid ${\Gamma}^{n+}$, which
is expected to be a refinement of $\Gamma^{n}$, and to contain $\Gamma^{n+1}$.
Then, the time evolution operator $\mathbf{E}_{MR}=\mathbf{E}_{MR}(\Delta t)$
is applied.
The subscript MR in $\mathbf{E}_{MR}$ means that only the cell-averages
on the leaves of the computational grid ${\Gamma}^{n+}$ are evolved in time, and
that an adaptive flux computation $F_{MR}({\bar{U}}_{MR}^{n+})$ is
adopted at interfaces of cells of different scale levels.
Finally, a thresholding operation $\mathbf{T}(\epsilon)$ (coarsening)
is applied in order to unrefine those cells in ${\Gamma}^{n+}$ that
are unnecessary for an accurate representation of $\bar{U}_{MR}^{n+1}$.
The choice of the threshold value is motivated by an error analysis equilibrating the perturbation and discretization errors. For details we refer to \cite{CKM03,RST03}.
To compress data in an adaptive tree structure, while
still being able to navigate through it, \textit{gradedness} is required.
In particular, for a given node in the dynamic tree structure we
require that:
\begin{itemize}
\item its parent and eight nearest uncles are in the tree (if not, we create
them as nodes);
\item for flux computations, if $\Omega_{\ell,i,j}$ is a leaf, its four
nearest cousins $\Omega_{\ell,i\pm2,j}$ and $\Omega_{\ell,i,j\pm2}$
in each direction are in the tree (if not, we create them as virtual
leaves);
\item if a child is created, all its brothers are also created;
\end{itemize}
For more details of these procedures, we refer to \cite{RST03}.
In the tree structure, the thresholding operator $\mathbf{T}(\epsilon)$
is defined by removing leaves where details are smaller than a prescribed
tolerance $\epsilon$, while preserving the gradedness property, and
the refinement operation ${\mathbf{R}}$ adds one more level as security
zone, in order to forecast the evolution of the solution in the tree
representation at the next time step.
These two operations are performed by the following procedure.
We denote by $\Lambda$ the ensemble of indices of the existing tree
nodes in $\Gamma^{n+}$, by $\mathcal{L}({\Lambda})$ the restriction
of $\Lambda$ onto the leaves, and by $\Lambda_{\ell}$ the restriction
of $\Lambda$ to a level $\ell$, $0\leq\ell<L$.
For the whole tree, from the leaves to the root:
\begin{itemize}
\item Compute the details on the nodes $\bar{d}_{\ell,i,j},\;(i,j)\in\Lambda_{\ell-1}$,
by the multiresolution transform;
\item Mark cells as deletable if the details on the corresponding nodes
and on their brothers are smaller than the prescribed tolerance.
\end{itemize}
For the whole tree, from the leaves to the root:
\begin{itemize}
\item If a node and its children nodes are deletable, and the children nodes
are simple leaves (without virtual children), then delete its children.
\item If the node and its parents are not deletable, and it is not at the maximum level,
then create the children for this node.
\end{itemize}
To illustrate the adaptive flux computation, we consider the leaf $\Omega_{\ell+1,2i+1,2j}$,
sharing an interface with another leaf $\Omega_{\ell,i+1,j}$ at
a lower scale level, as illustrated in Figure \ref{cap:Adptive-numerical-flux}.
For the calculation of the outgoing numerical flux on the right
interface, we use the cell width in the $x$ direction $h_{x,\ell+1}$
as step size.
The required right neighboring stencils are obtained from
the cousins $\Omega_{\ell+1,2i+2,2j}$ and $\Omega_{\ell+1,2i+3,2j}$,
which are virtual cells.
For conservation, the ingoing flux on the leaf
$\Omega_{\ell,i+1,j}$ is set equal to the sum of the outgoing fluxes
on the neighbour leaves of level $\ell+1$.
For more details on the implementation of this procedure we refer
to \cite{RST03}.
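Schematically, the conservative correction amounts to the following (a sketch in our own notation, where \texttt{flux\_fn} is the two-point numerical flux and each entry of \texttt{fine\_face\_states} is a pair of reconstructed states on one fine-level face):
\begin{verbatim}
def flux_at_level_interface(flux_fn, fine_face_states):
    # one flux per fine-level face of width h_{x,l+1}
    fine_fluxes = [flux_fn(uL, uR) for (uL, uR) in fine_face_states]
    # conservation: the ingoing flux on the coarse leaf is the sum
    # of the outgoing fluxes of the adjacent fine-level leaves
    coarse_flux = sum(fine_fluxes)
    return fine_fluxes, coarse_flux
\end{verbatim}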
\begin{figure}[htbp]
\begin{center}\includegraphics[scale=0.4]{flux.png}
\end{center}
\caption{\label{cap:Adptive-numerical-flux}Adaptive numerical flux computation in 2D.}
\end{figure}
\section{Numerical results}
In all numerical simulations, we consider an initially planar Chapman-Jouguet detonation moving with constant unitary speed through
the unburnt gas to the right of the domain. Setting $\gamma = 1.4$, $r=1$, $Q_0=1$ and $T_i=0.22$ and assigning the following
burnt gas values
\begin{equation}
\rho_b = \gamma \; , \, v_{x,b} = \delta \; , \, p_b = 1 \; , Z_b = 0 \, ,
\end{equation}
\bigskip \noindent where
\begin{equation}
\delta = \sqrt{\frac{2(\gamma-1)}{\gamma+1}} \, ,
\end{equation}
\bigskip \noindent
the corresponding von Neumann state past the shock wave is \cite{TV08,HLW00}
\begin{equation}
\rho_N = \frac{\gamma}{1-\delta} \; , \, v_{x,N} = 2\delta \; , \, p_N = 1 + \gamma \delta \; , Z_N = 1 \, ,
\end{equation}
\bigskip \noindent
and the unburnt gas values are
\begin{equation}
\rho_u = \frac{\gamma}{1+\delta} \; , \, v_{x,u} = 0 \; , \, p_u = 1 - \gamma \delta \; , Z_u = 1 \, .
\end{equation}
The resulting temperature of the unburnt gas $T_u=0.215995$ is only slightly lower than the chosen ignition temperature $T_i$.
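This value is easily verified; the following short check of our own, assuming the ideal gas relation $T=p/(r\rho)$, reproduces it:
\begin{verbatim}
import math

gamma, r = 1.4, 1.0
delta = math.sqrt(2.0 * (gamma - 1.0) / (gamma + 1.0))  # ~0.57735

rho_u = gamma / (1.0 + delta)       # unburnt density
p_u   = 1.0 - gamma * delta         # unburnt pressure
T_u   = p_u / (r * rho_u)           # ideal gas law
print(round(T_u, 6))                # 0.215995 < T_i = 0.22
\end{verbatim}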
The initial condition for an initial front at $x=x_0$ is
\begin{equation}
q (x,0) =
\left\{
\begin{array}{lll}
(q_N - q_b) \exp \left[ a (x-x_0) \right] + q_b & \mbox{ if } & x \leq x_0 \\
q_u & \mbox{ if } & x > x_0
\end{array}
\right.
\end{equation}
\bigskip \noindent where $a$ is set to $\frac{1}{\tau}$. Here $q$ stands for $\rho$, $v_x$, $p$, and $Z$,
while $v_y=0$ everywhere. In this way the initial condition is close to the classical Zeldovich-von Neumann-D\"oring (ZND) solution of the reactive Euler equations.
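A minimal transcription of this initial condition for the density, reusing the states given above, could read (a sketch of our own):
\begin{verbatim}
import numpy as np

def initial_profile(x, q_N, q_b, q_u, x0, tau):
    # exponential relaxation from q_N at the front x0 towards q_b
    # behind it (a = 1/tau); unburnt value q_u ahead of the front
    return np.where(x <= x0,
                    (q_N - q_b) * np.exp((x - x0) / tau) + q_b,
                    q_u)

gamma, delta = 1.4, (2.0 * 0.4 / 2.4) ** 0.5
x = np.linspace(-3.0, 1.0, 2**11)
rho0 = initial_profile(x, gamma / (1.0 - delta), gamma,
                       gamma / (1.0 + delta), x0=-2.0, tau=0.1)
\end{verbatim}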
\subsection{One-dimensional detonation}
First we consider a one-dimensional setting.
The computational domain is $\Omega=[-3,1]$.
In this case, the location of the initial front is set to $x_0 = -2$.
In order to follow the detonation wave and perform longer computations, a change of variables is made for $v_x$.
We set $v'_x = v_x - \delta$ and omit the superscript $'$ everywhere.
Since the initial transition happens in the segment $[-3,-1]$, we only focus on the domain $[-1,1]$.
The dimensionless final time is $t=2.2$.
\subsubsection{Non-stiff case}
In this first part, we consider a detonation with a time coefficient $\tau=10^{-1}$. As observed in Figure
\ref{fig:K10}, the detonation front propagates from left to right with constant maximal values of density, pressure
and velocity, which correspond to the von Neumann state. The slopes for the density, pressure, and velocity, are moderate
on the left side of the shock, {\it i.e.} in the burnt gas zone, but become sharp on the right side, at the interface
with the unburnt gases. For all the displayed quantities, the curves agree with the reference computation, which was
performed on a fine grid with $L=14$ scales, {\it i.e.} with $16384$ grid points.
\begin{figure}[htbp]
\includegraphics[width=0.5 \textwidth]{K10_rho00020.png}
\includegraphics[width=0.5 \textwidth]{K10_p00020.png}
\\
\includegraphics[width=0.5 \textwidth]{K10_T00020.png}
\includegraphics[width=0.5 \textwidth]{K10_Y00020.png}
\\
\includegraphics[width=0.5 \textwidth]{K10_u00020.png}
\includegraphics[width=0.5 \textwidth]{K10_Tree00020.png}
\caption{Chapman-Jouguet detonation front:
density ({\it top, left}), pressure ({\it top, right}), temperature ({\it middle, left}),
partial mass of the limiting reactant ({\it middle, right}), velocity ({\it bottom, left}) and
adaptive mesh ({\it bottom, right}) at $t=2.2$, $L=11$ scales, $\epsilon=2.5 \cdot 10^{-3}$, $\tau=10^{-1}$.}
\label{fig:K10}
\end{figure}
Table \ref{table:K10} assembles results for computations performed with different maximum numbers of scales for both the MR and the FV method.
The relative error is computed on the von Neumann value of the density.
The numerical value is averaged in time to damp
its oscillations and is compared to the theoretical one. For the MR computations we observe
a CPU time compression that improves with an increasing number of scales, while the relative error remains comparable to that of the
same computation on the regular fine grid. As expected, the error is reduced when increasing the number of scales $L$.
\begin{table}[htbp]
\begin{tabular}{crrrrr}
Method & $L$ & $\epsilon$ & CPU time & CPU compression & relative error \\
\hline
MR & $9$ & $10^{-2}$ & 1 $s$ & 41.34 \% & $1.099 \cdot 10^{-1}$ \\
FV & $9$ & ~ & 3 $s$ & 100.00 \% & $1.109 \cdot 10^{-1}$ \\
MR & $10$ & $5 \cdot 10^{-3}$ & 4 $s$ & 34.28 \% & $6.519 \cdot 10^{-2}$ \\
FV & $10$ & ~ & 13 $s$ & 100.00 \% & $6.601 \cdot 10^{-2}$ \\
MR & $11$ & $2.5 \cdot 10^{-3}$ & 11 $s$ & 21.86 \% & $3.554 \cdot 10^{-2}$ \\
FV & $11$ & ~ & 54 $s$ & 100.00 \% & $3.617 \cdot 10^{-2}$ \\
MR & $12$ & $1.25 \cdot 10^{-3}$ & 40 $s$ & 18.75 \% & $1.740 \cdot 10^{-2}$ \\
FV & $12$ & ~ & 3 $min$ 37 $s$ & 100.00 \% & $1.783 \cdot 10^{-2}$ \\
\end{tabular}
\caption{Chapman-Jouguet detonation front: CPU time, CPU compression and relative error on
the von Neumann density for different scales and for the FV and MR methods, $\tau = 10^{-1}$.}
\label{table:K10}
\end{table}
\subsubsection{Influence of the stiffness}
In this part, we perform computations of stiff problems and set $\tau$ first to $10^{-2}$, then to $10^{-3}$.
In order to resolve correctly the detonation front, $13$ scales are required for the case $\tau=10^{-3}$.
The results are shown in Figures \ref{fig:K100} and \ref{fig:K1000}.
We observe a much sharper and thinner reaction zone, reduced to a few computational points in
the case $\tau=10^{-3}$. We observe a good agreement with the reference computations, which were computed on a regular fine grid with $L=15$ scales.
\begin{figure}[htbp]
\includegraphics[width=0.5 \textwidth]{K100_rho00020.png}
\includegraphics[width=0.5 \textwidth]{K100_p00020.png}
\\
\includegraphics[width=0.5 \textwidth]{K100_T00020.png}
\includegraphics[width=0.5 \textwidth]{K100_Y00020.png}
\\
\includegraphics[width=0.5 \textwidth]{K100_u00020.png}
\includegraphics[width=0.5 \textwidth]{K100_Tree00020.png}
\caption{Chapman-Jouguet detonation front:
density ({\it top, left}), pressure ({\it top, right}), temperature ({\it middle, left}),
partial mass of the limiting reactant ({\it middle, right}), velocity ({\it bottom, left}) and
adaptive mesh ({\it bottom, right}) at $t=2.2$, $L=12$ scales, $\epsilon=1.25 \cdot 10^{-3}$, $\tau=10^{-2}$.}
\label{fig:K100}
\end{figure}
\begin{figure}[htbp]
\includegraphics[width=0.5 \textwidth]{K1000_rho00020.png}
\includegraphics[width=0.5 \textwidth]{K1000_p00020.png}
\\
\includegraphics[width=0.5 \textwidth]{K1000_T00020.png}
\includegraphics[width=0.5 \textwidth]{K1000_Y00020.png}
\\
\includegraphics[width=0.5 \textwidth]{K1000_u00020.png}
\includegraphics[width=0.5 \textwidth]{K1000_Tree00020.png}
\caption{Chapman-Jouguet detonation front:
density ({\it top, left}), pressure ({\it top, right}), temperature ({\it middle, left}),
partial mass of the limiting reactant ({\it middle, right}), velocity ({\it bottom, left}) and
adaptive mesh ({\it bottom, right}) at $t=2.2$, $L=13$ scales, $\epsilon=6.25\cdot 10^{-4}$, $\tau=10^{-3}$.}
\label{fig:K1000}
\end{figure}
Table \ref{table:Kvar} gives the results of the computations for $\tau=10^{-2}$ and $\tau=10^{-3}$. As expected,
a high compression rate is reached when 13 scales are used. However, as seen in Figure \ref{fig:K1000}, the error
on the von Neumann density remains quite large, even though the detonation front is well tracked.
\begin{table}[htbp]
\begin{tabular}{crrrrrr}
Method & $\tau$ & $L$ & $\epsilon$ & CPU time & CPU comp. & rel. error \\
\hline
MR & $10^{-2}$ & $12$ & $1.25 \cdot 10^{-3}$ & 43 $s$ & 20.08 \% & $1.336 \cdot 10^{-1}$ \\
FV & $10^{-2}$ & $12$ & ~ & 3 $min$ 37 $s$ & 100.00 \% & $1.342 \cdot 10^{-1}$ \\
MR & $10^{-3}$ & $13$ & $6.25 \cdot 10^{-4}$ & 2 $min$ 03 $s$ & 7.56 \% & $3.057 \cdot 10^{-1}$ \\
FV & $10^{-3}$ & $13$ & ~ & 27 $min$ 13 $s$ & 100.00 \% & $3.057 \cdot 10^{-1}$
\end{tabular}
\caption{Chapman-Jouguet detonation front: CPU time, CPU compression and relative error on
the von Neumann density for different values of the stiffness coefficient and for the FV and MR methods.}
\label{table:Kvar}
\end{table}
\subsection{Two-dimensional detonation}
Now we consider a two-dimensional problem.
The interaction between a detonation wave and a pocket of unburnt gas is simulated.
The computational domain is $[0,4] \times [-1,1]$, the initial planar detonation front is located at $x=1.7$ and the initial spot of unburnt gas is centered in $(x_0,y_0) = (2,0)$.
The radius of the circular pocket is $r_0 = 0.1$. The parameters of the
detonation front are the same as in the non-stiff case, {\it i.e.} we choose $\tau = 10^{-1}$.
The dimensionless final time is $t=1.3$.
\begin{figure}[htbp]
\includegraphics[width=0.5 \textwidth]{schlieren00000.png}
\includegraphics[width=0.5 \textwidth]{tree00000.png}
\\
\includegraphics[width=0.5 \textwidth]{schlieren00025.png}
\includegraphics[width=0.5 \textwidth]{tree00025.png}
\\
\includegraphics[width=0.5 \textwidth]{schlieren00050.png}
\includegraphics[width=0.5 \textwidth]{tree00050.png}
\caption{Time evolution of the interaction between a detonation wave and a pocket of unburnt gas computed with the adaptive MR method. Numerical Schlieren
of the density gradient ({\it left}) and corresponding meshes ({\it right}) at $t=0$ ({\it top}), $t=0.65$ ({\it middle})
and $t=1.3$ ({\it bottom}).}
\label{fig:2D}
\end{figure}
In Figure \ref{fig:2D}, we observe the destabilization of the planar front by the pocket of unburnt gas. Circular
structures appear, due to the expansion of the pocket in all directions, and interact with the detonation wave moving
from left to right. Recirculation zones, advected by the flow, are observed in the center of the pocket. We also
observe reflections of the waves at the free-slip boundaries $y=-1$ and $y=1$.
\begin{figure}[htbp]
\includegraphics[width=\textwidth]{tree00050_zoom.png}
\caption{Adaptive MR computation of the interaction between a detonation wave and a pocket of unburnt gas: Zoom on the adaptive
mesh at $t=1.3$.}
\label{fig:2Dzoom}
\end{figure}
On the right side of Figure \ref{fig:2D}, we observe that the adaptive mesh closely follows the structures of the different
waves that interact in the flow. A zoom into the center of the mesh (Figure \ref{fig:2Dzoom}) shows how well the
mesh adapts to all the structures of the flow. The computation, performed on $L=10$ scales, {\it i.e.} a maximum of
$2^{2 L} = 1,048,576$ points, requires 2 $h$ 59 $min$ of CPU time on a 16 Core PC. The same
computation on the regular fine grid lasts 14 $h$ 35 $min$.
Hence the CPU time compression is around 20 \%.
\begin{figure}[htbp]
\includegraphics[width=\textwidth]{slice00050.png}
\caption{Interaction between a detonation wave and a pocket of unburnt gas. One-dimensional cuts of density
for $y=0$ at $t=1.3$. Comparison of the adaptive MR and the FV computations for $L=10$ with the FV reference computation for $L=11$. The total number of grid points in the FV computations is $2^{2 L}$.}
\label{fig:2Dslice}
\end{figure}
Figure \ref{fig:2Dslice} shows a cut of the density at $y=0$. We observe an excellent agreement between the MR and FV
computations with $L=10$ scales, and a good agreement of both curves with the FV reference computation obtained with $L=11$ scales, which validates the grid convergence of the computation.
\section{Conclusion and perspectives}
We presented an extension of the adaptive multiresolution method for the reactive Euler equations, able to deal with
fast chemical reactions.
A finite volume discretization with a second-order shock-capturing scheme is used in space.
Classical Strang splitting is applied to deal with the stiffness of the physical problem in time.
Space and time adaptivity are then introduced using multiresolution analysis and Runge--Kutta--Fehlberg schemes, respectively. The implementation uses dynamic memory allocation with tree data structures.
Applications to detonation problems in one and two space dimensions validated the algorithm and illustrated the efficiency of the adaptation strategy.
The adaptive computations yield both speed-up of CPU time and memory compression with respect to uniform grid computations while the precision is automatically controlled using suitable thresholding techniques.
As perspectives, we plan to extend the method to three space dimensions and to include multi-species chemical reactions and viscous effects.
The parallelization of the adaptive method using tree data structures to handle the adaptive grid remains a challenging task for the near future.
\section*{Acknowledgements}
This article is dedicated to Professor Henning Bockhorn on the occasion of his 70th birthday, thanking him
cordially for the great time we had in his group, first in Kaiserslautern and then in Karlsruhe.
We also thank Margarete Domingues, Ralf Deiterding and Sonia Gomes for constructive
discussions on the topic and fruitful interactions.
KS and OR thankfully acknowledge financial support from the ANR projects SiCoMHD (ANR-Blanc 2011-045)
and MAPIE (ANR-13-MONU-0002).
\label{sec:intro}
\setcounter{equation}{0}
Field theories with a complex action are difficult to treat
nonperturbatively, because the weight $e^{-S} = |e^{-S}|e^{i\varphi}$ in
the partition function is not real. Standard numerical approaches based on
a probability interpretation and importance sampling will then typically
break down, which is commonly referred to as the sign problem. This is a
pressing problem for QCD at nonzero baryon chemical potential, where a
nonperturbative determination of the phase diagram in the plane of
temperature and chemical potential is still lacking
\cite{deForcrand:2010ys}. Several methods have
been developed to explore at least part of the phase diagram
\cite{Fodor:2001au,Fodor:2002km,Allton:2002zi,Gavai:2003mf,deForcrand:2002ci,
D'Elia:2002gd,D'Elia:2009qz,Kratochvila:2005mk,Alexandru:2005ix,Ejiri:2008xt,
Fodor:2007vv},
but in general these can only be applied in a limited region. Recent
years have also seen an intense study of the sign problem in QCD and
related theories, which has led to new formulations
\cite{Anagnostopoulos:2001yb,Ambjorn:2002pz}
and considerable insight into how the
complexity of the weight interplays with physical observables
\cite{Akemann:2004dr,Splittorff:2005wc,Splittorff:2006fu,Han:2008xj,
Bloch:2008cf,Lombardo:2009aw,Danzer:2009dk,Andersen:2009zm,
Hands:2010zp,Bringoltz:2010iy}.
Finally, in some theories the sign problem can be eliminated completely,
using a reformulation which yields a manifestly real and positive weight
\cite{Chandrasekharan:1999cm,Endres:2006xu,Chandrasekharan:2008gp,
Banerjee:2010kc}.
This demonstrates that the sign problem is not a problem of principle for
a theory, but instead tied to the formulation and/or algorithm. For QCD an
exact reformulation without a sign problem has unfortunately not (yet)
been found.
Complex Langevin dynamics \cite{Parisi:1984cs,Klauder:1983}
offers the possibility of a general solution to this problem. In this
formulation the fields, denoted here collectively as $\phi$, are
supplemented with an additional fictional Langevin time $\vartheta$ and
the system evolves according to the stochastic equation,
\begin{equation}
\label{eqphi}
\frac{\partial \phi_{x}(\vartheta)}{\partial \vartheta} =
-\frac{\delta S[\phi;\vartheta]}{\delta \phi_{x}(\vartheta)}
+ \eta_x(\vartheta).
\end{equation}
In the case that the action is complex, the fields are
\emph{complexified} as
\begin{equation}
\phi\to\phi^{\rm R} + i\phi^{\rm I},
\end{equation}
and the Langevin equations read (using general complex noise)
\begin{subequations}
\begin{align}
\frac{\partial \phi_{x}^{\rm R}}{\partial \vartheta} &= K_{x}^{\rm R} +
\sqrt{N_{\rm R}}\eta^{\rm R}_{x},
& K_{x}^{\rm R} = -\mbox{Re}\frac{\delta S}{\delta
\phi_{x}}\Big|_{\phi\to\phi^{\rm R}+i\phi^{\rm I}},
\\
\frac{\partial \phi_{x}^{\rm I}}{\partial \vartheta} &= K_{x}^{\rm I} +
\sqrt{N_{\rm I}}\eta^{\rm I}_x,
& K_{x}^{\rm I} = -\mbox{Im}\frac{\delta S}{\delta \phi_{x}}\Big|_{\phi\to\phi^{\rm R}+i\phi^{\rm I}}.
\end{align}
\label{eqphic}
\end{subequations}
The strength of the noise in the real and imaginary components of the
Langevin equation is constrained via $N_{\rm R}-N_{\rm I}=1$, and the
noise furthermore satisfies
\begin{subequations}
\begin{align}
&\langle \eta_x^{\rm R}(\vartheta)\rangle = \langle \eta_x^{\rm I}(\vartheta)\rangle = \langle \eta^{\rm R}_{x}(\vartheta)\eta^{\rm I}_{y}(\vartheta') \rangle = 0, \\
&\langle \eta^{\rm R}_{x}(\vartheta)\eta^{\rm R}_{y}(\vartheta') \rangle =
\langle \eta^{\rm I}_{x}(\vartheta)\eta^{\rm I}_{y}(\vartheta') \rangle
= 2\delta_{xy}\delta(\vartheta - \vartheta'),
\end{align}
\end{subequations}
i.e., it is Gaussian.
Since the complex action is only used to generate the drift terms but not
for importance sampling, complex Langevin dynamics can potentially avoid
the sign problem.\footnote{Early studies of complex Langevin
dynamics can be found in, e.g., Refs.\
\cite{Klauder:1985b,Karsch:1985cb,Ambjorn:1985iw,Ambjorn:1986fz,
Flower:1986hv,Ilgenfritz:1986cd}.
Ref.\ \cite{Damgaard:1987rr} contains a further guide to the
literature. More recent work includes Refs.\
\cite{Berges:2005yt,Berges:2006xc,Berges:2007nr,Aarts:2008rr,
Aarts:2008wh,Aarts:2009hn,Aarts:2009dg,Aarts:2009uq,Pehlevan:2007eq,
Guralnik:2009pk}.
}
In the limit of infinite Langevin time, noise averages of observables
should
equal the standard quantum expectation values. For a real action/Langevin
dynamics, formal proofs that observables converge to the correct value can
be formulated, using properties of the associated Fokker-Planck equation
\cite{Damgaard:1987rr}.
If the action is complex and the Langevin dynamics extends into the
expanded complexified space, these proofs no longer hold.
Nevertheless, a formal derivation of the validity of the approach can
still be given, employing holomorphicity and the Cauchy-Riemann equations.
We sketch here the basic notion, suppressing all indices for notational
simplicity, and refer to Ref.\ \cite{Aarts:2009uq} for details.
Associated with the Langevin process (\ref{eqphic}) is a (real and
positive) probability density $P[\phi^{\rm R}, \phi^{\rm I};\vartheta]$, which
evolves according the Fokker-Planck equation
\begin{equation}
\frac{\partial P[\phi^{\rm R},\phi^{\rm I};\vartheta]}{\partial\vartheta} = L^T
P[\phi^{\rm R}, \phi^{\rm I};\vartheta],
\end{equation}
with the Fokker-Planck operator
\begin{equation}
L^T = \frac{\partial}{\partial\phi^{\rm R}} \left[ N_{\rm R} \frac{\partial}{\partial\phi^{\rm R}}- K^{\rm R}\right]
+ \frac{\partial}{\partial\phi^{\rm I}} \left[ N_{\rm I} \frac{\partial}{\partial\phi^{\rm I}}- K^{\rm I}\right].
\end{equation}
Stationary solutions of this Fokker-Planck equation are only known in
very special cases
\cite{Aarts:2009hn,Ambjorn:1985iw,Aarts:2009uq,Nakazato:1985zj}.
Expectation values obtained by solving the stochastic process should then
equal
\begin{equation}
\label{eqP}
\langle O\rangle_{P(\vartheta)}
= \frac{\int D\phi^{\rm R} D\phi^{{\rm I}}\, P[\phi^{\rm R},\phi^{\rm I};\vartheta]
O[\phi^{\rm R}+i\phi^{\rm I}]}
{\int D\phi^{\rm R} D\phi^{\rm I}\, P[\phi^{\rm R},\phi^{\rm I};\vartheta]}.
\end{equation}
However, we may also consider expectation values with respect to a
complex weight $\rho[\phi;\vartheta]$,
\begin{equation}
\langle O\rangle_{\rho(\vartheta)}
= \frac{\int D\phi\, \rho[\phi;\vartheta] O[\phi]}{\int D\phi\,
\rho[\phi;\vartheta]},
\end{equation}
where, using Eq.\ (\ref{eqphi}), $\rho$ evolves according to a complex
Fokker-Planck equation
\begin{equation}
\frac{\partial\rho[\phi;\vartheta]}{\partial\vartheta} =
L_0^T\rho[\phi;\vartheta],
\;\;\;\;\;\;\;\;
L_0^T = \frac{\partial}{\partial\phi} \left[ \frac{\partial}{\partial\phi}+ \frac{\partial S}{\partial\phi} \right].
\end{equation}
This equation has the desired stationary solution $\rho[\phi]\sim
\exp(-S)$.
Under some assumptions and relying on holomorphicity and partial
integration \cite{Aarts:2009uq}, one can show that these expectation
values are equal, and
\begin{equation}
\langle O\rangle_{P(\vartheta)} = \langle O\rangle_{\rho(\vartheta)}.
\end{equation}
If it can subsequently be shown that
\begin{equation}
\lim_{\vartheta\to\infty} \langle O\rangle_{\rho(\vartheta)} = \langle
O\rangle_{\rho(\infty)},
\;\;\;\;\;\;\;\;
\rho(\phi;\infty)\sim \exp(-S),
\end{equation}
the applicability of complex Langevin dynamics is demonstrated. In Ref.\
\cite{Aarts:2009uq} this proposal was studied in some detail in the case
of simple models. Remarkably it was found that for complex noise
($N_{\rm I}>0$), the Langevin dynamics does {\em not} converge to the correct
answer. On the other hand, for real noise ($N_{\rm I}=0$) correct convergence
was observed.
In this paper, we continue our investigation into the applicability of
complex Langevin dynamics at finite chemical potential \cite{Aarts:2008rr,
Aarts:2008wh,Aarts:2009hn,Aarts:2009dg,Aarts:2009uq}. We consider the
three-dimensional XY model for a number of reasons. We found earlier that
this theory is very sensitive to instabilities and runaways and therefore
requires the use of an adaptive stepsize \cite{Aarts:2009dg}. This is
similar to the case of QCD in the heavy dense limit
\cite{Aarts:2008rr,Aarts:2009dg}. Like QCD, this theory has a Roberge-Weiss
periodicity at imaginary chemical potential \cite{Aarts:2009dg,Roberge:1986mm}.
Furthermore, it is closely related to the relativistic Bose gas at finite
chemical potential, for which complex Langevin dynamics was shown to work
very well (at weak coupling in four dimensions)
\cite{Aarts:2008wh,Aarts:2009hn}. Finally, this theory can be rewritten
using a world line formulation without a sign problem
\cite{Chandrasekharan:2008gp,Banerjee:2010kc}, which can be solved
efficiently using the worm algorithm \cite{Banerjee:2010kc,worm}. This
allows for a direct comparison for all parameter values.
The paper is organized as follows. In Sec.\ \ref{sec:xymodel}, we remind
the reader of some details of the XY model at real and imaginary chemical
potential, the adaptive stepsize algorithm we use and the related
phase-quenched XY model. The world line formulation and some properties of
the strong-coupling expansion are briefly mentioned in Sec.\ \ref{sec:wl}.
We then test the validity of complex Langevin dynamics in Sec.\
\ref{sec:comparison} and develop diagnostic tests in Sec.\
\ref{sec:diagnostics}. In the Conclusion we summarize our findings and
discuss possible directions for the future.
\section{XY model}
\label{sec:xymodel}
\setcounter{equation}{0}
The action of the XY model at finite chemical potential is
\begin{equation}
S = -\beta\sum_x\sum_{\nu=0}^2\cos(\phi_{x} - \phi_{x+\hat{\nu}} -
i\mu\delta_{\nu,0}) ,
\label{eq:action}
\end{equation}
where $0\leq \phi_x<2\pi$. The theory is defined on a lattice of size
$\Omega=N_\tau N_s^2$, and we use periodic boundary conditions. The
chemical potential $\mu$ is coupled to the Noether charge associated with
the global symmetry $\phi_x\rightarrow\phi_x+\alpha$ and is introduced in
the standard way \cite{Hasenfratz:1983ba}. The action satisfies
$S^*(\mu)=S(-\mu^*)$. At vanishing chemical potential the theory is known
to
undergo a phase transition at $\beta_{c}=0.45421$
\cite{Campostrini:2000iw,Banerjee:2010kc} between a disordered phase when
$\beta<\beta_{c}$ and an ordered phase when $\beta>\beta_{c}$.
The drift terms appearing in the complex Langevin equations are given by
\begin{subequations}
\begin{align}
K^{\rm R}_x = -\beta\sum_{\nu} & \Big[ \sin(\phi_x^{\rm R} - \phi_{x+\hat{\nu}}^{\rm R})\cosh(\phi_{x}^{\rm I} -
\phi_{x+\hat{\nu}}^{\rm I} - \mu\delta_{\nu,0}) \nonumber\\
&{}+ \sin(\phi_{x}^{\rm R} - \phi_{x-\hat{\nu}}^{\rm R})\cosh(\phi_{x}^{\rm I} - \phi_{x-\hat{\nu}}^{\rm I} + \mu\delta_{\nu,0}) \Big], \\
K^{\rm I}_x = -\beta\sum_{\nu} & \Big[ \cos(\phi_{x}^{\rm R} - \phi_{x+\hat{\nu}}^{\rm R})\sinh(\phi_{x}^{\rm I} -
\phi_{x+\hat{\nu}}^{\rm I} - \mu\delta_{\nu,0}) \nonumber\\
&{}+ \cos(\phi_{x}^{\rm R} - \phi_{x-\hat{\nu}}^{\rm R})\sinh(\phi_{x}^{\rm I} - \phi_{x-\hat{\nu}}^{\rm I} + \mu\delta_{\nu,0}) \Big].
\end{align}
\end{subequations}
The equations are integrated numerically by discretizing Langevin time as
$\vartheta=n\epsilon_n$ with $\epsilon_n$ the adaptive stepsize. Explicitly,
\begin{subequations}
\begin{align}
\phi_{x}^{\rm R}(n+1) {}&= \phi_{x}^{\rm R}(n) + \epsilon_{n}K_{x}^{\rm R}(n) + \sqrt{\epsilon_{n}}\eta_{x}(n),\\
\phi_{x}^{\rm I}(n+1) {}&= \phi_{x}^{\rm I}(n) + \epsilon_{n}K_{x}^{\rm I}(n),
\end{align}
\end{subequations}
where we specialized to real noise, with
$\langle\eta_x(n)\eta_{x'}(n')\rangle=2\delta_{xx'}\delta_{nn'}$. In the case
that
$\mu=\phi^{\rm I}=0$, the drift terms are bounded and $|K_x^{\rm R}|<6\beta$.
When $\phi^{\rm I}\neq 0$, the drift terms are unbounded, which can result in
instabilities and runaways. In this particular theory, much care is
required to numerically integrate the dynamics in a stable manner and we
found that an adaptive stepsize is mandatory \cite{Aarts:2009dg}. At each
timestep, the stepsize is determined according to
\begin{equation}
\epsilon_{n} = \min\left\{\bar{\epsilon}, \bar{\epsilon}\frac{\langle
K^{\rm max} \rangle}{K^{\rm max}_n}\right\} ,
\end{equation}
where
\begin{equation}
K^{\rm max}_n = \max_x \left| K^{\rm R}_x(n) + i K^{\rm I}_x(n)\right|.
\end{equation}
Here $\bar{\epsilon}$ is the desired target stepsize and $\langle K^{\rm
max} \rangle$ is either precomputed or computed during the thermalisation
phase. All observables are analyzed over equal periods of Langevin time to
ensure correct statistical significance.
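For concreteness, one adaptive update can be sketched in Python as follows; this is our own unoptimized transcription of the drift terms and stepsize rule above, with axis 0 the temporal direction and real noise only:
\begin{verbatim}
import numpy as np

def drift(phiR, phiI, beta, mu):
    KR = np.zeros_like(phiR); KI = np.zeros_like(phiI)
    for nu in range(3):
        m = mu if nu == 0 else 0.0
        dRp = phiR - np.roll(phiR, -1, axis=nu)   # phi_x - phi_{x+nu}
        dRm = phiR - np.roll(phiR, +1, axis=nu)   # phi_x - phi_{x-nu}
        dIp = phiI - np.roll(phiI, -1, axis=nu) - m
        dIm = phiI - np.roll(phiI, +1, axis=nu) + m
        KR -= beta * (np.sin(dRp)*np.cosh(dIp) + np.sin(dRm)*np.cosh(dIm))
        KI -= beta * (np.cos(dRp)*np.sinh(dIp) + np.cos(dRm)*np.sinh(dIm))
    return KR, KI

def langevin_step(phiR, phiI, beta, mu, eps_bar, K_mean):
    KR, KI = drift(phiR, phiI, beta, mu)
    Kmax = np.abs(KR + 1j*KI).max()
    eps = min(eps_bar, eps_bar * K_mean / Kmax)   # adaptive stepsize
    eta = np.random.normal(0.0, np.sqrt(2.0*eps), phiR.shape)
    return phiR + eps*KR + eta, phiI + eps*KI
\end{verbatim}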
The observable we focus on primarily in this study is the action density
$\langle S\rangle/\Omega$. After complexification the action is written as
$S=S^{\rm R}+iS^{\rm I}$, with
\begin{subequations}
\begin{align}
S^{{\rm R}} & = - \beta\sum_{x,\nu}\cos(\phi_{x}^{\rm R} - \phi_{x+\hat{\nu}}^{\rm R})\cosh(\phi_{x}^{\rm I} - \phi_{x+\hat{\nu}}^{\rm I} - \mu\delta_{\nu,0}), \\
S^{{\rm I}} & = \beta\sum_{x,\nu}\sin(\phi_{x}^{\rm R} - \phi_{x+\hat{\nu}}^{\rm R})\sinh(\phi_{x}^{\rm I} - \phi_{x+\hat{\nu}}^{\rm I} - \mu\delta_{\nu,0}).
\end{align}
\end{subequations}
After noise averaging, the expectation value of the imaginary part is
consistent with zero while the expectation value of the real part is even
in $\mu$, as is expected from symmetry considerations.
By choosing an imaginary chemical potential $\mu=i\mu_{\rm I}$ the action
(\ref{eq:action})
becomes purely real. This has both the advantage of enabling standard
Monte Carlo algorithms to be applied (we choose to employ real Langevin
dynamics) and that the behaviour at $\mu^2\gtrsim0$ can be assessed by
continuation from $\mu^2\lesssim0$. The action and drift term with
imaginary chemical potential are
\begin{align}
S_{\rm imag} =& -\beta\sum_{x,\nu}\cos(\phi_{x} - \phi_{x+\hat{\nu}} + \mu_{\rm I}\delta_{\nu,0}), \\
K_{x} =& -\beta\sum_{\nu}\left[
\sin(\phi_{x} - \phi_{x+\hat{\nu}} + \mu_{\rm I}\delta_{\nu,0}) + \sin(\phi_{x} - \phi_{x-\hat{\nu}} - \mu_{\rm I}\delta_{\nu,0}) \right] .
\end{align}
This theory is periodic under $\mu_{\rm I}\to\mu_{\rm I}+2\pi/N_{\tau}$, which
yields a Roberge-Weiss transition at $\mu_{\rm I}=\pi/N_{\tau}$, just as in
QCD \cite{Roberge:1986mm}.
This periodicity can be made explicit by shifting the chemical potential
to the final time slice, via the field redefinition
$\phi_{{\mathbf x},\tau}\longrightarrow\phi_{{\mathbf x},\tau}^{\prime} =
\phi_{{\mathbf x},\tau}-\mu_{\rm I}\tau$. The action is then (for arbitrary complex
chemical potential)
\begin{equation}
S_{\rm fts} = -\beta\sum_{x,\nu}\cos(\phi_{x} - \phi_{x+\hat{\nu}} -
iN_{\tau}\mu\delta_{\tau,N_{\tau}}\delta_{\nu,0}).
\end{equation}
We have also carried out simulations with this action and confirmed the
results obtained with the original formulation. The sole exception was the
largest $\beta$ value ($\beta=0.7$), where the original action missed the
Roberge-Weiss transition, while the final-time-slice formulation located
it without problems.
The severity of the sign problem is conventionally
(see e.g.\ Ref.\ \cite{deForcrand:2010ys})
estimated by the
expectation value of the phase factor $e^{i\varphi}=e^{-S}/|e^{-S}|$ in
the phase quenched theory, i.e.\ in the theory where only the real
part of the action (\ref{eq:action}) is included in the Boltzmann weight.
In this case, the phase quenched theory is the anisotropic XY model, with
the action
\begin{equation}
S_{\rm pq} = -\sum_{x,\nu} \beta_\nu\cos(\phi_x-\phi_{x+\hat\nu}),
\end{equation}
where $\beta_0=\beta\cosh\mu$, and $\beta_{1,2}=\beta$.
\section{World line formulation}
\label{sec:wl}
\setcounter{equation}{0}
The advantage of the XY model is that it can be formulated without a sign
problem by an exact rewriting of the partition function in terms of world
lines \cite{Chandrasekharan:2008gp,Banerjee:2010kc}.\footnote{The world
line formulation has of course a long history in lattice gauge theory, see
e.g.\ Ref.\ \cite{Banks:1977cc}. Recent work includes Refs.\
\cite{Wenger:2008tq,deForcrand:2009dh,Wolff:2009kp}. For a review, see Ref.\
\cite{Chandrasekharan:2008gp}.
}
Moreover, this dual formulation can be simulated efficiently with a worm
algorithm \cite{worm,Banerjee:2010kc}, which allows us to compare the
results obtained with complex Langevin dynamics with those from the world
line approach. We briefly repeat some essential elements of the world line
formulation and refer to Ref.\ \cite{Banerjee:2010kc} for more details.
The partition function can be rewritten using the identity
\begin{equation}
e^{\beta\cos\phi} = \sum_{k=-\infty}^{\infty}I_{k}(\beta)e^{ik\phi},
\end{equation}
where $I_{k}(\beta)$ are the modified Bessel functions of the first kind.
Using this replacement and integrating over the fields, the partition
function is written as
\begin{equation}
Z = \int D\phi \, e^{-S} =
\sum_{[k]}\prod_{x,\nu} I_{k_{x,\nu}}(\beta)e^{k_{x,\nu}\mu\delta_{\nu,0}}
\delta\left(\sum_{\nu}\left[ k_{x,\nu} - k_{x-\hat{\nu},\nu} \right] \right) .
\end{equation}
The sum over $[k]$ indicates a sum over all possible world line
configurations. Since $\langle S\rangle = -\beta\frac{\partial \ln
Z}{\partial \beta}$, the action can be computed from
\begin{equation}
\langle S \rangle = -\beta\left\langle
\sum_{x,\nu}\left[
\frac{I_{k_{x,\nu}-1}(\beta)}{I_{k_{x,\nu}}(\beta)} -
\frac{k_{x,\nu}}{\beta}
\right]\right\rangle_{\rm wl},
\end{equation}
where the brackets denote the average over world line configurations. To
compute this average, we have implemented the worm algorithm, following
Ref.\ \cite{Banerjee:2010kc}. We note here, amusingly, that the world line
formulation has a sign problem at imaginary chemical potential.
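For illustration, this estimator is a one-liner; the following sketch of our own uses scipy's modified Bessel function and holds for signed integer $k$, since the Bessel recurrences (and $I_{-k}=I_k$) are valid for all integer orders:
\begin{verbatim}
import numpy as np
from scipy.special import iv

def action_estimate(k, beta):
    # k: array of all link variables k_{x,nu} of one configuration
    return -beta * np.sum(iv(k - 1, beta) / iv(k, beta) - k / beta)
\end{verbatim}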
Inspired by Ref.\ \cite{Langelage:2010yn},
we have also studied a (low-order) strong-coupling expansion of this
model, using
\begin{equation}
I_k(2x) = \frac{x^k}{k!}\left(
1 + \frac{x^2}{k+1}
+ \frac{x^4}{2(k+2)(k+1)} +\ldots\right).
\end{equation}
At strong coupling the chemical potential cancels in most world lines,
except when the world line wraps around the temporal direction. At leading
order in the strong-coupling expansion, it then appears in the combination
$(\frac{1}{2}\beta e^\mu)^{N_\tau}$. In the thermodynamic limit it therefore contributes only
when $\frac{1}{2}\beta e^\mu \geq 1$. Hence a simple strong-coupling estimate
for the critical coupling at nonzero $\mu$ is given by
\begin{equation}
\beta_c(\mu) = 2e^{-\mu}.
\end{equation}
The $\mu$-independence at small $\beta$ and $\mu$ is known as the Silver
Blaze feature in QCD \cite{Cohen:2003kd}.
The partition function is expressed in terms of the free energy density
$f$ as $Z=\exp(-\Omega f)$. A strong-coupling expansion to order $\beta^4$
on a lattice with $N_{\tau}>4$ yields
\begin{equation}
f = -\frac{3}{4}\beta^2-\frac{21}{64}\beta^4 + {\cal
O}(\beta^6),
\end{equation}
and hence
\begin{equation}
\label{eqSsc}
\langle S\rangle/\Omega =
-\frac{3}{2}\beta^2 - \frac{21}{16}\beta^4 + {\cal O}(\beta^6).
\end{equation}
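Evaluating this truncated series is immediate and provides a simple cross-check deep in the strong-coupling regime (the resulting numbers are quoted again in Sec.\ \ref{sec:comparison}):
\begin{verbatim}
S_density = lambda beta: -1.5 * beta**2 - 21.0/16.0 * beta**4
print(S_density(0.2))   # -0.0621
print(S_density(0.3))   # -0.1456
\end{verbatim}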
In the phase quenched theory we find
\begin{equation}
f_{\rm pq} =
-\frac{1}{4}\beta^2\left(2+\cosh^2\mu\right)
-\frac{1}{64}\beta^4\left(14+8\cosh^2\mu-\cosh^4\mu\right)
+ {\cal O}(\beta^6).
\end{equation}
We can now estimate the severity of the sign problem at strong coupling.
The average phase factor takes the standard form,
\begin{equation}
\langle e^{i\varphi}\rangle_{\rm pq} = \frac{Z}{Z_{\rm pq}} =
\exp\left[-\Omega\Delta f\right],
\;\;\;\;\;\;\;\;
\Delta f = f-f_{\rm pq},
\end{equation}
where in this case
\begin{equation}
\Delta f =
\frac{1}{4}\beta^2\left(\cosh^2\mu-1\right)
+\frac{1}{64}\beta^4\left(\cosh^2\mu-1\right)\left(7-\cosh^2\mu\right)
+ {\cal O}(\beta^6).
\end{equation}
On a finite lattice and for small chemical potential we therefore find
the sign problem to be mild in the strong-coupling limit, since the volume
factor is balanced by $\beta^2\mu^2/4\ll 1$.
\section{Comparison}
\label{sec:comparison}
\setcounter{equation}{0}
\begin{figure}[htp]
\centering
\subfigure[$\beta=0.7$]{
\includegraphics[width=0.47\textwidth]{act_b0.7_x8.eps}
}
\subfigure[$\beta=0.6$]{
\includegraphics[width=0.47\textwidth]{act_b0.6_x8.eps}
}
\subfigure[$\beta=0.5$]{
\includegraphics[width=0.47\textwidth]{act_b0.5_x8.eps}
}
\subfigure[$\beta=0.4$]{
\includegraphics[width=0.47\textwidth]{act_b0.4_x8.eps}
}
\subfigure[$\beta=0.3$]{
\includegraphics[width=0.47\textwidth]{act_b0.3_x8.eps}
}
\subfigure[$\beta=0.2$]{
\includegraphics[width=0.47\textwidth]{act_b0.2_x8.eps}
}
\caption{
Real part of action density $\langle S\rangle/\Omega$ as a function of
$\mu^2$ on a lattice of size $8^3$, using complex Langevin dynamics and
the world line formulation at real $\mu$ ($\mu^2>0$) and real Langevin
dynamics at imaginary $\mu$ ($\mu^2<0$). The vertical lines on the left
indicate the Roberge-Weiss transitions at $\mu_{\rm I}=\pi/8$.
}
\label{fig:action-plots}
\end{figure}
We start by assessing the applicability of complex Langevin dynamics for this
model at small chemical potential. In this case we can use continuity
arguments to compare observables at real and imaginary chemical potential.
In Fig.\ \ref{fig:action-plots} the real part of the action density is
shown as a function of $\mu^2$, for several values of $\beta$: from the
ordered phase at large $\beta$ to the disordered phase at low $\beta$. We
observe that at the highest values of $\beta$ this observable is
continuous across $\mu^2=0$, which is a good indication that complex
Langevin dynamics works well in this region. The cusp at
$\mu_{\rm I}=\pi/N_\tau$ (corresponding to
$\mu^2=-0.154$) reflects the Roberge-Weiss transition.
At lower $\beta$, however, we observe that the action density is no
longer continuous: this is interpreted as a breakdown of complex Langevin
dynamics.
In order to verify this, Fig.\ \ref{fig:action-plots} also contains the
expectation values of the action density found using the worm algorithm in
the world line formalism for real $\mu$. As expected, in this case the
action density is continuous across $\mu^2=0$ for all values of $\beta$,
confirming the interpretation given above.
We have verified that the jump in the action density at lower $\beta$ is
independent of the lattice volume. We have also verified that the
discrepancy at $\mu^2=0$ between real Langevin dynamics and the world line
result (at e.g.\ $\beta=0.4$) is due to the finite Langevin stepsize.
For small $\beta$, the numerical results found with the worm algorithm are
consistent with those derived analytically in the strong-coupling limit
above. The expectation value of the action density is $\mu$ independent
and hence the Roberge-Weiss periodicity is smoothly realized. Using Eq.\
(\ref{eqSsc}), we also find quantitative agreement: in the strong-coupling
expansion $\langle S\rangle/\Omega = -0.0621 +{\cal O}(10^{-4})$ for $\beta=0.2$
and $-0.145 +{\cal O}(10^{-3})$ for $\beta=0.3$.
As discussed above, for the parameter values and lattice sizes used here
the sign problem is not severe: taking
$\mu^2=0.1$ and $\beta=0.2$, we find that
\begin{equation}
\Omega\Delta f \approx \Omega \frac{\beta^2\mu^2}{4} \approx 0.51,
\;\;\;\;\;\;\;\;\;\;\;\;
\langle e^{i\varphi}\rangle_{\rm pq} \approx 0.60.
\end{equation}
We take this as a first indication that the observed breakdown is not due
to the presence of the sign problem, especially since complex Langevin
dynamics has been demonstrated to work well in other models where the sign
problem is severe \cite{Aarts:2008wh,Aarts:2009hn}.
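These numbers follow directly from the leading strong-coupling term, as a quick check of our own confirms:
\begin{verbatim}
import math
Omega, beta, mu2 = 8**3, 0.2, 0.1
ODf = Omega * beta**2 * mu2 / 4.0   # leading term of Omega*Delta_f
print(ODf, math.exp(-ODf))          # 0.512  0.599
\end{verbatim}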
\begin{figure}[t]
\centering
\includegraphics[width=0.85\textwidth]{percent.eps}
\caption{Colour plot indicating the relative difference $\Delta S$
between the expectation value of the action density obtained with complex
Langevin dynamics and in the world line formulation, see Eq.\
(\ref{eq:relS}). Also shown is the phase boundary $\beta_c(\mu)$ between
the ordered (large $\beta$) and disordered (small $\beta$) phase
\cite{Banerjee:2010kc}.
}
\label{fig:percent}
\end{figure}
To probe the reliability of complex Langevin dynamics for larger
values of $\mu$, we have computed the action density for a large number
of parameter values in the $\beta-\mu$ plane. Our findings are summarized
in Fig.\ \ref{fig:percent}, where we show the relative difference between
the action densities obtained with complex Langevin (cl) and in the world
line formulation (wl), according to
\begin{equation}
\label{eq:relS}
\Delta S = \frac{\langle S\rangle_{\rm wl} - \langle S\rangle_{\rm cl}}{\langle
S\rangle_{\rm wl}}.
\end{equation}
Also shown in this figure is the phase transition line $\beta_c(\mu)$,
taken from Ref.\ \cite{Banerjee:2010kc}. We observe a clear correlation
between the breakdown of Langevin dynamics and the phase boundary: complex
Langevin dynamics works well inside the ordered phase, but breaks
down in the boundary region and in the disordered phase.
The largest deviation around $\mu=2$ is due to the Silver Blaze effect:
the difference between the action density found with complex Langevin
dynamics and the correct $\mu$-independent action density is maximal just
before crossing over to the other phase, where the agreement improves
quickly.
\section{Diagnostics}
\label{sec:diagnostics}
\setcounter{equation}{0}
In this section we attempt to characterize the results presented above in
terms of properties of complex Langevin dynamics and the distribution
$P[\phi^{\rm R}, \phi^{\rm I}]$ in the complexified field space, see Eq.\
(\ref{eqP}). We suppress Langevin time dependence, since we always
consider the quasi-stationary regime, i.e.\ the initial part of the
evolution is discarded (we considered Langevin times up to $\vartheta \sim
2\times 10^4$). Our aim is to argue that the discrepancy at small $\beta$
is introduced by complex Langevin dynamics rather than by the presence of
a chemical potential and hence not due to the sign problem.
A first test of the validity of complex Langevin dynamics is to compare
simulations at $\mu=0$ using a cold start, i.e.\ with $\phi^{\rm I}=0$
initially, and a hot start in which $\phi^{\rm I}$ is taken from a Gaussian
distribution.\footnote{The real components $\phi^{\rm R}$ are taken from a
Gaussian distribution always.} When $\mu=0$, a cold start corresponds to
real Langevin dynamics. In the case of a hot
start, however, the fields lie immediately in the complexified space and
so the dynamics is complexified. Comparison of results obtained with these
two initial ensembles gives insight into the inner workings of
complex Langevin dynamics.
We have computed the expectation value of the action density at $\mu=0$
using both a hot and a cold start. We found them to agree at large
$\beta$, despite the fact that the imaginary components of the field are
initialised randomly. However, when $\beta\lesssim 0.5$, they disagree.
Moreover, the result from the cold start agrees with the one obtained in
the world line formulation. One is therefore led to conclude that when
$\mu=0$ the imaginary components $\phi^{\rm I}$ are driven to zero (more
precisely, to a constant value)
at large
$\beta$ but are not constrained at small $\beta$. In other words the
drift terms are not capable of restoring the reality of the dynamics. It
is tempting to relate this to being in (or close to) the disordered
phase. We note that it cannot be understood from the classical fixed
point structure, since this is independent of $\beta$. We also remark
that the dynamics at small $\beta$ resembles Langevin dynamics with
complex noise ($N_{\rm I}>0$) \cite{Aarts:2009uq}, where the trajectories are
kept in the complexified field space by the stochastic kicks on
$\phi^{\rm I}$ (rather than by the drift terms, as is the case here).
In terms of the distribution $P[\phi^{\rm R}, \phi^{\rm I}]$, these findings
imply that $P[\phi^{\rm R}, \phi^{\rm I}]\sim e^{-S}\delta(\phi^{\rm I})$ at large
$\beta$, but not at small $\beta$. This can be further investigated by
studying the width of the distribution in the imaginary
direction,\footnote{The mean value $\langle\phi^{\rm I}\rangle=0$; in the large
$\beta$ phase, this requires averaging over a large number of initial
conditions.}
\begin{equation}
\left\langle \left(\Delta\phi^{{\rm I}}\right)^2 \right\rangle =
\left\langle \frac{1}{\Omega}\sum_x \left(\phi_{x}^{\rm I}\right)^2
\right\rangle
- \left\langle \frac{1}{\Omega}\sum_x \phi_{x}^{\rm I} \right\rangle^2.
\end{equation}
When $\mu=0$ the width should vanish, while when turning on $\mu$ one may
expect it to increase smoothly. The results are shown in
Fig.~\ref{fig:imag2}. For the larger $\beta$ values this is exactly what
is observed: the width increases smoothly from zero. For the smaller
$\beta$ values, however, we observe that the width is nonzero even when
$\mu=0$ (when using a hot start), and remains large for nonzero $\mu$. At
larger values of $\mu$ the width is driven again towards zero and
agreement with the world line results improves, see Fig.\
\ref{fig:percent}. We remark here that it is possible that different
distributions (with different widths) yield the same result for
observables. This is what is theoretically expected in the presence of
complex noise ($N_{\rm I}\geq 0$) \cite{Aarts:2009uq} and can be seen
analytically in Gaussian models with complex noise, where a continuous
family of distributions $P[\phi^{\rm R},\phi^{\rm I};N_{\rm I}]$ all yield the same
result for observables, independent of $N_{\rm I}$, even though the width of
these distributions is nonzero and increases with $N_{\rm I}$ \cite{gert}. In
the case we study here, however, we find that the failure of complex
Langevin dynamics in the disordered phase is correlated with the spread of
the distribution $P[\phi^{\rm R},\phi^{\rm I}]$ in the noncompact direction. We
conclude that a relatively narrow distribution, with a smoothly increasing
width, is required. We note again that this resembles observations made in
simulations of non-Gaussian models with complex noise
\cite{Aarts:2009uq,seiler}.
\begin{figure}[t]
\centering
\includegraphics[width=0.47\textwidth]{img2-i2_x10.eps}
\includegraphics[width=0.47\textwidth]{img2-i2_x8.eps}
\caption{Width of the distribution $P[\phi^{\rm R},\phi^{\rm I}]$ in the
imaginary direction for various values of $\beta$
as a function of $\mu^2$ on a $10^3$ lattice (left) and,
for larger $\mu$, as a function of $\mu$ on a $8^3$ lattice (right).
}
\label{fig:imag2}
\end{figure}
To investigate the interplay between (the width of) the distribution and
observables, we express expectation values as
\begin{equation}
\langle A[\phi^{\rm R}, \phi^{\rm I}] \rangle = \frac{1}{Z}\int D\phi^{\rm R}
D\phi^{\rm I}\, P[\phi^{\rm R}, \phi^{\rm I}]A[\phi^{\rm R}, \phi^{\rm I}],
\end{equation}
with
\begin{equation}
Z = \int D\phi^{\rm R} D\phi^{\rm I}\, P[\phi^{\rm R}, \phi^{\rm I}].
\end{equation}
In general the operator $A$ is not required to be holomorphic, i.e.\ a
function of $\phi^{\rm R}+i\phi^{\rm I}$, since this allows more
insight into the properties of the distribution.\footnote{Of course
only holomorphic functions correspond to observables in the original
theory.}
The distribution of an operator $A$ can then be defined according to
\begin{equation}
\langle A \rangle = \int dA\, P(A)A = \frac{1}{Z}\int D\phi^{\rm R} D\phi^{\rm I}\,
P[\phi^{\rm R}, \phi^{\rm I}]A[\phi^{\rm R}, \phi^{\rm I}],
\end{equation}
where
\begin{equation}
P(A) = \frac{1}{Z}\int D\phi^{\rm R} D\phi^{\rm I}\, P[\phi^{\rm R}, \phi^{\rm I}]
\delta(A - A[\phi^{\rm R}, \phi^{\rm I}]),
\end{equation}
with the normalization
\begin{equation}
\int dA \, P(A) = 1.
\end{equation}
Distributions $P(A)$ can be constructed numerically, by sampling $A$ from
configurations generated by complex Langevin dynamics.
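Numerically this simply amounts to histogramming the sampled values; a minimal sketch of our own reads:
\begin{verbatim}
import numpy as np

def sampled_distribution(samples, bins=100):
    # normalized histogram estimate of P(A) from Langevin-time
    # samples of the observable A, measured after thermalization
    hist, edges = np.histogram(samples, bins=bins, density=True)
    centers = 0.5 * (edges[1:] + edges[:-1])
    return centers, hist
\end{verbatim}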
The distribution for the action density is shown in
Fig.~\ref{fig:act-dens}, comparing again a hot and cold start at $\mu=0$.
This figure supports the earlier claim that real and complex Langevin
dynamics agree at larger $\beta$ but disagree at smaller $\beta$. However, the reason
for failure is somewhat subtle. Na\"{\i}vely, one might expect a large
``tail'' caused by excursions in the complexified field space to affect the
expectation value but this does not appear to happen. Instead we find that
the entire distribution is shifted and becomes only slightly wider at
$\beta\lesssim 0.5$ when the hot start is used.
\begin{figure}[t]
\centering
\includegraphics[width=0.7\textwidth]{prob_act_mu0.0_x8.eps}
\caption{Distribution of action density $S/\Omega$ for various values of
$\beta$ at $\mu=0$ on a $8^3$ lattice, using a hot and a cold start.}
\label{fig:act-dens}
\end{figure}
\begin{figure}[!p]
\centering
\subfigure[$\beta=0.7$]{
\includegraphics[width=0.47\textwidth]{prob_kmax_b0.7_x8.eps}
}
\subfigure[$\beta=0.6$]{
\includegraphics[width=0.47\textwidth]{prob_kmax_b0.6_x8.eps}
}
\subfigure[$\beta=0.5$]{
\includegraphics[width=0.47\textwidth]{prob_kmax_b0.5_x8.eps}
}
\subfigure[$\beta=0.4$]{
\includegraphics[width=0.47\textwidth]{prob_kmax_b0.4_x8.eps}
}
\subfigure[$\beta=0.3$]{
\includegraphics[width=0.47\textwidth]{prob_kmax_b0.3_x8.eps}
}
\subfigure[$\beta=0.2$]{
\includegraphics[width=0.47\textwidth]{prob_kmax_b0.2_x8.eps}
}
\caption{Distribution of $K^{\rm max}/(6\beta)$ at $\mu=0$ on a $8^3$
lattice
using a hot and a cold start.
}
\label{fig:prob-kmax}
\end{figure}
\begin{figure}[!p]
\centering
\subfigure[$\beta=0.7$]{
\includegraphics[width=0.47\textwidth]{prob_kmax_b0.7_mu0.1_x8.eps}
}
\subfigure[$\beta=0.6$]{
\includegraphics[width=0.47\textwidth]{prob_kmax_b0.6_mu0.1_x8.eps}
}
\subfigure[$\beta=0.5$]{
\includegraphics[width=0.47\textwidth]{prob_kmax_b0.5_mu0.1_x8.eps}
}
\subfigure[$\beta=0.4$]{
\includegraphics[width=0.47\textwidth]{prob_kmax_b0.4_mu0.1_x8.eps}
}
\subfigure[$\beta=0.3$]{
\includegraphics[width=0.47\textwidth]{prob_kmax_b0.3_mu0.1_x8.eps}
}
\subfigure[$\beta=0.2$]{
\includegraphics[width=0.47\textwidth]{prob_kmax_b0.2_mu0.1_x8.eps}
}
\caption{As in the previous figure, for $\mu=0.1$.}
\label{fig:prob-kmax2}
\end{figure}
Finally, the observed difference at large and small $\beta$ also appears
prominently in the actual dynamics, i.e.\ in the drift
terms. We have analyzed the maximal force $K^{\rm max}$ appearing in the
adaptive stepsize algorithm. In the case of real Langevin dynamics, the
drift terms are limited by an upper bound of $K^{\rm max} \le 6\beta$. In
the complexified space there is no upper limit and the drift terms can in
principle become several orders of magnitude larger~\cite{Aarts:2009dg}.
The distribution of $K^{\rm max}$ is plotted at $\mu=0$ with hot and cold
starts in Figure~\ref{fig:prob-kmax}. In the large $\beta$ phase, the
distributions appear identical, with $K^{\rm max}\le 6\beta$. This is
consistent with the conclusion reached above. In the low $\beta$ phase the
distributions are dramatically different: in the complexified dynamics,
triggered by the hot start, much larger forces appear. The distributions
are no longer peaked but very broad with a long tail (note the horizontal
logarithmic scale). At $\beta=0.5$ we observe interesting crossover
behaviour: both the peaked distribution bounded by $K^{\rm max} = 6\beta$
and a decaying ``tail'' characteristic of small $\beta$ distributions
appear.
To study the two possible distributions of $K^{\rm max}$ further, we show
in Fig.~\ref{fig:prob-kmax2} the same results but now with $\mu=0.1$. In
this case the hot and cold start yield identical distributions, since both
simulations are complexified due to the nonzero chemical potential. The
striking difference between the distributions at large and small $\beta$
is still present. At large $\beta$ the force can occasionally be large,
making the use of an adaptive stepsize necessary. However, the typical
value is still determined by the maximal value for real Langevin dynamics,
i.e.\ $6\beta$. At small $\beta$ this part of the distribution is
completely gone and is replaced by a broad distribution at much larger
$K^{\rm max}$ values. Again at $\beta=0.5$ we observe crossover behaviour
with both features present. These results are qualitatively the same on
larger volumes.
Let us summarize the findings of this section. Complex Langevin dynamics
works well at large $\beta$ in the ordered phase. The distribution
$P[\phi^{\rm R},\phi^{\rm I}]$ in the complexified field space is relatively
narrow in the noncompact direction and Langevin simulations started with
hot and cold initial conditions agree. The drift terms do occasionally
become large but the typical size is set by the maximal value for real
Langevin evolution.
At small $\beta$, in or close to the disordered phase, the distribution
is much wider in the $\phi^{\rm I}$ direction. Typical drift terms are
much larger, with a wide spread in the distribution. At $\mu=0$
complexified dynamics does not reduce to real dynamics. There is a
strong correlation with the phase the theory is in (see Fig.\
\ref{fig:percent}), but not with the sign problem, since these
observations also hold at $\mu=0$ and are independent of the lattice
volume. Moreover, for the lattice volumes we consider the sign problem is
not severe.
We emphasize that a firm conclusion can only be drawn after all the
findings presented above are combined consistently, while the observation
of e.g.\ large drift terms or a large width by itself would clearly be
insufficient.
\section{Conclusion}
\label{sec:conclusion}
\setcounter{equation}{0}
We have studied the applicability of complex Langevin dynamics to simulate
field theories with a complex action due to a finite chemical potential,
in the case of the three-dimensional XY model. Using analytical
continuation from imaginary chemical potential and comparison with the
world line formulation we found that complex Langevin dynamics yields
reliable results at larger $\beta$ but fails when $\beta\lesssim 0.5$ at
small chemical potential. We established that the region of failure is
strongly correlated with the part of the phase diagram which corresponds
to the disordered phase. We have verified that these conclusions do not
depend on the lattice volume. Failure at small $\beta$ values was also
observed a long time ago in the case of SU(3) field theory in the presence
of static charges \cite{Ambjorn:1986fz}.
Due to the use of an adaptive stepsize algorithm, no runaways or
instabilities have been observed. The results we found in the disordered
phase are therefore interpreted as convergence to the wrong result. To
analyze this, we have studied properties of the dynamics and field
distributions in the complexified field space. For the smaller $\beta$
values, we found that complexified dynamics does not reduce to real
dynamics when $\mu=0$. Furthermore, for the system sizes and parameter
values we used, the sign problem is not severe. We conclude therefore that
the failure is not due to the presence of the sign problem, but rather due
to an incorrect exploration of the complexified field space by the
Langevin evolution. The forces appearing in the stochastic process behave
very differently at large and small $\beta$. Interestingly, in the
crossover region at $\beta\approx 0.5$, the dynamics shows a combination
of large and small $\beta$ characteristics. It would be interesting to
further understand this, e.g.\ in terms of competing (nonclassical) fixed
points.
We found that several features resemble those found in simulations of
simple models with complex noise \cite{Aarts:2009uq,seiler}. Our hope is
therefore that a detailed study of simple models with complex noise can
shed light on the features observed here with real noise. Such an
investigation is currently in progress.
\vspace*{0.5cm}
\noindent
{\bf Acknowledgments.}
\noindent
We thank Chris Allton and Simon Hands, Debasish Banerjee and Shailesh
Chandrasekharan, Philippe de Forcrand, and especially Kim Splittorff,
Erhard Seiler and Ion-Olimpiu Stamatescu for discussions. We thank the
Blue C Facility at Swansea University for computational resources. This
work is supported by STFC.
\section{Introduction}
\label{sec:intro}
Understanding kinetic dissipation in magnetized plasma is essential for explaining the physical origin and evolution of the solar wind. Observationally, the power spectral density (PSD) of the magnetic field fluctuations is commonly divided into two regimes separated by a spectral break. The lower frequencies, corresponding to larger physical scales, are associated with magnetohydrodynamic (MHD) fluctuations, with an inertial range of turbulence similar to the Kolmogorov $f^{-5/3}$ power-law spectrum. In the high frequency range, the power spectrum is observed to steepen, with a spectral index between $-2$ and $-4$ \citep{Bruno2013, Kiyani2015, Chen2016}. These frequencies are thought to correspond to scales at which the MHD approximation is no longer valid and kinetic effects of the protons should be considered \citep{Alexandrova2009}. However, the specific processes occurring in the kinetic range have not been determined, with significant debate regarding the nature of the fluctuations and the relevant non-linear processes \citep{Howes2017Ph}.
The steepening of the spectral index suggests that the energy cascaded to the end of the MHD range may be gradually dissipated or may develop into a dispersive kinetic turbulence. Observationally, the solar wind expands non-adiabatically, indicating that {\em{in-situ}} heating must occur. Dissipation of the inertial range turbulence is one source of energy capable of proton heating, though there are multiple mechanisms which may lead to dissipation \citep{Marsch2006}. Kinetic Alfv\'en waves (KAW) may dissipate via Landau damping near the scale of the proton gyroradius $\rho_p = v_{th,p}/\Omega_p$, where $v_{th,p}$ is the proton thermal velocity and $\Omega_p=eB/m_p$ is the proton gyrofrequency, with $e$ the elementary charge, $B$ the mean magnetic field and $m_p$ the proton mass \citep{Leamon1999a,Schekochihin2009}. Stochastic proton heating is also a possible dissipation mechanism at scales near $\rho_p$: the ions can be heated perpendicularly when the amplitude of the gyro-scale fluctuations is large \citep{Chandran2010,Bourouaine2013,Vech2017,martinovic2019radial}. The proton inertial length $d_p = v_A/\Omega_p$ is another important scale associated with dissipation, where $v_A = B/\sqrt{\mu_0n_pm_p}$ is the Alfv\'en speed, with $\mu_0$ the vacuum magnetic permeability and $n_p$ the proton density. The proton inertial length corresponds to the scale at which electrons can decouple from protons, and it may limit the size of small-scale current sheets formed through non-linear turbulent processes, which in turn may dissipate energy through magnetic reconnection \citep{Leamon2000,Dmitruk2004,Vasquez2007}.
Alfv\'en waves with quasi-parallel propagation at relatively higher frequency may dissipate through cyclotron resonance damping. For parallel propagating Alfv\'en waves, the damping will occur at the (parallel) wavenumber corresponding to the cyclotron resonance $k_c=\Omega_p/(v_A+v_{th,p})$ \citep{Leamon1998}. Studies of anisotropy in solar wind turbulence using the method introduced by \citet{Cho2000} and \citet{Horbury2008} suggest that the inertial range is highly anisotropic near the kinetic break with $k_\perp \gg k_\parallel$, such that most of the energy is contained in perpendicular fluctuations, which do not have parallel wavenumbers resonant with parallel cyclotron waves \citep{Chen2010b}. The 2D distribution PSD($k_\parallel$, $k_\perp$), as reconstructed with the tomography method based on the Fourier projection-slice theorem, reveals the dominance of obliquely propagating Alfv\'enic fluctuations, extending the power ridge to higher $k_\perp$ and also higher $k_\parallel$, which indicates the existence of oblique Alfv\'en-cyclotron waves \citep{He2013, Yan2016}.
Alternatively, the change of the spectral slope may indicate a transition from a cascade of non-dispersive Alfv\'en waves to a cascade of dispersive kinetic Alfv\'en waves around the scale $k_\perp \rho_p \sim 1$ \citep{Bale2005a,Howes2008,Schekochihin2009}. It has been additionally suggested that a cascade of whistler modes or magnetosonic waves may develop at kinetic scales \citep{Stawicki2001,PeterGary2009}. Furthermore, the inclusion of the Hall term in the MHD approximation has been proposed as the source of the break at scales $d_pk_\perp\sim 1$ \citep{Galtier2006}. \citet{Mallet2017} and \citet{Loureiro2017} suggest that the inertial-range turbulence could generate sheet-like turbulent structures, which could be disrupted by reconnection below a disruption scale intermediate to $d_p$ and $\rho_p$.
Given the number of potential mechanisms which generate a spectral break, and the relatively narrow range in the physical scales predicted, distinguishing these various mechanisms using empirical measurements has proven a difficult task \citep{markovskii2008statistical}. Furthermore, these different physical processes may occur simultaneously in the solar wind, complicating efforts to quantify their relative contributions \citep{verscharen2019multi}.
Many previous studies have explored the transition from inertial to kinetic scale physical processes through both observations and simulations, although no consensus has been reached. Observationally, the mechanisms which lead to spectral steepening may be constrained by investigating the dependence of the spectral break frequency on various plasma parameters. For example, the $\beta_p$-dependence of the break scale has been studied at 1 AU using \textit{WIND} data, where $\beta_p = \rho_p^2/d_p^2$ is the ratio of proton thermal pressure to magnetic pressure. In particular, \citet{Chen2014} found the break frequency ($f_b$) close to $f_{d}$ at $\beta_p \ll 1$ and close to $f_{\rho}$ at $\beta_p\gg 1$, where $f_{d}=v_{sw}/(2\pi d_p)$ and $f_{\rho} = v_{sw}/(2\pi \rho_p)$ are the frequencies corresponding to the spatial scales $d_p$ and $\rho_p$ in the spacecraft frame under the Taylor Hypothesis, which approximates the observed time evolution of fluctuations in the spacecraft frame as spatial structures advected at the solar wind speed $v_{sw}$. Numerical 2D-hybrid simulations found a similar $\beta_p$ dependence \citep{Franci2016}. \citet{Wang2018} found that $f_b/f_{d}$ is statistically independent of $\beta_{p}$ for plasma with $0.1 < \beta_{p} < 1.3$. \citet{Woodham2018} and \citet{Duan2018} suggest that the spectral break is best associated with the proton cyclotron scale $f_c=v_{sw}k_c/(2\pi)$. \citet{Vech2018} proposed the break may be caused by magnetic reconnection at a disruption scale intermediate to $d_p$ and $\rho_p$ predicted in \citet{Mallet2017}. The spectral break is found to be independent of $\theta_{\rm{VB}}$, the angle between solar wind velocity and magnetic field, indicating that the spectral break seems to be isotropic in wavenumber space \citep{Duan2018}. \citet{Duan2018} further proposed and illustrated that the breakdown of the magnetic frozen-in condition in wavenumber space, as a combination of dissipation and dispersion effects, could be a more isotropic explanation than dissipation or dispersion alone.
Several studies investigated the break scale at different heliocentric distances and its relation with plasma scales. \citet{Perri2010} suggested the break frequency did not show any remarkable evolution between 0.3 AU and 4.9 AU based on observations from \textit{MESSENGER} and \textit{Ulysses}. \citet{Bourouaine2012} also found the break frequency $f_b$ does not change significantly from 0.3 to 0.9 AU with \textit{Helios 2}, and that $f_b$ follows $f_d$ assuming a 2D turbulence model. \citet{Bruno2014} found the break moves to higher frequencies as the heliocentric distance decreases, finding agreement with the proton cyclotron resonance scale between 0.42 and 5.3 AU. While many previous studies have focused on the radial behavior of the spectral break in the fast solar wind, the scaling of the spectral break in the slow wind has not been investigated.
NASA's Parker Solar Probe (\textit{PSP}) provides a set of {\em{in-situ}} instruments capable of constraining the kinetic processes which contribute to heating and acceleration in the corona and nascent solar wind \citep{Fox2016,Bale2016,Kasper2016}. This manuscript provides a statistical analysis of the behavior of the proton-scale spectral break observed by \textit{PSP} between 0.17 AU and 0.63 AU, and its radial dependence in the slow solar wind. By measuring the radial dependence of the break we are able to compare the location of the spectral break with various physical scales under a range of plasma conditions, enabling an investigation into the mechanisms behind spectral steepening of the kinetic range.
\section{Data and Method}
\label{sec:data}
We analyze 26 days of data from the cruise phase of the second orbit of \textit{PSP}, from Mar 10, 2019 to Apr 5, 2019; data on Mar 16 were excluded as the time resolution of the magnetic field is not sufficient to resolve the spectral break. During this period, \textit{PSP} covered heliocentric distances between 0.63 AU (Mar 10) and 0.17 AU (Apr 5). Magnetic field measurements on \textit{PSP} are made by the FIELDS/MAG fluxgate magnetometer \citep{Bale2016}. Measurements of the solar wind speed, thermal speed and proton density by the SWEAP/SPC instrument are used to compute plasma scales \citep{Kasper2016}. Sample rates of FIELDS and SWEAP data vary between the different mission phases and encounters. Between March 10, 2019 and March 31, 2019, \textit{PSP} was in cruise phase with low cadence (MAG 9.2 Hz, SPC 0.036 Hz) sample rates. From March 31, 2019 to April 4, 2019 the mission was in encounter phase near perihelion, and higher cadence measurements were obtained (MAG 149 Hz, SPC 5 Hz). Figure \ref{fig:1} (a) shows an overview of the trajectory of \textit{PSP} in the rotating Carrington heliographic frame. For the majority of the orbit \textit{PSP} is in slow solar wind ($v_{sw}<$ 500 km/s); there are no intervals with average $v_{sw}>$ 500 km/s. Figure \ref{fig:1} (b) shows $\beta_p$ as a function of the heliocentric distance $r$. As the distance between \textit{PSP} and the Sun decreases, the proton plasma $\beta$ also decreases due to the increasing strength of the magnetic field; typically, $\beta_p<1$.
The trace power spectral density is estimated by applying a continuous moving-window transform to the vector magnetic field. The 26-day interval is divided into partially overlapping 10-minute segments, with the beginnings of adjacent segments 2.5 minutes apart (75\% overlap). A Hanning window is used to reduce spectral leakage in each segment. For each segment the power spectrum is taken as an ensemble average over five adjacent segments, so each PSD corresponds to 20 minutes of data.
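For concreteness, the following Python sketch implements this estimator; segment lengths, sampling rates and function names are our illustrative choices, not necessarily those of the actual pipeline.
\begin{verbatim}
import numpy as np

def segment_psd(b, fs):
    # trace PSD of one segment; b has shape (N, 3), fs in Hz
    n = b.shape[0]
    w = np.hanning(n)                     # Hanning window against leakage
    norm = fs * np.sum(w**2)              # periodogram normalisation
    psd = np.zeros(n // 2 + 1)
    for c in range(3):                    # trace = sum over components
        B = np.fft.rfft(w * (b[:, c] - b[:, c].mean()))
        psd += 2.0 * np.abs(B)**2 / norm  # one-sided spectrum
    return np.fft.rfftfreq(n, 1.0 / fs), psd

def averaged_psd(b, fs, seg=600.0, step=150.0, n_avg=5):
    # 10-min segments whose starts are 2.5 min apart, averaged in fives
    nseg, nstep = int(seg * fs), int(step * fs)
    psds = [segment_psd(b[s:s + nseg], fs)[1]
            for s in range(0, b.shape[0] - nseg + 1, nstep)]
    freqs = np.fft.rfftfreq(nseg, 1.0 / fs)
    avg = [np.mean(psds[i:i + n_avg], axis=0)
           for i in range(len(psds) - n_avg + 1)]
    return freqs, np.array(avg)
\end{verbatim}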
To locate the proton-scale spectral break, we employed the method of \citet{Bruno2014} and \citet{Wang2018}. Two frequency ranges at either end of the spectrum are a priori selected as the inertial (between 0.1 Hz and 0.5 Hz) and dissipation ranges. Table \ref{tab:1} highlights the range of frequencies for the dissipation spectra over the orbit. A least-squares linear fit of a power law in logarithmic space is performed on the data over each range. The break frequency $f_b$ is defined as the intersection of the two fitting lines. Because the range of spacecraft frequencies corresponding to the dissipation range changes with heliocentric distance, the range over which the fit is performed is varied throughout the orbit. Additionally, spectral flattening is observed when the amplitude of the turbulent fluctuations reaches the noise level of the MAG ($10^{-3}\sim 10^{-4}\ \rm{nT^2/Hz}$). Because of the decreasing strength of the fluctuations at larger distances, the noise floor is reached at lower frequencies in the cruise data.
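A sketch of this fitting step (the dissipation-range bounds follow Table~\ref{tab:1}; the function and variable names are ours):
\begin{verbatim}
import numpy as np

def find_break(freqs, psd, inertial=(0.1, 0.5), dissip=(2.0, 5.0)):
    # least-squares power-law fits over the two pre-selected ranges
    fits = []
    for lo, hi in (inertial, dissip):
        m = (freqs >= lo) & (freqs <= hi)
        fits.append(np.polyfit(np.log10(freqs[m]), np.log10(psd[m]), 1))
    (a1, c1), (a2, c2) = fits
    f_b = 10.0 ** ((c2 - c1) / (a1 - a2))  # intersection of the two lines
    return f_b, a1, a2                     # break frequency and slopes
\end{verbatim}
The acceptance cuts on the returned indices $\alpha_1$ and $\alpha_2$ are described below.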
Figure \ref{fig:2} shows examples of power density spectra at several distances with measured spectral indices and breaks. At larger distances, the spectral break shifts to lower spacecraft-frame frequency. The top three PSDs show a typical inertial range slope $-5/3<\alpha_1<-3/2$, and a dissipation range slope $\alpha_2\approx -4$. The spectrum from 0.62 AU does not show an obvious break between two power laws. Additionally, its inertial range spectral index is somewhat steeper than what is typically observed. This shape has been previously reported by \citet{Bruno2014b} in the slow wind. \citet{Bowen2018} demonstrates that the presence of steep magnetic spectra (i.e. $\alpha_1 \sim -2$) likely corresponds to observations of intermittency in the turbulent fluctuations.
We removed several intervals with spectral features peaked at ion scales, which result in a deviation from power-law distributions. The presence of these features likely indicates a secondary population of ion cyclotron waves \citep{Bowen2019}. To systematically control for effects from a secondary population of fluctuations, we only accept spectra that fall within a range of spectral indices statistically consistent with known turbulent scalings, $-2.5<\alpha_1 <-1.2$ and $\alpha_1 > \alpha_2$. In total, 14820 intervals were obtained, with 10724 of them returning $\alpha_1$ and $\alpha_2$ within our constrained bounds. 5194 of these intervals have corresponding particle data. Mean values of $v_{sw}$, $v_{th,p}$, $n_p$ are computed over each interval, and $k_c$, $d_p$, $\rho_p$ and $\beta_p$ are calculated from these plasma data. We find that $\beta_p<1$ in 4479 intervals, and $\beta_p>1$ in 715 intervals.
Under the Taylor Hypothesis, the relation between the wavevector of the fluctuation $\mathbf{k}$ and the corresponding frequency $f$ in the spacecraft frame is $2\pi f = \mathbf{k}\cdot\mathbf{v}_{sw}$. Several simplifying assumptions can be made for the wavevector direction relative to the solar wind flow. If the fluctuations propagate along the solar wind direction, $2\pi f = k v_{sw}$. If the fluctuations propagate parallel to the mean magnetic field direction, $2\pi f = k v_{sw} \cos(\theta_{VB})$. If quasi-2D turbulence with dominant perpendicular fluctuations is assumed, then $2\pi f = k_\perp v_{sw} \sin(\theta_{VB})\cos(\phi)$, where $\phi$ is the angle between the wavevector and the ($\mathbf{v}_{sw}$, $\mathbf{B}$) plane \citep{Bourouaine2012}. \citet{Duan2018} found that the spectral break frequency is invariant with the magnetic field's orientation, suggesting that the approximation $2\pi f=kv_{sw}$ is appropriate. The corresponding frequencies for the physical scales are $f_c=v_{sw}k_c/(2\pi)$, $f_d=v_{sw}/(2\pi d_p)$, $f_\rho=v_{sw}/(2\pi \rho_p)$.
Because the Alfv\'en speed, the solar wind speed and the spacecraft speed are comparable, it is unclear whether the Taylor hypothesis is valid for \textit{PSP} observations during its perihelion \citep{Narita2013, Bourouaine2018, Bourouaine2019, Chhiber2019}. Recent work from \citet{Chaspis2019} suggests the Taylor hypothesis may not be applicable when \textit{PSP} is below 40 solar radii (0.19 AU). To verify our results against the assumption of the Taylor hypothesis, we repeat the analysis of the proton break scaling with the modified Taylor hypothesis $2\pi f^* = \mathbf{k}\cdot\mathbf{U}_{total}$ \citep{Klein2015}. Here $\mathbf{U}_{total}=\mathbf{v}_{sw}+\mathbf{v}_A-\mathbf{v}_{sc}$, and $\mathbf{v}_{sc}$ is the velocity of \textit{PSP}. The modified Taylor hypothesis assumes that the anti-sunward propagating fluctuations are approximately frozen into a frame with velocity $\mathbf{U}_{total}$, provided the fluctuations do not grow or damp significantly when passing over the spacecraft. The corresponding modified characteristic frequencies are $f_c^*=U_{total}k_c/(2\pi)$, $f_d^*=U_{total}/(2\pi d_p)$, and $f_\rho^*=U_{total}/(2\pi \rho_p)$, where $U_{total} = |\mathbf{U}_{total}|$. Figure \ref{fig:6} shows $U_{total}/v_{sw}$ for our intervals. The ratio is greater than 1 in almost all cases (97\%), making the modified characteristic frequencies larger and the corresponding ratios $f_b/f^*$ smaller, especially below 0.19 AU. The modified Taylor hypothesis should hold here, as outward-propagating fluctuations are dominant near perihelion \citep{Chen2019}.
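The characteristic frequencies entering this comparison follow directly from the averaged plasma moments; a sketch in SI units (constant values rounded, names ours):
\begin{verbatim}
import numpy as np

MU0, MP = 4e-7 * np.pi, 1.6726e-27
QE, KB = 1.6022e-19, 1.3807e-23

def char_freqs(B, n_p, T_p, u):
    # pass u = v_sw for the plain Taylor hypothesis, or u = |U_total|
    # for the modified (starred) frequencies
    omega_p = QE * B / MP                       # proton gyrofrequency [rad/s]
    v_a = B / np.sqrt(MU0 * n_p * MP)           # Alfven speed
    v_th = np.sqrt(2.0 * KB * T_p / MP)         # proton thermal speed
    k_c = omega_p / (v_a + v_th)                # cyclotron-resonance wavenumber
    f_c = u * k_c / (2.0 * np.pi)
    f_d = u * omega_p / (2.0 * np.pi * v_a)     # u / (2 pi d_p)
    f_rho = u * omega_p / (2.0 * np.pi * v_th)  # u / (2 pi rho_p)
    return f_c, f_d, f_rho
\end{verbatim}
With illustrative near-perihelion values ($B\simeq 50$ nT, $n_p\simeq 200\ {\rm cm^{-3}}$, $T_p\simeq 5\times10^5$ K, $v_{sw}\simeq 330$ km/s) this gives $f_d$ of a few Hz, consistent with the fitting ranges of Table~\ref{tab:1}.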
\section{Results}
\label{sec:result}
Figure \ref{fig:3}(a) shows the distribution of break frequency $f_b$ with heliocentric distance $r$. Figure \ref{fig:3}(b) shows the distribution of $f_b$ with $\beta_p$. The data are binned in a 20 $\times$ 20 grid in log-log space. There is large variation in $f_b$ and a clear radial dependence with a power law of $f_b \sim r^{-1.11\pm 0.01}$. A Pearson correlation coefficient is calculated with PCC($r,f_b$) = -0.81, and a Spearman correlation coefficient SCC($r,f_b$) = -0.84. This result is similar to the scalings in the fast solar wind suggested by \citet{Bruno2014}. This radial trend is also consistent with the outer-scale break of the PSD \citep{Chen2019}.
$f_b$ shows a weak dependence on $v_{sw}$, with PCC($v_{sw},f_b$) = 0.14 and SCC($v_{sw},f_b$) = 0.10. $f_b$ also decreases with $\beta_p$: PCC($\beta_p,f_b$) = -0.49, SCC($\beta_p,f_b$) = -0.51.
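The quoted slopes and correlation coefficients correspond to standard least-squares regression; a minimal sketch (our naming; we assume the PCC is evaluated on the logarithmed variables, consistent with the power-law fits):
\begin{verbatim}
import numpy as np
from scipy import stats

def loglog_fit(x, y):
    lx, ly = np.log10(x), np.log10(y)
    slope, intercept = np.polyfit(lx, ly, 1)  # y ~ x**slope
    pcc = stats.pearsonr(lx, ly)[0]           # linear correlation (log-log)
    scc = stats.spearmanr(x, y)[0]            # rank correlation
    return slope, 10.0**intercept, pcc, scc
\end{verbatim}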
To investigate the correlation between $f_b$ and physical plasma scales, we calculated the average $f_{\rho}$, $f_{d}$, and $f_{c}$ for each interval with particle data. Table \ref{tab:2} shows that $f_b$ is correlated with all of these scales to a similar degree. It is accordingly difficult to uniquely distinguish the scale which best represents the break frequency.
The ratios of $f_b$ to these characteristic frequencies are calculated and illustrated in Figure \ref{fig:4} (a), (b) and (c). The data are again binned in a 20 $\times$ 20 grid in log-log space. The average and the standard deviation inside each bin are illustrated with blue lines. The average and standard deviation of each ratio over all of the data are $0.87\pm 0.34$ ($f_b/f_{c}$), $0.56\pm 0.24$ ($f_b/f_{d}$) and $0.32\pm 0.22$ ($f_b/f_{\rho}$). The spectral break thus occurs nearest the cyclotron resonance frequency, and the average $f_b/f_c$ is the largest in each bin. $f_b/f_c$ and $f_b/f_d$ decrease as the distance becomes larger, while $f_b/f_\rho$ shows the opposite trend. Panels (d)-(f) show the ratios of the modified frequencies; we obtain the same qualitative result assuming the modified Taylor hypothesis.
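The binned averages shown as blue lines can be reproduced with, e.g. (names ours):
\begin{verbatim}
import numpy as np
from scipy.stats import binned_statistic

def binned_ratio(r, ratio, nbins=20):
    # mean and std of log10(f_b/f_x) in log-spaced bins of r (or beta_p)
    bins = np.logspace(np.log10(r.min()), np.log10(r.max()), nbins + 1)
    mean, edges, _ = binned_statistic(r, np.log10(ratio), 'mean', bins)
    std, _, _ = binned_statistic(r, np.log10(ratio), 'std', bins)
    centres = np.sqrt(edges[1:] * edges[:-1])  # geometric bin centres
    return centres, mean, std
\end{verbatim}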
Figure \ref{fig:5} shows the $\beta_p$ dependence of the ratios. The result is similar to \citet{Chen2014}: $f_b$ lies near $f_d$ ($f_b/f_d \approx 1$) where $\beta_p \ll 1$, while $f_b$ lies near $f_\rho$ ($f_b/f_\rho \approx 1$) where $\beta_p \gg 1$, and $f_b$ approaches $f_c$ ($f_b/f_c \approx 1$) for all $\beta_p$. The modified ratios show similar trends. The correlation coefficients are shown in Table \ref{tab:2}. Since $f_c=(1/f_d+1/f_\rho)^{-1}$, $f_c$ is always close to the smaller of $f_d$ and $f_\rho$, so our results cannot distinguish between these possibilities.
\section{Conclusion and Discussion}
\label{sec:conclusion}
We have investigated the radial and $\beta_p$ dependence of the observed proton-scale magnetic spectral break frequency $f_b$ in the slow solar wind over 0.17 AU $<r<$ 0.63 AU. Additionally, we have compared the break scale with the spacecraft frequencies corresponding to the cyclotron resonance, $f_c$, the proton gyroscale, $f_\rho$, and the proton inertial scale, $f_d$, over this range of heliocentric distance $r$. The results show that the break frequency follows a power law of $f_b \propto r^{-1.11}$. We find that the break frequency has mild correlation with all three plasma characteristic scales, and there is no clear statistical difference between the results from the plain and the modified Taylor hypothesis. However, $f_b/f_c$ is closest to unity over the full range of distances covered. Nevertheless, since the predicted break scales are typically only defined to order unity, it is difficult to distinguish them at the moderate values of $\beta_p$ observed by \textit{PSP} to date.
This work provides the first measurement of the radial scaling of the proton-scale break in the slow solar wind in the inner heliosphere down to 0.17 AU. The slow solar wind break manifests a radial dependence similar to the fast wind, with the spectral break occurring around the ion cyclotron resonance scale \citep{Bruno2014}. This suggests that cyclotron resonance may be an important process in the slow solar wind, similar to observations at 1 AU, although the anisotropy of the turbulence complicates a simple picture of parallel-wavenumber cyclotron damping of Alfv\'en waves.
The ratio $f_b/f_c$ approaches unity near the Sun, which may be due to the increased activity of the solar wind plasma close to the Sun generating ion cyclotron waves \citep{Bowen2019}. Given that $f_b/f_c$ is slightly below unity in the slow solar wind at 1 AU \citep{Woodham2018}, and that $f_b/f_c$ increases slightly with decreasing heliocentric distance in the slow solar wind, it appears natural for $f_b/f_c$ to approach unity near the Sun.
Considering that $f_b$ correlates with all three of $f_c$, $f_d$ and $f_\rho$, we cannot constrain the physical mechanisms which produce the spectral break. For instance, the observations of \citet{Vech2018} suggest that magnetic reconnection may disrupt the inertial cascade \citep{Mallet2017} at a disruption scale which has a similar scaling to the cyclotron resonance scale if proton and electron temperatures are similar. Due to our current lack of electron temperature measurements, we have not made any attempt to distinguish the disruption scale.
Near the Sun, the interpretation of the spectral break should be treated carefully. One reason is the possible failure of the Taylor hypothesis. Our application of the modified Taylor hypothesis from \citet{Klein2015} is only valid for outward-propagating fluctuations in turbulence dominated by outward-propagating components, and whether this modification remains applicable at future perihelia is unknown. Another reason is that large amplitude fluctuations of the magnetic field and proton bulk velocity are found to be prevalent near the Sun \citep{Bale2019, Kasper2019}. The generation and the role of these structures in solar wind turbulence are open questions. In this paper, these fluctuations are treated as part of the turbulent cascade; the behavior of the spectral break within these structures needs further elucidation.
As \textit{PSP} descends deeper into the heliosphere, we expect to study the break scale where the physical scales show better separation in spacecraft frequency. In addition to studying the spectral break, investigation into the dynamics of particles and waves at kinetic scales may constrain the process by which the spectrum steepens. Observational studies find that the kinetic fluctuations could be quasi-parallel ion cyclotron waves, quasi-perpendicular kinetic Alfv\'en waves, or a combination of both types at 1 AU \citep{He2011,He2012a, He2012, Salem2012, Klein2014, Zhao2017}. The behavior of the fluctuations near the break scale in the inner heliosphere needs a more comprehensive analysis. As evidence of magnetic reconnection and the accompanying turbulent enhancement has been found in the solar wind \citep{Gosling2004,Phan2006,He2018}, kinetic-scale fluctuations generated by reconnection are another possible explanation for the spectral break. The contribution of reconnection compared with other mechanisms requires quantitative clarification.
\begin{deluxetable*}{ccc}
\tablenum{1}
\tablecaption{The selected fitting frequency interval for the dissipation range\label{tab:1}}
\tablewidth{0pt}
\tablehead{
\colhead{Date} & \colhead{$r$ (AU)} & \colhead{Frequency (Hz)}
}
\startdata
Mar 10-11 & 0.60-0.63 & 0.8-1.4 \\
Mar 12-15 & 0.54-0.60 & 0.9-1.4 \\
Mar 17-19 & 0.47-0.52 & 0.9-1.5\\
Mar 20-24 & 0.37-0.47 & 1.2-2.2\\
Mar 25-28 & 0.28-0.37 & 1.5-2.5\\
Mar 29-30 & 0.23-0.28 & 1.5-3\\
Mar 31-Apr 5 & 0.17-0.23 & 2-5
\enddata
\end{deluxetable*}
\begin{deluxetable*}{cccc}
\tablenum{2}
\tablecaption{Summary of the correlation coefficients of various power-law fits}
\tablewidth{0pt}
\label{tab:2}
\tablehead{
\colhead{Parameter 1} & \colhead{Parameter 2} & \colhead{PCC}
&\colhead{SCC}}
\startdata
&$r$&-0.81&-0.84\\
$f_b$&$v_{sw}$ &0.11&0.10\\
&$\beta_{p}$&-0.45&-0.51\\
\cline{1-4}
&$f_c$ & 0.78 & 0.76 \\
$f_b$&$f_d$ &0.70 & 0.64\\
&$f_\rho$ & 0.69& 0.72\\
\cline{1-4}
& $f_b/f_c$ & -0.40 & -0.34\\
$r$ & $f_b/f_d$ & -0.61 & -0.63\\
& $f_b/f_\rho$ & 0.13 & 0.30\\
\cline{1-4}
& $f_b/f_c^*$ & -0.07 & -0.03\\
$r$ & $f_b/f_d^*$ & -0.40 & -0.39\\
& $f_b/f_\rho^*$ & 0.32 & 0.47\\
\cline{1-4}
& $f_b/f_c$ & -0.05 & -0.09 \\
$\beta_p$ & $f_b/f_d$ & -0.55 & -0.59 \\
& $f_b/f_\rho$ & 0.71 & 0.71 \\
\cline{1-4}
& $f_b/f_c^*$ & 0.27 & 0.27 \\
$\beta_p$ & $f_b/f_d^*$ & -0.36 & -0.38 \\
& $f_b/f_\rho^*$ & 0.82 & 0.82
\enddata
\end{deluxetable*}
\begin{figure}[ht!]
\plotone{fig1.eps}
\caption{(a) The location of \textit{PSP} during Encounter 2 in the corotating Carrington frame. The red solid circle at the origin is the Sun. The heliocentric distance of \textit{PSP} is decreasing. Black dashed lines indicate the location of \textit{PSP} at several times. The orbit color is used to indicate different solar wind speeds. The thin dashed line on the orbit means the SPC data was unavailable. (b) $\beta_p$ at different distances. Blank regions indicate unavailable data. \label{fig:1}}
\end{figure}
\begin{figure}[ht!]
\plotone{fig2.eps}
\caption{Examples of the PSDs of magnetic field fluctuations at several heliocentric distances. The cyan lines indicate fitted power law spectra. The red stars are intersections of the fitted lines and defined as the break frequency $f_b$. The fitted inertial range index $\alpha_1$ and fitted dissipation range index $\alpha_2$ are shown in the legend. \label{fig:2}}
\end{figure}
\begin{figure}[ht!]
\plotone{fig3.eps}
\caption{The ratio of $U_{total}$ to $v_{sw}$ for the intervals in our data set. \label{fig:6}}
\end{figure}
\begin{figure}[ht!]
\plotone{fig4.eps}
\caption{(a) 2D-histogram of the measured break frequencies $f_b$ with heliocentric distance $r$. The black lines are the result of linear regression in log-log space, and red lines indicate the 95\% confidence interval estimated by the standard deviation of the regression. (b) 2D-histogram of the measured break frequencies $f_b$ with $\beta_p$; black and red lines again show the results of linear regression and confidence intervals. Panel (a) contains all 10724 measured intervals while panel (b) only contains intervals with available SPC data. \label{fig:3}}
\end{figure}
\begin{figure}[ht!]
\plotone{fig5.eps}
\caption{The 2D-histogram of the distribution of the occurrence of (a) $\log_{10}(f_b/f_c)$, (b) $\log_{10}(f_b/f_d)$, (c) $\log_{10}(f_b/f_\rho)$, (d) $\log_{10}(f_b/f_c^*)$, (e) $\log_{10}(f_b/f_d^*)$, (f) $\log_{10}(f_b/f_\rho^*)$ over the heliocentric distance $r$ respectively. The starred frequencies are the corresponding frequencies from the modified Taylor hypothesis. The black lines are the linear fitting with least-squares method. The averages and standard deviations of each $r$ bin are plotted as blue lines.\label{fig:4}}
\end{figure}
\begin{figure}[ht!]
\plotone{fig6.eps}
\caption{The 2D-histogram of the distribution of the occurrence of (a) $\log_{10}(f_b/f_c)$, (b) $\log_{10}(f_b/f_d)$, (c) $\log_{10}(f_b/f_\rho)$, (d) $\log_{10}(f_b/f_c^*)$, (e) $\log_{10}(f_b/f_d^*)$, (f) $\log_{10}(f_b/f_\rho^*)$ over $\beta_p$ respectively. The starred frequencies are the corresponding frequencies from the modified Taylor hypothesis. The black lines are the linear fitting with the least-squares method. The averages and standard deviations of each $\beta_p$ bin are plotted as blue lines.\label{fig:5}}
\end{figure}
\bigbreak
\noindent Acknowledgements:
We thank the referee for helpful comments and the NASA Parker Solar Probe Mission and the FIELDS and SWEAP teams for use of data. D.D. is supported by the China Scholarship Council for his stay at SSL. C.H.K.C. is supported by STFC Ernest Rutherford Fellowship ST/N003748/2. The FIELDS and SWEAP experiments on the Parker Solar Probe spacecraft were designed and developed under NASA contract NNN06AA01C. D.D. and J.S.H. are also supported by NSFC under 41874200, 41574168, and 41421003. The authors acknowledge the extraordinary contributions of the Parker Solar Probe mission operations and spacecraft engineering teams at the Johns Hopkins University Applied Physics Laboratory. \textit{PSP} data are available on SPDF (https://cdaweb.sci.gsfc.nasa.gov/index.html/).
The analogy between electric current and the flow of water
is in fact older than the discovery of the electron.
There are essentially two ways to move ``water'' (charge) between
two ``pools" (reservoirs): One possibility is to exploit
potential difference between the two reservoirs so as to
make the ``water" flow through a ``pipe" (wire). The other
possibility is to operate a device (pump) at some location along
the pipe (the ``scattering region"). This possibility of
moving charge without creating a potential difference is
called pumping. This description assumes ``open" geometry as
in Fig.1c. But what about a ``closed" system as in Fig.1b?
If we operate the same pump, do we get the same
circulating current as in the ``open" geometry?
\begin{figure}[h]
\Cn{\epsfig{figure=pmp_models,width=0.86\hsize}}
\caption{
(a) Upper left: A chaotic ring that has
the shape of a Sinai billiard, with Aharonov-Bohm flux.
(b) Upper right: The dot-wire geometry with the same
topology as in the case of the Sinai billiard.
(c) Lower: The wire is cut into two leads that are attached
to reservoirs. The latter is what we call ``open geometry".}
\end{figure}
The analysis of ``quantum pumping" in closed systems
should take into account several issues that go beyond
the water analogy:
{\bf (i)} Kirchhoff law is not satisfied in the mesoscopic
reality because charge can accumulate;
{\bf (ii)} There are quantized energy levels,
consequently one has to distinguish between
adiabatic and non-adiabatic dynamics;
{\bf (iii)} Interference is important,
implying that the result of the calculation
is of statistical nature (universal conductance fluctuations).
On top we may have to take into account the effect
of having an external environment (decoherence).
Quantum pumping is a special issue in the study
of ``driven systems". We are going to emphasize
the significance of ``quantum chaos" in the analysis.
This in fact provides the foundations for
linear response theory (LRT)
\cite{landau,dsp,wilk,crs,frc,pmc}.
We shall explain how to apply
the Kubo formalism in order to analyze the dynamics
in the low frequency (DC) regime. Within the Kubo
formalism the problem boils down to the calculation
of the generalized (DC) conductance matrix.
To avoid misunderstanding we emphasize that
the dynamics in the low frequency (DC) regime
is in general non-adiabatic:
The DC conductance has both a dissipative
and a non-dissipative part. In the adiabatic
limit (extremely small rate of driving)
the dissipative part vanishes, while the
non-dissipative part reduces to ``adiabatic transport"
(also called ``geometric magnetism")
\cite{berry,thouless,AvronNet,BeRo}.
The ``adiabatic regime",
where the dissipative effect can be ignored,
is in fact a tiny sub-domain
of the relatively vast ``DC regime".
The dot-wire geometry of Fig.1b is of particular
interest. We are going to discuss the special limit
of taking the length of the wire ($L$) to be infinite.
In this limit the adiabatic regime vanishes,
but still we are left with a vast "DC regime" where
the pumping is described by a "DC conductance".
In this limit we get results \cite{pmo}
that are in agreement
with the well known analysis of quantum pumping
\cite{BPT,brouwer} in an open geometry (Fig.1c).
\begin{figure}[b]
\Cn{\epsfig{figure=pmo_fig,width=0.7\hsize}}
\caption{
Detailed illustration of the dot-wire system.
The dot potential is controlled
by gate voltages $X_1$ and $X_2$.
The flux through the loop is $X_3{=}\Phi$.
The scattering region ($r{<}0$)
is represented by an $S$~matrix.
Later we assume that the length ($L$)
of the wire is very large.}
\end{figure}
\section{Driven systems}
Consider a Fermi sea of non interacting ``spinless" electrons.
The electrons are bounded by some potential. To be specific
we assume a ring topology as in Fig.1a. Of particular
interest is the dot-wire geometry of Fig.1b, or its more
elaborated version Fig.2. It has the same
topology but we can distinguish between a ``wire region" and
a ``dot region" (or ``scattering region").
In particular we can consider a dot-wire system such
that the length of the wire is very very long.
If we cut the wire in the middle, and attach each lead
to a reservoir, then we get the open geometry of Fig.1c.
We assume that we have some control over
the potential that holds the electrons.
Specifically, and without loss of generality,
we assume that there are control parameters $X_1$ and $X_2$
that represent e.g. some gate voltages (see Fig.2)
with which we can control the potential
in the scattering region.
Namely, with these parameters we can change the
dot potential floor, or the height of some
barrier, or the location of a ``wall" element,
or the position of a scatterer inside the dot.
We call $X_1$ and $X_2$ shape parameters.
We also assume that it is possible to have
an Aharonov-Bohm flux $X_3$ through the ring.
Thus our notations are:
\begin{eqnarray}
& & X_1, X_2 \ = \ \mbox{shape parameters} \\
& & X_3 \ = \ \Phi \ = \ (\hbar/e)\phi \ = \ \mbox{magnetic flux}
\ee
and the motion of each electron is described
by a one particle Hamiltonian
\begin{eqnarray}
\mathcal{H} \ = \ \mathcal{H}(\bm{r},\bm{p};\ X_1(t),X_2(t),X_3(t))
\ee
To drive a system means to change
some parameters (fields) in time.
No driving means that $X_1$ and $X_2$ are
kept constant, and also let us assume for simplicity
that there is no magnetic field and that $X_3=0$.
In the absence of driving we assume
that the motion of the electrons inside
the system is classically chaotic.
For example this is the case with the
so-called Sinai billiard of Fig.1a.
In such circumstances the energy of the system
is a constant of the motion, and the net circulating
current is zero due to ergodicity.
The simplest way to create a current $\mathcal{I}$
in an open system (Fig.1c) is to impose bias
by having a different chemical potential in each reservoir.
Another possibility is to create
an electro-motive-force (EMF) in the dot region.
In linear response theory it can be proved
that it does not matter how the voltage is assumed
to be distributed along the ``resistor".
The EMF is by Faraday law $-\dot{\Phi}$.
Assuming DC driving (constant EMF),
and the applicability of LRT, we get the ``Ohm law"
$\mathcal{I} = \bm{G}^{33} \times (-\dot{\Phi})$
and hence the transported charge is
$dQ = - \bm{G}^{33} \ dX_3$.
We call $\bm{G}^{33}$ the Ohmic (DC) conductance.
If we have a low frequency AC driving rather
than a DC driving, still the impedance (AC conductance)
is expected to be well approximated by the DC conductance
within a frequency range that we call the DC regime.
Yet another possibility is to induce current by
changing shape parameter in time,
while keeping either the bias or $X_3$ equal to zero.
Say that we change $X_1$, then in complete
analogy with Ohm law we can write
$dQ = - \bm{G}^{31} \ dX_1$. More generally
we can write
\begin{eqnarray}
dQ \ = \ - \sum_j \bm{G}^{3j} \ dX_j
\ee
Obviously this type of formula makes sense
only in the ``DC regime" where the current
at each moment of time depends only on the
rates $\dot{X}_j$.
\begin{figure}[b]
\Cn{
\epsfig{figure=pmp_cyc,height=0.35\hsize}
\ \ \ \
\epsfig{figure=pmp_cyc_plan,height=0.35\hsize}
}
\caption{
(a) Left: A driving cycle in $X$ space. In order to have non-zero
area enclosed we have to change (without loss of generality)
two parameters. (b) Right: In particular we consider pumping cycle
in the $X_3=0$ plane (no magnetic field). }
\end{figure}
\section{Pumping cycles}
In practice the interest is a time periodic (AC) driving.
This means that the driving cycle can be represented
by a closed contour in the $(X_1,X_2,X_3)$ space
as in Fig.3a. In fact we assume that the contour is
lying in the $(X_1,X_2)$ plane as in Fig.3b.
We ask what is the amount of charge which is transported
via a section of the ring per cycle.
Assuming the applicability of LRT we get in the DC regime
\begin{eqnarray} \label{e5}
Q \ = \ \oint \mathcal{I} dt \ = \ \oint \bm{G} \cdot dX
\ee
where $X=(X_1,X_2,X_3)$ and
$\bm{G} = (\bm{G}^{31},\bm{G}^{32},\bm{G}^{33})$.
Later we shall define a more general object $\bm{G}^{kj}$
with $k,j=1,2,3$ that we call {\em generalized conductance matrix}.
In the above formula only the $k=3$
row enters into the calculation.
Getting $Q\ne 0$ means that the
current has a non-zero DC component.
So we can define ``pumping" as
getting DC current from AC driving.
From the above it is clear that
within the DC regime we have to vary
at least two parameters to
achieve a non-zero result.
In a closed (in contrast to open)
system this conclusion remains
valid also outside of the DC regime,
due to time reversal symmetry.
In order to get DC current from one parameter
AC driving, in a closed system,
it is essential to have a non-linear response.
{\em Ratchets} are non-linear devices
that use ``mixed" \cite{ratchH}
or ``damped" \cite{ratchD} dynamics
in order to pump with only one parameter.
We are {\em not} discussing such devices below.
\section{What is the problem?}
Most of the studies of quantum pumping were (so far)
about open systems. Inspired by Landauer who pointed out
that $\bm{G}^{33}$ is essentially the transmission of the device,
B{\"u}ttiker, Pretre and Thomas (BPT) have
developed a formula that allows the calculation of $\bm{G}^{3j}$
using the $S$ matrix of the scattering region \cite{BPT,brouwer}.
It turns out that the non-trivial extension of this approach
to closed systems involves quite restrictive assumptions \cite{MoBu}.
Thus the case of pumping in closed systems has been left unexplored,
except for some past works on adiabatic transport \cite{AvronNet,BeRo}.
Yet another approach to quantum pumping is to use
the powerful {\em Kubo~formalism} \cite{pmc,pmo,pmt}.
The Kubo formula, which we discuss later,
gives a way to calculate the
generalized conductance matrix $\bm{G}^{kj}$.
It is a well known formula \cite{landau},
so one can ask: what is the issue here?
The answer is that both the validity conditions,
and also the way to use the Kubo formula,
are in fact open problems in physics.
The Van Kampen controversy regarding the
validity of the Kubo formula in the classical
framework is well known, and by now has
been resolved. For a systematic classical derivation
of the Kubo formula with all the validity
conditions see Ref.\cite{frc} and references therein.
The assumption of chaos is essential
in the classical derivation.
If this assumption is not satisfied
(as in the trivial case of a driven 1D ring)
then the Kubo formula becomes non-applicable.
What about the Quantum Mechanical derivation?
The problem has been raised in Ref.\cite{wilk}
but has been answered only later
in Refs.\cite{crs,frc} and follow up works.
It is important to realize that the quantum
mechanical derivation of the Kubo formula
requires perturbation theory to infinite order,
not just 1st order perturbation theory.
We shall discuss later the non-trivial
self consistency condition of the quantum mechanical
derivation.
We note that the standard textbook derivation
of the Kubo formula assumes that the
energy spectrum is essentially a continuum.
A common practice is to assume some weak
coupling to some external bath \cite{imryK}.
However, this procedure avoids the question
at stake, and in fact fails to take into
consideration important ingredients that
have to do with {\em quantum chaos physics}.
In this lecture the primary interest
is in the physics of a closed {\em isolated} system.
Only in a later stage we look for the effects
that are associated with having a weak coupling
to an external bath.
Why do we say that it is not clear how
to use the Kubo formula? We are going to explain
that the quantum mechanical derivation of the
Kubo formula introduces an energy scale
that we call $\Gamma$. It plays an analogous
role to the level broadening parameter
which is introduced in case of a coupling to a bath.
Our $\Gamma$ depends on the rate $\dot{X}$
of the driving in a non-trivial way.
One may say that $\Gamma$ in case of an isolated
system is due to the non-adiabaticity of the driving.
Our $\Gamma$ affects both the dissipative
and the non-dissipative (geometric) part
of the response. Without a theory for
$\Gamma$ the quantum mechanical Kubo formula
is ill defined.
\section{Generalized forces and currents}
Given a Hamiltonian we define generalized forces
in the conventional way:
\begin{eqnarray}
\mathcal{F}^k \ = \ -\frac{\partial \mathcal{H}}{\partial X_k}
\ee
One obvious reasoning that motivates this definition
follows from writing the following (exact) expression for
the change in the energy $E=\langle \mathcal{H} \rangle$
of the system:
\begin{eqnarray}
E_{\tbox{final}}-E_{\tbox{initial}} \ = \
- \int \langle \mathcal{F}(t) \rangle \cdot dX
\ee
In particular we note that $\mathcal{F}^3$ should be
identified as the current $\mathcal{I}$.
This identification can be explained as follows:
If we make a change $d\Phi$ of the flux during a time $dt$,
then the EMF is $-d\Phi/dt$, leading to a current $\mathcal{I}$.
The energy increase is the EMF times the charge,
namely $dE=(-d\Phi/dt)\times(\mathcal{I}dt)=-\mathcal{I}d\Phi$.
Hence $\mathcal{I}$ is conjugate to $\Phi$.
As an example we consider \cite{pmt}
a network model \cite{kottos}.
See the illustration of Fig.4d.
The Hamiltonian is
\begin{eqnarray}
\mathcal{H} \ \ = \ \ \mbox{\small network}
\ \ + \ \ X_2 \ \delta(x-X_1)
\ee
We assume control over the position $X_1$
of the delta scatterer,
and also over the ``height" $X_2$
of the scatterer. By the definition we get:
\begin{eqnarray}
\mathcal{F}^1 \ &=& \ X_2 \delta'(x-X_1)
\\
\mathcal{F}^2 \ &=& \ -\delta(x-X_1)
\ee
Note that $\mathcal{F}^1$ is the ordinary Newtonian force
which is associated with translations. Its operation on
the wavefunction can be realized by the differential operator
\begin{eqnarray}
\mathcal{F}^1 \ \ \mapsto \ \ -X_2
\left( \overrightarrow{\partial} + \overleftarrow{\partial}
- \frac{2\mathsf{m}}{\hbar^2} X_2 \right)_{x=X_1+0}
\ee
where we have used the matching condition across the delta
function and $\mathsf{m}$ is the mass of the particle.
What about the current operator? For its definition we have
to introduce a vector potential $\mathcal{A}(x) = \Phi a(x)$
into the Hamiltonian such that
\begin{eqnarray}
\oint \vect{\mathcal{A}} \cdot \vect{dr} \ = \ \Phi
\ee
Thus we have to specify $a(x)$, which describes how
the vector potential varies along the loop.
This is not merely a gauge freedom because the
electric field $-\dot{\Phi} a(x)$ is a measurable
quantity. Moreover, a different $a(x)$ implies
a different current operator. In particular we can
choose $a(x)$ to be a delta function across
a section $x=x_0$. Then we get:
\begin{eqnarray}
\mathcal{I} \ \ = \ \ \frac{e}{2\mathsf{m}}
\left( \delta(x-x_0)p + p\delta(x-x_0) \right)
\ee
Note that the operation of this operator
can be realized by the differential operator
\begin{eqnarray}
\mathcal{I} \ \ \mapsto \ \ -i \frac{e\hbar}{2\mathsf{m}}
\Big(\overrightarrow{\partial}-\overleftarrow{\partial}\Big)_{x=x_0}
\ee
A few words are in order regarding the continuity of the charge flow.
It should be clear that in any moment the current through
different sections of a wire does not have to be the same,
because charge can accumulate. Kirchhoff law is not satisfied.
For example if we block the left entrance to the dot in Fig.2,
and raise the dot potential, then current is pushed out of
the right lead, while the current in the blocked side is zero.
Still if we make a full pumping cycle, such that the charge
comes back to its original distribution at the end of each cycle,
then the result for $Q$ should be independent of the section
through which the current is measured.
\begin{figure}[b]
\epsfig{figure=pmt_fig,width=\hsize}
\caption{
A scatterer (represented by a black circle) is
translated through a system that has a Fermi occupation
of spinless non-interacting electrons.
In (a) the system is a simple ring.
In (b) it is a chaotic ring (Sinai billiard).
In (c) and in (d) we have network systems that
are of the same type of (a) and (b) respectively.
In the network, the scatterer (``piston")
is a delta function (represented as a big circle) located at $x=X_1$.
The current is measured through $x=x_0$ (dotted vertical line).
In (e) we have an open geometry with left and right leads that
are attached to reservoirs that have the same chemical potential.}
\end{figure}
\section{Linear response theory}
Assume that $X(t)=X^{(0)} + \delta X(t)$,
and look for a quasi-stationary solution.
To have linear response means that the generalized
forces are related to the driving as follows:
\begin{eqnarray} \label{e15}
\langle \mathcal{F}(t) \rangle \ = \
\langle \mathcal{F} \rangle_0 \ + \
\int_{-\infty}^{\infty}
\bm{\alpha}(t-t') \cdot \delta X(t') \ dt'
\ee
where $\langle ... \rangle_0$ denote the expectation
value with respect to the unperturbed $X(t)=X^{(0)}$
stationary state. From now on we disregard the zero
order term (the ``conservative force"), and focus
on the linear term.
The generalized susceptibility $\chi^{kj}(\omega)$
is the Fourier transform of the (causal) response
kernel $\alpha^{kj}(\tau)$, while the generalized
conductance matrix is defined as
\begin{eqnarray}
\bm{G}^{kj} \ = \ \left.
\frac{\mbox{Im}[\chi^{kj}(\omega)]}{\omega}
\ \right|_{\omega\sim0} \ = \ \bm{\eta}^{kj} + \bm{B}^{kj}
\ee
The last equality defines the symmetric
and the anti-symmetric matrices $\bm{\eta}^{kj}$ and $\bm{B}^{kj}$.
Thus in the DC limit Eq.(\ref{e15}) reduces to a generalized Ohm law:
\begin{eqnarray}
\langle \mathcal{F}^k \rangle \ \ = \ \
-\sum_{j} \bm{G}^{kj} \ \dot{X}_j
\ee
which can be written in fancy notations as
\begin{eqnarray}
\langle F \rangle \ \ = \ \ -\bm{G}\cdot \dot{X} \ \ = \ \
-\bm{\eta} \cdot \dot{X} \ - \ \bm{B}\wedge \dot{X}
\ee
Note that the rate of dissipation is
\begin{eqnarray}
\dot{\mathcal{W}} \ \ = \ \ -\langle F \rangle \cdot \dot{X} \ \ = \ \
\sum_{kj} \bm{\eta}^{kj} \ \dot{X}_k \ \dot{X}_j
\ee
We would like to focus not on the dissipation issue,
but rather on the transport issue. From Eq.(\ref{e5}) we get
\begin{eqnarray}
Q \ \ = \ \
\Big[ \ -\oint \bm{\eta} \cdot dX \ \ - \oint \bm{B} \wedge dX \ \ \Big]_{k=3}
\ee
From now on we consider a planar $(X_1,X_2)$ pumping cycle,
and assume that there is no magnetic field.
Then it follows from time reversal symmetry [Onsager] that
$\bm{\eta}^{31} = \bm{\eta}^{32} = 0$, and consequently
\begin{eqnarray} \label{e21}
Q \ = \ -\oint \vect{\bm{B}} \cdot \ \vect{ds}
\ee
where $\vect{\bm{B}}=(\bm{B}^{23},\bm{B}^{31},\bm{B}^{12})$,
with $\bm{B}^{12} = 0$, and $\vect{ds}=(dX_2,-dX_1,0)$
is a normal vector in the pumping plane as in Fig.3b.
\newpage
The various objects that have been defined
in this section are summarized by the following diagram:
\ \\
{
\setlength{\unitlength}{2000sp}
\begin{picture}(4725,5767)(751,-7112)
\put(1501,-1861){\vector( 0,-1){375}}
\put(1501,-3061){\vector(-2,-3){242.308}}
\put(1801,-3061){\vector( 4,-1){1641.177}}
\put(4801,-4261){\vector( 0,-1){525}}
\put(4951,-5611){\vector( 1,-1){375}}
\put(4651,-5611){\vector(-1,-1){375}}
\put(1126,-1561){$\alpha^{kj}(t-t')$}
\put(1201,-2761){$\chi^{kj}(\omega)$}
\put(751,-3886){$\mbox{Re}[\chi^{kj}(\omega)]$}
\put(3301,-3886){$(1/\omega) \times \mbox{Im}[\chi^{kj}(\omega)]$}
\put(3676,-6511){$\bm{\eta}^{kj}$}
\put(5326,-6511){$\bm{B}^{kj}$}
\put(5476,-7036){(non-dissipative)}
\put(4576,-5386){$\bm{G}^{kj}$}
\put(3001,-7036){(dissipative)}
\end{picture}
}
\ \\
\section{The Kubo formula}
The Kubo formula for the response kernel is
\begin{eqnarray}
\alpha^{kj}(\tau) \ = \ \Theta(\tau) \times
\frac{i}{\hbar} \langle [\mathcal{F}^k(\tau),\mathcal{F}^j(0)]\rangle_0
\ee
where the expression on the right hand side
assumes a zero order $X=X^{0}$ stationary state
(the so called ``interaction picture"),
and $\Theta(\tau)$ is the step function.
Using the definitions of the previous section,
and assuming a Fermi sea of non-interacting fermions
with occupation function $f(E)$,
we get the following expressions:
\begin{eqnarray} \nonumber
\bm{\eta}^{kj} &=&
-\pi\hbar\sum_{n,m}
\frac{f(E_n){-}f(E_m)}{E_n{-}E_m}
\mathcal{F}^k_{nm}\mathcal{F}^j_{mn}
\ \delta_{\Gamma}(E_m{-}E_n)
\\ \label{e23}
\bm{B}^{kj} &=&
2\hbar \sum_n f(E_n)
\sum_{m(\ne n)}
\frac{\mbox{Im}\left[\mathcal{F}^k_{nm}\mathcal{F}^j_{mn}\right]}
{(E_m{-}E_n)^2+(\Gamma/2)^2}
\ee
We have incorporated in these expressions
a broadening parameter $\Gamma$ which is absent
in the ``literal" Kubo formula. If we set
$\Gamma=0$ we get no dissipation
($\bm{\eta}=0$). We also see that $\Gamma$ affects
the non-dissipative part of the response.
Thus we see that without having a theory
for $\Gamma$ the Kubo formula is an ill defined expression.
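To make the role of $\Gamma$ concrete, the following minimal
numerical sketch (our own toy-model illustration, not taken from the
cited works) evaluates the broadened sums of Eq.(\ref{e23}) for given
Hermitian matrices, using zero temperature occupations and a
Lorentzian $\delta_{\Gamma}$:
\begin{verbatim}
import numpy as np

def conductance(H, Fk, Fj, E_F, Gamma, hbar=1.0):
    # Diagonalize the Hamiltonian and transform the
    # perturbation matrices to the energy basis.
    E, U = np.linalg.eigh(H)
    Fk = U.conj().T @ Fk @ U          # F^k_{nm}
    Fj = U.conj().T @ Fj @ U          # F^j_{nm}
    f = (E < E_F).astype(float)       # zero-T occupations
    dE = E[:, None] - E[None, :]      # dE[n,m] = E_n - E_m
    df = f[:, None] - f[None, :]
    np.fill_diagonal(dE, np.inf)      # drop the m = n terms
    # Lorentzian delta_Gamma of width Gamma:
    delta = (Gamma / (2*np.pi)) / (dE**2 + (Gamma/2)**2)
    eta = -np.pi * hbar * np.sum((df/dE) * Fk * Fj.T * delta)
    lorentz = 1.0 / (dE**2 + (Gamma/2)**2)
    B = 2 * hbar * np.sum(f[:, None] * (Fk * Fj.T).imag * lorentz)
    return eta, B
\end{verbatim}
Setting \texttt{Gamma = 0} reproduces the ``literal" limit in which
$\bm{\eta}$ vanishes (discussed next), while a finite \texttt{Gamma}
affects both parts of the response.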
\section{Adiabatic transport (Geometric magnetism)}
The ``literal" Kubo formula (i.e. with $\Gamma=0$)
has been considered in Refs.~\cite{AvronNet,BeRo}.
In this limit we have no dissipation ($\bm{\eta}=0$).
But we may still have a non-vanishing $\bm{B}$.
By Eq.(\ref{e23}) the total $\bm{B}$ is a sum
over the occupied levels. The contribution of
a given occupied level $n$ is:
\begin{eqnarray}
\bm{B}^{kj}_n \ \ = \ \
2\hbar
\sum_{m(\ne n)}
\frac{\mbox{Im}\left[
\mathcal{F}^k_{nm}\mathcal{F}^j_{mn}\right]}
{(E_m-E_n)^2+(\Gamma/2)^2}
\ee
with $\Gamma=0$. This is identified as the
geometric magnetism of Ref.\cite{BeRo}.
We can get some intuition for $\vect{\bm{B}}$
from the theory of adiabatic processes.
The Berry phase is given as a line integral
$ (1/\hbar)\oint \vect{\bm{A}} \cdot dX $ over ``vector potential"
in $X$ space. By Stokes law it can be converted
to an integral $(1/\hbar)\int\!\!\!\!\int \vect{\bm{B}} \cdot dS$
over a surface that is bounded by the driving cycle.
The $\vect{\bm{B}}$ field is divergence-less, but
it may have singularities at $X$ points where
the level $n$ has a degeneracy with a nearby level.
We can regard these points as the location of
magnetic charges. The result of the surface integral
should be independent of the choice of the surface modulo $2\pi$,
else Berry phase would be ill defined. Therefore
the net flux via a closed surface (which we can regard as
formed of two Stokes surfaces) should be zero modulo $2\pi$.
Thus, if we have a charge within a closed
surface it follows by Gauss law that it should
be quantized in units of $(\hbar/2)$. These are the
so called ``Dirac monopoles". In our setting $X_3$
is the Aharonov-Bohm flux. Therefore we have
vertical ``Dirac chains"
\begin{eqnarray}
\mbox{chain} \ = \ \left(X_1^{(0)}, \ \ X_2^{(0)}, \ \
\Phi^{(0)}+2\pi\frac{e}{\hbar}\times\mbox{\small integer}\right)
\ee
In the absence of any other magnetic field we have
time-reversal symmetry for either integer or half integer flux.
It follows that there are two types of Dirac chains:
those that have a monopole in the plane of the
pumping cycle, and those that have their monopoles
half unit away from the pumping plane.
In the next section we shall see how these observations
help to analyze the pumping process. We shall also illuminate
the effect of having $\Gamma \ne 0$.
Later we shall discuss the ``physics" behind $\Gamma$.
\section{Quantized pumping?}
The issue of quantized pumping is best illustrated
by the popular two delta barrier model,
which is illustrated in Fig.5.
The ``dot region" $|x|<a/2$ is described by the potential
\begin{eqnarray}
U(r;X_1,X_2) =
X_1\delta\left(x+\frac{a}{2}\right) +
X_2\delta\left(x-\frac{a}{2}\right)
\end{eqnarray}
The pumping cycle is described in Fig.5c.
In the 1st half of the cycle an electron
is taken from the wire into the dot region
via the left barrier, while in the second
half of the cycle an electron is transferred
from the dot region to the wire
via the right barrier.
So it seems that one electron is pumped
through the device per cycle.
The question is whether it is exactly
one electron ($Q=e$) or not?
\begin{figure}[b]
\epsfig{figure=pme_2delta,width=0.45\hsize}
\ \ \ \
\epsfig{figure=pmp_levels,width=0.45\hsize} \\
\Cn{\epsfig{figure=DiracChains,width=0.90\hsize}}
\caption{
(a) Upper left: The energy levels of a ring
with two barriers, at the beginning of the pumping cycle.
It is assumed that the three lower levels are occupied.
(b) Upper right: The adiabatic levels as a function
of time during the pumping cycle.
(c) Lower Left: The $(X_1,X_2)$ locations of
the Dirac chains of the $3$ occupied levels.
Filled (hollow) circles imply that there
is (no) monopole in the pumping plane.
Note that for sake of illustration overlapping
chains are displaced from each other.
The pumping cycle encircles $2+1$ Dirac chains
that are associated with the 3rd and 2nd levels respectively.
(d) Lower right: The $2$ Dirac chains
that are associated with the 3rd level.}
\end{figure}
In the case of an open geometry the answer is known
\cite{SAA,barriers}.
Let us denote by $g_0$ the average transmission
of the dot region for $X$ values along the pumping
cycle. In the limit $g_0 \rightarrow 0$, which
is a pump with no leakage, indeed one gets $Q=e$.
Otherwise one gets $Q=(1-g_0)e$.
What about a {\em closed} (ring) geometry?
Do we have a similar result?
It has been argued \cite{SAA}
that if the pumping process is strictly adiabatic
then we get exactly $Q=e$. We are going to explain
below that this is in fact not correct:
We can get either $Q<1$ or $Q>1$ or even $Q \gg 1$.
Recall that by Eq.(\ref{e21}) the pumped charge $Q$
equals the projected flux of the $\vect{\bm{B}}$ field
through the pumping cycle (Fig.3b).
If the charge of the monopoles
were uniformly distributed along the chains, it would
follow that $Q$ is exactly quantized.
But this is not the case,
and therefore $Q$ can be either
smaller or larger than $1$ depending
on the type of chain(s) being encircled.
In particular, in case of a tight cycle
around a monopole we get $Q\gg e$ which
is somewhat counter-intuitive, while if
the monopole is off-plane $Q < e$.
What is the effect of $\Gamma$ on this result?
It is quite clear that $\Gamma$ diminishes
the contribution of the singular term.
Consequently it makes $Q$ less than one.
This gives us a hint that the introduction of
$\Gamma$ might lead to a result which is
in agreement with that obtained for an open geometry.
We shall discuss this issue in the next sections.
\section{The Kubo Formula and ``quantum chaos"}
We turn now to discuss $\Gamma$. Any generic
quantum chaos system is characterized by some
short correlation time $\tau_{cl}$,
by some mean level spacing $\Delta$,
and by a semiclassical energy scale
that we denote as $\Delta_b$. Namely:
\begin{eqnarray}
\Delta \ & \propto& \ \hbar^{d}/\mbox{\small volume} \ =
\ \mbox{\small mean level spacing} \\
\Delta_b \ & \sim & \ \hbar/\tau_{\tbox{cl}} \ =
\ \mbox{\small bandwidth}
\ee
The term bandwidth requires clarification.
If we change a parameter $X$ in the Hamiltonian $\mathcal{H}$,
then the perturbation matrix $\mathcal{F}_{nm}$
has non-vanishing matrix elements within
a band $|E_n-E_m|<\Delta_b$. These matrix elements
are characterized by some root-mean-square magnitude $\sigma$,
while outside of the band the matrix elements are very small.
If the system is driven slowly in a rate $\dot{X}$
then levels are mixed non-perturbatively.
Using a quite subtle reasoning \cite{crs,frc,pmc,dsp}
the relevant energy range for the non-perturbative
mixing of levels is found to be
\begin{eqnarray} \label{e29}
\Gamma \ \ = \ \
\left(\frac{\hbar\sigma}{\Delta^2}|\dot{X}|\right)^{2/3} \!\times \Delta
\ \ \ \ \propto \ \ \ \
\left(L \, |\dot{X}|\right)^{2/3} \frac{1}{L}
\ee
The latter equality assumes dot-wire geometry as in
Fig.1b, where $L$ is the length of the wire.
Now we can distinguish between three $\dot{X}$ regimes:
\begin{eqnarray}
\Gamma \ll \Delta & \ & \ \ \ \mbox{adiabatic regime} \\
\Delta < \Gamma < \Delta_b & \ & \ \ \ \mbox{non-adiabatic regime} \\
\mbox{\em otherwise} & \ & \ \ \ \mbox{non-perturbative regime}
\ee
In the adiabatic regime levels are not mixed by the driving,
which means that the system (so to say) follows the same level
all the time. In the non-adiabatic (perturbative) regime there is
a non-perturbative mixing on small energy scales, but
on the large scale we have Fermi-Golden-Rule (FGR) transitions.
If the self consistency condition ($\Gamma \ll \Delta_b$)
breaks down, then the FGR picture becomes non-applicable,
and consequently $\Gamma$ becomes a meaningless parameter.
In the non-perturbative regime we expect semiclassical methods
to be effective, provided the system has a classical limit
(which is not the case with random matrix models \cite{rsp}).
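For orientation, here is a small sketch (our own illustration) that
evaluates Eq.(\ref{e29}) and classifies the driving rate accordingly:
\begin{verbatim}
def regime(X_dot, sigma, Delta, Delta_b, hbar=1.0):
    # Gamma: the non-perturbative mixing scale of the
    # equation above, growing like |X_dot|**(2/3).
    Gamma = (hbar * sigma * abs(X_dot) / Delta**2) ** (2/3) * Delta
    if Gamma < Delta:
        return Gamma, "adiabatic"
    elif Gamma < Delta_b:
        return Gamma, "non-adiabatic (perturbative)"
    return Gamma, "non-perturbative"
\end{verbatim}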
In general one can argue that in the limit of infinite volume
(or small $\hbar$) perturbation theory always breaks down,
leading to a semiclassical behavior. But in the dot-wire geometry
this is not the case if we take the limit $L\rightarrow\infty$,
keeping the width of the wire fixed. With such limiting procedure
Eq.(\ref{e29}) implies that the self-consistency condition
$\Gamma \ll \Delta_b$ is better and better satisfied!
This means that the Kubo formula can be trusted.
Furthermore, with the same limiting procedure
the $L\rightarrow\infty$ limit is a {\em non-adiabatic} limit
because the adiabaticity condition $\Gamma \ll \Delta$ breaks down.
\section{Kubo formula using an FD relation}
The Fluctuation-dissipation (FD) relation
allows us to calculate the conductance $\bm{G}^{kj}$
from the correlation function $C^{kj}(\tau)$
of the generalized forces.
In what follows we use the notations:
\begin{eqnarray}
K^{kj}(\tau) \ &=& \
\frac{i}{\hbar} \langle [\mathcal{F}^k(\tau),\mathcal{F}^j(0)]\rangle_0
\\
C^{kj}(\tau) \ &=& \
\frac{1}{2}
\left( \langle \mathcal{F}^k(\tau)\mathcal{F}^j(0) \rangle_0 + cc\right)
\ee
Their Fourier transforms are denoted
$\tilde{K}^{kj}(\omega)$ and $\tilde{C}^{kj}(\omega)$.
The expectation value above assumes a zero order
stationary preparation.
We shall use subscript $|_F$ to indicate many-body Fermi occupation.
We shall use the subscript $|_T$ or the subscript $|_E$
to denote one-particle canonical or microcanonical preparation.
At high temperatures the Boltzmann approximation applies
and we can use the relation
$f(E_n){-}f(E_m) = \tanh((E_m{-}E_n)/(2T)) \times (f(E_n){+}f(E_m))$,
exact in this limit,
so as to get
\begin{eqnarray}
\tilde{K}^{kj}_F(\omega) = i\omega \times
\frac{2}{\hbar\omega}\tanh\left(\frac{\hbar\omega}{2T}\right) \ C^{kj}_T(\omega)
\ee
At low temperatures we can use the approximation
$f(E){-}f(E') \approx -\frac{1}{2}[\delta_T(E{-}E_F)+\delta_T(E'{-}E_F)] \times (E{-}E')$
with $\delta_T(E{-}E_F)=-f'(E)$ so as to get
\begin{eqnarray}
\tilde{K}^{kj}_F(\omega) & \approx &
i \omega \times \mathsf{g}(E_F) \ \tilde{C}^{kj}_{E_F}(\omega)
\ee
The application of this approximation
is ``legal" if we assume temperature $T\gg \Delta_b$.
This is a very ``bad" condition because for (e.g.) a ballistic
dot $\Delta_b$ is the relatively large
Thouless energy. However, we can regard the
large $T$ result as an $E_F$~averaged
zero temperature calculation. Then it can
be argued that for a quantum chaos system with
a generic bandprofile the average is in fact
the ``representative" result (see discussion
of ``universal conductance fluctuations" in later sections).
Substituting the Kubo formula
$\alpha^{kj}(\tau) = \Theta(\tau) \ K^{kj}(\tau)$
in the definition of $\bm{G}^{kj}$,
and using the latter relation
between $K^{kj}(\tau)$ and $C^{kj}(\tau)$
we get after some straightforward algebra
the following expression for the conductance:
\begin{eqnarray} \label{e37}
\bm{G}^{kj} =
\int_0^{\infty} K_F^{kj}(\tau)\tau d\tau
\ \approx \
\mathsf{g}(E_F) \int_0^{\infty}C_{E_F}^{kj}(\tau)d\tau
\ee
where $\mathsf{g}(E_F)$ is the density of the one-particle states.
If we want to incorporate $\Gamma$ the recipe is simply:
\begin{eqnarray} \label{e38}
C(\tau) \ \ \mapsto \ \
C(\tau) \ \mbox{e}^{-\frac{1}{2}(\Gamma/\hbar) |\tau|}
\ee
The expression of $\bm{G}^{kj}$ using $C^{kj}(\tau)$
is a generalized FD relation. It reduces to the
standard FD relation if we consider the dissipative part:
\begin{eqnarray}
\bm{\eta}^{kj} \ = \ \frac{1}{2} \mathsf{g}(E_F) \tilde{C}_{E_F}^{kj}(\omega \sim 0)
\ee
whereas the non-dissipative part requires integration
over all the frequencies (see next section).
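In practice Eq.(\ref{e37}) with the recipe of Eq.(\ref{e38}) can be
evaluated numerically once $C^{kj}(\tau)$ is known on a grid. A
minimal sketch (our own illustration, assuming the grid is fine
enough and extends well beyond the correlation time):
\begin{verbatim}
import numpy as np

def G_from_correlation(C, tau, g_EF, Gamma, hbar=1.0):
    # Damp the correlation function as in the recipe above,
    # then integrate to get the conductance.
    damped = C * np.exp(-0.5 * (Gamma/hbar) * np.abs(tau))
    return g_EF * np.trapz(damped, tau)
\end{verbatim}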
\section{Kubo via Green functions or $S$ matrix}
Now we would like to express $\bm{G}^{kj}$ using Green functions,
and eventually we would like to express it using the $S$ matrix
of the scattering region. The first step is to rewrite
the FD relation as follows:
\begin{eqnarray}
\bm{G}^{kj} \ \ &=& \ \
\hbar \mathsf{g}(E_F) \int_{-\infty}^{\infty}
\frac{-i\tilde{C}_{E_F}^{kj}(\omega)}{\hbar\omega-i(\Gamma/2)}
\ \frac{d\omega}{2\pi}
\ee
The second step is to write
\begin{eqnarray}
C^{kj}_E(\omega) = \frac{\hbar}{2\mathsf{g}(E)}
\left[ C^{kj}(E{+}\hbar\omega,E) + C^{jk}(E{-}\hbar\omega,E) \right]
\ee
where
\begin{eqnarray}
C^{kj}(E',E) &=&
2\pi \sum_{nm}
\mathcal{F}^k_{nm} \delta(E'-E_m)
\mathcal{F}^j_{mn} \delta(E-E_n)
\\
&=&
\frac{2}{\pi}
\ \mbox{trace}\left[
\mathcal{F}^k \ \mbox{Im}[\mathsf{G}(E')] \ \mathcal{F}^j \ \mbox{Im}[\mathsf{G}(E)]
\right]
\ee
We use the standard notations
$\mathsf{G}(z)=1/(z-\mathcal{H})$,
and $\mathsf{G}^{\pm}(E)=\mathsf{G}(E{\pm}i0)$,
and $\mbox{Im}[\mathsf{G}]=-i(\mathsf{G}^{+}{-}\mathsf{G}^{-})/2=-\pi \delta (E{-}\mathcal{H})$.
After some straightforward algebra we get:
\begin{eqnarray}\nonumber
\bm{G}^{kj} =
i\frac{\hbar}{2\pi}\mbox{trace}
\left[
\mathcal{F}^k
\mathsf{G}(E_F{-}i\Gamma/2)
\mathcal{F}^j
\mbox{Im}[\mathsf{G}(E_F)]
\right.
\\ -
\left.
\mathcal{F}^k
\mbox{Im}[\mathsf{G}(E_F)]
\mathcal{F}^j
\mathsf{G}(E_F{+}i\Gamma/2)
\right]
\ee
For the dot-wire geometry in the limit $L\rightarrow\infty$
we can treat the $i\Gamma$ as if it were the infinitesimal $i0$.
Some more non-trivial steps allow us to reduce the trace
operation to the boundary ($r=0$) of the scattering region (Fig.2),
and then to express the result using the $S$ matrix.
Disregarding an insignificant interference term that
has to do with having a ``standing wave", the result is:
\begin{eqnarray}
\bm{G}^{3j} \ = \ \frac{e}{2\pi i}
\mbox{trace}\left(P_{\tbox{A}}\frac{\partial S}{\partial X_j}
S^{\dag}\right)
\ee
This formula, which we derive here using ``quantum chaos" assumptions
is the same as the BPT formula that has been derived for an open
geometry. It is important to remember that the
limit $L\rightarrow\infty$ is a non-adiabatic limit ($\Gamma\gg\Delta$).
Still it is a ``DC limit". Therefore what we get here is
``DC conductance" rather than ``adiabatic pumping".
The latter term is unfortunately widely used in the existing literature.
\section{The prototype pumping problem}
What is the current which is created by translating
a scatterer (``piston")? This is a ``pumping" question.
Various versions of the assumed geometry
are illustrated in Fig.4.
Though it sounds simple this question contains
(without loss of generality) all the ingredients
of a typical pumping problem. Below we address
this question first within a classical framework,
and then within quantum mechanics.
The simplest case is to translate a scatterer
in 1D ring (Fig.4a).
Assuming that there is no other scattering mechanism
it is obvious that the steady state solution of
the problem is:
\begin{eqnarray}
dQ = 1 \times \frac{e}{\pi}k_{\tbox{F}} \times dX
\ee
We assume here Fermi occupation, but otherwise
this result is completely classical.
This result holds for any nonzero ``size" of the scatterer,
though it is clear that in the case of a tiny scatterer
it would take a much longer time to attain the steady state.
Also note that there is no dissipation in this problem.
The steady state solution is an {\em exact} solution
of the problem.
The picture completely changes if we translate
a scatterer inside a chaotic ring (Fig.4b).
In such case the problem
does not possess a steady state solution.
Still there is a quasi steady state solution.
This means that at any moment the state is
quasi-ergodic: If we follow the evolution for
some time we see that there is slow diffusion
to other energy surfaces (we use here phase space
language). This diffusion leads to dissipation
as explained in \cite{frc} (and references therein).
However, we are interested here mainly in the
transport issue. As the scatterer pushes its way
through the ergodizing distribution, it creates
a current. Obviously the size of the scatterer
{\em does matter} in this case. Using classical
stochastic picture we can derive the following result:
\begin{eqnarray} \label{e47}
dQ =
\left[ \frac{g_T}{1{-}g_T}\right]
\left[ \frac{1{-}g_0}{g_0}\right]
\times
\frac{e}{\pi}k_{\tbox{F}}
\times dX
\ee
where $g_0$ is the transmission or the relative size
of the moving scatterer, while $g_T$ is the overall
transmission of the ring.
What about the quantum mechanical analysis?
We shall show that the same result is obtained
{\em on the average}. This means that the
classical expression still holds, but only
in a statistical sense. This is in close analogy
with the idea of ``universal conductance fluctuations".
We shall discuss the effect of $\Gamma$ on the
distribution of~$\bm{G}$.
It should be noticed that our quantum chaos network
model (Fig.4d) essentially generalizes the two barrier model.
Namely, one delta function is the ``scatterer" and
the other delta function is replaced by a complicated ``black box".
Let us use the term ``leads" in order to refer to the two bonds
that connect the ``black box" to the scatterer.
Now we can ask what happens (given $\dot{X_1}$) if we take
the length of the leads to be very very long.
As discussed previously this is a non-adiabatic limit.
We shall explain that in this limit we expect to
get the same result as in the case of an open geometry.
For the latter the expected result is \cite{AvronSnow}:
\begin{eqnarray} \label{e48}
dQ = (1{-}g_0) \times \frac{e}{\pi}k_{\tbox{F}} \times dX
\ee
We shall explain how Eq.(\ref{e47}) reduces to Eq.(\ref{e48}).
The latter is analogous to the Landauer formula $\bm{G}^{33}=(e^2/2\pi\hbar)g_0$.
The charge transport mechanism which is represented
by Eq.(\ref{e48}) has a very simple heuristic explanation,
which is reflected in the term ``snow plow dynamics" \cite{AvronSnow}.
\begin{figure}[b]
\epsfig{figure=pmt_fig2,height=\hsize,angle=-90,clip}
\caption{
The average conductance $\bm{G}^{31}$ for
the network of Fig.4d. The average is
taken over more than 20000 levels
around $E_F$, while the calculation
(for each Fermi level) was performed
in an interval of 32000 levels.
The transmission of the ``piston"
is $g_{0} \approx 0.1$.
The perpendicular dotted line indicates the border of
the regime where the Kubo calculation is valid.
We also plot the standard deviation,
while the inset displays the distribution
for $\Gamma=0.0001\Delta$. }
\end{figure}
\newpage
\section{Analysis of the network model}
One way to calculate $\bm{G}^{31}$ for the network model
of Fig.4d is obviously to do it numerically using Eq.(\ref{e23}).
For this purpose we find the eigenstates of the network,
and in particular the wavefunctions $\psi^n=A_n\sin(k_nx+\varphi_n)$
at (say) the right lead. Then we calculate the matrix elements
\begin{eqnarray}
\mathcal{I}_{nm} &=& -i \frac{e\hbar}{2\mathsf{m}}
\left(\psi^n\partial\psi^m - \partial\psi^n\psi^m\right)_{x{=}x_0}
\\
\mathcal{F}_{nm} &=&
-\lambda\frac{\hbar^2}{2\mathsf{m}}
\left(\psi^n\partial\psi^m + \partial\psi^n\psi^m
-\lambda \psi^n\psi^m \right)_{x=X_1+0}
\ee
and substitute into Eq.(\ref{e23}). The distribution
that we get for $\bm{G}^{31}$, as well as the
dependence of average and the variance on $\Gamma$
are presented in Fig.6. We see that $\Gamma$
reduces the fluctuations. If we are deep
in the regime $\Delta \ll \Gamma \ll \Delta_b$
the variance becomes very small and consequently
the average value becomes an actual estimate for $\bm{G}^{31}$.
This average value coincides with the ``classical"
(stochastic) result Eq.(\ref{e47}) as expected on the
basis of the derivation below.
In order to get an expression for $\bm{G}^{31}$
it is most convenient to use the FD expression Eq.(\ref{e37}).
For this purpose we have to calculate the
cross correlation function of $\mathcal{I}$ and $\mathcal{F}^1$
which we denote simply as $C(\tau)$. If we describe
the dynamics using a stochastic picture \cite{pmt} we get
that $C(\tau)$ is a sum of delta spikes:
\begin{eqnarray}
C(\tau) \ = \
e\frac{v_{\tbox{F}}}{2L} 2\mathsf{m}v_{\tbox{F}}
\left[(1-g_0) \sum_{\pm} \pm \delta(\tau\pm\tau_1) \right]
+ {....}
\ee
where $\tau_1 = (x_0-X_1) / v_{\tbox{F}}$ is the time
to go from $X_1$ to $x_0$ with the Fermi velocity $v_{\tbox{F}}$,
and the dots stand for more terms due to additional reflections.
If we integrate only over the short correlation
then we get
\begin{eqnarray}
\int_0^{\mbox{short}} C(\tau) d\tau \ \ = \ \
-e \frac{\mathsf{m}v_{\tbox{F}}^2}{L}
\left[ 1-g_0 \right]
\ee
while if we include all the multiple reflections
we get a geometric sum that leads to \cite{pmt}:
\begin{eqnarray}
\int_0^{\infty} C(\tau) d\tau \ \ = \ \
-e \frac{\mathsf{m}v_{\tbox{F}}^2}{L}
\left[ \frac{1-g_0}{g_0}\right]
\left[ \frac{g_T}{1-g_T}\right]
\ee
This leads to the result that was already mentioned
in the previous section:
\begin{eqnarray} \label{e54}
\bm{G}^{31} = -
\left[ \frac{1-g_0}{g_0}\right]
\left[ \frac{g_T}{1-g_T}\right]
\times \frac{e}{\pi}k_{\tbox{F}}
\ee
We also observe that if the scattering in the outer
region results in ``loss of memory", then by Eq.(\ref{e38})
only the short correlation survives, and we get
\begin{eqnarray}
\bm{G}^{31} = - (1-g_0) \times \frac{e}{\pi}k_{\tbox{F}}
\ee
Technically this is a special case of Eq.(\ref{e54})
with the substitution of the serial resistance
$(1{-}g_T)/g_T = (1{-}g_0)/g_0 + (1{-}0.5)/0.5$.
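Indeed, writing $r_0=(1{-}g_0)/g_0$, this substitution gives
$(1{-}g_T)/g_T = r_0 + 1$, and therefore
\begin{eqnarray}
\left[ \frac{1-g_0}{g_0}\right]
\left[ \frac{g_T}{1-g_T}\right]
\ = \ \frac{r_0}{r_0+1} \ = \ 1-g_0
\ee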
The stochastic result can be derived also using
a proper quantum mechanical calculation \cite{pmt}.
The starting point is the following (exact)
expression for the Green function:
\begin{eqnarray}
\langle x | \mathsf{G}(E) | x_0 \rangle \ \ = \ \
- \frac{i}{\hbar v_F}\sum_p A_p \mbox{e}^{ik_E L_p}
\ee
The sum is over all the possible trajectories
that connect $x_0$ and $x$. More details on
this expression and the subsequent calculation
can be found in Ref.\cite{pmt}. The final result
for the {\em average} conductance coincides
with the classical stochastic result.
\section{Summary}
Linear response theory is the major tool for study of
driven systems. It allows us to explore the crossover
from the strictly adiabatic ``geometric magnetism" regime
to the non-adiabatic regime. Hence it provides
a unified framework for the theory of pumping.
\begin{itemize}
\item[$\bullet$]
``Quantum chaos" considerations in the derivation
of the Kubo formula for the case of a closed isolated
system are essential ($\Gamma\propto|\dot{X}|^{2/3}$).
\item[$\bullet$]
We have distinguished between adiabatic,
non-adiabatic and non-perturbative regimes,
depending on what is $\Gamma$ compared with
$\Delta$ and $\Delta_b$.
\item[$\bullet$]
In the strict adiabatic limit Kubo formula
reduces to the familiar adiabatic transport
expression (``geometric magnetism").
\item[$\bullet$]
A generalized Fluctuation-dissipation relation
can be derived. In the zero temperature limit
an implicit assumption in the derivation
is having a generic bandprofile as implied
by quantum chaos considerations.
\item[$\bullet$]
We also have derived an $S$ matrix expression
for the generalized conductance of
a dot-wire system, in the non-adiabatic
limit $L\rightarrow\infty$.
The result coincides with that of an open
system (BPT formula).
\item[$\bullet$]
The issue of ``quantized pumping" is analyzed
by regarding the field which is created
by ``Dirac chains". In the adiabatic regime
$Q$ can be either smaller or larger than unity,
while in the non-adiabatic regime $Q$ is less
than unity in agreement with BPT.
\item[$\bullet$]
We have analyzed pumping on networks
using Green function expressions.
The average result can be expressed
in terms of transmission probabilities.
The analog of universal conductance fluctuations
is found in the strict adiabatic regime.
The conductance becomes well defined (small dispersion)
in the non-adiabatic regime.
\item[$\bullet$]
The average over the quantum mechanical result,
which becomes the well defined conductance
in the non-adiabatic regime, coincides with the
result that had been obtained for the corresponding
stochastic model.
\end{itemize}
\section{Acknowledgments}
I have the pleasure to thank T.~Kottos and
H.~Schanz for fruitful collaboration,
and Y.~Avishai, M.~B{\"u}ttiker, T.~Dittrich,
M.~Moskalets and K.~Yakubo for discussions.
This research was supported by
the Israel Science Foundation (grant No.11/02),
and by a grant from the GIF, the German-Israeli Foundation
for Scientific Research and Development.
A traditional non-integer base $\beta$ was explored by R\'{e}nyi \cite{R} and Parry \cite{P}. Such a base represents numbers using digits not exceeding $\beta$. Every nonnegative real number can be represented as a string of digits, usually using the radix point, in such bases. Integers are usually represented as infinite strings.
A different cool concept called \textit{exploding dots} was invented by Propp \cite{JP} and popularized by Tanton \cite{JT}. For a rational base $b/a$ it allows using digits below $a+b$. The advantage of this approach is that integers can be represented by finite strings. These bases were thoroughly studied by Akiyama and others in \cite{AFS}.
In this paper, we are interested in base 3/2, which represents integers using digits 0, 1, and 2. We discovered sequence A256785 in the OEIS \cite{OEIS}, which uses digits 0 and 1, and symbol H to represent integers. We call this base, base 1.5 to differentiate it from base 3/2. We discovered an isomorphism between the two bases.
While writing the results, we stumbled on another sequence, A265316, that was even more surprising. Consider the following sequence: Take even numbers written in base 3/2 using exploding dots with digits 0, 1, and 2. Then interpret the result in ternary. When we plugged the results into the OEIS, we got sequence A265316: Consider a greedy way to divide non-negative integers into an infinite set of sequences not containing a 3-term arithmetic progression. The sequence A265316 is the set of first elements of these sequences.
Here is how this paper is arranged. In Section~\ref{sec:explodingdots} we introduce exploding dots. In Section~\ref{sec:base32} we describe a particular case of exploding dots called $2 \leftarrow 3$ machine, corresponding to base 3/2. This base uses digits 0, 1, and 2 in their expansions. In Section~\ref{sec:base15} we discuss the base 1.5 introduced in sequence A256785 which uses digits 0 and 1, and symbol H.
In Section~\ref{sec:mysteryseq} we define sequence A265316 and discuss its connections to the base 3/2. We do not completely prove the fact that these sequences are the same, but we prove a lot of properties for both sequences.
In Section~\ref{sec:differentways} we explore several natural ways to represent the same number in base 1.5. In Section~\ref{sec:isomorphism} we produce an isomorphism between the two bases 1.5 and 3/2.
This research was done by the PRIMES STEP junior group. PRIMES STEP is a program based at MIT for students in grades 6-9 to try research in mathematics.
\section{Exploding Dots}\label{sec:explodingdots}
Here we explain exploding dots. We start with a row of boxes that can be extended to the left. We label the boxes from right to left. The rightmost box is labeled zero. The second one to the right is box 1, the third to the right is box 2 and so on.
We also have an integer $b$ that is our base. Consider integer $N$. To find its value in base $b$, we place $N$ dots in box 0. Now we allow explosions. As soon as there are $b$ dots in box $k$, they, BOOM, explode. That means we remove $b$ dots from box $k$ and add one dot in the box to the left, which is, of course, numbered $k+1$. We continue exploding until nothing can explode anymore, meaning each box has fewer than $b$ dots. This process is called a $1 \leftarrow b$ machine. At the end, we write the number of dots in each box from left to right, dropping the leading zeros. The result is the representation of integer $N$ in base $b$.
For example, to calculate 5 in base 2, we start with 5 dots in the rightmost box, box 0. We can represent this state of our machine as 5. Since we have more than two dots, each pair of dots explodes adding a dot to the box directly to the left. As there are two pairs, we add two dots to box 1 and remove 4 dots from box 0. We can represent the result as 21: one dot in the rightmost box and two dots in the box to the left. Now there are two dots together in box 1; therefore, we have another BOOM, which results in base-2 representation of 5: $5_{10} = 101_2$, see Figure~\ref{fig:base2}.
\begin{figure}[ht]
\centering
\includegraphics[scale=0.7]{ExplodingDotsBase2.png}
\caption{Exploding dots show how to represent 5 in base 2}\label{fig:base2}
\end{figure}
The cool part about the exploding-dots machines is that they are easily generalizable to rational bases. The $a \leftarrow b$ machine is a machine where each time there are at least $b$ dots in a box, there is an explosion. An explosion in box $k$ wipes out $b$ dots from box $k$, while adding $a$ dots to box $k+1$. To represent an integer $N$, we start with $N$ dots in box zero. After the fireworks are over, that is, all boxes have fewer than $b$ dots, we read the number of dots from left to right starting with the leftmost non-empty box. The result is the representation of $N$ in base $b/a$. We number the digits of this representation similarly to the way we number boxes, from right to left: $d_k d_{k-1}\ldots d_1d_0$.
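For readers who like to experiment, here is a minimal Python sketch
of the $a \leftarrow b$ machine (our own illustration of the
procedure above, not a reference implementation):
\begin{verbatim}
def explode(n, a, b):
    # boxes[k] holds the number of dots in box k.
    boxes = [n]
    k = 0
    while k < len(boxes):
        while boxes[k] >= b:      # BOOM: remove b dots...
            boxes[k] -= b
            if k + 1 == len(boxes):
                boxes.append(0)
            boxes[k + 1] += a     # ...and add a dots to the left.
        k += 1
    return boxes[::-1]            # digits from left to right

print(explode(5, 1, 2))   # [1, 0, 1]: 5 is 101 in base 2
print(explode(5, 2, 3))   # [2, 2]:    5 is 22 in base 3/2
\end{verbatim}
Processing the boxes from right to left is enough, since explosions
only push dots to the left.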
For example, to calculate 5 in base 3/2, we start with 5 dots in the rightmost box, box 0. We can represent this state of our machine as 5. Since we have more than three dots we have an explosion: the number of dots in the rightmost box decreases by 3 and we add 2 dots to the box on the left. The result is 22: which is the base-3/2 representation of 5: $5_{10} = 22_{3/2}$, see Figure~\ref{fig:base3over2}.
\begin{figure}[ht]
\centering
\includegraphics[scale=0.6]{ExplodingDotsBase3Over2.png}
\caption{Exploding dots show how to represent 5 in base 3/2.}\label{fig:base3over2}
\end{figure}
\section{Base 3/2}\label{sec:base32}
The $2 \leftarrow 3$ machine is a machine where three dots explode generating two new dots in the box on the left.
For example, number 7 in base 3/2 becomes 211.
The first several numbers written in base 3/2 form sequence A024629 in the OEIS \cite{OEIS}: \[0, 1, 2, 20, 21, 22, 210, 211, 212, 2100, \ldots.\]
Here are some awesome properties of base 3/2 \cite{JP,JT}:
\begin{itemize}
\item Every integer only uses digits 0, 1, and 2.
\item Every integer greater than 1 starts with 2.
\item Every integer greater than 7 starts with 21 followed by either 0 or 2.
\item The last digit repeats in a cycle of 3, the last two digits repeat in a cycle of 9, and so on: the last $k$ digits repeat in a cycle of $3^k$ (see the numerical check after this list).
\item Removing one or several last digits of an integer in this base gives another integer in the base.
\end{itemize}
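A small Python check of the last property (our own illustration; the
conversion uses the recursion $n \mapsto 2\lfloor n/3 \rfloor$ that is
implicit in the $2 \leftarrow 3$ machine):
\begin{verbatim}
def base32(n):
    # Digits of n in base 3/2, as a string.
    digits = ""
    while n > 0:
        digits = str(n % 3) + digits
        n = 2 * (n // 3)
    return digits or "0"

# The last k digits repeat in a cycle of 3^k:
for k in (1, 2, 3):
    assert all(base32(n)[-k:] == base32(n + 3**k)[-k:]
               for n in range(1000) if len(base32(n)) >= k)
\end{verbatim}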
It is interesting to note that base 6/4 is different from base 3/2. For example, numbers in base 6/4 can have 5 as a digit, while numbers in base 3/2 cannot. For this reason, it is important not to reduce the fraction to lowest terms in this definition of the base. In particular, it is important to call this base, base 3/2, not base 1.5.
The digits in base $3/2$ represent how the integer $N$ can be decomposed into powers of $3/2$ \cite{JP}.
\begin{lemma}
If $d_kd_{k-1}\ldots d_1d_0$ is a representation of integer $N$ in base $3/2$, then
\[N = \sum_{i=0}^k d_i\frac{3^i}{2^i}.\]
\end{lemma}
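For example, $7$ is written as $211$ in base $3/2$, and indeed
\[2\cdot \frac{3^2}{2^2} + 1\cdot \frac{3}{2} + 1 = \frac{9}{2}+\frac{3}{2}+1 = 7.\]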
\section{Base 1.5}\label{sec:base15}
A definition of base 1.5 is given in sequence A256785 in the OEIS \cite{OEIS}. This base uses three symbols: 0, 1, and H. Symbol H represents 0.5. Letter H was likely chosen because of the word \textbf{h}alf. For emphasis, we use 1.5 instead of $\frac{3}{2}$ in this section where appropriate.
Here are a few rational numbers using these three digits in ascending order of the number values:
\[\text{0, H, H0, 1, H00, HH, 10, H0H, H000, H1, HH0, 1H, H01, H00H, 100, }
\ldots.\]
We call this sequence the \textit{ascending} sequence and denote it as $A_n$. The corresponding values are:
\[0, 0.5, 0.75, 1, 1.125, 1.25, 1.5, 1.625, 1.6875, 1.75, 1.875, 2, 2.125, 2.1875, 2.25,
\ldots.\]
One might wonder how it could be possible to write this sequence: that is, why are we always able to find the next number in value in an infinite set of numbers? The smallest number with $j$ digits is H00...0: it has $j-1$ zeros and value $0.5\cdot 1.5^{j-1}$. Since this value increases as $j$ increases, to find all numbers that are less than $0.5\cdot 1.5^{j-1}$, we only need to have a finite check of all the numbers with fewer than $j$ digits.
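This finite check is easy to carry out by computer. Here is a minimal
Python sketch (our own illustration) that enumerates all strings with
fewer than $J$ digits and sorts them by value:
\begin{verbatim}
from itertools import product

DIGIT = {'0': 0.0, 'H': 0.5, '1': 1.0}

def value(s):
    # Value of a base-1.5 string over the digits 0, H, 1.
    v = 0.0
    for c in s:
        v = 1.5 * v + DIGIT[c]
    return v

def ascending(J):
    # All numbers below 0.5 * 1.5**(J-1), sorted by value;
    # by the argument above they have fewer than J digits.
    strings = ['0'] + [d + ''.join(t)
                       for j in range(J - 1)
                       for d in 'H1'          # no leading zeros
                       for t in product('0H1', repeat=j)]
    bound = 0.5 * 1.5 ** (J - 1)
    return sorted((s for s in strings if value(s) < bound), key=value)

print(ascending(5))   # ['0', 'H', 'H0', '1', 'H00', 'HH', '10', ...]
\end{verbatim}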
An oddity of this base, although expected, is that not all of these numbers are integers. The indices of integers in this sequence are:
\[0, 3, 11, 25, 46, 77, 117, 169, 232, 308, 401, 508, 631, 771, 929, 1108, 1308, \ldots,\]
which is now sequence A320035.
The first few natural numbers written in this base are:
\[1 =1, \ 2 = 1\text{H}, \ 3 = 1\text{H}0, \ 4 = 1\text{H}1, \ 5 = 1\text{H}0\text{H}, \ 6 = 1\text{H}10, 7 = 1\text{H}11.\]
The weird thing about base 1.5 is that an $i$-digit number might be smaller than a $j$-digit number where $i > j$.
Other than the ascending order, there is another reasonable order to write these numbers in: we call it the \textit{dictionary} order. Consider numbers that use only the digit 0 and two other digits $a < b$. Write these numbers in increasing order, then replace $a$ by H and $b$ by 1. In this order, the numbers with more digits will go after the numbers with fewer digits. Did we mention that this base is weird? The sequence of base 1.5 rational numbers in the dictionary order is sequence $B_n$:
\[\text{0, H, 1, H0, HH, H1, 10, 1H, 11, H00, H0H, H01, HH0, HHH, HH1, }\ldots.\]
The corresponding values are:
\[0, 0.5, 1, 0.75, 1.25, 1.75, 1.5, 2, 2.5, 1.125, 1.625, 2.125, 1.875, 2.375, 2.875, \ldots.\]
The indices of integers in the dictionary ordered sequence are:
\[0, 2, 7, 21, 23, 64, 69, 71, 193, 207, \ldots.\]
This is the sequence A265316. The sequence A265316 is not related to any base. We will discuss this unexpected connection in Section~\ref{sec:mysteryseq}.
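The dictionary order and the indices of integers in it are equally
easy to generate; here is a sketch, reusing \texttt{value} and
\texttt{product} from the code above:
\begin{verbatim}
def dictionary(max_len):
    # Base-1.5 strings in the dictionary order 0 < H < 1.
    out = ['0']
    for length in range(1, max_len + 1):
        for d in 'H1':                       # no leading zeros
            for t in product('0H1', repeat=length - 1):
                out.append(d + ''.join(t))
    return out

B = dictionary(4)
print([i for i, s in enumerate(B) if value(s).is_integer()][:5])
# [0, 2, 7, 21, 23] -- the start of A265316
\end{verbatim}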
The values in the dictionary order sequence go up and down, whereas if it were an integer base, they would always go up. To be precise, the dictionary order sequence goes up, up, down in a cyclic manner.
\begin{lemma}
For integer $k \geq 0$, $B_{3k}<B_{3k+1}<B_{3k+2}$, while $B_{3k+2} > B_{3k+3}$.
\end{lemma}
\begin{proof}
Numbers $B_{3k}$, $B_{3k+1}$, and $B_{3k+2}$ only differ in the last digit. Therefore, we have $B_{3k+2} = B_{3k+1}+ 1/2 = B_{3k}+1$. This proves the first part of the lemma.
For the second part we look at the last two digits. If the last two digits of $B_{3k+2}$ are 01, then $B_{3k+3}$ has the same prefix and the last two digits H0. The statement follows from the fact that $\text{H}0 < 1$. If the last two digits of $B_{3k+2}$ are H1, then $B_{3k+3}$ has the same prefix and the last two digits 10. The statement follows from the fact that $10 < \text{H}1$.
Suppose the last two digits of $B_{3k+2}$ are 11. Let us assume that $B_{3k+2}$ ends with $m \geq 2$ ones. We denote the digit before the last run of ones in $B_{3k+2}$ as $b$, where $b$ is either 0 or H. Then number $B_{3k+3}$ differs from $B_{3k+2}$ only in the last $m+1$ digits. The last $m+1$ digits of $B_{3k+3}$ are $b+0.5$ followed by $m$ zeros. Therefore the difference $B_{3k+2}-B_{3k+3}$ is:
\[1.5^{m-1} + 1.5^{m-2} + \cdots + 1.5^{1} + 1.5^{0} - \frac{1}{2} \cdot 1.5^{m}.\]
Summing the geometric series we get
\[2 \left(1.5^{m} - 1\right) - \frac{1}{2} \cdot 1.5^{m} = 1.5^{m+1} -2.\]
The fact that $m \geq 2$ means the difference is positive.
\end{proof}
We want to introduce some marvelous sequences that show the connection between the ascending order and the dictionary order. The first sequence shows the value order when the numbers are arranged in the dictionary order. In other words, our sequence $a(n)$ is such that $a(n) = k$, if $A_k=B_n$. This is always possible because the sequences $A_n$ and $B_n$ contain the same numbers, just in a different permutation. This sequence is now A320274:
\[0, 1, 3, 2, 5, 9, 6, 11, 17, 4, 7, 12, 10, 15, 23, 19, 27, 37, 14, 21, 29, 25, 34, 46,
\ldots.\]
Similarly, we can define sequence $b(n)$ so that $b(n) = k$, if $B_k=A_n$. This is now sequence A320273:
\[0, 1, 3, 2, 9, 4, 6, 10, 27, 5, 12, 7, 11, 28, 18, 13, 30, 8, 81, 15, 29, 19,
\ldots.\]
The two sequences above are permutations of non-negative integers. Therefore, they contain every number. By definition, they are inverses of each other.
\section{The mysteries of sequence A265316}\label{sec:mysteryseq}
\subsection{The definition of A265316}
Now we go back to sequence A265316, which appeared here as the indices of integers when the numbers written with digits 0, H, and 1 are arranged in the dictionary order and interpreted in base 1.5. We call this sequence the \textit{Stanley cross-sequence}:
\[0, 2, 7, 21, 23, 64, 69, 71, 193, 207, 209, 214, \ldots.\]
Before providing the official definition of the sequence, we give several other definitions. A \textit{3-free} sequence is an integer sequence with no three elements forming an arithmetic progression. Given a start of a sequence of non-negative integers, the \textit{Stanley sequence} is the lexicographically smallest 3-free sequence with the given start \cite{OS}. The simplest Stanley sequence is the one that starts with 0, 1. It is sequence A005836 in the OEIS \cite{OEIS}:
\[0, 1, 3, 4, 9, 10, 12, 13, 27, 28, 30, \ldots.\]
Now we are ready to give a description of sequence A265316 from the OEIS \cite{OEIS}.
\begin{enumerate}
\item Consider the simplest Stanley sequence: 0, 1, 3, 4, 9, 10 and so on. We denote this sequence $S_0$. This sequence can be described as non-negative integers that do not contain 2 in their ternary representation.
\item Then we use the leftover integers and build a new minimal 3-free sequence. The new sequence is 2, 5, 6, 11, 14 and so on. This sequence is now sequence A323398 in the OEIS. We denote this new sequence $S_1$.
\item Then we exclude this sequence too and continue building a new greedy 3-free sequence $S_2$: 7, 8, 16, 17, 19, 20, 34, and so on. This sequence is now sequence A323418 in the OEIS.
\item We continue this procedure to the new sequence $S_3$: 21, 22, 24, 25, 48, 49, 51, and so on, which is now sequence A323419 in the OEIS.
\item It is known \cite{R} that 3-free sequences have density zero. Therefore, we can build an infinite number of such sequences. The starting numbers of this series of sequences form sequence A265316, which is the object of this section. That is, A265316$(n)$ is the first term of $S_n$.
\end{enumerate}
\subsection{Greedy 3-free sequences in base 3/2}
We now want to repeat the procedure of building 3-free sequences in base 3/2 using not just integers, but all finite strings containing the three digits 0, 1, and 2. We call these numbers \textit{integer-like} numbers.
It is widely known \cite{OS} that the lexicographically first 3-free sequence, that is, the simplest Stanley sequence, consists of the numbers whose representation in base 3 contains no twos.
Our situation is similar and different at the same time. Integer-like numbers that are interpreted in base 3/2 have different values than the same strings interpreted in base 3. Also, there are two different natural orders on all integer-like numbers written with 0, 1, and 2. One is the value order when they are interpreted in base 3 (or base ten), and the other is the value order when they are interpreted in base 3/2. The second order is different from the first. For example, $10 > 2$ in the first order and $10 < 2$ in the second. The first order is the dictionary order we described before. The good news: the numbers without twos are ordered the same way in both orders.
We want to show that the lexicographically first 3-free sequence of integer-like numbers is the same no matter which order, by base 3 value or by base 3/2 value, we choose:
\begin{lemma}
The sequence of integer-like numbers whose base 3/2 representation does not contain twos is a 3-free sequence. Moreover, this sequence is the lexicographically first 3-free sequence in both orders.
\end{lemma}
\begin{proof} The first part of the proof is similar to the corresponding proof for the Stanley sequence starting 0, 1.
Any integer-like number $x$ that has a digit 2 in base 3/2 can be written in the form $2b-a$, where $a$ and $b$ are integer-like numbers without a two in their $3/2$ representation and $b>a$. For example, $x=20211022021220220121111021 = 2\cdot 10111011011110110111111011-00011000001000000101111001$.
We can choose $b$ by changing every digit 2 in $x$ to 1, and we can choose $a$ by changing every digit 2 in $x$ to 0; all other digits stay the same. Notice that $a, b < x$ in both orders and $b > a$.
Next, no three different integer-like numbers without a 2 in their base 3/2 representation can be in an arithmetic progression. To see why, suppose they were in such a progression. Let the numbers be $a$, $b$, and $2b-a$. If $a$ and $b$ are without a 2, then $2b$ consists of 2s and 0s, and for $2b-a$ to have no 2 remaining after the digit-wise subtraction, the digit 2s of $2b$ would need to line up exactly with the digit 1s of $a$. But then $a$ and $b$ are the same number, leading to a contradiction. This is the same argument as in base 10. As there are no carries in the argument, it works in any base.
We showed that the sequence of integer-like numbers without twos in base 3/2 is a 3-free sequence. Now we need to show that it is lexicographically the first in both orderings.
The sequence starts with 0 and 1 in both cases.
We continue by induction. Assume by induction that the first $n$ terms of the sequence are the first $n$ integer-like numbers without a 2 in base 3/2, the $n$-th number being $y$. We know that the next number $z$ without a 2 is valid, and we must prove that it is the smallest valid choice in both orderings. Suppose it is not, and the next term is instead a number $x$ between $y$ and $z$. Then $x$ must contain a 2. As we saw before, $x$ can be represented as $2b-a$ for some numbers $b$ and $a$ without a 2. As $a, b<x$ in both orderings, $a$ and $b$ must both be among the first $n$ terms of the sequence. Then $a$, $b$, $x$ form a 3-term arithmetic progression, leading to a contradiction. Therefore, the next integer-like number without a 2 in base 3/2 is the next term of the sequence.
\end{proof}
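The greedy construction in the proof is easy to test by brute force. The following sketch is ours (\texttt{value32} is an invented helper for the base 3/2 value of a string over 0, 1, and 2); it builds the lexicographically first 3-free sequence in both orders and checks that it consists exactly of the strings without twos.
\begin{verbatim}
from fractions import Fraction
from itertools import product

def value32(s):
    """Base 3/2 value of a string over the digits 0, 1, 2."""
    v = Fraction(0)
    for ch in s:
        v = v * Fraction(3, 2) + int(ch)
    return v

L = 6
pool = ['0'] + [f + ''.join(r) for m in range(1, L + 1)
                for f in '12' for r in product('012', repeat=m - 1)]
cut = Fraction(3, 2) ** L                  # smallest value of any longer string
pool_cut = [s for s in pool if value32(s) < cut]

for candidates in (sorted(pool, key=lambda s: (len(s), s)),   # dictionary order
                   sorted(pool_cut, key=value32)):            # value order
    vals, chosen = set(), []
    for s in candidates:
        v = value32(s)
        if not any(2 * b - v in vals or 2 * v - b in vals for b in vals):
            vals.add(v)                    # adding s creates no 3-term AP
            chosen.append(s)
    assert set(chosen) == {s for s in candidates if '2' not in s}
\end{verbatim}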
We denote the sequence of integer-like numbers that contain only zeros and ones in base 3/2 and arranged in the dictionary order as $\mathcal{S}_0$. Now we want to consider a set of sequences $\mathcal{S}_k$, where $\mathcal{S}_k = \mathcal{S}_0 +2k$ in base 3/2. We show several properties of these sequences.
\begin{lemma}
\begin{enumerate}
\item Each sequence $\mathcal{S}_k$ is 3-free.
\item Sequences $\mathcal{S}_k$ do not overlap.
\item Every integer-like number belongs to one of the sequences.
\item $\mathcal{S}_k$ is the lexicographically first sequence with no 3-term arithmetic progression chosen from the set of numbers $\cup_{i \geq k} \mathcal{S}_i$ when we use the value order.
\end{enumerate}
\end{lemma}
\begin{proof}
1. Each sequence $\mathcal{S}_k$ does not contain a 3-term arithmetic progression. This follows from the fact that each sequence is a constant plus $\mathcal{S}_0$ and $\mathcal{S}_0$ does not contain a 3-term arithmetic progression.
2. Sequences $\mathcal{S}_k$ do not overlap. Consider an element $a_i$ in $\mathcal{S}_0$. We start by showing that $a_i + n$, for any integer $n > 1$, does not belong to $\mathcal{S}_0$. Indeed, when we add an integer $n > 1$, we either get 2 as the last digit, or we have a carry, and a carry always generates a two. That means that in any sequence $\mathcal{S}_k$ no two numbers differ by an integer greater than 1. Hence $\mathcal{S}_k$ and $\mathcal{S}_j$ do not overlap for any $k \neq j$.
3. Every integer-like number belongs to one of the sequences. This can be proven by showing that every integer-like number in base 3/2 comes down to a number with only ones and zeros by repeatedly subtracting 2. Indeed, if a number contains a 2, then we can always subtract 2 from it and get a non-negative integer-like number. We continue subtracting 2 while there is a 2 in the number. Since the value decreases by 2 at each step and stays non-negative, the process is finite, and it ends with an integer-like number consisting only of ones and zeros in its base 3/2 representation.
4. We know that $\mathcal{S}_0$ is lexicographically first. We proceed by induction. Suppose that for every $j \leq k$ the sequence $\mathcal{S}_j$ is the lexicographically first 3-free sequence by value in the set $\cup_{i \geq j} \mathcal{S}_i$. Consider the lexicographically first 3-free sequence $F$ in the set $\cup_{i \geq k+1} \mathcal{S}_i$. Notice that every element of this set has to contain a 2 in its base 3/2 representation. That means, if we subtract 2 from every element of $F$, we get integer-like numbers in the set $\cup_{i \geq k} \mathcal{S}_i$. The resulting sequence has to be lexicographically first there, so it has to equal $\mathcal{S}_k$. Thus, the sequence $F$ has to equal $\mathcal{S}_{k+1}$.
\end{proof}
We later need one more property of the sequences $\mathcal{S}_i$ written in base 3/2. But first, a definition. We say that a set of numbers with digits 0, 1, and 2 satisfies the \textit{two-out-of-three property} if the following holds:
\begin{itemize}
\item Only two of the three possible digits occur as the last digit of numbers in the set, and both of them occur.
\item For any string $x$, the numbers in the set that end with $x$ have only two possibilities for the digit before $x$, and both possibilities are realized.
\end{itemize}
\begin{lemma}
Each sequence $\mathcal{S}_n$, when written in base 3/2, satisfies the two-out-of-three property.
\end{lemma}
\begin{proof}
Sequence $\mathcal{S}_0$ consists of numbers using only zeros and ones, so it satisfies the two-out-of-three property. Sequence $\mathcal{S}_n$ is constructed by adding the same number $x = 2n$ to all elements of $\mathcal{S}_0$, considered in base 3/2.
Now we start from the last digit and use induction. The last digit has two possibilities: the last digit of $x$ and the last digit of $x+1$. Consider the numbers in $\mathcal{S}_n$ that end with the same string of $m$ digits, denoted here by $z$. When we subtract $x$ from all these numbers, we get a set of numbers with a fixed string $y$ of length $m$ at the end, preceded by either 0 or 1. When we add $x$ back, we get exactly two possibilities for the digit before the string $z$, and both of them are realized.
\end{proof}
\subsection{Write $\mathcal{S}_i$ in base 3/2 and interpret them in base 3}
First recall a famous fact about 3-term integer arithmetic progressions.
\begin{lemma}
The last digits of a 3-term arithmetic progression written in base 3 are either all the same or all different.
\end{lemma}
Before proceeding we need the following statement about sequences $\mathcal{S}_i$.
\begin{lemma}
Sequence $\mathcal{S}_k$ written in base 3/2 and then interpreted in base 3 is 3-free.
\end{lemma}
\begin{proof}
Suppose the sequence $\mathcal{S}_k$ written in base 3/2 and then interpreted in base 3 contains an arithmetic progression $a$, $b$, $c$. By the two-out-of-three property, there are only two possibilities for the last digit, so $a$, $b$, $c$ must have the same last digit in base 3. We subtract this digit and divide by 3, obtaining numbers $a'$, $b'$, $c'$, which are the numbers $a$, $b$, $c$ without the last digit. They again form an arithmetic progression. By the two-out-of-three property, as the last digit is fixed, there are only two possibilities for the digit before it, which is now the last digit of $a'$, $b'$, $c'$. By the previous lemma, this last digit is the same for all three numbers. Continuing in this way, we get that the numbers $a$, $b$, $c$ are equal to each other, leading to a contradiction.
\end{proof}
The following statement, which we did not prove, is the last step needed for our conjecture: each sequence $\mathcal{S}_n$ is the lexicographically first 3-free sequence on the available numbers in the base 3 order.
Now we state our main conjecture.
\begin{conjecture}
Sequence $\mathcal{S}_k$ written in base 3/2 but interpreted in base 3 is sequence $S_k$.
\end{conjecture}
\begin{corollary}
The Stanley cross-sequence can be defined as follows: take the even numbers, write them in base 3/2, and interpret the resulting strings as numbers written in ternary.
\end{corollary}
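Modulo the conjecture, the corollary is a one-liner to test. The sketch below is ours (\texttt{to\_base32} is an invented name; it implements the division-by-3/2 algorithm for integers) and reproduces the initial terms of A265316 listed earlier.
\begin{verbatim}
def to_base32(n):
    """Base 3/2 representation of a non-negative integer, digits 0, 1, 2."""
    if n == 0:
        return '0'
    digits = ''
    while n > 0:
        d = n % 3                 # the last digit agrees with n modulo 3
        digits = str(d) + digits
        n = 2 * (n - d) // 3      # remove the digit and divide by 3/2
    return digits

print([int(to_base32(2 * n), 3) for n in range(10)])
# [0, 2, 7, 21, 23, 64, 69, 71, 193, 207] -- the start of A265316
\end{verbatim}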
Now we want to spend some time discussing sequences $\mathcal{S}_i$, for $i = 1,2,3$ in more detail.
\subsection{Examples}
We can easily describe the first few sequences $\mathcal{S}_i$ in terms of their representation in base 3/2:
\begin{itemize}
\item $\mathcal{S}_0$ are numbers written with 0 and 1: 0, 1, 10, 11, 100, and so on.
\item $\mathcal{S}_1$ are numbers that contain exactly one 2 that might be followed by zeros: 2, 12, 20, 102, 112, 120, 200, and so on.
\item $\mathcal{S}_2$ consists of the numbers whose last digit is 1 or 2 and whose remaining prefix is an element of $\mathcal{S}_1$: 21, 22, 121, 122, 201, 202, and so on.
\item $\mathcal{S}_3$ consists of the numbers whose last two digits are from the set $\{10,11,20,21\}$ and whose remaining prefix is an element of $\mathcal{S}_1$. Equivalently, we can say that the numbers in $\mathcal{S}_3$ have 0 or 1 as the last digit and the remaining prefix is an element of $\mathcal{S}_2$: 210, 211, 220, 221, 1210, 1211, and so on.
\end{itemize}
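The description of $\mathcal{S}_1$ can also be confirmed mechanically. Continuing the sketch that followed the greedy lemma (reusing \texttt{value32} and \texttt{pool\_cut} from there), the test below checks that the numbers at distance 2 above $\mathcal{S}_0$ are exactly the strings with one digit 2 followed only by zeros.
\begin{verbatim}
S0_vals = {value32(s) for s in pool_cut if '2' not in s}

def looks_like_S1(s):
    """Exactly one digit 2, followed only by zeros (if anything)."""
    return s.count('2') == 1 and set(s.partition('2')[2]) <= {'0'}

for s in pool_cut:
    assert (value32(s) - 2 in S0_vals) == looks_like_S1(s)
\end{verbatim}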
\section{Different ways to write numbers in base 1.5}\label{sec:differentways}
Non-integer bases have been known for a long time. Consider a number $\beta > 1$. The value of $x=d_n\dots d_2d_1d_0.d_{-1}d_{-2}\dots d_{-m}$ is
\[\begin{aligned}
x&=\beta ^{n}d_{n}+\cdots +\beta ^{2}d_{2}+\beta d_{1}+d_{0}\\
&\qquad +\beta ^{-1}d_{-1}+\beta ^{-2}d_{-2}+\cdots +\beta ^{-m}d_{-m}.
\end{aligned}
\]
This representation of $x$ is called a $\beta$-expansion and was introduced by R\'{e}nyi in 1957 \cite{R} and later studied by Parry \cite{P}. In such representations the numbers $d_i$ are non-negative integers less than $\beta$. Every real number has at least one $\beta$-expansion.
What we study is different: in our expansions we are more flexible. In the case of base 3/2 we also allow the digit 2 in our expansions. In the case of base 1.5 we allow a non-digit H in our expansions.
There is a famous greedy algorithm to write an integer $N$ in an integer base $b$. This algorithm was adapted for $\beta$-expansions by R\'{e}nyi \cite{R}.
We are trying to represent $N$ as $d_kd_{k-1}\ldots d_1d_0$. In this algorithm we start by finding the left-most digit $d_k$:
\begin{enumerate}
\item \textbf{Find the total number of digits.} Find the largest power $k$ so that $N \geq b^k$. Then $N$ has $k+1$ digits in base $b$.
\item \textbf{Find the left-most digit.} Then the digit $d_k$ is equal to $\lfloor N/b^k \rfloor$.
\item \textbf{Repeat.} Consider the new integer $N - d_kb^k$ and continue recursively using zeros when necessary.
\end{enumerate}
This algorithm chooses the lexicographically largest expansion in case there are several expansions. There are other expansions that use only the digits 0 and 1. They were studied by many authors, starting with Kempner \cite{K}.
\textbf{Use only zeros and ones.} We find $k$ so that $1.5^k < N < 1.5^{k+1}$. The first non-zero digit is $d_k=1$. Repeat with the difference. For example,
\[2= 10.010000010010010100000000010000001\ldots.\]
This is the same representation as used by R\'{e}nyi \cite{R}; it is the lexicographically largest expansion. Another famous expansion is the lazy expansion:
\[2=0.11111\ldots.\]
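For completeness, here is a minimal implementation of the greedy choice with digits 0 and 1 (ours; the function name and the number of printed fractional digits are arbitrary). It reproduces the expansion of 2 displayed above.
\begin{verbatim}
from fractions import Fraction

def greedy01(x, frac_digits=33):
    """Greedy (largest) base 1.5 expansion of x with digits 0 and 1."""
    x = Fraction(x)
    k = 0
    while Fraction(3, 2) ** (k + 1) <= x:   # most significant position
        k += 1
    out = []
    for i in range(k, -frac_digits - 1, -1):
        p = Fraction(3, 2) ** i
        d = 1 if x >= p else 0              # greedy digit choice
        out.append(str(d))
        x -= d * p
        if i == 0:
            out.append('.')
    return ''.join(out)

print(greedy01(2))   # 10.010000010010010100000000010000001
\end{verbatim}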
As we can also use H for a digit, the number of possible expansions increases. Here are some natural examples.
\begin{enumerate}
\item \textbf{A finite expansion for integers.} The base is designed in such a way that any integer has a finite expansion. For example,
\[2 = 1H.\]
We show later in this section how to represent any integer as a finite expansion.
\item \textbf{Use only Hs and zeros.} Find $k$ so that $\frac{1}{2}\cdot 1.5^k < N < \frac{1}{2}\cdot 1.5^{k+1}$. The first non-zero digit is $d_k$. It follows that $N < 1.5^k$. Therefore, $d_k = $H. In this representation every number is represented using only 0 and H. For example,
\[2=\text{H}000.0\text{H}00\text{H}00\text{H}000\text{H}000\text{H}00\text{H}00\text{H}\ldots.\]
This expansion chooses the first digit as far to the left as possible. It gives the lexicographically largest expansion when using digits 0, 1, and H.
\item \textbf{Use only Hs and ones.} We can use the fact that 1=0.HHH..., which we represent as 0.$\overline{\mbox{H}}$. Similarly $1.5 = \text{H}.\overline{\mbox{H}}$, and so on. The $k$-th power of 1.5, for $k > 0$, is represented with $k$ digits H before the radix point followed by $\overline{\mbox{H}}$. If $k<0$, then $1.5^k$ is represented with $|k|$ zeros after the radix point followed by Hs. Consider a positive real number $x$ such that $1.5^k\leq x <1.5^{k+1}$. Denote the difference $x-1.5^k$ by $d$. We can represent $d$ using only zeros and Hs as above. Moreover, the most significant non-zero digit of $d$ in this representation is at a position less than $k$, as $d < 1.5^{k+1}-1.5^k=0.5\cdot 1.5^k$. When we add these representations of $d$ and $1.5^k$, we get $x$ represented with only Hs and ones starting from the most significant digit. For example, if $x = 2$, then $k=1$ and $d=0.5$. Now we need to represent 0.5 using only Hs and zeros, that is, $d= $H. Summing up, with this algorithm 2 is represented as
\[2=1.\text{HHHHHHHHHHHHHHH}\ldots\]
\item \textbf{Make the first digit as close to the number as possible.} Consider the set of numbers of the form $\frac{1}{2}\cdot 1.5^k$ and $1.5^k$ over all integers $k$. Pick the largest number from this set that does not exceed $N$. Repeat with the difference. With this algorithm 2 is represented as
\[2= \text{H}000.00100000\text{H}000\text{H}000\text{H}\ldots.\]
\end{enumerate}
Notice that the second algorithm and R\'{e}nyi's algorithm are connected. If we represent $x$ using only digits 0 and H, then by replacing H with 1 we get R\'{e}nyi's representation of $2x$ with only 0 and 1.
We now look at the numbers whose representations do not extend past the radix point. We already studied such numbers in base 3/2. Similarly, we call such numbers in base 1.5 \textit{integer-like}. An integer-like number is not necessarily an integer, but every non-negative integer has an integer-like representation.
It is well-known that while infinite $\beta$-expansions are not unique, finite $\beta$-expansions are unique. A similar statement is true in base 1.5.
\begin{lemma}
No two different integer-like representations in base 1.5 have the same value.
\end{lemma}
\begin{proof}
Consider two different representations $a_ka_{k-1}\ldots a_0$ and $b_jb_{j-1}\ldots b_0$ of the same value. By padding the shorter representation with leading zeros, we may assume that $j = k$. If $a_0 = b_0 \neq 0$, we can replace $a_0$ and $b_0$ in both numbers by zero, and still have two different representations with the same value. If $a_0 = b_0 = 0$, we can remove the last digit from both numbers while still having two different numbers with the same value. That means we can assume that $a_0 \neq b_0$. Given that these numbers have the same value, we write:
\[\sum_{i=0}^k a_i\frac{3^i}{2^i} = \sum_{i=0}^k b_i\frac{3^i}{2^i},\]
where we replace H with 1/2.
Now we multiply both sides by $2^{k+1}$. We get
\[\sum_{i=0}^k 2a_i3^i2^{k-i} = \sum_{i=0}^k 2b_i3^i2^{k-i}.\]
Both sides are integers. By taking this equation modulo 3, we get that
\[2^{k+1}a_0 \equiv 2^{k+1}b_0 \pmod 3.\]
Since $2^{k+1}$ is invertible modulo 3 and $2a_0, 2b_0 \in \{0, 1, 2\}$, it follows that
\[a_0 = b_0.\]
This contradicts our assumption that $a_0 \neq b_0$ and proves the lemma.
\end{proof}
The sequence of natural numbers in base 1.5 starts as 0, 1, 1H, 1H0, 1H1, 1H0H, 1H10, 1H11, 1H0HH.
An algorithm for incrementing a number by 1 in base 1.5:
\texttt{Start at the last digit:}
\texttt{if 0: change to 1, stop}
\texttt{if 1: change to H, carry 1}
\texttt{if H: change to 0, carry 1}
\texttt{move one digit to the left}
In other words, find the rightmost zero; if there are no zeros, pad one zero at the beginning. Then replace the digits starting from and including that zero in a cyclic order: $0 \rightarrow 1 \rightarrow H \rightarrow 0$. For example, the representation of 7 is 1H11. We pad it with 0, to get 01H11, then cycle these digits to 1H0HH, which is the base 1.5 representation of 8. As another example, the representation of 22 is 1H100H1, so 23 must be 1H1010H.
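The algorithm is short enough to state as runnable code. The sketch below (ours) implements the digit cycling with carries and re-derives the two examples.
\begin{verbatim}
def increment(s):
    """Add 1 to a base 1.5 integer written with digits 0, H, 1."""
    nxt = {'1': 'H', 'H': '0'}          # 1 -> H and H -> 0 both carry
    digits = list(s)
    i = len(digits) - 1
    while i >= 0 and digits[i] != '0':  # cycle digits until the first zero
        digits[i] = nxt[digits[i]]
        i -= 1
    if i >= 0:
        digits[i] = '1'                 # 0 -> 1 stops the carry
    else:
        digits.insert(0, '1')           # pad a zero and turn it into 1
    return ''.join(digits)

rep = '0'
for n in range(22):
    rep = increment(rep)
print(rep, increment(rep))              # 1H100H1 1H1010H, i.e., 22 and 23
\end{verbatim}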
\section{Isomorphism}\label{sec:isomorphism}
There are a lot of similarities between bases 3/2 and 1.5. Is there a connection between the two bases?
\begin{theorem}
The representation of any number in base 1.5 coincides with the representation of the number with twice its value in base $\frac{3}{2}$, except that the digits 0, H, 1 are replaced by 0, 1, 2 correspondingly.
\end{theorem}
\begin{proof}
We start by noticing that multiplying the digits 0, H, 1 by 2 produces 0, 1, 2 correspondingly. Now consider a number $x$ in base 1.5. Its base 1.5 representation expresses $x$ as a sum of powers of $3/2$ with coefficients 0, 1/2, and 1. Multiplying by 2, we get a representation of $2x$ as a sum of powers of $3/2$ with coefficients 0, 1, and 2. As such sums of powers are exactly the representations in the two bases, and the representation is unique, the theorem is proven.
\end{proof}
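Combining the earlier sketches (\texttt{value} for base 1.5, \texttt{value32} for base 3/2, and \texttt{increment}), the theorem can be spot-checked for the first few hundred integers:
\begin{verbatim}
def double_map(s):
    """The digit map 0 -> 0, H -> 1, 1 -> 2 of the theorem."""
    return s.translate(str.maketrans('0H1', '012'))

rep = '0'                                  # base 1.5 representation of n
for n in range(500):
    assert value(rep) == n
    assert value32(double_map(rep)) == 2 * n
    rep = increment(rep)
\end{verbatim}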
For example, 2 in base 1.5 is 1H. That means 4 in base 3/2 is 21. In addition, we know other ways to represent 2 in base 1.5. We can use them to show how 4 can be represented in base 3/2.
\begin{enumerate}
\item \textbf{Only 0 and 1.}
\[4=\text{1}000.0\text{1}00\text{1}00\text{1}000\text{1}000\text{1}00\text{1}00\text{1}\ldots.\]
\item \textbf{Only 0 and 2.}
\[4= 20.020000020020020200000000020000002\ldots,\]
\item \textbf{Only 1 and 2.}
\[4=2.1111111111111111111111\ldots,\]
\item \textbf{The smallest leftover.}
\[4= 1000.00200000100010001\ldots.\]
\end{enumerate}
Because of the isomorphism, we can rewrite the properties of base 3/2 from Section~\ref{sec:base32} in our base 1.5:
\begin{itemize}
\item Every integer only uses digits 0, H, and 1.
\item Every positive integer starts with 1.
\item Every integer bigger than 3 starts with 1H followed by either 0 or 1.
\item The last digit repeats in a cycle of 3 numbers, the last two digits repeat in a cycle of 9 numbers, and so on. The last $k$ digits repeat in a cycle of $3^k$ numbers.
\item Removing one or several last digits of an integer in this base gives another integer in the base.
\end{itemize}
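The cyclic repetition of the trailing digits (the last property above) is easy to confirm with the \texttt{increment} sketch from the previous section; padding with leading zeros does not change the value.
\begin{verbatim}
reps, rep = [], '0'
for n in range(100):
    reps.append(rep.rjust(8, '0'))      # pad with leading zeros
    rep = increment(rep)
assert all(reps[n][-1:] == reps[n + 3][-1:] for n in range(97))
assert all(reps[n][-2:] == reps[n + 9][-2:] for n in range(91))
\end{verbatim}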
Here are some other parallels. Sequence A244040 is defined as the sum of digits of $n$ in fractional base 3/2. With respect to fractional base 1.5, A244040$(2n)$ is twice the digit sum of $n$ in base 1.5, where H is counted as 1/2.
Also, sequence A256785 is related to base 1.5. It lists the numbers that have an integer digit sum in base 1.5: 1, 5, 11, 14, 20, 21, 22, 23, 26, 29, 30, 31, $\ldots$. Equivalently, these are the numbers that have an even number of digits H in their representation. On the other hand, $2\cdot$ A256785 is the sequence of even numbers that have an even digit sum in base 3/2.
As another example, consider sequence A320035, which provides the indices of integers among the integer-like numbers written in ascending order in base 1.5. The same sequence gives the indices of even integers among the integer-like numbers written in ascending order in base 3/2. The corresponding sequence for all integers in base 3/2 is now sequence A320272:
\[0, 1, 3, 6, 11, 17, 25, 34, 46, 60, 77, 96, 117, \ldots.\]
By the way, if we use the dictionary order, then the indices of integers among the integer-like numbers in base 3/2 form sequence A261691:
\[0, 1, 2, 6, 7, 8, 21, 22, 23, 63, 64, 65, 69, 70, 71, 192, 193, 194, 207, 208, \ldots.\]
The similar sequence in base 1.5 consists of every other term of A261691:
\[0, 2, 7, 21, 23, 64, 69, 71, 193, 207, \ldots.\]
It is our mystery sequence A265316.
\section{Acknowledgements}
We are grateful to the PRIMES STEP program for allowing us to do this research.
|
2,869,038,153,987 | arxiv | \section{Introduction}
\label{sec:intro}
\begin{figure}
\centering
\resizebox{\columnwidth}{!}{
\includegraphics[width=\linewidth]{PPE-R-nonyl}}
\caption{Chemical structure of poly {\em para} phenylene ethynylene\xspace (poly-PPE). In this work $R$ indicates nonyl chains. $n$ is the number of repeat units, or the degree of polymerization.}
\label{fig:PPE-chemical-structure}
\end{figure}
Conjugated polymers have attracted a lot of interest due to their variable functional properties and application potential, e.g., in biochemical sensors~\cite{gaylord_dna_2002,kushon_detection_2002,harrison_amplified_2000,mcquade_conjugated_2000,kim_sensing_2005,liu_fluorescence_2005}, lasers~\cite{hide_semiconducting_1996}, light emitting diodes (LEDs)~\cite{ho_molecular-scale_2000,zhang_light-emitting_1993}, organic transistors~\cite{kaur_solvation_2007}, and photovoltaic devices~\cite{shaheen_organic-based_2005,brabec_polymerfullerene_2010}. The ease of processability and band gap tunability of polymeric semiconductors facilitates the realization of this potential, since it provides the opportunity for a targeted manipulation of electronic and morphological properties of single polymer chains and their aggregates. This, in turn, can be achieved by synthetic strategies, exploitation of properties of functional side chains, and/or solvent-induced transitions. One specific example in which both morphological and electro-optical properties of polymers are purposefully modified is the formation of highly fluorescent conjugated polymer dots for fluorescence imaging in live cells~\cite{wu_multicolor_2008}. Less toxicity together with flexibility and biocompatibility make these {\em polydots} attractive substitutes for their inorganic counterparts~\cite{halkyard_evidence_1998,tuncel_conjugated_2010}.
Among fluorescent polymers, poly {\em para} phenylene ethynylenes\xspace (poly-PPE) (see the chemical structure in~\Fig{PPE-chemical-structure}) are a class of strongly conjugated polymers with a rigid backbone and absorption and emission of light tunable from the ultraviolet (absorption) to the visible (emission) range~\cite{halkyard_evidence_1998}. In particular, PPEs can be used as fluorescence sensors because their fluorescence intensity is sensitive to the presence of other co-solutes. Due to the significance of polymer conformations for the photophysical properties of functionalized PPEs, it is desirable to steer the tendency of the polymer to stay in single strands or to aggregate in a particular solvent by rational manipulations~\cite{yue_evolution_2008}. It has been demonstrated that modification of side chain functionalization by, e.g., substituting surfactants with varying degree of hydrophobicity and solute-solvent or solvent-air interactions, can lead to controlled morphologies and, concomitantly, optical properties. In flexible and semi-flexible polymers conjugation along a single chain may be broken~\cite{vukmirovi_charge_2008,vukmirovi_charge_2009,mcmahon_ad_2009} due to a substantial out-of-plane torsion angle between two adjacent repeat units. However, such a simple criterion may be insufficient to describe and interpret the complex interplay between the actual chemistry of the backbone, functionalization by side chains, and solute-solvent interactions and their cumulative effect on the localization characteristics of excitations and hence the electronic and optical properties of the polymer.
In this study, a combined quantum-classical (QM/MM) approach is used to investigate optical properties of 2,5-dinonyl poly {\em para} phenylene ethynylene\xspace oligomers in dilute solutions with toluene and water, respectively, as model systems. Conformations of the PPE chains are explored by using MD simulations. Electronic excitations are calculated based on many-body Green's functions theory within the $GW$ approximation and the Bethe-Salpeter equation ($GW$-BSE). The use of the latter technique in traditional quantum-chemical applications has recently increased~\cite{blase_first-principles_2011,baumeier_frenkel_2012,marom_benchmark_2012,van_setten_gw-method_2013}, not least due to its accurate prediction of both localized Frenkel and bimolecular charge transfer excitations~\cite{blase_charge-transfer_2011,baumeier_excited_2012,baumeier_electronic_2014}. Linking $GW$-BSE to a classical environment, represented at atomistic resolution by a polarizable force field, allows for the determination of optical properties in realistic environments from the self-consistent solution of the coupled QM/MM system. With this approach, it is possible to disentangle the conformational (as a result of side chain-solvent interactions) and electronic (due to local electric fields and polarization effects) contributions to the absorption spectra.
The rest of this paper is organized as follows: The methodologies and computational details are described in \sect{methodology}. The resulting optical properties for isolated oligomers with up to ten repeat units and their sensitivity to structural details are presented in \sect{results_vacuum}, and structural properties of dinonyl-10-PPE in solution are discussed in~\sect{results_structure}. In \sect{results_qmmm}, the respective optical absorption spectra resulting from $GW$-BSE/MM calculations are discussed with a focus on the electronic contributions of the environment and the conformational dynamics. A brief summary in \sect{summary} concludes the paper.
\section{Methodology}
\label{sec:methodology}
Classical molecular dynamics (MM/MD) simulations were performed using an OPLS-type (optimized potentials for liquid simulations)~\cite{jorgensen_optimized_1984,jorgensen_development_1996,watkins_perfluoroalkanes:_2001} force field. The parameters were taken from Refs.~\cite{maskey_conformational_2011,maskey_internal_2013}, in which PCFF (polymer consistent force field) parameters were converted to OPLS form. Modifications of the torsional potential parameters of the phenylene rings, as in Ref.~\cite{bagheri_getting_2016}, were employed. To study the behavior of PPE polymers in explicit solvents, water was described using the SPC/E~\cite{berendsen_missing_1987} model and toluene using the OPLS force field.
Geometric mixing rules [$\sigma_{ij}=(\sigma_{ii}\sigma_{jj})^{\frac{1}{2}}$ and $\epsilon_{ij}=(\epsilon_{ii}\epsilon_{jj})^{\frac{1}{2}}$] for Lennard-Jones (LJ) diameters ($\sigma$) and LJ energies ($\epsilon$) were used for atoms of different species according to the OPLS conventions~\cite{jorgensen_optimized_1984,jorgensen_development_1996,watkins_perfluoroalkanes:_2001}. The reader is referred to Ref.~\cite{wong-ekkabut_good_2016} for a more in-depth discussion of mixing rules. Non-bonded interactions between atom pairs within a molecule separated by one or two bonds were excluded. Interactions were reduced by a factor of $1/2$ for atoms separated by three bonds and more. Simulations were run using GROMACS version 5~\cite{van_der_spoel_gromacs:_2005}. A \unit[1.2]{nm} cutoff was employed for the real-space part of electrostatics and for Lennard-Jones interactions. The long-range electrostatics was calculated using particle-mesh Ewald (PME)~\cite{darden_particle_1993,essmann_smooth_1995} with the reciprocal-space interactions evaluated on a grid with \unit[0.16]{nm} spacing and cubic interpolation of order 4. The importance of a proper treatment of electrostatics in MM/MD simulations is discussed in detail in Ref.~\cite{cisneros_classical_2014}. The velocity-Verlet algorithm~\cite{verlet_computer_1967} was employed to integrate the equations of motion with a \unit[1]{fs} time step. A Langevin thermostat~\cite{grest_molecular_1986} with a \unit[100]{fs} damping was used to keep the temperature of the system at \unit[300]{K}. The systems were energy minimized using the steepest descent algorithm. \unit[100]{ps} simulations in the constant particle number, volume, and temperature (NVT) ensemble at \unit[300]{K} were performed on the energy-minimized systems. The simulation box size was $\unit[(15\times 13\times 13)]{nm^3}$ for dinonyl-10-PPE\xspace. Simulations were continued in the constant particle number, pressure, and temperature (NpT) ensemble at \unit[300]{K} and \unit[1]{bar}, controlled by a Parrinello-Rahman~\cite{parrinello_polymorphic_1981} barostat with a coupling time constant of \unit[2.0]{ps}. Molecular visualizations were done using the Visual Molecular Dynamics (VMD) software~\cite{humphrey_vmd:_1996}.
The excited-state electronic structure is evaluated using many-body Green's functions theory within the $GW$ approximation and the Bethe-Salpeter equation (BSE). The equations of motion for Green's functions are the basis for this approach, which contains both the nonlocal, energy-dependent electronic self-energy $\Sigma$ and the electron-hole interaction leading to the formation of excitons, the latter described by the BSE. The first of the three steps of the procedure is the determination of molecular orbitals and energies on the level of density-functional theory (DFT) by solving the Kohn-Sham equations
\begin{equation}
\left\{ -\frac{\hbar^2}{2m}\nabla^2 + V_\text{PP}({\ensuremath{\mathbf{r}}}) + V_H({\ensuremath{\mathbf{r}}}) +V_\text{xc}({\ensuremath{\mathbf{r}}})\right\}\psi_n^\text{KS}({\ensuremath{\mathbf{r}}}) = \varepsilon_n^\text{KS} \psi_n^\text{KS}({\ensuremath{\mathbf{r}}}).
\end{equation}
Here, $V_\text{PP}$ is a pseudo-potential (or effective-core potential), $V_H$ the Hartree potential, and $V_\text{xc}$ the exchange-correlation potential. Single-particle excitations are then obtained within the $GW$ approximation of many-body Green's functions theory, as introduced by Hedin and Lundqvist~\cite{hedin_effects_1969}, by substitution of the energy-dependent self-energy operator $\Sigma({\ensuremath{\mathbf{r}}},{\ensuremath{\mathbf{r}}}',E)$ for the DFT exchange-correlation potential, giving rise to the quasi-particle equations
\begin{equation}
\left\{ -\frac{\hbar^2}{2m}\nabla^2 + V_\text{PP}({\ensuremath{\mathbf{r}}}) + V_H({\ensuremath{\mathbf{r}}})\right\}\psi_n^\text{QP}({\ensuremath{\mathbf{r}}}) + \int{\Sigma({\ensuremath{\mathbf{r}}},{\ensuremath{\mathbf{r}}}',\varepsilon_n^\text{QP})\psi_n^\text{QP}({\ensuremath{\mathbf{r}}}')d{\ensuremath{\mathbf{r}}}'} = \varepsilon_n^\text{QP} \psi_n^\text{QP}({\ensuremath{\mathbf{r}}}).
\end{equation}
The self-energy operator is evaluated as
\begin{equation}
\Sigma({\ensuremath{\mathbf{r}}},{\ensuremath{\mathbf{r}}}',E) = \frac{i}{2\pi} \int{e^{-i\omega 0^+}G({\ensuremath{\mathbf{r}}},{\ensuremath{\mathbf{r}}}',E-\omega)W({\ensuremath{\mathbf{r}}},{\ensuremath{\mathbf{r}}}',\omega)\,d\omega} ,
\end{equation}
where
\begin{equation}
G({\ensuremath{\mathbf{r}}},{\ensuremath{\mathbf{r}}}',\omega) = \sum_n{\frac{\psi_n({\ensuremath{\mathbf{r}}})\psi_n^*({\ensuremath{\mathbf{r}}}')}{\omega-\varepsilon_n+i0^+\text{sgn}(\varepsilon_n -\mu)}}
\label{equ:Green}
\end{equation}
is the one-body Green's function in quasiparticle (QP) approximation and $W= \epsilon^{-1} v$ is the dynamically screened Coulomb interaction, comprising the dielectric function $\epsilon$, computed within the random-phase approximation, and the bare Coulomb interaction $v$. The ground-state Kohn-Sham wave functions and energies are used to determine both $G$ and $W$. Since DFT underestimates the fundamental HOMO-LUMO gap, the self-energy and the resulting QP energies may deviate from self-consistent results. To avoid such deviations, we employ an iterative procedure in which $W$ is calculated from a scissor-shifted Kohn-Sham spectrum. From the resulting QP gap, a new value for the scissor shift is determined, and this procedure is repeated until convergence is reached. For each step, the QP energy levels are iterated and the Green's function of \equ{Green}, and thus the self-energy, are updated. A one-shot $G_0W_0$ calculation from Kohn-Sham energies may differ from the iterated results by several tenths of an eV. Note that this (limited) self-consistency treatment does not change the QP structure of \equ{Green} (due to satellite structures or other consequences of a self-consistent spectral shape of $G(\omega)$).
An electron-hole excitation, e.g., resulting from photoexcitation, cannot be described in an effective single-particle picture but requires explicit treatment of a coupled two-particle system. Within the Tamm-Dancoff approximation (TDA)~\cite{note_TDA}, the electron-hole wavefunction is given by $\Phi_S({\ensuremath{\mathbf{r}}}_e,{\ensuremath{\mathbf{r}}}_h) = \sum_\alpha^\text{occ}{\sum_\beta^\text{virt}{ A^S_{\alpha\beta}\psi_\beta({\ensuremath{\mathbf{r}}}_e)\psi^*_\alpha({\ensuremath{\mathbf{r}}}_h)}}$
where $\alpha$ and $\beta$ denote the single-particle occupied and virtual orbitals, respectively, and $A_{\alpha\beta}$ represent the electron-hole amplitudes. These amplitudes and the associated transition energies $\Omega_S$ can be obtained by solving the Bethe-Salpeter equation
\begin{equation}
(\varepsilon_\beta - \varepsilon_\alpha)A^S_{\alpha\beta} +\sum_{\alpha'\beta'}{K^\text{eh}_{\alpha\beta,\alpha'\beta'}(\Omega_S)A^S_{\alpha'\beta'} } = \Omega_S A^S_{\alpha\beta}
\end{equation}
in which $K^\text{eh}=\eta K^{x}+K^{d}$ ($\eta=2$ for singlets, $\eta=0$ for triplets) is the electron-hole interaction kernel comprised of bare exchange ($K^x$) and screened direct terms ($K^{d})$, respectively.
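To illustrate the algebraic structure of this eigenvalue problem, the following schematic numpy sketch (our own toy example, not related to the actual implementation described below) builds the transition-space Hamiltonian from random quasi-particle levels and a random symmetric, energy-independent stand-in for the kernel and diagonalizes it:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
n_occ, n_virt = 4, 6                        # toy dimensions
eps_occ = np.sort(rng.uniform(-9.0, -5.0, n_occ))    # occupied QP levels
eps_virt = np.sort(rng.uniform(-2.0, 2.0, n_virt))   # virtual QP levels

pairs = [(a, b) for a in range(n_occ) for b in range(n_virt)]
D = np.diag([eps_virt[b] - eps_occ[a] for a, b in pairs])  # free transitions
K = rng.normal(scale=0.1, size=D.shape)
K = 0.5 * (K + K.T)                         # symmetric stand-in for K^eh
Omega, A = np.linalg.eigh(D + K)            # TDA-BSE as a Hermitian problem
print("lowest toy excitation energies:", Omega[:3])
\end{verbatim}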
For practical calculations according to the $GW$-BSE method, first single-point Kohn-Sham calculations are performed using ORCA~\cite{neese_orca_2012}, the B3LYP functional~\cite{becke_density-functional_1993,lee_development_1988,vosko_accurate_1980,stephens_ab_1994}, effective core potentials of the Stuttgart/Dresden type~\cite{bergner_ab_1993}, and the associated basis sets that are augmented by additional polarization functions~\cite{krishnan_self-consistent_1980} of $d$ symmetry~\cite{note_ECP}. All steps involving the actual $GW$-BSE calculations are performed using the implementation for isolated systems~\cite{ma_excited_2009,ma_modeling_2010,baumeier_excited_2012,baumeier_frenkel_2012}, available in the VOTCA software package~\cite{ruhle_microscopic_2011,note_VOTCA}.
In VOTCA, the quantities in the $GW$ self-energy operator (dielectric matrix, exchange and correlation terms) and the electron-hole interaction in the BSE are expressed in terms of auxiliary atom-centered Gaussian basis functions. We include orbitals of $s$, $p$, $d$ symmetry with the decay constants $\alpha$ (in a.u.) 0.25, 0.90, 3.0 for C and 0.4 and 1.5 for H atoms, yielding converged excitation energies. It was also confirmed that the addition of diffuse functions with decay constants smaller than \unit[0.06]{a.u.} to the wave function basis set does not affect the low-lying excitations. For all systems considered in this paper, polarizability is calculated using the full manifold of occupied and virtual states in the random-phase approximation. Quasiparticle corrections are calculated for the $2n_\text{occ}$ lowest-energy states, and $n_\text{occ}$ occupied and $n_\text{occ}$ virtual states are considered in the Bethe-Salpeter equation. Further technical details can be found in Refs~\cite{ma_excited_2009,ma_modeling_2010,baumeier_excited_2012}.
\section{Results}
\label{sec:results}
\subsection{Optical absorption energies of isolated oligomers}
\label{sec:results_vacuum}
\begin{table}\centering
\caption{Electronic structure data for $n$-PPE oligomers with $n=1,\dots,10$ based on QM and MM optimized geometries: HOMO-LUMO gap from Kohn-Sham ($E_g^\text{KS}$), quasi-particle ($E_g^\text{QP}$) energies, optical excitation energy ($\Omega$), and the contributions to it from free inter-level transitions ($\langle D \rangle$) and electron-hole interaction ($\langle K^\text{eh} \rangle = \langle K^d + 2K^x\rangle$). All energies in eV. }
\label{tab:energies}
\begin{tabular}{lccccccccccc}
\hline\noalign{\smallskip}
& \multicolumn{5}{c}{\bf{QM optimized}} & & \multicolumn{5}{c}{\bf{MM optimized}} \\
\cline{2-6} \cline{7-12}\noalign{\smallskip}
$n$ & $E_g^\text{KS}$ & $E_g^\text{QP}$ & $\Omega$ & $\langle D \rangle$ & $\langle K^\text{eh} \rangle$ & & $E_g^\text{KS}$ & $E_g^\text{QP}$ & $\Omega$ & $\langle D \rangle$ & $\langle K^\text{eh} \rangle$ \\
\noalign{\smallskip}\hline\noalign{\smallskip}
1 & 4.67 & 8.46 & 5.15 & 9.11 & -3.96 & & 4.71 & 8.52 & 5.15 & 9.14 & -3.99 \\
2 & 3.80 & 7.07 & 4.17 & 7.49 & -3.32 & & 3.96 & 7.30 & 4.28 & 7.72 & -3.44 \\
3 & 3.42 & 6.45 & 3.73 & 6.84 & -3.11 & & 3.61 & 6.73 & 3.89 & 7.13 & -3.24 \\
4 & 3.22 & 6.12 & 3.50 & 6.51 & -3.01 & & 3.45 & 6.46 & 3.71 & 6.88 & -3.17 \\
5 & 3.10 & 5.92 & 3.36 & 6.32 & -2.96 & & 3.33 & 6.26 & 3.57 & 6.70 & -3.13 \\
6 & 3.02 & 5.79 & 3.27 & 6.21 & -2.94 & & 3.25 & 6.13 & 3.48 & 6.59 & -3.11 \\
7 & 2.97 & 5.69 & 3.21 & 6.13 & -2.92 & & 3.19 & 6.04 & 3.43 & 6.51 & -3.08 \\
8 & 2.94 & 5.63 & 3.17 & 6.09 & -2.92 & & 3.18 & 6.01 & 3.41 & 6.50 & -3.07 \\
9 & 2.90 & 5.58 & 3.13 & 6.04 & -2.91 & & 3.17 & 5.98 & 3.39 & 6.47 & -3.08 \\
10 & 2.88 & 5.53 & 3.11 & 6.01 & -2.90 & & 3.15 & 5.95 & 3.37 & 6.46 & -3.09 \\
$\infty$ & & & 3.08 & & & & & & 3.33 & & \\
Exp. & & \multicolumn{3}{c}{3.00 - 3.20 } \\
\noalign{\smallskip}\hline
\end{tabular}
\end{table}
Conformations of solvated PPE oligomers as obtained from classical MD simulations form the basis for the determination of optical excitations in a mixed QM/MM setup. The underlying assumption is that the details of molecular geometries resulting from the use of a classical force field are consistent with the chosen quantum-mechanical description of the ground state. To confirm this, the geometries of single $n$-PPE oligomers with $n=1,\dots,10$ were optimized in vacuum using both DFT (def2-TZVP~\cite{weigend_balanced_2005} basis set and B3LYP functional with Grimme's D3 dispersion corrections~\cite{grimme_semiempirical_2006}) and MD on the basis of the modified PCFF force field. Both approaches yield planar configurations of the PPE backbone for all values of $n$.
While qualitatively identical, quantitative differences can be observed. Most notably, the bonds in the phenyl rings and the $C-C$ bond connecting the ring to the ethyne are elongated by about 2\% in MD compared to DFT. In contrast, the length of the $C\equiv C$ triple bond is \unit[1.21]{\AA} in both cases. In the next step, $GW$-BSE calculations were performed to gauge the effect of these differences on the electronic and optical properties of the oligomers. The resulting electronic structure data, summarized in \tab{energies}, illustrates the effects of $GW$-BSE with respect to the calculation of excitation energies: Taking the quasi-particle corrections to the Kohn-Sham energies into account increases the HOMO-LUMO gap for, e.g., the QM optimized 10-PPE\xspace from \unit[2.88]{eV} to \unit[5.53]{eV}, which reflects the well-known underestimation of the fundamental gap by DFT. The energy of the lowest optically active coupled electron-hole excitation is \unit[3.11]{eV}. Due to the fact that the excitation is not a pure HOMO-LUMO transition but has additional contributions from lower occupied and higher virtual single-particle orbitals, the contribution of the independent transitions, $\langle D \rangle$, is, at \unit[6.01]{eV}, slightly larger than $E_g^\text{QP}$. The associated effectively attractive electron-hole interaction, $\langle K^\text{eh} \rangle = \langle K^d + 2K^x\rangle$, in this structure amounts to \unit[2.90]{eV}. The obtained excitation energy is in good agreement with the experimental values of \unit[3.0-3.2]{eV} obtained from absorption peaks of dilute solutions of PPE in good solvents. As a function of the number of repeat units $n$, a monotonic decrease of all energies is found. This quantum-size effect is anticipated for strongly conjugated systems such as PPEs. From the particle-in-a-box model one can estimate, e.g., the optical excitation energy of an infinitely long chain via $\Omega(n) = \Omega_\infty + a/n$. By fitting the data for $n>3$ to this model, one obtains a value of $\Omega_\infty = \unit[3.08]{eV}$, indicating that for further studies of solvent effects on the excitations in PPE, it is reasonable to consider 10-PPE\xspace in the QM/MM setup.
Qualitatively identical results were obtained for the $GW$-BSE calculations based on the MM optimized geometries. However, there are some noticeable quantitative differences, most importantly with respect to the excitation energy $\Omega$, which is consistently larger as compared to the QM structures. For large $n$, the deviation amounts to \unit[0.25]{eV} and is a cumulative result of the geometric differences discussed above. Overall, the agreement is satisfactory enough to conclude that the use of MD simulated conformations of 10-PPE\xspace in the following captures the relevant physico-chemical details.
\subsection{Structural properties of solvated 2,5-dinonyl-10-PPE\xspace}
\label{sec:results_structure}
\begin{figure}
\centering
\includegraphics[width=\linewidth]{toluene_water_structures}
\caption{Structures of 2,5-dinonyl-10-PPE\xspace (a) in toluene and (b) in water after \unit[7.7]{ns} MD simulations in the NpT ensemble. (a): In toluene, the side chains are dispersed and separated from each other as well as the backbone. (b): In water, the side chains start to aggregate toward the backbone. }
\label{fig:toluene_water_structures}
\end{figure}
\begin{figure}
\centering
\resizebox{0.70\columnwidth}{!}{
\includegraphics[width=\linewidth]{Orderparameter}}
\caption{Orientational order parameter (Eq.~\ref{equ:orderparameterformula}) for 10-PPE\xspace with nonyl side chains in toluene (blue circles) and water (green squares). The length of the MM/MD simulations for both was 7.7\,ns in the NpT ensemble. The time average for the toluene case is taken over frames of the last 1\,ns of the trajectory with 100\,ps between frames. In the case of water, the time average was taken over frames of the last 600\,ps of the trajectory with a 100\,ps step.}
\label{fig:OrderParameter}
\end{figure}
Conformations of 2,5-dinonyl-10-PPE\xspace were studied in explicit water and toluene. Water is a poor solvent for both the backbone and the side chains, while toluene is a good solvent for the backbone and a poor solvent for the side chains~\cite{maskey_conformational_2011,maskey_internal_2013}. Figure~\ref{fig:toluene_water_structures} shows the structure of 10-PPE\xspace (a) in toluene and (b) in water after 7.7\,ns in the NpT ensemble. For clarity, water and toluene molecules are not shown. In toluene, the backbone remains extended and the side chains are dispersed and separated from each other as well as from the backbone. This is in agreement with the results of Ref.~\cite{maskey_conformational_2011}. Structural studies using small angle neutron scattering (SANS) have shown that dialkyl PPE forms a molecular solution with an extended backbone at high temperature and low concentrations~\cite{maskey_conformational_2011,perahia_molecules_2001}. In water (Fig.~\ref{fig:toluene_water_structures}(b)), the side chains start to aggregate toward each other and the backbone. This is in agreement with Refs.~\cite{maskey_conformational_2011,maskey_internal_2013}. Another important parameter is the correlation of aromatic rings along the backbone of PPE. The interplay between the arrangements of aromatic rings in PPE polymers and their electro-optical properties has been studied by several groups (see, e.g.,~\cite{bunz_polyaryleneethynylenes_2009,miteva_interplay_2000,kong_molecular_2015}).
The orientational order parameter\cite{hariharan_structure_1994}, given by
\begin{equation}
P_{\theta}=\frac{1}{2}\langle 3 \cos^2\theta -1\rangle
\label{equ:orderparameterformula}
\end{equation}
is a measure that quantifies how the aromatic rings within the PPE polymer backbone are correlated. Here $\theta$ is the angle between the normal vectors to the planes of two aromatic rings that are separated by $\Delta n$ rings along the backbone, and $P_{\theta}$ describes the average alignment of the aromatic rings. Since the normal vector of either ring of a pair can be taken as the reference direction for calculating $\theta$, two averages enter the calculation of $P_{\theta}$: a time average over frames, and an average over the choice of the reference normal vector. $P_{\theta}$ can take values in [$-\frac{1}{2}$,1]~\cite{hariharan_structure_1994}. $P_{\theta}>0$ describes a co-planar alignment of the aromatic rings, while $P_{\theta}<0$ indicates perpendicular alignments. $P_{\theta}=0$ and $P_{\theta}=1$ refer to completely random and fully co-planar alignments of the rings, respectively~\cite{maskey_conformational_2011,maskey_internal_2013}.
Figure~\ref{fig:OrderParameter} shows the order parameter versus $\Delta n$ for 10-PPE\xspace with nonyl side chains in toluene and water after~\unit[7.7]{ns} of MD simulation in the NpT ensemble. The time average is taken over the frames of the last \unit[1]{ns} (\unit[0.6]{ns}) of the MM/MD trajectory, with a 100\,ps time step between frames, for 10-PPE\xspace in toluene (water). The value of around 0.4 (0.2) in toluene (water) indicates a correlation between the aromatic rings, corresponding to an average angle of around $39^{\circ}$ ($47^{\circ}$) for 10-PPE\xspace in toluene (water). In Ref.~\cite{bunz_polyaryleneethynylenes_2009}, the authors discussed optical properties of dialkyl and dialkoxy-PPEs in chloroform and dichloromethane and estimated the average angle between aromatic rings using a configuration-coordinate model, concluding that the angle is around 40 degrees.
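For reference, Eq.~(\ref{equ:orderparameterformula}) can be evaluated per frame with a few lines of numpy. The sketch below is our own illustration and assumes that unit normals to the ring planes, ordered along the backbone, have already been extracted from the trajectory; the sign ambiguity of a normal vector is irrelevant, since only $\cos^2\theta$ enters.
\begin{verbatim}
import numpy as np

def order_parameter(normals, dn):
    """P_theta for one frame: normals is an (N, 3) array of unit ring
    normals ordered along the backbone, dn the ring separation."""
    cos2 = [np.dot(normals[i], normals[i + dn]) ** 2
            for i in range(len(normals) - dn)]
    return 0.5 * (3.0 * np.mean(cos2) - 1.0)

aligned = np.tile([0.0, 0.0, 1.0], (10, 1))
print(order_parameter(aligned, 3))   # 1.0 for perfectly co-planar rings
\end{verbatim}
Averaging the per-frame values over trajectory frames, as described above, then gives the curves in Figure~\ref{fig:OrderParameter}.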
\subsection{Optical absorption of solvated 2,5-dinonyl-10-PPE}
\label{sec:results_qmmm}
Excitation energies in complex molecular environments, such as molecular aggregates or solute-solvent mixtures, can accurately be obtained by treating the subpart of interest quantum-mechanically and embedding it into an environment at molecular mechanics resolution~\cite{risko_quantum-chemical_2011,may_can_2012,lunkenheimer_solvent_2013,schwabe_pericc2:_2012}. Here, we realize such a QM/MM scheme based on $GW$-BSE by representing the molecules in the MM region by a set of atomic properties, such as static partial charges and polarizabilities, which then interact with each other and with the $GW$-BSE part via classical electrostatic potentials. Polarizable interactions are taken into account via Thole's model~\cite{thole_molecular_1981,van_duijnen_molecular_1998}. For both neutral and excited complexes, total energies of the combined QM/MM system are obtained self-consistently, and their difference defines the excitation energy in the polarizable environment. This procedure assumes that the states of interest and, in particular, their localization characteristics on the QM cluster are easily identifiable. Typically, this can be expected to be the case for the lowest optically active excitations in the PPEs studied here.
\begin{figure}
\includegraphics[width=\linewidth]{QMMM}
\caption{Example of the $GW$-BSE/MM partitioning of the system. The oligomer is embedded into a two-layer environment of solvent molecules. Molecules within a region $R_1$ (red) are represented by both static atomic point charges and polarizabilities, while the ones within the extended layer $R_2$ (blue) are only represented by point charges.}
\label{fig:qmmm_cutoffs}
\end{figure}
A two-layer scheme is employed. Within a cutoff $R_1$ around the QM part, both static atomic partial charges and polarizable interactions are taken into account. In the more extended buffer region with $R_2 \geq R_1$, only static electrostatics is active. An example of such a partitioning is depicted in \Fig{qmmm_cutoffs}. For a snapshot taken from the MD simulated morphology of 10-PPE\xspace with nonyl side chains in toluene, this approach is adopted using cutoffs of $R_1=\unit[2.5]{nm}$ and $R_2=\unit[4.0]{nm}$. The 10-PPE\xspace backbone is treated quantum-mechanically, while the side chains and solvent molecules belong to the MM region. To split the functionalized oligomer into backbone and side chains, a link-atom approach~\cite{reuter_frontier_2000} with hydrogen saturation of the backbone-side chain bridge is employed. Partial charges for the solvent molecules and side chain fragments are determined from CHELPG~\cite{breneman_determining_1990} fits to the electrostatic potentials, while the atomic Thole polarizabilities are parametrized to match the molecular polarizability tensors obtained from DFT.
\begin{figure}
\includegraphics[width=\linewidth]{qmmm_overview.eps}
\caption{Total energies of the coupled QM/MM system ($E_\text{QMMM}^m$) for a representative snapshot at each iteration step $m$ of the self-consistent procedure in neutral and excited states. The inset shows the respective energy differences between two subsequent iteration steps. }
\label{fig:qmmm_convergence}
\end{figure}
\Fig{qmmm_convergence} shows a typical evolution of total energies of the coupled QM/MM system $E_\text{QMMM}$ during the self-consistency procedure~\cite{note_QMMM}. The zero of the energy scale is defined to correspond to the total energy of the neutral 10-PPE\xspace in vacuum (iteration $m=0$). It is apparent that the most significant change to the total energy in both neutral and excited states occurs during the very first step of the calculation. This is further corroborated by considering the change of total energy at iteration $m$ compared to the previous iteration as shown in the inset of \Fig{qmmm_convergence}. Within three iterations the respective changes are of the order of \unit[0.01]{eV} and, more importantly, no significant differences are observed for the two states. Overall, the effect of polarization is small for the solvated 10-PPE\xspace and consequently, the excitation energy is nearly unaffected by the environment. More precisely, the excitation energy is \unit[3.47]{eV} from the polarized QM/MM system, while a calculation using pure static MM yields \unit[3.44]{eV}. Omitting the environment altogether, i.e., performing a $GW$-BSE calculation on the isolated oligomer conformation, yields \unit[3.45]{eV}. The fact that environment effects only have a negligible impact on the calculated excitation energies can be attributed to a combination of the diluteness of the solution and the associated randomness of local electric fields and the small change of dipole moment between neutral and excited states. Similar observations have been made, e.g., for the optically excited states of push-pull oligomers embedded in a polarizable lattice~\cite{baumeier_electronic_2014}.
\begin{figure}
\includegraphics[width=\linewidth]{toluene_water_spectra}
\caption{Simulated absorption spectra (broadened by Gaussian functions with a FWHM of $\unit[0.3]{eV}$) of 10-PPE\xspace in (a) toluene and (b) water calculated in a static MM environment ($R_2=\unit[4]{nm}$) with a sampling time step $\Delta t=\unit[1]{ps}$ starting from $t_0=\unit[7.7]{ns}$. The average over the eleven respective snapshots is given in red. }
\label{fig:spectra_toluene_water}
\end{figure}
Based on the above results, it is justified to limit the QM/MM setup to only electrostatic interactions in the following. Having realized that the direct electronic effects of solvent molecules on the excitations in 10-PPE\xspace are small, the focus is now on indirect effects that originate from the influence on the backbone conformations. To this end, 10-PPE\xspace in both toluene and water is considered, and the conformations are sampled at different time intervals ($\Delta t = \unit[10]{fs}, \unit[100]{fs}, \unit[1]{ps}$, and $\unit[10]{ps}$), all starting from the same point $t_0=\unit[7.7]{ns}$ of our MD simulations. For each of these snapshots the absorption spectrum is calculated in a static MM environment defined by $R_2=\unit[4]{nm}$ (and $R_1=\unit[0]{nm}$). The obtained discrete spectra of excitation energies and associated oscillator strengths are broadened by Gaussian functions with a FWHM (full width at half maximum) of \unit[0.3]{eV}. It is found that the absorption properties are insensitive to the structural dynamics of the backbone at time scales of \unit[100]{fs}. Only for times exceeding about \unit[500]{fs} can fluctuations in the peak positions and heights of the spectra be observed, both in toluene and in water. \Fig{spectra_toluene_water} shows the evolution of the absorption spectrum for the time step $\Delta t = \unit[1]{ps}$, as well as the average over the eleven respective snapshots. While the dynamics of the backbone is comparatively slow, since it is to a significant extent constrained by the nonyl side chain dynamics in both poor solvents, one can observe stronger fluctuations of the absorption spectra in water as compared to toluene.
\begin{figure}
\includegraphics[width=\linewidth]{excitations}
\caption{Analysis of the excited state wave functions for representative snapshots of 10-PPE in toluene.
Top row: Isosurfaces of excitation electron density ($\pm10^{-4}\unit{e/{\AA}^3}$).
Red color corresponds to negative values (hole density), blue to positive values (electron density).
Bottom rows: Isosurfaces of the main single-particle excitations contributing to the electron-hole
wave functions (isovalue $\pm 5\cdot10^{-3}$).}
\label{fig:excitations}
\end{figure}
To understand the origin of these fluctuations with respect to backbone conformations in more detail, the electron-hole wave functions of the excitations are analyzed at times $t=\unit[4]{ps}$ and $t=\unit[5]{ps}$, cf. \Fig{spectra_toluene_water}(a). In the top row of \Fig{excitations}, isosurfaces for the hole (red) and electron (blue) density distributions are shown. The overall conformation of the 10-PPE\xspace exhibits a characteristic bend as a result of the stress caused by side chain interactions. At both times, the excitation appears to be localized at the apex of the bend, more pronouncedly for the structure at \unit[4]{ps}, which is lower in energy by \unit[0.13]{eV}. The different characteristics can be attributed to a slightly stronger out-of-plane bend angle between the phenylene and ethynylene units. Overall, co-planarity of the phenyl rings along the backbone (or the lack thereof) does not appear to affect the excitations significantly.
Analysis of the composition of the electron-hole wave function reveals striking differences between the two snapshots. The excitation shown in \Fig{excitations}(a) is formed to 60\% by a transition between the two frontier orbitals. The isosurfaces of these orbitals show that both HOMO and LUMO extend over practically the full length of the backbone. Slight intensity variations can be noted, with the HOMO being more concentrated near the apex while the LUMO is thinning out at the same spot. These variations give rise to the localization of the coupled excitation. At $t=\unit[5]{ps}$, in contrast, there is not a single dominant contribution to the electron-hole wave function. Rather, a superposition of several transitions is found, with the HOMO-1 to LUMO and HOMO to LUMO+1 transitions being most significant. As can be seen in \Fig{excitations}(b), the conformational changes result in different localization characteristics of the underlying single-particle orbitals, which are now concentrated not at the apex but to its left and right, respectively. A pure transition between two localized states, such as the one from HOMO-1 to LUMO, is energetically penalized by stronger exchange interactions. By mixing in transitions between lower lying occupied and higher unoccupied levels, an effectively more delocalized excitation is formed. An analogous analysis of the respective excitations of 10-PPE in water, i.e., at $t=\unit[9]{ps}$ and $t=\unit[10]{ps}$, reveals qualitatively similar behaviour.
\section{Summary}
\label{sec:summary}
Electronic excitations of PPE were computed using a QM/MM approach combining many-body Green's functions theory within the $GW$ approximation and the Bethe-Salpeter equation. Conformations of solvated PPE as obtained from atomistic MD simulations were used in the mixed QM/MM setup in order to determine optical excitations of solvated PPE. The reliability of optical excitations based on MM/MD conformations was investigated by comparing optical excitations of $n$-PPE ($n=1,2,\dots,10$) using both optimized DFT geometries and MD geometries in vacuum. The results show that the excitation energies $\Omega$ calculated based on MM/MD conformations are larger than the ones calculated based on QM optimized geometries. For large $n$, the deviation amounts to \unit[0.25]{eV} and is the cumulative result of geometric differences between MM/MD geometries and QM geometries. The overall agreement between the excitation energies based on MM/MD conformations and QM geometries is good enough to conclude that the use of MM/MD conformations for 10-PPE\xspace captures the relevant physico-chemical properties.
Conformations of 2,5-dinonyl-10-PPE\xspace with nonyl side chains were studied in toluene and water. The side chains were found to be dispersed from each other and from the backbone in toluene. In water, the side chains tend to aggregate.
Optical excitations were calculated for 10-PPE\xspace in the QM/MM setup. The results show that the electronic environment contributions are negligible compared to the effects of the conformational dynamics of the conjugated PPE. From the analysis of the electron-hole wave function, a sensitivity of the energy and localization characteristics of the excited states to bends in the global conformation of the PPE polymer was observed.
\section{Introduction} In this paper, we consider an initial market model represented by the triplet $(S,\mathbb F,P)$, where $S$ represents the discounted prices of $d$ stocks, $\mathbb F$ is the ``public'' information that is available to most agents, and $P$ is a probability measure. To this initial model, we add a random time $\tau$ that might not be seen through $\mathbb F$ when it occurs (mathematically speaking, $\tau$ might not be an $\mathbb F$-stopping time). In this context, we adopt the progressive enlargement of filtration to model the larger information flow that includes both $\mathbb F$ and $\tau$. For the resulting informational system that lives after $\tau$, denoted by $(S-S^{\tau},\mathbb G,P)$, our ultimate goal lies in measuring the impact of $\tau$ on the log-optimal portfolio, no matter what the pair $(S,\mathbb F)$ is and no matter how it is related to $\tau$, which is assumed to be an honest time satisfying a mild condition. In order to state our objective more precisely, we recall the definitions of the log-optimal and num\'eraire portfolios, the two portfolios that are intimately related to the logarithm utility. To this end, we denote by $W^{\theta}$ the wealth process of the portfolio $\theta$.
\begin{definition}\label{NP/LogOP} Let $(X, \mathbb H, Q)$ be a market model, where $X$ is the assets' price process, $\mathbb H$ is a filtration, and $Q$ is a probability measure. Consider a fixed investment horizon $T\in(0,+\infty)$, and a portfolio $\theta^*$.\\
{\rm{(a)}} $\theta^*$ is a {\it num\'eraire portfolio} for $(X, \mathbb H, Q)$ if $W^{\theta^*}>0$ and
\begin{eqnarray}\label{NP}
{{W^{\theta}}\over{W^{\theta^*}}}\ \mbox{is an $(\mathbb H, Q)$-supermartingale, for any portfolio $\theta$ with $W^{\theta}\geq 0$}.\hskip 0.5cm
\end{eqnarray}
{\rm{(b)}} $\theta^*$ is called a {\it log-optimal portfolio} for $(X, \mathbb H, Q)$ if $\theta^*\in \Theta(X,\mathbb H, Q)$ and
\begin{eqnarray}
u_T(X,\mathbb H, Q):=\sup_{\theta\in\Theta}E_Q\left[\ln(W^{\theta}_T)\right]= E_Q\left[\ln(W^{\theta^*}_T)\right],\label{LogInfinite}\end{eqnarray}
where $E_Q[.]$ is the expectation under $Q$, and $\Theta:=\Theta(X,\mathbb H, Q)$ is given by
\begin{eqnarray}\label{AdmissibleSet0}
\hskip -0.6cm \Theta(X,\mathbb H, Q):=\Bigl\{\mbox{portfolio}\ \theta\ :\ W^{\theta}> 0\quad \mbox{and}\quad E_Q\left[\vert \ln(W^{\theta}_T)\vert \right]<+\infty\Bigr\}.\end{eqnarray}
\end{definition}
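A classical illustration of Definition \ref{NP/LogOP}, which we record for orientation only: in the Black--Scholes model $X={\cal E}(\mu t+\sigma W)$, where $W$ is a Brownian motion and $\mu\in{\mathbb R}$, $\sigma>0$ are constants, the portfolio holding the constant fraction $\mu/\sigma^2$ of current wealth in the stock is simultaneously the num\'eraire portfolio and the log-optimal portfolio, and
\begin{equation*}
u_T(X,\mathbb F,P)=\frac{\mu^2 T}{2\sigma^2},
\end{equation*}
where $\mathbb F$ is the augmented filtration of $W$.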
The problem of maximizing expected log utility from terminal wealth, defined in (\ref{LogInfinite})-(\ref{AdmissibleSet0}), has received a lot of attention in the literature, even though it is a particular case of the general utility maximization problem. This latter problem is addressed at various levels of generality in Cvitanic et al. \cite{CSW}, Karatzas and Wang \cite{Karatzas}, Karatzas and Zitkovic \cite{KZ}, Kramkov and Schachermayer \cite{KW99}, Merton \cite{merton71,merton73}, and the references therein, to cite a few. The num\'eraire portfolio was introduced --up to our knowledge-- in Long \cite{Long}, where $W^{\theta}/W^{\theta^*}$ is required to be a martingale, while Definition \ref{NP/LogOP}-(a) goes back to Becherer \cite[Definition 4.1]{Becherer}. These works were then extended and investigated extensively in different directions in Choulli et al. \cite{ChoulliDengMa}, Christensen and Larsen \cite{ChristensenLarsen2007}, G\"oll and Kallsen \cite{GollKallsen}, Hulley and Schweizer \cite{HulleySchweizer}, Kardaras and Karatzas \cite{KardarasKaratzas}, and the references therein. For a more precise relationship between the num\'eraire and log-optimal portfolios, we refer the reader to \cite{ChoulliYansori1} and the references therein.\\
Thus, our setting falls into the topic of the {\it portfolio problem under asymmetries of information}. The literature about this topic can be divided into two major cases. The first main case, which is the most studied in the literature, is known in the finance and mathematical finance literature as {\it the insider trading} setting. For this insider framework, log-optimal portfolios are extensively studied and we refer the reader to Amendinger et al. \cite{amendingerimkellerschweizer98}, Ankirchner et al. \cite{ADImkeller}, Ankirchner and Imkeller \cite{AImkeller}, Corcuera et al. \cite{JImkellerKN}, Grorud and Pontier \cite{GrorudPontier}, Pikovsky and Karatzas \cite{pikovskykaratzas96}, Kohatsu-Higa and Sulem \cite{kohatsusulem06}, and the references therein to cite a few. Most of this literature focuses on two intimately related questions on the log-optimal portfolio for $(S,{\mathbb G}^*,P)$, where ${\mathbb G}^*$ is the initial enlargement of $\mathbb F$ with a random variable $L$ that represents the extra knowledge. In fact, under some assumption on the pair $(L, \mathbb F)$, frequently called Jacod's assumption, the existence of the log-optimal portfolio and the evaluation of the {\bf increment of expected logarithm-utility from terminal wealth} (denoted hereafter by IEU$(S,{\mathbb G}^*, \mathbb F)$) for $(S,{\mathbb G}^*,P)$ and $(S,\mathbb F,P)$ represent the core contribution of these papers, where it is proven that
\begin{equation}\label{InsiderFormula}
\mbox{IEU}(S,{\mathbb G}^*, \mathbb F):=u_T(S, {\mathbb G}^*,P)-u_T(S, \mathbb F,P)=\mbox{relative entropy}(P\big| Q^*).
\end{equation}
Hence, in this insider setting, the log-optimal portfolio for $(S,{\mathbb G}^*,P)$ exists if and only if $P$ has a finite entropy with respect to $Q^*$, which is an explicitly described probability measure associated to $L$. In particular, the quantity $\mbox{IEU}(S,{\mathbb G}^*, \mathbb F)$ is always a true gain due to the advantage of knowing $L$ fully by the investor endowed with the flow $\mathbb G^*$. The formula (\ref{InsiderFormula}) was initially derived by Pikovsky and Karatzas \cite{pikovskykaratzas96} for the Brownian filtration, and Amendinger et al. \cite{amendingerimkellerschweizer98} extended it to models driven by general continuous local martingales, where the authors connect this formula with the Shannon entropy of $L$ for some models. The Shannon concept was further studied by Ankirchner et al. \cite{ADImkeller} afterwards, where the authors show its important role in measuring the impact of inside information on log-optimal portfolios. Among other related and interesting works, we cite Corcuera et al. \cite{JImkellerKN}, Kohatsu-Higa and Yamazato \cite{Kohatsu2011}, and Ankirchner et al. \cite{ADImkeller}. In this latter paper, the authors consider arbitrary filtrations $\mathbb F$ and $\mathbb G^*$ such that $\mathbb F\subset \mathbb G^*$, while assuming the continuity of $S$ and the {\it existence of a drift information condition} on the pair $(\mathbb F,\mathbb G^*)$. \\
The second major case, in contrast to the insider setting which uses the initial enlargement of $\mathbb{F}$, suggests adding the extra information over time as it occurs, and this leads to taking $\mathbb{G}$ to be the progressive enlargement of $\mathbb{F}$ with $\tau$. This is our current framework, in which we complement \cite{ChoulliYansori1} that deals with the sub-model $(S^{\tau},\mathbb{G},P)$. On the one hand, a log-optimal portfolio is a num\'eraire portfolio, see Choulli and Yansori \cite{ChoulliYansori1,ChoulliYansori2}. On the other hand, thanks to Choulli et al. \cite{ChoulliDengMa} and Kardaras and Karatzas \cite{KardarasKaratzas}, which connect the existence of a num\'eraire portfolio to the concept of No-Unbounded-Profit-with-Bounded-Risk (NUPBR hereafter), and the recent works of Aksamit et al. \cite{aksamitetal18,ACDJ3} and Choulli and Deng \cite{CD1} on NUPBR for the model $(S-S^{\tau},\mathbb G,P)$, the problem of the existence of the num\'eraire portfolio for $(S-S^{\tau},\mathbb G,P)$ is completely understood. For the log-optimal portfolio of $(S-S^{\tau}, \mathbb{G},P)$, the situation is more challenging, and the problem of its existence is the first major obstacle. To address this, we appeal to the explicit description of the set of deflators of $(S-S^{\tau}, \mathbb{G},P)$, recently developed in \cite{ChoulliAlharbi}, and answer the following question.
\begin{equation}\label{Q2}
\mbox{For which $(S, \tau)$, does the log-optimal portfolio of $(S-S^{\tau}, \mathbb G,P)$ exist?}\end{equation}
It is worth mentioning that this existence question is much deeper and more general than the corresponding one addressed in the insider setting. Indeed, in our framework, there is no hope for (\ref{InsiderFormula}) to hold in its current form, and only a practical answer to (\ref{Q2}) will allow us to answer the question below.
\begin{equation}\label{Q3}
\begin{cases}
\mbox{What are the {\it informational conditions} on $\tau$ }\\
\mbox{for the existence of the log-optimal portfolio of $(S-S^{\tau}, \mathbb G,P)$ }\\
\mbox{when it already exists for $(S,\mathbb F,P)$?}\end{cases}
\end{equation}
For our setting, the {\it increment of expected logarithmic utility} between $(S-S^{\tau}, \mathbb G)$ and $(S,\mathbb F)$, denoted by $\mbox{IEU}_{\mbox{after}}(S, \tau, \mathbb F)$, is defined by
\begin{equation}\label{Delta(S, Tau)}
\mbox{IEU}_{\mbox{after}}(S,\tau, \mathbb F):=\Delta_T(S, \tau, \mathbb F):=u_T(S-S^{\tau}, \mathbb G,P)-u_T(S, \mathbb F,P),\end{equation}
and is affected by many factors. Hence, we will address the question of
\begin{equation}\label{Q5}
\mbox{what are the factors that explain the sensitivity of IEU$_{\mbox{after}}(S,\tau,\mathbb F)$ to $\tau$}?\end{equation}
Our definition for the utility increment, given in (\ref{Delta(S, Tau)}), stems from our main goal of measuring the impact of $\tau$ on the base model $(S,\mathbb F,P)$ via log utility. To answer (\ref{Q5}), we address the explicit computation of the log-optimal portfolio using $\mathbb F$-observable processes, and answer the following question.
\begin{equation}\label{Q4}
\mbox{How can the log-optimal portfolio of $(S-S^{\tau},\mathbb G,P)$ be described via $\mathbb F$?}\end{equation}
This paper contains four sections including the current one. Section \ref{Section2} presents the mathematical and financial model, together with the required notation and some preliminaries that are important herein. Section \ref{Section3} focuses on the existence of the log-optimal portfolio and the duality. Section \ref{Section4} describes explicitly the log-optimal portfolio using the $\mathbb F$-predictable characteristics of the model, and discusses its financial applications and consequences. The paper contains an appendix where some proofs are relegated and some technical (new and existing) results are detailed.
\section{The mathematical framework and preliminaries}\label{Section2}
Throughout the paper, we suppose given a complete probability space $(\Omega, {\cal{F}},P)$. Then any filtration $\mathbb{H}$ on this space will be supposed to satisfy the usual conditions (i.e. $\mathbb{H}$ is right-continuous and complete).
\subsection{General notation}For any filtration $\mathbb{H}$ on this space, we denote by ${\cal A}(\mathbb H)$ (respectively ${\cal M}(\mathbb H)$) the set
of $\mathbb H$-adapted processes with $\mathbb H$-integrable variation (respectively that are $\mathbb H$-uniformly integrable martingales).
For any process $X$, we denote by $^{o,\mathbb H}X$ (respectively $^{p,\mathbb H}X$) the
$\mathbb H$-optional (respectively $\mathbb H$-predictable) projection of $X$. For an increasing process $V$, we denote by $V^{o,\mathbb H}$ (respectively $V^{p,\mathbb H}$) its dual $\mathbb H$-optional (respectively $\mathbb H$-predictable) projection. For a filtration $\mathbb H$, ${\cal O}(\mathbb H)$, ${\cal P}(\mathbb H)$ and $\mbox{Prog}(\mathbb H)$ represent the $\mathbb H$-optional, the $\mathbb H$-predictable and the $\mathbb H$-progressive $\sigma$-fields, respectively, on $\Omega\times[0,+\infty[$. For an $\mathbb H$-semimartingale $X$, we denote by $L(X,\mathbb H)$ the set of all $X$-integrable processes in It\^o's sense, and for $H\in L(X,\mathbb H)$, the resulting integral is the one-dimensional $\mathbb H$-semimartingale denoted by $H\bigcdot X:=\int_0^{\cdot} H_udX_u$. If ${\cal C}(\mathbb H)$
is a set of processes that are adapted to $\mathbb H$,
then ${\cal C}_{\loc}(\mathbb H)$ --except when it is stated otherwise-- is the set of processes, $X$,
for which there exists a sequence of $\mathbb H$-stopping times,
$(T_n)_{n\geq 1}$, that increases to infinity and $X^{T_n}$ belongs to ${\cal C}(\mathbb H)$, for each $n\geq 1$. For any $\mathbb H$-semimartingale $L$, the Dol\'eans-Dade stochastic exponential, denoted by ${\cal E}(L)$, is the unique solution to the SDE $dX = X_{-} dL$, $X_0= 1$, given by
\begin{equation}\label{S-exponential}
{\cal E}_t (L) = \exp \big ( L_t- L_0 - {1 \over 2} {\langle L^c \rangle}_{t} \big ) \prod_{0 < s \leq t} \big( 1 + \Delta L_s \big) e^{-\Delta L_s}.
\end{equation}
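As a side remark, the defining SDE and the closed form (\ref{S-exponential}) can be checked numerically. The following Python sketch (our own discretization with illustrative parameters, not part of the paper's arguments) compares the recursive solution of $dX=X_{-}dL$ with the closed form for a continuous $L$, for which the jump product in (\ref{S-exponential}) disappears.
\begin{verbatim}
import numpy as np

# Illustrative setting: L = sigma * W for a Brownian motion W, so that
# E(L)_t = exp(L_t - 0.5 * sigma^2 * t), since <L^c>_t = sigma^2 t.
rng = np.random.default_rng(0)
T, n, sigma = 1.0, 100_000, 0.3
dt = T / n
dL = sigma * rng.normal(0.0, np.sqrt(dt), size=n)  # increments of L
L = np.cumsum(dL)

# Discrete solution of dX = X_- dL with X_0 = 1: X_k = X_{k-1}(1 + dL_k).
X = np.cumprod(1.0 + dL)

# Closed form evaluated on the same time grid.
E = np.exp(L - 0.5 * sigma**2 * dt * np.arange(1, n + 1))

print(np.max(np.abs(X - E)))  # small, and vanishes as dt -> 0
\end{verbatim}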
\subsection{The mathematical model}Our mathematical and financial model has two principal components. The first component, called throughout the rest of the paper the initial market model, is specified by the pair $(S,\mathbb{F})$. Here $\mathbb{F}$ is a filtration representing the ``public'' flow of information, which is available to all agents, and $S$ is a $d$-dimensional $\mathbb{F}$-semimartingale which models the price processes of $d$ risky assets. The second main component is the random time $\tau$, which might represent the default time of a firm in credit risk theory, or the death time of an insured in life insurance, or the occurrence time of an event that might impact the initial market model. The occurrence of $\tau$, in general, might not be observable through $\mathbb{F}$; mathematically, this translates into the fact that $\tau$ might not be an $\mathbb{F}$-stopping time, see the literature about credit risk or that of life insurance. Hence, in order to model the flow of information for the whole model $(S,\mathbb{F}, \tau)$, we consider the progressive enlargement $\mathbb{G}$ of $\mathbb{F}$ with $\tau$, that is ${\cal G}_t:=\bigcap_{s>t}\left({\cal F}_s\vee\sigma(\tau\wedge s)\right)$, and the following processes
\begin{equation}\label{GGtildem}
G_t := {^{o,\mathbb F}(I_{\Lbrack0,\tau[\![})_t}=P(\tau > t | {\cal F}_t),\quad \widetilde{G}_t := {^{o,\mathbb F}(I_{\Lbrack0,\tau]\!]})_t}=P(\tau \ge t | {\cal F}_t),
\quad \mbox{ and } \quad \ m := G + D^{o,\mathbb F},
\end{equation}
where $D:=I_{\Lbrack\tau,+\infty[\![}$.
The processes $G$ and $\widetilde G$ are known as Az\'ema supermartingale, while $m$ is a BMO $\mathbb F$-martingale. For more details about these, we refer the reader to \cite[paragraph 74, Chapitre XX]{dellacheriemeyer92}. The pair $(\widetilde{G}, G)$ or equivalently the pair $(G, D^{o,\mathbb{F}})$ are the parametrization of $\tau$ through the flow $\mathbb{F}$. For detailed discussion about why we consider this progressive enlargement of $\mathbb{F}$ with $\tau$ instead, we refer the reader to \cite{ChoulliYansori2} and the references therein. \\
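For instance, and only as an elementary illustration of (\ref{GGtildem}): when $\tau$ is independent of ${\cal F}_{\infty}$ with distribution function $F(t):=P(\tau\leq t)$, we get $G_t=1-F(t)$, $D^{o,\mathbb F}_t=F(t)$, and hence $m\equiv 1$. In this extreme case, the parametrization of $\tau$ is fully carried by the pair $(G, D^{o,\mathbb F})$, while the martingale $m$, which encodes the ``correlation'' between $\tau$ and $\mathbb F$, is trivial.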
Our resulting informational model, which we will investigate throughout the paper, is $(S, \mathbb{G}, P)$. For this model, our
main goal resides in studying the log-optimal portfolio deeply, and mainly in answering the questions singled out in the introduction. As Choulli and Yansori \cite{ChoulliYansori2} already fully addressed the sub-model $(S^{\tau}, \mathbb{G}, P)$, and due to the myopic feature of the log utility, the main contribution of this paper lies in focusing on the sub-model $(S-S^{\tau}, \mathbb{G}, P)$. It is known in the probabilistic literature that, for a general $\tau$, the process $S-S^{\tau}$ might not even be a $\mathbb{G}$-semimartingale, and hence in this case there is no chance for the log-optimal portfolio to exist. To avoid this technical difficulty, which is not in the scope of this paper, we suppose throughout the paper that $\tau$ is an honest random time. This honest time concept is mathematically defined by the following.
\begin{definition}\label{honesttime}
A random time $\tau$ is called an ${\mathbb F}$-honest time if, for any $t$, there exists an ${\cal F}_t$-measurable random variable $\tau_t$ such that $\tau 1\!\!1_{\{\tau<t\}} = \tau_t 1\!\!1_{\{\tau<t\}}.$
\end{definition}
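A classical example of an honest time that is not an $\mathbb F$-stopping time is the last zero of a Brownian motion $B$ before a fixed horizon $T_0>0$, i.e. $\tau:=\sup\{t\leq T_0:\ B_t=0\}$ with the convention $\sup\emptyset:=0$. Indeed, $\tau_t:=\sup\{s\leq t\wedge T_0:\ B_s=0\}$ is ${\cal F}_t$-measurable and coincides with $\tau$ on $\{\tau<t\}$, while deciding at time $t<T_0$ whether $\tau$ has already occurred requires knowing the path of $B$ after $t$.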
\subsection{Some useful preliminaries}
This subsection recalls some useful results and/or some definitions from the literature.
\begin{theorem} \label{OptionalDecompoTheorem} Suppose $\tau$ is an honest time. Then the following assertions hold.\\
{\rm{(a)}} For any $\mathbb F$-local martingale $M$, the process
\begin{eqnarray}\label{honestMhat}
{\cal T}^{(a)}(M):=I_{]\!]\tau,+\infty[\![}\bigcdot {M}+{{I_{]\!]\tau,+\infty[\![}}\over{1- G}}\bigcdot [m,M]+{{I_{]\!]\tau,+\infty[\![}}\over{1-G_{-}}}\bigcdot \left(\sum \Delta M(1-G_{-}) I_{\{\widetilde G=1>G_{-}\}}\right)^{p,\mathbb F}
\end{eqnarray}
is a $\mathbb G$-local martingale.\\
{\rm{(b)}} For any $M \in {\mathcal M}_{loc} (\mathbb F)$, the process
\begin{equation}\label{Glocalmartingaleaftertau}
\widetilde{M}^{(a)}:= I_{]\!] \tau, +\infty [\![} \bigcdot M + {1 \over 1-G_{-}} I_{]\!] \tau, +\infty [\![} \bigcdot \langle m, M\rangle^{\mathbb{F}}\quad\mbox{is a $\mathbb G$-local martingale.}
\end{equation}
\end{theorem}
We recall the mathematical definition of deflators, as these are the ``dual'' processes to the wealth processes $W^{\theta}$, and hence they play a central role in solving the dual problem.
\begin{definition}\label{DeflatorDefinition} Consider the model $(X, \mathbb H, Q)$, where $\mathbb H$ is a filtration, $Q$ is a probability, and $X$ is a $(Q,\mathbb H)$-semimartingale. Let $Z$ be a process.\\
We call $Z$ a local martingale deflator for $(X,Q,\mathbb H)$ if $Z>0$ and there exists a real-valued and $\mathbb H$-predictable process $\varphi$ such that $0<\varphi\leq 1$ and both $Z$ and $Z(\varphi\bigcdot X)$ are $\mathbb H$-local martingales under $Q$. Throughout the paper, the set of these local martingale deflators will be denoted by ${\cal Z}_{loc}(X,Q,\mathbb H)$. More generally, we call $Z$ a deflator for $(X,Q,\mathbb H)$ if $Z>0$ and $Z{\cal E}(\varphi\bigcdot X)$ is an $(\mathbb H,Q)$-supermartingale for any $\varphi\in{\cal L}(X,\mathbb H,Q)$, see (\ref{L(X,H)}) below; the set of all deflators will be denoted by ${\cal D}(X,Q,\mathbb H)$. When $Q=P$, for the sake of simplicity, we simply omit the probability in notations and terminology.
\end{definition}
Thanks to \cite[Theorem 2.1]{ChoulliYansori1}, the log-optimal portfolio for a model $(X,\mathbb{H},P)$ is intimately related to the subset of ${\cal D}(X,\mathbb H)$ given by
\begin{equation}\label{logdeflatorset}
{\cal D}_{log} (X, \mathbb H) := \left \{ Z \in {\cal D}( X, \mathbb H ) \quad \vert \quad \sup_{t \geq 0 } E [-\ln (Z_t) ] < +\infty \right \}.
\end{equation}
We end this subsection by borrowing an important result, from \cite{ChoulliAlharbi}, on the explicit description of all deflators for the model $(S-S^{\tau},\mathbb G)$ in terms of deflators for the initial model $(S, \mathbb F)$. To this end, throughout the paper, we assume the following
\begin{equation}\label{Assumptions4Tau}
\tau\quad\mbox{is a finite honest time such that}\quad G_{\tau}<1\quad P\mbox{-a.s. and}\quad \left\{\widetilde{G}=1>G_{-}\right\}=\emptyset.
\end{equation}
and we consider the following processes
\begin{equation}\label{m(a)}
m^{(1)} := -(1-G_{-})^{-1} I_{\{G_{-}<1\}}\bigcdot m\quad\mbox{and}\quad {S}^{(1)}:=I_{\{G_{-}<1\}}\bigcdot S.
\end{equation}
\begin{theorem}\label{GeneralDefaltorDescription4afterTau}
Suppose that assumptions (\ref{Assumptions4Tau}) hold, and let $Z^{\mathbb G}$ be a process such that $(Z^{\mathbb G})^{\tau}\equiv 1$. Then the following assertions are equivalent.\\
{\rm{(a)}} $Z^{\mathbb G}$ is a deflator for $(S-S^{\tau}, \mathbb G)$ (i.e., $Z^{\mathbb G}\in {\cal D}(S-S^{\tau}, \mathbb G)$).\\
{\rm{(b)}} There exists a unique pair $\left(K^{\mathbb F}, V^{\mathbb F}\right)$ such that $K^{\mathbb F}\in {\cal M}_{loc}(\mathbb F)$, $V^{\mathbb F}$ is an $\mathbb F$-predictable RCLL and nondecreasing process such that
\begin{eqnarray*}V^{\mathbb F}_0=K^{\mathbb F}_0=0,\quad {\cal E}(K^{\mathbb F}){\cal E}(-V^{\mathbb F})\in {\cal D}({S}^{(1)}, \mathbb F),\end{eqnarray*}
\begin{equation}\label{repKG1a}
Z^{\mathbb G}={\cal E}(K^{\mathbb G}){\cal E}(-I_{]\!]\tau,+\infty[\![}\bigcdot {V}^{\mathbb F}),\ K^{\mathbb G}=I_{]\!]\tau,+\infty[\![}\bigcdot {\cal T}^{(a)}(K^{\mathbb F})+(1-G_{-})^{-1}I_{]\!]\tau,+\infty[\![}\bigcdot {\cal T}^{(a)}(m).\end{equation}
{\rm{(c)}} There exists a unique $Z^{\mathbb F}\in{\cal D}({S}^{(1)}, \mathbb F)$ such that \begin{equation}\label{repKGMultiGEneral}
Z^{\mathbb G}={{Z^{\mathbb F}/(Z^{\mathbb F})^{\tau}}\over{{\cal E}(-I_{]\!]\tau,+\infty[\![}(1-G_{-})^{-1}\bigcdot m)}}.\end{equation}
\end{theorem}
We end this section with the following definition of the portfolio rate. Recall that a portfolio for the model $(X,\mathbb{H},Q)$ is any $\mathbb{H}$-predictable process $\theta$ ($\mathbb{H}\in\{\mathbb{F},\mathbb{G}\}$) that is $X$-integrable in the semimartingale sense, and the corresponding wealth process with initial value one is given by $W^{\theta}=1+\theta\bigcdot {X}$.
\begin{definition} Let $\theta$ be a portfolio for $(X,\mathbb{H},Q)$ such that both $W^{\theta}$ and $W^{\theta}_{-}$ are positive (i.e. $W^{\theta}>0$ and $W^{\theta}_{-}>0$). Then we associate to the portfolio $\theta$ the portfolio rate $\varphi$ given by
$$\varphi:=\theta/W^{\theta}_{-}.$$
This process is well defined, $X$-integrable, and satisfies $W^{\theta}={\cal E}(\varphi\bigcdot X)$; furthermore, $\varphi\in {\cal{L}}(X,\mathbb{H},Q)$, where
\begin{equation}\label{L(X,H)}
{\cal{L}}(X,\mathbb{H},Q):=\Bigl\{\varphi\ \mathbb{H}\mbox{-predictable}:\ \varphi\Delta{X}>-1\Bigr\}.
\end{equation}
When $Q=P$, we simply omit the probability in notation and write $ {\cal{L}}(X,\mathbb{H})$.
\end{definition}
\section{Log-optimal portfolio for $(S-S^{\tau},\mathbb{G})$: Existence and duality}\label{Section3}
In this section, we address the existence of the log-optimal portfolio after the random time $\tau$ or, equivalently, in virtue of \cite[Theorem 2.1]{ChoulliYansori1}, the existence of the solution to the dual minimization problem
\begin{equation}\label{dualaftertau}
\min_{Z^{\mathbb G} \in {\cal D}_{log}(S-S^{\tau}, \mathbb G)} E \left [ -\ln (Z^{\mathbb G}_T)\right].
\end{equation}
The solution to this dual problem, when it exists, will {\it naturally} involve information-theoretic concepts, such as Hellinger processes or entropy, which we recall below.
\begin{definition}\label{hellinger}
Consider a filtration $\mathbb H$, and let $N$ be an $\mathbb H$-local martingale such that $1+ \Delta N > 0 $, and denote by $N^c$ its continuous local martingale part. \\
1) We call a Hellinger process of order zero for $N$, denoted by $h^{(0)}(N, \mathbb H)$, the process $h^{(0)}(N,\mathbb H):= \left(H^{(0)}(N, \mathbb H) \right)^{p, \mathbb H}$ when this projection exists, where
\begin{align}
H^{(0)} (N, \mathbb H) := {1 \over 2} \langle N^{c}\rangle^{\mathbb H} + \sum (\Delta N - \ln(1+\Delta N)).
\end{align}
2) We call an entropy-Hellinger process for $N$, denoted by $h^{(E)}(N, \mathbb{H})$, the process $h^{(E)}(N, \mathbb H):= (H^{(E)}(N,\mathbb H))^{p, \mathbb H}$ when this projection exists, where
\begin{align}
H^{(E)} (N, \mathbb H) := {1 \over 2} \langle N^{c}\rangle^{\mathbb H} + \sum ((1+\Delta N) \ln(1+\Delta N) - \Delta N).
\end{align}
3) Let $Q^1$ and $Q^2$ be two probabilities such that $Q^1\ll Q^2$. If $Q^i_T:=Q^i\big|_{{\cal H}_T}$ denotes the restriction of $Q^i$ to ${\cal H}_T$ ($i=1,2$), then
\begin{eqnarray}\label{entropy}
{\cal H}_{\mathbb H}(Q^1_T\big| Q^2_T):=E_{Q^2}\left[{{dQ^1_T}\over{dQ^2_T}}\ln\left({{dQ^1_T}\over{dQ^2_T}}\right)\right].\end{eqnarray}
\end{definition}
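For instance, if $N^{\mathbb F}_t:=N_t-\lambda t$ is a compensated Poisson process with rate $\lambda>0$ and $L:=\psi\bigcdot N^{\mathbb F}$ for a constant $\psi>-1$, then $\Delta L=\psi$ at the jump times of $N$, $\langle L^c\rangle\equiv 0$, and a direct calculation yields
\begin{equation*}
h^{(0)}(L,\mathbb F)_t=\lambda t\left(\psi-\ln(1+\psi)\right)\quad\mbox{and}\quad h^{(E)}(L,\mathbb F)_t=\lambda t\left((1+\psi)\ln(1+\psi)-\psi\right).
\end{equation*}
Both processes vanish when $\psi=0$ and behave like $\lambda t\psi^2/2$ for small $\psi$; an expression of the same form appears in condition (\ref{sufficientcondjump}) below.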
The following theorem, which is the main contribution of this section, characterizes completely and in various manners the existence of the log-optimal portfolio for $(S-S^{\tau},\mathbb{G})$.
\begin{theorem}\label{existencetheorem}
Suppose (\ref{Assumptions4Tau}) holds, and consider $m^{(1)}$ and ${S}^{(1)}$ defined in (\ref{m(a)}). Then the following assertions are equivalent.\\ {\rm{(a)}} The log-optimal portfolio for $(S-S^{\tau}, \mathbb G)$ exists, and $\widetilde{\varphi}^{\mathbb G}$ denotes its portfolio rate. \\
{\rm{(b)}} There exist $K^{\mathbb F} \in {\mathcal M}_{loc} (\mathbb F)$ and a RCLL, nondecreasing, and $\mathbb F$-predictable process $V$ such that
\begin{equation}\label{Condition4Existence0}
K^{\mathbb F}_0 = V_0 = 0,\quad Z^{\mathbb F} := {\cal E}(K^{\mathbb F})\exp (-V) \in {\cal D}({S}^{(1)}, \mathbb F)\end{equation}
and
\begin{equation}\label{existencecond1}
E \left [ (1-\widetilde{G}) \bigcdot \left(H^{(0)} (K^{\mathbb F},\mathbb F)+V+h^{(E)}( m^{(1)},\mathbb F)+\langle K^{\mathbb F}, m^{(1)} \rangle^{\mathbb F}\right)_T\right]<+\infty.
\end{equation}
{\rm{(c)}} There exists a unique solution to (\ref{dualaftertau}), i.e. there exists a unique $\widetilde{Z}^{\mathbb G} \in {\cal D}(S-S^{\tau}, \mathbb G)$ such that
\begin{equation}
\inf_{Z^{\mathbb G} \in {\cal D}(S-S^{\tau}, \mathbb G) } E \left[-\ln (Z^{\mathbb G}_T)\right ] =E \left [ -\ln (\widetilde{Z}_{T}^{\mathbb G})\right] < +\infty.
\end{equation}
{\rm(d)} There exists a unique $\widetilde{Z}^{\mathbb F} \in {\cal D}({S}^{(1)}, \mathbb F)$ such that
\begin{equation}
\inf_{Z^{\mathbb G} \in {\cal D}(S-S^{\tau}, \mathbb G) } E \left[-\ln (Z^{\mathbb G}_T)\right ] = E \left [-\ln \left(\dfrac{\widetilde{Z}_T^{\mathbb F}/\widetilde{Z}_{T \wedge \tau}^{\mathbb F}}{{\cal E}_T(I_{]\!] \tau, \infty [\![} \bigcdot {m}^{(1)})}\right)\right ].
\end{equation}
Furthermore, when the triplet $(\widetilde{\varphi}^{\mathbb G},\widetilde{Z}^{\mathbb G}, \widetilde{Z}^{\mathbb F})$ exists, it satisfies the following:
\begin{equation}
{\cal E}(\widetilde{\varphi}^{\mathbb G} \bigcdot (S-S^{\tau})) = \dfrac{1}{\widetilde{Z}^{\mathbb G}} = \dfrac{(\widetilde{Z}^{\mathbb F})^{\tau} {\cal E}(I_{]\!] \tau , \infty [\![} \bigcdot {m}^{(1)})} {\widetilde{Z}^{\mathbb F}}.
\end{equation}
\end{theorem}
The proof of Theorem \ref{existencetheorem} relies on the following lemma, which is interesting in itself.
\begin{lemma}\label{deflator4hellinger}
Suppose that (\ref{Assumptions4Tau}) holds, and let $m^{(1)}$ and ${S}^{(1)}$ be given by (\ref{m(a)}).
Then the following assertions hold.\\
{\rm{(a)}} Both processes $(1-G_{-})^{-2}I_{]\!] \tau,\infty [\![} \bigcdot \langle m \rangle^{\mathbb F}$ and $H^{(0)} (m^{(1)},\mathbb{F})$ are ${\mathbb F}$-locally integrable, and
\begin{eqnarray}
\left ( {I_{]\!] \tau, \infty [\![} \over (1-G_{-})^2} \bigcdot \langle m \rangle^{\mathbb F} -I_{]\!] \tau, +\infty [\![} \bigcdot H^{(0)} (m^{(1)}, {\mathbb F}) \right )^{p, \mathbb F} =(1-G_{-}) \bigcdot h^{(E)}(m^{(1)},\mathbb F).\label{m(1)2Hellinger}
\end{eqnarray}
{\rm{(b)}} For any $Z^{\mathbb{G}}\in {\cal{D}}_{log}(S-S^{\tau},\mathbb{G})$, there exists $Z^{\mathbb{F}}\in {\cal{D}}(S^{(1)},\mathbb{F})$ such that
\begin{equation*}
E\left[-\ln\left(Z^{\mathbb{G}}_T\right)\right]\geq E\left[-\ln\left({{Z^{\mathbb{F}}_T/Z^{\mathbb{F}}_{T\wedge\tau}}\over{{\cal E}_{T} \left (- I_{]\!] \tau, \infty [\![}(1-G_{-})^{-1} \bigcdot m \right)}}\right)\right].
\end{equation*}
{\rm{(c)}} The following equality holds:
\begin{equation}\label{optimalZF}
\inf_{Z^{\mathbb G} \in {\cal D}(S-S^{\tau}, \mathbb G)} E \left [ - \ln (Z^{\mathbb G}_T) \right] = \inf_{Z^{\mathbb F} \in {\cal D}({S}^{(1)}, \mathbb F)} E \left[ -\ln \left (\frac{Z_T^{\mathbb F}/ Z_{T \wedge \tau}^{\mathbb F}}{{\cal E}_{T} \left (- I_{]\!] \tau, \infty [\![}(1-G_{-})^{-1} \bigcdot m \right)} \right ) \right].
\end{equation}
{\rm{(d)}}
If $Z^{\mathbb F} \in {\cal D}({S}^{(1)}, \mathbb F)$ is such that $Z^{\mathbb F}/\left((Z^{\mathbb F} )^{\tau}{\cal E}(I_{]\!] \tau, \infty [\![} \bigcdot {m}^{(1)})\right)\in {\cal D}_{log}(S-S^{\tau}, \mathbb G)$, then there exist $K^{\mathbb F}\in{\cal M}_{loc}(\mathbb F)$ and a nondecreasing and $\mathbb F$-predictable process $V$ such that
$$K^{\mathbb F}_0 = V_0 = 0,\quad Z^{\mathbb F} = {\cal E}(K^{\mathbb F})\exp (-V),$$
and
\begin{eqnarray}
&& E \left[ -\ln \left (\frac{Z_T^{\mathbb F}/Z_{T\wedge \tau}^{\mathbb F}}{{\cal E}_{T} \left (I_{]\!] \tau, \infty [\![} \bigcdot {m}^{(1)} \right)}\right )\right] \nonumber \\
&&= E \left [(1-\widetilde{G})\bigcdot \left(V+h^{(E)}(m^{(1)},\mathbb F)+ H^{(0)} (K^{\mathbb F},\mathbb F)+\langle K^{\mathbb F}, m^{(1)} \rangle^{\mathbb F} \right)_T\right].\label{fromZF2KV}
\end{eqnarray}
\end{lemma}
The proof of this lemma is relegated to Appendix \ref{proof4lemmas}, while below we give the proof of Theorem \ref{existencetheorem}.
\begin{proof}[Proof of Theorem \ref{existencetheorem}] The proof of (a)$\iff$(b) follows immediately from \cite[Theorem 3.2]{ChoulliYansori1} applied to the model $(X,\mathbb{H})=(S-S^{\tau},\mathbb{G})$. If assertion (c) holds, then by combining (\ref{optimalZF}) with Lemma \ref{deflator4hellinger}-(b) applied to $\widetilde{Z}^{\mathbb{G}}$, assertion (d) follows immediately. Hence, this proves (c) $\Longrightarrow$ (d). It is clear that if assertion (d) holds, then the solution to (\ref{dualaftertau}) exists and takes the form of $\widetilde{Z}^{\mathbb{G}}=\dfrac{\widetilde{Z}^{\mathbb F}/(\widetilde{Z}^{\mathbb F})^{\tau}}{{\cal E}(I_{]\!] \tau, \infty [\![} \bigcdot {m}^{(1)})}$, while the uniqueness of the solution is due to the strict concavity of the logarithm function. This proves (d) $\Longrightarrow$ (c). Thus, the rest of this proof focuses on proving (c) $\Longleftrightarrow$ (b). To this end, we recall that, in virtue of \cite[Theorem 3.2]{ChoulliYansori1}, the problem (\ref{dualaftertau}) admits a solution if and only if ${\cal D}_{log}(S-S^{\tau}, \mathbb G)\not=\emptyset$, or equivalently $\inf_{Z^{\mathbb G} \in {\cal D}(S-S^{\tau}, \mathbb G)} E \left[ -\ln (Z_T^{\mathbb G})\right ] < +\infty.$ Thus, when assertion (b) holds, the equality (\ref{fromZF2KV}) in Lemma \ref{deflator4hellinger} allows us to conclude that assertion (c) holds, and the proof of (b) $\Longrightarrow$ (c) is complete. To prove the reverse, we assume that assertion (c) holds and consider ${Z}^{\mathbb G} \in {\cal D}_{log} (S-S^{\tau}, \mathbb G)\subset {\cal D} (S-S^{\tau}, \mathbb G)$. Thus, Theorem \ref{GeneralDefaltorDescription4afterTau} guarantees the existence of $Z^{\mathbb F} \in {\cal D} (S^{(1)}, \mathbb{F})$ such that
$$
{Z}^{\mathbb G}={{Z^{\mathbb F}/(Z^{\mathbb F})^{\tau}}\over{{\cal E}\left(I_{]\!] \tau, \infty [\![} \bigcdot {m}^{(1)}\right)}}\in {\cal D}_{log} (S-S^{\tau}, \mathbb G).$$
Therefore, by applying Lemma \ref{deflator4hellinger}-(d) to $Z^{\mathbb F}$, assertion (b) follows immediately. This proves (c) $\Longrightarrow$ (b), and ends the proof of the theorem.
\end{proof}
In the rest of this section, we discuss a direct consequence of Theorem \ref{existencetheorem}, which is important in itself. It is in fact an application of Theorem \ref{existencetheorem} that gives a sufficient condition, in terms of an information-theoretic concept, for $(S-S^{\tau}, \mathbb G)$ to admit the log-optimal portfolio when $({S}^{(1)},\mathbb{F})$ does already.
\begin{theorem}\label{sufficientcond}
Suppose that (\ref{Assumptions4Tau}) holds, and $({S}^{(1)}, \mathbb F)$ admits the log-optimal portfolio. Then the log-optimal portfolio for $(S-S^{\tau}, \mathbb G)$ exists if
\begin{equation}\label{existencecond2}
E \Bigl[ \int_0^T (1- G_{s-})dh_s^{(E)}(m^{(1)}, \mathbb{F})\Bigr] < +\infty.
\end{equation}
\end{theorem}
\begin{proof}
On the one hand, thanks to a combination of \cite[Theorem 2.1]{ChoulliYansori1} and \cite[Proposition B.2]{ChoulliYansori2}, we deduce that $({S}^{(1)}, \mathbb F)$ admits the log-optimal portfolio if and only if there exist $K^{\mathbb F} \in {\cal M}_{loc}(\mathbb F)$ and a RCLL, nondecreasing, and $\mathbb F$-predictable process $V$ such that
\begin{equation*}
K^{\mathbb F}_0= V_0 = 0,\ Z^{\mathbb F} := {\cal E}(K^{\mathbb F})e^{-V} \in {\cal D}({S}^{(1)}, \mathbb F)\ \mbox{and}\ E \left [ -\ln (Z_T^{\mathbb{F}}) \right ] = E \left [ V_T + H^{(0)}_T (K^{\mathbb F}, \mathbb F)\right ] < + \infty .
\end{equation*}
On the other hand, thanks to \cite[Lemma B.1]{ChoulliYansori2} and the condition $E[H^{(0)}_T (K^{\mathbb F}, \mathbb F)]<+\infty$, we deduce that $\sup_{0 \leq t \leq T} \vert K_t^{\mathbb F} \vert \in L^{1}(P)$ (or equivalently $E[[K^{\mathbb F} ,K^{\mathbb F}]^{1/2}_T]<+\infty$). A combination of this condition with the fact that $m$ is a BMO $\mathbb F$-martingale clearly implies that $\langle K^{\mathbb F}, m \rangle ^{\mathbb F}$ has integrable variation, and hence so does $(1-\widetilde{G})\bigcdot \langle K^{\mathbb F}, m^{(1)} \rangle ^{\mathbb F}$. Therefore, in virtue of this latter fact and Theorem \ref{existencetheorem}, we conclude that condition \eqref{existencecond2} is sufficient for the existence of the log-optimal portfolio for $(S-S^{\tau}, \mathbb G)$ as soon as $({S}^{(1)}, \mathbb F)$ admits a log-optimal portfolio.
This ends the proof of the theorem.
\end{proof}
\section{Log-optimal portfolio for $(S-S^{\tau}, \mathbb G)$: Description and sensitivity}\label{Section4}
This section describes explicitly the log-optimal portfolio for the model $(S-S^{\tau},\mathbb{G})$, when it exists, using the $\mathbb{F}$-parameters of the pair $(S, \tau)$. Thanks to \cite[Theorem 2.1]{ChoulliYansori1} (see also \cite[Theorem 5.2]{ChoulliYansori2} for the case $(S^{\tau},\mathbb{G})$), this task is feasible no matter what the initial model $(S,\mathbb{F},P)$ is, due to the statistical tools called {\it the predictable characteristics} of semimartingales. For more details about these tools, we refer the reader to \cite[Chapters III and IV]{jacod79} and \cite[Section \RN{2}.2]{jacodshiryaev}, and for their applications we refer to \cite{ChoulliYansori1,ChoulliYansori2,ChoulliStricker2005,ChoulliStricker2006,ChoulliStricker2007} and the references therein, to cite a few. Thus, the rest of this paragraph parametrizes the pair $(S,\tau)$ via predictable characteristics, which are $\mathbb{F}$-observable. Throughout the rest of the paper, on $\Omega \times [0, + \infty ) \times {\mathbb R}^d$, we consider the $\sigma$-algebras
\begin{equation*}
\widetilde{\cal O}(\mathbb F) := {\cal O}(\mathbb F) \otimes {\cal B}({\mathbb R}^d) \quad\mbox{and}\quad \widetilde{\cal P}(\mathbb F) := {\cal P}(\mathbb F) \otimes {\cal B}({\mathbb R}^d),
\end{equation*}
where ${\cal B}({\mathbb R}^d)$ is the Borel $\sigma$-field on ${\mathbb R}^d$. \\
{\bf Parametrization of $S$ and $S^{(1)}$:} The random measure associated to the jumps of $S$, denoted by $\mu$, is given by
\begin{equation*}
\mu(dt, dx):= \sum_{s>0} I_{\{\Delta S_s \neq 0\}} \delta_{(s, \Delta S_s)}(dt, dx).
\end{equation*}
For a product-measurable functional $W\geq0$ on $\Omega \times [0, + \infty ) \times {\mathbb R}^d$, we denote by $W \star \mu$ the process
\begin{equation}
(W \star \mu)_t := \int_0^t \int_{{\mathbb R}^d \setminus \{0\}} W(u, x) \mu(du, dx) = \sum_{0<u\leq t} W(u, \Delta S_u) I_{\{\Delta S_u \neq 0\}}.
\end{equation}
Thus, on $\Omega \times [0, + \infty ) \times {\mathbb R}^d$, we define the $\sigma$-finite measure $M^{P}_{\mu} := P \otimes \mu$ by
\begin{equation*}
\int W dM^{P}_{\mu} := E\left ( W \star \mu_{\infty} \right ).
\end{equation*}
The $\mathbb{F}$-compensator of $\mu$ is the random measure $\nu$ defined by $E \left ( W \star \mu_{\infty}\right ) = E \left ( W \star \nu_{\infty} \right )$, for each nonnegative $\widetilde{{\mathcal P}}(\mathbb F)$-measurable $W$. Then, by \cite[Theorem 2.34]{jacodshiryaev} and for the fixed truncation function $h(x):= x I_{\{\vert x \vert \leq 1\}}$, the so-called ``canonical representation''
of $S$ is given by the following decomposition:
\begin{equation}\label{CanonicalDecomposition4S}
S = S_0 + S^c + h(x) \star (\mu - \nu) + b \bigcdot A +(x-h(x)) \star \nu.
\end{equation}
where $S^c$ is the continuous local martingale part of $S$, and $h \star (\mu -\nu)$ is the unique pure jump $\mathbb F$-local martingale whose jumps are given by $h(\Delta S)I_{\{\Delta S \neq 0\}} - {^{p, \mathbb{F}}}(h(\Delta S) I_{\{\Delta S \neq 0\}})$. For $\nu$ and the matrix $C$ with entries $C_{ij} := \langle S^{c,i}, S^{c,j} \rangle$, we can find versions satisfying
\begin{equation*}
C = c \bigcdot A, \quad \nu(dt, dx) = dA_t F_t(dx), \quad F_t\left(\{0\} \right)=0, \quad \int (\vert x \vert^2 \wedge 1)F_t(dx) \leq 1.
\end{equation*}
where $A$ is increasing and continuous, $b$ and $c$ are predictable processes, and $F_t(dx)$ is a predictable kernel; $b_t(\omega)$ is a vector in ${\mathbb R}^d$ and $c_t(\omega)$ is a symmetric $d \times d$-matrix, for all $(\omega,t) \in \Omega \times [0,+\infty)$. The quadruplet $(b, c, F, A)$ is called the predictable characteristics of $S$.
Throughout the rest of the paper, we define
\begin{equation}
\widehat{W}_t := \int W(t, x) \nu(\{t\}, dx), \quad a_t:= \widehat{1}_t= \nu(\{t\}, {\mathbb R}^d),
\end{equation}
for any predictable functional $W$ such that the above integral exists. As $S^{(1)}=I_{\{G_{-}<1\}}\bigcdot {S}$, the random measure $\mu_1$ of its jumps and its compensator random measure $\nu_1$ are given by
\begin{equation}\label{mu1nu1}
\mu_1(dt,dx):=I_{\{G_{t-}<1\}}\mu(dt,dx)\quad\mbox{and}\quad \nu_1(dt,dx):=I_{\{G_{t-}<1\}}\nu(dt,dx).
\end{equation}
Hence, it is easy to check that the predictable characteristics $(b^{(1)}, c^{(1)}, F_1, A^{(1)})$ of $S^{(1)}$ are given by
\begin{equation}\label{Characteristics4S1}
A^{(1)}:=I_{\{G_{-}<1\}}\bigcdot A,\quad F_1(t,dx):=I_{\{G_{t-}<1\}}F(t,dx),\quad b^{(1)}=b,\quad c^{(1)}=c.\end{equation}
{\bf Parametrization of $\tau$:} Thanks to \cite[Theorem 3.75]{jacod79} and \cite[Lemma 4.24]{jacodshiryaev}, we will consider Jacod's decomposition for the $\mathbb F$-martingale $m$ as follows
\begin{equation}\label{Decomposition4m}
{m} = \beta^{(m)} \bigcdot S^c + U^{(m)} \star (\mu-\nu) + g^{(m)}\star \mu + m^{\perp}, \quad U^{(m)} := f^{(m)} + \dfrac{\widehat{f^{(m)}}}{1-a}I_{\{a<1\}}.
\end{equation}
For the sake of simplifying the formulas, throughout the rest of the paper, we consider the functionals
\begin{equation}\label{Proceses(m,1)}
\begin{cases}\beta^{(m,1)} :=\beta^{(m)}(1-G_{-})^{-1}I_{\{G_{-}<1\}},\quad f^{(m,1)} :=f^{(m)}(1-G_{-})^{-1}I_{\{G_{-}<1\}},\cr\\
g^{(m,1)} :=g^{(m)}(1-G_{-})^{-1}I_{\{G_{-}<1\}},\quad m^{(\perp,1)} :=I_{\{G_{-}<1\}}(1-G_{-})^{-1}\bigcdot {m}^{\perp},\end{cases}
\end{equation}
and the following function
\begin{equation}\label{klog}
{\mathcal K}_{log}(y) := \dfrac{-y}{1+y}+\ln (1+y) \quad \mbox{for any} \quad y>-1.
\end{equation}
The rest of this section is divided into three subsections. The first subsection elaborates our main results, and discusses their applications and financial interpretations. The second subsection illustrates these main results on the case where $(S,\mathbb{F})$ follows a jump-diffusion model. The last subsection proves the main results of the first subsection.
\subsection{Main results and their applications and interpretations}
In this subsection, we start, in the following theorem, by describing completely and as explicitly as possible the log-optimal portfolio of $(S-S^{\tau},\mathbb{G})$ using the parameters of the pair $(S,\tau)$, which are $\mathbb{F}$-observable. This allows us to single out, with sharp precision, the various risks induced by $\tau$ that really affect the existence and the structure of the log-optimal portfolio.
\begin{theorem}\label{generalpredictable}
Suppose (\ref{Assumptions4Tau}) holds, and let $(\beta^{(m,1)},f^{(m,1)})$ and ${\mathcal K}_{log}$ be given by (\ref{Proceses(m,1)}) and \eqref{klog} respectively. Then the following assertions are equivalent. \\
{\rm{(a)}} The log-optimal portfolio $\widetilde{\theta}^{\mathbb G}$ for $(S-S^{\tau}, \mathbb G)$ exists (i.e. ${\cal D}_{log}(S-S^{\tau}, \mathbb G) \neq \emptyset$). \\
{\rm{(b)}} There exists $\widetilde{\varphi} \in {\cal L}(S^{(1)}, \mathbb F)$ such that
\begin{equation}
(\theta - \widetilde{\varphi})^{tr}\left\{b- c(\beta^{(m,1)}+\widetilde{\varphi})+ \int \left ( \dfrac{1-f^{(m,1)}(x)}{1+\widetilde{\varphi}^{tr}x}x- h(x)\right)F_1(dx)\right\} \leq 0,\ P\otimes A^{(1)}\mbox{-a.e.},\label{G1}\end{equation}
for any $\theta \in {\cal L}(S^{(1)}, \mathbb F)$, and
\begin{equation} E \left[(1-G_{-}) \bigcdot \left( \widetilde{V}^{(1)}+\widetilde{\varphi}^{tr} c {\widetilde \varphi} \bigcdot A^{(1)}+{\mathcal K}_{log} ({\widetilde \varphi}^{tr} x)(1-f^{(m,1)})\star \nu_1 \right)_T \right ] < + \infty. \label{G2}\end{equation}
Here
\begin{equation} \widetilde{V}^{(1)} := \widetilde{\varphi}^{tr} \left(b-c (\beta^{(m,1)}+ \widetilde{\varphi})\right)\bigcdot A^{(1)} + \left [ \dfrac{(1-f^{(m,1)} (x))\widetilde{\varphi}^{tr} x }{1+\widetilde{\varphi}^{tr} x} - \widetilde{\varphi}^{tr}h(x)\right]\star \nu_1. \label{V(1)}\end{equation}
Furthermore, when they exist, the processes $\widetilde{\theta}^{\mathbb G}$, $\widetilde{\varphi}$, $\widetilde{Z}^{\mathbb G}$, and the solution $\widetilde{Z}^{\mathbb F} \in {\cal D}({S}^{(1)}, \mathbb F)$ to the minimization of the right-hand-side term of \eqref{optimalZF} are related to each other via the following:
\begin{align}
& \widetilde{\theta}^{\mathbb G} \left ( 1+ ( \widetilde{\theta}^{\mathbb G} \bigcdot (S-S^{\tau}))_{-}\right)^{-1} = \widetilde{\varphi} \quad P\otimes A\mbox{-a.e.} \quad \mbox{on} \quad ]\!] \tau , \infty [\![, \label{Equation4.12}\\
& \widetilde{Z}^{\mathbb G} = {\cal E}(\widetilde{K}^{\mathbb G}){\cal E}(-I_{]\!] \tau, \infty [\![} \bigcdot \widetilde{V}^{(1)}) = \dfrac{{\cal E}(I_{]\!] \tau, \infty [\![} \bigcdot \widetilde{K}^{\mathbb F}){\cal E}(-I_{]\!] \tau, \infty [\![} \bigcdot \widetilde{V}^{(1)})}{{\cal E}(I_{]\!] \tau, \infty [\![} \bigcdot {m}^{(1)} )} = \dfrac{\widetilde{Z}^{\mathbb F}/(\widetilde{Z}^{\mathbb F})^{\tau}}{{\cal E}(I_{]\!] \tau, \infty [\![}\bigcdot {m}^{(1)} )},\label{Equation4.13}\\
& \widetilde{K}^{\mathbb G}:= - \widetilde{\varphi} \bigcdot {\cal T}^{(a)}(S^c) - \dfrac{\widetilde{\Gamma}^{(1)}\widetilde{\varphi}^{tr}x}{1+ \widetilde{\varphi}^{tr}x} I_{]\!] \tau, \infty [\![} \star (\mu - (1-f^{(m,1)}) \bigcdot \nu), \label{Equation4.14}\\
& \widetilde{K}^{\mathbb F} := - \widetilde{\varphi} I_{\{G_{-}<1\}}\bigcdot S^{c} - \widetilde{\Gamma}^{(1)}\bigcdot {m}^{(1)} - \dfrac{\widetilde{\Gamma }^{(1)} (1-f^{(m,1)}) \widetilde{\varphi}^{tr} x}{1+\widetilde{\varphi}^{tr}x} \star (\mu_1 -\nu_1) + \dfrac{\widetilde{\Gamma }^{(1)} (\widetilde{\varphi}^{tr} x)g^{(m,1)}}{1+\widetilde{\varphi}^{tr}x} \star \mu_1, \label{KF}\\
& \widetilde{\Gamma }^{(1)} := \left ( 1- a + \widehat{f^{(op)}} + \widehat{f^{(m,1)}} - \reallywidehat{f^{(op)}f^{(m,1)}} \right )^{-1}, \quad \quad f^{(op)}(t,x):= (1+ \widetilde{\varphi}_t^{tr}x)^{-1}. \label{Gamma(1)andf(op)}
\end{align}
\end{theorem}
The proof of this theorem is relegated to Subsection \ref{Subsection4Proofs}, for the sake of keeping this subsection less technical, while the theorem conveys two principal results that we discuss herein. The first core result, which is the equivalence between assertions (a) and (b), gives a complete and explicit characterization of the log-optimal portfolio rate. Hence, this result simultaneously gives necessary and sufficient conditions on the pair $(S,\tau)$ such that the model $(S-S^{\tau},\mathbb{G})$ admits the log-optimal portfolio. It is important to mention that this first result, as pointed out in the introduction, in virtue of \cite[Theorem 2.1]{ChoulliYansori1}, also gives a complete and explicit characterization, via (\ref{G1}), of the num\'eraire portfolio rate of $(S-S^{\tau},\mathbb{G})$. This equation shows that the num\'eraire portfolio (and hence the log-optimal portfolio) is impacted by ``the correlation'' between $\tau$ and $S$ only.
The second principal result of Theorem \ref{generalpredictable} lies in the meaning of (\ref{Equation4.12})-(\ref{Equation4.13}). On the one hand, (\ref{Equation4.12}) and the first equality of (\ref{Equation4.13}) explain the exact duality relationship between the log-optimal wealth and the log-optimal deflator solution to (\ref{dualaftertau}). In fact, it is easy to check that these equalities yield $1+ \widetilde{\theta}^{\mathbb G} \bigcdot (S-S^{\tau})={\cal{E}}(\widetilde\varphi{I}_{]\!] \tau, \infty [\![} \bigcdot {S})=1/{\widetilde{Z}^{\mathbb{G}}}$. On the other hand, the second and third equalities in (\ref{Equation4.13}) convey the structures of the optimal deflator solution to (\ref{dualaftertau}). In virtue of the $\mathbb{G}$-martingale decomposition in \cite{ChoulliAlharbi} (see Theorem \ref{GeneralDefaltorDescription4afterTau}), these structures were expected, and the novelty herein resides in giving a deep description of which part of $\tau$ plays the central role and how it does so.\\
One of the important applications of Theorem \ref{generalpredictable} is the quantification of the impact of $\tau$ on the maximum expected logarithm utility problem. To this end, we calculate {\it the increment in maximum expected logarithm utility from terminal wealth} between the models $(S-S^{\tau},\mathbb{G})$ and $(S,\mathbb{F})$ in the following theorem. More importantly, we single out the main factors intrinsic to $\tau$ which really measure the sensitivity of the log-optimal portfolio to $\tau$, and we quantify and interpret these factors afterwards.
\begin{theorem}\label{riskfactors}
Suppose that (\ref{Assumptions4Tau}) is satisfied, the log-optimal portfolio rate $\widetilde{\lambda}$ for $(S, \mathbb F)$ exists, and \eqref{existencecond2} holds. Then there exists $\widetilde{\varphi} \in {\cal L}(S, \mathbb F)$ satisfying \eqref{G1}, and the following equalities hold
\begin{align}
& \Delta_T (S, \tau, \mathbb F):= u_T(S-S^{\tau}, \mathbb G) - u_T(S, \mathbb F) \nonumber \\
& = - \underbrace{E \left [- ((1-\widetilde{G}) \bigcdot \widetilde{\cal H}(\mathbb G))_T +((1-\widetilde{G}) \bigcdot \widetilde{\cal H}(\mathbb F))_T - \langle \widetilde{K}^{\mathbb F} -\widetilde{L}^{\mathbb F}, I_{\{G_{-}<1\}}\bigcdot {m} \rangle_T^{\mathbb F} \right ]}_{\mbox{correlation-risk-after-$\tau$}} \label{Equation4.17} \\
& - \underbrace{E \left [ (\widetilde{G} \bigcdot \widetilde{\cal H}(\mathbb F))_T \right]}_{\mbox{cost-of-late-investment}} + \underbrace{E \left[ \langle \widetilde{L}^{\mathbb F}, I_{\{G_{-}<1\}}\bigcdot {m} \rangle_T^{\mathbb F} \right ]}_{\mbox{NP($\mathbb{F}$)-correlation}} + \underbrace{E\left[ (1-G_{-}) \bigcdot h^{(E)} \left ( m^{(1)}, \mathbb F\right )_T \right]}_{\mbox{information-premium-after-$\tau$}}, \nonumber\\
& = - \underbrace{E \left [ (\widetilde{G} \bigcdot \widetilde{\cal H}(\mathbb F))_T \right]}_{\mbox{cost-of-late-investment}}+ \underbrace{E \left [\int_0^T {\cal P}_t^{(N,1)} dA^{(1)}_t \right ]}_{\mbox{num\'eraire-change-premium}} + \underbrace{E \left[ \langle \widetilde{L}^{\mathbb F}, I_{\{G_{-}<1\}}\bigcdot {m} \rangle_T^{\mathbb F} \right ]}_{\mbox{NP($\mathbb{F}$)-correlation}}.
\label{Equation4.18} \end{align}
Here $\widetilde{K}^{\mathbb F}$ is given by \eqref{KF}, and ${\cal P}^{(N,1)}$, $\widetilde{L}^{\mathbb F}$, $\widetilde{\cal H}(\mathbb G)$, $\widetilde{\cal H}(\mathbb F)$ are given by
\begin{align}
{\cal P}_t^{(N,1)}& := (1-G_{t-})\left\{(\widetilde{\varphi}_t - \widetilde{\lambda}_t)^{tr} b_t - (\widetilde{\varphi}_t - \widetilde{\lambda}_t)^{tr} c_t\beta^{(m,1)}_t - \frac{1}{2} \widetilde{\varphi}_t^{tr} c_t \widetilde{\varphi}_t + \frac{1}{2} \widetilde{\lambda}_t^{tr} c_t \widetilde{\lambda}_t \right\}\nonumber \\
& +(1-G_{t-}) \int \left((1-f^{(m,1)}(t, x))\ln \left( \frac{1+\widetilde{\varphi}^{tr}_t x}{1+\widetilde{\lambda}^{tr}_t x}\right) -(\widetilde{\varphi}_t - \widetilde{\lambda}_t)^{tr} h(x) \right) F_1(t,dx),\label{Equation4.19} \\
\widetilde{L}^{\mathbb F} & := -\widetilde{\lambda} \bigcdot S^{c} - \frac{\widetilde{\Xi} \widetilde{\lambda}^{tr} x}{1+\widetilde{\lambda}^{tr}x} \star(\mu -\nu), \quad \quad \widetilde{\Xi}_t^{-1} := 1 -a_t + \int \frac{\nu({\{t\}}, dx)}{1+\widetilde{\lambda}^{tr}x}, \label{Equation4.20} \\
\widetilde{\cal H}(\mathbb G) & := \widetilde{V}^{(1)} + \sum \left ( - \Delta \widetilde{V}^{(1)} - \ln (1- \Delta \widetilde{V}^{(1)}) \right ) + H^{(0)} (\widetilde{K}^{\mathbb F}, \mathbb F),\label{Equation4.21} \\
\widetilde{\cal H}(\mathbb F) & := \widetilde{V}^{\mathbb F} + \sum \left (-\Delta \widetilde{V}^{\mathbb F} - \ln (1- \Delta \widetilde{V}^{\mathbb F}) \right ) + H^{(0)} (\widetilde{L}^{\mathbb F}, \mathbb F), \label{Equation4.22}
\end{align}
while $\widetilde{V}^{(1)}$ and $\widetilde{V}^{\mathbb F}$ are defined by \eqref{V(1)} and
\begin{equation}\label{Equation4.23}
\widetilde{V}^{\mathbb F} := \left (\widetilde{\lambda}^{tr} b -\widetilde{\lambda}^{tr} c\widetilde{\lambda} \right ) \bigcdot A + \left (\dfrac{\widetilde{\lambda}^{tr} x}{1+ \widetilde{\lambda}^{tr} x} - \widetilde{\lambda}^{tr} h \right) \star \nu.
\end{equation}
\end{theorem}
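As a quick consistency check of (\ref{Equation4.18}), at least at a formal level: when $\tau$ is independent of $\mathbb F$, so that $m\equiv1$ as in the example of Section \ref{Section2}, we get $m^{(1)}\equiv0$ and $\beta^{(m,1)}\equiv f^{(m,1)}\equiv0$. Then (\ref{G1}) reduces, on $\{G_{-}<1\}$, to the equation characterizing $\widetilde{\lambda}$, so that $\widetilde{\varphi}=\widetilde{\lambda}$ there; hence the num\'eraire-change premium ${\cal P}^{(N,1)}$ vanishes, and so does the NP($\mathbb{F}$)-correlation term, since $m$ is constant. Therefore $\Delta_T(S,\tau,\mathbb F)=-E[(\widetilde{G} \bigcdot \widetilde{\cal H}(\mathbb F))_T]\leq 0$: in the absence of any correlation between $\tau$ and $S$, the agent who starts investing at $\tau$ only loses the expected utility that would have accrued from investing optimally before $\tau$.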
\subsection{The case when $(S, \mathbb F)$ is a jump-diffusion model}\label{Subsection4Example}
In this subsection, we illustrate the results of Sections 3 and 4 on the jump-diffusion market model.
To this end, we consider the case of a one-dimensional jump-diffusion framework for the market model $(S,\mathbb F, P)$. Precisely, we suppose that on $(\Omega, {\cal{F}},P)$ a one-dimensional Brownian motion $W$ and a Poisson process $N$ with rate $\lambda>0$ are defined such that $W$ and $N$ are independent. Then $\mathbb F$ is the completed and right-continuous filtration generated by $(W,N)$, and the stock's price $S$ is given by
\begin{equation} \label{jumpmodel}
S_t = S_0 + \int_0^t S_{s-} \mu_s ds + (S_{-} \sigma \bigcdot W)_t + (S_{-} \zeta\bigcdot N^{\mathbb F})_t, \quad \quad N^{\mathbb F}:= N - \lambda t.
\end{equation}
Here, ${N}^{\mathbb F}$ is the compensated Poisson martingale, and $\mu$, $\sigma$, and $\zeta$ are bounded adapted processes, and there exists a constant $\delta \in (0,+\infty)$ such that
\begin{equation}\label{coefficientscond}
\zeta>-1\quad\mbox{and}\quad \sigma+\vert\zeta\vert\geq \delta,\ P\otimes dt\mbox{-a.e.}.
\end{equation}
As $\mathbb{F}={\mathbb{F}}^{(W,N)}$ and $m$ is an $\mathbb{F}$-martingale, there exists a pair $(\varphi^{(m)},\psi^{(m)})$ of $\mathbb{F}$-predictable processes such that
\begin{equation}\label{Phi(m)Psi(m)}
m=m_0+\varphi^{(m)}\bigcdot W+\psi^{(m)}\bigcdot {N}^{\mathbb F},\quad\mbox{and}\quad \int_0^T \left((\varphi^{(m)}_s)^2+(\psi^{(m)}_s)^2\right)ds<+\infty\quad P\mbox{-a.s.}.\end{equation}
Thus, in this framework, $\tau$ is parametrized by $(\varphi^{(m)},\psi^{(m)}, G_{-})$. However, when dealing with $(S-S^{\tau},\mathbb{G})$, as shown previously, only the triplet $(\varphi^{(m,1)},\psi^{(m,1)}, 1-G_{-})$ defined below appears naturally.
\begin{equation}\label{Psi(m,1}
\varphi^{(m,1)}:={{\varphi^{(m)}}\over{1-G_{-}}}I_{\{G_{-}<1\}},\quad \psi^{(m,1)}:={{\psi^{(m)}}\over{1-G_{-}}}I_{\{G_{-}<1\}}.\end{equation}
\begin{theorem}\label{logportfolio-jumpmodel}
Suppose (\ref{Assumptions4Tau}) holds, $S$ is given by (\ref{jumpmodel})-(\ref{coefficientscond}), and $\mathbb F = \mathbb F^{(W, N)}$. Consider
\begin{equation}\label{PhiTilde-LamdaTilde}
\widetilde{\varphi}:= \dfrac {\zeta \Lambda_m + \vert \zeta \vert \sqrt{\Lambda^2_m + 4 \sigma^2 \lambda (1+\psi^{(m,1)})}-2\sigma^2}{2 \sigma^2 \zeta S_{-}} {I}_{\{G_{-}<1\}}\ \&\
\widetilde{\lambda}:= \dfrac {\zeta \Lambda_0 + \vert \zeta \vert \sqrt{\Lambda^2_0 + 4 \sigma^2 \lambda}}{2 \sigma^2 \zeta S_{-}} - \dfrac{1}{\zeta S_{-}} ,
\end{equation}
where $\Lambda_m := \mu -\lambda \zeta - \sigma\varphi^{(m,1)} + \sigma^2\zeta^{-1}$ and $\Lambda_0:= \mu -\lambda \zeta+ \sigma^2\zeta^{-1}$.\\
Then $\widetilde{\varphi}I_{]\!] \tau, \infty [\![}$ is the num\'eraire portfolio rate for $(S-S^{\tau}, \mathbb G)$ and $\widetilde{\lambda}$ is the log-optimal portfolio rate for $(S, \mathbb F)$. If furthermore $\tau$ satisfies
\begin{equation}\label{sufficientcondjump}
E \left [\int_0^T (1-G_{s-}) \left( (\varphi^{(m,1)}_s)^2+ \lambda (1+\psi^{(m,1)}_s)\ln(1+\psi^{(m,1)}_s)-\lambda \psi^{(m,1)}_s\right) ds \right] < +\infty,
\end{equation}
then the following equivalent assertions hold.\\
{\rm{(a)}} The solution of (\ref{dualaftertau}) exists, and it is given by
\begin{equation}
\widetilde{Z}^{\mathbb G}:= {\cal E}(\widetilde{K}^{\mathbb G}), \quad \widetilde{K}^{\mathbb G}:= - \widetilde{\varphi} \sigma \bigcdot {\cal T}^{(a)}(W) + \frac{(\psi^{(m,1)} -1) \widetilde{\varphi}\zeta S_{-}}{1+\widetilde{\varphi} \zeta S_{-}} \bigcdot {\cal T}^{(a)}(N^{\mathbb F}).
\end{equation}
{\rm{(b)}} $\widetilde{\varphi}I_{]\!] \tau, \infty [\![}$ is the log-optimal rate for the model $(S-S^{\tau}, \mathbb G)$.
\end{theorem}
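Before giving the proof, let us mention that the closed-form rate $\widetilde{\lambda}$ of (\ref{PhiTilde-LamdaTilde}) can be sanity-checked numerically. The following Python sketch (an illustration only, with hypothetical constant parameter values; it is not part of the proof) verifies that $\widetilde{\lambda}$ solves the first-order condition derived in Part 1 of the proof below and belongs to ${\cal L}(S,\mathbb F)$.
\begin{verbatim}
import math

# Hypothetical constant parameters (illustration only):
mu, sigma, zeta, lam, S_minus = 0.08, 0.25, 0.4, 2.0, 1.0

# Closed-form log-optimal rate of (S, F) from the theorem:
Lambda0 = mu - lam * zeta + sigma**2 / zeta
lam_tilde = (zeta * Lambda0
             + abs(zeta) * math.sqrt(Lambda0**2 + 4 * sigma**2 * lam)) \
            / (2 * sigma**2 * zeta * S_minus) - 1 / (zeta * S_minus)

# First-order condition solved in Part 1 of the proof:
foc = (mu - lam * zeta - lam_tilde * S_minus * sigma**2
       + lam * zeta / (1 + lam_tilde * zeta * S_minus))

assert abs(foc) < 1e-10                    # lam_tilde is a root
assert 1 + lam_tilde * zeta * S_minus > 0  # lam_tilde lies in L(S, F)
print(lam_tilde, foc)
\end{verbatim}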
\begin{proof} In virtue of (\ref{jumpmodel}), we deduce that $\Delta S = S_{-} \zeta \Delta N$, and hence in this setting we calculate the pair $(\mu, \nu)$ and the predictable characteristics quadruplet $(b,c,F, A)$ for the model $(S,\mathbb{F})$ as follows.
\begin{equation}\label{munu}
\mu(dx, dt):= \delta_{S_{t-} \zeta_{t}}(dx) dN_t,\quad \nu(dx, dt):= \lambda \delta_{S_{t-} \zeta_t} (dx) dt,\end{equation}
where $\delta_a$ is the Dirac mass at point $a$, and
\begin{equation}\label{Characteristic4S}
b= (\mu - \lambda \zeta I_{\{\vert \zeta \vert S_{-} > 1 \}})S_{-}, \quad c = (\sigma S_{-})^2, \quad F_t(dx) = \lambda \delta_{\zeta_t S_{t-}}(dx), \quad A_t = t.\end{equation}
Then, we derive $ {\cal L} (S, \mathbb F)$, which is an open set in $\mathbb R$ (when we fix $(\omega,t)\in \Omega\times[0,+\infty)$) and is
\begin{align}\label{phispace}
{\cal L}(S, \mathbb F) := \{ \varphi\ \mathbb{F}\mbox{-predictable} \ \vert \ \varphi S_{-} \zeta > -1 \ P\otimes dt\mbox{-a.e.}\} = ( -1 / (S_{-} \zeta )^{+}, 1/ (S_{-} \zeta )^{-}),
\end{align}
with the convention $1/0^+= + \infty$. Jacod's components for $m$, i.e. $(\beta^{(m)}, f^{(m)}, g^{(m)}, m^{\perp})$, take in this framework the form of
\begin{equation}\label{Charateristics4m}
(\beta^{(m)}, f^{(m)}, g^{(m)}, m^{\perp})=(\dfrac{\varphi^{(m)}}{\sigma S_{-}},\psi^{(m)}, 0, 0).\end{equation}
The rest of this proof is divided into two parts.\\
{\bf Part 1.} This part proves that $ \widetilde{\lambda}$ is the log-optimal portfolio rate of $(S,\mathbb{F})$.\\
Thanks to \cite[Theorem 2.1]{ChoulliYansori1}, the log-optimal portfolio rate is the unique solution $\widetilde{\psi}\in {\cal L}(S, \mathbb F) $ to
\begin{equation}\label{logOptimalEquation4S}
(\psi - \widetilde{\psi})^{tr}\left\{b- c\widetilde{\psi}+ \int \left ( (1+\widetilde{\psi}^{tr}x)^{-1}x- h(x)\right)F(dx)\right\} \leq 0,\ P\otimes A\mbox{-a.e.},\end{equation}
for any $\psi \in {\cal L}(S, \mathbb F)$, and
\begin{equation} \label{Integrability4PsiTilde}
E \left[(1-G_{-}) \bigcdot \left( \widetilde{V}^{\mathbb{F}}+{\widetilde\psi}^{tr} c {\widetilde\psi} \bigcdot A+{\mathcal K}_{log} ({\widetilde\psi}^{tr} x)\star \nu\right)_T \right ] < + \infty,\end{equation}
where ${\mathcal K}_{log}$ is given by (\ref{klog}) and $\widetilde{V}^{\mathbb{F}}$ is given by
\begin{equation} \label{Vtilde(F)}
\widetilde{V}^{\mathbb{F}} := \widetilde{\psi}^{tr} \left(b-c\widetilde\psi\right)\bigcdot A + \left( {{{\widetilde\psi}^{tr} x }\over{1+\widetilde{\psi}^{tr} x}} - \widetilde{\psi}^{tr}h(x)\right)\star \nu.\end{equation}
Thus, by inserting (\ref{Characteristic4S} ) in (\ref{logOptimalEquation4S}), and using (\ref{phispace}) which claims that $ {\cal L}(S, \mathbb F) $ is an open set of $\mathbb{R}$ point-wise in $(\omega,t)$, we deduce that $\widetilde{\psi}\in {\cal L}(S, \mathbb F)$ is the unique solution to
\begin{align*}
0 = \mu -\lambda \zeta - \widetilde{\psi} S_{-} \sigma^2 + \frac{ \lambda \zeta }{1+ \widetilde{\psi} \zeta S_{-}}.
\end{align*}
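To solve this equation explicitly, substitute $u:= 1+\widetilde{\psi}\zeta S_{-}$ (so that $\widetilde{\psi}S_{-}=(u-1)/\zeta$) and multiply the equation by $\zeta u$; this yields the quadratic equation
\begin{align*}
\sigma^2 u^2 - \zeta\Lambda_0\,u - \lambda\zeta^2 = 0,\quad\mbox{and hence}\quad u = \frac{\zeta \Lambda_0 \pm \vert\zeta\vert\sqrt{\Lambda_0^2+4\sigma^2\lambda}}{2\sigma^2}.
\end{align*}
Since the product of the two roots is $-\lambda\zeta^2/\sigma^2<0$, exactly one root is positive, and the constraint $u=1+\widetilde{\psi}\zeta S_{-}>0$ imposed by ${\cal L}(S, \mathbb F)$ (see (\ref{phispace})) selects the root with the plus sign; solving $\widetilde{\psi}=(u-1)/(\zeta S_{-})$ then gives exactly $\widetilde{\lambda}$ of (\ref{PhiTilde-LamdaTilde}).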
Thus, $\widetilde\lambda$ given in (\ref{PhiTilde-LamdaTilde}) is the unique solution to the above equation belonging to $ {\cal L}(S, \mathbb F) $, and hence $\widetilde\lambda$ is the num\'eraire portfolio rate for $(S,\mathbb{F})$. To prove that $\widetilde\lambda$ is the log-optimal portfolio rate, we need to check that (\ref{Integrability4PsiTilde}) holds. To this end, it is easy to see that in our setting, (\ref{Integrability4PsiTilde}) becomes
\begin{equation}
E \left [ \int_0^T (1-G_{t-})\left \{ (\widetilde{\lambda}_t \sigma_t S_{t-})^2 + \lambda {\cal K}_{log}(\widetilde{\lambda}_t S_{t-} \zeta_t) \right\} dt \right ] < +\infty .
\end{equation}
This latter condition always holds, since the processes $\widetilde{\lambda} S_{-}$, $\sigma$ and
\begin{equation*}
(1+ \widetilde{\lambda} S_{-} \zeta)^{-1} = \frac{-\zeta \Lambda_0 + \sqrt{(\zeta \Lambda_0)^2 +4 \sigma^2 \lambda \zeta^2}}{2 \lambda \zeta^2},
\end{equation*}
are bounded by virtue of \eqref{coefficientscond}. This proves that indeed $\widetilde{\lambda}$ is the log-optimal portfolio rate of $(S,\mathbb{F})$.\\
{\bf Part 2.} Here we prove that $ \widetilde{\varphi}I_{]\!] \tau, \infty [\![}$ is the num\'eraire portfolio rate for $(S-S^{\tau},\mathbb{G})$, or equivalently that $ \widetilde{\varphi}$ is the unique solution to (\ref{G1}). By plugging (\ref{Characteristic4S}) and (\ref{Charateristics4m}) in (\ref{G1}), and using the fact that $ {\cal L}(S, \mathbb F) $ is an open set of $\mathbb{R}$ point-wise in $(\omega,t)$ due to (\ref{phispace}) and $ {\cal L}(S^{(1)}, \mathbb F)=\left\{\varphi\ \mathbb{F}\mbox{-predictable}\ :\ \varphi{I}_{\{G_{-}<1\}}\in{\cal L}(S, \mathbb F)\right\}$, we deduce that $\widetilde{\varphi}$ is the unique solution to
\begin{align*}
0 = \left(\mu -\lambda \zeta - \varphi^{(m,1)} \sigma - \widetilde{\varphi} S_{-} \sigma^2 + \frac{( 1-\psi^{(m,1)}) \lambda \zeta }{1+ \widetilde{\varphi} \zeta S_{-}}\right)I_{\{G_{-}<1\}}.
\end{align*}
Thus, direct calculation shows that $\widetilde{\varphi}$ is the one given by (\ref{PhiTilde-LamdaTilde}), and this proves that $ \widetilde{\varphi}I_{]\!] \tau, \infty [\![}$ is the num\'eraire portfolio rate of $(S-S^{\tau},\mathbb{G})$; the second part is complete.\\
{\bf Part 3.} This part proves that the two assertions (a) and (b) are equivalent and hold under (\ref{sufficientcondjump}). To this end, we remark that due to (\ref{Phi(m)Psi(m)}), (\ref{m(a)}) and Definition \ref{hellinger}, we calculate
\begin{align*}
h^{(E)}(m^{(1)}, \mathbb F) & = \frac{1}{2} \int^{\cdot}_0(\varphi^{(m,1)}_s)^2 ds + \left ( \sum_{0<s\leq\cdot} (1+\psi^{(m,1)}_s\Delta{N}_s)\ln(1+\psi^{(m,1)}_s\Delta{N}_s)- \psi^{(m,1)}_s\Delta{N}_s \right )^{p, \mathbb F}\\
& = \frac{1}{2} \int^{\cdot}_0(\varphi^{(m,1)}_s)^2 ds + \left (\sum_{0<s\leq\cdot} (1+\psi^{(m,1)}_s)\ln(1+\psi^{(m,1)}_s) \Delta{N}_s- \psi^{(m,1)}_s\Delta{N}_s \right )^{p, \mathbb F}\\
& = \int_0^{\cdot} \left ( \frac{1}{2} (\varphi^{(m,1)}_s)^2 + \lambda (1+\psi^{(m,1)}_s)\ln(1+\psi^{(m,1)}_s) - \lambda \psi^{(m,1)}_s \right) ds.
\end{align*}
Here the second equality uses $\Delta N_s\in\{0,1\}$, and the third uses the fact that the $\mathbb{F}$-compensator of $N$ is $\lambda t$. Thus, the condition (\ref{sufficientcondjump}) is the version of the condition (\ref{existencecond2}) in the current jump-diffusion setting. Hence, by combining part 1 (assertion (a)) and Theorem \ref{sufficientcond}, we deduce that assertion (b) holds. The equivalence between assertions (a) and (b) is a direct consequence of Theorem \ref{generalpredictable} combined with part 2, and both (\ref{Characteristic4S}) and (\ref{Charateristics4m}), which imply
\begin{align*}
\widetilde{K}^{\mathbb G} = & - \widetilde{\varphi} \bigcdot {\cal T}^{(a)}(S^c) - \dfrac{\widetilde{\varphi} x}{1+ \widetilde{\varphi}x} I_{]\!] \tau, \infty [\![} \star (\mu - (1-f^{(m,1)}) \bigcdot \nu) \\
= & - \widetilde{\varphi} \sigma{S}_{-}\bigcdot {\cal T}^{(a)}(W) - \frac{ \widetilde{\varphi}\zeta S_{-}}{1+\widetilde{\varphi} \zeta S_{-}} \bigcdot ({I}_{]\!] \tau, \infty [\![}\bigcdot {N}-\lambda(1-\psi^{(m,1)})\bigcdot {dt}) \\
= & - \widetilde{\varphi} \sigma{S}_{-}\bigcdot {\cal T}^{(a)}(W) - \frac{(1- \psi^{(m,1)} ) \widetilde{\varphi}\zeta S_{-}}{1+\widetilde{\varphi} \zeta S_{-}} \bigcdot {\cal T}^{(a)}(N^{\mathbb F}).
\end{align*}
The last equality is a direct consequence of $(1-\psi^{(m,1)})d{\cal{T}}^{(a)}(N^{\mathbb{F}})={I}_{]\!] \tau, \infty [\![}d{N}-\lambda(1-\psi^{(m,1)}){I}_{]\!] \tau, \infty [\![}{dt}$. This ends the proof of the theorem.
\end{proof}
\subsection{Proof of Theorems \ref{generalpredictable} and \ref{riskfactors}}\label{Subsection4Proofs}
This subsection details the proof of Theorems \ref{generalpredictable} and \ref{riskfactors} and elaborates the intermediate technical lemmas that are vital for the proof of these theorems. The proof of Theorem \ref{generalpredictable} is based essentially on three intermediate lemmas that are interesting in themselves. The first lemma determines the triplet $(\mu^{\mathbb{G}},\nu^{\mathbb{G}},S^{c,\mathbb{G}})$, which is constituted by the random measure of the jumps of $S-S^{\tau}$, its $\mathbb{G}$-compensator random measure, and the continuous $\mathbb{G}$-local martingale part of $S-S^{\tau}$.
\begin{lemma}\label{PredictableCharateristoics4S(tau)} The following assertions hold.\\
{\rm{(a)}} The random measure of the jumps of $(S-S^{\tau},\mathbb{G})$ is given by
\begin{equation}\label{Grandommeasure}
\mu^{\mathbb G}(dt, dx):= I_{\{t> \tau\}} \mu(dt, dx),
\end{equation}
and hence its $\mathbb{G}$-compensator random measure, denoted by $\nu^{\mathbb{G}}$, is given by
\begin{equation}\label{Gmeasurecompensator}
\nu^{\mathbb G}(dt, dx):= I_{\{t>\tau\}} (1-f^{(m,1)}(x,t))\nu(dt, dx).
\end{equation}
{\rm{(b)}} The continuous $\mathbb{G}$-local martingale part of $(S-S^{\tau},\mathbb{G})$ is given by
\begin{equation}\label{Scontinuousaftertau}
S^{c, \mathbb G}={\cal{T}}^{(a)}(S^c)= I_{]\!] \tau, \infty [\![} \bigcdot S^{c} + c \beta^{(m,1)}I_{]\!] \tau, \infty [\![} \bigcdot A .
\end{equation}
\end{lemma}
The proof of the lemma is relegated to Appendix \ref{proof4lemmas}. Our second lemma connects $\mathbb{G}$-integrability to $\mathbb{F}$-integrability via the random measures.
\begin{lemma}\label{Existence4KFtilde} The following assertions hold.\\
{\rm{(a)}} The process $\widetilde{\Gamma}^{(1)}$, defined in (\ref{Gamma(1)andf(op)}), is positive, and both $\widetilde{\Gamma}^{(1)}$ and $1/\widetilde{\Gamma}^{(1)}$ are $\mathbb{F}$-locally bounded. \\
{\rm{(b)}} Let $f$ and $g$ be functionals that are $\widetilde{\cal{P}}(\mathbb{F})$-measurable and $\widetilde{\cal{O}}(\mathbb{F})$-measurable respectively. If $f\star(\mu^{\mathbb{G}}-\nu^{\mathbb{G}})$ and $g\star\mu^{\mathbb{G}}$ are well defined $\mathbb{G}$-local martingales, then $f(1-f^{(m,1)})\star(\mu_1-\nu_1)$ and $g(1-f^{(m,1)})\star\mu_1$ are well defined $\mathbb{F}$-local martingales.
\end{lemma}
The proof of the lemma mimics the proof of \cite[Lemma 5.12]{ChoulliYansori2}, and it will be omitted. Below, we elaborate our third lemma, which connects positive $\mathbb{G}$-supermartingales to $\mathbb{F}$-supermartingales.
\begin{lemma}\label{F-supermartingale2G-supermartingale}
Let $X$ be a ${\mathbb F}$-semimartingale such that
$$\Delta X> -1\quad\mbox{and}\quad I_{\{G_{-}=1\}}\bigcdot X=0.$$
Then ${\cal E}(X)$ is a nonnegative ${\mathbb F}$-supermartingale if and only if
\begin{equation}
Y:= \dfrac{{\cal E}(I_{]\!] \tau , \infty [\![} \bigcdot X)}{{\cal E} (-(1-G_{-})^{-1} I_{]\!] \tau , \infty [\![} \bigcdot m )} \quad \quad \quad \mbox{is a} \quad {\mathbb G}\mbox{-supermartingale}.
\end{equation}
\end{lemma}
The proof of this lemma is relegated to Appendix \ref{proof4lemmas}, while below we prove Theorem \ref{generalpredictable}.
\begin{proof}[Proof of Theorem \ref{generalpredictable}] The proof of this theorem requires the predictable characteristics of the model $(S-S^{\tau},\mathbb{G})$, and hence we divided this proof into three parts. The first part derives the predictable characteristics of $(S-S^{\tau},\mathbb{G})$, while the remaining parts prove the statements of the theorem.\\
{\bf Part 1.} This part discusses the predictable characteristics of $(S-S^{\tau},\mathbb{G})$. Thus, by combining Lemma \ref{PredictableCharateristoics4S(tau)} and (\ref{CanonicalDecomposition4S}), we derive
\begin{align}
S -S^{\tau} & = I_{]\!] \tau, \infty [\![} \bigcdot S= I_{]\!] \tau, \infty [\![} \bigcdot S^{c} + I_{]\!] \tau, \infty [\![} h \star (\mu - \nu) +I_{]\!] \tau, \infty [\![} b \bigcdot A + I_{]\!] \tau, \infty [\![} (x-h) \star \mu \nonumber\\
& = S^{c, \mathbb G} +h \star(\mu^{\mathbb G}-\nu^{\mathbb G})+ I_{]\!] \tau, \infty [\![} b \bigcdot A - c\beta^{(m,1)} I_{]\!] \tau, \infty [\![} \bigcdot A - \left ( \int h(x) f^{(m,1)}(x) F(dx) \right ) I_{]\!] \tau, \infty [\![} \bigcdot A \nonumber\\
& \hskip 1cm +(x-h)\star\mu^{\mathbb G} \nonumber\\
& = S^{c, \mathbb G} +h \star(\mu^{\mathbb G}-\nu^{\mathbb G}) + \left( b- c\beta^{(m,1)} -\int h f^{(m,1)} F(dx)\right ) I_{]\!]\tau,+\infty[\![} \bigcdot A +(x-h)\star\mu^{\mathbb G} \nonumber.
\end{align}
Hence, this $\mathbb{G}$-canonical decomposition of $(S-S^{\tau},\mathbb{G})$ allows us to obtain its predictable characteristics $\left(b^{\mathbb G},c^{\mathbb G},F^{\mathbb G}, A^{\mathbb G}\right)$ as follows
\begin{equation}\label{predictablechar}\begin{cases}
b^{\mathbb G}: = \Bigl(b- c\beta^{(m,1)}- \int h(x) f^{(m,1)}(x) F(dx)\Bigr) I_{]\!]\tau,+\infty[\![}, \quad \quad c^{\mathbb G}:= I_{]\!]\tau,+\infty[\![}c, \\
F^{\mathbb G}(dx):= I_{]\!]\tau,+\infty[\![}\left( 1- f^{(m,1)}(x)\right)F(dx), \quad \quad A^{\mathbb G}:= I_{]\!]\tau,+\infty[\![}\bigcdot {A}.\end{cases}
\end{equation}
{\bf Part 2.} By applying \cite[Theorem 2.1]{ChoulliYansori3} directly to $(S-S^{\tau}, {\mathbb{G}})$, we deduce that assertion (a) holds if and only if there exists $\widetilde{\varphi}^{\mathbb G} \in {\cal L}(S-S^{\tau}, \mathbb G)$ such that for any $\varphi \in {\cal L}_b(S-S^{\tau}, \mathbb G)$ the following hold:
\begin{equation}\label{Equation1}
(\varphi-\widetilde{\varphi}^{\mathbb G})^{tr}(b^{\mathbb G}- c^{\mathbb G}\widetilde{\varphi}^{\mathbb G}) + \int \left ( \dfrac{(\varphi-\widetilde{\varphi}^{\mathbb G})^{tr}x}{1+(\widetilde{\varphi}^{\mathbb G})^{tr}x} - (\varphi-\widetilde{\varphi}^{\mathbb G})^{tr} h(x) \right ) F^{\mathbb G}(dx) \leq 0,
\end{equation}
and
\begin{equation}\label{Equation2}
E \left [ \widetilde{V}_T^{\mathbb G} +\frac{1}{2} ((\widetilde{\varphi}^{\mathbb G})^{tr} c^{\mathbb G} \widetilde{\varphi}^{\mathbb G}\bigcdot A^{\mathbb G} )_T + ( {\mathcal K}_{log} ((\widetilde{\varphi}^{\mathbb G})^{tr} x) \star \nu^{\mathbb G} )_T \right ] < + \infty, \end{equation}
where
\begin{equation}\label{V(G)}
\widetilde{V}^{\mathbb G} := \left[ (\widetilde{\varphi}^{\mathbb G})^{tr} (b^{\mathbb G} - c^{\mathbb G} \widetilde{\varphi}^{\mathbb G}) + \int \left ( \dfrac{(\widetilde{\varphi}^{\mathbb G})^{tr}x}{1+(\widetilde{\varphi}^{\mathbb G})^{tr} x} -(\widetilde{\varphi}^{\mathbb G})^{tr} h(x) \right ) F^{\mathbb G}(dx) \right]\bigcdot A^{\mathbb G}.
\end{equation}
Furthermore, the following properties hold
\begin{align}
& \widetilde{\theta}^{\mathbb G} \left ( 1+ ( \widetilde{\theta}^{\mathbb G} \bigcdot (S-S^{\tau}))_{-}\right)^{-1} =\widetilde{\varphi}^{\mathbb G} \quad P\otimes A\mbox{-a.e.} \quad \mbox{on} \quad ]\!] \tau , \infty [\![, \label{ThetaG2Phitilde}\\
& {Z}^{\mathbb G} = {\cal E}({K}^{\mathbb G}){\cal E}(-I_{]\!] \tau, \infty [\![} \bigcdot \widetilde{V}^{\mathbb{G}} )={1\over{{\cal{E}}( \widetilde{\varphi}^{\mathbb G}\bigcdot (S-S^{\tau}))}},\label{Equation4.49}\\
&{K}^{\mathbb G}:= -\widetilde{\varphi}^{\mathbb G}\bigcdot {\cal T}^{(a)}(S^c) - \dfrac{\widetilde{\Gamma}^{\mathbb{G}}(\widetilde{\varphi}^{\mathbb G})^{tr}x}{1+ (\widetilde{\varphi}^{\mathbb G})^{tr}x} \star (\mu^{\mathbb{G}} - \nu^{\mathbb{G}} ), \label{KG2PhiTilde}\\
& \widetilde{\Gamma }^{\mathbb{G}}_t := \left ( 1 + \int (f^{(op,\mathbb{G})}(t,x)-1)\nu^{\mathbb{G}}(\{t\},dx) \right )^{-1}, \quad \quad f^{(op,\mathbb{G})}(t,x):=(1+ (\widetilde{\varphi}^{\mathbb G}_t)^{tr}x)^{-1}. \label{GammaG}
\end{align}
Then, in virtue of Lemma \ref{Theta(G)2Theta(F)}-(c), we obtain the existence of $\widetilde{\varphi}\in {\cal L}(S^{(1)}, \mathbb{F})$ such that
\begin{equation}\label{phi}
\widetilde{\varphi}^{\mathbb G}{I}_{]\!] \tau, \infty [\![} = \widetilde{\varphi} I_{]\!] \tau , \infty [\![}.
\end{equation}
Thus, by inserting this latter equality with (\ref{Gmeasurecompensator}) in (\ref{GammaG}), and using (\ref{Gamma(1)andf(op)}), we obtain
\begin{equation}\label{Gammag2Gamm(1)}
f^{(op,\mathbb{G})}= f^{(op)}\quad\mbox{and}\quad \widetilde{\Gamma }^{\mathbb{G}}= \widetilde{\Gamma }^{(1)}\quad\mbox{on}\quad ]\!] \tau, \infty [\![.
\end{equation}
Similarly, by inserting (\ref{phi}) and \eqref{predictablechar} in (\ref{V(G)}) and using (\ref{V(1)}) afterwards, we get
\begin{equation}\label{V(G)bis}
\widetilde{V}^{\mathbb G} =I_{]\!]\tau,+\infty[\![}\bigcdot \widetilde{V}^{(1)}.
\end{equation}
As a consequence, by combining this equality with (\ref{phi}), (\ref{Gammag2Gamm(1)}) and Lemma \ref{PredictableCharateristoics4S(tau)}-(a), we get
\begin{equation}\label{KG2KtildeG}
{K}^{\mathbb G}=\widetilde{K}^{\mathbb G}\quad\mbox{ and}\quad {Z}^{\mathbb G} = \widetilde{Z}^{\mathbb G}=1/{\cal{E}}(\widetilde{\varphi}\bigcdot (S-S^{\tau})). \end{equation}
Again, by plugging (\ref{phi}) and (\ref{predictablechar}) in (\ref{Equation1})-(\ref{Equation2}), we conclude that assertion (a) holds if and only if there exists $\widetilde{\varphi}\in {\cal L}(S^{(1)}, \mathbb{F})$ such that for any ${\varphi}\in {\cal L}_b(S^{(1)}, \mathbb{F})$, on $]\!]\tau,+\infty[\![$ $P\otimes A$-a.e.
\begin{equation}\label{Equation1bis}
(\varphi - \widetilde{\varphi})^{tr}( b+ c\beta^{(m)}- c \widetilde{\varphi}) + \int (\varphi- \widetilde{\varphi})^{tr}\left ( \dfrac{ (1-f^{(m,1)}(x))x}{1+ \widetilde{\varphi}^{tr}x} - h(x) \right )F(dx) \leq 0,
\end{equation}
and (\ref{G2}) holds. Thus the proof of (a) $\Longleftrightarrow$ (b) follows immediately from combining these facts with Lemma \ref{VG2VF}-(c). \\
{\bf Part 3.} This part focuses on proving (\ref{Equation4.12}), (\ref{Equation4.13}) and (\ref{Equation4.14}).\\
Remark that (\ref{Equation4.12}) is a direct consequence of (\ref{phi}) and (\ref{ThetaG2Phitilde}), while both (\ref{Equation4.14}) and the first equality in (\ref{Equation4.13}) follow from combining (\ref{KG2KtildeG}), (\ref{KG2PhiTilde}), (\ref{phi}), (\ref{Gammag2Gamm(1)}) and (\ref{Equation4.49}). Thus, the rest of this part focuses on proving the second equality in (\ref{Equation4.13}), or equivalently, in virtue of (\ref{V(G)bis}),
\begin{equation}\label{KGtilde2KFtilde} \widetilde{K}^{\mathbb{G}} = {\cal T}^{(a)}( \widetilde{K}^{\mathbb{F}})+ (1-G_{-})^{-1} I_{]\!] \tau , \infty [\![ }\bigcdot {\cal T}^{(a)} (m).\end{equation}
To this end we use the fact that two local martingales are equal if and only if their continuous local martingale parts coincide and their jump parts are also equal. On the one hand, we use the fact that
$\Delta {\cal T}^{(a)}(X) = \dfrac{1-G_{-}}{1-\widetilde{G}} I_{]\!] \tau , \infty [\![} \Delta X $, and derive
\begin{align*}
\Delta {\cal T}^{(a)}(I_{]\!] \tau , \infty [\![} (K^{\mathbb F} + (1-G_{-})^{-1} \bigcdot m )) & = \frac{1-G_{-}}{1-\widetilde{G}} I_{]\!] \tau, \infty [\![} \left (\Delta K^{\mathbb F} +1- \frac{1-\widetilde{G}}{1-G_{-}} \right )\\
& = \left ( \frac{1-G_{-}}{1-\widetilde{G}} \left(\Delta K^{\mathbb F} +1\right) -1 \right ) I_{]\!] \tau, \infty [\![}.
\end{align*}
On the other hand, thanks to (\ref{Equation4.14}), we calculate
\begin{align*}
\Delta {\widetilde{K}^{\mathbb G}} & = I_{]\!] \tau , \infty [\![} \left(- \frac{\widetilde{\Gamma}^{(1)} \widetilde{\varphi}^{tr} \Delta S}{1+\widetilde{\varphi}^{tr}\Delta S} + \int \frac{\widetilde{\Gamma}^{(1)} \widetilde{\varphi}^{tr} x}{1+\widetilde{\varphi}^{tr}x} F^{\mathbb G} (dx)dA_t^{\mathbb G}\right)\\
& = I_{]\!] \tau , \infty [\![} \left(- \frac{\widetilde{\Gamma}^{(1)} \widetilde{\varphi}^{tr} \Delta S}{1+\widetilde{\varphi}^{tr}\Delta S} + \widetilde{\Gamma}^{(1)} \int (1- \frac{1}{1+\widetilde{\varphi}^{tr}x}) \nu^{\mathbb G}(\{t\},dx)\right)\\
& = I_{]\!] \tau , \infty [\![} \left(- \frac{\widetilde{\Gamma}^{(1)} \widetilde{\varphi}^{tr}\Delta S}{1+\widetilde{\varphi}^{tr}\Delta S} + \widetilde{\Gamma}^{(1)} \int (1-f^{(op)})(1-f^{(m)}) \nu(\{t\},dx)\right) \\
& = I_{]\!] \tau , \infty [\![} \left(- \frac{\widetilde{\Gamma}^{(1)}\widetilde{\varphi}^{tr} \Delta S}{1+\widetilde{\varphi}^{tr}\Delta S} + \widetilde{\Gamma}^{(1)} \left (a- \widehat{f^{(op)}}- \widehat{f^{(m)}}+\widehat{f^{(op)}f^{(m)}}\right)\right)\\
\Delta \widetilde{K}^{\mathbb G} & = I_{]\!] \tau, \infty [\![} \left ( \frac{\widetilde{\Gamma}^{(1)}}{1+\widetilde{\varphi}^{tr}\Delta S} -1 \right )
\end{align*}
Then, by comparing the jump parts and using $^{o, \mathbb F} \left( I_{]\!] \tau, \infty [\![} \right) = (1- \widetilde{G}) I_{]\!] 0, +\infty [\![}$, we get
\begin{align}
\Delta \widetilde{K}^{\mathbb F} & = \dfrac{(1-\widetilde{G}) \widetilde{\Gamma}^{(1)}}{(1-G_{-})(1+\widetilde{\varphi}^{tr}\Delta S)} -1
\end{align}
Thus, by combining $\Delta \widetilde{K}^{\mathbb F} = \Delta \widetilde{K}^{\mathbb F}I_{\{\Delta S \neq 0 \}}+\Delta \widetilde{K}^{\mathbb F}I_{\{\Delta S = 0 \}}$ with the fact that on $\{\Delta S \neq 0 \}$ one has $1- \widetilde{G} = (1-G_{-}) \left( 1- f^{(m,1)}(\Delta S) - g^{(m,1)}(\Delta S)\right )$, we obtain on $\{G_{-}<1\}$,
\begin{align}
\Delta \widetilde{K}^{\mathbb F} = & \left(\dfrac{( 1- \widetilde{G}) \widetilde{\Gamma}^{(1)}}{(1-G_{-})(1+\widetilde{\varphi}^{tr}\Delta S)} -1\right)I_{\{\Delta{S}\not=0\}} + \left (\dfrac{(1-\widetilde{G}) \widetilde{\Gamma}^{(1)}}{1-G_{-}} -1\right)I_{\{\Delta S = 0 \}}\nonumber\\
= & - \dfrac{( 1- \widetilde{G}) \widetilde{\Gamma}^{(1)}\widetilde{\varphi}^{tr}\Delta S}{(1-G_{-})(1+\widetilde{\varphi}^{tr}\Delta S)} + \dfrac{(1-\widetilde{G}) \widetilde{\Gamma}^{(1)}}{(1-G_{-})} -1\nonumber\\
= & \dfrac{ \widetilde{\Gamma}^{(1)} (f^{(m,1)}(\Delta S)-1)\widetilde{\varphi}^{tr} \Delta S} {1+\widetilde{\varphi}^{tr}\Delta S} + \dfrac{\widetilde{\Gamma}^{(1)}g^{(m,1)}(\Delta S)\widetilde{\varphi}^{tr}\Delta S}{1+\widetilde{\varphi}^{tr}\Delta S} -\widetilde{\Gamma}^{(1)} \Delta{m}^{(1)}+ \widetilde{\Gamma}^{(1)} -1 \label{LastEquality}
\end{align}
Put
$$W^{(1)} (t,x):= \widetilde{\Gamma}^{(1)}_t (f^{(m,1)}(t,x)-1)\left( \frac{1}{1+\widetilde{\varphi}^{tr}_t x} -1\right) = \frac{\widetilde{\Gamma}^{(1)}_t \widetilde{\varphi}^{tr}_tx(f^{(m,1)}(t,x)-1)}{1+\widetilde{\varphi}^{tr}_t x},$$ and thanks to Lemma \ref{Existence4KFtilde}, we deduce that $W^{(1)}\star(\mu_1-\nu_1)$ and $ \dfrac{\widetilde{\Gamma}^{(1)}g^{(m,1)}\widetilde{\varphi}^{tr}x}{1+\widetilde{\varphi}^{tr}x}\star\mu_1$ are well defined $\mathbb{F}$-local martingales, and furthermore
$$\widehat{W^{(1)}} = 1 - \widetilde{\Gamma}^{(1)}\quad\mbox{and}\quad \Delta\left({W}^{(1)}\star(\mu_1-\nu_1)\right)=\dfrac{ \widetilde{\Gamma}^{(1)} (f^{(m,1)}(\Delta S)-1)\widetilde{\varphi}^{tr} \Delta S} {1+\widetilde{\varphi}^{tr}\Delta S} + \widetilde{\Gamma}^{(1)} -1 .$$
Thus, by combining these remarks with (\ref{LastEquality}) and Lemma \ref{Existence4KFtilde}, we deduce that
$$
\Delta \widetilde{K}^{\mathbb F} = \Delta\left({W}^{(1)}\star(\mu_1-\nu_1)\right)+\Delta\left( \dfrac{\widetilde{\Gamma}^{(1)}g^{(m,1)}\widetilde{\varphi}^{tr}x}{1+\widetilde{\varphi}^{tr}x}\star\mu_1\right)-\widetilde{\Gamma}^{(1)} \Delta{m}^{(1)}.$$
Therefore, we combine this latter equality with the fact that the continuous $\mathbb{F}$-local martingale part of $ \widetilde{K}^{\mathbb F}$ and the continuous $\mathbb{G}$-local martingale part of $\widetilde{K}^{\mathbb{G}}$ are given by
$$( \widetilde{K}^{\mathbb F})^c:=-\widetilde\varphi\bigcdot S^c-(1-G_{-})^{-1}I_{\{G_{-}<1\}}\bigcdot m^c\quad\mbox{and}\quad ( \widetilde{K}^{\mathbb{G}})^c:=-\widetilde\varphi\bigcdot {\cal{T}}^{(a)}(S^c).$$ These yield that (\ref{KGtilde2KFtilde}) holds with $\widetilde{K}^{\mathbb F}$ given by (\ref{KF}). This proves the second equality in (\ref{Equation4.13}), and the proof of the theorem will be complete as soon as we show that $ {\cal E }( \widetilde{K}^{\mathbb F}) {\cal E }(-\widetilde{V}^{(1)}) \in {\cal D}({S}^{(1)}, \mathbb F)$, or equivalently that $ {\cal E }( \widetilde{K}^{\mathbb F}) {\cal E }(- \widetilde{V}^{(1)}){\cal E}(\psi \bigcdot {S}^{(1)})$ is an $\mathbb F$-supermartingale for any $\psi\in {\cal{L}}({S}^{(1)}, \mathbb F)\cap{L}({S}^{(1)}, \mathbb F)$. To this end, we consider $\psi\in {\cal{L}}({S}^{(1)}, \mathbb F)\cap{L}({S}^{(1)}, \mathbb F)$ and remark that
$${\cal E }( \widetilde{K}^{\mathbb F}) {\cal E }(- \widetilde{V}^{(1)}){\cal E}(\psi \bigcdot {S}^{(1)})={\cal{E}}(X),$$ where $X$ is an $\mathbb{F}$-semimartingale, and due to (\ref{Equation4.13}) we have
$${{{\cal E}(I_{]\!] \tau , \infty [\![} \bigcdot X)}\over{{\cal E} (-(1-G_{-})^{-1} I_{]\!] \tau , \infty [\![} \bigcdot m)}}=\widetilde{Z}^{\mathbb{G}}{\cal E}(I_{]\!] \tau , \infty [\![}\psi \bigcdot {S})$$ is a positive $\mathbb{G}$-supermartingale. In virtue of Lemma \ref{F-supermartingale2G-supermartingale}, this latter fact is equivalent to ${\cal{E}}(X)$ being a positive $\mathbb{F}$-supermartingale. This proves that $\widetilde{Z}^{\mathbb{F}}\in{\cal D}({S}^{(1)}, \mathbb F)$, and the proof of the theorem is complete.
\end{proof}
The rest of this subsection proves Theorem \ref{riskfactors}, which relies heavily on the following lemma.
\begin{lemma}\label{lemma6.5}
Suppose that the assumptions of Theorem \ref{riskfactors} hold, and consider its notation. Then the following assertions hold.\\
{\rm {(a)}} The $\mathbb{F}$-compensator of $(1-\widetilde{G}) \bigcdot \widetilde{\cal H}(\mathbb F)$ is given by
\begin{align}
\left ( (1-\widetilde{G}) \bigcdot \widetilde{\cal H}(\mathbb F) \right )^{p, \mathbb F} & = (1-G_{-}) \left (\widetilde{\lambda}^{tr} b - \frac{1}{2} \widetilde{\lambda}^{tr} c \widetilde{\lambda} \right) \bigcdot A \nonumber \\
& + (1-G_{-}) \left ( \frac{f^{(m)} \widetilde{\lambda}^{tr}x}{(1-\Delta \widetilde{V}^{\mathbb F})(1+\widetilde{\lambda}^{tr}x)} + ( 1-f^{(m)}) \ln (1+\widetilde{\lambda}^{tr}x) - \widetilde{\lambda}^{tr}h \right) \star \nu.
\end{align}
{\rm{(b)}} We have that
\begin{equation}
\langle \widetilde{L}^{\mathbb F}, m \rangle^{\mathbb F} = - \widetilde{\lambda}^{tr} c\beta^{(m)} \bigcdot A - \frac{f^{(m)} \widetilde{\lambda}^{tr}x}{(1-\Delta \widetilde{V}^{\mathbb F})(1+\widetilde{\lambda}^{tr}x)}\star\nu.
\end{equation}
{\rm{(c)}} We always have that $E \left [H_T ^{(0)}(\widetilde{K}^{\mathbb G}, \mathbb G) \right ] = E \left [ \left (H^{(0)}(\widetilde{K}^{\mathbb G}, \mathbb G) \right )^{p, \mathbb F}_T \right]$ and
\begin{align}\label{Equation4.63}
\left (H ^{(0)}(\widetilde{K}^{\mathbb G}, \mathbb G) \right )^{p, \mathbb F} = & \frac{(1-G_{-})}{2} \widetilde{\varphi}^{tr} c \widetilde{\varphi} \bigcdot A + \sum_{0 < s\leq \cdot} (1-G_{s-}) \left ( \widetilde{\Gamma}_s^{(1)} - 1 - \ln (\widetilde{\Gamma}_s^{(1)}) \right ) \nonumber \\
&+ (1-G_{-}) \left ( \frac{-\widetilde{\Gamma}^{(1)} \widetilde{\varphi}^{tr} x}{1+ \widetilde{\varphi}^{tr} x} + \ln (1+ \widetilde{\varphi}^{tr} x) \right )(1-f^{(m,1)}) \star \nu, \end{align}
and
\begin{equation}\label{Equation4.64}
1- \Delta\widetilde{V}^{(1)} = (\widetilde{\Gamma}^{(1)})^{-1}, \quad \mbox{and} \quad (1-\widetilde{\Gamma}^{(1)}) \frac{(\widetilde{\varphi}^{tr}x)(1-f^{(m,1)})}{1+\widetilde{\varphi}^{tr}x} \star \nu_1 = - \sum_{0<s\leq\cdot} \frac{(\widetilde{\Gamma}^{(1)}_s-1)^2}{\widetilde{\Gamma}^{(1)}_s}.
\end{equation}
\end{lemma}
For the sake of simple exposition, we relegate the proof of this lemma to Appendix \ref{proof4lemmas}.
\begin{proof}[Proof of Theorem \ref{riskfactors}] The proof of the theorem is delivered in two parts, where we prove (\ref{Equation4.17}) and (\ref{Equation4.18}). Both equalities rely on writing the quantity $u_T(S, \mathbb F)$ in two ways, as follows.
\begin{align}
u_T(S, \mathbb F) &= E \left [ \ln ({\cal E}_T(\widetilde{\lambda} \bigcdot S ))\right] = E \left[ -\ln ({\cal E}_T(-\widetilde{V}^{\mathbb F})) \right ] + E \left[ -\ln ({\cal E}_T(\widetilde{L}^{\mathbb F})) \right ] \nonumber \\
& = E \left[\widetilde{V}_T^{\mathbb F} + \sum_{0 < s \leq T} (-\Delta \widetilde{V}_s^{\mathbb F} - \ln (1-\Delta \widetilde{V}_s^{\mathbb F})) + H^{(0)}_T (\widetilde{L}^{\mathbb F}, \mathbb F) \right ]\label{Equation4.65}\\%=E \left [ \widetilde{\cal H}_T (\mathbb F) \right] =
& = E \left [ (\widetilde{G} \bigcdot \widetilde{\cal H}(\mathbb F))_T \right] + E \left [ ((1-\widetilde{G}) \bigcdot \widetilde{\cal H} (\mathbb F))_T \right].\label{Equation4.66}
\end{align}
{\bf Part 1.} This part proves the equality (\ref{Equation4.18}). Thus, we use the duality $\widetilde{Z}^{\mathbb{G}}=1/{\cal{E}}(\widetilde{\varphi} \bigcdot (S-S^{\tau}) )$ and (\ref{Equation4.13}), and derive
\begin{align}
&u_T(S-S^{\tau}, \mathbb G) \nonumber \\
& = E \left [ \ln ({\cal E}_T (\widetilde{\varphi} \bigcdot (S-S^{\tau}) ))\right] \nonumber \\
& = E \left[ -\ln ({\cal E}_T(- I_{]\!] \tau , \infty [\![} \bigcdot \widetilde{V}^{(1)})) \right ] + E \left[ -\ln ({\cal E}_T(I_{]\!] \tau , \infty [\![} \bigcdot \widetilde{K}^{\mathbb G})) \right ] \nonumber \\
& = E \left[I_{]\!] \tau , \infty [\![} \bigcdot \widetilde{V}_T^{(1)} + \sum_{0 < s \leq T} I_{]\!] \tau , \infty [\![} (-\Delta \widetilde{V}_s^{(1)} - \ln (1-\Delta \widetilde{V}_s^{(1)})) + H^{(0)}_T (\widetilde{K}^{\mathbb G}, \mathbb G) \right ] \label{H(0)4KtildeG} \\
& = E \left[(1-G_{-}) \bigcdot \widetilde{V}_T^{(1)} + (1-G_{-}) \bigcdot \sum_{0 < s \leq T} \left (-\Delta \widetilde{V}_s^{(1)} - \ln (1-\Delta \widetilde{V}_s^{(1)}) \right) +\frac{1-G_{-}}{2}\widetilde{\varphi}^{tr} c \widetilde{\varphi} \bigcdot A_T \right] \nonumber \\
& + E\left[ \sum_{0 < s \leq T} (1-G_{s-}) \left (\widetilde{\Gamma}_s^{(1)} -1 -\ln (\widetilde{\Gamma}_s^{(1)}) \right ) + (1-G_{-})\left ( \ln (1+\widetilde{\varphi}^{tr} x)- \frac{\widetilde{\Gamma}^{(1)} \widetilde{\varphi}^{tr}x}{1+\widetilde{\varphi}^{tr}x}\right )(1-f^{(m,1)})\star \nu_T \right ] \nonumber \\
& = E \left [ (1-G_{-}) \bigcdot \widetilde{V}_T^{(1)} +\frac{1-G_{-}}{2}\widetilde{\varphi}^{tr} c \widetilde{\varphi} \bigcdot A_T + (1-G_{-})\left ( \ln (1+\widetilde{\varphi}^{tr} x)- \frac{\widetilde{\varphi}^{tr}x}{1+\widetilde{\varphi}^{tr}x}\right )(1-f^{(m,1)})\star \nu_T \right] \nonumber \\
& = E \left [(1-G_{-}) \bigcdot \left\{\widetilde{\varphi}^{tr} (b-c(\beta^{(m,1)} + \frac{\widetilde{\varphi}}{2})) \bigcdot A_T +\left((1-f^{(m,1)})\ln(1+\widetilde{\varphi}^{tr}x) -\widetilde{\varphi}^{tr} h\right) \star \nu \right\}_T \right].\label{Equation4.67}
\end{align}
Thus, by combining (\ref{Equation4.66}), (\ref{Equation4.67}), Lemma \ref{lemma6.5}-(a) and (\ref{Equation4.19}), we obtain
\begin{align*}
&u_T(S-S^{\tau}, \mathbb G)-u_T(S, \mathbb F)\\
&=-E \left [ ((1-\widetilde{G}) \bigcdot \widetilde{\cal H} (\mathbb F))_T +\int_0^T{\cal{P}}^{(N,1)}_s dA^{(1)}_s- (1-G_{-})\widetilde{\lambda}^{tr}c\beta^{(m,1)}\bigcdot A_T- \frac{(1-G_{-})f^{(m,1)} \widetilde{\lambda}^{tr}x}{(1-\Delta \widetilde{V}^{\mathbb F})(1+\widetilde{\lambda}^{tr}x)} \star\nu_T\right].
\end{align*}
Thus, the equality (\ref{Equation4.18}) follows immediately from the above equality combined with Lemma \ref{lemma6.5}-(b) and the facts that $(1-G_{-})\beta^{(m,1)} =\beta^{(m)}I_{\{G_{-}<1\}}$ and $(1-G_{-})f^{(m,1)} =f^{(m)}I_{\{G_{-}<1\}}$.\\
{\bf Part 2.} This part proves the equality (\ref{Equation4.17}). To this end, we remark that (\ref{Equation4.13}) implies that ${\cal{E}}( \widetilde{K}^{\mathbb{G}})={\cal{E}}( I_{]\!] \tau , \infty [\![}\bigcdot \widetilde{K}^{\mathbb{F}})/{\cal{E}}( I_{]\!] \tau , \infty [\![}\bigcdot {m}^{(1)})$. Hence, by taking the logarithm on both sides of this equality and using \cite[Proposition B.2-(a) ]{ChoulliYansori2}, we deduce that
\begin{equation*}\label{KG2KtildeF}
H^{(0)} (\widetilde{K}^{\mathbb G}, \mathbb G)=\widetilde{K}^{\mathbb G}- I_{]\!] \tau , \infty [\![}\bigcdot \widetilde{K}^{\mathbb F}+ I_{]\!] \tau , \infty [\![}\bigcdot {m}^{(1)}+ I_{]\!] \tau , \infty [\![}\bigcdot {H}^{(0)} (\widetilde{K}^{\mathbb F}, \mathbb F)-I_{]\!] \tau , \infty [\![}\bigcdot {H}^{(0)} (m^{(1)}, \mathbb F).
\end{equation*}
Thus, by combining this equality with (\ref{H(0)4KtildeG}), (\ref{Equation4.21}), (\ref{Glocalmartingaleaftertau}) (see Theorem \ref{OptionalDecompoTheorem}-(b)) and (\ref{m(1)2Hellinger}) (see Lemma \ref{deflator4hellinger}-(a)), we derive
\begin{align*}
&u_T(S-S^{\tau}, \mathbb G) \\
& = E \left [((1-\widetilde{G}) \bigcdot \widetilde{\cal H} (\mathbb G))_T - I_{]\!] \tau , \infty [\![} \bigcdot \widetilde{K}_T^{\mathbb F} - \frac{- I_{]\!] \tau , \infty [\![}}{1-G_{-}} \bigcdot m_T - (I_{]\!] \tau , \infty [\![} \bigcdot {H}^{(0)} ({m}^{(1)}, \mathbb{F}))_T \right] \\
& = E \left [((1-\widetilde{G}) \bigcdot \widetilde{\cal H} (\mathbb G))_T + \frac{I_{]\!] \tau , \infty [\![}}{1-G_{-}} \bigcdot \langle \widetilde{K}^{\mathbb F},m \rangle_T^{\mathbb F}+ \frac{- I_{]\!] \tau , \infty [\![}}{(1-G_{-})^2} \bigcdot \langle m \rangle^{\mathbb F}_T - H^{(0)}_T (\frac{-I_{]\!] \tau , \infty [\![}}{1-G_{-}} \bigcdot m, \mathbb F) \right ] \\
& = E \left [((1-\widetilde{G}) \bigcdot \widetilde{\cal H} (\mathbb G))_T + \langle \widetilde{K}^{\mathbb F}, I_{\{G_{-} <1 \}} \bigcdot m \rangle_T^{\mathbb F} \right ] + E \left [ (1-G_{-}) \bigcdot h^{(E)}_T (m^{(1)}, \mathbb F) \right ].
\end{align*}
Therefore, thanks to this latter equality and (\ref{Equation4.66}), we obtain
\begin{align*}
&u_T(S-S^{\tau}, \mathbb G)-u_T(S, \mathbb{F})\\
&=-E \left [ ((1-\widetilde{G}) \bigcdot \widetilde{\cal H} (\mathbb F))_T -((1-\widetilde{G}) \bigcdot \widetilde{\cal H} (\mathbb G))_T + \langle \widetilde{L}^{\mathbb F}-\widetilde{K}^{\mathbb F}, I_{\{G_{-} <1 \}} \bigcdot m \rangle_T^{\mathbb F} \right ] \\
&-E \left [ (\widetilde{G} \bigcdot \widetilde{\cal H}(\mathbb F))_T \right] + E \left [ \langle \widetilde{L}^{\mathbb F}, I_{\{G_{-} <1 \}} \bigcdot m \rangle_T^{\mathbb F} \right ]+ E \left [ (1-G_{-}) \bigcdot h^{(E)}_T (m^{(1)}, \mathbb F) \right ].
\end{align*}
This proves (\ref{Equation4.17}) and the proof of the theorem is complete.
\end{proof}
\section{Introduction}
Learning to control one's body is a crucial skill for any embodied agent. A common way of framing the problem of learning to control an agent is Reinforcement Learning (RL). RL poses the problem in terms of actions that an agent can perform, observed states of the world and some reward function that pays out a treat or punishes the agent depending on its performance. The aim of an optimal RL controller is to maximize the collected rewards. Reinforcement Learning has been studied widely and applied to different domains of learning and control.
Suppose we want a robot to learn to control its movements by direct continuous voltage control. Many of the recent prominent RL results \cite{mnih2015human,silver2016mastering} are restricted to \emph{discrete state and discrete action spaces} such as ATARI. Some newer approaches (e.g. \cite{DDPG,wang2016sample,schulman2017proximal}) extend into continuous state and action spaces. However, almost all recent methods rely on huge datasets to perform well (\emph{data efficiency problem}). Such datasets are normally not available for real robots and difficult to obtain. Another problem with current methods is that they work well only when learning to reach a single goal; for instance, an algorithm learns to reach one particular state (e.g. a position in the state space) with an end-effector. However, robots need to learn to achieve many different goals. Approaches to obtaining more data and to applying RL to multiple-goal tasks include physical simulation of the robot \cite{TobinFRSZA17} and \emph{sample augmentation} of existing data \cite{HER}. Especially sample augmentation has seen recent advances, but the state-of-the-art methods can only produce a limited number of augmented samples.
In this paper we try to address these issues by 1) presenting a novel RL approach to learning behavior in continuous state/action spaces with multiple goals called Continuous Value Iteration (CVI) and by 2) presenting a novel data augmentation method called Imaginary Experience Replay (IER). We show that the combination of CVI+IER enables robots to solve tasks efficiently with fewer training examples. We evaluate this in simulation and on a physical voltage controlled robot arm (Figure \ref{fig:robot_arm}).
\begin{figure}[ht]
\centering
\includegraphics[width=0.5\columnwidth]{figs/robot.png}
\caption{The voltage controlled robot arm consisting of a chain of two Dynamixel XH-430 motors (the coordinate system shows the Cartesian task space of the robot).}
\label{fig:robot_arm}
\end{figure}
\section{Continuous state and action MDP with goals}
We frame the problem of learning behavior as a Reinforcement Learning problem. In particular, we assume a standard \emph{continuous action, continuous state Markov Decision Process (MDP)} that describes the world and the (possibly stochastic) effect of actions. We extend the standard RL MDP formulation through a reward function conditioned on goals. We assume an environment $E=(\mathcal{S},\mathcal{A},T,\mathcal{G},R)$ with
\begin{itemize}
\item $\mathcal{S}$ - states $\mathcal{S} \subseteq \mathbb{R}^n$
\item $\mathcal{A}$ - actions $\mathcal{A}\subseteq \mathbb{R}^m$
\item $T$ - transition dynamics $T : \mathcal{S}\times \mathcal{A}\times \mathcal{S} \rightarrow \mathbb{R}$ with $T(s,a,s') := P(s_{t+1}=s' | s_{t}=s,a_t=a)$ because of the Markov property.
\item $\mathcal{G}$ - goals $\mathcal{G} \subseteq \mathbb{R}^k$
\item Reward function $R_{g} : \mathcal{S}\times\mathcal{A}\times\mathcal{S}\rightarrow \mathbb{R}$, written $R_{g}(s_t,a_t,s_{t+1})$, with $s_t,s_{t+1} \in \mathcal{S}$, $a_t \in \mathcal{A}$ and $g \in \mathcal{G}$
\item Policy $\pi : \mathcal{S} \times\mathcal{G} \rightarrow \mathcal{A}$ for choosing actions
\item Discount factor $\gamma$
\end{itemize}
The robot interacts with the environment by choosing actions and observing states over time. Each trajectory takes the form $\tau=[...,(s_t,a_t,r_t,s_{t+1},g),..]$. The goal for an agent is to choose actions from $\mathcal{A}$ so as to maximize the cumulative discounted reward $R=\sum_{t=0}^{t_{\operatorname{max}}}\gamma^t r_t$. MDPs can be solved by learning a value function $V: \mathcal{S}\times \mathcal{G} \rightarrow \mathbb{R}$ that maps state-goal pairs to a utility value describing the value of the state for achieving the maximum reward for a given goal $g$. $V$ is typically described by the Bellman equation \cite{Bellman:1957}
\begin{eqnarray}
\begin{split}
V(s,g) = \operatorname{max}_{a \in A}\int_{s'} T(s,a,s') (R_g(s, a, s') + \\ \gamma V(s',g))ds'
\end{split}
\label{bellmann}
\end{eqnarray}
which iteratively sets the value of $s$ given $g$ to the maximum over the instant reward and the expected discounted value of the future state when choosing $a$ optimally and acting optimally thereafter. Another function similar to $V$ which is often used in value function approaches is $Q: \mathcal{S}\times\mathcal{G}\times\mathcal{A}\rightarrow\mathbb{R}$ that measures the value of action $a$ given state $s$ and goal $g$.
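To make Equation (\ref{bellmann}) concrete, the following toy sketch (our illustration; the grid size, reward and discount are hypothetical choices) runs classical value iteration on a discretized one-dimensional world with a deterministic transition function. CVI, introduced below, replaces the explicit enumeration of states and actions used here with regression.
\begin{verbatim}
# Toy value iteration illustrating the Bellman backup: states 0..4 on a
# line, actions {-1, +1}, deterministic transitions, sparse goal reward.
GOAL, GAMMA, N = 4, 0.9, 5

def step(s, a):                         # deterministic transition T
    return min(max(s + a, 0), N - 1)

def reward(s, a, s2):                   # sparse goal reward R_g
    return 1.0 if s2 == GOAL else 0.0

V = [0.0] * N
for _ in range(50):                     # iterate the Bellman backup
    V = [max(reward(s, a, step(s, a)) + GAMMA * V[step(s, a)]
             for a in (-1, +1))
         for s in range(N)]
print(V)  # values decay geometrically with the distance to the goal
\end{verbatim}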
\section{Related Work}
Much recent work in RL deals with \emph{discrete action and discrete state spaces} using Deep Neural Networks, such as DQN \cite{DQN}, Double DQN \cite{hasselt2016deep}, and the dueling architecture \cite{wang2016dueling}. In discrete state and action spaces, the integral and the $\operatorname{max}_{a \in A}$ of Equation \ref{bellmann} can be calculated easily. However, these methods are not directly applicable to robotics because they require discretization of state and action spaces. Other algorithms address environments with \emph{continuous state spaces} but \emph{discrete action spaces}. Examples of such algorithms are Fitted Value Iteration (FVI) \cite{boyan1995generalization} and kernel-based methods \cite{Ormoneit2002KernelBasedRL}, which use a model to estimate the distribution of future states and thus the integral in Equation \ref{bellmann}. However, such algorithms do not tackle continuous action spaces, where the term $\operatorname{max}_{a \in A}$ is not applicable since there is an unlimited number of possible actions. Similar in problem setting to our work are RL algorithms dealing with \emph{continuous state and continuous action spaces} such as CACLA \cite{CACLA}, NFQCA \cite{hafner2007neural}, DPG \cite{silver2014DPG} and DDPG \cite{DDPG}. DDPG and its variants are actor-critic algorithms that estimate a policy (actor) for choosing actions using the reward signal estimated by a (neural) value function estimator (critic) \cite{konda2000actor}.
In our approach, we solve the MDP without having to iterate over discrete states or actions, nor do we explicitly learn a transition model $T$ as in model-based RL. Our approach (CVI) is a value function approximation approach and therefore related to DQN and similar algorithms; however, we deal with continuous state/action spaces using simple generalization in the state and action space through regression. Continuous state/action spaces are currently dominated by actor-critic models; we differ from actor-critic methods by not using policy gradients for an actor network. CVI does rely on estimating both the $V$ and $Q$ functions and on faster updates of $V$ versus $Q$; in that sense, there is some similarity with target-network DQN approaches. Against the current trend, CVI (although in principle agnostic to the choice of regressor) is described and implemented in this paper using a simple non-parametric estimator.
Another line of similarity/difference with recent methods is the problem of \emph{multi-goal learning}. Within RL there is some work on multiple goal learning, such as \cite{precup2001off} or UVFA \cite{UVFA}. The latter proposes to use a single function approximator to estimate the value function over factored goal and state spaces. Results show that it is possible to generalize over multiple goals if there is structure in the goal space \cite{Foster2002}. The main difference between our work and UVFA and similar approaches \cite{sutton2011horde} is that they work with discrete state and action spaces. They also do not investigate the impact of sample augmentation. For example, \cite{Yang2017} extends DDPG with continuous action and state spaces to multiple goals but requires them to be discrete and limited.
Outside of RL, controllers for robots without prior knowledge are estimated using \emph{self-exploration}. In motor babbling \cite{demiris2005motor}, random actions are performed and through the resulting observations, a forward model is trained. In goal babbling \cite{rolf2012goal}, goals are set in the task space and during exploration, an inverse model is trained. The placement of the goals can be controlled by intrinsic motivation \cite{baranes2013active} for more efficient exploration of the task space. Similarities between our work and goal babbling are the random placement of goals and the random exploration at the beginning of an experiment. However, CVI does not train an explicit forward or inverse model.
\section{Continuous Value Iteration (CVI)}
We propose \emph{Continuous Value Iteration (CVI)} for learning near optimal controllers in continuous state and continuous action space MDPs. CVI's core is the estimation of the value function $V$ and its subsequent use to approximate $Q$. Past experiences of the robot in the form of trajectories $\tau$ -- i.e. states, actions, rewards, and goals are stored in a \emph{replay buffer} $B$ that consists of tuples $(s_t,a_t,r_t,s_{t+1},g)$. We perform Value-Iteration to propagate the values through the state and goal space. Since both spaces are continuous, we have to achieve generalization using a function approximator. A regressor is used to learn estimates of $V$ and when it has converged, it is used to estimate $Q$.
In this paper, we use k-nearest-neighbor (KNN) regression \cite{altman1992introduction} with a Euclidean distance function; however, in principle, other regressors could be used. KNN is a non-parametric method that generalizes well locally. The advantage of KNN over other regressors in this task is its simplicity and the fact that it estimates values conservatively in regions without training data.
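As an illustration, such a value-function estimator can be sketched with scikit-learn's \texttt{KNeighborsRegressor}. This is a minimal sketch under our assumption that state and goal vectors are simply concatenated into one regressor input; the class and parameter names are ours, not taken from the original implementation.
\begin{verbatim}
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

class KNNValueFunction:
    """V(s, g) estimator backed by k-nearest-neighbor regression."""

    def __init__(self, k=5):
        self.knn = KNeighborsRegressor(n_neighbors=k)
        self.fitted = False

    def fit(self, states, goals, targets):
        # Concatenate state and goal into one feature vector per sample.
        X = np.hstack([np.atleast_2d(states), np.atleast_2d(goals)])
        self.knn.fit(X, np.asarray(targets))
        self.fitted = True

    def predict(self, states, goals):
        if not self.fitted:          # conservative default before training
            return np.zeros(len(np.atleast_2d(states)))
        X = np.hstack([np.atleast_2d(states), np.atleast_2d(goals)])
        return self.knn.predict(X)
\end{verbatim}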
The algorithm (see also Algorithm \ref{algo:CVI}) has four steps: (a) action selection and data collection, (b) sample augmentation, (c) $V$ learning, and (d) $Q$ learning.
\begin{algorithm}
\caption{Continuous Value Iteration}\label{algo:CVI}
\begin{algorithmic}[1]
\State \textbf{Given:} Function approximators $V(s, g)$, $Q(s, g, a)$
\State \textbf{Given:} Parameters $\nu,\beta,\gamma$ (and $k$ for KNN)
\State Initialize $V_{0}(s,g) = 0$ and $Q_{0}(s,g,a)=0$; $\forall s\in\mathcal{S},\forall g\in\mathcal{G}, \forall a\in\mathcal{A}$
\State Initialize $B=\emptyset$
\Loop
\State // (a) Action selection and data collection
\For{Episode e = 1,E}
\State Choose $g\in\mathcal{G}$
\For{Timestep t = 1, N}
\State Observe $s_t$, choose and execute $a_t$ according to $\pi$, receive reward $r_t$
\State Save $(s_{t-1}, a_{t-1}, r_t, s_t, g)$ in $B$
\State Stop when $r_t=1$
\EndFor
\EndFor
\State \textit{// (b) Sample augmentation}
\State HER, IER
\State \textit{// (c) $V$ Iteration}
\For{Iteration i = 1, I}
\ForAll{$(s, a, r, s', g)$ in $B$}
\State $V_{i+1} \gets [s, g, \operatorname{max}(r, \gamma V_{i}(s',g) , \beta V_{i}(s,g))]$
\EndFor
\State Stop loop when $V$ converges.
\EndFor
\State \textit{// (d) $Q$ learning}
\ForAll{$(s, a, r, s', g)$ in $B$}
\State $Q \gets [s, g, a, V_{I}(s', g)]$
\EndFor
\EndLoop
\end{algorithmic}
\end{algorithm}
\paragraph{Action selection and data collection} In each iteration the robot chooses and performs an action and collects data. This leads to observed trajectories $\tau$ and observed tuples $\tau_{t}$. We store observed tuples in the replay buffer $B$. For training, actions are chosen with an $\epsilon$-greedy action selection policy, where a random action is chosen with probability $\epsilon$ and the best action is chosen otherwise. This helps the algorithm to exploit existing knowledge during exploration. When using and validating the policy, a greedy policy following the most valuable action is used.
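The pseudocode leaves implicit how the best action is found in a continuous action space. One simple possibility, sketched below purely as an assumption of ours (the candidate-sampling scheme, the \texttt{q.predict} interface and all parameter values are hypothetical), is to sample a set of candidate actions and take the one with the highest estimated $Q$-value:
\begin{verbatim}
import numpy as np

def epsilon_greedy(q, state, goal, action_low, action_high,
                   epsilon=0.1, n_candidates=100):
    """Hypothetical action selection: a random action with probability
    epsilon, otherwise the best of n_candidates sampled actions under Q."""
    dim = len(action_low)
    if np.random.rand() < epsilon:
        return np.random.uniform(action_low, action_high, size=dim)
    candidates = np.random.uniform(action_low, action_high,
                                   size=(n_candidates, dim))
    scores = [q.predict(state, goal, a) for a in candidates]
    return candidates[int(np.argmax(scores))]
\end{verbatim}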
\paragraph{Sample augmentation} Data in the replay buffer $B$ can be augmented using various techniques. We employ three strategies: 1) None: the buffer stays as is, 2) Hindsight experience replay (HER), 3) Imaginary Experience Replay (IER). HER and IER add $\tau_t$ tuples to the replay buffer $B$ using different strategies. More technical detail will follow in Sections \ref{sec:HER} and \ref{sec:IER}.
\begin{figure}
\centering
\includegraphics[width=0.2\textwidth]{figs/cvi_explanation}
\caption{Two trajectories with three transitions in the point environment and a red goal region with a reward of 1}
\label{fig:show_cvi}
\end{figure}
\paragraph{$V$ iteration} We compute new estimates of $V$ using the replay buffer $B$ as input. The algorithm iterates $I$ times over the entire replay buffer $B$ and computes training data for the KNN regressor.
\begin{eqnarray}
\label{eq:v_update}
V^{i+1}\leftarrow [s,g,\operatorname{max}(r,\gamma V^i(s', g),\beta V^i(s, g))]
\end{eqnarray}
Training data for $s,g$ pairs is computed by taking the maximum value of the following three sources, which we explain with reference to Figure \ref{fig:show_cvi}; a minimal code sketch of this update follows the list.
\begin{itemize}
\item Reward $r$: The reward is an immediate source of value if a goal state was reached. For example, this term is chosen for point $A$ (Figure \ref{fig:show_cvi}), because the action directly leads to a reward.
\item Discounted predicted value $\gamma V^i(s',g)$: This value is the same as the Bellmann backup except here it is predicted by the regressor. This term spreads value along successful observed trajectories. For example, this term is chosen for point B (Figure \ref{fig:show_cvi}), when the future state ($B'=A$) has a value. The hyper-parameter $\gamma$ is the typical discount factor in MDPs. We use $\gamma=0.99$.
\item The estimated value of the current state $\beta V^i(s, g)$: This allows the algorithm to spread values through the state and goal space. The term implicitly replaces the search for the best action in Equation \ref{bellmann} ($\operatorname{max}_{a \in A}$). For example, this term is chosen for point $C$ (Figure \ref{fig:show_cvi}), if the neighboring state $B$ has a high value. The cooling factor $\beta$ counteracts overestimations from previous $V$. With an $\epsilon$-greedy agent, areas with an overestimated value are explored more in future episodes, so that errors are quickly fixed.
\end{itemize}
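The following minimal sketch implements the update of Equation (\ref{eq:v_update}) on top of a fit/predict regressor such as the KNN estimator sketched earlier; the fixed iteration count and the value of $\beta$ are hypothetical choices of ours, and the convergence test of Algorithm \ref{algo:CVI} is omitted.
\begin{verbatim}
def v_iteration(buffer, V, gamma=0.99, beta=0.95, n_iter=50):
    """CVI step (c): refit V on targets built from the replay buffer.
    buffer holds (s, a, r, s2, g) tuples; V is a fit/predict regressor."""
    for _ in range(n_iter):
        states, goals, targets = [], [], []
        for (s, a, r, s2, g) in buffer:
            target = max(r,                                # reward
                         gamma * V.predict([s2], [g])[0],  # Bellman backup
                         beta * V.predict([s], [g])[0])    # value spreading
            states.append(s)
            goals.append(g)
            targets.append(target)
        V.fit(states, goals, targets)
    return V
\end{verbatim}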
\section{Sample Augmentation}
Robots can only gain limited experience from the environment, constrained by real-time movement and the sampling rates of sensors and actuators. The training data can therefore be augmented with additional samples that can be inferred without extra knowledge sources such as physics simulations. In this paper, we propose a new approach (IER) and compare it to an existing approach (HER).
\subsection{Hindsight Experience Replay (HER)}
\label{sec:HER}
HER \cite{HER} is a recent example of sample augmentation where experiences (samples) are added to the buffer for additional goals. HER assumes that states can be goals, and therefore states later in a trajectory/episode can be assigned as goals to samples earlier in the trajectory, thereby creating new samples. The new data is added to the replay buffer $B$. The newly created samples are limited to having goals which are previously seen states. The maximum number of samples HER can create for a trajectory of length $n$ is $\frac{n (n+1)}{2}$. Importantly, HER assumes that goal and state space are equal, i.e. that goals are states in the state space. Therefore, HER cannot be applied to domains where the goal and state spaces are separate.
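In code, this relabeling can be sketched as follows (a sketch of ours; \texttt{reward\_fn} stands for the recomputable reward $R_g$, and its signature is our assumption):
\begin{verbatim}
def her_augment(trajectory, reward_fn):
    """Relabel each transition with every later state of the same
    trajectory as goal; yields up to n(n+1)/2 samples for n transitions."""
    new_samples = []
    for i, (s, a, r, s2, g) in enumerate(trajectory):
        for (_, _, _, future_s2, _) in trajectory[i:]:
            g_new = future_s2                    # hindsight goal
            new_samples.append(
                (s, a, reward_fn(s, a, s2, g_new), s2, g_new))
    return new_samples
\end{verbatim}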
\subsection{Imaginary Experience Replay (IER)}
\label{sec:IER}
We address the main limitations of HER by proposing Imaginary Experience Replay (IER). IER is able to 1) produce an unlimited number of samples from the finite experiences in $B$ and 2) deal with separate goal and state space domains. IER does this by extending experienced transitions with \textit{imaginary} goals. The algorithm (see also Algorithm \ref{algo:IER}) creates any number $S$ of additional samples from $B$. For each additional sample, a new random goal $\hat{g} \in G$ is sampled and the reward is recomputed according to $R_{\hat{g}}(s_{t},a_{t},s_{t+1})$. IER applied to CVI helps in spreading discounted rewards through the $V$ (and $Q$) landscape and therefore aids generalization across different goals. Samples created by IER can serve as the glue between experienced trajectories with different goals.
\begin{algorithm}
\caption{Imaginary Experience Replay (IER)}\label{algo:IER}
\begin{algorithmic}[1]
\State \textbf{Given:} \\
Replay buffer $B$ with transitions ($s, a, r, s', g$) \\
Goal space $G$ and a way to sample from $G$\\
Reward function $R_g(s,a,s')$
\For{Sample s = 1, S}
\State Sample imaginary goal $\hat{g}$ from G (using any distribution, we use uniform)
\State Sample transition $(s, a, r, s', g)$ from replay buffer $B$
\State Store $(s, a, R_{\hat{g}}(s,a,s'), s',\hat{g})$ in replay buffer $B$
\EndFor
\end{algorithmic}
\end{algorithm}
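A direct Python rendering of Algorithm \ref{algo:IER} could read as follows (a sketch; \texttt{sample\_goal} and \texttt{reward\_fn} stand for the goal sampler of $\mathcal{G}$ and the reward function of the environment):
\begin{verbatim}
import random

def ier_augment(buffer, sample_goal, reward_fn, n_samples):
    """Imaginary Experience Replay: attach freshly sampled goals to
    previously observed transitions (cf. Algorithm 2)."""
    new_samples = []
    for _ in range(n_samples):
        g_new = sample_goal()                   # imaginary goal from G
        s, a, r, s2, g = random.choice(buffer)  # any observed transition
        new_samples.append(
            (s, a, reward_fn(s, a, s2, g_new), s2, g_new))
    return new_samples
\end{verbatim}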
\section{Experiment I: Simulated point environment}
We validate CVI and IER using two types of environments. The first environment is a simulation of a moving two-dimensional point on a plane. We use the simulation to explore the impact of design choices and compare our method with existing state-of-the-art methods.
\subsection{Experimental Setup}
\label{experimental_setup}
\paragraph{State space $\mathcal{S}\subseteq \mathbb{R}^2$} with $s=[x,y]$ agent position.
\paragraph{Actions $\mathcal{A}\subseteq \mathbb{R}^2$} with $a=[dx, dy]$ agent velocity.
\paragraph{Transition function $T(s,a,s') := P(s_{t+1}=s' | s_{t}=s,a_t=a)$}
\paragraph{Goals $\mathcal{G}=\mathcal{S}$} with a fixed margin $w$ that determines if a goal has been reached. We study two types of experiments:
\begin{itemize}
\item \emph{one goal} - goal is the same for the duration of a particular experiment (training and evaluation).
\item \emph{random goal} - new goals are chosen randomly from $\mathcal{G}$ when a goal has been reached.
\end{itemize}
\paragraph{Reward function $R_g$} The reward function is binary: a state $s$ is considered a goal state iff $|s-g| < w$ and the reward for goal states is one. For all other states reward is zero.
\begin{eqnarray}
R_g(s,a,s')=\begin{cases}
1, & \text{iff } |s'-g| < w\\
0, & \text{otherwise}
\end{cases}
\end{eqnarray}
\paragraph{Training} In each training episode, a maximum of 200 transitions (timesteps) can be performed. The agent's position is set to a random location, and the agent gets 30 timesteps to reach the goal; its position is reset randomly whenever it reaches the goal or the 30 timesteps are up. The training episode finishes after 200 timesteps, so the agent can reach a variable number of goals within one training episode.
\paragraph{Evaluation}
\label{evaluation} We evaluate the controller after each training episode. The maximum length of an evaluation episode is 2000 timesteps. The agent is randomly set (and in \emph{random goal} tasks a goal is randomly chosen). For each trial the agent then has a maximum of 30 timesteps to reach the goal. The agent's position is reset when reaching the goal or after 30 timesteps. The performance of the controller is quantified by comparing the agent's trajectory with the analytically calculated optimal trajectory. Suppose the agent took $m$ steps to reach the goal with $\operatorname{opt}$ being the shortest possible number of steps to reach the goal, then the \emph{optimal control score} is
\begin{eqnarray}
\operatorname{score}(m)=
\begin{cases}
1-\frac{m-\operatorname{opt}}{30-\operatorname{opt}}, & \text{iff agent reaches goal}\\
0, & \text{otherwise}
\end{cases}
\end{eqnarray}
with $\operatorname{score}(m)=1$ if the agent takes as many timesteps to the goal as the optimal trajectory would and $0$ if the goal is never reached. The average score of all trials is the final score.
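Both definitions are straightforward to implement; the following sketch mirrors them, using the Euclidean norm for $|\cdot|$:
\begin{verbatim}
import numpy as np

def reward(s2, g, w):
    """Binary goal reward: 1 iff the next state is within margin w of g."""
    return 1.0 if np.linalg.norm(np.asarray(s2) - np.asarray(g)) < w else 0.0

def score(m, opt, reached, max_steps=30):
    """Optimal-control score: 1 for an optimal path, 0 if the goal
    was not reached within max_steps."""
    if not reached:
        return 0.0
    return 1.0 - (m - opt) / (max_steps - opt)
\end{verbatim}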
\paragraph{Benchmarking}
We evaluated various systems to understand the performance of CVI and the effect of sample augmentation. We benchmark our own sample augmentation technique (IER) against the state of the art (HER). We execute the same experiment 10 times for each of the following systems.
\begin{itemize}
\item\emph{CVI:} vanilla CVI without augmentation
\item\emph{CVI+HER:} CVI with HER sample augmentation
\item\emph{CVI+IER:} CVI with IER sample augmentation. The number of samples added is equal to the number of samples HER can produce for the given replay buffer $B$.
\item\emph{CVI+IER 3X:} CVI with IER sample augmentation. The number of samples added is three times the number of samples HER can produce.
\item\emph{CVI+IER 10X:} CVI with IER sample augmentation. The number of samples added is ten times the number of samples HER can produce.
\end{itemize}
\subsection{Results I: CVI learns to solve continuous state and action space tasks}
We first evaluate CVI's ability to solve \emph{one goal} tasks. Figure \ref{fig:point_env_no_obstacles_cvi+augmentation}a shows that CVI (even without any sample augmentation strategies) is able to quickly solve \emph{one goal} tasks in the point environment. We reliably see convergence after 10 training iterations (2000 timesteps). Figure \ref{fig:point_env_no_obstacles_cvi+augmentation}a also compares the performance of different sample augmentation strategies. It is clear that for the \emph{one goal} tasks in a point environment sample augmentation is not necessary to achieve good performance.
\begin{figure}
\centering
\includegraphics[width=0.46\textwidth]{figs/point_env_no_obstacles_cvi+augmentation_no_au}
\caption{Performance of CVI with various sample augmentation strategies in \emph{one goal} (a) and \emph{random goal} (b) tasks in the point environment. The y-axis shows how close the agent is to optimal control (see Section \ref{experimental_setup}). For each configuration, 10 independent runs were performed and the averages with their bands of $\pm \sigma$ are shown. The KNNs use k=5 neighbors.}
\label{fig:point_env_no_obstacles_cvi+augmentation}
\end{figure}
Figure \ref{fig:predicted_real_value} shows the learned reward value $V$ for states in the state space (\emph{one goal}). We measure this by sampling a collection of states $s$ and plotting their $V_i(s, g)$ (with fixed $g$ and $i\in\{1,12,100\}$). We can see that after a few iterations ($12$) (Figure \ref{fig:predicted_real_value}b) the value function approximates the true underlying discounted reward value landscape (optimal $V^*$) shown in Figure \ref{fig:predicted_real_value}d. This explains why the agent is able to collect rewards quickly. Notice that the task is basically solved after 10-12 iterations. After that, the $V$ estimates become even more accurate; however, this is not strictly necessary for good task performance. All it takes is for the landscape of $V(s,g)$ to have similar local derivatives as $V^*(s)$.
\begin{figure}[ht]
\centering
\subfloat[Predicted values $it=1$]{{\includegraphics[width=0.2\textwidth]{figs/values_0.jpg} }}%
\qquad
\subfloat[Predicted values $it=12$]{{\includegraphics[width=0.2\textwidth]{figs/values_11.jpg} }}%
\qquad
\subfloat[Predicted value $it=100$]{{\includegraphics[width=0.2\textwidth]{figs/values_99.jpg} }}%
\qquad
\subfloat[Actual Value of states]{{\includegraphics[width=0.2\textwidth]{figs/exp_2_real_values_08.jpg} }}%
\caption{Value function in the point environment with fixed goal at $g=(1,1)$ ($d_{max}=0.2$ and $w=0.2$). (a - c) prediction over time with CVI ($\gamma = 0.85$ and $\beta = 0.99$); (d) analytically calculated.}
\label{fig:predicted_real_value}
\end{figure}
We did similar experiments in an environment with an obstacle (a virtual wall). Results are omitted here due to space limitations, but CVI solved that environment equally quickly.
\subsection{Results II: CVI + IER learn to solve tasks with different goals}
We evaluated CVI's ability to solve \emph{random goal} tasks. Figure \ref{fig:point_env_no_obstacles_cvi+augmentation}b shows that CVI is able to solve \emph{random goal} tasks in the point environment. However, here sample augmentation strategies significantly improve the performance of the system. CVI alone learns to solve the task, but adding sample augmentation strategies aids early convergence. We see convergence starting from 10 training iterations (2000 samples) in the best case. This is comparable to the \emph{one goal} environment with almost no loss in learning speed.
Figure \ref{fig:point_env_no_obstacles_cvi+augmentation}b also compares the performance of different sample augmentation strategies. We can see that IER outperforms HER, even with the same amount of additional samples. The main reason for this is that IER allows for more generalization in the goal space through many more samples with varying goals. Importantly, adding more goals does not significantly slow learning when using IER: convergence is somewhat slower, but compared to the increased task difficulty, the effect of allowing arbitrarily many goals is almost negligible.
\subsection{Results III: CVI vs DDPG}
We compared CVI with DDPG, the current state-of-the-art algorithm for continuous action/state spaces. DDPG uses a replay buffer to directly optimize a policy. We used the DDPG implementation of OpenAI\footnote{https://github.com/openai/baselines/tree/master/baselines/ddpg} on the point environment and optimized the DDPG hyper-parameters with a full grid search. We show the results of the best configuration.
Figure \ref{fig:cvi_ddpg} shows that DDPG does not solve the environment, whereas CVI quickly learns a near-optimal controller. Further experiments showed that the main reason for DDPG's failure is the sparse reward structure of the task: in contrast to CVI, DDPG does not seem to be able to handle very sparse rewards. Note that these experiments were done in the \emph{one goal} point environment; since DDPG was unable to solve this task, we did not extend the evaluation to random goals. We also omitted sample augmentation, since HER, IER and other goal sample augmentation techniques have no effect in this setting.
\begin{figure}
\centering
\includegraphics[width=0.46\textwidth]{figs/cvi_ddpg.pdf}
\caption{Comparison of CVI and DDPG in the point environment ($d_{max}=0.05$, $w=0.1$). The KNNs use $k=5$. The x-axis shows algorithm iterations, where one iteration comprises 200 state transitions in the environment; the y-axis shows the average return per test episode with $\pm \sigma$, where 1 means that the goal is reached in all episodes.}
\label{fig:cvi_ddpg}
\end{figure}
\section{Experiment II: Robot arm}
We use a robot attached to a table to show that CVI can learn continuous controllers in the real world. The robot consists of a kinematic chain of Dynamixel motors (see Figure \ref{fig:robot_arm}, page \pageref{fig:robot_arm}). We use this experimental setup to validate CVI in two ways. First, we are interested in direct voltage motor control in the real world. Second, we evaluate whether the system learns to reach coordinates in an absolute Cartesian space (task space) without being given observable access to that space. In other words, the robot's state representation does not include the task space; the robot has to learn to control the task space through sparse reward feedback only.
\subsection{Experimental Setup}
\paragraph{State space $\mathcal{S}$} The state space of the robot consists of the joint angles and joint velocities of the two actuators, $s=[r_1,r_2,\dot{r_1},\dot{r_2}]$. The state is directly measured by hardware sensors.
\paragraph{Actions $\mathcal{A}$} The motors are directly controlled by setting voltages $a=[v_1,v_2]$. The Dynamixel uses PWM to control the average supply voltage to the motor based on $a$, and thereby controls the torque applied to the motor. The motors are backdrivable and the robot is not statically stable: it simply falls down when the voltage is zero. Similarly, the robot requires some control signal to overcome joint friction.
\paragraph{Goal space $\mathcal{G}$} Goals are positions in the Cartesian space $(x,y)$. The Cartesian space is centered at the first joint and fixed along the axes of the table (see Figure \ref{fig:robot_arm}). That is, the goal space is relative to the table and does not rotate with the robot. Notice that state and goal space are completely separate spaces: goals are not part of the state space and are only indirectly accessible to the robot via sparse reward signals.
\paragraph{Reward $R$} The reward function is the same as in the point environment. We measure the position of the end effector in goal space using internal readouts of the motor rotations and prior knowledge of the forward model, computing the absolute position with respect to the Cartesian coordinate system. This is used exclusively in the reward function; the robot itself does not have access to the forward model.
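As a concrete illustration of this computation, consider the following sketch; the link lengths are hypothetical placeholders, not the real robot's geometry.
\begin{verbatim}
# Sketch of the reward computation (l1, l2 are hypothetical
# link lengths standing in for the real robot's geometry).
import numpy as np

def end_effector(r1, r2, l1=0.1, l2=0.1):
    # Forward model: joint angles -> Cartesian (x, y); used only
    # inside the reward, never exposed to the robot's state.
    x = l1 * np.cos(r1) + l2 * np.cos(r1 + r2)
    y = l1 * np.sin(r1) + l2 * np.sin(r1 + r2)
    return np.array([x, y])

def reward(state, goal, d_max=0.05):
    r1, r2 = state[0], state[1]  # state = [r1, r2, dr1, dr2]
    dist = np.linalg.norm(end_effector(r1, r2) - goal)
    return 1.0 if dist <= d_max else 0.0
\end{verbatim}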
\paragraph{Training} We train the system for 60 minutes ($\sim$20k experienced state transitions at 5 fps). The robot starts in a random position. A goal is chosen, and the robot has 100 timesteps (20 s) to reach it. If it reaches the goal or time is up, the goal is reset randomly. Notice that, in contrast to the point environment, the robot position is never reset. This is effectively a \emph{random goal} environment.
\paragraph{Evaluation} We test the system for 10 minutes (3000 timesteps at 5 fps) with a setup similar to training: we choose a random goal, and the robot has a maximum of 100 timesteps (20 seconds) to reach it; the goal is reset if the robot reaches it or time is up. We measure how many rewards the robot is able to collect in 10 minutes (cumulative reward).
Each experiment is repeated 10 times to obtain statistical data.
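In pseudocode, the test protocol amounts to the following loop (env.step, sample_goal and policy are placeholder names, not the actual interfaces):
\begin{verbatim}
def evaluate(env, policy, sample_goal, total_steps=3000, horizon=100):
    cumulative_reward, g, t_goal = 0.0, sample_goal(), 0
    s = env.state()  # note: the robot position is never reset
    for _ in range(total_steps):
        s, r = env.step(policy(s, g))
        cumulative_reward += r
        t_goal += 1
        if r > 0 or t_goal >= horizon:  # goal reached or time is up
            g, t_goal = sample_goal(), 0
    return cumulative_reward
\end{verbatim}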
\begin{figure}[ht]
\centering
\includegraphics[width=0.5\textwidth]{figs/success_arm_x3.pdf}
\caption{Average cumulative reward over 3000 timesteps (10 minutes) of test data on the robot arm for various algorithms.}
\label{fig:success_arm}
\end{figure}
\subsection{Results and Discussion}
Figure \ref{fig:success_arm} shows the performance of CVI with and without sample augmentation on the real robot, with voltage-control action space and Cartesian goal space. CVI significantly outperforms the random control policy. The results also show that sample augmentation significantly helps CVI: HER gives some improvement, though it may not be statistically significant, while IER and especially IER+HER make the system perform very well. CVI+HER+IER reaches an average of 80 goals in 10 minutes, which translates into reaching a new goal every 7.5 seconds on average.
In our view, the performance of the system is remarkable. CVI (HER+IER) learns to perform actions directly in the real world. CVI does not use stable position control but directly manipulates the voltages of the motors, and has to deal with physical effects such as gravity and friction without receiving rich sensory information (only joint positions and velocities). Moreover, the robot learns to control a task space to which it has no direct access: it can only obtain information about this space via the reward function, and the reward is binary, experienced only when a goal is reached. This shows that CVI can efficiently solve continuous action and state space RL problems with sparse rewards.
\section{Conclusion}
We have presented two novel methods for reinforcement learning in continuous state, goal and action spaces. First, Continuous Value Iteration (CVI) enables the efficient estimation of the utility functions $V$ and $Q$; these estimates generalize well in continuous state, action, and goal spaces. Second, Imaginary Experience Replay (IER) significantly enhances the performance of CVI by adding potentially unlimited amounts of samples for better generalization. We have shown in two environments that the proposed methods perform well. Importantly, CVI+IER enables a voltage-controlled real robot to quickly learn to move in the real world without explicitly learning forward or inverse models.
\section{Introduction}
In this paper we consider the risk-sensitive reward maximization problem on
${\mathds{R}^{d}}$ for diffusions controlled through the drift.
The main objective is to derive a variational formulation for the risk-sensitive
reward in the spirit of \cite{VAB}, which does so for discrete time problems on a
compact state space, and analyze the associated Hamilton--Jacobi--Bellman (HJB)
equation.
Since the seminal work of Donsker and Varadhan \cite{DoVa-72,DoVa-76b},
this problem has acquired prominence.
The variational formula derived here can be viewed as a controlled version of
the variational formulas for principal eigenvalues of diffusion operators arising
in large deviations.
For reversible diffusions, this formula can be viewed as an abstract Courant--Fischer
formula \cite{DoVa-72}.
For general diffusions, the correct counterpart in linear algebra is the
Collatz--Wielandt formula for the principal eigenvalue of non-negative
matrices \cite[Chapter~8]{Meyer}.
For its connection with the large deviations theory for finite Markov chains and
an equivalent variational description, see \cite{Dembo}.
There has been considerable interest to generalize this theory to a natural class of
nonlinear self-maps on positive cones of finite or infinite dimensional spaces.
The first task is to establish the existence and where possible, uniqueness of the
principal eigenvalue and eigenvector (the latter modulo a scalar multiple as usual),
that is, a nonlinear variant of the Perron--Frobenius theorem in the finite dimensional
case and its generalization, the Krein--Rutman theorem, in Banach spaces.
This theory is carried out in, e.g., \cite{Lemmens,Ogiwara}.
The next problem is to derive an abstract Collatz--Wielandt formula for the principal
eigenvalue \cite{Gaubert}.
In bounded domains, a Collatz--Wielandt formula for the Dirichlet principal eigenvalue
of a convex nonlinear operator is obtained in \cite{Armstrong-09}.
Our first objective coincides with this, albeit for
Feynman--Kac operators arising in risk-sensitive control that we introduce later.
For risk-sensitive \emph{reward} processes, that is, the problem of maximizing the
asymptotic growth rate for the risk-sensitive reward in discrete time problems,
one can go a step further and give an explicit characterization of the principal
eigenvalue as the solution of a concave maximization problem \cite{VAB}.
The objective of this article is to carry out this program for controlled diffusions.
At this juncture, it is worthwhile to underscore the difference between reward
maximization and cost minimization problems with risk-sensitive criteria.
Unlike the more classical criteria such as ergodic or discounted, they cannot be
converted from one to the other by a sign flip. The cost minimization criterion,
after a logarithmic transformation applied to its HJB equation, leads to the
Isaacs equation for a zero-sum stochastic differential game \cite{FM}.
An identical procedure applied to the reward maximization problem would lead
to a \emph{team} problem wherein the two agents seek to maximize the same payoff
\emph{non-cooperatively}.
The latter in particular implies that their decisions at any time are conditionally
independent given the state (more generally, the past history).
Our approach leads to a concave maximization problem, an immense improvement
with potential implications for possible numerical schemes.
This does not seem possible for the cost minimization problem.
Thus the complexity of the latter is much higher.
Recently, a risk-sensitive maximization problem was also studied in \cite{BS-20} under a blanket geometric stability condition.
In the present paper we do not impose any blanket stability on the controlled processes.
We first establish these results for reflected diffusions in a bounded domain,
for which the nonlinear Krein--Rutman theorem of \cite{Ogiwara} paves the way.
This is not so if the state space is all of ${\mathds{R}^{d}}$.
Extension to the whole space turns out to be quite involved due to the lack
of compactness.
Even the well-posedness of the underlying nonlinear eigenvalue problem is pretty tricky.
Hence we proceed via the infinite volume limit of the finite volume problems.
This leads to an abstract Collatz--Wielandt formula and an
abstract Donsker--Varadhan formula.
More specifically, in \cref{T3.1} we show that the generalized eigenvalue of
the semilinear operator is simple, and identify some useful properties of its
eigenvector.
We proceed to prove equality between the risk-sensitive
value and the generalized principal eigenvalue in \cref{T3.2},
which also establishes a verification of optimality criterion.
The general result for the variational formula is in \cref{P4.1}, followed by
more specialized results in \cref{T4.1,T4.2}.
In the process of deriving these results,
we present some techniques that may have wider applicability.
Most prominent of these is perhaps the gradient estimate in \cref{L4.1} for operators
with measurable coefficients.
Lastly, in \cref{S5} we revisit the risk-sensitive minimization problem,
and with the aid of \cref{L4.1} we improve the main result
in \cite{AB-18} by extending it to unbounded drifts and running costs,
under suitable growth conditions (see \cref{A5.1}).
\subsection{A brief summary of the main results}\label{S1.1}
We summarize here the results concerning the variational formula
on the whole space.
We consider a controlled diffusion in ${\mathds{R}^{d}}$ of the form
\begin{equation*}
\mathrm{d} X_t \,=\, b(X_t,\xi_t)\,\mathrm{d} t + \upsigma (X_t)\,\mathrm{d} W_t
\end{equation*}
defined in a complete
probability space $(\Omega,\sF,\Prob)$.
The process $W$ is a $d$-dimensional standard Wiener process independent
of the initial condition $X_{0}$, and
the control process $\{\xi_t\}_{t\ge0}$ lives in a compact metrizable space ${\mathscr{K}}$.
We impose a standard set of assumptions on the coefficients
which guarantee existence and uniqueness of strong solutions under
all admissible controls.
Namely, local Lipschitz continuity in $x$
and at most affine growth of $b$ and $\upsigma$,
and local non-degeneracy of $a\coloneqq\upsigma\upsigma^{\mathsf{T}}$ (see \cref{A3.1}\,(i)).
But we do not impose any ergodicity assumptions on the controlled diffusion.
The process $\{X_t\}_{t\ge0}$ could be transient.
We let $c\colon{\mathds{R}^{d}}\times{\mathscr{K}}\to\mathds{R}$ be a continuous running reward function,
which is assumed bounded from above,
and define the \emph{optimal risk-sensitive value} $J_*$ by
\begin{equation*}
J_* \,\coloneqq\, \sup_{\{\xi_t\}_{t\ge0}}\;\liminf_{T\to\infty}\, \frac{1}{T}\,
\log \Exp \Bigl[\mathrm{e}^{\int^T_0 c(X_t,\xi_t)\,\mathrm{d} t} \Bigr]\,,
\end{equation*}
where the supremum is over all admissible controls, and $\Exp$ denotes
the expectation operator.
This problem is translated to an ergodic control problem for the operator
$\sA\colon \Cc^{2}({\mathds{R}^{d}}) \to \Cc({\mathds{R}^{d}}\times{\mathscr{K}}\times{\mathds{R}^{d}})$, defined by
\begin{equation}\label{EsA}
\sA\phi(x,\xi,y)\,\coloneqq\,\frac{1}{2}\trace\left(a(x)\nabla^{2}\phi(x)\right)
+ \bigl\langle b(x,\xi)+ a(x)y, \nabla \phi(x)\bigr\rangle\,,
\end{equation}
where $\nabla^{2}$ denotes the Hessian,
and $a(x)=\upsigma(x)\upsigma^{\mathsf{T}}(x)$,
that seeks to maximize the average value of the functional
\begin{equation}\label{EsR}
\sR(x,\xi,y) \,\coloneqq\, c(x,\xi) - \frac{1}{2}\abs{\upsigma^{\mathsf{T}}(x) y}^2\,,\quad
(x,\xi,y)\in {\mathds{R}^{d}}\times{\mathscr{K}}\times{\mathds{R}^{d}}\,.
\end{equation}
We first show that the generalized principal eigenvalue $\lambda_*$
(see \cref{E-princ2})
of the maximal operator
\begin{equation}\label{EcG}
\cG f(x) \,\coloneqq\, \frac{1}{2}\trace\left(a(x)\nabla^{2}f(x)\right)
+ \max_{\xi\in{\mathscr{K}}}\, \bigl[\bigl\langle b(x,\xi),
\nabla f(x)\bigr\rangle + c(x,\xi) f(x)\bigr]
\end{equation}
is simple.
An important hypothesis for this is that $c-\lambda_*$ is negative and
bounded from above away from zero on the complement of some compact set
(see \cref{A3.1}\,(iii)).
This is always satisfied if $-c$ is an inf-compact function
(i.e., the sublevel sets $\{-c \le \kappa\}$ are compact, or empty, in ${\mathds{R}^{d}}\times{\mathscr{K}}$
for each $\kappa\in\mathds{R}$), or if $c$ is a positive function vanishing at infinity
and the process $\{X_t\}_{t\ge0}$ is recurrent under some
stationary Markov control.
Let the positive function
$\Phi_{\mspace{-2mu}*}\in\Cc^2({\mathds{R}^{d}})$, normalized as $\Phi_{\mspace{-2mu}*}(0)=1$ to render it unique,
denote the principal eigenvector, that is, $\cG\Phi_{\mspace{-2mu}*}=\lambda_*\Phi_{\mspace{-2mu}*}$,
and define ${\varphi^{}_{\mspace{-2mu}*}}=\log\Phi_{\mspace{-2mu}*}$.
The function
\begin{equation}\label{Eentropy}
\cH(x)\,\coloneqq\,\frac{1}{2}\,\babs{\upsigma^{\mathsf{T}}(x)\nabla {\varphi^{}_{\mspace{-2mu}*}}(x)}^2\,,
\quad x\in{\mathds{R}^{d}}\,,
\end{equation}
plays a very important role in the analysis, and can be interpreted
as an \emph{infinitesimal relative entropy rate} (see \cref{S4}).
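To justify the terminology, note that, at least formally (assuming Girsanov's
theorem applies to the drift perturbation $b \mapsto b + a\nabla{\varphi^{}_{\mspace{-2mu}*}}$
under the given control), if $\widetilde\Prob^x_T$ and $\Prob^x_T$ denote the laws
on $[0,T]$ of the perturbed and unperturbed processes, respectively,
$\widetilde{X}$ the perturbed process, and $\widetilde\Exp^x$ its
expectation operator, then the relative entropy $D(\cdot\,\|\,\cdot)$ satisfies
\begin{equation*}
\frac{1}{T}\, D\bigl(\widetilde\Prob^x_T \,\big\|\, \Prob^x_T\bigr)
\,=\, \frac{1}{T}\, \widetilde\Exp^x\biggl[\int_0^T
\cH(\widetilde{X}_t)\,\mathrm{d}{t}\biggr]\,.
\end{equation*}
Thus $\cH$ is the rate at which relative entropy accumulates along the
perturbed trajectory.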
To keep the notation simple, we define ${\mathcal{Z}} \coloneqq {\mathds{R}^{d}}\times{\mathscr{K}}\times{\mathds{R}^{d}}$, and
use the single variable $z=(x,\xi,y)\in{\mathcal{Z}}$.
Let $\cP({\mathcal{Z}})$ denote the set of probability measures
on the Borel $\sigma$-algebra of ${\mathcal{Z}}$, and
$\eom_A$ denote the set of \emph{infinitesimal ergodic occupation measures}
for the operator $\sA$ defined by
\begin{equation}\label{Eeom}
\eom_{\sA}\,\coloneqq\,\biggl\{ \mu\in \cP({\mathcal{Z}})\,\colon
\int_{{\mathcal{Z}}} \sA f(z)\,\mu(\mathrm{d}{z}) \,=\, 0\quad \forall\, f\in\Cc^{2}_c({\mathds{R}^{d}})\biggr\}\,,
\end{equation}
where $\Cc^2_c({\mathds{R}^{d}})$ is the class of functions
in $\Cc^2({\mathds{R}^{d}})$ which have compact support.
We also define
\begin{equation}\label{Emufinite}
\begin{aligned}
{\cP^{}_{\mspace{-3mu}*}}({\mathcal{Z}}) &\,\coloneqq\,
\biggl\{\mu\in \cP({\mathcal{Z}})\,\colon
\int_{{\mathcal{Z}}} \cH(x)\,\mu(\mathrm{d}{x},\mathrm{d}{\xi},\mathrm{d}{y}) <\infty\biggr\}\,,\\
{\cP^{}_{\mspace{-3mu}\circ}}({\mathcal{Z}}) &\,\coloneqq\,
\biggl\{ \mu\in \cP({\mathcal{Z}})\,\colon
\int_{{\mathcal{Z}}} \sR(z)\,\mu(\mathrm{d}{z}) > -\infty\biggr\}\,.
\end{aligned}
\end{equation}
Then, under the mild hypotheses of \cref{A3.1}, we show in \cref{P4.1} that
\begin{equation}\label{E-var}
\begin{aligned}
J_* \,=\, \lambda_* &\,=\, \adjustlimits\sup_{\mu\in{\cP^{}_{\mspace{-3mu}*}}({\mathcal{Z}})}
\inf_{g \in \Cc^2_c({\mathds{R}^{d}})}\,\int_{{\mathcal{Z}}} \bigl(\sA g(z)+\sR(z)\bigr)\,\mu(\mathrm{d}{z})\\
&
\,=\, \max_{\mu\in\eom_{\sA}\cap{\cP^{}_{\mspace{-3mu}*}}({\mathcal{Z}})}\,\int_{{\mathcal{Z}}} \sR(z)\,\mu(\mathrm{d}{z})\,.
\end{aligned}
\end{equation}
We next specialize the results to the case where the diffusion matrix
$a$ is bounded and uniformly elliptic (see \cref{A4.1}), and
show in \cref{T4.1} that under any of the hypotheses of
\cref{A4.2} we have $\eom_{\sA}\cap{\cP^{}_{\mspace{-3mu}\circ}}({\mathcal{Z}})\subset
{\cP^{}_{\mspace{-3mu}*}}({\mathcal{Z}})$.
This permits us to replace ${\cP^{}_{\mspace{-3mu}*}}({\mathcal{Z}})$ with
$\cP({\mathcal{Z}})$ and $\eom_\sA\cap{\cP^{}_{\mspace{-3mu}*}}({\mathcal{Z}})$ with
$\eom_\sA$ in the second and third equalities of \cref{E-var}, respectively.
We note here that if $a$ is bounded and uniformly elliptic, then
\cref{A4.2} is satisfied when either
$-c$ is inf-compact, or $\langle b,x\rangle^-$ has subquadratic growth,
or $\frac{\abs{b}^2}{1+\abs{c}}$ is bounded.
We also show that if $\frac{\cH}{1+\abs{{\varphi^{}_{\mspace{-2mu}*}}}}$ is bounded
(see \cref{L4.3} for explicit conditions on the parameters under
which this holds), then we can commute
the `$\sup$' and the `$\inf$' to obtain
\begin{equation*}
J_* \,=\, \adjustlimits\inf_{g \in \Cc^2_c({\mathds{R}^{d}})} \sup_{\mu\in\cP({\mathcal{Z}})}\,
\int_{{\mathcal{Z}}} \bigl(\sA g(z)+\sR(z)\bigr)\,\mu(\mathrm{d}{z}) \,.
\end{equation*}
Also, in \cref{T4.2}, we establish the variational formula
over the class of functions in $\Cc^2({\mathds{R}^{d}})$ whose partial derivatives
up to second order have
at most polynomial growth in $\abs{x}$.
\subsection{Notation}
The standard Euclidean norm in $\mathds{R}^{d}$ is denoted by $\abs{\,\cdot\,}$,
and $\mathds{N}$ stands for the set of natural numbers.
The closure, the boundary and the complement
of a set $A\subset\mathds{R}^{d}$ are denoted
by $\Bar{A}$, $\partial{A}$ and $A^{c}$, respectively.
We denote by $\uptau(A)$ the \emph{first exit time} of the process
$\{X_{t}\}$ from the set $A\subset\mathds{R}^{d}$, defined by
\begin{equation*}
\uptau(A) \,\coloneqq\, \inf\,\{t>0\,\colon\, X_{t}\not\in A\}\,.
\end{equation*}
The open ball of radius $r$ in $\mathds{R}^{d}$, centered at $x\in{\mathds{R}^{d}}$,
is denoted by $B_{r}(x)$, and $B_r$ is the ball centered at $0$.
We let $\uptau_{r}\coloneqq \uptau(B_{r})$,
and ${\Breve\uptau}_{r}\coloneqq \uptau(B^{c}_{r})$.
For a Borel space $Y$, $\cP(Y)$ denotes the set of probability measures
on its Borel $\sigma$-algebra.
The term \emph{domain} in $\mathds{R}^{d}$
refers to a nonempty, connected open subset of the Euclidean space $\mathds{R}^{d}$.
For a domain $D\subset\mathds{R}^{d}$,
the space $\Cc^{k}(D)$ ($\Cc^{k}_b(D)$)
refers to the class of all real-valued functions on $D$ whose partial
derivatives up to order $k$ exist and are continuous (and bounded).
In addition $\Cc_c^k(D)$ denotes the class of functions in $\Cc^k(D)$ that
have compact support.
The space $\Lp^{p}(D)$, $p\in[1,\infty)$, stands for the Banach space
of (equivalence classes of) measurable functions $f$ satisfying
$\int_{D} \abs{f(x)}^{p}\,\mathrm{d}{x}<\infty$, and $\Lp^{\infty}(D)$ is the
Banach space of functions that are essentially bounded in $D$.
The standard Sobolev space of functions on $D$ whose generalized
derivatives up to order $k$ are in $\Lp^{p}(D)$, equipped with its natural
norm, is denoted by $\Sob^{k,p}(D)$, $k\ge0$, $p\ge1$.
In general, if $\mathcal{X}$ is a space of real-valued functions on $Q$,
$\mathcal{X}_{\mathrm{loc}}$ consists of all functions $f$ such that
$f\varphi\in\mathcal{X}$ for every $\varphi\in\Cc_{c}^{\infty}(Q)$,
the space of smooth functions on $Q$ with compact support.
In this manner we obtain for example the space $\Sobl^{2,p}(Q)$.
We adopt the notation
$\partial_{t}\coloneqq\tfrac{\partial}{\partial{t}}$, and for $i,j\in\mathds{N}$,
$\partial_{i}\coloneqq\tfrac{\partial~}{\partial{x}_{i}}$ and
$\partial_{ij}\coloneqq\tfrac{\partial^{2}~}{\partial{x}_{i}\partial{x}_{j}}$,
and use the standard summation rule that
repeated subscripts and superscripts are summed from $1$ through $d$.
\section{The problem on a bounded domain}
In this section, we consider the risk-sensitive reward maximization with state dynamics
given by a reflected diffusion on a bounded $\Cc^2$ domain $Q\subset{\mathds{R}^{d}}$
with co-normal direction of reflection.
In particular, the dynamics are given by
\begin{equation}\label{E-sde}
\mathrm{d} X_t \,=\, b(X_t,\xi_t)\,\mathrm{d} t + \upsigma (X_t)\,\mathrm{d} W_t
- \gamma (X_t)\,\mathrm{d} \eta_t\,,
\end{equation}
where $\eta_t$ denotes the local time of the process $X$ on the boundary
$\partial Q$.
The random processes in \cref{E-sde} live in a complete
probability space $(\Omega,\sF,\Prob)$.
The process $W=(W_t)_{t\ge0}$ is a $d$-dimensional standard Wiener process independent
of the initial condition $X_{0}$.
The control process $\xi=(\xi_t)_{t\ge0}$ takes values in a compact,
metrizable set ${\mathscr{K}}$, and
$\xi_t(\omega)$ is jointly measurable in
$(t,\omega)\in[0,\infty)\times\Omega$.
The set of \emph{admissible controls} ${\Xi}$ consists of the
control processes $\xi$ that are \emph{non-anticipative}:
for $s < t$, $W_{t} - W_{s}$ is independent of
\begin{equation}\label{E-sF}
\sF_{s} \,\coloneqq\,\text{the completion of~} \sigma\{X_{0},\xi_r,W_r,\,r\le s\}
\text{~relative to~}(\sF,\Prob)\,.
\end{equation}
Concerning the coefficients of the equation, we assume the following:
\begin{enumerate}
\item[(i)]
The drift $b$ is a continuous map from $\overline{Q}\times{\mathscr{K}}$ to ${\mathds{R}^{d}}$,
and Lipschitz in its first
argument uniformly with respect to the second.
\item[(ii)]
The \emph{diffusion matrix} $\upsigma\colon \overline{Q} \to\mathds{R}^{d\times d}$
is continuously differentiable with H\"older continuous derivatives,
and is non-degenerate in the sense that the minimum eigenvalue of
$a(x)=\bigl[a^{ij}(x)\bigr]\coloneqq\upsigma(x)\upsigma^{\mathsf{T}}(x)$
on $Q$ is bounded away from zero.
\item[(iii)]
The \emph{reflection direction}
$\gamma = [\gamma_{1}(x), \dotsc, \gamma_d(x)]^{\mathsf{T}} \colon{\mathds{R}^{d}} \to {\mathds{R}^{d}}$ is co-normal,
that is,
$\gamma$ is given by
\begin{equation*}
\gamma_i(x) \,=\, \sum_{j=1}^d a^{ij}(x) n_{j}(x)\,,\quad x\in\partial Q\,,
\end{equation*}
where $\vec n(x)= [n_{1}(x), \dotsc, n_d(x)]^{\mathsf{T}}$ is the unit outward normal.
\end{enumerate}
We let ${\Xi_{\mathsf{sm}}}$ denote the set of stationary Markov controls, that is,
the set of Borel measurable functions $v\colon{\mathds{R}^{d}}\to{\mathscr{K}}$.
Given $\xi\in{\Xi}$, the stochastic differential equation in \cref{E-sde}
has a unique strong solution.
The same is true for the class of Markov controls \cite[Chapter~2]{ABG}.
Let $\Prob^x_\xi$ and $\Exp^x_\xi$ denote the
probability measure and expectation operator on the canonical space of the
process controlled under $\xi\in{\Xi}$, with initial condition $X_0=x$.
Given a continuous \emph{reward function} $c\colon\overline{Q}\times{\mathscr{K}}\to\mathds{R}$,
which is Lipschitz continuous in its first
argument uniformly with respect to the second,
the objective of the risk-sensitive reward problem is to maximize
\begin{equation}\label{E-JQ}
J^x_\xi(c;Q) \,=\, \liminf_{T\to\infty}\, \frac{1}{T}\,
\log \Exp^x_\xi \Bigl[\mathrm{e}^{\int^T_0 c(X_t,\xi_t)\,\mathrm{d} t} \Bigr]\,,\quad x\in Q\,,
\end{equation}
over all admissible controls $\xi\in{\Xi}$.
We define
\begin{equation}\label{E-J*}
J^x_*(c;Q) \,\coloneqq\, \sup_{\xi\in{\Xi}}\,J^x_\xi(c;Q)\,,\quad x\in Q\,,\quad
\text{and\ \ } J_*(c;Q) \,\coloneqq\, \sup_{x\in Q}\,J^x_*(c;Q)\,.
\end{equation}
The solution of this problem shows that $J^x_*(c;Q)$ does not depend on $x$.
We let
\begin{equation*}
\Cc^{2}_{\gamma}(\overline{Q}) \,\coloneqq\, \bigl\{ f \in \Cc^{2}(\overline{Q})\,\colon\,
\langle\nabla f, \gamma\rangle \,=\, 0 \text{\ on\ } \partial{Q} \bigr\}\,,
\end{equation*}
and $\Cc^{2}_{\gamma,+}(\overline{Q})$ denote its subspace consisting of nonnegative
functions.
For $f \in \Cc^{2}(\overline{Q})$, and $\xi\in{\mathscr{K}}$, we define
\begin{equation}\label{E-gen}
\begin{aligned}
\Lg_\xi f(x) &\,\coloneqq\, \tfrac{1}{2}\trace\left(a(x)\nabla^{2}f(x)\right)
+ \bigl\langle b(x,\xi),\nabla f(x)\bigr\rangle\,,\\
\cG f(x) &\,\coloneqq\, \tfrac{1}{2}\trace\left(a(x)\nabla^{2}f(x)\right)
+ \max_{\xi\in{\mathscr{K}}}\, \bigl[\bigl\langle b(x,\xi),
\nabla f(x)\bigr\rangle + c(x,\xi) f(x)\bigr]\,.
\end{aligned}
\end{equation}
We summarize some results from \cite{ABK-16} that are needed in \cref{T2.1} below.
Without loss of generality we assume that $0\in Q$.
Consider the operator $S_t\colon\Cc(\overline{Q})\to\Cc(\overline{Q})$, $t\in\mathds{R}_+$,
defined by
\begin{equation*}
S_{t}f(x)\,\coloneqq\,\sup_{\xi\in{\Xi}}\,
\Exp^x_\xi\Bigl[e^{\int_{0}^t c(X_s,\xi_s)\,\mathrm{d}{s}}f(X_t)\Bigr]\,.
\end{equation*}
The characterization of $S_t$ is exactly analogous to \cite[Theorem~3.2]{ABK-16},
which considers the minimization problem (see also \cite[Remark~4.2]{ABK-16}).
Specifically, for each $f \in \Cc^{2+\delta}_{\gamma}(\overline{Q})$,
and $T>0$, the quasi-linear parabolic p.d.e.
$\partial_t\,u(t,x) = \cG u(t,x)$
in $(0,T]\times Q$,
with $u(0,x) = f(x)$ for all $x \in \overline{Q}$, and
$\langle\nabla u(t,x), \gamma(x)\rangle = 0$ for all
$(t,x) \in (0,T]\times\partial{Q}$,
has a unique solution in
$\Cc^{1+\nicefrac{\delta}{2},2+\delta}\bigl([0,T]\times\overline{Q}\bigr)$.
This solution has the stochastic representation
$u(t,x) \,=\, S_{t}f(x)$ for all $(t,x)\in[0,T]\times\overline{Q}$.
Following the analysis in \cite{ABK-16} we obtain the following characterization
of $J_*(c;Q)$ defined in \cref{E-J*}.
\begin{theorem}\label{T2.1}
There exists a unique pair
$(\rho,V) \in \mathds{R}\times \Cc^{2}_{\gamma,+}(\overline{Q})$
which solves
\begin{equation}\label{ET2.1A}
\cG V \,=\,\rho V \text{\ \ in\ }Q\,,\qquad
\langle\nabla V,\gamma\rangle \,=\, 0 \text{\ \ on\ } \partial{Q}\,,
\quad\text{and\ \ } V(0)\,=\,1\,.
\end{equation}
Also,
$S_{t}V(x)=e^{\rho t}V(x)$, for $(x,t)\in\overline{Q}\times[0,\infty)$.
In addition, we have
\begin{equation*}
J^x_*(c;Q)\,=\,J_*(c;Q)
\,=\,\rho\qquad \forall\, x\in Q\,,
\end{equation*}
and
\begin{equation}\label{ET2.1B}
\rho \,=\, \adjustlimits\inf_{f \in \Cc^{2}_{\gamma,+}(\overline{Q}),\, f>0\;}
\sup_{x\in\overline{Q}}\;\frac{\cG f(x)}{f(x)}
= \adjustlimits\sup_{f \in \Cc^{2}_{\gamma,+}(\overline{Q}),\, f>0\;}
\inf_{x\in\overline{Q}}\;\frac{\cG f(x)}{f(x)}\,.
\end{equation}
\end{theorem}
\begin{proof}
\Cref{ET2.1B} is the result in \cite[Lemma~2.1]{ABK-16}, while the other assertions
follow from Lemma~4.5 and Remark~4.2 in \cite{ABK-16}.
\end{proof}
\subsection{A variational formula}
Define
\begin{equation*}
\sR(x,\xi,y) \,\coloneqq\, c(x,\xi) - \frac{1}{2}\abs{\upsigma^{\mathsf{T}}(x) y}^2\,,\quad
(x,\xi,y)\in \overline{Q}\times{\mathscr{K}}\times{\mathds{R}^{d}}\,,
\end{equation*}
and an operator
$\sA\colon \Cc^{2}_{\gamma}(\overline{Q}) \to \Cc({\mathds{R}^{d}}\times{\mathscr{K}}\times{\mathds{R}^{d}})$ by
\begin{equation*}
\sA\phi(x,\xi,y)\,\coloneqq\,\frac{1}{2}\trace\left(a(x)\nabla^{2}\phi(x)\right)
+ \bigl\langle b(x,\xi)+ a(x)y, \nabla \phi(x)\bigr\rangle\,.
\end{equation*}
It is important to note that if $f \in \Cc^{2}_{\gamma,+}(\overline{Q})$ is a positive
function and $g=\log f$, then
\begin{equation*}
\frac{\cG f(x)}{f(x)} \,=\, \adjustlimits\max_{\xi\in{\mathscr{K}}}\max_{y\in{\mathds{R}^{d}}}\;
\bigl[\sA g(x,\xi,y) + \sR(x,\xi,y)\bigr]\,.
\end{equation*}
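Indeed, with $f=\mathrm{e}^{g}$ we have $\nabla f = f\,\nabla g$ and
$\nabla^{2} f = f\,\bigl(\nabla^{2} g + \nabla g\,\nabla g^{\mathsf{T}}\bigr)$,
so that
\begin{equation*}
\frac{\cG f(x)}{f(x)} \,=\, \frac{1}{2}\trace\left(a(x)\nabla^{2}g(x)\right)
+ \max_{\xi\in{\mathscr{K}}}\,\bigl[\bigl\langle b(x,\xi),\nabla g(x)\bigr\rangle
+ c(x,\xi)\bigr] + \frac{1}{2}\abs{\upsigma^{\mathsf{T}}(x)\nabla g(x)}^{2}\,,
\end{equation*}
while, since $\langle a(x) y,\nabla g(x)\rangle
= \langle \upsigma^{\mathsf{T}}(x) y, \upsigma^{\mathsf{T}}(x)\nabla g(x)\rangle$,
the inner maximization over $y$ is quadratic:
\begin{equation*}
\max_{y\in{\mathds{R}^{d}}}\,\Bigl[\bigl\langle a(x) y,\nabla g(x)\bigr\rangle
- \frac{1}{2}\abs{\upsigma^{\mathsf{T}}(x) y}^{2}\Bigr]
\,=\, \frac{1}{2}\abs{\upsigma^{\mathsf{T}}(x)\nabla g(x)}^{2}\,,
\end{equation*}
attained at $y=\nabla g(x)$.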
Thus, we obtain from \cref{ET2.1B} that
\begin{align}
\rho &\,=\, \adjustlimits\inf_{g \in \Cc^{2}_{\gamma}(\overline{Q})\,}
\sup_{x\in\overline{Q}\;}\sup_{\xi\in{\mathscr{K}},\,y\in{\mathds{R}^{d}}}\,
\Bigl(\sA g(x,\xi,y) + \sR(x,\xi,y) \Bigr)
\label{E-infsup}\\
&\,=\, \adjustlimits\sup_{g \in \Cc^{2}_{\gamma}(\overline{Q})\,}
\inf_{x\in\overline{Q}\;}\sup_{\xi\in{\mathscr{K}},\,y\in{\mathds{R}^{d}}}\,
\Bigl(\sA g(x,\xi,y) + \sR(x,\xi,y)\Bigr)
\nonumber\,.
\end{align}
We let
\begin{equation}\label{E-F}
F(g,\mu) \,\coloneqq\,
\int_{\overline{Q}\times{\mathscr{K}}\times{\mathds{R}^{d}}}
\bigl(\sA g(x,\xi,y)+\sR(x,\xi,y)\bigr)\,\mu(\mathrm{d}{x},\mathrm{d}{\xi},\mathrm{d}{y})
\end{equation}
for $g\in\Cc^{2}_{\gamma}(\overline{Q})$
and $\mu\in\cP(\overline{Q}\times{\mathscr{K}}\times{\mathds{R}^{d}})$.
It is clear that \cref{E-infsup} can be written as
\begin{equation}\label{E-infsup1}
\rho \,=\, \adjustlimits\inf_{g\in\Cc^{2}_{\gamma}(\overline{Q})}
\sup_{\mu\in\cP(\overline{Q}\times{\mathscr{K}}\times{\mathds{R}^{d}})}\,F(g,\mu)\,.
\end{equation}
Let $\eom_{\sA,Q}$ denote the class of infinitesimal
ergodic occupation measures for the operator $\sA$, defined by
\begin{equation*}
\eom_{\sA,Q}\,\coloneqq\,\biggl\{
\mu\in \cP(\overline{Q}\times{\mathscr{K}}\times{\mathds{R}^{d}})\,\colon
\int_{\overline{Q}\times{\mathscr{K}}\times{\mathds{R}^{d}}} \sA f\,\mathrm{d}\mu=0\quad
\forall\, f\in\Cc^{2}_{\gamma}(\overline{Q})\biggr\}\,.
\end{equation*}
Implicit in this definition is the requirement that
$\int \abs{\sA f}\,\mathrm{d}\mu<\infty$ for all $f\in\Cc^{2}_{\gamma}(\overline{Q})$
and $\mu\in\eom_{\sA,Q}$.
We have the following result.
\begin{theorem}\label{T2.2}
It holds that
\begin{equation}\label{ET2.2A}
\rho \,=\, \adjustlimits\inf_{g \in \Cc^{2}_{\gamma}(\overline{Q})}
\sup_{\mu\in\cP(\overline{Q}\times{\mathscr{K}}\times{\mathds{R}^{d}})}\,F(g,\mu)
\,=\,
\adjustlimits\sup_{\mu\in\cP(\overline{Q}\times{\mathscr{K}}\times{\mathds{R}^{d}})}
\inf_{g \in \Cc^{2}_{\gamma}(\overline{Q})}\,F(g,\mu)\,.
\end{equation}
Moreover, $\cP(\overline{Q}\times{\mathscr{K}}\times{\mathds{R}^{d}})$ may be replaced with
$\eom_{\sA,Q}$ in \cref{ET2.2A}, and thus
\begin{equation*}
\rho \,=\, \sup_{\mu\in\eom_{\sA,Q}}\,
\int_{\overline{Q}\times{\mathscr{K}}\times{\mathds{R}^{d}}} \sR(x,\xi,y)\,\mu(\mathrm{d}{x},\mathrm{d}{\xi},\mathrm{d}{y})\,.
\end{equation*}
\end{theorem}
\begin{proof}
The first equality in \cref{ET2.2A} follows by \cref{E-infsup1}.
We continue to prove the rest of the assertions.
First note that
\begin{equation*}
\adjustlimits\sup_{\mu\in\cP(\overline{Q}\times{\mathscr{K}}\times{\mathds{R}^{d}})}
\inf_{g \in \Cc^{2}_{\gamma}(\overline{Q})}\,F(g,\mu)
\,=\, \Hat\rho
\,\coloneqq\, \sup_{\mu\in\eom_{\sA,Q}}\,
\int_{\overline{Q}\times{\mathscr{K}}\times{\mathds{R}^{d}}} \sR(x,\xi,y)\,\mu(\mathrm{d}{x},\mathrm{d}{\xi},\mathrm{d}{y})\,,
\end{equation*}
because the infimum on the left hand side is $-\infty$ for $\mu \notin \eom_{\sA, Q}$.
It follows by \cref{E-infsup1} that $\Hat\rho\le\rho$.
Let $v_*$ be a measurable selector from the maximizer of \cref{ET2.1A}, that is,
\begin{equation*}
\bigl\langle b\bigl(x,v_*(x)\bigr),\nabla V(x)\bigr\rangle
+ c\bigl(x,v_*(x)\bigr) V(x)\,=\,
\max_{\xi\in{\mathscr{K}}}\,
\bigl[\bigl\langle b(x,\xi),\nabla V(x)\bigr\rangle + c(x,\xi) V(x)\bigr]\,.
\end{equation*}
With $\phi\coloneqq\log V$, \cref{ET2.1A} takes the form
\begin{equation}\label{PT2.2Xa}
\sA\phi\bigl(x,v_*(x),\nabla \phi(x)\bigr) + \sR\bigl(x,v_*(x),\nabla\phi(x)\bigr)
\,=\,\rho\,.
\end{equation}
The reflected diffusion
with drift $b\bigl(x,v_*(x)\bigr)+a(x)\nabla\phi(x)$ is of course
exponentially ergodic.
Let $\eta_*$ denote its invariant probability measure.
Then, \cref{PT2.2Xa} implies
that
\begin{equation}\label{PT2.2Xb}
\int_Q \sR\bigl(x,v_*(x),\nabla\phi(x)\bigr)\,\eta_*(\mathrm{d}{x})\,=\,\rho\,.
\end{equation}
Let $\mu_*\in\cP(\overline{Q}\times{\mathscr{K}}\times{\mathds{R}^{d}})$ be defined by
\begin{equation*}
\mu_*(\mathrm{d}{x},\mathrm{d}{\xi},\mathrm{d}{y})\,\coloneqq\, \eta_*(\mathrm{d}{x})\,\delta_{v_*(x)}(\mathrm{d}{\xi})\,
\delta_{\nabla\phi(x)}(\mathrm{d}{y})\,,
\end{equation*}
where $\delta_y$ denotes the Dirac mass at $y$.
Then $\mu_*$ is an ergodic occupation measure for the controlled reflected diffusion
with drift $b(x,\xi)+a(x)y$, and thus $\mu_*\in\eom_{\sA,Q}$.
Let $g \in \Cc^{2}_{\gamma}(\overline{Q})$ be arbitrary.
Then
\begin{equation*}
F(g,\mu_*)\,=\,
\int_{\overline{Q}\times{\mathscr{K}}\times{\mathds{R}^{d}}} \sR(x,\xi,y)\,\mu_*(\mathrm{d}{x},\mathrm{d}{\xi},\mathrm{d}{y})
\,=\,\rho\,,
\end{equation*}
where the second equality follows by \cref{PT2.2Xb}.
Thus $\Hat\rho\ge\rho$, and since we have already asserted the reverse inequality,
we must have equality.
This establishes \cref{ET2.2A}, and also proves the last assertion
of the theorem.
\end{proof}
\section{The risk-sensitive reward problem on \texorpdfstring{${\mathds{R}^{d}}$}{}}
In this section we study the risk-sensitive reward maximization problem
on ${\mathds{R}^{d}}$.
We consider a controlled diffusion of the form
\begin{equation}\label{E-sde1}
\mathrm{d} X_t \,=\, b(X_t,\xi_t)\,\mathrm{d} t + \upsigma (X_t)\,\mathrm{d} W_t\,.
\end{equation}
All random processes in \cref{E-sde1} live in a complete
probability space $(\Omega,\sF,\Prob)$.
The control process $\{\xi_t\}_{t\ge0}$ lives in a compact metrizable space ${\mathscr{K}}$.
We approach the problem in ${\mathds{R}^{d}}$ as a limit of Dirichlet or Neumann eigenvalue
problems on balls $B_r$, $r>0$.
Differentiability of the matrix $a$ can be relaxed here.
Consider the eigenvalue problem on a ball $B_r$, with Neumann boundary
conditions, and the reflection direction along the exterior normal
$\vec n(x)$ to $B_r$ at $x$. The drift $b:\Bar{B}_r\times{\mathscr{K}}\to{\mathds{R}^{d}}$ is continuous,
and Lipschitz in its first argument uniformly with respect to the second.
The diffusion matrix $a$ is Lipschitz continuous on $\Bar{B}_r$ and
non-degenerate.
Let $\rho_r$ denote the principal eigenvalue on $B_r$ under
Neumann boundary conditions of the operator
$\cG$ defined in \cref{E-gen}.
We refer to $\rho_r$ as the \emph{Neumann eigenvalue} on $B_r$.
It follows from the results in \cite{Patrizi-09}
(see in particular Theorems~5.1, 6.6, and Proposition~7.1) that there exists
a unique $V_r\in\Cc^2(B_r)\cap \Cc^{0,1}(\Bar{B}_r)$, with $V_r>0$ on $B_r$
and $V_r(0)=1$, solving
\begin{equation}\label{E-HJBN}
\tfrac{1}{2}\trace\left(a(x)\nabla^{2}V_r(x)\right)
+ \max_{\xi\in{\mathscr{K}}}\, \bigl[\bigl\langle b(x,\xi),
\nabla V_r(x)\bigr\rangle + c(x,\xi) V_r(x)\bigr]\,=\, \rho_r V_r(x)\,,
\end{equation}
and $\langle \nabla V_r(x),\vec n(x)\rangle=0$ on $\partial B_r$.
We also refer the reader to \cite[Theorem~12.1, p.~195]{Lady}.
We adopt the following structural hypotheses on the
coefficients of \cref{E-sde1} and on the reward function $c$.
\begin{assumption}\label{A3.1}
\begin{enumerate}
\item[\ttup i]
The drift $b\colon\mathds{R}^{d}\times{\mathscr{K}}\to\mathds{R}^{d}$ is continuous,
and for some constant $C_R>0$ depending on $R>0$, we have
\begin{align}\label{E-growth}
\abs{b(x,\xi) - b(y,\xi)} + \norm{\upsigma(x) - \upsigma(y)} &\,\le\,C_{R}\,\abs{x-y}
\qquad\forall\,x,y\in B_R\,,\ \forall\, \xi\in{\mathscr{K}}\,,\nonumber\\
\sum_{i,j=1}^{d} a^{ij}(x)\zeta_{i}\zeta_{j}
&\,\ge\, C^{-1}_{R} \abs{\zeta}^{2}
\qquad\forall\, (x,\zeta)\in B_{R}\times{\mathds{R}^{d}}\,,\nonumber
\intertext{and}
\abs{b(x,\xi)}^2 + \norm{\upsigma(x)}^{2} &\,\le\, C_0
\bigl(1 + \abs{x}^{2}\bigr) \qquad \forall\, (x,\xi)\in\mathds{R}^{d}\times{\mathscr{K}}\,,
\end{align}
where $\norm{\upsigma}\coloneqq\bigl(\trace\, \upsigma\upsigma^{\mathsf{T}}\bigr)^{\nicefrac{1}{2}}$
denotes the Hilbert--Schmidt norm of $\upsigma$.
\item[\ttup{ii}]
The reward function $c\colon{\mathds{R}^{d}}\times{\mathscr{K}}\to\mathds{R}$ is continuous
and locally Lipschitz in its first argument
uniformly with respect to $\xi\in{\mathscr{K}}$, is bounded from above in ${\mathds{R}^{d}}$,
and $x\mapsto\max_{\xi\in{\mathscr{K}}}\,\abs{c(x,\xi)}$ has polynomial growth in $\abs{x}$.
\item[\ttup{iii}]
We assume that the Neumann eigenvalues $\rho_n$ satisfy
\begin{equation}\label{EA3.1A}
{\rho^{}_*}\,\coloneqq\,\limsup_{n\to\infty}\,\rho_n\,>\,
\lim_{r\to\infty}\,\sup_{(x,\xi)\in B_r^c\times{\mathscr{K}}}\,c(x,\xi)\,.
\end{equation}
\end{enumerate}
\end{assumption}
\Cref{A3.1} is enforced throughout the rest of the paper, unless mentioned
otherwise.
Part (i) of this assumption comprises the usual hypotheses that guarantee existence
and uniqueness of strong solutions to \cref{E-sde1} under any admissible control.
\begin{remark}
\Cref{EA3.1A} is a version of the near-monotone assumption,
which is often used in ergodic control problems (see \cite{ABG}).
This has the effect of penalizing instability, ensuring tightness
of laws for optimal controls.
There are two important cases where \cref{EA3.1A} is always satisfied.
First, when $-c$ is inf-compact.
In this case we have ${\rho^{}_*}\le \sup_{{\mathds{R}^{d}}\times{\mathscr{K}}} c$ and ${\rho^{}_*}>-\infty$, since the
Dirichlet eigenvalues, which are a lower bound for ${\rho^{}_*}$, are increasing
as a function of the domain \cite[Lemma~2.1]{ABS-19}.
Second, when $c$ is positive and vanishes at infinity, and under some
stationary Markov control the process $\{X_t\}_{t\ge0}$ in \cref{E-sde1} is recurrent.
This can be established by comparing $\rho_n$ with the Dirichlet eigenvalue
on $B_n$ (see \cref{S3.2}), and using \cite[Theorems~2.6 and 2.7\,(ii)]{ABS-19}.
For related studies concerning the class of running reward functions vanishing
at infinity, albeit in the uncontrolled case, see
\cite{Ichihara-13b,Ichihara-15,ABS-19,Armstrong-09}.
See also \cite[Theorem~2.12]{AB-18b} which studies the Collatz--Wielandt formula
for the risk-sensitive minimization problem.
\end{remark}
Recall that ${\Xi_{\mathsf{sm}}}$ denotes the set of stationary Markov controls.
For $v\in{\Xi_{\mathsf{sm}}}$, we use the simplifying notation
\begin{equation*}
b_v(x) \,\coloneqq\, b\bigl(x,v(x)\bigr)\,,\qquad
c_v(x) \,\coloneqq\, c\bigl(x,v(x)\bigr)\,,
\end{equation*}
and define $\Lg_v$ analogously.
We next review some properties of eigenvalues of linear and semilinear operators
on ${\mathds{R}^{d}}$.
For $f\in\Cc^2({\mathds{R}^{d}})$ and $\psi\in\Sobl^{2,d}({\mathds{R}^{d}})$, define
\begin{equation}\label{E-twist}
\widetilde\Lg^\psi_\xi f \,\coloneqq\,
\Lg_\xi f + \langle a\nabla\psi,\nabla f\rangle\,,
\end{equation}
with $\Lg_\xi$ as in \cref{E-gen}.
Let $v\in{\Xi_{\mathsf{sm}}}$.
Suppose that a positive function $\Psi\in\Sobl^{2,d}({\mathds{R}^{d}})$ and $\lambda\in\mathds{R}$
solve the equation
\begin{equation}\label{E-eigen}
\Lg_v \Psi(x) + c_v(x) \Psi(x) \,=\, \lambda \Psi(x)\quad\text{a.e.\ } x\in{\mathds{R}^{d}}\,.
\end{equation}
We refer to any such solution $(\Psi,\lambda)$ as an \emph{eigenpair of
the operator $\Lg_v+c_v$}, and we say that $\Psi$ is an eigenvector
with eigenvalue $\lambda$.
Note that by eigenvector we always mean a positive function.
Let $\psi=\log\Psi$.
We refer to the It\^o stochastic differential
equation
\begin{equation}\label{E-sde2}
\mathrm{d} \widetilde{X}_t \,=\, \bigl(b_v(\widetilde{X}_t)
+ a(\widetilde{X}_t)\nabla\psi(\widetilde{X}_t)\bigr)
\,\mathrm{d} t + \upsigma (\widetilde{X}_t)\,\mathrm{d} W_t
\end{equation}
as the \emph{twisted} SDE, and to its solution
as the \emph{twisted} process corresponding to $\Psi$.
Clearly $\widetilde\Lg^\psi_v$ is the extended generator of \cref{E-sde2}.
We define the \emph{generalized principal eigenvalue} $\lambda_v=\lambda_v(c_v)$ of
the operator $\Lg_v + c_v$ by
\begin{equation}\label{E-princ1}
\lambda_v \,\coloneqq\,\inf\,\Bigl\{\lambda\in\mathds{R}\,
\colon \exists\, \phi\in\Sobl^{2,d}({\mathds{R}^{d}}),\ \phi>0, \
\Lg_{v}\phi + (c_{v}-\lambda)\phi\le 0 \text{\ a.e.\ in\ } {\mathds{R}^{d}}\Bigr\}\,.
\end{equation}
A \emph{principal eigenvector} $\Psi_{\mspace{-2mu}v}\in\Sobl^{2,d}({\mathds{R}^{d}})$
is a positive solution of \cref{E-eigen} with $\lambda=\lambda_v$.
A principal eigenvector is also called a \emph{ground state}, and
we refer to the corresponding twisted SDE and twisted process as
a \emph{ground state SDE} and \emph{ground state process} respectively.
Unlike what is common in criticality theory, our definition of a
ground state does not require the minimal growth property of the principal eigenfunction
(see \cite{ABG-19}).
An easy calculation shows that any eigenpair
$(\Psi,\lambda)$ of $\Lg_v + c_v$ satisfies
\begin{equation}\label{E-eigenT}
\widetilde\Lg^\psi_v \Psi^{-1}(x) - c_v(x) \Psi^{-1}(x)
\,=\, -\lambda \Psi^{-1}(x)\quad\text{a.e.\ } x\in{\mathds{R}^{d}}\,,
\end{equation}
with $\psi=\log\Psi$.
In other words, $(\Psi^{-1},-\lambda)$ is an eigenpair of $\widetilde\Lg^\psi_v-c_v$.
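For the reader's convenience, here is the calculation: with
$u\coloneqq\Psi^{-1}=\mathrm{e}^{-\psi}$ we have $\nabla u = -u\,\nabla\psi$ and
$\nabla^{2} u = u\,\bigl(\nabla\psi\,\nabla\psi^{\mathsf{T}} - \nabla^{2}\psi\bigr)$,
so that
\begin{equation*}
\widetilde\Lg^\psi_v u \,=\,
-u\,\Bigl(\Lg_v\psi + \tfrac{1}{2}\abs{\upsigma^{\mathsf{T}}\nabla\psi}^{2}\Bigr)
\,=\, \bigl(c_v - \lambda\bigr)\,u\,,
\end{equation*}
where the second equality follows from
$\frac{\Lg_v\Psi}{\Psi} = \Lg_v\psi + \tfrac{1}{2}\abs{\upsigma^{\mathsf{T}}\nabla\psi}^{2}
= \lambda - c_v$, which is \cref{E-eigen} divided by $\Psi$.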
Note also that $(\psi,\lambda)$ is a solution to the `linear' eigenvalue equation
\begin{equation}\label{E-eigen2}
\widetilde\Lg^\psi_v \psi - \tfrac{1}{2}\abs{\upsigma^{\mathsf{T}}\nabla\psi}^2
+ c_v \,=\, \lambda\,,
\end{equation}
and that this equation can also be written as
\begin{equation}\label{E-eigen3}
\Lg_v \psi + \max_{y\in{\mathds{R}^{d}}}\,\Bigl[\langle a y,\nabla\psi\rangle
-\tfrac{1}{2}\abs{\upsigma^{\mathsf{T}} y}^2\Bigr]
+ c_v \,=\, \lambda\,.
\end{equation}
An extensive study of generalized principal eigenvalues
with applications to risk-sensitive control can be found in \cite{AB-18,ABS-19}.
In these papers, the `potential' $c_v$ is assumed to be bounded below in ${\mathds{R}^{d}}$,
so the results cannot be quoted directly.
It is not our intention to reproduce all of them for potentials which are
bounded above; we focus on results that are needed later in this paper, and
quote from \cite{AB-18,ABS-19} only those results which do not
depend on the assumption that $c_v$ is bounded below.
Generally speaking, caution should be exercised with arguments
in \cite{AB-18,ABS-19} that employ the Fatou lemma.
On the other hand, since $c$ usually appears in the exponent, invoking Fatou's lemma
hardly ever poses any problems.
Suppose that the twisted process in \cref{E-sde2} is regular, that is,
the solution exists for all times.
Then, an application of \cite[Lemma~2.3]{ABS-19} shows that
an eigenvector $\Psi$ has the stochastic representation
(semigroup property)
\begin{equation*}
\Psi(x) \,=\, \Exp^x_v\Bigl[\mathrm{e}^{\int_0^{t}[c_{v}(X_s)-\lambda]\, \mathrm{d}{s}}\,
\Psi(X_t)\Bigr]\,.
\end{equation*}
Recall that ${\Breve\uptau}_r$ denotes the first hitting time of the ball $B_r$, for $r>0$.
We need the following lemma.
\begin{lemma}\label{L3.1}
We assume only \cref{A3.1}\,\textup{(i)--(ii)}.
The following hold.
\begin{enumerate}
\item[\ttup a]
If $(\Psi,\lambda)$ is an eigenpair of $\Lg_v + c_v$ under some $v\in{\Xi_{\mathsf{sm}}}$,
and the twisted process in \cref{E-sde2} is exponentially ergodic, then
we have the stochastic representation
\begin{equation}\label{EL3.1A}
\Psi(x) \,=\, \Exp^x_v \Bigl[\mathrm{e}^{\int_0^{{\Breve\uptau}_r}[c_{v}(X_s)-\lambda]\, \mathrm{d}{s}}\,
\Psi(X_{{\Breve\uptau}_r})\,\Ind_{\{{\Breve\uptau}_r<\infty\}}\Bigr]\quad\forall\,x\in\Bar{B}_r^c\,,
\ \forall\,r>0\,.
\end{equation}
In addition, $\lambda=\lambda_v$, the generalized principal
eigenvalue of $\Lg_v + c_v$, and the ground state $\Psi=\Psi_{\mspace{-2mu}v}$ is unique up
to multiplication by a positive constant.
\item[\ttup b]
Any eigenpair $(\Psi,\lambda)\in\Sobl^{2,d}({\mathds{R}^{d}})\times\mathds{R}$
of $\Lg_v + c_v$ satisfying \cref{EL3.1A} is a principal eigenpair,
and $\lambda$ is a simple eigenvalue.
\end{enumerate}
\end{lemma}
\begin{proof}
Combining the proof of \cite[Theorem~2.2]{ABS-19} with
\cite[Theorem~3.1]{ABS-19}, we deduce that for every $r>0$, there exists a $\delta>0$
such that
\begin{equation}\label{PL3.1A}
\Exp^x_v\Bigl[\mathrm{e}^{\int_0^{{\Breve\uptau}_r} [c_v(X_s)-\lambda + \delta]\,\mathrm{d}{s}}
\,\Ind_{\{{\Breve\uptau}_r<\infty\}}\Bigr] \,<\, \infty\,,
\quad x\in B_r^c.
\end{equation}
Applying the It\^o formula to \cref{E-eigen} we obtain
\begin{equation}\label{PL3.1B}
\begin{aligned}
\Psi(x) &\,=\, \Exp^x_v \Bigl[\mathrm{e}^{\int_0^{t\wedge{\Breve\uptau}_r\wedge\uptau_n}
[c_{v}(X_s)-\lambda]\, \mathrm{d}{s}}\,
\Psi(X_{t\wedge{\Breve\uptau}_r\wedge\uptau_n})\Bigr]\\
&\,=\, \Exp^x_v \Bigl[\mathrm{e}^{\int_0^{{\Breve\uptau}_r} [c_{v}(X_s)-\lambda]\, \mathrm{d}{s}}\,
\Psi(X_{{\Breve\uptau}_r})\,\Ind_{\{{\Breve\uptau}_r<t\wedge\uptau_n\}}\Bigr]\\
&\mspace{70mu}
+ \mathrm{e}^{-\delta t} \Exp^x_v \Bigl[\mathrm{e}^{\int_0^{t} [c_{v}(X_s)-\lambda+\delta]\, \mathrm{d}{s}}\,
\Psi(X_{t})\,\Ind_{\{t<{\Breve\uptau}_r\wedge\uptau_n\}}\Bigr]\\
&\mspace{140mu}
+ \Exp^x_v \Bigl[\mathrm{e}^{\int_0^{\uptau_n} [c_{v}(X_s)-\lambda]\, \mathrm{d}{s}}\,
\Psi(X_{\uptau_n})\,\Ind_{\{\uptau_n<t\wedge{\Breve\uptau}_r\}}\Bigr]\,.
\end{aligned}
\end{equation}
We study separately the three integrals on the right-hand side of \cref{PL3.1B},
which we denote as $\mathscr{J}_i$, $i=1,2,3$.
For the first integral we have
\begin{equation*}
\lim_{n\to\infty}\,\lim_{t\to\infty}\, \mathscr{J}_1
\,=\, \Exp^x_v \Bigl[\mathrm{e}^{\int_0^{{\Breve\uptau}_r}[c_{v}(X_s)-\lambda]\, \mathrm{d}{s}}\,
\Psi(X_{{\Breve\uptau}_r})\,\Ind_{\{{\Breve\uptau}_r<\infty\}}\Bigr]
\end{equation*}
by monotone convergence. Note that the limit is also finite by \cref{PL3.1A}.
Let
$\widetilde\Prob^x_{\psi,v}$ and $\widetilde\Exp^x_{\psi,v}$ denote the
probability measure and expectation operator on the canonical space of the
twisted process in \cref{E-sde2} with initial condition $\Tilde{X}_0=x$.
Next, using again the technique in \cite[Theorem~2.2]{ABS-19},
we write
\begin{equation*}
\begin{aligned}
\mathscr{J}_2
&\,=\, \mathrm{e}^{-\delta t}\,\Exp^x_v\Bigl[\mathrm{e}^{\int_0^{t\wedge{\Breve\uptau}_r\wedge\uptau_n}
[c_v(X_s)-\lambda + \delta]\,\mathrm{d}{s}}
\,\Psi(X_{t\wedge{\Breve\uptau}_r\wedge\uptau_n})\,\Ind_{\{t<{\Breve\uptau}_r\wedge\uptau_n\}}\Bigr]\\
&
\,\le\, \mathrm{e}^{-\delta t} \, \Exp^x_v \Bigl[\mathrm{e}^{\int_0^{t\wedge{\Breve\uptau}_r\wedge\uptau_n}
[c_{v}(X_s)-\lambda+ \delta]\, \mathrm{d}{s}}\,
\Psi(X_{t\wedge{\Breve\uptau}_r\wedge\uptau_n})\Bigr]\\
&
\,\le\, \mathrm{e}^{-\delta t}\,
\widetilde\Exp^x_{\psi,v}\Bigl[\mathrm{e}^{\delta(t\wedge{\Breve\uptau}_r\wedge\uptau_n)}\Bigr]
\,\le\, \mathrm{e}^{-\delta t}\,
\widetilde\Exp^x_{\psi,v}\bigl[\mathrm{e}^{\delta{\Breve\uptau}_r}\bigr]\,,
\end{aligned}
\end{equation*}
where in the second inequality we apply \cite[Lemma~2.3]{ABS-19}.
Thus, $\mathscr{J}_2$ vanishes as $t\to\infty$.
Concerning $\mathscr{J}_3$, using monotone convergence, we obtain
\begin{equation}\label{PL3.1C}
\lim_{t\to\infty}\, \mathscr{J}_3
\,=\, \Exp^x_v \Bigl[\mathrm{e}^{\int_0^{\uptau_n} [c_{v}(X_s)-\lambda]\, \mathrm{d}{s}}\,
\Psi(X_{\uptau_n})\,\Ind_{\{\uptau_n<{\Breve\uptau}_r\}}\Bigr]
\,\le\, \Psi(x)\,\widetilde\Prob^x_{\psi,v}\bigl(\uptau_n<{\Breve\uptau}_r\bigr)\,,
\end{equation}
where the inequality follows from the proof of \cite[Lemma~2.3]{ABS-19}.
In turn, the right-hand side of \cref{PL3.1C}
vanishes as $n\to\infty$, since the twisted process is geometrically ergodic.
This completes the proof of \cref{EL3.1A}.
Suppose that a positive $\phi\in\Sobl^{2,d}({\mathds{R}^{d}})$ and $\Hat\lambda\le\lambda$ solve
\begin{equation*}
\Lg_v \phi(x) + c_v(x) \phi(x) \,\le\, \Hat\lambda \phi(x)
\quad\text{a.e.\ } x\in{\mathds{R}^{d}}\,.
\end{equation*}
An application of It\^o's formula and Fatou's lemma then shows that
\begin{equation}\label{PL3.1D}
\phi(x) \,\ge\,
\Exp^x_v \Bigl[\mathrm{e}^{\int_0^{{\Breve\uptau}_r}[c_{v}(X_s)-\Hat\lambda]\, \mathrm{d}{s}}\,
\phi(X_{{\Breve\uptau}_r})\,\Ind_{\{{\Breve\uptau}_r<\infty\}}\Bigr]\qquad\forall\, x\in\Bar{B}_r^c\,,
\ \ \forall\,r>0\,.
\end{equation}
\Cref{EL3.1A,PL3.1D} imply that if we scale $\phi$ by multiplying it with
a positive constant until it touches $\Psi$ at one point from above,
the function $\frac{\phi}{\Psi}$ attains its minimum value of $1$ at some
point in $\Bar{B}_r$.
A standard calculation shows that
\begin{equation*}
\widetilde\Lg^\psi_v \bigl(\tfrac{\phi}{\Psi}\bigr)(x) \,\le\, (\Hat\lambda-\lambda)
\bigl(\tfrac{\phi}{\Psi}\bigr)(x)\,.
\end{equation*}
Thus, $\frac{\phi}{\Psi}$ must equal a constant by the strong maximum principle,
which implies that $\Hat\lambda=\lambda$.
This of course means that $\lambda=\lambda_v$.
Uniqueness of $\Psi_{\mspace{-2mu}v}$ is evident from the preceding argument.
This completes the proof of part (a).
Part (b) is evident from the preceding paragraph.
This completes the proof.
\end{proof}
\subsection{The Bellman equation in \texorpdfstring{${\mathds{R}^{d}}$}{}}
Recall the solution $(V_r,\rho_r)$ of \eqref{E-HJBN}, the definition of ${\rho^{}_*}$ in
\cref{EA3.1A}, and the definition of $\cG$ in \cref{EcG}.
We define
\begin{equation}\label{E-princ2}
\lambda_*\,\coloneqq\,\inf\,\Bigl\{\lambda\in\mathds{R}\,
\colon \exists\, \phi\in\Sobl^{2,d}({\mathds{R}^{d}}),\ \phi>0, \
\cG\phi -\lambda\phi\le 0 \text{\ a.e.\ in\ } {\mathds{R}^{d}}\Bigr\}\,.
\end{equation}
Recall the definitions of $\sA$ and $\sR$ in \cref{EsA,EsR}.
Note that if $(\Phi,\lambda)$ is an eigenpair of $\cG$,
then similarly to \cref{E-eigen3}, we have
\begin{equation}\label{E-eigen4}
\adjustlimits\max_{\xi\in{\mathscr{K}}}
\max_{y\in{\mathds{R}^{d}}}\,\bigl[\sA\varphi(x,\xi,y) + \sR(x,\xi,y)\bigr]
\,=\,\lambda\,,
\end{equation}
with $\varphi=\log\Phi$.
\begin{theorem}\label{T3.1}
There exists $\Phi_{\mspace{-2mu}*}\in\Cc^2({\mathds{R}^{d}})$ satisfying
\begin{equation}\label{ET3.1A}
\max_{\xi\in{\mathscr{K}}}\,\bigl[\Lg_\xi \Phi_{\mspace{-2mu}*}(x) + c(x,\xi) \Phi_{\mspace{-2mu}*}(x) \bigr] \,=\,
{\rho^{}_*} \Phi_{\mspace{-2mu}*}(x) \quad\forall\,x\in{\mathds{R}^{d}}\,,
\end{equation}
and the following hold:
\begin{enumerate}
\item[\ttup a]
The function $\Phi_{\mspace{-2mu}*}^{-1}$ is inf-compact.
\item[\ttup b]
If $v_*$ is an a.e.\ measurable selector from the maximizer of \cref{ET3.1A},
then, the diffusion with extended generator $\widetilde\Lg_{v_*}^{{\varphi^{}_{\mspace{-2mu}*}}}$,
as defined in \cref{E-twist}, is exponentially ergodic and satisfies
\begin{equation}\label{ET3.1B}
\widetilde\Lg_{v_*}^{{\varphi^{}_{\mspace{-2mu}*}}}\Phi_{\mspace{-2mu}*}^{-1}(x) \,=\,
\bigl(c_{v_*}(x)-{\rho^{}_*}\bigr)\, \Phi_{\mspace{-2mu}*}^{-1}(x)\,,
\end{equation}
with ${\varphi^{}_{\mspace{-2mu}*}}\coloneqq\log\Phi_{\mspace{-2mu}*}$.
\item[\ttup c] ${\rho^{}_*} = \lambda_*$.
\item[\ttup d]
$\rho_n\to{\rho^{}_*}$ and $V_n\to\Phi_{\mspace{-2mu}*}$ as $n\to\infty$,
uniformly on compact sets; the solution $\Phi_{\mspace{-2mu}*}$ to \cref{ET3.1A}
is unique up to a scalar multiple, and satisfies
\begin{equation}\label{ET3.1C}
\begin{aligned}
\Phi_{\mspace{-2mu}*}(x) \,\ge\, \Exp^x_v \Bigl[\mathrm{e}^{\int_0^{{\Breve\uptau}_r}[c_{v}(X_s)-{\rho^{}_*}]\, \mathrm{d}{s}}\,
\Phi_{\mspace{-2mu}*}(X_{{\Breve\uptau}_r})\,\Ind_{\{{\Breve\uptau}_r<\infty\}}\Bigr]\qquad\forall\,x\in\Bar{B}_r^c\,,
\end{aligned}
\end{equation}
for all $r>0$,
and for all $v\in{\Xi_{\mathsf{sm}}}$, with equality if and only if $v$ is an a.e.\ measurable selector
from the maximizer in \cref{ET3.1A}.
\end{enumerate}
\end{theorem}
\begin{proof}
Using \cref{T2.1} and \cref{E-JQ,E-J*}, it follows
that $\rho_n \le \sup_{{\mathds{R}^{d}}\times{\mathscr{K}}} c$, and this combined with \cref{A3.1}\,(iii)
shows that $\{\rho_n\}$ converges along some subsequence
$\{n_k\}_{k\in\mathds{N}}\subset\mathds{N}$ to ${\rho^{}_*}$.
Therefore, the convergence of $V_{n_k}$ along some further subsequence
$\{ n_k'\}\subset\{n_k\}$ to a $\Phi_{\mspace{-2mu}*}$ satisfying \cref{ET3.1A}
follows as in the proof of \cite[Lemma~2.1]{Biswas-11a}.
We now turn to part (a). Here in fact we show that $-{\varphi^{}_{\mspace{-2mu}*}}$
grows at least logarithmically in $\abs{x}$.
Let $\delta\in(0,\nicefrac{4}{5}]$ be a constant such that
${\rho^{}_*}-c(x,\xi) >4\delta$ for all $\xi\in{\mathscr{K}}$ and all $x$ outside some compact set in ${\mathds{R}^{d}}$.
Consider a function of the form
$\phi(x) = \bigl(1 + \abs{x}^2\bigr)^{-\theta}$, with $\theta>0$.
By \cref{E-growth}, there exist $\theta>0$ and $r_\circ>0$ such that
\begin{equation}\label{ER3.2A}
\max\,\bigl(\Lg_\xi \phi(x), \babs{\upsigma^{\mathsf{T}}(x)\nabla\phi(x)}\bigr)
\,\le\, \delta\phi(x)\qquad\forall\,(x,\xi)\in B_{r_\circ}^c\times{\mathscr{K}}\,.
\end{equation}
We fix such a constant $\theta$.
We restrict our attention to solutions $(V_n,\rho_n)$ of \cref{E-HJBN} over
an increasing sequence in $\mathds{N}$, also denoted as $\{n\}$,
such that $\rho^{}_n$ converges to ${\rho^{}_*}$.
It is clear then that we may enlarge the radius $r_\circ$, if needed, so that
\begin{equation}\label{ER3.2B}
\rho^{}_n-c(x,\xi)\,>\, 3\delta\,\qquad \forall\,(x,\xi)\in B_{r_\circ}^c\times{\mathscr{K}}\,,
\ \text{and\ } n\ge r_\circ\,.
\end{equation}
Next, let $\Breve\chi\colon\mathds{R}\to(0,\infty)$ be a convex function in $\Cc^2(\mathds{R})$
such that $\Breve\chi(t)=t$ for $t\ge2$, and $\Breve\chi(t)$ is constant
and positive for $t\le 1$.
This can be chosen so that $\Breve\chi''<2$ and $\sup_{t>0}\, t\Breve\chi''(t)\le\nicefrac{5}{2}$.
Such a function can be constructed by requiring, for example, that
$\Breve\chi''(t) = 6 (2-t)(t-1)$ for $t\in[1,2]$,
from which we obtain
$\Breve\chi(t) = -\frac{1}{2} t^4 + 3 t^3 -6t^2 + 5t$ for $t\in[1,2]$.
A simple calculation shows that $\Breve\chi(1) = \frac{3}{2}$.
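Indeed, the $\Cc^{2}$ pasting is verified directly:
$\Breve\chi'(t) = -2t^{3}+9t^{2}-12t+5$ on $[1,2]$, so that
\begin{equation*}
\Breve\chi(1)\,=\,\tfrac{3}{2}\,,\quad \Breve\chi'(1)\,=\,0\,,\qquad
\Breve\chi(2)\,=\,2\,,\quad \Breve\chi'(2)\,=\,1\,,
\end{equation*}
matching the constant branch at $t=1$ and the identity branch at $t=2$.
Moreover,
\begin{equation*}
\sup_{t>0}\, t\Breve\chi''(t) \,=\, \max_{t\in[1,2]}\, 6t(2-t)(t-1)
\,=\, \tfrac{4}{\sqrt{3}} \,\le\, \tfrac{5}{2}\,,
\end{equation*}
attained at $t=1+\nicefrac{1}{\sqrt{3}}$.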
Note that $\Breve\chi(t)-t\Breve\chi'(t)\ge0$ for all $t>0$ by convexity.
Let $\Breve\chi_\epsilon(t) \coloneqq \epsilon\Breve\chi\bigl(\nicefrac{t}{\epsilon}\bigr)$
for $\epsilon>0$.
Then
\begin{equation}\label{ER3.2C}
\Breve\chi_\epsilon(t)-t\Breve\chi'_\epsilon(t)\,\ge\,0\,,\quad\text{and\ \ }
t \Breve\chi''_\epsilon(t)\,\le\,\tfrac{5}{2} \qquad\forall\,t>0\,.
\end{equation}
Using \cref{ER3.2A,ER3.2B,ER3.2C},
we obtain
\begin{equation}\label{ER3.2D}
\begin{aligned}
\Lg_\xi \Breve\chi_\epsilon\bigl(\phi(x)\bigr)
&+ \bigl(c(x,\xi)-\rho^{}_n\bigr) \Breve\chi_\epsilon\bigl(\phi(x)\bigr)\\
&\,\le\, -3\delta \Breve\chi_\epsilon\bigl(\phi(x)\bigr)
+ \Breve\chi'_\epsilon\bigl(\phi(x)\bigr)\,\Lg_\xi \phi(x)
+\frac{1}{2}\Breve\chi''_\epsilon\bigl(\phi(x)\bigr)
\abs{\upsigma^{\mathsf{T}}(x)\nabla\phi(x)}^2\\
&\le -3\delta \Breve\chi_\epsilon\bigl(\phi(x)\bigr)
+\delta \phi(x)\,\Breve\chi'_\epsilon\bigl(\phi(x)\bigr)
+ \frac{1}{2} \delta^2 \bigl(\phi(x)\bigr)^2\Breve\chi''_\epsilon\bigl(\phi(x)\bigr)\\
&\le -\delta \Breve\chi_\epsilon\bigl(\phi(x)\bigr)\,.
\end{aligned}
\end{equation}
For the last inequality in \cref{ER3.2D}, we use
the properties $\Breve\chi_\epsilon(\phi)
\ge \phi\,\Breve\chi_\epsilon'(\phi)$
and $\phi\,\Breve\chi_\epsilon''(\phi)\le\nicefrac{5}{2}$
from \cref{ER3.2C},
together with the facts that $\Breve\chi_\epsilon(\phi)\ge\phi$ and $\delta\le\nicefrac{4}{5}$.
Note that, due to radial symmetry, the support of $\Breve\chi'_\epsilon\comp\phi$
is a ball of the form $B_{R_\epsilon}$, with $\epsilon\mapsto R_\epsilon$ a
nonincreasing continuous function such that $R_\epsilon\to\infty$ as $\epsilon\searrow0$.
Recall the functions $V_n$ in \cref{E-HJBN}.
Select $\epsilon$ such that $R_\epsilon=n>r_\circ$.
Scale $V_n$ until it touches $\Breve\chi_\epsilon\comp\phi$ at some point $\Hat{x}$
from below.
Here, $\Breve\chi_\epsilon\comp\phi$ denotes the composition of
$\Breve\chi_\epsilon$ and $\phi$.
Let $v^{}_n$ be a measurable selector from the maximizer in \cref{E-HJBN},
and define $h_n \coloneqq \Breve\chi_\epsilon\comp\phi - V_n$.
Then, by \cref{E-HJBN,ER3.2D}, we have
\begin{equation*}
\Lg_{v^{}_n} h_n(x)
+ \bigl(c_{v^{}_n}(x)-\rho^{}_n\bigr) h_n(x) \,<\,0
\qquad\forall\,x\in{\mathds{R}^{d}}\,,
\end{equation*}
and $\langle\nabla h_n,\gamma\rangle =0$ on $\partial B_n$,
since the gradient of $\Breve\chi_\epsilon\comp\phi$ vanishes on
$\partial B_{R_\epsilon}$.
It follows by the strong maximum principle that $\Hat{x}$ cannot lie in the set
$B_{n}\setminus B_{r_\circ}$, and thus $h_n>0$ on this set.
Nor can $\Hat{x}$ lie on $\partial B_n$, since that would contradict
the Hopf boundary point lemma.
Thus $\Hat{x}\in B_{r_\circ}$.
Taking limits as $\epsilon\searrow0$, and employing
the Harnack inequality, which asserts that $V_n(x)\le C_{\mathsf{H}} V_n(y)$
for all $x,y\in B_{r_\circ}$ and some constant $C_{\mathsf{H}}$,
we then obtain $\Phi_{\mspace{-2mu}*}\le C\phi$ for some constant $C$.
This proves part (a).
\Cref{ET3.1B} follows by \cref{E-eigenT}.
Since $\Phi_{\mspace{-2mu}*}^{-1}$ is inf-compact and the right hand side of
\cref{ET3.1B} is negative and bounded away from zero outside a compact set
by \cref{A3.1}\,(iii),
the associated diffusion is ergodic \cite[Theorem~4.1]{Ichihara-13b}.
In turn, the Foster--Lyapunov equation in \cref{ET3.1B} shows
that the diffusion is exponentially ergodic \cite{MeTw}.
This proves part (b).
Moving to the proof of part (c), suppose
that for some $\rho\le{\rho^{}_*}$ we have
\begin{equation}\label{PT3.1E}
\max_{\xi\in{\mathscr{K}}}\,\bigl[\Lg_\xi \phi(x) + c(x,\xi) \phi(x) \bigr]
\,\le\, \rho\,\phi(x)\,.
\end{equation}
Evaluating this inequality at a measurable selector $v_*$ from the maximizer
of \cref{ET3.1A},
and following the argument in the proof
of \cref{L3.1} we obtain $\rho={\rho^{}_*}$ and $\phi=\Phi_{\mspace{-2mu}*}$.
This also shows that ${\rho^{}_*}\ge\lambda_*$ by the definition in \cref{E-princ2},
and thus we have equality by \cref{ET3.1A}.
In order to prove part (d), suppose that $\rho_n\to\rho\le{\rho^{}_*}$ along
some subsequence.
Taking limits along perhaps a further subsequence, we obtain a positive
function $\phi\in\Cc^2({\mathds{R}^{d}})$ that satisfies \cref{PT3.1E} with equality.
Thus $\rho={\rho^{}_*}$ and $\phi=\Phi_{\mspace{-2mu}*}$ by part (c).
The stochastic representation in \cref{ET3.1C} follows as in the proof of \cref{L3.1}.
This completes the proof.
\end{proof}
\subsection{Dirichlet eigenvalues and the risk-sensitive value}\label{S3.2}
In this section we first show that the problem in ${\mathds{R}^{d}}$ can also be approached
by using Dirichlet eigensolutions.
The main result is \cref{T3.2}, which establishes that
${\rho^{}_*}$ equals the risk-sensitive value $J_*$,
and provides the usual verification-of-optimality criterion.
We borrow some results from \cite{BNV-94,Berestycki-15}.
These can also be found in \cite[Lemma~2.2]{AB-18}, and are summarized
as follows:
Fix any $v\in{\Xi_{\mathsf{sm}}}$.
For each $r\in(0,\infty)$ there exists a unique pair
$(\Psi_{\mspace{-2mu}v,r},\lambda_{v,r})
\in\bigl(\Sob^{2,p}(B_r)\cap\Cc(\Bar{B}_r)\bigr)\times\mathds{R}$,
for any $p>d$, satisfying
$\Psi_{\mspace{-2mu}v,r}>0$ on $B_r$, $\Psi_{\mspace{-2mu}v,r}=0$ on
$\partial B_r$, and $\Psi_{\mspace{-2mu}v,r}(0)=1$,
which solves
\begin{equation}\label{E-Leigen}
\Lg_v \Psi_{\mspace{-2mu}v,r}(x) + c_v(x)\,\Psi_{\mspace{-2mu}v,r}(x)
\,=\, \lambda_{v,r}\,\Psi_{\mspace{-2mu}v,r}(x)
\qquad\text{a.e.\ }x\in B_r\,.
\end{equation}
Moreover, the solution has the following properties:
\begin{enumerate}
\item[(i)]
The map $r\mapsto\lambda_{v,r}$ is continuous and strictly increasing.
\item[(ii)]
In its dependence on the function $c_v$,
$\lambda_{v,r}$ is nondecreasing, convex, and Lipschitz continuous
(with respect to the $\Lp^{\infty}$ norm) with Lipschitz constant $1$.
In addition, if $c_v\lneqq c_v'$ then $\lambda_{v,r}(c_v)<\lambda_{v,r}(c_v')$.
\end{enumerate}
We refer to $\lambda_{v,r}$ and $\Psi_{\mspace{-2mu}v,r}$ as the
(Dirichlet) eigenvalue
and eigenfunction, respectively, of the operator $\Lg_v + c_v$ on
$B_r$.
Recall the definition of $\cG$ in \cref{EcG}.
Based on the results in \cite{Quaas-08}, there exists a unique pair
$(\Psi_{\mspace{-2mu}*,r},\lambda_{*,r})\in
\bigl(\Cc^2(B_r)\cap\Cc(\Bar{B}_r)\bigr)\times\mathds{R}$, satisfying
$\Psi_{\mspace{-2mu}*,r}>0$ on $B_r$, $\Psi_{\mspace{-2mu}*,r}=0$ on
$\partial B_r$, and $\Psi_{\mspace{-2mu}*,r}(0)=1$, which solves
\begin{equation}\label{E-Geigen}
\cG \Psi_{\mspace{-2mu}*,r}(x)
\,=\, \lambda_{*,r}\,\Psi_{\mspace{-2mu}*,r}(x)
\qquad\forall\,x\in B_r\,,
\end{equation}
and properties (i)--(ii) above hold for $\lambda_{*,r}$.
Also recall the definitions of the generalized principal eigenvalues
in \cref{E-princ1,E-princ2}, and $\rho_r$ defined in \cref{E-HJBN}.
\begin{lemma}\label{L3.2}
The following hold:
\begin{enumerate}
\item[\ttup i]
For $r>0$, we have $\lambda_{v,r}\le \lambda_{*,r}$ for all $v\in{\Xi_{\mathsf{sm}}}$, and
$\lambda_{*,r}< \rho_r$.
\item[\ttup {ii}]
$\lim_{r\to\infty}\, \lambda_{v,r}=\lambda_v$ for all $v\in{\Xi_{\mathsf{sm}}}$, and
$\lim_{r\to\infty}\, \lambda_{*,r}=\lambda_*$.
\end{enumerate}
\end{lemma}
\begin{proof}
Part (i) is a straightforward application of the strong maximum principle.
By \cref{E-gen,E-Geigen} we have
\begin{equation}\label{PL3.2A}
\Lg_v \Psi_{\mspace{-2mu}*,r}(x) + c_v(x)\,\Psi_{\mspace{-2mu}*,r}(x)
\,\le\, \lambda_{*,r}\,\Psi_{\mspace{-2mu}*,r}(x)
\qquad\text{a.e.\ }x\in B_r\,.
\end{equation}
Let $r'<r$, and suppose that $\lambda_{v,r'}\,\ge\, \lambda_{*,r}$.
Scale $\Psi_{\mspace{-2mu}v,r'}$ so that it touches $\Psi_{\mspace{-2mu}*,r}$
at one point from below in $B_{r'}$.
Then $\Psi_{\mspace{-2mu}*,r}-\Psi_{\mspace{-2mu}v,r'}$ is nonnegative, and
by \cref{E-Leigen,PL3.2A} it satisfies
\begin{equation*}
\begin{aligned}
\Lg_v(\Psi_{\mspace{-2mu}*,r}&-\Psi_{\mspace{-2mu}v,r'})
-\bigl(c_v-\lambda_{*,r}\bigr)^{-} (\Psi_{\mspace{-2mu}*,r}-\Psi_{\mspace{-2mu}v,r'})\\
&\,\le\, -\bigl(c_v-\lambda_{*,r}\bigr)^{+}
(\Psi_{\mspace{-2mu}*,r}-\Psi_{\mspace{-2mu}v,r'})
- \bigl(\lambda_{v,r'}-\lambda_{*,r}\bigr)\Psi_{\mspace{-2mu}v,r'}
\,\le\, 0
\quad\text{a.e.\ on\ } B_{r'}\,.
\end{aligned}
\end{equation*}
By the strong maximum principle, this implies that
$\Psi_{\mspace{-2mu}*,r}=\Psi_{\mspace{-2mu}v,r'}$ on $B_{r'}$,
which is a contradiction, since $\Psi_{\mspace{-2mu}v,r'}$ vanishes on
$\partial B_{r'}$ whereas $\Psi_{\mspace{-2mu}*,r}$ is positive there.
Hence $\lambda_{v,r'}\,<\, \lambda_{*,r}$ for all $r'<r$ and
the inequality $\lambda_{v,r}\le \lambda_{*,r}$ follows by the continuity
of $r\mapsto\lambda_{v,r}$.
Following the same method, with $r'=r$, we obtain $\lambda_{*,r}< \rho_r$.
Part (ii) follows by \cite[Lemma~2.2\,(ii)]{ABS-19}.
\end{proof}
Recall the definitions in \cref{E-JQ,E-J*}, and let
\begin{equation*}
J^x_\xi\,=\, J^x_\xi(c) \,\coloneqq\, J^x_\xi(c;{\mathds{R}^{d}})\,,
\end{equation*}
and similarly for $J^x_*$ and $J_*$.
Also, recall that
\begin{equation*}
J^x_v \,=\, J^x_v(c) \,=\, \liminf_{T\to\infty}\, \frac{1}{T}\,
\log \Exp^x_v \Bigl[\mathrm{e}^{\int^T_0 c_v(X_t)\,\mathrm{d} t} \Bigr]\,,
\quad x\in {\mathds{R}^{d}}\,,\ v\in{\Xi_{\mathsf{sm}}}\,.
\end{equation*}
The theorem that follows concerns the equality $\lambda_*=J_*$.
Recall the definition in \cref{EA3.1A}.
\begin{theorem}\label{T3.2}
We have
$\lambda_*={\rho^{}_*} =J_*$.
In addition, $J^x_v=J_*$ if and only if
$v$ is an a.e.\ measurable selector from the maximizer of \cref{ET3.1A}.
\end{theorem}
\begin{proof}
We already have ${\rho^{}_*}=\lambda_*$ from \cref{T3.1}. This also gives
\begin{equation*}
{\rho^{}_*} \,\le\, J^x_{v_*}(c) \,\le\, J_*\,.
\end{equation*}
Choose $R>0$ such that ${\rho^{}_*}>\sup_{B^c_R\times{\mathscr{K}}}\,c$.
This is possible by \cref{EA3.1A}.
Let $\delta>0$ be given, and select a smooth, non-negative cut-off function $\chi$ that
vanishes in $B_R$ and equals $1$ in $B_{R+1}^c$.
Let $\Psi=\Phi_{\mspace{-2mu}*}+\epsilon \chi$, and select $\epsilon>0$ small enough so that
\begin{equation*}
\epsilon\, \bigl(\cG\chi(x) -{\rho^{}_*}\chi(x)\bigr)\,\le\,
\delta\, \Phi_{\mspace{-2mu}*}(x)\qquad\forall\,x\in\bar{B}_{R+1}\,.
\end{equation*}
This is clearly possible since $\Phi_{\mspace{-2mu}*}$ is positive and
\begin{equation*}
\cG\chi(x) -{\rho^{}_*}\chi(x) \,=\,
\max_{\xi\in{\mathscr{K}}}\, (c(x,\xi)-{\rho^{}_*}) \chi(x) \,\le\, 0
\qquad\forall\,x\in B_{R+1}^c\,.
\end{equation*}
We have
\begin{equation}\label{PT3.2A}
\cG \Psi(x) - ({\rho^{}_*}+\delta)\Psi(x)
\,\le\, (\cG-{\rho^{}_*})\Phi_{\mspace{-2mu}*}(x) +\epsilon\,(\cG-{\rho^{}_*})\chi(x)
- \delta\,\Psi(x)\,\le\,0\quad\forall\,x\in{\mathds{R}^{d}}\,.
\end{equation}
Since $\Psi$ is bounded below away from zero, a standard use of
It\^o's formula and the Fatou lemma applied to \cref{PT3.2A} shows that
$J^x_\xi\le {\rho^{}_*}+\delta$ for all $\xi\in{\Xi}$.
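In more detail, \cref{PT3.2A} and It\^o's formula imply, after a standard
localization argument via Fatou's lemma, that
$t\mapsto\mathrm{e}^{\int_0^t [c(X_s,\xi_s)-{\rho^{}_*}-\delta]\,\mathrm{d}{s}}\,\Psi(X_t)$
is a nonnegative supermartingale under any $\xi\in{\Xi}$, whence
\begin{equation*}
\Exp^x_\xi\Bigl[\mathrm{e}^{\int_0^T [c(X_t,\xi_t)-{\rho^{}_*}-\delta]\,\mathrm{d}{t}}\Bigr]
\,\le\,\frac{\Psi(x)}{\inf_{{\mathds{R}^{d}}}\Psi}\qquad\forall\,T>0\,;
\end{equation*}
taking logarithms, dividing by $T$, and letting $T\to\infty$ yields the asserted bound.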
Since $\delta$ is arbitrary this implies ${\rho^{}_*}\ge J_*$, and hence
we must have equality.
This also shows that every a.e.\ measurable selector from the maximizer of \cref{ET3.1A}
is optimal.
Next, for $v\in{\Xi_{\mathsf{sm}}}$, let $(\lambda_v,\Psi_{\mspace{-2mu}v})$ be an eigenpair,
obtained as a limit of
Dirichlet eigenpairs $\bigl\{(\lambda_{v,n},\Psi_{\mspace{-2mu}v,n})\bigr\}_{n\in\mathds{N}}$,
with $\Psi_{\mspace{-2mu}v,n}(0)=1$, along some subsequence (see \cref{L3.2}).
Let $\nu\in[-\infty,\infty)$ be defined by
\begin{equation*}
\nu \,\coloneqq\, \lim_{r\to\infty}\,\sup_{(x,\xi)\in B_r^c\times{\mathscr{K}}}\,c(x,\xi)\,.
\end{equation*}
First suppose that $\lambda_v>\nu$.
Then, using the argument in the preceding paragraph, together with the fact that
$\lambda_v\le J^x_v$, we deduce that $\lambda_v= J^x_v$ for all $x\in{\mathds{R}^{d}}$.
Thus if $v\in{\Xi_{\mathsf{sm}}}$ is optimal, we must have $\lambda_v={\rho^{}_*}$.
This implies that we can select a ball $\sB$ such that
\begin{equation*}
\lambda_{v,n}-\sup_{(x,\xi)\in \sB^c\times{\mathscr{K}}}\,c(x,\xi)\,>\, 0
\end{equation*}
for all sufficiently large $n$.
Let ${\Breve\uptau}= \uptau(\sB^c)$.
By \cite[Lemma~2.10\,(i)]{AB-18}, we have the stochastic representation
\begin{equation*}
\Psi_{\mspace{-2mu}v,n}(x)\,=\, \Exp^x_v \Bigl[\mathrm{e}^{\int_{0}^{{\Breve\uptau}}
[c_v(X_{t})-\lambda_{v,n}]\,\mathrm{d}{t}}\, \Psi_{\mspace{-2mu}v,n}(X_{{\Breve\uptau}})\,
\Ind_{\{{\Breve\uptau}<\uptau_{n}\}}\Bigr]\qquad\forall\,
x\in B_{n}\setminus\Bar{\sB}\,.
\end{equation*}
Next we show that $\Psi_{\mspace{-2mu}v}$ vanishes at infinity by
using the argument in the proof of \cref{T3.1}.
The analysis is simpler here.
Selecting the same function $\phi$ as in the proof of \cref{T3.1},
there exists $R>0$ such that
\begin{equation*}
\Lg_v \phi(x) +c_v(x) \phi(x) \,\le\, \lambda_v \phi(x)\qquad\forall\,
x\in B_R^c\,.
\end{equation*}
Since $\Psi_{\mspace{-2mu}v,n}(0)=1$, employing the Harnack inequality we scale
$\phi$ so that $\phi>\Psi_{\mspace{-2mu}v,n}$ on $B_R$ for all $n>R$.
The strong maximum principle then shows that $\Psi_{\mspace{-2mu}v,n}<\phi$ on ${\mathds{R}^{d}}$.
Thus $\Psi_{\mspace{-2mu}v}^{-1}$ is inf-compact, which together with
the Lyapunov equation $\widetilde\Lg^{\psi^{}_v}_{v}\Psi_{\mspace{-2mu}v}^{-1}
= \bigl(c_v-{\rho^{}_*}\bigr)\Psi_{\mspace{-2mu}v}^{-1}$ implies that
the ground state process is exponentially ergodic.
By \cref{L3.1}, we then have
\begin{equation}\label{PT3.2B}
\Psi_{\mspace{-2mu}v}(x)\,=\, \Exp^x_v \Bigl[\mathrm{e}^{\int_{0}^{{\Breve\uptau}}
[c_v(X_{t})-{\rho^{}_*}]\,\mathrm{d}{t}}\, \Psi_{\mspace{-2mu}v}(X_{{\Breve\uptau}})\,
\Ind_{\{{\Breve\uptau}<\infty\}}\Bigr]\qquad\forall\, x\in \Bar{\sB}^c\,.
\end{equation}
On the other hand, it holds that
$\Lg_v \Phi_{\mspace{-2mu}*} + c_v \Phi_{\mspace{-2mu}*} \le {\rho^{}_*} \Phi_{\mspace{-2mu}*}$,
which implies that
\begin{equation}\label{PT3.2D}
\begin{aligned}
\Phi_{\mspace{-2mu}*}(x) \,\ge\, \Exp^x_v \Bigl[\mathrm{e}^{\int_0^{{\Breve\uptau}}[c_{v}(X_s)-{\rho^{}_*}]\, \mathrm{d}{s}}\,
\Phi_{\mspace{-2mu}*}(X_{{\Breve\uptau}})\,\Ind_{\{{\Breve\uptau}<\infty\}}\Bigr]\,.
\end{aligned}
\end{equation}
Comparing the functions in \cref{PT3.2B,PT3.2D} using the strong maximum principle,
as done in the proof of \cref{L3.1}, we deduce that $\Psi_{\mspace{-2mu}v}=\Phi_{\mspace{-2mu}*}$.
Thus $v$ is a measurable selector from the maximizer of \cref{ET3.1A}.
It remains to address the case $\lambda_v\le\nu$.
By \cite[Corollary~3.2]{ABG-19} there exists a positive constant $\delta$ such that
$\lambda_v(c_v+\delta\Ind_{B_1})>\nu$, and $\lambda_v(c_v+\delta\Ind_{B_1})<{\rho^{}_*}$.
Thus repeating the above argument we obtain
\begin{equation*}
{\rho^{}_*}\,>\,\lambda_v(c_v+\delta\Ind_{B_1})\,=\,
\liminf_{T\to\infty}\, \frac{1}{T}\,
\log \Exp^x_v \Bigl[\mathrm{e}^{\int^T_0 [c_v(X_t)+\delta\Ind_{B_1}(X_t)]\,\mathrm{d} t} \Bigr]
\,\ge\, J^x_v\qquad
\forall\,x\in{\mathds{R}^{d}}\,.
\end{equation*}
Therefore, $v$ cannot be optimal.
This completes the proof.
\end{proof}
\section{The variational formula on \texorpdfstring{${\mathds{R}^{d}}$}{}}\label{S4}
In this section we establish the variational formula on ${\mathds{R}^{d}}$.
As mentioned in \cref{S1.1}, the function $\cH$ in \cref{Eentropy}
plays a very important role in the analysis.
To explain how this function arises, let $\Prob^{x,t}_v$ denote the probability
measure on the canonical path space $\{X_s\colon 0\le s\le t\}$ of the diffusion
\cref{E-sde1} under a control $v\in{\Xi_{\mathsf{sm}}}$, and
$\widetilde\Prob^{x,t}_v$ the analogous probability measure
corresponding to the diffusion
\begin{equation*}
\mathrm{d} \widetilde{X}_t \,=\,
\bigl(b_v(\widetilde{X}_t)+a(\widetilde{X}_t)\nabla{\varphi^{}_{\mspace{-2mu}*}}(\widetilde{X}_t)\bigr)\,\mathrm{d} t
+ \upsigma (\widetilde{X}_t)\,\mathrm{d} \widetilde{W}_t\,,
\end{equation*}
with ${\varphi^{}_{\mspace{-2mu}*}}$ as in \cref{T3.1}.
By the Cameron--Martin--Girsanov theorem we obtain
\begin{equation*}
\frac{\mathrm{d}\mathbb{P}^{x,t}_v}{\mathrm{d} \widetilde\Prob^{x,t}_v} \,=\,
\exp\biggl( -\int_0^{t} \bigl\langle \nabla{\varphi^{}_{\mspace{-2mu}*}}(\widetilde{X}_s),
\upsigma(\widetilde{X}_s) \mathrm{d}{\widetilde{W}_s}\bigr\rangle
-\frac{1}{2}\int_0^{t}
\babs{\upsigma^{\mathsf{T}}(\widetilde{X}_s)
\nabla {\varphi^{}_{\mspace{-2mu}*}}(\widetilde{X}_s)}^2\,\mathrm{d}{s}\biggr)\,.
\end{equation*}
Thus, the \emph{relative entropy}, or Kullback--Leibler
divergence, between $\widetilde\Prob^{x,t}_v$ and $\Prob^{x,t}_v$ takes the form
\begin{equation*}
D_{\mathsf{KL}}\bigl(\widetilde\Prob^{x,t}_v \bigm\| \Prob^{x,t}_v\bigr)
\,=\, -\int \log\biggl(\frac{\mathrm{d} \mathbb{P}^{x,t}_v}{\mathrm{d} \widetilde\Prob^{x,t}_v}\biggr)\,
\mathrm{d} \widetilde\Prob^{x,t}_v
\,=\, \frac{1}{2}\,\widetilde\Exp^{x,t}_v\biggl[
\int_0^{t} \babs{\upsigma^{\mathsf{T}}(\widetilde{X}_s)
\nabla {\varphi^{}_{\mspace{-2mu}*}}(\widetilde{X}_s)}^2\,\mathrm{d}{s}\biggr]\,.
\end{equation*}
Dividing this by $t$, and letting $t\searrow0$, we see that
$\cH$ is the \emph{infinitesimal relative entropy rate}.
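Explicitly, since the integrand above is continuous, dividing by $t$ and letting
$t\searrow0$ for the process started at $x$ gives
\begin{equation*}
\lim_{t\searrow0}\,\frac{1}{t}\,
D_{\mathsf{KL}}\bigl(\widetilde\Prob^{x,t}_v \bigm\| \Prob^{x,t}_v\bigr)
\,=\,\frac{1}{2}\,\babs{\upsigma^{\mathsf{T}}(x)\nabla{\varphi^{}_{\mspace{-2mu}*}}(x)}^2\,,
\end{equation*}
which recovers the expression for $\cH$ in \cref{Eentropy}.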
Recall from \cref{S1.1} the definition ${\mathcal{Z}} \coloneqq {\mathds{R}^{d}}\times{\mathscr{K}}\times{\mathds{R}^{d}}$, and
the use of the single variable $z=(x,\xi,y)\in{\mathcal{Z}}$ in the interest of notational
simplicity.
Also recall the definitions in \cref{Eeom,Emufinite}, as well as those
in \cref{EsA,EsR}.
In analogy to \cref{E-F}, we define
\begin{equation*}
F(g,\mu) \,\coloneqq\, \int_{{\mathcal{Z}}} \bigl(\sA g(z)+\sR(z)\bigr)\,\mu(\mathrm{d}{z})
\quad \text{for\ } g \in \Cc^{2}({\mathds{R}^{d}})\text{\ and\ } \mu\in\cP({\mathcal{Z}})\,.
\end{equation*}
The following result plays a central role in this paper.
\begin{proposition}\label{P4.1}
We have
\begin{equation}\label{EP4.1A}
{\rho^{}_*} \,=\, \max_{\mu\in\eom_{\sA}\cap{\cP^{}_{\mspace{-3mu}*}}({\mathcal{Z}})}\,\int_{{\mathcal{Z}}} \sR(z)\,\mu(\mathrm{d}{z})
\,=\, \adjustlimits\sup_{\mu\in{\cP^{}_{\mspace{-3mu}*}}({\mathcal{Z}})}\inf_{g \in \Cc^2_c({\mathds{R}^{d}})}\,F(g,\mu) \,.
\end{equation}
In addition, if $\eom_{\sA}\cap{\cP^{}_{\mspace{-3mu}\circ}}({\mathcal{Z}})\subset{\cP^{}_{\mspace{-3mu}*}}({\mathcal{Z}})$, then
${\cP^{}_{\mspace{-3mu}*}}({\mathcal{Z}})$ may be replaced by $\cP({\mathcal{Z}})$ in \cref{EP4.1A}.
\end{proposition}
In the proof of \cref{P4.1} and elsewhere in the paper we use
a cut-off function $\chi$ defined as follows (compare this with the function
$\Breve\chi$ in the proof of \cref{T3.1}).
\begin{definition}\label{D4.1}
Let $\chi\colon\mathds{R}\to\mathds{R}$ be a smooth convex function such that
$\chi(s)= s$ for $s\ge0$, and $\chi(s) = -1$ for $s\le -2$.
Then $\chi'$ and $\chi''$ are nonnegative and the latter is supported
on $(-2,0)$.
It is clear that we can choose $\chi$ so that $\chi''< 1$.
We scale this function by defining
$\chi^{}_t(s) \coloneqq -t + \chi(s+t)$ for $t\in\mathds{R}$.
Thus $\chi^{}_t(s)=s$ for $s\ge -t$, and $\chi^{}_t(s) = -t-1$ for $s\le -t-2$.
Observe that if $-f$ is an inf-compact function
then $\chi_t^{}(f)+t+1$ is compactly supported by the definition of $\chi$.
\end{definition}
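For later use, we record the elementary properties of this family: differentiating
the definition gives
\begin{equation*}
\chi'_t(s)\,=\,\chi'(s+t)\,,\qquad \chi''_t(s)\,=\,\chi''(s+t)\,,
\end{equation*}
so that $0\le\chi'_t\le1$, the function $\chi''_t$ is supported on $(-t-2,-t)$,
and, for each fixed $s$, $\chi'_t(s)\to1$ and $\chi''_t(s)\to0$ as $t\to\infty$.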
\begin{proof}[Proof of \cref{P4.1}]
We start with the first equality in \cref{EP4.1A}.
By \cref{E-eigen2}, we have
\begin{equation}\label{PP4.1A}
\widetilde\Lg_{v_*}^{{\varphi^{}_{\mspace{-2mu}*}}}{\varphi^{}_{\mspace{-2mu}*}}(x)
+c_{v_*}(x) -\cH(x) \,=\, {\rho^{}_*}\,.
\end{equation}
As shown in \cref{T3.1} the twisted process $\Tilde{X}$ with extended
generator $\widetilde\Lg_{v_*}^{{\varphi^{}_{\mspace{-2mu}*}}}$ is exponentially ergodic.
Let $\eta_{v_*}$ denote its invariant probability measure.
Since $\frac{\abs{{\varphi^{}_{\mspace{-2mu}*}}}}{\Phi_{\mspace{-2mu}*}^{-1}}$ vanishes at infinity,
and $\Phi_{\mspace{-2mu}*}^{-1}$ is a Lyapunov function by \cref{ET3.1B},
it then follows from \cref{PP4.1A}, by using
the It\^o formula and applying \cite[Lemma~3.7.2\,(ii)]{ABG}, that
\begin{equation}\label{PP4.1B}
{\rho^{}_*} \,=\, \int_{\mathds{R}^{d}} \bigl(c_{v_*}(x)- \cH(x)\bigr)\,\eta_{v_*}(\mathrm{d}{x})
\,=\, \int_{\mathds{R}^{d}} \sR\bigl(x,v_*(x), \nabla{\varphi^{}_{\mspace{-2mu}*}}(x)\bigr)\,\eta_{v_*}(\mathrm{d}{x})\,.
\end{equation}
Next, we show that
\begin{equation}\label{PP4.1C}
{\rho^{}_*} \,\ge\, \int_{{\mathcal{Z}}} \sR(z)\,\mu(\mathrm{d}{z})
\quad\forall\, \mu\in\eom_{\sA}\cap{\cP^{}_{\mspace{-3mu}*}}({\mathcal{Z}})\,.
\end{equation}
We write \cref{ET3.1A} as
\begin{equation*}
\max_{\xi\in{\mathscr{K}}}\,\Bigl[\Lg_\xi {\varphi^{}_{\mspace{-2mu}*}}(x)
+\tfrac{1}{2} \babs{\upsigma^{\mathsf{T}}(x)\nabla{\varphi^{}_{\mspace{-2mu}*}}(x)}^2
+ c(x,\xi) \Bigr] \,=\,{\rho^{}_*} \quad\forall\,x\in{\mathds{R}^{d}}\,,
\end{equation*}
and use the identity
\begin{equation*}
\Lg_\xi {\varphi^{}_{\mspace{-2mu}*}} + \tfrac{1}{2} \babs{\upsigma^{\mathsf{T}}\nabla{\varphi^{}_{\mspace{-2mu}*}}}^2
\,=\, \Lg_\xi {\varphi^{}_{\mspace{-2mu}*}} + \langle a y, \nabla{\varphi^{}_{\mspace{-2mu}*}}\rangle
+\tfrac{1}{2} \babs{\upsigma^{\mathsf{T}}(y-\nabla{\varphi^{}_{\mspace{-2mu}*}})}^2
- \tfrac{1}{2} \abs{\upsigma^{\mathsf{T}} y}^2
\end{equation*}
to obtain (compare with \cref{E-eigen4})
\begin{equation}\label{PP4.1D}
\sA{\varphi^{}_{\mspace{-2mu}*}}(x,\xi,y)
+ \tfrac{1}{2} \babs{\upsigma^{\mathsf{T}}(x)\bigl(y-\nabla{\varphi^{}_{\mspace{-2mu}*}}(x)\bigr)}^2
+ \sR(x,\xi,y) \,\le\,{\rho^{}_*}\,.
\end{equation}
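The identity used above is a completion of squares, using $a=\upsigma\upsigma^{\mathsf{T}}$:
\begin{equation*}
\tfrac{1}{2} \babs{\upsigma^{\mathsf{T}}(y-\nabla{\varphi^{}_{\mspace{-2mu}*}})}^2
\,=\, \tfrac{1}{2}\abs{\upsigma^{\mathsf{T}} y}^2
- \langle a y,\nabla{\varphi^{}_{\mspace{-2mu}*}}\rangle
+ \tfrac{1}{2} \babs{\upsigma^{\mathsf{T}}\nabla{\varphi^{}_{\mspace{-2mu}*}}}^2\,,
\end{equation*}
and rearranging this expansion yields the displayed identity.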
Using the function $\chi^{}_t$ in \cref{D4.1},
the identity
\begin{equation*}
\sA \chi^{}_t({\varphi^{}_{\mspace{-2mu}*}}) \,=\, \chi'_t({\varphi^{}_{\mspace{-2mu}*}}) \sA{\varphi^{}_{\mspace{-2mu}*}}
+ \tfrac{1}{2} \chi''_t({\varphi^{}_{\mspace{-2mu}*}})
\babs{\upsigma^{\mathsf{T}}\nabla{\varphi^{}_{\mspace{-2mu}*}}}^2\,,
\end{equation*}
and the definition of $\cH$,
we obtain from \cref{PP4.1D} that
\begin{equation}\label{PP4.1E}
\begin{aligned}
\sA (\chi^{}_t\comp{\varphi^{}_{\mspace{-2mu}*}})&(x,\xi,y) - \chi''_t\bigl({\varphi^{}_{\mspace{-2mu}*}}(x)\bigr)\,\cH(x)\\
& + \chi'_t\bigl({\varphi^{}_{\mspace{-2mu}*}}(x)\bigr)\Bigl(
\tfrac{1}{2} \babs{\upsigma^{\mathsf{T}}(x)\bigl(y-\nabla{\varphi^{}_{\mspace{-2mu}*}}(x)\bigr)}^2
+ \sR(x,\xi,y) - {\rho^{}_*}\Bigr) \,\le\,0\,.
\end{aligned}
\end{equation}
Let $\mu\in\eom_{\sA}\cap{\cP^{}_{\mspace{-3mu}*}}({\mathcal{Z}})$, and without loss of generality assume
that $\mu\in{\cP^{}_{\mspace{-3mu}\circ}}({\mathcal{Z}})$.
The integral of the first term in \cref{PP4.1E} with respect to $\mu$
vanishes by the definition of $\eom_{\sA}$.
Thus, we have
\begin{equation}\label{PP4.1F}
\begin{aligned}
\int_{{\mathcal{Z}}}
\chi'_t\bigl({\varphi^{}_{\mspace{-2mu}*}}(x)\bigr)\Bigl(
\tfrac{1}{2} \babs{\upsigma^{\mathsf{T}}(x)\bigl(y-\nabla{\varphi^{}_{\mspace{-2mu}*}}(x)\bigr)}^2
&+ \sR(x,\xi,y) - {\rho^{}_*}\Bigr)\,\mu(\mathrm{d}{x},\mathrm{d}{\xi},\mathrm{d}{y})\\
&\,\le\, \int_{{\mathds{R}^{d}}} \chi''_t\bigl({\varphi^{}_{\mspace{-2mu}*}}(x)\bigr)\,\cH(x)\,\eta(\mathrm{d}{x})\,,
\end{aligned}
\end{equation}
with $\eta(\cdot) = \int_{{\mathscr{K}}\times{\mathds{R}^{d}}}\mu(\cdot\,,\mathrm{d}{\xi},\mathrm{d}{y})$.
Since $\int\cH\,\mathrm{d}\eta<\infty$,
taking limits as $t\to\infty$ in \cref{PP4.1F} and using
dominated convergence
together with the fact that $\chi''_t(s)\to 0$ as $t\to\infty$, we see that
the right-hand side of \cref{PP4.1F} goes to $0$. Also, using Fatou's lemma and
the fact that
$\chi'_t(s)\to 1$ as $t\to\infty$, we obtain from \cref{PP4.1F} that
\begin{equation}\label{PT4.1H}
\int_{{\mathcal{Z}}}
\Bigl(\tfrac{1}{2} \babs{\upsigma^{\mathsf{T}}(x)\bigl(y-\nabla{\varphi^{}_{\mspace{-2mu}*}}(x)\bigr)}^2
+ \sR(x,\xi,y)\Bigr) \,\mu(\mathrm{d}{x},\mathrm{d}{\xi},\mathrm{d}{y})\,\le\,{\rho^{}_*}\,.
\end{equation}
This proves \cref{PP4.1C}.
Now, if we let
\begin{equation*}
\mu_*(\mathrm{d}{x},\mathrm{d}{\xi},\mathrm{d}{y})\coloneqq
\eta_{v_*}(\mathrm{d}{x})\, \delta_{v_*(x)}(\mathrm{d}{\xi})\,\delta_{\nabla{\varphi^{}_{\mspace{-2mu}*}}(x)}(\mathrm{d}{y})\,,
\end{equation*}
then
\begin{equation*}
\int_{{\mathcal{Z}}} \sA f(z)\,\mu_*(\mathrm{d}{z}) \,=\, \int_{{\mathds{R}^{d}}}
\widetilde\Lg_{v_*}^{{\varphi^{}_{\mspace{-2mu}*}}} f(x)\,\eta_{v_*}(\mathrm{d}{x})\,=\,0
\quad\forall\,f\in\Cc^{2}_c({\mathds{R}^{d}})\,,
\end{equation*}
which implies that $\mu_*\in\eom_{\sA}$.
Then, the second equality in \cref{PP4.1B} can be written as
\begin{equation}\label{PP4.1I}
{\rho^{}_*} \,=\, \int_{{\mathcal{Z}}} \sR(z)\,\mu_*(\mathrm{d}{z})\,,
\end{equation}
while the first equality in \cref{PP4.1B} together with
the fact that $c$ is bounded above and ${\rho^{}_*}$ is finite
implies that $\mu_*\in{\cP^{}_{\mspace{-3mu}*}}({\mathcal{Z}})$.
Therefore, $\mu_*\in\eom_{\sA}\cap{\cP^{}_{\mspace{-3mu}*}}({\mathcal{Z}})$,
and the first equality in \cref{EP4.1A} now follows from \cref{PP4.1C,PP4.1I}.
We now turn to the proof of the second equality in \cref{EP4.1A}.
Note that if $\mu\notin{\cP^{}_{\mspace{-3mu}\circ}}({\mathcal{Z}})$ then $F(0,\mu)=-\infty$.
On the other hand, if $\mu \notin\eom_\sA$ then, as also
stated in the proof of \cref{T2.2}, $\inf_{g \in \Cc^2_c({\mathds{R}^{d}})}\,F(g,\mu)=-\infty$.
The remaining case is $\mu\in\eom_\sA\cap{\cP^{}_{\mspace{-3mu}*}}({\mathcal{Z}})$, for which we have
$F(g,\mu)=\int_{{\mathcal{Z}}} \sR(z)\,\mu(\mathrm{d}{z})$, thus proving the equality.
The second statement of the proposition follows directly from the arguments
used above.
\end{proof}
\begin{remark}
One can follow the argument in the proof of \cite[Theorem~1.4]{ABB-18},
using Radon--Nikodym derivatives
instead of densities, to show that every maximizing infinitesimal
ergodic occupation measure for \cref{EP4.1A} has the form
\begin{equation*}
\mu(\mathrm{d}{x},\mathrm{d}{\xi},\mathrm{d}{y}) \,=\,
\uppi(\mathrm{d}{x},\mathrm{d}{\xi})\,\delta_{\nabla{\varphi^{}_{\mspace{-2mu}*}}(x)}(\mathrm{d}{y})\,,
\end{equation*}
where $\delta_y$ denotes the Dirac mass at $y\in{\mathds{R}^{d}}$,
and $\uppi(\mathrm{d}{x},\mathrm{d}{\xi})$ is an optimal ergodic occupation measure of
the diffusion associated with operator $\sA^*$ defined by
\begin{equation*}
\sA^*\phi(x,\xi)\,\coloneqq\,\frac{1}{2}\trace\left(a(x)\nabla^{2}\phi(x)\right)
+ \bigl\langle b(x,\xi)+ a(x)\nabla{\varphi^{}_{\mspace{-2mu}*}}(x), \nabla \phi(x)\bigr\rangle
\end{equation*}
for $(x,\xi)\in{\mathds{R}^{d}}\times{\mathscr{K}}$ and
$\phi\in\Cc^2({\mathds{R}^{d}})$.
We leave the verification of this assertion to the reader.
\end{remark}
We continue our analysis by investigating conditions on the model parameters which
imply that $\eom_{\sA}\cap{\cP^{}_{\mspace{-3mu}\circ}}({\mathcal{Z}})\subset{\cP^{}_{\mspace{-3mu}*}}({\mathcal{Z}})$.
We impose the following hypothesis on the matrix $a$.
\begin{assumption}\label{A4.1}
The matrix $a$ is bounded and
has a uniform modulus of continuity on ${\mathds{R}^{d}}$, and is uniformly non-degenerate in
the sense that the minimum eigenvalue of $a$ is bounded away from zero on ${\mathds{R}^{d}}$.
\end{assumption}
We start with the following lemma, which can be viewed as a generalization of
\cite[Lemma~3.3]{AB-18}.
\cref{A3.1}, which applies by default throughout the paper, need not be
enforced in this lemma.
\begin{lemma}\label{L4.1}
Consider a linear operator in $\mathds{R}^d$, of the form
\begin{equation*}
\Lg \,\coloneqq\, \tfrac{1}{2} a^{ij}\partial_{ij} + b^i \partial_i + c\,,
\end{equation*}
and suppose that the matrix $a=\upsigma\upsigma^{\mathsf{T}}$ satisfies \cref{A4.1},
and the coefficients $b$ and $c$ are locally bounded and measurable.
Then, there exists a constant $\widetilde{C}_0$ such that any strong positive solution
$u \in\Sobl^{2,p}(\mathds{R}^d)$, $p>d$, to the equation
\begin{equation}\label{EL4.1B}
\Lg u(x) \,=\, 0 \quad \text{on } \mathds{R}^d
\end{equation}
satisfies
\begin{equation*}
\frac{\babs{\nabla u(x)}}{u(x)} \,\le\, \widetilde{C}_0
\,\Bigl[1+ \sup_{y\in B_1(x)}\,\Bigl(\abs{b(y)} + \sqrt{\abs{c(y)}}\Bigr)\Bigr]
\qquad\forall\,x\in{\mathds{R}^{d}}\,.
\end{equation*}
\end{lemma}
\begin{proof}
We use scaling.
For any fixed $x_0\in\mathds{R}^d$, with $\abs{x_0} \ge 1$, we define
\begin{equation*}
M_{x_0} \coloneqq 1+ \sup_{x\in B_3(x_0)}\,\Bigl(\abs{b(x)} + \sqrt{\abs{c(x)}}\Bigr)\,,
\end{equation*}
and the scaled function
\begin{equation*}
\Tilde{u}_{x_0}(y) \,\coloneqq\, u\bigl(x_0 + M_{x_0}^{-1} y \bigr)\,,\quad y\in{\mathds{R}^{d}}\,,
\end{equation*}
and similarly for the functions $\Tilde{a}_{x_0}$, $\Tilde{b}_{x_0}$,
and $\Tilde{c}_{x_0}$.
The equation in \cref{EL4.1B} then takes the form
\begin{equation}\label{PL4.1A}
\frac{1}{2}\,\Tilde{a}^{ij}_{x_0}(y)\,
\partial_{ij}\Tilde{u}_{x_0}(y)
+ \frac{\Tilde{b}^i_{x_0}(y)}{M_{x_0}}\,
\partial_{i}\Tilde{u}_{x_0}(y)
+ \frac{\Tilde{c}_{x_0}(y)}{M_{x_0}^2}\,\Tilde{u}_{x_0}(y) \,=\, 0
\quad \text{on }\mathds{R}^d\,.
\end{equation}
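Indeed, by the chain rule,
\begin{equation*}
\partial_{i}\Tilde{u}_{x_0}(y)\,=\,M_{x_0}^{-1}\,\partial_i u\bigl(x_0+M_{x_0}^{-1}y\bigr)\,,
\qquad
\partial_{ij}\Tilde{u}_{x_0}(y)\,=\,M_{x_0}^{-2}\,\partial_{ij} u\bigl(x_0+M_{x_0}^{-1}y\bigr)\,,
\end{equation*}
so \cref{PL4.1A} follows by multiplying \cref{EL4.1B} by $M_{x_0}^{-2}$ and
evaluating at $x=x_0+M_{x_0}^{-1}y$.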
It is clear from the hypotheses that the coefficients of \cref{PL4.1A}
are bounded in the ball $B_3$, with a bound independent of $x_0$, and that
the modulus of continuity and ellipticity constants of the matrix $\Tilde{a}_{x_0}$
in $B_3$ are independent of $x_0$.
We follow the argument in \cite[Lemma~3.3]{AB-18}, which is repeated here for
completeness.
First, by the Harnack inequality \cite[Theorem~9.1]{GilTru}, there exists
a positive constant $C_{\mathsf{H}}$ independent of the point $x_0$ chosen, such that
$\Tilde{u}_{x_0}(y)\le C_{\mathsf{H}}\, \Tilde{u}_{x_0}(y')$ for all $y,y' \in B_2$.
Let
\begin{equation*}
\Lg_0 \coloneqq \frac{1}{2}\,\Tilde{a}^{ij}_{x_0}(y)\,
\partial_{ij}
+ \frac{\Tilde{b}^i_{x_0}(y)}{M_{x_0}}\,\partial_i\,.
\end{equation*}
By a well known a priori estimate \cite[Lemma~5.3]{ChenWu}, there
exists a constant $C_{\mathsf{a}}$, again independent of $x_0$, such that,
\begin{equation}\label{PL4.1B}
\begin{aligned}
\bnorm{\Tilde{u}_{x_0}}_{\Sob^{2,p}(B_1)} &\,\le\, C_{\mathsf{a}}\,
\Bigl(\bnorm{\Tilde{u}_{x_0}}_{\Lp^{p}(B_2)}
+\bnorm{\Lg_0\,\Tilde{u}_{x_0}}_{\Lp^{p}(B_2)}\Bigr)\\
&\,\le\, C_{\mathsf{a}}\,\biggl(1+\sup_{y\in B_2}\,
\frac{\Tilde{c}_{x_0}(y)}{M_{x_0}^2}\biggr)\,
\bnorm{\Tilde{u}_{x_0}}_{\Lp^{p}(B_2)}\\
&\,\le\,\widetilde{C}_1\,\Tilde{u}_{x_0}(0)\,,
\end{aligned}
\end{equation}
where in the last inequality, we used the Harnack property.
Clearly then, the resulting constant $\widetilde{C}_1$ does not depend on $x_0$.
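To spell out the last step: $\abs{\Tilde{c}_{x_0}}\le M_{x_0}^2$ on $B_2$ by the
definition of $M_{x_0}$, while the Harnack inequality gives
\begin{equation*}
\bnorm{\Tilde{u}_{x_0}}_{\Lp^{p}(B_2)}
\,\le\, \abs{B_2}^{\nicefrac{1}{p}}\,\sup_{B_2}\,\Tilde{u}_{x_0}
\,\le\, C_{\mathsf{H}}\,\abs{B_2}^{\nicefrac{1}{p}}\,\Tilde{u}_{x_0}(0)\,,
\end{equation*}
so the last inequality in \cref{PL4.1B} holds with
$\widetilde{C}_1=2\,C_{\mathsf{a}}\,C_{\mathsf{H}}\,\abs{B_2}^{\nicefrac{1}{p}}$.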
Next, invoking Sobolev's theorem, which asserts the compactness of the embedding
$\Sob^{2,p}(B_1)\hookrightarrow \Cc^{1,r}(B_1)$,
for $p>d$
and $r<1-\frac{d}{p}$ (see \cite[Proposition~1.6]{ChenWu}), and
combining this with \cref{PL4.1B}, we obtain
\begin{equation*}
\sup_{y\in B_1}\,\babs{\nabla\Tilde{u}_{x_0}(y)} \,\le\,
\widetilde{C}_2\, \Tilde{u}_{x_0}(0)
\end{equation*}
for some constant $\widetilde{C}_2$ independent of $x_0$.
Thus
\begin{equation}\label{PL4.1C}
\frac{\abs{\nabla\Tilde{u}_{x_0}(0)}}{\Tilde{u}_{x_0}(0)}
\,\le\, \widetilde{C}_2
\qquad\forall\,x_0\in B_1^c\,.
\end{equation}
Using \cref{PL4.1C} and the identity
$\nabla{u}(x_0) = M_{x_0}
\,\nabla\Tilde{u}_{x_0}(0)$ for all $x_0\in B_1^c$,
we obtain
\begin{equation*}
\frac{\babs{\nabla{u}(x_0)}}{{u}(x_0)} \,=\,
M_{x_0}\,
\frac{\babs{\nabla\Tilde{u}_{x_0}(0)}}{\Tilde{u}_{x_0}(0)}
\,\le\,\widetilde{C}_2\,
\biggl[1+ \sup_{x\in B_3(x_0)}\,\Bigl(\abs{b(x)} + \sqrt{\abs{c(x)}}\Bigr)\biggr]
\qquad \forall\,x_0\in B_1^c\,.
\end{equation*}
Of course, the radius $3$ in $B_3(x_0)$ is arbitrary; the same estimate holds
for any radius, with perhaps a different constant, and in particular for the
radius $1$ in the statement of the lemma.
This completes the proof.
\end{proof}
\begin{remark}
\Cref{L4.1} should be compared with similar gradient estimates in the literature.
Its benefit is that it matches or exceeds the estimates
in \cite[Lemma~5.1]{Metafune-05}
and \cite[Theorem~A.2]{Chasseigne-19}, without requiring
any regularity of the coefficients $b$ and $c$.
\end{remark}
\begin{assumption}\label{A4.2}
One of the following holds:
\begin{enumerate}
\item[\ttup a] The function $-c$ is inf-compact.
\item[\ttup b] The drift $b$ satisfies
\begin{equation}\label{E-weak}
\max_{(x,\xi)\in B_r^c\times{\mathscr{K}}}\;
\frac{ \bigl\langle b(x,\xi),\, x\bigr\rangle^{-}}{\abs{x}^{2}}
\;\xrightarrow[r\to\infty]{}\;0\,.
\end{equation}
\item[\ttup c]
There exists a constant $\widehat{C}_0$ such that
(compare this with \cite[Theorem~3.1\,(b)]{AB-18b})
\begin{equation*}
\frac{\cH(x)}{\bigl(1+\abs{{\varphi^{}_{\mspace{-2mu}*}}(x)}\bigr)\,\bigl(1+\abs{c(x,\xi)}\bigr)}
\,\le\,\widehat{C}_0\qquad\forall\,(x,\xi)\in{\mathds{R}^{d}}\times{\mathscr{K}}\,,
\end{equation*}
where ${\varphi^{}_{\mspace{-2mu}*}}=\log\Phi_{\mspace{-2mu}*}$, and $\Phi_{\mspace{-2mu}*}$ is as in \cref{T3.1}.
\end{enumerate}
\end{assumption}
\begin{remark}
\Cref{A4.2}\,(c) is not specified in terms of the parameters of
the equation. However, \cref{A4.1} together with the hypothesis that
$\frac{\abs{b}^2}{1+\abs{c}}$ is bounded implies \cref{A4.2}\,(c).
This follows from \cref{L4.1}.
See also \cref{L4.3} later in this section.
\end{remark}
We have the following estimate concerning the growth of the function $\Phi_{\mspace{-2mu}*}$
in \cref{T3.1}. This does not require the uniform ellipticity hypothesis
in \cref{A4.1}.
\begin{lemma}\label{L4.2}
Grant \cref{A4.2} part \textup{(a)} or \textup{(b)}.
Then there exists a function $\zeta\colon(0,\infty)\to(0,\infty)$,
with $\lim_{r\to\infty}\zeta(r)=\infty$, such that the solution $\Phi_{\mspace{-2mu}*}$ in
\cref{ET3.1A} satisfies
\begin{equation}\label{EL4.2A}
\babs{\log\Phi_{\mspace{-2mu}*}(x)} \,\ge\, \zeta(r)\,\log\bigl(1+\abs{x}\bigr)
\qquad\forall\,x\in B_r^c\,.
\end{equation}
\end{lemma}
\begin{proof}
We start with part (a).
Let $\alpha\colon(0,\infty)\to(0,\infty)$ be a strictly increasing function,
satisfying $\alpha(r)\to\infty$ and $\frac{\alpha(r)}{r}\to0$ as $r\to\infty$, and
\begin{equation}\label{PL4.2C}
\log\alpha(r) \,\ge\, \log r - \inf_{B_r^c}\,\abs{{\varphi^{}_{\mspace{-2mu}*}}}^{\nicefrac{1}{3}}\,.
\end{equation}
This is always possible.
A specific function satisfying these properties is given by
\begin{equation*}
\alpha(r) \,\coloneqq\, \sqrt r + \sup_{s\in (0, r]}\,
\biggl(s \exp\Bigl(-\inf_{B_s^c}\,\abs{{\varphi^{}_{\mspace{-2mu}*}}}^{\nicefrac{1}{3}}\Bigr)\biggr)\,.
\end{equation*}
Let $c_1$ be a constant such that $\babs{\Lg_{v_*} (\log\abs{x})} \le c_1$
for all $\abs{x}>1$. Such a constant exists since $\upsigma$ and
$b$ have at most linear growth in $\abs{x}$ by \cref{E-growth}.
We define
\begin{equation}\label{PL4.2D}
\kappa(r) \,\coloneqq\, \min\, \biggl(\sqrt r\,,\,\frac{1}{c_1}\inf_{B_r^c\times{\mathscr{K}}}\,
\babs{c(x,\xi)-{\rho^{}_*}}^{\nicefrac{1}{2}}\,,\,
\inf_{B_r^c}\,\abs{{\varphi^{}_{\mspace{-2mu}*}}}^{\nicefrac{1}{3}}\biggr)\,.
\end{equation}
Since the functions $-{\varphi^{}_{\mspace{-2mu}*}}$ and $-c$ are inf-compact, it
is clear that $\kappa(r)\to\infty$ as $r\to\infty$.
Define the family of functions
\begin{equation*}
h_r(x)\,\coloneqq\,-\kappa(r)\bigl(\log\abs{x} - \log \alpha(r)\bigr)\,,
\qquad r\ge1\,,\ x\in B_r^c\,.
\end{equation*}
Note that for any $g\in\Cc^2({\mathds{R}^{d}})$ we have
\begin{equation}\label{PL4.2E}
\Lg_\xi \chi^{}_t(g) \,=\, \chi'_t(g) \Lg_\xi(g) +
\frac{1}{2}\chi''_t(g) \babs{\upsigma^{\mathsf{T}}\nabla g}^2\,.
\end{equation}
Thus, applying \cref{PL4.2E} and the bound $\babs{\Lg_{v_*} (\log\abs{x})} \le c_1$,
we obtain
\begin{equation}\label{PL4.2F}
\begin{aligned}
\widetilde\Lg^{{\varphi^{}_{\mspace{-2mu}*}}}_{v_*}
\chi_t\bigl(h_r(x)\bigr) \,\le\, c_1\,\kappa(r)\,\chi'_t & \bigl(h_r(x)\bigr)
+ \bigl\langle a(x)\nabla{\varphi^{}_{\mspace{-2mu}*}}(x), \nabla \chi_t\bigl(h_r(x)\bigr) \bigr\rangle\\
&+ \frac{1}{2}\, \chi_t''\bigl(h_r(x)\bigr)\,\babs{\upsigma^{\mathsf{T}}(x)\nabla h_r(x)}^2
\qquad\forall\,x\in B_r^c\,.
\end{aligned}
\end{equation}
Combining \cref{PP4.1A,PL4.2F}, and completing the squares, we have
\begin{equation}\label{PL4.2G}
\begin{aligned}
\widetilde\Lg^{{\varphi^{}_{\mspace{-2mu}*}}}_{v_*}\bigl(\chi_t\comp h_r-{\varphi^{}_{\mspace{-2mu}*}}\bigr)(x)
&\,\le\, c_v(x)-{\rho^{}_*} + c_1\,\kappa(r)\,\chi'_t\bigl(h_r(x)\bigr)\\
&\mspace{5mu}
+ \frac{1}{2}\, \chi_t''\bigl(h_r(x)\bigr)\,\babs{\upsigma^{\mathsf{T}}(x)\nabla h_r(x)}^2
+ \frac{1}{2}\babs{\upsigma^{\mathsf{T}}(x)\nabla \chi_t\bigl(h_r(x)\bigr)}^2\\
&\mspace{50mu} -\frac{1}{2}\babs{\upsigma^{\mathsf{T}}(x)
\bigl[\nabla{\varphi^{}_{\mspace{-2mu}*}}(x) - \nabla \chi_t\bigl(h_r(x)\bigr)\bigr]}^2\,.
\end{aligned}
\end{equation}
Recall that $\chi'\le1$, and $\chi''\le1$.
Choose $r$ large enough so that ${\varphi^{}_{\mspace{-2mu}*}}<-1$ on $B_r^c$.
It then follows by the definitions in \cref{PL4.2C,PL4.2D} that
${\varphi^{}_{\mspace{-2mu}*}}- \chi_t\comp h_r<0$ on $\partial B_r$ for all $t\ge0$.
Also, for each $t>0$, the difference ${\varphi^{}_{\mspace{-2mu}*}}- \chi_t\comp h_r$ is negative
outside some compact set by the inf-compactness of $-{\varphi^{}_{\mspace{-2mu}*}}$.
Note also that $\abs{\nabla h_r}\le \frac{\kappa(r)}{r}$ on $B_r^c$.
Hence \cref{E-growth,PL4.2D} imply that there exists $r_0$ such that
the right-hand side of \cref{PL4.2G} is negative on $B_r^c$ for all
$r>r_0$ and all $t\ge0$.
An application of the strong maximum principle then shows
that ${\varphi^{}_{\mspace{-2mu}*}}<h_r$ on $B_r^c$ for all $r>r_0$.
Now, note that
\begin{equation*}
\log \frac{\abs{x}}{\alpha(r)} \ge \frac{1}{2} \log\bigl(1+\abs{x}\bigr)
\qquad\text{when\ \ } \abs{x}\ge \max\,\bigl(1,2\bigl(\alpha(r)\bigr)^2\bigr)\,.
\end{equation*}
Since $\alpha(r)$ is strictly increasing, the inequality
\cref{EL4.2A} holds with
\begin{equation*}
\zeta(r) \coloneqq \frac{1}{2}\,\kappa\Bigl(\alpha^{-1}
\bigl(\sqrt{\nicefrac{r}{2}}\bigr)\Bigr)\qquad\text{for all\ }
r\ge 2\bigl(\alpha(r_0)\bigr)^2\,.
\end{equation*}
This completes the proof under \cref{A4.2}\,(a)\,.
The proof under part (b) of the assumption is similar.
The only difference is that here we use the fact that
$m_r\,\coloneqq\,\sup_{x\in B_r^c}\,\bigl(\Lg_{v_*} (\log\abs{x})\bigr)^-\to0$
as $r\to\infty$,
which is implied by \cref{E-weak}.
Thus with $\epsilon>0$ any constant such that ${\rho^{}_*}-c>\epsilon$ outside some
compact set, we choose $\kappa(r)$ as
\begin{equation*}
\kappa(r) \,\coloneqq\, \min\, \biggl(\sqrt r\,,\,
\frac{\epsilon}{2 \sqrt{m_r}}\,,\,
\inf_{B_r^c}\,\abs{{\varphi^{}_{\mspace{-2mu}*}}}^{\nicefrac{1}{3}}\biggr)\,.
\end{equation*}
The rest is completely analogous to the analysis above.
This concludes the proof.
\end{proof}
The first part of the theorem which follows is quite technical,
but identifies a rather deep property of the ergodic occupation measures
of the operator $\sA$. It shows that under \cref{A4.1,A4.2}\,\textup{(a)} or \textup{(b)},
or \cref{A4.2}\,\textup{(c)}, if such a measure $\mu$ is feasible for the
maximization problem, or in other words, it satisfies
$\int_{{\mathcal{Z}}} \sR(z)\,\mu(\mathrm{d}{z}) > -\infty$, then it necessarily has
``finite average'' entropy, that is, $\int \cH\,\mathrm{d}\mu<\infty$, or equivalently,
it belongs to the class ${\cP^{}_{\mspace{-3mu}*}}({\mathcal{Z}})$.
The proof uses the method of contradiction.
We first show that if such a measure $\mu$ is not in the class ${\cP^{}_{\mspace{-3mu}*}}({\mathcal{Z}})$,
then the left hand side of \cref{PP4.1F} grows at a geometric rate
as a function of $t$. Then we obtain a contradiction by
evaluating the right-hand side of \cref{PP4.1F}
using this geometric growth together
with the bound in \cref{L4.2}.
\begin{theorem}\label{T4.1}
\begin{enumerate}
\item[\ttup i]
Under \cref{A4.1,A4.2}\,\textup{(a)} or \textup{(b)},
or \cref{A4.2}\,\textup{(c)}, we have $\eom_{\sA}\cap{\cP^{}_{\mspace{-3mu}\circ}}({\mathcal{Z}})\subset{\cP^{}_{\mspace{-3mu}*}}({\mathcal{Z}})$.
This of course implies by \cref{P4.1} that
\begin{equation*}
{\rho^{}_*} \,=\, \max_{\mu\in\eom_{\sA}}\,\int_{{\mathcal{Z}}} \sR(z)\,\mu(\mathrm{d}{z})
\,=\, \adjustlimits\sup_{\mu\in\cP({\mathcal{Z}})} \inf_{g \in \Cc^2_c({\mathds{R}^{d}})}\,F(g,\mu) \,.
\end{equation*}
\item[\ttup{ii}]
Let \cref{A4.1} hold, and suppose that
\begin{equation}\label{ET4.1C}
\sup_{x\in {\mathds{R}^{d}}}\, \frac{\cH(x)}{1+\abs{{\varphi^{}_{\mspace{-2mu}*}}(x)}} \,<\,\infty\,.
\end{equation}
Then
\begin{equation}\label{ET4.1D}
{\rho^{}_*} \,=\, \adjustlimits\inf_{g \in \Cc^2_c({\mathds{R}^{d}})\,} \sup_{\mu\in\cP({\mathcal{Z}})}\,F(g,\mu)\,.
\end{equation}
\end{enumerate}
\end{theorem}
\begin{proof}
We first prove part (i) under \cref{A4.2}\,(a) or (b).
We argue by contradiction. Let $\mu\in\eom_{\sA}\cap{\cP^{}_{\mspace{-3mu}\circ}}({\mathcal{Z}})$,
and suppose that $\mu\notin{\cP^{}_{\mspace{-3mu}*}}({\mathcal{Z}})$.
As in the proof of \cref{P4.1} we let
$\eta(\cdot) = \int_{{\mathscr{K}}\times{\mathds{R}^{d}}}\mu(\cdot\,,\mathrm{d}{\xi},\mathrm{d}{y})$.
Let $\mathscr{I}_1(t)$ and $\mathscr{I}_2(t)$ denote the left and the right-hand
side of \cref{PP4.1F}, respectively, and define
\begin{equation*}
\mathcal{I}(t) \,\coloneqq\, \int_{{\mathds{R}^{d}}}
\chi'_t\bigl({\varphi^{}_{\mspace{-2mu}*}}(x)\bigr)\,\cH(x)\,\eta(\mathrm{d}{x})\,.
\end{equation*}
Then of course $\mathcal{I}(t)\to\infty$ as $t\to\infty$ by the hypothesis.
Expanding $\mathscr{I}_1(t)$ we see that
\begin{equation*}
\mathscr{I}_1(t) \,=\, \mathcal{I}(t) - \int_{{\mathcal{Z}}}\chi'_t\bigl({\varphi^{}_{\mspace{-2mu}*}}(x)\bigr)
\bigl\langle a(x) y,\nabla{\varphi^{}_{\mspace{-2mu}*}}(x)\bigr\rangle\, \mathrm{d}{\mu}
+ \int_{{\mathcal{Z}}} \chi'_t\bigl({\varphi^{}_{\mspace{-2mu}*}}(x)\bigr) \bigl(c(x,\xi)-{\rho^{}_*}\bigr)\,\mathrm{d}{\mu}\,.
\end{equation*}
Since $\int \sR\,\mathrm{d}\mu$ is finite, it follows that
$\int_{{\mathcal{Z}}} \abs{\upsigma^{\mathsf{T}} y}^2 \mathrm{d}{\mu}$ and
$\int_{{\mathcal{Z}}} \max\{-c,0\} \, \mathrm{d}{\mu}$ are also finite.
Moreover, the latter bound and the fact that $c$ is bounded above
imply that
$\int_{{\mathcal{Z}}} |c| \, \mathrm{d}{\mu}<\infty$.
Thus, using the Cauchy--Schwarz inequality in the above display
and the fact that $\chi'_t$ is bounded uniformly in $t$, we have
\begin{equation}\label{PT4.1I}
\alpha_0(t) - \alpha_1(t)\sqrt{\mathcal{I}(t)} + \mathcal{I}(t)
\,\le\,\mathscr{I}_1(t)
\,\le\,\alpha_0(t) + \alpha_1(t)\sqrt{\mathcal{I}(t)} + \mathcal{I}(t)
\end{equation}
for some functions $\alpha_0(t)$ and $\alpha_1(t)$ which are bounded
on $[0,\infty)$.
First suppose that over some sequence $t_n\to\infty$ we have
$\frac{\mathscr{I}_2(t_n)}{\mathscr{I}_1(t_n)}\to\delta<1$ as $n\to\infty$.
This implies by \cref{PT4.1I} that
$\frac{\mathscr{I}_2(t_n)}{\mathcal{I}(t_n)}\to\delta$.
However, if this is the case, then the inequality
\begin{equation*}
\alpha_0(t_n) - \alpha_1(t_n)\sqrt{\mathcal{I}(t_n)} +
\Bigl(1- \tfrac{\mathscr{I}_2(t_n)}{\mathcal{I}(t_n)}\Bigr)\mathcal{I}(t_n)\,\le\,0\,,
\end{equation*}
which is implied by \cref{PP4.1F,PT4.1I},
contradicts the fact that $\mathcal{I}(t)\to\infty$ as $t\to\infty$.
Thus we must have $\liminf_{t\to\infty} \frac{\mathscr{I}_2(t)}{\mathscr{I}_1(t)}\ge1$,
and the same applies to the fraction $\frac{\mathscr{I}_2(t)}{\mathcal{I}(t)}$.
Define
\begin{equation*}
g_k \,\coloneqq\, \int_{{\mathds{R}^{d}}}\cH(x)\,
\Ind_{\{ -2k<{\varphi^{}_{\mspace{-2mu}*}}(x)< -2k+2\}}\, \eta(\mathrm{d}{x})\,,\qquad
k\in\mathds{N}\,.
\end{equation*}
We have $\mathcal{I}(2n)\ge \sum_{k=1}^n g_k$ for $n\in\mathds{N}$,
by definition of these quantities.
Recall that $\mathscr{I}_2(t)$ is defined as the right-hand
side of \cref{PP4.1F}.
Note then that, since $\chi''_{2n}$ is supported on $(-2n-2,-2n)$ and
$\delta\coloneqq\sup\chi''<1$, we have
$\mathscr{I}_2(2n)\le \delta\, g_{n+1}$.
Therefore, since $\liminf_{t\to\infty}\,\frac{\mathscr{I}_2(t)}{\mathcal{I}(t)}\ge1$,
there exists $n_0\in\mathds{N}$ such that
\begin{equation}\label{AB1}
S_n \,\coloneqq\,\sum_{k=1}^{n} g_k \,\le\, g_{n+1}\qquad\forall\,n\ge n_0\,.
\end{equation}
Thus $S_{n+1} - S_n = g_{n+1} \ge S_n$, which implies that $S_{n+1}\ge 2 S_n$
for all $n\ge n_0$.
This of course means that $S_n$ diverges at a geometric rate in $n$,
that is, $S_{n}\ge 2^{n-n_0} S_{n_0}$ for $n\ge n_0$.
Let $h$ denote the inverse of the map $y\mapsto\zeta(y)\log(1+y)$.
Note that $\cH(x)\le C(1+\abs{x}^p)$ for
some positive constants $C$ and $p$ by \cref{L4.1}
and the hypothesis that $c$ has polynomial growth in \cref{A3.1}\,(ii).
Thus, by \cref{L4.2}, we obtain
\begin{align*}
g_n &\,\le\, C \int_{{\mathds{R}^{d}}} (1+|x|^p)\,\Ind_{\{-2n<{\varphi^{}_{\mspace{-2mu}*}}(x)< -2n+2\}}\,\eta(\mathrm{d}{x})\\
&\,\le\, C \int_{{\mathds{R}^{d}}} (1+|x|^p)\,\Ind_{\{\zeta(|x|)\log(1+|x|)<2n\}}\, \eta(\mathrm{d}{x})\\
&\,\le\, C \bigl(1 + h(2n)^p\bigr)
\end{align*}
for all $n\in\mathds{N}$.
However, this implies from \cref{AB1} that
\begin{equation*}
\begin{aligned}
\log 2 \,\le\, \limsup_{n\to\infty}\,\frac{\log S_n}{n}
&\,\le\, C'\limsup_{n\to\infty}\,\frac{\log h(n)}{n}\\
&\,=\, C' \limsup_{k\to\infty}\,\frac{\log k}{ \zeta(k) \log(1+k)}\,=\,0
\end{aligned}
\end{equation*}
for some constant $C'$, and we reach a contradiction.
Therefore, $\eom_{\sA}\cap{\cP^{}_{\mspace{-3mu}\circ}}({\mathcal{Z}})\subset{\cP^{}_{\mspace{-3mu}*}}({\mathcal{Z}})$.
Moving on to the proof under \cref{A4.2}\,(c),
we replace the function $\chi^{}_t$ in \cref{D4.1} by
a function $\Tilde{\chi}^{}_t$ defined as follows.
For $t>0$, we let $\Tilde{\chi}^{}_t$ be a convex $\Cc^2(\mathds{R})$ function such that
$\Tilde{\chi}^{}_t(s)= s$ for $s\ge -t$, and $\Tilde{\chi}^{}_t(s) =
\text{constant}$ for $s\le -t\mathrm{e}^2$.
Then $\Tilde{\chi}'_t$ and $\Tilde{\chi}''_t$ are nonnegative.
In addition, we select $\Tilde{\chi}^{}_t$ so that
$\Tilde{\chi}''_t(s) \le -\frac{1}{s}$ for
$s\in[-t\mathrm{e}^2,-t]$ and $t\ge0$.
This is always possible.
We follow the same analysis as in the proof of \cref{P4.1}, with the function
$\Tilde{\chi}^{}_t$ as chosen, and obtain
\begin{equation}\label{PT4.1J}
\begin{aligned}
\int_{{\mathcal{Z}}}
\Tilde{\chi}'_t&\bigl({\varphi^{}_{\mspace{-2mu}*}}(x)\bigr)\Bigl(
\tfrac{1}{2} \babs{\upsigma^{\mathsf{T}}(x)\bigl(y-\nabla{\varphi^{}_{\mspace{-2mu}*}}(x)\bigr)}^2
+ \sR(x,\xi,y) - {\rho^{}_*}\Bigr)\,\mu(\mathrm{d}{x},\mathrm{d}{\xi},\mathrm{d}{y})\\
&\,\le\,\int_{{\mathds{R}^{d}}} \Tilde{\chi}''_t\bigl({\varphi^{}_{\mspace{-2mu}*}}(x)\bigr)\,\cH(x)\,\eta(\mathrm{d}{x})\\
&\,\le\, \int_{{\mathds{R}^{d}}}\frac{\cH(x)}{\abs{{\varphi^{}_{\mspace{-2mu}*}}(x)}}\,\Ind_{A_t}(x) \,\eta(\mathrm{d}{x})\\
&\,\le\, \widehat{C}_0\,
\int_{{\mathds{R}^{d}}\times{\mathscr{K}}\times{\mathds{R}^{d}}} \frac{1+\abs{{\varphi^{}_{\mspace{-2mu}*}}(x)}}{\abs{{\varphi^{}_{\mspace{-2mu}*}}(x)}}
\bigl(1+\abs{c(x,\xi)}\bigr)
\Ind_{A_t}(x) \,\mu(\mathrm{d}{x},\mathrm{d}{\xi},\mathrm{d}{y})\,,
\end{aligned}
\end{equation}
where $A_t \coloneqq \{x\colon {\varphi^{}_{\mspace{-2mu}*}}(x)\le -t\}$.
The integral on the right-hand side of \cref{PT4.1J} vanishes as $t\to\infty$ by
the hypothesis that $\int c\,\mathrm{d}\mu>-\infty$,
so again we obtain \cref{PT4.1H} which implies the result.
This completes the proof of part (i).
We continue with part (ii).
We use a $\Cc^2$ convex function $\Hat\chi_t\colon\mathds{R}\to\mathds{R}$, for $t\ge1$,
satisfying $\Hat\chi_t(s)=s$ for $s\ge -t$,
$\Hat\chi''_t(s)\le -\frac{1}{s\log \abs{s}}$
for $s<-t$, and $\Hat\chi_t(s)=\text{constant}$ for $s\le \Hat\zeta(t)$,
for some $\Hat\zeta(t)<-t$.
We let $h_t(x) = \Hat\chi_t\bigl({\varphi^{}_{\mspace{-2mu}*}}(x)\bigr)$.
We may translate ${\varphi^{}_{\mspace{-2mu}*}}$ so that it is smaller than $-1$ on ${\mathds{R}^{d}}$.
By \eqref{PP4.1E}, we have
\begin{equation}\label{PT4.1L}
\begin{aligned}
\sA h_t(z) +\sR(z)-{\rho^{}_*} &\,\le\,
\bigl[1-\Hat\chi'_t\bigl({\varphi^{}_{\mspace{-2mu}*}}(x)\bigr)\bigr]
\bigl(\sR(z)-{\rho^{}_*}\bigr)\\
&\mspace{10mu}-\tfrac{1}{2} \Hat\chi'_t\bigl({\varphi^{}_{\mspace{-2mu}*}}(x)\bigr)
\babs{\upsigma^{\mathsf{T}}(x)\bigl(y-\nabla{\varphi^{}_{\mspace{-2mu}*}}(x)\bigr)}^2
+ \Hat\chi''_t\bigl({\varphi^{}_{\mspace{-2mu}*}}(x)\bigr)\,\cH(x)\,.
\end{aligned}
\end{equation}
We claim that given any $\epsilon>0$ there exists $t>0$ such that
$F(h_t,\mu)\le {\rho^{}_*}+\epsilon$ for all $\mu\in\cP({\mathcal{Z}})$.
This of course suffices to establish \cref{ET4.1D}.
By \cref{A3.1}\,(iii) there exists $t_1>0$ such that the first term on the
right-hand side of \cref{PT4.1L} is nonpositive for all $t\ge t_1$.
Also, using the definition of $\Hat\chi_t$, we have
\begin{equation*}
\Hat\chi''_t\bigl({\varphi^{}_{\mspace{-2mu}*}}(x)\bigr)\,\cH(x) \,\le\,
\frac{\cH(x)}{\abs{{\varphi^{}_{\mspace{-2mu}*}}(x)} \log \abs{{\varphi^{}_{\mspace{-2mu}*}}(x)}}\,
\Ind\{x\in{\mathds{R}^{d}}\colon {\varphi^{}_{\mspace{-2mu}*}}(x) \le -t\}\,\xrightarrow[t\to\infty]{}\,0
\end{equation*}
by the hypothesis, and
since $-{\varphi^{}_{\mspace{-2mu}*}}$ is inf-compact by \cref{T3.1}.
This proves the claim, and completes the proof.
\end{proof}
There is a large class of problems which satisfy \cref{ET4.1C}.
It consists of equations with $\abs{b}^2+\abs{c}$ having at most linear growth
in $\abs{x}$, and with $\abs{x}^{-1}\langle b,x\rangle^-$ growing no faster than
$\abs{c}$.
This fact is stated in the following lemma.
\begin{lemma}\label{L4.3}
Grant \cref{A4.1} and suppose that
\begin{equation*}
\sup_{(x,\xi)\in{\mathds{R}^{d}}\times{\mathscr{K}}}\, \max\;
\biggl(\frac{\langle b(x,\xi),x\rangle^-}{1+\abs{x}\abs{c(x,\xi)}},
\frac{\abs{b(x,\xi)}^2+\abs{c(x,\xi)}}{1+\abs{x}}\biggr)\,<\,\infty\,.
\end{equation*}
Then \cref{ET4.1C} holds.
\end{lemma}
\begin{proof}
We use the function $\chi_t$ in \cref{D4.1}.
Let $\Tilde{r}>0$ be such that ${\rho^{}_*}-c(x,\xi)>\delta>0$
on $B_{\Tilde{r}}^c\times{\mathscr{K}}$.
Note that there exists a constant $C$ such that
\begin{equation*}
\widetilde\Lg^{{\varphi^{}_{\mspace{-2mu}*}}}_{v_*} \chi_t \bigl(\epsilon (\Tilde{r}-\abs{x})\bigr)
\,\le\, C\epsilon\bigl(1 + \abs{x}^{-1}\langle b_{v_*}(x),x\rangle^-
+ \abs{\nabla{\varphi^{}_{\mspace{-2mu}*}}(x)}\bigr)
\quad \forall\, t>0\,.
\end{equation*}
Thus for some $\epsilon>0$ small enough, using \cref{PP4.1A}, we obtain
\begin{equation*}
\widetilde\Lg^{{\varphi^{}_{\mspace{-2mu}*}}}_{v_*}\bigl({\varphi^{}_{\mspace{-2mu}*}}(x)
-\chi_t \bigl(\epsilon (\Tilde{r}-\abs{x})\bigr)\bigr) \,>\,0
\qquad\forall\, x\in B_{\Tilde{r}}^c\,,\ \ \forall\, t>0\,.
\end{equation*}
Since adding a constant to ${\varphi^{}_{\mspace{-2mu}*}}$ affects neither \cref{PP4.1A}
nor \cref{ET4.1C}, we may assume that ${\varphi^{}_{\mspace{-2mu}*}}\le0$ on $\partial B_{\Tilde{r}}$.
An application of the strong maximum principle then shows that
${\varphi^{}_{\mspace{-2mu}*}}(x)\le \epsilon (\Tilde{r}-\abs{x})$ for all $x\in B_{\Tilde{r}}^c$.
Therefore, using \cref{L4.1}, we obtain
\begin{equation*}
\babs{\nabla{\varphi^{}_{\mspace{-2mu}*}}(x)}^2 \,\le\,
C' (1+\abs{x}) \,\le\, C'\bigl(1+\Tilde{r}-\epsilon^{-1}{\varphi^{}_{\mspace{-2mu}*}}(x)\bigr)
\qquad \forall\, x\in B_{\Tilde{r}}^c\,,
\end{equation*}
for some constant $C'$.
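Since $a$ is bounded by \cref{A4.1}, and
$\cH=\tfrac{1}{2}\bigl\langle a\nabla{\varphi^{}_{\mspace{-2mu}*}},\nabla{\varphi^{}_{\mspace{-2mu}*}}\bigr\rangle$,
the gradient bound above yields
\begin{equation*}
\cH(x) \,\le\, C''\bigl(1+\abs{{\varphi^{}_{\mspace{-2mu}*}}(x)}\bigr)
\qquad\forall\,x\in B_{\Tilde{r}}^c
\end{equation*}
for some constant $C''$; since $\cH$ is bounded on the compact set
$\Bar{B}_{\Tilde{r}}$, the bound \cref{ET4.1C} follows.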
\end{proof}
We next present the variational formula over functions in
$\Cc^2({\mathds{R}^{d}})$ whose derivatives up to second order have at most polynomial growth
in $\abs{x}$.
Let $\Cc_{\mathsf{pol}}^2({\mathds{R}^{d}})$ denote this space of functions.
\begin{theorem}\label{T4.2}
Under \cref{A3.1} alone, we have
\begin{equation}\label{ET4.2A}
{\rho^{}_*} \,= \, \adjustlimits\inf_{g \in \Cc^2({\mathds{R}^{d}})}
\sup_{\mu\in\cP({\mathcal{Z}})}\,F(g,\mu)\,.
\end{equation}
Under \cref{A4.1,A4.2}\,\textup{(a)} or \textup{(b)}, we have
\begin{equation}\label{ET4.2B}
{\rho^{}_*} \,= \, \adjustlimits\inf_{g \in \Cc_{\mathsf{pol}}^2({\mathds{R}^{d}})}
\sup_{\mu\in\cP({\mathcal{Z}})}\,F(g,\mu)
\,=\, \adjustlimits\sup_{\mu\in\cP({\mathcal{Z}})}
\inf_{g \in \Cc_{\mathsf{pol}}^2({\mathds{R}^{d}})}\, F(g,\mu)\,.
\end{equation}
\end{theorem}
\begin{proof}
By \cref{E-eigen4,ET3.1A} we have
\begin{equation*}
\adjustlimits\max_{\xi\in{\mathscr{K}}\,}\max_{y\in{\mathds{R}^{d}}}\;
\bigl[\sA {\varphi^{}_{\mspace{-2mu}*}} (x,\xi,y) + \sR(x,\xi,y)\bigr]\,=\, {\rho^{}_*}\,.
\end{equation*}
Since ${\varphi^{}_{\mspace{-2mu}*}}\in\Cc^2({\mathds{R}^{d}})$, this implies that
\begin{equation*}
\adjustlimits\inf_{g \in \Cc^2({\mathds{R}^{d}})}
\sup_{\mu\in\cP({\mathcal{Z}})}\,F(g,\mu) \,\le\, {\rho^{}_*}\,.
\end{equation*}
On the other hand, by \cref{T3.1}\,(d), it follows that for any
$g\in \Cc^2({\mathds{R}^{d}})$ we have
\begin{equation*}
\sup_{z\in{\mathcal{Z}}}\, \bigl[\sA g(z) + \sR(z)\bigr] \,\ge\, {\rho^{}_*}\,,
\end{equation*}
which then implies the converse inequality
\begin{equation*}
\adjustlimits\inf_{g \in \Cc^2({\mathds{R}^{d}})}
\sup_{\mu\in\cP({\mathcal{Z}})}\,F(g,\mu) \,\ge\, {\rho^{}_*}\,.
\end{equation*}
This proves \cref{ET4.2A}.
Concerning \cref{ET4.2B}, the first equality follows as in the preceding
paragraph since ${\varphi^{}_{\mspace{-2mu}*}}\in\Cc_{\mathsf{pol}}^2({\mathds{R}^{d}})$ by
Assumptions \ref{A3.1}\,(i)--(ii) and \ref{A4.1}, and \cref{L4.1}.
Turning now our attention to the second equality in \cref{ET4.2B},
recall from the proof of \cref{P4.1}
that $\eta_{v_*}$ denotes the invariant probability measure of
$\widetilde\Lg_{v_*}^{{\varphi^{}_{\mspace{-2mu}*}}}$.
Under \cref{A4.2}\,(a) or (b), \cref{L4.2} shows
that $\Phi_{\mspace{-2mu}*}^{-1}(x)$ grows faster in $\abs{x}$ than any polynomial.
Therefore, $\int_{\mathds{R}^{d}} \abs{x}^n\, \eta_{v_*}(\mathrm{d}{x})<\infty$
for all $n\in\mathds{N}$ by \cref{ET3.1B}.
Since $\abs{\nabla{\varphi^{}_{\mspace{-2mu}*}}(x)}$ has at most polynomial growth,
and $b$ has at most linear growth, we obtain
\begin{equation}\label{PT4.2A}
\int_{{\mathds{R}^{d}}}\babs{\widetilde\Lg_{v_*}^{{\varphi^{}_{\mspace{-2mu}*}}} f(x)}\,\eta_{v_*}(\mathrm{d}{x})\,<\,\infty
\qquad\forall\,f\in\Cc_{\mathsf{pol}}^2({\mathds{R}^{d}})\,.
\end{equation}
Continuing, since \cref{PT4.2A} holds,
it is standard to show, by employing a cut-off function, that
\begin{equation}\label{PT4.2B}
\int_{{\mathds{R}^{d}}}\widetilde\Lg_{v_*}^{{\varphi^{}_{\mspace{-2mu}*}}} f(x)\,\eta_{v_*}(\mathrm{d}{x})\,=\,0
\qquad\forall\, f\in\Cc_{\mathsf{pol}}^2({\mathds{R}^{d}})\,.
\end{equation}
Let $\mu_*\in\eom_\sA$ denote the ergodic occupation measure corresponding
to $\eta_{v_*}$, that is,
\begin{equation*}
\mu_*(\mathrm{d}{x},\mathrm{d}{\xi},\mathrm{d}{y})\,=\,\eta_{v_*}(\mathrm{d}{x})\,\delta_{v_*(x)}(\mathrm{d}{\xi})\,
\delta_{\nabla{\varphi^{}_{\mspace{-2mu}*}}(x)}(\mathrm{d}{y})\,.
\end{equation*}
\Cref{PT4.2B} implies that
\begin{equation}\label{PT4.2C}
F(g,\mu_*) \,=\, \,\int_{{\mathcal{Z}}} \sR(z)\,\mu_*(\mathrm{d}{z})
\,=\, {\rho^{}_*}\qquad\forall\,g \in \Cc_{\mathsf{pol}}^2({\mathds{R}^{d}})\,.
\end{equation}
Since
\begin{equation*}
\adjustlimits\sup_{\mu\in\cP({\mathcal{Z}})}\inf_{g \in \Cc_{\mathsf{pol}}^2({\mathds{R}^{d}})}\;
F(g,\mu)\,\le\,
\adjustlimits\inf_{g \in \Cc_{\mathsf{pol}}^2({\mathds{R}^{d}})} \sup_{\mu\in\cP({\mathcal{Z}})}\,F(g,\mu)\,,
\end{equation*}
the second equality in \cref{ET4.2B} then follows by \cref{ET4.2A,PT4.2C}.
\end{proof}
\section{The risk-sensitive cost minimization problem}\label{S5}
Using \cref{L4.1}, we can improve the main result in
\cite{AB-18} which assumes bounded drift and running cost.
We say that a function $f\colon{\mathcal{X}}\to\mathds{R}$ defined on a locally compact space
is \emph{coercive, or near-monotone,
relative to a constant $\beta\in\mathds{R}$} if there exists a compact set $K$ such
that $\inf_{K^c}\,f >\beta$.
Recall that an admissible control $\xi$ for \cref{E-sde1} is a process
$\xi_t(\omega)$ which takes values in ${\mathscr{K}}$, is jointly measurable in
$(t,\omega)\in[0,\infty)\times\Omega$, and is
non-anticipative, that is,
for $s < t$, $W_{t} - W_{s}$ is independent of $\sF_{s}$ given in \cref{E-sF}.
We let ${\Xi}$ denote the class of admissible controls, and $\Exp^x_\xi$ the
expectation operator on the canonical space of the
process under the control $\xi\in{\Xi}$, conditioned on the
process $X$ starting from $x\in\mathds{R}^{d}$ at $t=0$.
Let $c\colon{\mathds{R}^{d}}\times{\mathscr{K}}\to\mathds{R}$ be continuous, and
Lipschitz continuous in its first argument
uniformly with respect to the second.
We define the risk-sensitive penalty by
\begin{equation*}
\sE^x_\xi\,=\, \sE^x_\xi(c)\,\coloneqq\, \limsup_{T\to\infty}\;\frac{1}{T}\,
\log\Exp^x_\xi\Bigl[\mathrm{e}^{\int_{0}^{T} c(X_{t},\xi_t)\,\mathrm{d}{t}}\Bigr]\,,
\quad \xi\in{\Xi}\,,
\end{equation*}
and the risk-sensitive optimal values by
$\sE^x_* \coloneqq \inf_{\xi\in\,{\Xi}}\,\sE^x_\xi$, and
$\sE_* \coloneqq \inf_{x\in\,{\mathds{R}^{d}}}\,\sE^x_*$.
Let
\begin{equation*}
\widehat\cG f(x) \,\coloneqq\, \frac{1}{2}\trace\left(a(x)\nabla^{2}f(x)\right)
+ \min_{\xi\in{\mathscr{K}}}\, \bigl[\bigl\langle b(x,\xi),
\nabla f(x)\bigr\rangle + c(x,\xi) f(x)\bigr]\,,\quad f\in\Cc^2({\mathds{R}^{d}})\,,
\end{equation*}
and
\begin{equation*}
\widehat\lambda_*\,=\,\widehat\lambda_*(c)\,\coloneqq\,\inf\,\Bigl\{\lambda\in\mathds{R}\,
\colon \exists\, \varphi\in\Sobl^{2,d}({\mathds{R}^{d}}),\ \varphi>0, \
\widehat\cG\varphi -\lambda\varphi\le 0 \text{\ a.e.\ in\ } {\mathds{R}^{d}}\Bigr\}\,.
\end{equation*}
We say that $\widehat\lambda_*$ is \emph{strictly monotone at $c$ on the right}
if $\widehat\lambda_*(c+h)>\widehat\lambda_*(c)$ for all non-trivial nonnegative
functions $h$ with compact support.
\Cref{E-Prop} below improves \cite[Proposition~1.1]{AB-18}.
We first state the assumptions.
\begin{assumption}\label{A5.1}
In addition to \cref{A4.1} we require the following.
\begin{enumerate}
\item[\ttup i]
The drift $b$ and running cost $c$ satisfy, for some $\theta\in[0,1)$
and a constant $\kappa_0$, the bound
\begin{equation*}
\abs{b(x,\xi)} \,\le\, \kappa_0\bigl(1+\abs{x}^\theta\bigr)\,,\quad\text{and\ \ }
\abs{c(x,\xi)} \,\le\, \kappa_0\bigl(1+\abs{x}^{2\theta}\bigr)
\end{equation*}
for all $(x,\xi)\in{\mathds{R}^{d}}\times{\mathscr{K}}$.
\item[\ttup{ii}]
The drift $b$ satisfies
\begin{equation}\label{EA5.1A}
\frac{1}{\abs{x}^{1-\theta}}\;
\max_{\xi\in{\mathscr{K}}}\;\bigl\langle b(x,\xi),\, x\bigr\rangle^{+}
\;\xrightarrow[\abs{x}\to\infty]{}\;0\,.
\end{equation}
\end{enumerate}
\end{assumption}
\begin{proposition}\label{E-Prop}
Grant \cref{A5.1}, and suppose that $c$ is coercive relative to $\sE_*$.
Then the HJB equation
\begin{equation}\label{E1-HJB}
\min_{\xi\in{\mathscr{K}}}\;
\bigl[\Lg_\xi V_{\mspace{-2mu}*}(x) + c(x,\xi)\,V_{\mspace{-2mu}*}(x)\bigr] \,=\, \sE_*\,V_{\mspace{-2mu}*}(x)
\qquad\forall\,x\in{\mathds{R}^{d}}
\end{equation}
has a solution $V_{\mspace{-2mu}*}\in\Cc^{2}(\mathds{R}^{d})$,
satisfying $\inf_{{\mathds{R}^{d}}}\,V_{\mspace{-2mu}*}>0$, and the following hold:
\begin{enumerate}
\item[\ttup a]
$\sE^x_*=\sE_*=\widehat\lambda_*$ for all $x\in{\mathds{R}^{d}}$.
\item[\ttup b]
Any $v\in{\Xi_{\mathsf{sm}}}$ that satisfies
\begin{equation}\label{EP1.1A}
\Lg_v V_{\mspace{-2mu}*}(x) + c\bigl(x,v(x)\bigr)\,V_{\mspace{-2mu}*}(x)\,=\,
\min_{\xi\in{\mathscr{K}}}\; \bigl[\Lg_\xi V_{\mspace{-2mu}*}(x) + c(x,\xi)\,V_{\mspace{-2mu}*}(x)\bigr]
\end{equation}
a.e.\ $x\in{\mathds{R}^{d}}$, is stable, and is optimal, that is, $\sE^x_v=\sE_*$ for all $x\in{\mathds{R}^{d}}$.
\item[\ttup c]
It holds that
\begin{equation*}
V_{\mspace{-2mu}*}(x) \,=\, \Exp^x_v\Bigl[\mathrm{e}^{\int_{0}^{T}
[c(X_{t},v(X_{t}))-\sE_*]\,\mathrm{d}{t}}\,V_{\mspace{-2mu}*}(X_T)\Bigr]
\qquad\forall\, (T,x)\in\mathds{R}_+\times{\mathds{R}^{d}}\,,
\end{equation*}
for any $v\in{\Xi_{\mathsf{sm}}}$ that satisfies \cref{EP1.1A}.
\item[\ttup d]
If $\widehat\lambda_*$ is strictly monotone at $c$ on the right,
then there exists a unique positive
solution to \cref{E1-HJB}, up to a multiplicative constant,
and any optimal $v\in{\Xi_{\mathsf{sm}}}$ satisfies \cref{EP1.1A}.
\end{enumerate}
\end{proposition}
\begin{proof}
A modification of \cite[Lemma~3.2]{AB-18}
(e.g., applying It\^{o}'s formula to the function $f(x)= \abs{x}^{2+2\theta}$)
shows that \cref{EA5.1A} implies that
\begin{equation*}
\limsup_{t\to\infty}\; \frac{1}{t}\;\Exp^x_\xi\bigl[\abs{X_{t}}^{1+\theta}\bigr]
\, =\, 0 \qquad\forall\,\xi\in{\Xi}\,.
\end{equation*}
From this point on, the proof follows as in \cite{AB-18}, using \cref{L4.1}.
Indeed, parts (a) and (b) follow from \cite[Theorem~3.4]{AB-18}
by using the above estimate and \cref{L4.1}.
Since $\inf_{{\mathds{R}^{d}}}\,V_{\mspace{-2mu}*}>0$, any minimizing selector is recurrent. Moreover,
the twisted diffusion corresponding to the minimizing selector is regular.
Thus part (c) follows from \cite[Theorem~1.5]{AB-18}.
In addition, the hypothesis in (d) implies that for any minimizing selector $v$,
$\lambda_v=\widehat\lambda_*$ is strictly monotone at $c$ on the right, which, in turn,
implies the simplicity of the principal eigenvalue by
\cite[Theorem~1.2]{AB-18}. This also implies the last claim
by \cite[Lemma~3.6]{AB-18}.
\end{proof}
\section*{Acknowledgements}
The work of Ari Arapostathis was supported in part by
the National Science Foundation through grant DMS-1715210, in part by
the Army Research Office through grant W911NF-17-1-001,
and in part by the Office of Naval Research through grant N00014-16-1-2956
which was approved for public release under DCN \#43-5025-19.
The research of Anup Biswas was supported in part by an INSPIRE faculty fellowship
and DST-SERB grant EMR/2016/004810,
while the work of Vivek Borkar was supported by a J.\ C.\ Bose Fellowship.
\section{Introduction}
Subject to the truth of the Riemann Hypothesis (RH), the nontrivial zeros of the Riemann zeta-function can be written as $\rho=\tfrac{1}{2}+i\gamma$, where $\gamma\in\mathbb{R}$. Denoting consecutive ordinates of zeros by $0<\gamma\leq\gamma'$, we define the normalized gap
\begin{equation*}
\delta(\gamma):=(\gamma'-\gamma)\frac{\log\gamma}{2\pi}.
\end{equation*}
It is well-known that
\begin{displaymath}
N(T):=\sum_{0<\gamma\leq T}1=\frac{T}{2\pi}\log\frac{T}{2\pi}-\frac{T}{2\pi}+O(\log T)
\end{displaymath}
for $T\geq 10$. Hence $\delta(\gamma)$ is $1$ on average. It is expected that there are arbitrarily large and arbitrarily small (normalized) gaps between consecutive zeros of the Riemann zeta-function on the critical line, i.e.
\begin{equation*}
\lambda:=\limsup_{\gamma}\delta(\gamma)=\infty\quad\textrm{and}\ \ \mu:=\liminf_{\gamma}\delta(\gamma)=0.
\end{equation*}
In this article, we focus only on the large gaps, and prove the following theorem.
\begin{theorem}
Assume RH. Then we have $\lambda>2.9$.
\end{theorem}
Very little is known about $\lambda$ unconditionally. Selberg [\textbf{\ref{S}}] remarked that he could prove $\lambda>1$. Conditionally, Bredberg [\textbf{\ref{B1}}] showed that $\lambda>2.766$ under the assumption of RH (see also [\textbf{\ref{M}},\textbf{\ref{MO}},\textbf{\ref{CGG3}},\textbf{\ref{H}},\textbf{\ref{BMN}},\textbf{\ref{FW1}}] for work in this direction), and on the Generalized Riemann Hypothesis (GRH) it is known that $\lambda>3.072$ [\textbf{\ref{FW}}] (see also [\textbf{\ref{CGG2}},\textbf{\ref{Ng}},\textbf{\ref{B}}]). These results either use Hall's approach based on Wirtinger's inequality, or exploit the following idea of Mueller [\textbf{\ref{M}}].
Let $H:\mathbb{C}\rightarrow\mathbb{C}$ and consider the following functions
\begin{equation*}
\mathcal{M}_1(H,T)=\int_{0}^{T}\big|H(\tfrac{1}{2}+it)\big|^2dt
\end{equation*}
and
\begin{equation*}
\mathcal{M}_2(H,T;c)=\int_{-c/L}^{c/L}\sum_{0<\gamma\leq T}\big|H(\tfrac{1}{2}+i(\gamma+\alpha))\big|^2d\alpha,
\end{equation*}
where $L=\log\frac{T}{2\pi}$. We note that if
\begin{equation*}
h(c):=\frac{\mathcal{M}_2(H,T;c)}{\mathcal{M}_1(H,T)}<1
\end{equation*}
as $T\rightarrow\infty$, then $\lambda>c/\pi$, and if $h(c)>1$ as $T\rightarrow\infty$, then $\mu<c/\pi$. Indeed, if all the normalized gaps were at most $c/\pi$, then the intervals $(\gamma-c/L,\gamma+c/L)$ would cover essentially all of $(0,T]$, forcing $\mathcal{M}_2(H,T;c)\geq(1+o(1))\mathcal{M}_1(H,T)$.
Mueller [\textbf{\ref{M}}] applied this idea to $H(s)=\zeta(s)$. Using $H(s)=\sum_{n\leq T^{1-\varepsilon}}d_{2.2}(n)n^{-s}$, where
the arithmetic function $d_k(n)$ is defined in terms of the Dirichlet series
\begin{equation*}
\zeta(s)^k=\sum_{n=1}^{\infty}\frac{d_k(n)}{n^s}\qquad(\sigma>1)
\end{equation*}
for any real number $k$, Conrey, Ghosh and Gonek [\textbf{\ref{CGG1}}] showed that $\lambda>2.337$. Later, assuming GRH, they applied to $H(s)=\zeta(s)\sum_{n\leq T^{1/2-\varepsilon}}n^{-s}$ and obtained $\lambda>2.68$ [\textbf{\ref{CGG2}}]. By considering a more general choice
\begin{equation*}
H(s)=\zeta(s)\sum_{n\leq T^{1/2-\varepsilon}}\frac{d_r(n)P(\frac{\log y/n}{\log y})}{n^s},
\end{equation*}
where $P(x)$ is a polynomial, Ng [\textbf{\ref{Ng}}] improved that result to $\lambda>3$ (using $r=2$ and $P(x)=(1-x)^{30}$). In the last two papers, GRH is needed to estimate certain exponential sums resulting from the evaluation of the discrete mean value over the zeros in $\mathcal{M}_2(H,T;c)$. Recently, Bui and Heath-Brown [\textbf{\ref{BH-B}}] showed how one can use a generalization of the Vaughan identity and the hybrid large sieve inequality to circumvent the assumption of GRH for such exponential sums. Here we use that idea to obtain a weaker version of Ng's result without invoking GRH. It is possible that Feng and Wu's result $\lambda>3.072$ can also be obtained just assuming RH by this method. However, we opt to work on Ng's result for simplicity.
Instead of using the divisor function $d(n)=d_2(n)$, we choose
\begin{equation*}
H(s)=\zeta(s)\sum_{n\leq y}\frac{h(n)P(\frac{\log y/n}{\log y})}{n^s},
\end{equation*}
where $y=T^{\vartheta}$, $P(x)$ is a polynomial and $h(n)$ is a multiplicative function satisfying
\begin{equation}\label{500}
h(n)=\left\{ \begin{array}{ll}
d(n) &\qquad \textrm{if $n$ is square-free,}\\
0 & \qquad\textrm{otherwise.}
\end{array} \right.
\end{equation}
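In particular, $h(n)=\mu(n)^2\,2^{\omega(n)}$, where $\omega(n)$ denotes the number of distinct prime factors of $n$. For illustration only (this short sketch is ours and appears in no reference), $h$ can be computed as follows:
\begin{verbatim}
# Illustrative sketch: h(n) = d(n) on square-free n and 0 otherwise,
# i.e. h(n) = mu(n)^2 * 2^omega(n).  Uses sympy for factorization.
from sympy import factorint

def h(n):
    exponents = factorint(n).values()   # exponents in n = prod p^e
    if any(e > 1 for e in exponents):   # n is not square-free
        return 0
    return 2 ** len(exponents)          # d(n) = 2^omega(n) here
\end{verbatim}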
In Section 3 and Section 4 we shall prove the following two key lemmas.
\begin{lemma}
Suppose $0<\vartheta<\tfrac{1}{2}$. We have
\begin{eqnarray*}
\mathcal{M}_1(H,T)=\frac{AT(\log y)^{9}}{6}\int_{0}^{1}(1-x)^{3}\bigg(\vartheta^{-1}P_{1}(x)^2-2P_{1}(x)P_{2}(x)\bigg)dx+O(TL^8),
\end{eqnarray*}
where
\begin{equation*}
A=\prod_{p}\bigg(1+\frac{8}{p}\bigg)\bigg(1-\frac{1}{p}\bigg)^{8}
\end{equation*}
and
\begin{equation*}
P_{r}(x)=\int_{0}^{x}t^{r}P(x-t)dt.
\end{equation*}
\end{lemma}
\begin{lemma}
Suppose $0<\vartheta<\tfrac{1}{2}$ and $P(0)=P'(0)=0$. We have
\begin{eqnarray*}
\sum_{0<\gamma\leq T}H(\rho+i\alpha)H(1-\rho-i\alpha)&=&\frac{ATL(\log y)^{9}}{6\pi}\int_{0}^{1}(1-x)^{3}\textrm{Re}\bigg\{{\sum_{j=1}^{\infty}(i\alpha\log y)^jB(j;x)}\bigg\}dx\\
&&\qquad+O_\varepsilon(TL^{9+\varepsilon})
\end{eqnarray*}
uniformly for $\alpha\ll L^{-1}$, where
\begin{eqnarray*}
&&B(j;u)=-\frac{2P_1(u)P_{j+2}(u)}{(j+2)!}+\frac{2\vartheta P_2(u)P_{j+2}(u)}{(j+2)!}+\frac{4\vartheta P_1(u)P_{j+3}(u)}{(j+3)!}\nonumber\\
&&\qquad-\frac{\vartheta}{(j+2)!}\int_{0}^{u}t(\vartheta^{-1}-t)^{j+2}P_{1}(u)P(u-t)dt\\
&&\qquad+\frac{\vartheta}{(j+1)!}\int_{0}^{u}t(\vartheta^{-1}-t)^{j+1}P_{2}(u)P(u-t)dt-\frac{\vartheta}{6j!}\int_{0}^{u}t(\vartheta^{-1}-t)^{j}P_{3}(u)P(u-t)dt.
\end{eqnarray*}
\end{lemma}
\noindent\textit{Proof of} Theorem 1.1. We take $\vartheta=\tfrac{1}{2}^{-}$. On RH we have
\begin{equation*}
\sum_{0<\gamma\leq T}\big|H(\tfrac{1}{2}+i(\gamma+\alpha))\big|^2=\sum_{0<\gamma\leq T}H(\rho+i\alpha)H(1-\rho-i\alpha).
\end{equation*}
Note that this is the only place we need to assume RH. Lemma 1.2 then implies that
\begin{eqnarray*}
\int_{-c/L}^{c/L}\sum_{0<\gamma\leq T}\big|H(\tfrac{1}{2}+i(\gamma+\alpha))\big|^2d\alpha\sim\frac{AT(\log y)^{9}}{6\pi}\sum_{j=1}^{\infty}\frac{(-1)^jc^{2j+1}}{2^{2j-1}(2j+1)}\int_{0}^{1}(1-x)^{3}B(2j;x)dx.
\end{eqnarray*}
Hence
\begin{eqnarray*}
h(c)=\frac{1}{2\pi}\frac{\sum_{j=1}^{\infty}\frac{(-1)^jc^{2j+1}}{2^{2j-1}(2j+1)}\int_{0}^{1}(1-x)^{3}B(2j;x)dx}{\int_{0}^{1}(1-x)^{3}(P_1(x)^2-P_1(x)P_2(x))dx}+o(1),
\end{eqnarray*}
as $T\rightarrow\infty$. Consider the polynomial $P(x)=\sum_{j=2}^{M}c_jx^j$. Choosing $M=6$ and running Mathematica's Minimize command, we obtain $\lambda>2.9$. Precisely, with
\begin{eqnarray*}
P(x)=1000x^2 - 9332x^3 + 30134x^4 - 40475x^5 + 19292x^6,
\end{eqnarray*}
we have
\begin{eqnarray*}
h(2.9\pi)=0.99725\ldots<1,
\end{eqnarray*}
and this proves the theorem.
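The numerical optimization above can be reproduced directly from Lemma 1.1 and Lemma 1.2. The following sketch is our own illustration (the function names are ours, and the $j$-series is truncated at a finite $J$, which suffices here since the terms decay like $c^{2j}/(2j)!$); it evaluates $h(c)$ for the stated polynomial:
\begin{verbatim}
# Sketch: evaluate h(c) from the formulas of Lemmas 1.1-1.2, theta = 1/2.
import math
import numpy as np
from scipy.integrate import quad

theta = 0.5
P = np.polynomial.Polynomial([0, 0, 1000, -9332, 30134, -40475, 19292])

def P_r(r, x):
    # P_r(x) = int_0^x t^r P(x - t) dt
    return quad(lambda t: t**r * P(x - t), 0.0, x)[0]

def I(j, u):
    # int_0^u t (1/theta - t)^j P(u - t) dt
    return quad(lambda t: t * (1.0/theta - t)**j * P(u - t), 0.0, u)[0]

def B(j, u):
    f = math.factorial
    return (-2*P_r(1, u)*P_r(j+2, u)/f(j+2)
            + 2*theta*P_r(2, u)*P_r(j+2, u)/f(j+2)
            + 4*theta*P_r(1, u)*P_r(j+3, u)/f(j+3)
            - theta/f(j+2) * P_r(1, u) * I(j+2, u)
            + theta/f(j+1) * P_r(2, u) * I(j+1, u)
            - theta/(6*f(j)) * P_r(3, u) * I(j, u))

def h_ratio(c, J=20):
    num = sum((-1)**j * c**(2*j+1) / (2**(2*j-1)*(2*j+1))
              * quad(lambda x: (1-x)**3 * B(2*j, x), 0.0, 1.0)[0]
              for j in range(1, J+1))
    den = quad(lambda x: (1-x)**3*(P_r(1, x)**2 - P_r(1, x)*P_r(2, x)),
               0.0, 1.0)[0]
    return num / (2*math.pi*den)

print(h_ratio(2.9*math.pi))  # the text reports 0.99725... < 1
\end{verbatim}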
\begin{remark}
\emph{The above lemmas are unconditional. We note that in the case $r=2$, apart from the arithmetical factor $a_3$ being replaced by $A$, Lemma 1.1 is the same as that stated in [\textbf{\ref{Ng}}; Lemma 2.1] (see also [\textbf{\ref{B}}; Lemma 2.3]), while Lemma 1.2, under the additional condition $P(0)=P'(0)=0$, recovers Theorem 2 of Ng [\textbf{\ref{Ng}}] (and also Lemma 2.6 of Bui [\textbf{\ref{B}}]) without assuming GRH, though the latter are written in a slightly different and more complicated form. This is as expected because replacing the divisor function $d(n)$ by the arithmetic function $h(n)$ (as defined in \eqref{500}) in the definition of $H(s)$ only changes the arithmetical factor in the resulting mean value estimates. This substitution, however, makes our subsequent calculations much easier. Our arguments also work, without much change, if we set $h(n)=d_r(n)$ on square-free $n$ for some $r\in\mathbb{N}$, but we choose $r=2$ to simplify various statements and expressions in the paper.}
\end{remark}
\begin{remark}
\emph{In the course of evaluating $\mathcal{M}_2(H,T;c)$, we encounter an exponential sum of type (see Section 4.2)
\begin{equation*}
\sum_{n\leq y}\frac{h(n)P(\frac{\log y/n}{\log y})}{n}\sum_{m\leq nT/2\pi}a(m)e\bigg(-\frac{m}{n}\bigg)
\end{equation*}
for some arithmetic function $a(m)$. At this point, assuming GRH, Ng [\textbf{\ref{Ng}}] applied Perron's formula to the sum over $m$, and then moved the line of integration to $\textrm{Re}(s)=1/2+\varepsilon$. The main term arises from the residue at $s=1$ and the error terms in this case are easy to handle. To avoid being subject to GRH, we instead use the ideas in [\textbf{\ref{CGG1}}] and [\textbf{\ref{BH-B}}]. That leads to a sum of type
\begin{equation*}
\sum_{n\leq y}\frac{\mu(n)h(n)P(\frac{\log y/n}{\log y})}{n}.
\end{equation*}
This is essentially a variation of the prime number theorem, and here the polynomial $P(x)$ is required to vanish with order at least $2$ at $x=0$ (see Lemma 2.6). As a result, we cannot take the choice $P(x)=(1-x)^{30}$ as in [\textbf{\ref{Ng}}]. Here it is not clear how to choose a ``good" polynomial $P(x)$. Our theorem is obtained by numerically optimizing over polynomials $P(x)$ with degree less than $7$. It is probable that by considering higher degree polynomials, we can establish Ng's result $\lambda>3$ under only RH.}
\end{remark}
\noindent\textbf{Notation.} Throughout the paper, we denote
\begin{equation*}
[n]_y:=\frac{\log y/n}{\log y}.
\end{equation*}
For $Q,R\in C^\infty([0,1])$ we define
\begin{equation*}
Q_{r}(x)=\int_{0}^{x}t^{r}Q(x-t)dt\qquad\textrm{and}\qquad R_{r}(x)=\int_{0}^{x}t^{r}R(x-t)dt.
\end{equation*}
We let $\varepsilon>0$ denote an arbitrarily small positive number, whose value may change from one occurrence to the next.
\section{Various lemmas}
The following two lemmas are in [\textbf{\ref{CGG1}}; Lemma 2 and Lemma 3].
\begin{lemma}\label{501}
Suppose that $A(s)=\sum_{m=1}^{\infty}a(m)m^{-s}$, where $a(m)\ll_\varepsilon m^\varepsilon$, and $B(s)=\sum_{n\leq y}b(n)n^{-s}$, where $b(n)\ll_\varepsilon n^\varepsilon$. Then we have
\begin{eqnarray*}
&&\frac{1}{2\pi i}\int_{a+i}^{a+iT}\chi(1-s)A(s)B(1-s)ds=\sum_{n\leq y}\frac{b(n)}{n}\sum_{m\leq nT/2\pi}a(m)e\bigg(-\frac{m}{n}\bigg)+O_\varepsilon(yT^{1/2+\varepsilon}),
\end{eqnarray*}
where $a=1+L^{-1}$.
\end{lemma}
\begin{lemma}\label{300}
Suppose that $A_j(s)=\sum_{n=1}^{\infty}a_{j}(n)n^{-s}$ is absolutely convergent for $\sigma>1$, $1\leq j\leq k$, and that
\begin{equation*}
A(s)=\sum_{n=1}^{\infty}\frac{a(n)}{n^s}=\prod_{j=1}^{k}A_j(s).
\end{equation*}
Then for any $l\in\mathbb{N}$, we have
\begin{equation*}
\sum_{n=1}^{\infty}\frac{a(ln)}{n^s}=\sum_{l=l_1\ldots l_k}\prod_{j=1}^{k}\bigg(\sum_{\substack{n\geq1\\(n,\prod_{i<j}l_i)=1}}\frac{a_{j}(l_jn)}{n^s}\bigg).
\end{equation*}
\end{lemma}
We shall need estimates for various divisor-like sums. Throughout the paper, we let
\begin{equation*}
F_\tau(n)=\prod_{p|n}\big(1+O(p^{-\tau})\big),
\end{equation*}
for $\tau>0$ and the constant in the $O$-term is implicit and independent of $\tau$.
\begin{lemma}\label{504}
For any $Q\in C^\infty([0,1])$, there exists an absolute constant $\tau_0>0$ such that
\begin{eqnarray*}
&&\emph{(i)}\ \sum_{an\leq y}\frac{h(an)Q([an]_y)}{n}=C(\log y)^2h(a)\prod_{p|a}\bigg(1+\frac{2}{p}\bigg)^{-1}Q_1([a]_y)+O(d(a)F_{\tau_0}(a)L),\\
&&\emph{(ii)}\ \sum_{an\leq y}\frac{h(an)Q([an]_y)\log n}{n}=C(\log y)^3h(a)\prod_{p|a}\bigg(1+\frac{2}{p}\bigg)^{-1}Q_2([a]_y)+O(d(a)F_{\tau_0}(a)L^2),
\end{eqnarray*}
where
\begin{equation*}
C=\prod_{p}\bigg(1+\frac{2}{p}\bigg)\bigg(1-\frac{1}{p}\bigg)^2.
\end{equation*}
\end{lemma}
\begin{proof}
By a method of Selberg [\textbf{\ref{S}}] we have
\begin{equation*}
\sum_{n\leq t}\frac{h(an)}{n}=\frac{C(\log t)^2}{2}h(a)\prod_{p|a}\bigg(1+\frac{2}{p}\bigg)^{-1}+O(d(a)F_{\tau_0}(a)L)
\end{equation*}
for any $t\leq T$. The first statement then follows from partial summation.
The second statement is an easy consequence of the first one.
\end{proof}
\begin{lemma}\label{507}
For any $Q\in C^\infty([0,1])$, we have
\begin{equation*}
\sum_{n\leq y}\frac{h(n)^2\varphi(n)Q([n]_y)}{n^2}\prod_{p|n}\bigg(1+\frac{2}{p}\bigg)^{-2}=\frac{D(\log y)^4}{6}\int_{0}^{1}(1-x)^3Q(x)dx+O(L^3),
\end{equation*}
where
\begin{equation*}
D=\prod_p\bigg[1+\frac{4(p-1)}{p^2}\bigg(1+\frac{2}{p}\bigg)^{-2}\bigg]\bigg(1-\frac{1}{p}\bigg)^4.
\end{equation*}
\end{lemma}
\begin{proof}
The proof is similar to the above lemma.
\end{proof}
We need a lemma concerning the size of the function $F_{\tau_0}(n)$ on average.
\begin{lemma}\label{505}
Suppose $-1\leq\sigma\leq 0$. We have
\begin{equation*}
\sum_{n\leq y} \frac{d_k(n)F_{\tau_0}(n)}{n}\bigg(\frac{y}{n}\bigg)^{\sigma}\ll_k L^{k-1} \min\big\{|\sigma|^{-1},L\big\}.
\end{equation*}
\end{lemma}
\begin{proof}
We use Lemma 4.6 in [\textbf{\ref{BCY}}] that
\begin{equation*}
\sum_{n\leq y} \frac{d_k(n)}{n}\bigg(\frac{y}{n}\bigg)^{\sigma}\ll_k L^{k-1} \min\big\{|\sigma|^{-1},L\big\}.
\end{equation*}
We have
\begin{equation*}
F_{\tau_0}(n)\leq\prod_{p|n}\big(1+Ap^{-\tau_0}\big)=\sum_{\substack{l|n\\ l\,\textrm{square-free}}}l^{-\tau_0}A^{w(l)}
\end{equation*}
for some $A>0$, where $w(n)$ denotes the number of distinct prime factors of $n$. Hence
\begin{eqnarray*}
\sum_{n\leq y} \frac{d_k(n)F_{\tau_0}(n)}{n}\bigg(\frac{y}{n}\bigg)^{\sigma}\ll\sum_{l\leq y}\frac{d_k(l)A^{w(l)}}{l^{1+\tau_0}}\sum_{n\leq y/l}\frac{d_k(n)}{n}\bigg(\frac{y/l}{n}\bigg)^{\sigma}\ll_k L^{k-1} \min\big\{|\sigma|^{-1},L\big\},
\end{eqnarray*}
since $d_k(l)A^{w(l)}\ll l^{\tau_0/2}$ for sufficiently large $l$.
\end{proof}
\begin{lemma}\label{600}
Let $F(n)=F(n,0)$, where
\begin{equation*}
F(n,\alpha)=\prod_{p|n}\bigg(1-\frac{1}{p^{1+\alpha}}\bigg).
\end{equation*}
For any $Q\in C^\infty([0,1])$ satisfying $Q(0)=Q'(0)=0$, there exist an absolute constant $\tau_0>0$ and some $\nu\asymp (\log\log y)^{-1}$ such that
\begin{eqnarray*}
\mathcal{A}_1(y,Q;a,b,\underline{\alpha})&=&\sum_{\substack{an\leq y\\(n,b)=1}}\frac{\mu(n)h(n)Q([an]_y)}{\varphi(n)n^{\alpha_1}}F(n,\alpha_2)F(n,\alpha_3)\\
&=&U_1V_1(b)\bigg(\frac{Q''([a]_y)}{(\log y)^2}+\frac{2\alpha_1 Q'([a]_y)}{\log y}+\alpha_{1}^{2}Q([a]_y)\bigg)\\
&&\qquad+O(F_{\tau_0}(b)L^{-3})+O_\varepsilon\bigg(F_{\tau_0}(b)\bigg(\frac{y}{a}\bigg)^{-\nu} L^{-2+\varepsilon}\bigg)
\end{eqnarray*}
uniformly for $\alpha_j\ll L^{-1}$, $1\leq j\leq 3$, where $U_1=U_1(0,\underline{0})$ and $V_1(n)=V_1(0,n,\underline{0})$, with
\begin{equation*}
U_1(s,\underline{\alpha})=\prod_{p}\bigg[1-\frac{2F(p,\alpha_2)F(p,\alpha_3)}{\varphi(p)p^{s+\alpha_1}}\bigg]\bigg(1-\frac{1}{p^{1+s+\alpha_1}}\bigg)^{-2}
\end{equation*}
and
\begin{equation*}
V_1(s,n,\underline{\alpha})=\prod_{p|n}\bigg[1-\frac{2F(p,\alpha_2)F(p,\alpha_3)}{\varphi(p)p^{s+\alpha_1}}\bigg]^{-1}.
\end{equation*}
\end{lemma}
\begin{proof}
This is essentially a variation of the prime number theorem.
It suffices to consider $Q(x)=\sum_{j\geq 2}a_jx^j$. We have
\begin{equation*}
\mathcal{A}_1(y,Q;a,b,\underline{\alpha})=\sum_{j\geq 2}\frac{a_jj!}{(\log y)^j}\sum_{(n,b)=1}\frac{1}{2\pi i}\int_{(2)}\bigg(\frac{y}{a}\bigg)^s\frac{\mu(n)h(n)}{\varphi(n)n^{s+\alpha_1}}F(n,\alpha_2)F(n,\alpha_3)\frac{ds}{s^{j+1}}.
\end{equation*}
The sum over $n$ converges absolutely. Hence
\begin{equation*}
\mathcal{A}_1(y,Q;a,b,\underline{\alpha})=\sum_{j\geq 2}\frac{a_jj!}{(\log y)^j}\frac{1}{2\pi i}\int_{(2)}\bigg(\frac{y}{a}\bigg)^s\sum_{(n,b)=1}\frac{\mu(n)h(n)}{\varphi(n)n^{s+\alpha_1}}F(n,\alpha_2)F(n,\alpha_3)\frac{ds}{s^{j+1}}.
\end{equation*}
The sum in the integrand equals
\begin{equation*}
\prod_{p\nmid b}\bigg(1-\frac{2F(p,\alpha_2)F(p,\alpha_3)}{\varphi(p)p^{s+\alpha_1}}\bigg)=\frac{U_1(s,\underline{\alpha})V_1(s,b,\underline{\alpha})}{\zeta(1+s+\alpha_1)^2}.
\end{equation*}
Let $Y = o(T)$ be a large parameter to be chosen later. By Cauchy's theorem, $\mathcal{A}_1(y,Q;a,b,\underline{\alpha})$ is equal to the residue at $s=0$ plus integrals over the line segments $\mathcal{C}_1=\{s=it,t\in\mathbb{R},|t|\geq Y\}$, $\mathcal{C}_{2}=\{s=\sigma\pm iY,-\frac{c}{\log{Y}}\leq\sigma\leq 0\}$, and $\mathcal{C}_3=\{s=-\frac{c}{\log{Y}}+it,|t|\leq Y\}$, where $c$ is some fixed positive constant such that $\zeta(1+ s+\alpha_1)$ has no zeros in the region on the right hand side of the contour determined by the $\mathcal{C}_j$'s. Furthermore, we require that for such $c$ we have $1/\zeta(\sigma + it) \ll \log(2 + |t|)$ in this region [see \textbf{\ref{T}}; Theorem 3.11]. Then the integral over $\mathcal{C}_1$ is
\begin{equation*}
\ll F_{\tau_0}(b)L^{-j}(\log{Y})^2/Y^{j} \ll_\varepsilon F_{\tau_0}(b)L^{-2}Y^{-2+\varepsilon},
\end{equation*}
since $j\geq2$. The integral over $\mathcal{C}_2$ is
\begin{equation*}
\ll F_{\tau_0}(b)L^{-j}(\log{Y})/Y^{j+1} \ll_\varepsilon F_{\tau_0}(b)L^{-2}Y^{-3+\varepsilon}.
\end{equation*}
Finally, the contribution from $\mathcal{C}_3$ is
\begin{equation*}
\ll F_{\tau_0}(b)L^{-j}(\log Y)^j \bigg(\frac{y}{a}\bigg)^{-c/\log Y}\ll_\varepsilon F_{\tau_0}(b)\bigg(\frac{y}{a}\bigg)^{-c/\log Y}L^{-2+\varepsilon}.
\end{equation*}
Choosing $Y \asymp L$ gives an error so far of size $O_\varepsilon\big(F_{\tau_0}(b)(y/a)^{-\nu} L^{-2+\varepsilon}\big) + O_\varepsilon(F_{\tau_0}(b)L^{-4+\varepsilon})$.
For the residue at $s=0$, we write this as
\begin{equation*}
\sum_{j\geq 2}\frac{a_jj!}{(\log y)^j}\frac{1}{2 \pi i} \oint \bigg(\frac{y}{a}\bigg)^s \frac{U_1(s,\underline{\alpha})V_1(s,b,\underline{\alpha})}{\zeta(1+s+\alpha_1)^2} \frac{ds}{s^{j+1}},
\end{equation*}
where the contour is a circle of radius $\asymp L^{-1}$ around the origin. This integral is trivially bounded by $O(L^{-2})$ so that taking the first term in the Taylor series of $\zeta(1+s+\alpha_1)$ finishes the proof.
\end{proof}
\begin{lemma}\label{601}
For any $Q,R\in C^\infty([0,1])$, there exists an absolute constant $\tau_0>0$ such that
\begin{eqnarray*}
&&\mathcal{A}_2(y,Q,R;a_1,a_2,\alpha_1)=\sum_{\substack{a_1a_2l\leq y\\a_1m\leq y}}\frac{h(a_1a_2l)h(a_1m)Q([a_1m]_y)R([a_1a_2l]_y)V_1(a_1a_2lm)}{lm^{1+\alpha_1}}\\
&&\qquad\qquad=U_2(\log y)^4h(a_1a_2)h(a_1)V_1(a_1a_2)V_2(a_1)V_3(a_2)V_4(a_1a_2)\\
&&\qquad\qquad\qquad\qquad \int_{0}^{[a_1]_y}y^{-\alpha_1t}tQ([a_1]_y-t)R_1([a_1a_2]_y)dt+O(d_4(a_1)d(a_2)F_{\tau_0}(a_1a_2)L^3)
\end{eqnarray*}
uniformly for $\alpha_1\ll L^{-1}$, where
\begin{eqnarray*}
U_2=\prod_{p}\bigg(1+\frac{2V_1(p)}{p}\bigg)\bigg[1+\frac{2V_1(p)}{p}\bigg(1+\frac{2}{p}\bigg)\bigg(1+\frac{2V_1(p)}{p}\bigg)^{-1}\bigg]\bigg(1-\frac{1}{p}\bigg)^4,
\end{eqnarray*}
\begin{equation*}
V_2(n)=\prod_{p|n}\bigg(1+\frac{2V_1(p)}{p}\bigg)^{-1},\qquad
V_3(n)=\prod_{p|n}\bigg(1+\frac{2}{p}\bigg)\bigg(1+\frac{2V_1(p)}{p}\bigg)^{-1}
\end{equation*}
and
\begin{eqnarray*}
V_4(n)=\prod_{p|n}\bigg[1+\frac{2V_1(p)}{p}\bigg(1+\frac{2}{p}\bigg)\bigg(1+\frac{2V_1(p)}{p}\bigg)^{-1}\bigg]^{-1}.
\end{eqnarray*}
\end{lemma}
\begin{proof}
The proof uses Selberg's method [\textbf{\ref{S}}] similarly to Lemma \ref{504}. One first executes the sum over $m$, and then the sum over $l$.
\end{proof}
\begin{lemma}\label{602}
For any $Q,R\in C^\infty([0,1])$, we have
\begin{eqnarray*}
&&\emph{(i)}\quad\sum_{l_1l_2\leq y}\frac{h(l_1l_2)h(l_1)Q([l_1]_y)R([l_1l_2]_y)}{l_{1}l_{2}^{1+\alpha_1}} F(l_1,\alpha_2)F(l_1l_2,\alpha_3)V_1(l_1l_2)V_2(l_1)V_3(l_2)V_4(l_1l_2)\\
&&\qquad\qquad=\frac{W(\log y)^6}{6}\int_{0}^{1}\int_{0}^{x}(1-x)^3y^{-\alpha_1t_1}t_1Q(x)R(x-t_1)dt_1dx+O(L^5),\\
&&\emph{(ii)}\quad\sum_{pl_1l_2\leq y}\frac{\log p}{(p^{1+\alpha_4}-1)p^{\alpha_5}}\frac{h(pl_1l_2)h(l_1)Q([l_1]_y)R([pl_1l_2]_y)}{l_{1}l_{2}^{1+\alpha_1}}\\
&&\qquad\qquad\qquad\qquad F(pl_1,\alpha_2)F(pl_1l_2,\alpha_3)V_1(pl_1l_2)V_2(l_1)V_3(pl_2)V_4(pl_1l_2)\\
&&\qquad\qquad=\frac{W(\log y)^7}{3}\int_{0}^{1}\int_{\substack{t_j\geq0\\t_1+t_2\leq x}}(1-x)^3y^{-\alpha_1t_1-(\alpha_4+\alpha_5)t_2}t_1Q(x)R(x-t_1-t_2)dt_1dt_2dx\\
&&\qquad\qquad\qquad\qquad+O(L^6)
\end{eqnarray*}
uniformly for $\alpha_j\ll L^{-1}$, $1\leq j\leq5$, where
\begin{eqnarray*}
W=\prod_{p}\bigg(1+\frac{2F(p)V_1(p)V_3(p)V_4(p)}{p}+\frac{4F(p)^2V_1(p)V_2(p)V_4(p)}{p}\bigg)\bigg(1-\frac{1}{p}\bigg)^6.
\end{eqnarray*}
\end{lemma}
\begin{proof}
We consider the first statement. We start with the sum over $l_2$ on the left hand side of (i), which is
\begin{equation*}
\sum_{\substack{l_2\leq y/l_1\\(l_2,l_1)=1}}\frac{h(l_2)R([l_1l_2]_y)}{l_{2}^{1+\alpha_1}}F(l_2,\alpha_3)V_1(l_2)V_3(l_2)V_4(l_2).
\end{equation*}
As in how we prove Lemma \ref{504}, this equals
\begin{eqnarray}\label{701}
\prod_{p}\bigg\{W_1(p)^{-1}\bigg(1-\frac{1}{p}\bigg)^2\bigg\}(\log y)^2W_1(l_1)\int_{0}^{[l_1]_y}y^{-\alpha_1t_1}t_1R([l_1]_y-t_1)dt_1+O(L),
\end{eqnarray}
where
\begin{equation*}
W_1(n)=\prod_{p|n}\bigg(1+\frac{2F(p)V_1(p)V_3(p)V_4(p)}{p}\bigg)^{-1}.
\end{equation*}
Hence the required expression is
\begin{eqnarray}\label{700}
&&\prod_{p}\bigg\{W_1(p)^{-1}\bigg(1-\frac{1}{p}\bigg)^2\bigg\}(\log y)^2\sum_{l_1\leq y}\frac{h(l_1)^2Q([l_1]_y)}{l_{1}}\\
&&\qquad F(l_1,\alpha_2)F(l_1,\alpha_3)V_1(l_1)V_2(l_1)V_4(l_1)W_1(l_1)\int_{0}^{[l_1]_y}y^{-\alpha_1t_1}t_1R([l_1]_y-t_1)dt_1+O(L^5).\nonumber
\end{eqnarray}
Using Selberg's method [\textbf{\ref{S}}] again we have
\begin{eqnarray*}
&&\sum_{l_1\leq t}\frac{h(l_1)^2}{l_{1}}F(l_1,\alpha_2)F(l_1,\alpha_3)V_1(l_1)V_2(l_1)V_4(l_1)W_1(l_1)\\
&&\qquad\qquad=\prod_p\bigg\{W_2(p)^{-1}\bigg(1-\frac{1}{p}\bigg)^4\bigg\}\frac{(\log t)^4}{24}+O(L^3)
\end{eqnarray*}
for any $t\leq T$, where
\begin{equation*}
W_2(n)=\prod_{p|n}\bigg\{1+\frac{4F(p)^2V_1(p)V_2(p)V_4(p)W_1(p)}{p}\bigg\}^{-1}.
\end{equation*}
Partial summation then implies that \eqref{700} is equal to
\begin{eqnarray*}
\prod_p\bigg\{W_1(p)^{-1}W_2(p)^{-1}\bigg(1-\frac{1}{p}\bigg)^6\bigg\}\frac{(\log y)^4}{6}\int_{0}^{1}\int_{0}^{x}(1-x)^3y^{-\alpha_1t_1}t_1Q(x)R(x-t_1)dt_1dx+O(L^5).
\end{eqnarray*}
It is easy to check that the arithmetical factor is $W$, and we obtain the first statement.
For the second statement, we first notice that the contribution of the terms involving $p^{-s}$ with $\textrm{Re}(s)>1$ is $O(L^6)$. Hence the left hand side of (ii) is
\begin{eqnarray*}
&&2\sum_{l_1l_2\leq y}\frac{h(l_1l_2)h(l_1)Q([l_1]_y)}{l_{1}l_{2}^{1+\alpha_1}}F(l_1,\alpha_2)F(l_1l_2,\alpha_3)V_1(l_1l_2)V_2(l_1)V_3(l_2)V_4(l_1l_2)\nonumber\\
&&\qquad\sum_{\substack{p\leq y/l_1l_2\\(p,l_1l_2)=1}}\frac{(\log p)R([pl_1l_2]_y)}{p^{1+\alpha_4+\alpha_5}}+O(L^6).
\end{eqnarray*}
The same argument shows that we can include the terms $p|l_1l_2$ in the innermost sum with an admissible error $O(L^6)$, so that the above expression is equal to
\begin{eqnarray*}
&&2\sum_{p\leq y}\frac{\log p}{p^{1+\alpha_4+\alpha_5}}\sum_{l_1l_2\leq y/p}\frac{h(l_1l_2)h(l_1)Q([l_1]_y)R([pl_1l_2]_y)}{l_{1}l_{2}^{1+\alpha_1}}\\
&&\qquad\qquad\qquad\qquad\qquad\qquad F(l_1,\alpha_2)F(l_1l_2,\alpha_3)V_1(l_1l_2)V_2(l_1)V_3(l_2)V_4(l_1l_2)+O(L^6).
\end{eqnarray*}
We have
\begin{equation*}
\sum_{p\leq t}\frac{\log p}{p}=\log t+O(1)
\end{equation*}
for any $t\leq T$. The result follows by using Part (i) and partial summation.
\end{proof}
\section{Proof of Lemma 1.1}
To evaluate $\mathcal{M}_1(H,T)$, we first appeal to Theorem 1 of [\textbf{\ref{BCH}}] and obtain
\begin{eqnarray*}
\mathcal{M}_1(H,T)&=&T\sum_{m,n\leq y}\frac{h(m)h(n)P([m]_y)P([n]_y)(m,n)}{mn}\bigg(\log\frac{T(m,n)^2}{2\pi mn}+2\gamma-1\bigg)\\
&&\qquad+O_B(TL^{-B})+O_\varepsilon(y^2T^\varepsilon)
\end{eqnarray*}
for any $B>0$, where $\gamma$ is the Euler constant. Using the M\"obius inversion formula
\begin{equation*}
f\big((m,n)\big)=\sum_{\substack{l|m\\l|n}}\sum_{d|l}\mu(d)f\bigg(\frac{l}{d}\bigg),
\end{equation*}
we can write the above as
\begin{eqnarray*}
T\sum_{l\leq y}\sum_{d|l}\frac{\mu(d)}{dl}\sum_{m,n\leq y/l}\frac{h(lm)h(ln)P([lm]_y)P([ln]_y)}{mn}\bigg(\log\frac{T}{2\pi d^2mn}+2\gamma-1\bigg)+O_B(TL^{-B}).
\end{eqnarray*}
We next replace the term in the bracket by $\log \frac{T}{2\pi mn}$. This produces an error of size
\begin{eqnarray*}
\ll T\sum_{l\leq y}\frac{d(l)^2}{l}\bigg(\sum_{n\leq y/l}\frac{d(n)}{n}\bigg)^2\sum_{d|l}\frac{\log d}{d}\ll TL^8.
\end{eqnarray*}
Hence
\begin{eqnarray*}
\mathcal{M}_1(H,T)&=&T\sum_{l\leq y}\frac{\varphi(l)}{l^2}\sum_{m,n\leq y/l}\frac{h(lm)h(ln)P([lm]_y)P([ln]_y)}{mn}\big(L-\log m-\log n\big)+O(TL^8)\\
&=&TL\sum_{l\leq y}\frac{\varphi(l)}{l^2}\bigg(\sum_{n\leq y/l}\frac{h(ln)P([ln]_y)}{n}\bigg)^2\\
&&\qquad-2T\sum_{l\leq y}\frac{\varphi(l)}{l^2}\sum_{m,n\leq y/l}\frac{h(lm)h(ln)P([lm]_y)P([ln]_y)\log n}{mn}+O(TL^8).
\end{eqnarray*}
The result follows by using Lemma \ref{504}, Lemma \ref{507} and Lemma \ref{505}. Here we use the easily verified fact that $C^2D=A$, which amounts to the identity $(1+2/p)^2+4(p-1)/p^2=1+8/p$ at every prime $p$.
\section{Proof of Lemma 1.2}
We denote $H(s)=\zeta(s)G(s)$, i.e.
\begin{equation*}
G(s)=\sum_{n\leq y}\frac{h(n)P([n]_y)}{n^s}.
\end{equation*}
By Cauchy's theorem we have
\begin{eqnarray*}
\sum_{0<\gamma\leq T}H(\rho+i\alpha)H(1-\rho-i\alpha)=\frac{1}{2\pi i}\int_{\mathcal{C}}\frac{\zeta'}{\zeta}(s)\zeta(s+i\alpha)\zeta(1-s-i\alpha)G(s+i\alpha)G(1-s-i\alpha)ds,
\end{eqnarray*}
where $\mathcal{C}$ is the positively oriented rectangle with vertices at $1-a+i$, $a+i$, $a+iT$ and $1-a+iT$. Here $a=1+L^{-1}$ and $T$ is chosen so that the distance from $T$ to the nearest $\gamma$ is $\gg L^{-1}$. It is standard that the contribution from the horizontal segments of the contour is $O_\varepsilon(yT^{1/2+\varepsilon})$.
We denote the contribution from the right edge by $\mathcal{N}_1$, where
\begin{equation}\label{100}
\mathcal{N}_1=\frac{1}{2\pi i}\int_{a+i}^{a+iT}\chi(1-s-i\alpha)\frac{\zeta'}{\zeta}(s)\zeta(s+i\alpha)^2G(s+i\alpha)G(1-s-i\alpha)ds.
\end{equation}
From the functional equation we have
\begin{equation*}
\frac{\zeta'}{\zeta}(1-s)=\frac{\chi'}{\chi}(1-s)-\frac{\zeta'}{\zeta}(s).
\end{equation*}
Hence the contribution from the left edge, by substituting $s$ by $1-s$, is
\begin{eqnarray*}
&&\frac{1}{2\pi i}\int_{a-i}^{a-iT}\frac{\zeta'}{\zeta}(1-s)\zeta(1-s+i\alpha)\zeta(s-i\alpha)G(1-s+i\alpha)G(s-i\alpha)ds\nonumber\\
&=&\frac{1}{2\pi i}\int_{a-i}^{a-iT}\bigg(\frac{\chi'}{\chi}(1-s)-\frac{\zeta'}{\zeta}(s)\bigg)\zeta(1-s+i\alpha)\zeta(s-i\alpha)G(1-s+i\alpha)G(s-i\alpha)ds\nonumber\\
&=&-\overline{\mathcal{N}_2}+\overline{\mathcal{N}_1}+O_\varepsilon(yT^{1/2+\varepsilon}),
\end{eqnarray*}
where
\begin{equation}\label{506}
\mathcal{N}_2=\frac{1}{2\pi i}\int_{a+i}^{a+iT}\frac{\chi'}{\chi}(1-s)\zeta(1-s+i\alpha)\zeta(s-i\alpha)G(1-s+i\alpha)G(s-i\alpha)ds.
\end{equation}
Thus
\begin{equation}\label{806}
\sum_{0<\gamma\leq T}H(\rho+i\alpha)H(1-\rho-i\alpha)=2\textrm{Re}\big(\mathcal{N}_1\big)-\overline{\mathcal{N}_2}+O_\varepsilon(yT^{1/2+\varepsilon}).
\end{equation}
\subsection{Evaluate $\mathcal{N}_2$}
We move the line of integration in \eqref{506} to the $\tfrac{1}{2}$-line. As before, this produces an error of size $O_\varepsilon(yT^{1/2+\varepsilon})$. Hence we get
\begin{equation*}
\mathcal{N}_2=\frac{1}{2\pi}\int_{1-\alpha}^{T-\alpha}\frac{\chi'}{\chi}\big(\tfrac{1}{2}-it-i\alpha\big)\big|H(\tfrac{1}{2}+it)\big|^2dt+O_\varepsilon(yT^{1/2+\varepsilon}).
\end{equation*}
From Stirling's approximation we have
\begin{displaymath}
\frac{\chi'}{\chi}(\tfrac{1}{2}-it)=-\log\frac{t}{2\pi}+O(t^{-1})\qquad(t\geq 1).
\end{displaymath}
Combining this with Lemma 1.1 and integration by parts, we easily obtain
\begin{equation}\label{805}
\mathcal{N}_2=-\frac{ATL(\log y)^9}{12\pi}\int_{0}^{1}(1-x)^3\bigg(\vartheta^{-1}P_1(x)^2-2P_1(x)P_2(x)\bigg)dx+O(TL^9).
\end{equation}
\subsection{Evaluate $\mathcal{N}_1$}
It is easier to start with a more general sum
\begin{eqnarray*}
\mathcal{N}_1(\beta,\gamma)&=&\frac{1}{2\pi i}\int_{a+i(1+\alpha)}^{a+i(T+\alpha)}\chi(1-s)\bigg(\frac{\zeta'}{\zeta}(s+\beta)\zeta(s+\gamma)\zeta(s)\sum_{m\leq y}\frac{h(m)P([m]_y)}{m^{s}}\bigg)\\
&&\qquad\qquad\qquad\qquad\bigg(\sum_{n\leq y}\frac{h(n)P([n]_y)}{n^{1-s}}\bigg)ds,
\end{eqnarray*}
so that $\mathcal{N}_1=\mathcal{N}_1(-i\alpha,0)$. From Lemma \ref{501}, we obtain
\begin{equation*}
\mathcal{N}_1(\beta,\gamma)=\sum_{n\leq y}\frac{h(n)P([n]_y)}{n}\sum_{m\leq nT/2\pi}a(m)e\bigg(-\frac{m}{n}\bigg)+O_\varepsilon(yT^{1/2+\varepsilon}),
\end{equation*}
where the arithmetic function $a(m)$ is defined by
\begin{equation}\label{1}
\frac{\zeta'}{\zeta}(s+\beta)\zeta(s+\gamma)\zeta(s)\sum_{m\leq y}\frac{h(m)P([m]_y)}{m^{s}}=\sum_{m=1}^{\infty}\frac{a(m)}{m^s}.
\end{equation}
By the work of Conrey, Ghosh and Gonek [\textbf{\ref{CGG1}}; Sections 5--6 and (8.2)], and the work of Bui and Heath-Brown [\textbf{\ref{BH-B}}], we can write
\begin{equation*}
\mathcal{N}_1(\beta,\gamma) =\mathcal{Q}(\beta,\gamma) + E+O_\varepsilon(yT^{1/2+\varepsilon}),
\end{equation*}
where
\begin{equation}\label{2}
\mathcal{Q}(\beta,\gamma)=\sum_{ln\leq y}\frac{h(ln)P([ln]_y)}{ln}\frac{\mu(n)}{\varphi(n)}\sum_{\substack{m\leq nT/2\pi\\(m,n)=1}}a(lm)
\end{equation}
and
\begin{equation*}
E\ll_{B,\varepsilon} TL^{-B}+y^{1/3}T^{5/6+\varepsilon}
\end{equation*}
for any $B>0$.
Let
\begin{equation}\label{301}
\frac{\zeta'}{\zeta}(s+\beta)\zeta(s+\gamma)\zeta(s)=\sum_{n=1}^{\infty}\frac{g(n)}{n^s}.
\end{equation}
From \eqref{1} and Lemma \ref{300} we have
\begin{equation*}
a(lm)=\sum_{\substack{l=l_1l_2\\m=m_1m_2\\l_1m_1\leq y\\(m_2,l_1)=1}} h(l_1m_1)P([l_1m_1]_y)g(l_2m_2).
\end{equation*}
Hence
\begin{eqnarray}\label{502}
\mathcal{Q}(\beta,\gamma)=\sum_{l_1l_2n\leq y}\frac{h(l_1l_2n)P([l_1l_2n]_y)}{l_1l_2n}\frac{\mu(n)}{\varphi(n)}\sum_{\substack{l_1m_1\leq y\\(m_1,n)=1}} h(l_1m_1)P([l_1m_1]_y)\sum_{\substack{m_2\leq nT/2\pi m_1\\(m_2,l_1n)=1}}g(l_2m_2).
\end{eqnarray}
\begin{lemma}
Suppose $a$ and $b$ are coprime, squarefree integers. Then we have
\begin{eqnarray*}
G(x;a,b)&:=&\sum_{\substack{n\leq x\\(n,b)=1}}g(an)\\
&=&-\frac{x^{1-\beta}}{1-\beta}\sum_{a=a_2a_3}\frac{1}{a_{2}^{\gamma}}\zeta(1-\beta+\gamma)\zeta(1-\beta)F(b,-\beta+\gamma)F(a_2b,-\beta)\\
&&+\frac{x^{1-\gamma}}{1-\gamma}\sum_{a=a_2a_3}\frac{1}{a_{2}^{\gamma}}\bigg(\frac{\zeta'}{\zeta}(1+\beta-\gamma)+\sum_{p|b}\frac{\log p}{p^{1+\beta-\gamma}-1}\bigg)\zeta(1-\gamma)F(b)F(a_2b,-\gamma)\\
&&-\frac{x^{1-\gamma}}{1-\gamma}\sum_{a=pa_2a_3}\frac{1}{p^\beta a_{2}^{\gamma}}\frac{\log p}{1-p^{-(1+\beta-\gamma)}}\zeta(1-\gamma)F(pb)F(pa_2b,-\gamma)\\
&&+x\sum_{a=a_2a_3}\frac{1}{a_{2}^{\gamma}}\bigg(\frac{\zeta'}{\zeta}(1+\beta)+\sum_{p|b}\frac{\log p}{p^{1+\beta}-1}\bigg)\zeta(1+\gamma)F(b,\gamma)F(a_2b)\\
&&-x\sum_{a=pa_2a_3}\frac{1}{p^\beta a_{2}^{\gamma}}\frac{\log p}{1-p^{-(1+\beta)}}\zeta(1+\gamma)F(pb,\gamma)F(pa_2b)\\
&&\qquad\ \ \ +O_{B,\varepsilon}\big((\log ab)^{1+\varepsilon}x(\log x)^{-B}\big).
\end{eqnarray*}
\end{lemma}
\begin{proof}
It is standard that up to an error term of size $O_{B,\varepsilon}\big((\log ab)^{1+\varepsilon}x(\log x)^{-B}\big)$ for any $B>0$, $G(x;a,b)$ is the sum of the residues at $s=1-\beta$, $s=1-\gamma$ and $s=1$ of
\begin{equation*}
\frac{x^s}{s}\sum_{(n,b)=1}\frac{g(an)}{n^s}.
\end{equation*}
Combining \eqref{301} and Lemma \ref{300}, the above expression is
\begin{eqnarray*}
&&\frac{x^s}{s}\sum_{a=a_1a_2a_3}\bigg(-\sum_{(n,b)=1}\frac{\Lambda(a_1n)}{(a_1n)^{\beta}n^s}\bigg)\bigg(\sum_{(n,a_1b)=1}\frac{1}{(a_2n)^{\gamma}n^s}\bigg)\bigg(\sum_{(n,a_1a_2b)=1}\frac{1}{n^s}\bigg)\\
&=&\frac{x^s}{s}\sum_{a=a_1a_2a_3}\frac{1}{a_{1}^{\beta}a_{2}^{\gamma}}\bigg(-\sum_{(n,b)=1}\frac{\Lambda(a_1n)}{n^{s+\beta}}\bigg)\zeta(s+\gamma)\zeta(s)F(a_1b,s+\gamma-1)F(a_1a_2b,s-1).
\end{eqnarray*}
We have
\begin{displaymath}
-\sum_{(n,b)=1}\frac{\Lambda(a_1n)}{n^{s+\beta}}=\left\{ \begin{array}{ll}
\frac{\zeta'}{\zeta}(s+\beta)+\sum_{p|b}\frac{\log p}{p^{s+\beta}-1} &\qquad \textrm{if $a_1=1$},\\
-\frac{\log p}{1-p^{-(s+\beta)}} & \qquad\textrm{if $a_1=p$,}\\
0 & \qquad\textrm{otherwise}.
\end{array} \right.
\end{displaymath}
The result follows.
\end{proof}
In view of the above definition, the innermost sum in \eqref{502} is
\begin{equation*}
G(nT/2\pi m_1;l_2,l_1n).
\end{equation*}
We then write
\begin{equation*}
\mathcal{Q}(\beta,\gamma)=\sum_{j=1}^{6}\mathcal{Q}_j(\beta,\gamma)
\end{equation*}
corresponding to the decomposition of $G(x;a,b)$ in Lemma 4.1.
We begin with $\mathcal{Q}_1(\beta,\gamma)$. Writing $l_2l_3$ for $l_2$, and $m$ for $m_1$, we have $\mathcal{Q}_1(\beta,\gamma)$ equals
\begin{eqnarray*}
&&-\frac{(T/2\pi)^{1-\beta}}{1-\beta}\zeta(1-\beta+\gamma)\zeta(1-\beta)\sum_{\substack{l_1l_2l_3\leq y\\l_1m\leq y}}\frac{h(l_1l_2l_3)h(l_1m)P([l_1m]_y)}{l_1l_{2}^{1+\gamma}l_3m^{1-\beta}}F(l_1,-\beta+\gamma)F(l_1l_2,-\beta)\\
&&\qquad\sum_{\substack{n\leq y/l_1l_2l_3\\(n,l_1l_2l_3m)=1}}\frac{\mu(n)h(n)P([l_1l_2l_3n]_y)}{\varphi(n)n^\beta}F(n,-\beta+\gamma)F(n,-\beta).
\end{eqnarray*}
From Lemma \ref{600}, the innermost sum is
\begin{eqnarray*}
&&U_1V_1(l_1l_2l_3m)\bigg(\frac{P''([l_1l_2l_3]_y)}{(\log y)^2}+\frac{2\beta P'([l_1l_2l_3]_y)}{\log y}+\beta^{2}P([l_1l_2l_3]_y)\bigg)\\
&&\qquad+O(F_{\tau_0}(l_1l_2l_3m)L^{-3})+O_\varepsilon\bigg(F_{\tau_0}(l_1l_2l_3m)\bigg(\frac{y}{l_1l_2l_3}\bigg)^{-\nu}L^{-2+\varepsilon}\bigg).
\end{eqnarray*}
By Lemma \ref{505}, the contributions of the $O$-terms to $\mathcal{Q}_1(\beta,\gamma)$ is $O_\varepsilon(TL^{9+\varepsilon})$. Hence
\begin{eqnarray*}
&&\mathcal{Q}_1(\beta,\gamma)=-U_1(T/2\pi)^{1-\beta}\zeta(1-\beta+\gamma)\zeta(1-\beta)\sum_{l_1l_2\leq y}\frac{F(l_1,-\beta+\gamma)F(l_1l_2,-\beta)}{l_1l_{2}^{1+\gamma}}\\
&&\qquad\bigg(\frac{\mathcal{A}_2(y,P,P'';l_1,l_2,-\beta)}{(\log y)^2}+\frac{2\beta \mathcal{A}_2(y,P,P';l_1,l_2,-\beta)}{\log y}+\beta^2\mathcal{A}_2(y,P,P;l_1,l_2,-\beta)\bigg)\\
&&\qquad\qquad+O_\varepsilon(TL^{9+\varepsilon}).
\end{eqnarray*}
Using Lemmas \ref{601}--\ref{602} we obtain
\begin{eqnarray}\label{800}
&&\mathcal{Q}_1(\beta,\gamma)=-\frac{A(T/2\pi)^{1-\beta}(\log y)^{10}}{6}\zeta(1-\beta+\gamma)\zeta(1-\beta)\int_{0}^{1}\int_{0}^{x}\int_{0}^{x}(1-x)^3y^{\beta t-\gamma t_1}tt_1\nonumber\\
&&\qquad P(x-t)\bigg(\frac{P(x-t_1)}{(\log y)^2}+\frac{2\beta P_0(x-t_1)}{\log y}+\beta^2P_1(x-t_1)\bigg)dtdt_1dx+O_\varepsilon(TL^{9+\varepsilon}).
\end{eqnarray}
Here we have used a fact which is easy to verify that $U_1U_2W=A$.
For $\mathcal{Q}_2(\beta,\gamma)$, we write the sum $\sum_{p|l_1n}$ as $\sum_{p|l_1}+\sum_{p|n}$, since the function $h(n)$ is supported on square-free integers. In doing so we have $\mathcal{Q}_2(\beta,\gamma)$ equals
\begin{eqnarray}\label{605}
&&\frac{(T/2\pi)^{1-\gamma}}{1-\gamma}\zeta(1-\gamma)\sum_{\substack{l_1l_2l_3\leq y\\l_1m\leq y}}\frac{h(l_1l_2l_3)h(l_1m)P([l_1m]_y)}{l_1l_{2}^{1+\gamma}l_3m^{1-\gamma}}\bigg(\frac{\zeta'}{\zeta}(1+\beta-\gamma)+\sum_{p|l_1}\frac{\log p}{p^{1+\beta-\gamma}-1}\bigg)\nonumber\\
&&\qquad\qquad F(l_1)F(l_1l_2,-\gamma)\sum_{\substack{n\leq y/l_1l_2l_3\\(n,l_1l_2l_3m)=1}}\frac{\mu(n)h(n)P([l_1l_2l_3n]_y)}{\varphi(n)n^{\gamma}}F(n)F(n,-\gamma)\nonumber\\
&&\qquad+\frac{(T/2\pi)^{1-\gamma}}{1-\gamma}\zeta(1-\gamma)\sum_{\substack{l_1l_2l_3\leq y\\l_1m\leq y}}\frac{h(l_1l_2l_3)h(l_1m)P([l_1m]_y)}{l_1l_{2}^{1+\gamma}l_3m^{1-\gamma}}F(l_1)F(l_1l_2,-\gamma)\nonumber\\
&&\qquad\qquad\sum_{\substack{p|n\\n\leq y/l_1l_2l_3\\(n,l_1l_2l_3m)=1}}\frac{\log p}{p^{1+\beta-\gamma}-1}\frac{\mu(n)h(n)P([l_1l_2l_3n]_y)}{\varphi(n)n^{\gamma}}F(n)F(n,-\gamma).
\end{eqnarray}
We consider the contribution from the terms $\sum_{p|l_1}$. From Lemma \ref{600}, the sum over $n$ is
\begin{equation*}
\ll L^{-2}+F_{\tau_0}(l_1l_2l_3m)L^{-3}+O_\varepsilon\bigg(F_{\tau_0}(l_1l_2l_3m)\bigg(\frac{y}{l_1l_2l_3}\bigg)^{-\nu}L^{-2+\varepsilon}\bigg).
\end{equation*}
Hence the contribution of the terms $\sum_{p|l_1}$ to $\mathcal{Q}_2(\beta,\gamma)$ is
\begin{eqnarray*}
&\ll_\varepsilon& TL^{-1}\sum_{\substack{p|l_1\\l_1l_2l_3\leq y\\l_1m\leq y}}\frac{\log p}{p-1}\frac{d_4(l_1)d(l_2)d(l_3)d(m)}{l_1l_2l_3m}\\
&&\qquad\qquad\qquad\qquad\bigg(1+F_{\tau_0}(l_1l_2l_3m)L^{-1}+F_{\tau_0}(l_1l_2l_3m)\bigg(\frac{y}{l_1l_2l_3}\bigg)^{-\nu}L^{\varepsilon}\bigg)\\
&\ll_\varepsilon&TL^{5}\sum_{\substack{p|l_1\\l_1\leq y}}\frac{\log p}{p-1}\frac{d_4(l_1)}{l_1}\big(1+F_{\tau_0}(l_1)L^{-1+\varepsilon}\big)\ll_\varepsilon TL^{9+\varepsilon}.
\end{eqnarray*}
The same argument shows that the last term in \eqref{605} is also $O_\varepsilon(TL^{9+\varepsilon})$. The remaining terms are
\begin{eqnarray*}
&&\frac{(T/2\pi)^{1-\gamma}}{1-\gamma}\frac{\zeta'}{\zeta}(1+\beta-\gamma)\zeta(1-\gamma)\sum_{\substack{l_1l_2l_3\leq y\\l_1m\leq y}}\frac{h(l_1l_2l_3)h(l_1m)P([l_1m]_y)}{l_1l_{2}^{1+\gamma}l_3m^{1-\gamma}}\nonumber\\
&&\qquad\qquad F(l_1)F(l_1l_2,-\gamma)\sum_{\substack{n\leq y/l_1l_2l_3\\(n,l_1l_2l_3m)=1}}\frac{\mu(n)h(n)P([l_1l_2l_3n]_y)}{\varphi(n)n^{\gamma}}F(n)F(n,-\gamma).
\end{eqnarray*}
Similarly to $\mathcal{Q}_1(\beta,\gamma)$, we thus obtain
\begin{eqnarray}\label{801}
&&\mathcal{Q}_2(\beta,\gamma)=\frac{A(T/2\pi)^{1-\gamma}(\log y)^{10}}{6}\frac{\zeta'}{\zeta}(1+\beta-\gamma)\zeta(1-\gamma)\int_{0}^{1}\int_{0}^{x}\int_{0}^{x}(1-x)^3y^{\gamma (t-t_1)}tt_1\nonumber\\
&&\qquad P(x-t)\bigg(\frac{P(x-t_1)}{(\log y)^2}+\frac{2\gamma P_0(x-t_1)}{\log y}+\gamma^2P_1(x-t_1)\bigg)dtdt_1dx+O_\varepsilon(TL^{9+\varepsilon}).
\end{eqnarray}
The fourth term $\mathcal{Q}_4(\beta,\gamma)$ is in the same form as $\mathcal{Q}_2(\beta,\gamma)$. The same calculations yield
\begin{eqnarray}\label{802}
\mathcal{Q}_4(\beta,\gamma)&=&\frac{A(T/2\pi)(\log y)^8}{6}\frac{\zeta'}{\zeta}(1+\beta)\zeta(1+\gamma)\int_{0}^{1}\int_{0}^{x}(1-x)^3y^{-\gamma t_1}t_1\nonumber\\
&&\qquad\qquad P_1(x)P(x-t_1)dt_1dx+O_\varepsilon(TL^{9+\varepsilon}).
\end{eqnarray}
To evaluate $\mathcal{Q}_3(\beta,\gamma)$, we rearrange the sums and write $\mathcal{Q}_3(\beta,\gamma)$ in the form
\begin{eqnarray*}
&&-\frac{(T/2\pi)^{1-\gamma}}{1-\gamma}\zeta(1-\gamma)\sum_{\substack{pl_1l_2l_3\leq y\\l_1m\leq y}}\frac{\log p}{(p^{1+\beta-\gamma}-1)p^{\gamma}}\frac{h(pl_1l_2l_3)h(l_1m)P([l_1m]_y)}{l_1l_{2}^{1+\gamma}l_3m^{1-\gamma}}\\
&&\qquad F(pl_1)F(pl_1l_2,-\gamma)\sum_{\substack{n\leq y/pl_1l_2l_3\\(n,pl_1l_2l_3m)=1}}\frac{\mu(n)h(n)P([pl_1l_2l_3n]_y)}{\varphi(n)n^{\gamma}}F(n)F(n,-\gamma).
\end{eqnarray*}
By Lemma \ref{600}, the innermost sum is
\begin{eqnarray*}
&&U_1V_1(pl_1l_2l_3m)\bigg(\frac{P''([pl_1l_2l_3]_y)}{(\log y)^2}+\frac{2\gamma P'([pl_1l_2l_3]_y)}{\log y}+\gamma^{2}P([pl_1l_2l_3]_y)\bigg)\\
&&\qquad\qquad+O(F_{\tau_0}(pl_1l_2l_3m)L^{-3})+O_\varepsilon\bigg(F_{\tau_0}(pl_1l_2l_3m)\bigg(\frac{y}{pl_1l_2l_3}\bigg)^{-\nu}L^{-2+\varepsilon}\bigg).
\end{eqnarray*}
The contribution of the $O$-terms, using Lemma \ref{505}, is $O_\varepsilon(TL^{9+\varepsilon})$. The remaining terms contribute
\begin{eqnarray*}
&&-\frac{U_1(T/2\pi)^{1-\gamma}}{(1-\gamma)}\zeta(1-\gamma)\sum_{pl_1l_2\leq y}\frac{\log p}{(p^{1+\beta-\gamma}-1)p^{\gamma}}\frac{F(pl_1)F(pl_1l_2,-\gamma)}{l_1l_{2}^{1+\gamma}}\\
&&\qquad\bigg(\frac{\mathcal{A}_2(y,P,P'';l_1,pl_2,-\gamma)}{(\log y)^2}+\frac{2\gamma \mathcal{A}_2(y,P,P';l_1,pl_2,-\gamma)}{\log y}+\gamma^2\mathcal{A}_2(y,P,P;l_1,pl_2,-\gamma)\bigg).
\end{eqnarray*}
In view of Lemma \ref{601}, this equals
\begin{eqnarray*}
&&-U_1U_2(T/2\pi)^{1-\gamma}(\log y)^4\zeta(1-\gamma)\sum_{pl_1l_2\leq y}\frac{\log p}{(p^{1+\beta-\gamma}-1)p^{\gamma}}\frac{h(pl_1l_2)h(l_1)}{l_1l_{2}^{1+\gamma}}\\
&&\qquad F(pl_1)F(pl_1l_2,-\gamma)V_1(pl_1l_2)V_2(l_1)V_3(pl_2)V_4(pl_1l_2)\int_{0}^{[l_1]_y}y^{\gamma t}tP([l_1]_y-t)\\
&&\qquad\qquad \bigg(\frac{P([pl_1l_2]_y)}{(\log y)^2}+\frac{2\gamma P_0([pl_1l_2]_y)}{\log y}+\gamma^2P_1([pl_1l_2]_y)\bigg)dt+O(TL^9).
\end{eqnarray*}
From Lemma 2.8(ii) we obtain
\begin{eqnarray}\label{803}
&&\!\!\!\!\!\!\!\!\mathcal{Q}_3(\beta,\gamma)=-\frac{A(T/2\pi)^{1-\gamma}(\log y)^{11}}{3}\zeta(1-\gamma)\int_{0}^{1}\int_{\substack{t,t_j\geq 0\\t\leq x\\t_1+t_2\leq x}}(1-x)^3y^{\gamma(t-t_1)-\beta t_2}tt_1P(x-t)\nonumber\\
&&\!\!\!\!\!\!\!\!\ \bigg(\frac{P(x-t_1-t_2)}{(\log y)^2}+\frac{2\gamma P_0(x-t_1-t_2)}{\log y}+\gamma^2P_1(x-t_1-t_2)\bigg)dtdt_1dt_2dx+O_\varepsilon(TL^{9+\varepsilon}).
\end{eqnarray}
The term $\mathcal{Q}_5(\beta,\gamma)$ is in the same form as $\mathcal{Q}_3(\beta,\gamma)$. The same calculations give
\begin{eqnarray}\label{804}
\mathcal{Q}_5(\beta,\gamma)&=&-\frac{A(T/2\pi)(\log y)^9}{3}\zeta(1+\gamma)\int_{0}^{1}\int_{\substack{t_j\geq 0\\t_1+t_2\leq x}}(1-x)^3y^{-\gamma t_1-\beta t_2}t_1\nonumber\\
&&\qquad\qquad P_1(x)P(x-t_1-t_2)dt_1dt_2dx+O_\varepsilon(TL^{9+\varepsilon}).
\end{eqnarray}
Finally, we have $\mathcal{Q}_6(\beta,\gamma)=O_B(TL^{-B})$ for any $B>0$.
Collecting the estimates \eqref{806}, \eqref{805}, \eqref{800}, \eqref{801}--\eqref{804}, and letting $\beta=- i\alpha$, $\gamma\rightarrow 0$, we easily obtain Lemma 1.2.
\section{Introduction}
\label{S-Introduction}
Measurements of White Light (WL) brightnesses and polarization (pB)
during solar eclipses have often been used in the past to infer and
calculate the electron density distribution ($\mathrm{n_e}$), i.e.\ the
radial distribution of coronal electron densities, or, formerly, that
of the hypothetical Coronium atomic element.
According to \citet{baumbach37}'s pioneering analysis of coronal WL
brightnesses, $\mathrm{n_e(r)}$ can best be approximated by a sum of
terms inversely proportional to powers of r, the radial distance from
the solar center. This finding was at odds with the standard
exponential decrease generally postulated at that epoch for density
profiles in stellar and planetary atmospheres.
Using Baumbach's empirical formula for fitting observed coronal
densities distributions, and assuming cylindrical symmetry of the
corona around the Sun's axis of rotation, \citeauthor{saito70}
(\citeyear{saito70}, hereafter \citetalias{saito70}) constructed a
two-dimensional model for $\mathrm{n_e(r,\phi)}$ as a function of r,
and heliospheric latitude, $\mathrm\phi$. Their empirical 2D-model is
based on a series of available eclipse observations corresponding to
epochs of minimum solar activity. \citeauthor{saito70}'s empirical
coronal electron density 2D-model became popular, and has been adopted
in many studies of the solar corona as well as in the present one,
although its range of application is restricted to r $<$ 4
$\mathrm{R_S}$ because of signal-to-noise (S/N) issues related to the
WL coronal brightnesses beyond this distance.
In order to extend \citetalias{saito70}'s density distributions up to
the Earth's orbit and beyond, \citet{lemaire16} added an extra-term
inversely proportional to the square of r. This additional power-law
term fits the solar wind distribution well; its electron density and
bulk velocity at 1 AU will be input parameters, designated hereafter by
$\mathrm{n_E}$ and $\mathrm{u_E}$, respectively. Typical values of
these input parameters for $\mathrm{n_e(r,\phi)}$ will be chosen
within the ranges of SW observations reported by \cite{ebert09}.
Table 1 in \citet{lemaire16} contains the values of these inputs for a
set of DYN models illustrated and discussed in the present paper, as
well as in the previous one.
The analytical expression of \citetalias{saito70}'s extended density
distributions (in electrons / $\mathrm{cm^3}$) employed to determine the
temperature distributions for all DYN models is recalled here:
\begin{equation}
\label{ne}
\begin{aligned}
n_e(r,\phi)= 10^8\ [3.09\ r^{-16}\ (1-0.5\sin\phi ) +
1.58\ r^{-6}\ (1-0.95 \sin\phi ) + \\
0.0251\ r^{-2.5}\ (1 - \sqrt{\sin\phi})] + n_E\ (215/r)^2,
\end{aligned}
\end{equation}
\noindent where 1 AU is assumed to be 215 $\mathrm{R_S}$ and r is in
units of $\mathrm{R_S}$.
A few typical density distributions derived from this formula are
shown in figure \ref{F-density}. In order to expand the inner and
middle regions of the corona, where the greatest SW acceleration takes
place, the logarithm of h, the altitude above the photosphere -
normalised by the solar radius $\mathrm{R_S}$ - can be recommended for
the horizontal axis, and has been used in all of our graphs.
The analytical expression (\ref{ne}) happens to be a very convenient
approximation in many respects.
\begin{enumerate}[(i)]
\item First of all, it is most convenient to determine the radial
distributions of the coronal density gradients, and therefore that
of H, the electron density scale height. This enables the easy
calculation of the radial profile of the scale-height temperature,
hereafter labelled SHM temperature, because it is determined by the
well-known Scale-Height-Method (SHM).
\item Furthermore, equation (\ref{ne}) can be used to derive an
analytical expression for the SW bulk velocity, u(r), by
integrating the continuity equation - the conservation of the
particle flux - from the Earth's radial distance ($\mathrm{r_E}$)
down to the base of the corona ($\mathrm{r_b}$), where
$\mathrm{r_b}$ is defined hereafter to be at 1.003 $\mathrm{R_S}$.
In the following DYN models this integration is performed along flow
tubes whose geometrical cross-section, A(r), is an empirical
function of r. The analytical formulas 10, 11, and 12 of
\cite{Kopp76} are used for A(r) (for more details see section 7 and
the appendix of \citeauthor{lemaire16}, \citeyear{lemaire16}).
The downward integration of the hydrodynamical continuity equation
leads to an analytical expression for u(r) defined by:
\begin{equation}
\label{E-u}
u(r,\phi)=u_E\frac{A_E}{A(r)}\frac{n_E}{n_e(r,\phi)},
\end{equation}
\noindent where $\mathrm{A_E}$ is the cross-section of the flow tube
at 1 AU.
To minimise the length of this paper, the mathematical formula used
for A(r) will not be repeated here; it can be found in
\citet{lemaire16}, equation 12. Furthermore, we will restrict
our DYN model calculations to spherical expansions of the SW,
i.e. $\mathrm{A_E/A(r)}$ = (215/r)$^2$.
We have assumed, like most other modellers of the SW, that flow tubes
of the plasma coincide with interplanetary magnetic flux tubes. This
common assumption might be relaxed in the future, however, by
implementing ad-hoc distributions of curl-free E-fields into the
medium.
\item The analytical expressions \ref{ne} and \ref{E-u} allow the
  straightforward calculation of the radial gradient of the bulk
  velocity, u(r), as well as of the dimensionless function F(r),
  corresponding to the ratio of the inertial force and the
  gravitational force acting on the expanding SW plasma (a short
  numerical sketch of expressions \ref{ne}--\ref{E-Fx} is given right
  after this list).
\begin{equation}
\label{E-Fx}
F(r)= \frac{1}{g_SR_S}\ r^2\ u(r)\ \frac{d[u(r)]}{dr},
\end{equation}
\noindent where $\mathrm{g_S}$ is the gravitational acceleration at
the solar surface (274 m s$^{-2}$).
\end{enumerate}
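The three expressions above are straightforward to implement
numerically. The following Python sketch is our own illustration
(function names and the finite-difference step are ours); r is in
units of $\mathrm{R_S}$ and $\mathrm{\phi}$ in radians, with
$0\le\mathrm{\phi}\le\pi/2$:
\begin{verbatim}
# Illustrative sketch of equations (1)-(3) for a spherical expansion,
# A_E/A(r) = (215/r)^2.  Densities in cm^-3, u_E in km/s.
import numpy as np

G_S = 274.0      # solar surface gravity [m s^-2]
R_S = 6.96e8     # solar radius [m]
R_E = 215.0      # 1 AU in units of R_S

def n_e(r, phi, n_E):
    """Extended Saito density, equation (1)."""
    s = np.sin(phi)
    return 1e8*(3.09*r**-16*(1 - 0.5*s)
                + 1.58*r**-6*(1 - 0.95*s)
                + 0.0251*r**-2.5*(1 - np.sqrt(s))) + n_E*(R_E/r)**2

def u(r, phi, n_E, u_E):
    """Bulk velocity from the continuity equation (2) [km/s]."""
    return u_E*(R_E/r)**2*n_E/n_e(r, phi, n_E)

def F(r, phi, n_E, u_E, dr=1e-4):
    """Inertia-to-gravity ratio, equation (3); central difference for
    du/dr.  The factor 1e6 converts km^2 s^-2 to m^2 s^-2."""
    du_dr = (u(r+dr, phi, n_E, u_E) - u(r-dr, phi, n_E, u_E))/(2*dr)
    return 1e6*r**2*u(r, phi, n_E, u_E)*du_dr/(G_S*R_S)
\end{verbatim}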
It can already be emphasized that the analytical distribution of
u(r,$\phi$) provided by equation \ref{E-u} is a continuous function of
r, and most importantly, that this function has no point of
singularity (saddle point) at the altitude where the radial expansion
of the SW becomes supersonic. This key property constitutes a major
difference between DYN models and the steady state hydrodynamical SW
models where u(r) is a singular solution of the hydrodynamical
moment/transport equations introduced by
\citet{parker58,parker63}. This issue will be discussed in greater
details in Section 6.
\section{Temperature calculation by the DYN model}
\label{S-DYN-temperature}
To obtain the radial distribution of the DYN temperature
\citet{lemaire16} integrated the simplest approximation of
hydrodynamic momentum transport equation, from infinity (where they
assumed that the plasma temperature is equal to zero), down to
$\mathrm{r_b}$, the base of the corona :
\begin{equation}
\label{E-Te}
T_e(r) = -\frac{T^*}{n_e(r)}\int_\infty^r\frac{n_e(r)}{r^2}
\left[1+F(r)\right]dr,
\end{equation}
\noindent where $\mathrm{T^*}$ is a normalisation temperature defined
by equation 9 of \citet{lemaire16} (see also \citeauthor{alfven41},
\citeyear{alfven41}). The value of $\mathrm{T^*}$ is proportional to
the mass of the Sun, and inversely proportional to the solar
radius. It is equal to 17 MK in all of the following applications.
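A hedged numerical implementation of equation \ref{E-Te}, building on
the Python sketch of section \ref{S-Introduction} and truncating the
infinite upper limit at a large but finite radius (our simplification),
reads:
\begin{verbatim}
# Sketch of the DYN temperature integral: integrate inward from r_inf.
from scipy.integrate import quad

T_STAR = 1.7e7   # normalisation temperature T* [K], as quoted above

def T_e(r, phi, n_E, u_E, r_inf=1.0e4):
    """DYN electron temperature [K]; n_e and F as in the earlier sketch."""
    integrand = lambda x: n_e(x, phi, n_E)/x**2*(1 + F(x, phi, n_E, u_E))
    val = quad(integrand, r_inf, r, limit=200)[0]
    return -T_STAR/n_e(r, phi, n_E)*val
\end{verbatim}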
A non-zero additive constant temperature, $\mathrm{T_\infty}$,
corresponding to the actual electron temperature at the outer edge of
the heliosphere could have been added to the right-hand-side of
equation \ref{E-Te}. However, the addition of a constant temperature
of the order of 2000-3000 K does not change considerably the DYN
temperature profile close to the Sun, indeed $\mathrm{T_e(r)}$ is
orders of magnitude larger for $\mathrm{r_b < r < 10~R_S}$ than
$\mathrm{T_\infty}$. Therefore, it is not far from reality to set
$\mathrm{T_\infty = 0}$ as assumed in equation \ref{E-Te}. We
verified that the DYN temperatures profiles with two widely different
values for $\mathrm{T_\infty}$ converge to the same temperatures for r
$<$ 1.1 $\mathrm{R_S}$. This remarkable convergence of the DYN
temperatures distributions at the base of the Corona, holds not only
at equatorial latitudes but also over the poles.
In a next generation of our computer code we will start the numerical
integration of equation \ref{E-Te} at $\mathrm{r_E}$, where the SW
electron temperature at 1AU, $\mathrm{T_E}$, will then be an input
parameter of the DYN model, like $\mathrm{n_E}$ and $\mathrm{u_E}$.
\citet{ebert09} is again a good source of typical $\mathrm{T_E}$
values that can then be used as a free input parameter in DYN
calculations.
Let us re-emphasize that in the DYN model the boundary conditions are
set at a large solar distance (i.e.~1 AU or infinity), and that the
continuity and momentum equations are integrated downwards. This way
DYN solutions deviate from the singular hydrodynamical solutions whose
boundary conditions are set at the bottom of the corona, and for which
the numerical integration is made upwards.
In section \ref{S-expansion} we complete the work initiated in
\citet{lemaire16} by analysing and discussing the properties of the
DYN models for other sets of the $\mathrm{u_E}$ and $\mathrm{n_E}$
input parameters. Nevertheless, it is preferable to first present and
discuss a few characteristic fits of the $\mathrm{n_e(r)}$
distribution (section \ref{S-Density}) and some properties of the DYN
temperature profiles (section \ref{S-Distributions}).
\section{Corona Electron Density Distributions inferred from
eclipse observations.}
\label{S-Density}
\begin{figure}
\centerline{\includegraphics[width=\textwidth,clip]
{four_densities.eps}}
\caption{Expanded coronal electron density distributions for the
equatorial (blue-dashed curve, Seq), and polar regions described
by \citet{lemaire16} (black curve, Spv), taken from
\citetalias{saito70} (red dotted curve, Spv/nE=0), and from
\cite{pottasch60} (green-dashed-dotted curve, P). Pottasch's
density distribution was also included in figure 6.5 of
\cite{parker63}.}
\label{F-density}
\end{figure}
The electron density distribution, $\mathrm{n_e(r)}$, implemented by
different authors from eclipse observations is displayed in figure
\ref{F-density}. This graph is similar to figure 1 of
\citet{lemaire16}. It is included here for completeness and easier
access.
The black curve (Spv) corresponds to Saito's polar density
distribution which has been extended to large distances by adding the
contribution of the Solar Wind density. The red-dotted curve
(Spv/nE=0) is the same distribution but without the last term of
equation (\ref{ne}), i.e.\ without the contribution of the solar wind
density. The DYN temperature determined for this (red) density
distribution corresponds to the HST temperature profile for which
u(r)= 0. Indeed, according to equation (\ref{E-u}), u(r) is
proportional to $\mathrm{n_E}$, thus u(r)=0 when $\mathrm{n_E}$=0.
This (red) density profile can thus be viewed as a radial density
distribution wherein the class of escaping SW particles would be
missing in the exospheric coronal models of
\cite{lemaire71,lemaire73}.
In collisionless/kinetic models, it is exclusively the class of
escaping electrons that contributes to the net outward flux of
evaporating coronal electrons. The classes of ballistic and trapped
particles do not contribute to this net SW flux of particles. Indeed
these latter electrons do not have sufficiently large energies to
escape out of Lemaire-Scherer's electrostatic potential well. Nevertheless,
they play a major role by contributing their electric charges to the
total negative charge density of the coronal and SW plasma. Note also
that these collisionless ballistic and trapped particles do not
contribute either to the net outward flux of kinetic energy that is
carried out of the corona into interplanetary space by the SW. Thus
these low energy (sub-thermal and thermal) electrons contribute
exclusively to the total negative charge density which must balance
the positive charge density of the ions, in order to keep the plasma
locally quasi-neutral. In contrast, in fluid or hydrodynamical
representations of the coronal and SW plasma no such discrimination
between sub-thermal and supra-thermal electrons, or between escaping,
ballistic and trapped electrons is made explicitly. All classes of
electrons are assumed to take part to the net outward fluxes of SW
particles and of the SW kinetic energy. This constitutes a fundamental
distinction between both types of plasma representations. These key
differences are discussed in greater details in the review article by
\citet{echim11}.
The blue-dashed curve (Seq) corresponds to \citetalias{saito70}'s
extended equatorial density model (i.e.\ for $\mathrm\phi = 0$). Note
that the latter equatorial density (Seq) is significantly larger than
the polar density distribution (Spv) in the inner and middle
corona.
The green curve (P) corresponds to a best fit of an equatorial
electron density distribution during solar minimum determined by
\citet{pottasch60}. It was derived from WL brightness and polarization
measurements during the solar eclipse of 1952, under the assumption
that the corona would be in hydrostatic equilibrium.
Although not stressed any further here, we note that the density
profiles associated with the critical solutions of the SW hydrodynamic
momentum/transport equations have significantly smaller density
gradients (i.e.\ significantly larger density scale-heights) in the
inner corona than the empirical models shown in figure
\ref{F-density}, which were derived from eclipse observations. To
our knowledge this misfit has generally been overlooked, except in
figure 1 of \citet{scarf65}.
\section{Properties of the DYN temperature profiles}
\label{S-Distributions}
\begin{figure}
\centerline{\includegraphics[width=\textwidth,clip]
{temperature_3methods_graph.eps}}
\caption{The electron temperature profiles over the polar regions
(Spv) calculated by using the three different methods: the scale
height (SHM), the hydrostatic (HST), and the hydrodynamical
(DYN). All three curves are calculated with the same polar
electron density profile (Spv; the black-curve in figure 1) for
which $\mathrm{n_E}$= 2.2 cm$^{-3}$ at 1 AU. The red dotted
curve (Spv/DYN) is obtained by assuming that the SW velocity at
1 AU is equal to $\mathrm{u_E}$ = 329 km/s (which is an average
value for slow SW flows), while the solid black-curve (Spv/HST)
is obtained for $\mathrm{u_E}$ = 0; the DYN model coincides then
with the hydrostatic HST model).}
\label{F-temperature}
\end{figure}
The three electron temperatures profiles shown in figure
\ref{F-temperature} are obtained for the same polar density profile,
Spv, by using the three different methods of calculation (SHM, HST,
and DYN method) recalled above. This polar density profile is
illustrated by the black curve in figure \ref{F-density}. It
corresponds to \citetalias{saito70}'s expended polar density
distribution (~$\mathrm\phi = 90\hbox{$^\circ$}$) with $\mathrm{n_E}$ = 2.22
electrons/cc at 1 AU. A similar trio of temperature profiles were
shown in figure 3 of \citet{lemaire16}, obtained for the equatorial
density distribution, Seq, corresponding to \citetalias{saito70}'s
extended equatorial density distribution with $\mathrm{n_E}$ = 5.75
electrons/cm$^3$ at 1 AU (i.e. the blue dashed curve in figure
\ref{F-density}).
In the DYN models shown in figure \ref{F-temperature} and in all the
following it is assumed that the temperature of the coronal protons is
the same as that of the electrons ($\mathrm{T_p/T_e = \tau_p =
1}$). Furthermore, it is assumed that the concentration of heavier
ions is equal to zero ($\mathrm{n_{He^{++}}/n_{H^+} = \alpha = 0}$).
However, these questionable simplifications can easily be relaxed. In
the current MATLAB\textregistered\ code developed by \citet{lemaire16},
the value of $\alpha$ and $\mathrm{\tau_p}$ can be given different
constant values which are independent of r. Results for such more
evolved DYN models have been reported in Table 2 of \citet{lemaire16},
and will not be repeated here.
Comparing both figures (figure \ref{F-temperature} and figure 3 of
\citeauthor{lemaire16}, \citeyear{lemaire16}) it can be seen that:
\begin{enumerate}[(i)]
\item The maximum value of the SHM temperature distribution is
    always situated at somewhat higher altitudes than the maximum
value of the HST temperature. Indeed in the SHM method of
calculation of $\mathrm{T_e(r)}$, the effect of the temperature
gradient, $\mathrm{dT_e(r)/dr}$, is ignored (see
\citeauthor{lemaire16}, \citeyear{lemaire16}), while it is
properly taken into account in the HST method first developed by
\citet{alfven41}.
\item Both the SHM and the HST methods give maximum values,
$\mathrm{T_{e,max}}$, that are nearly equal to each other (circa 1
    MK, over the poles, and slightly larger than 1.2 MK, over the
equator).
\item The maximum of the DYN temperature is much larger than the
maximum of the HST temperature over the poles (see figure
\ref{F-temperature}), while over the equatorial region, these two
    temperature maxima are almost identical (see figure 3 of
    \citealt{lemaire16}); this is, of course, a consequence of the much
larger coronal density over the equator.
\item The DYN and HST methods give almost identical temperatures
profiles at low altitudes, for $\mathrm{h < 0.1~R_S}$. This result
is clearly foreseeable because at these lowest altitudes in the
inner corona the coronal plasma is almost in hydrostatic
equilibrium i.e. u(r) $\approx$ 0 both over the polar
and equatorial regions.
\item In the inner corona the temperature gradients are positive:
$\mathrm{dT_e(r)/dr}~>$ 0, but they tend to become smaller and
smaller when r decreases to $\mathrm{r_b}$. This is a basic
property satisfied both over the equator and the poles by all DYN
solutions. This trend is, however, at odds with the temperature
gradients predicted by the usual singular solutions of the
hydrodynamical transport equations. Indeed in the latter critical
hydrodynamical models the coronal temperature is in general a
decreasing function of r, even at the base of the corona.
\end{enumerate}
As a consequence of the much lower densities over the poles than over
the equator, the radial distributions of the DYN and HST temperatures
begin to depart from each other at much lower altitudes over the
poles, than over the equatorial region. This occurs at h $\gtrapprox$
0.2 $\mathrm{R_S}$ at high latitudes, while only at h $\gtrapprox$ 1
$\mathrm{R_S}$ (i.e.\ r $\gtrapprox$ 2 $\mathrm{R_S}$) at low
equatorial latitudes.
Many of the trends of the DYN temperatures outlined above and
illustrated in figure \ref{F-temperature}, as well as in figure 3 of
\citet{lemaire16}, are consistent with the observed properties of
solar coronal temperatures reported in reviews such as
\citet{echim11}.
The ongoing solar missions carry new kinds of instruments, such as the
Wide-field Imager for Solar Probe (WISPR, \citeauthor{vourlidas16},
\citeyear{vourlidas16}) and Metis \citep{antonucci19}, capable of
observing the $\mathrm{n_e(r)}$ distribution far more accurately and
much more regularly than previously. The first results from WISPR have
already been published by \citet{howard19b}, and their figure 1 is in
line with \citetalias{saito70}'s density profiles. More results from
these instruments are eagerly anticipated by the authors as valuable
input to the DYN models.
\section{The effects of the input parameters $\mathrm{n_E}$ and
$\mathrm{u_E}$ on the DYN temperature distribution}
\label{S-expansion}
\begin{figure}
\centerline{\includegraphics[width=\textwidth,clip]
{multi_eq_Te_nE.eps}}
\caption{Equatorial distributions of the DYN-temperature obtained
for the following sets of the SW bulk velocity and electron
density at 1 AU: $\mathrm{n_E}$= 1 $\mathrm{e^-/cm^3}$,
$\mathrm{u_E}$ = 329 km/s (blue-dashed line); $\mathrm{n_E}$
=5.75 $\mathrm{e^-/cm^3}$, $\mathrm{u_E}$ = 329 km/s
(black solid line); $\mathrm{n_E}$ = 30 $\mathrm{e^-/cm^3}$,
$\mathrm{u_E}$ = 329 km/s (red dotted line); and $\mathrm{n_E}$
= 5.75 $\mathrm{e^-/cm^3}$, $\mathrm{u_E}$ = 600 km/s
(dashed green line).}
\label{F-nE}
\end{figure}
Figure \ref{F-nE} displays the DYN temperature profiles based on
four different equatorial density profiles, $\mathrm{n_e(r,\phi)}$,
corresponding to \citetalias{saito70}'s extended density models for
$\mathrm\phi = 0\hbox{$^\circ$}$, $\mathrm{n_E}$ = 1.0; 5.75; 30.0
$\mathrm{e/cm^3}$, and $\mathrm{u_E}$ = 329; 600 km/s.
For the black curve in figure \ref{F-nE}, $\mathrm{u_E}$ = 329 km/s
and $\mathrm{n_E}$ = 5.75 $\mathrm{e/cm^3}$. These input parameters
are respectively the SW bulk velocity and number density at 1 AU of
the average slow SW flow (reported by \citeauthor{ebert09},
\citeyear{ebert09}). Two other curves (blue dashed and red dotted)
show the DYN temperatures respectively for smaller and larger values
of $\mathrm{n_E}$. It can be seen that when the solar wind density at
1 AU is reduced (from $\mathrm{n_E}$ = 5.75 to 1 $\mathrm{e/cm^3}$),
the temperature profile is slightly reduced and tends to the HST
temperature distribution. This can be seen by comparing this curve to
the Seq/HST curve in figure 3 of \citet{lemaire16}; indeed, the latter
was calculated by using the HST method introduced by
\citet{alfven41}. It comes as no surprise that the two are identical,
since for a negligible amount of SW at 1 AU the DYN model becomes
equivalent to the HST model.
However, when the SW density at 1 AU is arbitrarily enhanced
($\mathrm{n_E}$ = 30 $\mathrm{e/cm^3}$ or more), it can be seen that
a higher value of the maximum temperature is obtained in the
mid-corona, as expected, in order to boost the coronal plasma to
a bulk speed of $\mathrm{u_E}$ = 329 km/s or more at 1 AU.
The green dashed curve in figure \ref{F-nE} has a bump at h = 2
$\mathrm{R_S}$. This provides evidence that an enhanced maximum
temperature, and thus an enhanced coronal heating rate, is required in
the mid-corona in order to boost the SW speed at 1 AU up from 329 km/s
(black curve, corresponding to a slow SW flow) to 600 km/s (green
curve, corresponding to a fast wind speed).
The remarkable convergence at low altitudes of all four curves shown
in figure \ref{F-nE} tells us that the coronal temperature in the
inner corona is almost unaffected by $\mathrm{n_E}$ and
$\mathrm{u_E}$, the SW density and speed at 1 AU. It is basically the
maximum DYN temperature within the mid-corona that determines the SW
at 1 AU and in the distant interplanetary medium. In other words, to
enhance the SW expansion velocities and/or the plasma densities in the
interplanetary medium, increased heating is not required at the base
of the corona, but higher up in the corona, at a radial distance of
3-4 $\mathrm{R_S}$. This is a most important new finding grounded in
our DYN model calculations.
\section{The differences between DYN models and Parker's hydrodynamical models}
\label{S-boundary}
The fundamental limitations of the hydrostatic coronal models are well
understood. They were first pointed out in the papers by
\citet{parker58,parker63}. The hydrodynamical plasma transport
equations he introduced were integrated upwards from a low altitude
reference level, $\mathrm{r_0}$, up to infinity. A very precisely
chosen boundary condition, $\mathrm{u_0}$, had to be imposed at
$\mathrm{r_0}$ to obtain a continuous solution for u(r) crossing a
saddle point at the altitude where the SW expansion velocity becomes
supersonic. Any other slightly different boundary condition would
produce diverging steady state solutions. In the 1960s this critical
solution for the SW flow velocity was compared to the similar
hydrodynamical solution describing the supersonic flow velocity in a
de Laval nozzle.
On the contrary, the DYN distributions of u(r) are continuous,
non-singular solutions of the continuity and momentum equations
describing the SW expansion. Indeed they are not characterized by a
saddle point. In the DYN models these hydrodynamical transport
equations are integrated downwards from 1 AU, where appropriate
boundary conditions are taken as free input parameters. As indicated
above, this has the remarkable advantage of generating wide ranges of
continuous solutions for the SW expansion velocity, and for the
electron temperature distributions. Furthermore, all the latter
DYN temperature profiles happen to converge at the base of the corona
to the HST temperature, which corresponds to the hydrostatic model,
whatever values of $\mathrm{n_E}$ and $\mathrm{u_E}$ may have
been assumed at 1 AU.
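To make the downward-integration procedure concrete, the following
minimal Python sketch reproduces its logic for a single radial flux
tube. It is an illustration of the method only, not the MATLAB code of
\citet{lemaire16}: the density law, the boundary temperature at 1 AU,
and the purely radial cross-section A(r) $\propto$ r$^2$ are all
simplifying assumptions introduced here.
\begin{verbatim}
# Minimal sketch of the DYN downward integration (illustrative only).
# Assumptions: T_p = T_e, alpha = 0, radial flux tube A(r) ~ r^2, a
# Saito-like placeholder density law, and a free boundary value T_E.
import numpy as np

G, M_sun, R_sun = 6.674e-11, 1.989e30, 6.957e8   # SI units
k_B, m_p, AU = 1.381e-23, 1.673e-27, 1.496e11

def n_e(r):
    # Placeholder Saito-like equatorial power law [m^-3]; the
    # coefficients are illustrative, not the published S70 fit.
    x = r / R_sun
    return 3.1e14 * x**-16 + 1.6e14 * x**-6 + 2.5e12 * x**-2.5

r = np.logspace(np.log10(AU), np.log10(1.003 * R_sun), 4000)
n, A = n_e(r), r**2
u_E, T_E = 329e3, 2e5                    # free inputs at 1 AU (assumed)
u = n[0] * u_E * A[0] / (n * A)          # continuity: n u A = const
dudr = np.gradient(u, r)
# Momentum equation for p = 2 n k T, integrated downwards from 1 AU:
#   dp/dr = -n m_p (G M / r^2 + u du/dr)
p = np.empty_like(r)
p[0] = 2.0 * n[0] * k_B * T_E
for i in range(len(r) - 1):
    rhs = -n[i] * m_p * (G * M_sun / r[i]**2 + u[i] * dudr[i])
    p[i + 1] = p[i] + rhs * (r[i + 1] - r[i])
T = p / (2.0 * n * k_B)                  # DYN temperature profile T_e(r)
\end{verbatim}
Setting $\mathrm{u_E}$ = 0 makes u(r) vanish identically, so that the
loop reduces to a hydrostatic integration and recovers the HST limit
discussed above.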
The singular solutions crossing a saddle point lead to coronal
temperatures that maximize at the base of the corona, having negative
temperature gradients in the inner corona. This is in contrast to
the DYN temperature profiles, which always predict that the values of
$\mathrm{dT_e(r)/dr}$ are positive and small, both over the coronal
poles (see figure \ref{F-temperature}), and over the equator (see
figure \ref{F-nE}).
Note, however, that these nearly uniform values of $\mathrm{T_e(r)}$ in
the inner corona are larger in the equatorial region (1.0-1.3 MK) than
at high latitudes, over the poles and in coronal holes (0.7-0.8 MK).
This remarkable difference of temperatures between the equatorial and
polar regions in the inner corona is well supported by SKYLAB and
other EUV and X-ray observations.
Although in the inner corona ($\mathrm{h < 0.1 R_S}$) the nearly
isothermal polar temperatures are much lower than the equatorial ones,
the DYN models predict that the reverse is true at higher altitudes:
i.e. in the mid-corona for h = 2-3 $\mathrm{R_S}$, when realistic
values are adopted for $\mathrm{n_E}$ and $\mathrm{u_E}$.
From the results presented above, it can be seen that at such higher
altitudes the maximum of the DYN temperatures is then much larger over
the poles or in coronal holes ($\mathrm{T_{e,max}(r)}$ = 1.8-2.5 MK),
than over the equatorial regions ($\mathrm{T_{e,max}(r) < 1.3}$ MK).
This result implies, thus, that much larger electron temperatures are
indeed needed at mid-altitudes in coronal holes to accelerate SW
streams to high speeds (600 km/s or more), than is needed over the
equatorial regions from where the slower SW streams (330 km/s or so)
are suspected to originate.
This conclusion is fully consistent with Parker's expectation in the
1960s, and with that of \citet{lemaire71,lemaire73} in the 1970s, that
larger coronal temperatures would necessarily be required in the corona, to
boost the coronal plasma to larger speeds in interplanetary
space. This leads us to infer that larger energy deposition rates are
needed in the mid-corona, but not necessarily at lower altitudes
inside the inner-corona.
Although in the present paper we do not address the pending issue of
possible coronal heating mechanisms able to account for the DYN
temperature profiles displayed in figures \ref{F-temperature} and
\ref{F-nE}, we wish to insist that these profiles have their maximum
in the mid-corona, not in the transition region at
$\mathrm{h_{tr}\approx 0.003~R_S}$.
\section{Conclusions}
\label{S-Conclusions}
The calculated DYN temperature distributions have been compared with
those determined by using older methods of calculation, especially
the SHM, which is commonly used under the assumption that the corona is
in hydrostatic and isothermal equilibrium. It has been shown that these
untenable assumptions lead to coronal electron temperature
distributions that are quite different from those obtained by the DYN
method introduced by \citet{lemaire16}.
The DYN model is a straightforward extension of the hydrostatic model
developed decades ago by \citet{alfven41}. Indeed, this more
general model takes into account the radial expansion of the coronal
plasma, without any transverse motion of plasma across magnetic field
lines. Here it has been assumed that the coronal plasma is
flowing up in open flow tubes that coincide with magnetic flux tubes
whose geometry and cross-section, A(r), are the same as those adopted
by \citet{Kopp76}. Here A(r) is an ad-hoc analytic input function
associated with the DYN model.
The radial electron density distribution, $\mathrm{n_e(r)}$, is an
additional input function required to create a DYN model. It can be
derived from WL eclipse observations, or from an empirical model like
that of \citetalias{saito70}. Unfortunately, for A(r) there does not
yet exist such a ``steering oar'' to guide our ``educated guesses''.
After having pointed out the major differences between the temperature
profiles obtained by the new DYN method in comparison to the SHM and
HST models, we have shown that in all cases the calculated coronal
temperatures have a maximum value in the mid-corona, and never at the
base of the corona, as often implied in publications.
This important finding has led us to the conjecture that the source
of the coronal heating is not at the base of the corona, but higher up
in the mid-corona, where the DYN temperature distributions have a
well defined maximum at all heliospheric latitudes. Note that even the
HST temperature and SHM temperature profiles have temperature maxima
well above the base of the corona.
These theoretical results put in question the common hypothesis that
the corona is heated exclusively from below. Indeed, although
widespread, this belief is only a hypothesis based on the reasonable
expectation that heating takes place where the energy density is at
its highest. However, to the best of the authors' knowledge, there is
no evidence to rule out the possibility that a significant amount of
heating takes place higher up.
Conversely, the DYN model cannot rule out the existence of the
commonly-suggested heating mechanisms (such as those based on
reconnection or magneto-hydrodynamical damping). As can be seen in
figures \ref{F-temperature} and \ref{F-nE} and also in
\citet{lemaire16}, the temperature calculated at $\mathrm{r_b}$ is
much higher than the observed chromospheric temperatures.
Obviously this cannot be considered as the end of the SW modelling
venture, but it is nevertheless a basic new step ahead. Much more
elaborate and difficult work remains to be done to model the kinetic
pressure anisotropies of the electron and ionic populations, as well
as their mutual collisional interactions; the most important and
interesting challenge remaining is, of course, the determination of
the coronal heat deposition rate versus heliospheric distance and
latitude.
\begin{acks}
We acknowledge the logistic support of BELSPO, the Belgian Space
Research Office, as well as the help of the IT teams of BIRA and ROB.
JFL also wishes to thank Viviane Pierrard (BIRA-IASB), Marius Echim
(BIRA-IASB), Koen Stegen (ROB), and the early assistance of Cl\'ement
Botquin, IT student hired during the Summer 2010. We acknowledge
Serge Koutchmy for his interest in our work, and for pointing out on
page 3664 of \citet{lemaire16} a mistake in the given numerical
value of H (the density scale height at the base of the corona when
the temperature is assumed to be equal to 1 MK). ACK acknowledges
funding from the Solar-Terrestrial Centre of Excellence (STCE), a
collaborative framework funded by the Belgian Science Policy Office
(BELSPO). Some work for this paper was also done in the framework of
the SOL3CAM project (Contract: BR/154/PI/SOL3CAM), funded by BELSPO.
The authors are not aware of any conflicts of interest. The authors
have no relevant financial or non-financial interests to disclose. All
work done for this article was funded by the Royal Belgian Institute
for Space Aeronomy (BIRA-IASB), and the Royal Observatory of Belgium
(ROB). Both institutes are funded by the BELgian Science Policy Office
(BELSPO).
\end{acks}
\bibliographystyle{spr-mp-sola}
|
2,869,038,153,993 | arxiv | \section{Introduction}\label{sec1}
Single-view multi-object tracking (MOT) has been extensively explored in recent years \cite{tracktor_2019_ICCV,wang2020towards,centertrack,wang2019exploit,zhang2021fairmot,wang2022recent,braso2022multi}. However, the limitation of the single viewpoint causes occluded objects to be lost in long-term tracking \cite{wang2021track,wang2022split}.
The above issue can be alleviated under tracking with the cross-view setting
\cite{hofmann2013hypergraphs,xu2017cross,han2020cvmht}.
Specifically, given multiple synchronized videos capturing the same scene from different viewpoints, there is a high probability that an object occluded in one view is visible in another. The complementary information across views can thus compensate for the occlusions that limit single-view monitoring.
Due to its effectiveness, cross-view multi-object tracking has attracted considerable interest, and numerous cross-view tracking methods \cite{ayazoglu2011dynamic,hofmann2013hypergraphs,tang2018joint,fleuret2007multicamera,liu2016multi,xu2017cross,han2020cvmht,gan2021mvmhat} have been proposed in the literature. For example, some cross-view tracking methods focus on excavating information from multi-views \cite{ayazoglu2011dynamic,hofmann2013hypergraphs,tang2018joint}. Some methods explore new formulations and solutions to the problem \cite{fleuret2007multicamera,liu2016multi}. Moreover, some recent works \cite{xu2017cross,han2020cvmht} apply graph clustering to the cross-view tracking problem.
However, due to the limitations of current cross-view tracking datasets, several significant challenges still exist when comparing the present and exploring new approaches for cross-view tracking. On the one hand, although various cross-view tracking datasets \cite{fleuret2007multicamera,xu2017cross,chavdarova2018wildtrack,gan2021mvmhat} have appeared in recent years, these existing datasets have significant drawbacks.
To be specific, existing datasets suffer from 1) missing real-world scenarios, 2) lacking diverse tracking scenes, and 3) containing only a limited number of tracks. Hence, these datasets can hardly be used to test the efficacy of cross-view tracking approaches comprehensively. Moreover, the vast majority of videos in existing datasets were captured with static cameras, restricting research on tracking algorithms for moving cameras.
To overcome the aforementioned difficulties and facilitate future research on cross-view tracking, we present a novel cross-view dataset for multi-object tracking in \textbf{DIV}erse \textbf{O}pen Scenes, dubbed \textit{DIVOTrack}.
In particular, our DIVOTrack dataset has the following primary characteristics:
1) DIVOTrack video recordings are captured in real-world circumstances and contain a mixture of a limited number of pre-selected individuals and a large number of non-experimental pedestrians.
2) DIVOTrack offers diverse scenes. It contains outdoor and indoor scenes with various surrounding environments, such as streets, shopping malls, buildings, squares, and public infrastructures.
3) DIVOTrack provides a large collection of IDs and tracks focusing on crowded settings. It has a total of 999 single-view tracks and 550 cross-view tracks, both of which are significantly larger than the previous cross-view multi-object tracking datasets.
4) DIVOTrack contains a large movement of cameras, enabling the study of cross-view tracking with moving cameras in the community.
In addition to the proposed DIVOTrack dataset, we propose an end-to-end cross-view multi-object tracking baseline framework named \textit{CrossMOT} to learn object embeddings from multiple views, extended from the single-view tracker FairMOT \cite{zhang2021fairmot}. CrossMOT is a unified joint detection and cross-view tracking framework, which uses an integrated embedding model for object detection, single-view tracking, and cross-view tracking.
Specifically, CrossMOT uses decoupled multi-head embedding that can learn object detection, single-view feature embedding, and cross-view feature embedding simultaneously.
To address the ID conflict problem between cross-view and single-view embeddings, we use locality-aware and conflict-free loss to improve the embedding performance. During the inference stage, the model takes advantage of the joint detector as well as separate embeddings for cross-frame association and cross-view matching.
Our main contributions are summarized as follows.
\begin{itemize}
\item A novel cross-view multi-object tracking dataset is proposed, which is more realistic and diverse, has more crowded tracks, and incorporates moving cameras. The dataset has high image quality and clean ground truth labels.
\item We propose a novel cross-view tracker termed \textit{CrossMOT}, which is the first work to extend joint detection and embedding from single-view tracking to the cross-view setting. The proposed CrossMOT is an all-in-one embedding model that simultaneously learns object detection, single-view, and cross-view features.
\item
We build a standardized benchmark for cross-view tracking evaluation. Extensive experiments are conducted using baseline tracking methods, including single-view and cross-view tracking. We show that the proposed CrossMOT achieves high cross-view tracking accuracy and significantly outperforms state-of-the-art (SOTA) methods on DIVOTrack, MvMHAT \cite{gan2021mvmhat} and CAMPUS \cite{xu2016multi}. The experimental results can serve as a reference for future research.
\end{itemize}
The outline of the paper is as follows: In Section~\ref{sec:related_work}, we review SOTA cross-view MOT methods and datasets. Section~\ref{sec:wild_scene} describes the details of the proposed DIVOTrack dataset. We introduce our proposed CrossMOT in Section~\ref{sec:method}. The experiments of baseline methods on the benchmark are provided in Section~\ref{sec:exp}, followed by the conclusion and future work in Section~\ref{sec:conclude}.
\section{Related Work}
\label{sec:related_work}
\subsection{Inter-Camera Tracking}
Generally, inter-camera tracking \cite{tesfaye2017multi,cai2014exploring,lee2017online,tang2018single,hsu2019multi,hsu2021multi,ma2021deep} does not assume overlapping views between cameras. Usually, an object may leave the view of one camera and then enter the view of another camera. Research in this category attempts to match single-camera trajectories across non-overlapping cameras by exploiting intrinsic information of objects, such as appearance features \cite{tesfaye2017multi,cai2014exploring}, motion patterns \cite{hofmann2013hypergraphs}, and camera topological configuration \cite{lee2017online}. For appearance cues, \cite{zhang2017multi} uses convolutional neural networks (CNNs) to generate the feature
representation for each target and proposes a feature re-ranking mechanism to find correspondences among tracklets. \cite{ristani2018features} considers not only the CNN-based appearance features but also motion patterns. Moreover, it formulates the inter-camera MOT task as a binary integer program problem and proposes the deep feature correlation clustering approach to match the trajectories of a single camera to all other cameras. Some works consider the camera topology in inter-camera MOT. For example, \cite{cheng2017part} attempts to match local tracklets between every two neighboring cameras.
\subsection{Cross-View Tracking}
Cross-view tracking is one specific category of inter-camera tracking with shared large overlapping views among different cameras. Cross-view tracking has not been widely explored due to the challenges of data collection, cross-view object association, and multi-modality feature fusion.
Some existing methods focus on excavating multi-view information, such as \cite{ayazoglu2011dynamic,hofmann2013hypergraphs,tang2018joint}. Some focus on new problem formulations and solutions, such as \cite{fleuret2007multicamera,liu2016multi}.
Recent works \cite{xu2017cross,han2020cvmht} formulate cross-view tracking as a graph clustering problem. The graph is constructed with detections or tracklets as nodes. Afterward, the similarities between nodes are measured with appearance and motion features. However, the similarity measure is based on hand-crafted feature fusion, which may be sub-optimal. Besides, optimization for graph clustering in the inference stage is usually computationally expensive. How to automatically combine features from different modalities is still an open question in the cross-view tracking area.
\begin{figure*}[!ht]
\begin{center}
\includegraphics[width=0.9\linewidth]{figs/dataset_fig4.png}
\end{center}
\caption{Examples of the DIVOTrack dataset. From top to bottom: \textit{Circle}, \textit{Shop}, \textit{Side}, and \textit{Ground} scenes of three views, respectively. The same person that appears in different views is shown in the same color.}
\label{fig:example}
\end{figure*}
\subsection{Cross-View Tracking Datasets}
There are several existing commonly used cross-view multi-object tracking datasets, including EPFL \cite{fleuret2007multicamera}, CAMPUS \cite{xu2017cross}, WILDTRACK \cite{chavdarova2018wildtrack}, and MvMHAT \cite{gan2021mvmhat}. EPFL dataset is one of the traditional cross-view tracking datasets, captured in three or four different views by static cameras. The major limitation of this dataset is that almost all sequences are captured in the experimental environment, which is far from real-world scenarios. Besides, the videos have very low resolutions, causing difficulty in learning informative appearance embeddings of the objects. CAMPUS dataset contains more realistic scenarios. However, most subjects are pre-selected, and the ground truth annotations are not very accurate. WILDTRACK is captured in an outdoor square with crowded pedestrians. However, it only contains one single scene, and some of the pedestrians are not annotated, hindering the usage of the dataset. MvMHAT is one of the recently released datasets. MvMHAT still suffers from a very limited number of subjects, and all the video recordings are collected in an identical and experimental environment. Compared with these datasets, our DIVOTrack is more realistic and diverse, has more crowded tracks, and incorporates dynamic scenes.
\section{DIVOTrack Dataset}
\label{sec:wild_scene}
We present a self-collected cross-view dataset for diverse open scenes, namely DIVOTrack, to facilitate cross-view tracking research in the community. The dataset collection, annotation, and statistics are explained as follows.
\subsection{Data Collection}
We collect data in 10 different real-world scenarios, including indoor and outdoor public scenes. All the sequences are captured by using three moving cameras and are manually synchronized. We pre-select a specific overlapping area among different views for each scene. All the cameras were controlled to shoot at the pre-selected area with random smooth movements.
The data is collected from ten diverse open scenes with varying population densities and public spaces. It contains nine outdoor scenes and one indoor scene from streets, shopping malls, gardens, and squares, namely \textit{Circle}, \textit{Shop}, \textit{Moving}, \textit{Park}, \textit{Ground}, \textit{Gate1}, \textit{Floor}, \textit{Side}, \textit{Square}, and \textit{Gate2}. There are both moving dense crowds and sparse pedestrians in outdoor scenes. The surrounding environment of outdoor scenes is diverse, including streets, vehicles, buildings, and public infrastructures. Meanwhile, the indoor scene comes from a large shopping mall, with a more complicated and severe occlusion of the crowd than the outdoor environment.
We record each scene with three types of moving cameras: one is mounted on a flying UAV with a resolution of $1920\times 1080$, overlooking the ground with a pitch angle of around $45^{\circ}$; the other two cameras are from mobile phones held by two people, with resolutions of $3640\times 2048$ and $1920\times 1080$, respectively. All the raw videos are recorded at 60 FPS. We record two sets of videos for every scene using all three cameras. One set is for training, while the other is for testing. In total, we have 60 video sequences captured in 10 scenes. It should be noted that during the recording process, both the UAV and the mobile phone cameras have a certain degree of shaking, which is normal for moving-camera recording.
After recording, we synchronize the videos manually. We align the timestamps with the beginning and ending frames of each recording batch. Since the FPS of all cameras is 60, the synchronization error ranges between $-1/120$ and $+1/120$ seconds, i.e., it is bounded by about 8 ms. Because pedestrian movement is not fast, this synchronization error is acceptable for the pedestrian tracking task. After alignment, each video is downsampled to 30 FPS. For people who are close to the camera, we also add mosaics on human faces.
\subsection{Data Annotation}
For data annotation, we aim to obtain the ground truth bounding box and global pedestrian ID across different views in each scene. The data annotation contains three main steps, \textit{i.e.}, track initialization, single-view correction, and cross-view matching. The annotation process is demonstrated as follows.
We utilize a pre-trained single-view tracker to initialize the object bounding boxes and tracklets, which significantly reduces the labor cost of annotation. Specifically, CenterTrack \cite{centertrack} is adopted to generate the raw tracks, using a model pre-trained on the MOT Challenge dataset \cite{milan2016mot16}.
To further save labeling time, we manually correct the tracking results, including both bounding boxes and IDs, for every ten frames. After correction, the boxes are linearly interpolated for intermediate frames.
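For illustration, the interpolation step can be sketched as follows; the function name and the box format [x, y, w, h] are our own choices, not those of the actual annotation tool.
\begin{verbatim}
# Sketch of the annotation interpolation: boxes corrected every 10
# frames are linearly interpolated for the intermediate frames.
import numpy as np

def interpolate_track(keyframes, boxes):
    """keyframes: sorted frame indices of corrected frames;
    boxes: (len(keyframes), 4) array of [x, y, w, h]."""
    frames = np.arange(keyframes[0], keyframes[-1] + 1)
    coords = [np.interp(frames, keyframes, boxes[:, k])
              for k in range(4)]
    return frames, np.stack(coords, axis=1)

frames, boxes = interpolate_track(
    [0, 10, 20],
    np.array([[100, 50, 40, 80],
              [110, 52, 40, 82],
              [124, 55, 42, 84]], dtype=float))
\end{verbatim}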
After the single-view correction, the objects that appear in multiple views are still not matched, so the same global IDs should be assigned to identical objects across all views.
Based on the corrected single-view tracklets, objects that appear in two or three views are re-assigned the same ID. The IDs are renumbered according to the first time the object appears in any of the three views.
Ultimately, tracklets matched across different views are assigned an identical global ID. A tracklet that only appears in a single view is also assigned its own global ID.
\subsection{Dataset Statistics}
We count the number of bounding boxes and tracks in the training and testing sets for each scene. We also show the important statistics of the dataset for both single-view and cross-view tracking.
\begin{figure}
\centering
\subfigure[Number of training set boxes.]{
\begin{minipage}[b]{0.5\textwidth}
\includegraphics[width=0.9\textwidth]
{figs/dataset_fig21.pdf}
\end{minipage}
}
\subfigure[Number of testing set boxes.]{
\begin{minipage}[b]{0.5\textwidth}
\includegraphics[width=0.9\textwidth]{figs/dataset_fig22.pdf}
\end{minipage}
}
\caption{Numbers of boxes for each view on training and testing set, respectively. The different colors of each bar represent different views.} \label{fig:box}
\end{figure}
\begin{figure}
\centering
\subfigure[Number of training set tracks.]{
\begin{minipage}[b]{0.5\textwidth}
\includegraphics[width=0.9\textwidth]{figs/dataset_fig31.pdf}
\end{minipage}
}
\subfigure[Number of testing set tracks.]{
\begin{minipage}[b]{0.5\textwidth}
\includegraphics[width=0.9\textwidth]{figs/dataset_fig32.pdf}
\end{minipage}
}
\caption{Number of tracks for each view on training and testing set, respectively. The different colors of each bar represent different views.} \label{fig:track}
\end{figure}
\subsubsection{Boxes and Tracks}
The detailed bounding box statistics of the DIVOTrack dataset are shown in Fig.~\ref{fig:box}. The whole DIVOTrack dataset has 560K boxes, of which 270K belong to the training set, while the rest belong to the test set. The different colors of each bar represent different views. The number of bounding boxes reflects the density of crowds in each scene. For example, there are fewer than 10K boxes in the \textit{Moving} scene but more than 50K boxes in the \textit{Ground} scene, demonstrating the diverse density of the dataset.
The number of tracks is shown in Fig.~\ref{fig:track}. We count the number of tracks in the 60 videos. We can observe a large variation in the number of tracks across scenes: for example, there are more than 140 tracks in \textit{Shop} but fewer than 20 tracks in \textit{Gate2} in the training set. Besides, the proportions of tracks among the three views are very close. These results demonstrate that our dataset has a sufficient number of cross-view matching trajectories.
\subsubsection{Cross-View Statistics}
To show the cross-view statistics, we plot the number of boxes from the same object across two and three views in the left part of Fig.~\ref{fig:cv_statistics}, respectively. There are more boxes across two views than three views in several scenes, such as \textit{Shop}, \textit{Floor}, and \textit{Side}, showing that some pedestrians are not visible from at least one view. This demonstrates that a large view angle variance exists in the dataset. To better present the dataset, we show some sampled frames of the dataset in Fig.~\ref{fig:example}. From top to bottom are examples of \textit{Circle}, \textit{Shop}, \textit{Side}, and \textit{Ground} scenes in three views, respectively. The same person that appears in different views is shown in the same color.
We also count the duration of object trajectories appearing across multiple views, as shown in the right part of Fig.~\ref{fig:cv_statistics}. The colored bars under each scene represent the average duration of pedestrians across different views. The cross-view overlapping duration accounts for over half of the total time, demonstrating that our dataset has sufficient cross-view tracklets.
\begin{figure}
\centering
\subfigure[Number of cross-view boxes.]{
\begin{minipage}[b]{0.5\textwidth}
\includegraphics[width=0.9\textwidth]{figs/dataset_fig41.pdf}
\end{minipage}
}
\subfigure[Number of cross-view track duration.]{
\begin{minipage}[b]{0.5\textwidth}
\includegraphics[width=0.9\textwidth]{figs/dataset_fig42.pdf}
\end{minipage}
}
\caption{The number of cross-view boxes and track duration for each scene on the training and testing set, respectively.}
\label{fig:cv_statistics}
\end{figure}
\begin{table*}[!t]
\small
\caption{Comparison between cross-view multi-object tracking datasets.}
\centering
\begin{tabular}{l|ccccc}
\toprule
Attribute & EPFL & CAMPUS & MvMHAT & WILDTRACK & \textbf{DIVOTrack} \\
\midrule
Scenes & 5 & 4 & 1 & 1 & \textbf{10}\\
Groups & 5 & 4 & 12 & 1 & \textbf{20}\\
Views & 3-4 & 4 & 3-4 & \textbf{7} & 3\\
Sequences & 19 & 16 & 46 & 7 & \textbf{60}\\
Frames & \textbf{97K} & 83K & 31K & 3K & 54K\\
Single-View Tracks & 154 & 258 & 178 & - & \textbf{999}\\
Cross-View Tracks & 41 & 70 & 60 & 313 & \textbf{550}\\
Boxes & \textbf{625K} & 490K & 208K & 40K & 560K\\
Moving Camera & No & No & \textbf{Yes} & No & \textbf{Yes}\\
Subject & Actor & Actor & Actor & \textbf{Mixed} & \textbf{Mixed}\\
\bottomrule
\end{tabular}
\label{tab:dataset}
\end{table*}
\subsection{Comparison with Existing Datasets}
There are several existing cross-view multi-object tracking datasets, namely EPFL \cite{fleuret2007multicamera}, CAMPUS \cite{xu2017cross}, MvMHAT \cite{gan2021mvmhat}, and WILDTRACK \cite{chavdarova2018wildtrack}. Most existing datasets usually have non-diverse scenes and a limited number of tracking objects.
Specifically, the EPFL dataset \cite{fleuret2007multicamera} contains five sequences: Terrace, Passageway, Laboratory, Campus, and Basketball. In general, each sequence consists of three or four different views and films 6-11 pedestrians walking or running around, lasting 3.5-6 minutes. Each view is shot at 25 FPS with a relatively low resolution of $360\times 288$.
CAMPUS dataset \cite{xu2017cross} contains four sequences, \textit{i.e.}, two gardens, one parking lot, and one auditorium, shot by four 1080P cameras. The recorded videos last three to four minutes with 30 FPS.
The MvMHAT dataset \cite{gan2021mvmhat} contains 12 video groups and 46 sequences, where each group includes three to four views. The videos are collected with four wearable cameras, \textit{i.e.}, GoPro, covering an overlapped area with multiple people from significantly different directions, \textit{e.g.}, near 90-degree view-angle difference. The videos are manually synchronized and annotated with bounding boxes and IDs on 30,900 frames.
The WILDTRACK dataset \cite{chavdarova2018wildtrack} is captured by seven static cameras with 60 FPS in seven distinct views. WILDTRACK provides a joint calibration and synchronization of sequences. There are about 3000 annotated frames, 40,000 bounding boxes, and over 300 individuals.
The detailed comparison is reported in Table~\ref{tab:dataset}.
We can observe that the DIVOTrack dataset has four main advantages.
1) DIVOTrack contains a mixture of a small number of pre-selected subjects and a large number of non-experimental walking pedestrians in the video recording, which is captured in real-world scenarios and is much more realistic than existing datasets.
2) DIVOTrack has more diverse scenes.
3) DIVOTrack has a much larger set of IDs and tracks, focusing on more crowded scenarios.
4) Our dataset contains a large movement of cameras, enabling cross-view tracking research with moving cameras in the community.
\section{CrossMOT}
\label{sec:method}
To demonstrate the effectiveness of the proposed DIVOTrack dataset and to address the challenges of cross-view tracking, a strong baseline cross-view tracking method is needed.
To jointly detect and track objects from multiple views, an effective embedding model is of great importance. Intuitively, the learned embeddings of the same object should have high similarity, and embeddings of different objects should have low similarity, following the metric learning framework. One could adopt shared embeddings for both single-view and cross-view tracking. However, we observe degradation in our experiments, since the objectives of single-view association and cross-view matching differ slightly. Specifically, single-view embeddings focus on learning temporal continuity, while cross-view embeddings focus on learning the view-invariant appearance of objects. As a result, cross-view tracking methods should account for both single-view and cross-view characteristics when learning embeddings.
In this section, we demonstrate the overview of cross-view tracking and our proposed baseline in Sub-section~\ref{sec:overview}. The details of the baseline are described in Sub-section~\ref{sec:decouple_emb}. The inference stage is provided in Sub-section~\ref{sec:inference}.
\subsection{Overview}
\label{sec:overview}
We first define the cross-view tracking task. Given a set of synchronized video sequences $\mathcal{V}=\{\boldsymbol{V}_1,\boldsymbol{V}_2,...,\boldsymbol{V}_{N}\}$ from multiple views of the same scene, where $N$ is the number of camera views and each video $\boldsymbol{V}_i$ contains $T_i$ successive frames $\{\boldsymbol{I}_1,\boldsymbol{I}_2,...,\boldsymbol{I}_{T_i}\}$, we aim to simultaneously detect and track objects across different views, \textit{i.e.}, to assign identical objects a shared global ID across frames and views.
We propose a novel cross-view tracker, namely \textit{CrossMOT}. The proposed CrossMOT adopts the backbone of CenterNet \cite{zhou2019objects}, denoted as $f(\cdot;\boldsymbol{\theta}_f)$, followed by three sub-networks, including a detection head $h_{d}(\cdot;\boldsymbol{\theta}_{d})$, a cross-view Re-ID head $h_{c}(\cdot;\boldsymbol{\theta}_{c})$, and a single-view Re-ID head $h_{s}(\cdot;\boldsymbol{\theta}_{s})$, where $\boldsymbol{\theta}_f$, $\boldsymbol{\theta}_{d}$, $\boldsymbol{\theta}_{c}$, $\boldsymbol{\theta}_{s}$ are model parameters.
However, when using multiple heads for separate embeddings, the definitions of ground truth IDs for single-view and cross-view tracking differ: the same object appearing in different videos is regarded as a different object in single-view tracking, since temporal continuity does not hold across videos. This causes a conflict issue in training.
In the following sub-sections, we introduce our proposed CrossMOT and illustrate how to decouple multi-head embedding and address the conflict issue in multi-task learning. We also summarize our association method for inference with separate embeddings.
\begin{figure*}[!t]
\centering
\includegraphics[width=\textwidth]{figs/framework.pdf}
\caption{The proposed CrossMOT framework. The input cross-view video clips are fed into the backbone and then followed by three heads for embedding. The blue and green arrows represent the forward and backward flow, respectively.
The detection branch provides the detection results, and the two embedding branches support the single-view and cross-view association.
}
\label{fig_fw}
\end{figure*}
\subsection{Decoupled Multi-Head Embedding}
\label{sec:decouple_emb}
The framework of CrossMOT is shown in Fig.~\ref{fig_fw}.
CrossMOT conducts object detection, single-view tracking, and cross-view tracking simultaneously. To fulfill these multiple tasks, we decouple the embedding into three head networks: an object detection head, a cross-view Re-ID head, and a single-view Re-ID head. The details are as follows.
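For concreteness, the head layout can be sketched in PyTorch as follows. This is a schematic reading of the architecture: the backbone channel width, the hidden width of each head, and the ID counts are illustrative assumptions, not the exact CrossMOT configuration.
\begin{verbatim}
# Schematic sketch of the decoupled three-head design (illustrative).
import torch.nn as nn

class CrossMOTHeads(nn.Module):
    def __init__(self, in_ch=64, emb_dim=512,
                 n_global_ids=550, n_local_ids=999):
        super().__init__()
        def head(out_ch):  # 3x3 conv -> ReLU -> 1x1 conv
            return nn.Sequential(
                nn.Conv2d(in_ch, 256, 3, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(256, out_ch, 1))
        self.heatmap = head(1)            # object-center heatmap
        self.size = head(2)               # box width / height
        self.offset = head(2)             # sub-pixel center offset
        self.cross_view = head(emb_dim)   # cross-view Re-ID embedding
        self.single_view = head(emb_dim)  # single-view Re-ID embedding
        # ID classifiers, used only at training time
        self.cls_global = nn.Linear(emb_dim, n_global_ids)
        self.cls_local = nn.Linear(emb_dim, n_local_ids)

    def forward(self, feat):              # feat: (B, C, H, W) backbone map
        return {'hm': self.heatmap(feat), 'wh': self.size(feat),
                'off': self.offset(feat), 'cv': self.cross_view(feat),
                'sv': self.single_view(feat)}
\end{verbatim}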
\subsubsection{Object Detection Embedding}
The object detection head follows CenterNet \cite{zhou2019objects}, which includes the prediction of object confidence heatmap $\boldsymbol{h}_{hm}$, object size $\boldsymbol{h}_{size}$, and the object offset $\boldsymbol{h}_{offset}$. The loss is defined as follows,
\begin{equation}
\mathcal{L}_{d} = \sum_{\boldsymbol{I} \in \mathcal{V}} \boldsymbol{w}_d^T \phi_{d}(h_d(f(\boldsymbol{I};\boldsymbol{\theta}_{f});\boldsymbol{\theta}_{d}), \boldsymbol{y}_{d}),
\label{loss_det}
\end{equation}
where $\boldsymbol{y}_d$ is the ground truth of the object class, size, and location heatmap; $\phi_{d}(\cdot, \boldsymbol{y})$ contains the individual detection losses, including the focal loss for classification and the $l_1$ loss for size and offset regression; $h_{d}(\cdot;\boldsymbol{\theta}_{d}) = \{{\boldsymbol{h}_{hm}},{\boldsymbol{h}_{size}},{\boldsymbol{h}_{offset}}\}$; and $\boldsymbol{w}_d=\{w_{{hm}},w_{{size}},w_{{offset}}\}$ are the weights of the individual losses.
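A possible realization of Eq.~(\ref{loss_det}) is sketched below. The focal loss follows the standard penalty-reduced CenterNet form; the weights and the dense evaluation of the $l_1$ terms are simplifications of ours (in practice the size and offset losses are evaluated only at annotated object centers).
\begin{verbatim}
# Sketch of the weighted detection loss of Eq. (1) (simplified).
import torch
import torch.nn.functional as F

def centernet_focal(pred, gt, alpha=2, beta=4, eps=1e-6):
    """Penalty-reduced focal loss; pred, gt: (B, 1, H, W),
    gt is a Gaussian-splatted heatmap in [0, 1]."""
    pos = gt.eq(1).float()
    pred = pred.clamp(eps, 1 - eps)
    pos_loss = ((1 - pred) ** alpha) * torch.log(pred) * pos
    neg_loss = ((1 - gt) ** beta) * (pred ** alpha) \
        * torch.log(1 - pred) * (1 - pos)
    return -(pos_loss.sum() + neg_loss.sum()) / pos.sum().clamp(min=1)

def detection_loss(pred, gt, w_hm=1.0, w_size=0.1, w_off=1.0):
    l_hm = centernet_focal(torch.sigmoid(pred['hm']), gt['hm'])
    l_size = F.l1_loss(pred['wh'], gt['wh'])   # dense here for brevity
    l_off = F.l1_loss(pred['off'], gt['off'])
    return w_hm * l_hm + w_size * l_size + w_off * l_off
\end{verbatim}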
\subsubsection{Cross-view Re-ID Embedding}
The cross-view Re-ID embedding provides the cross-view features used to associate the same person across different views of the same scene.
First, we extract the cross-view Re-ID embedding of each object based on its center pixel location.
Objects that correspond to the same identity across different views are assigned a unique global ID. The global IDs are unique over the entire training set, which includes multiple scenes with different views. We follow the conventional cross-entropy loss for the cross-view Re-ID, \textit{i.e.},
\begin{equation}
\mathcal{L}_{c} = \sum_{\boldsymbol{I} \in \mathcal{V}} \phi_c(h_{c}(f(\boldsymbol{I};\boldsymbol{\theta}_f);\boldsymbol{\theta}_{c}), \boldsymbol{y}^{GID}),
\label{loss_cross}
\end{equation}
where $\phi_c(\cdot, \cdot)$ is the cross-entropy loss, and $\boldsymbol{y}^{GID}$ is the one-hot vector of the object's global ID.
\subsubsection{Locality-aware and Conflict-free Single-view Embedding}
With the combination of object detection embedding and cross-view Re-ID embedding, the tracker can already achieve end-to-end cross-view tracking. However, from our experimental observations, sharing the cross-view Re-ID embedding degrades both the single-view association and the cross-view matching tasks. This is due to the different goals of the two tasks: single-view association focuses on the temporal continuity of the object embedding, without large pose and view variations, whereas cross-view matching focuses on view-independent features, such as the clothing colors, clothing types, and gaits of objects. As a result, we decouple the embedding into cross-view Re-ID and single-view Re-ID heads.
To learn the single-view Re-ID embedding, we follow the similar loss defined in Eq.~(\ref{loss_cross}), \textit{i.e.},
\begin{equation}
\mathcal{L}_{s} = \sum_{\boldsymbol{I} \in \mathcal{V}} \phi_s(h_{s}(f(\boldsymbol{I};\boldsymbol{\theta}_f);\boldsymbol{\theta}_{s}), \boldsymbol{y}^{LID}),
\label{loss_single}
\end{equation}
where $\boldsymbol{y}^{LID}$ represents the one-hot vector of the object local ID, in which only the same object appearing in the same video has an identical local ID. Unlike the global ID, the same object across different views is assigned different local IDs.
The cross-entropy loss is a common choice for $\phi_s(\cdot, \cdot)$ in Eq.~(\ref{loss_single}). However, with such a definition, there is a large conflict between the cross-view Re-ID loss and the single-view Re-ID loss, due to the different definitions of ground truth IDs: the same objects in other views are treated as positive samples in the cross-view Re-ID, while they are treated as negative samples in the single-view Re-ID. In our experiments this leads to a further degradation of the tracking performance. To address this conflict, we define a conflict-free cross-entropy loss as follows,
\begin{equation}
\begin{aligned}
& \phi_s = \\
& -\frac{1}{N_d}\sum_{i=1}^{N_d}\log\frac{e^{\boldsymbol{W}_{y_i}^T\boldsymbol{x}_i+b_{y_i}}}{e^{\boldsymbol{W}_{y_i}^T\boldsymbol{x}_i+b_{y_i}}+\sum_{y_j\neq y_i}\mathbbm{M}_{i,j}e^{\boldsymbol{W}_{y_j}^T\boldsymbol{x}_i+b_{y_j}}},
\label{conflict_free_softmax}
\end{aligned}
\end{equation}
where $\{\boldsymbol{W}_i, b_i\}$ are learnable parameters from the last fully-connected layer of the single-view Re-ID head with respect to the $i$-th local ID; $\boldsymbol{x}_i$ is the input feature to the last layer; $y_i$ is the local ID;
$N_d$ is the number of objects; and $\mathbbm{M}_{i,j}=\textbf{1}_{v_j=v_i}$ is the indicator function, which returns 1 if and only if the local IDs $y_i$ and $y_j$ are from the same video sequence, and 0 otherwise. In other words, we only apply the softmax cross-entropy loss over objects from the same video sequence, \textit{i.e.}, within the same view. Without the cross-view distraction, the single-view Re-ID avoids the conflict problem described above. Under this definition, the positive and negative samples used in the softmax remain consistent for both global IDs and local IDs. The process is demonstrated in the bottom-right of Fig.~\ref{fig_fw}.
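The masked softmax of Eq.~(\ref{conflict_free_softmax}) can be implemented compactly, as in the sketch below (our reading of the loss, not the reference implementation); \texttt{id\_to\_video} maps every local ID to the index of the video it comes from.
\begin{verbatim}
# Sketch of the conflict-free cross-entropy of Eq. (4).
import torch
import torch.nn.functional as F

def conflict_free_ce(logits, targets, id_to_video):
    """logits: (N, C) single-view ID logits W^T x + b;
    targets: (N,) local IDs; id_to_video: (C,) video index per ID."""
    # Keep only the columns whose local ID comes from the same video
    # as the target ID; mask out all cross-view columns.
    same_video = (id_to_video.unsqueeze(0)
                  == id_to_video[targets].unsqueeze(1))
    masked = logits.masked_fill(~same_video, float('-inf'))
    return F.cross_entropy(masked, targets)
\end{verbatim}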
\subsubsection{Final Loss}
Following FairMOT \cite{zhang2021fairmot}, we use learnable parameters $w_1$ and $w_2$ to balance the individual losses during training via the uncertainty loss \cite{kendall2018multi}. The final loss for training CrossMOT can be formulated as:
\begin{equation}
\mathcal{L}_{total}=\frac{1}{2}(\frac{1}{e^{w_1}}\mathcal{L}_{d}+\frac{1}{e^{w_2}} (\mathcal{L}_{s}+\mathcal{L}_{c})+w_1+w_2),
\end{equation}
where $w_1$ and $w_2$ balance the detection and Re-ID branches.
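Written out, the uncertainty-based balancing amounts to the short sketch below, with $w_1$ and $w_2$ as learnable log-variance parameters.
\begin{verbatim}
# Sketch of the uncertainty-weighted total loss of Eq. (5).
import torch

w1 = torch.nn.Parameter(torch.tensor(-1.85))
w2 = torch.nn.Parameter(torch.tensor(-1.05))

def total_loss(l_det, l_single, l_cross):
    return 0.5 * (torch.exp(-w1) * l_det
                  + torch.exp(-w2) * (l_single + l_cross)
                  + w1 + w2)
\end{verbatim}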
\subsection{Inference of CrossMOT}
\label{sec:inference}
In the inference stage, we first feed the image into the trained network, and the output of the detection head is decoded into bounding boxes. Each bounding box is matched with its corresponding single-view and cross-view features. After that, we keep the $n_i$ bounding boxes $\mathcal{B}_i=\{b_i^j\vert j=1,2,...,n_i\}$ in video $\boldsymbol{V}_i$ whose confidence is greater than the detection threshold ${\delta_d}$ and feed them into our tracking framework. In the tracking process, single-view association and cross-view matching are performed alternately.
We employ DeepSORT \cite{wojke2017simple} for single-view tracking.
For each frame $t$ in video $\boldsymbol{V}_i$, we first calculate the cost matrix $\boldsymbol{C}_i$ using the single-view Re-ID embeddings and then generate the gate matrix $\boldsymbol{G}_i$ to suppress the entries of $\boldsymbol{C}_i$ that exceed the single-view matching threshold $\delta_s$. After that, the permutation matrix $\boldsymbol{P}_s^{t,t-1}$ is obtained by applying the Hungarian algorithm \cite{kuhn1955hungarian} to the gated cost matrix.
For cross-view tracking, we follow MvMHAT \cite{gan2021mvmhat} and calculate the association matrix $\boldsymbol{A}^{ij} \in \mathbbm{R}^{n_i \times n_j}$ of frame $t$ in videos $\boldsymbol{V}_i$ and $\boldsymbol{V}_j$ as $\boldsymbol{A}^{ij} = \boldsymbol{E}_i \cdot \boldsymbol{E}_j^T$, where $\boldsymbol{E}_i \in \mathbbm{R}^{n_i \times K_c}$ and $\boldsymbol{E}_j \in \mathbbm{R}^{n_j \times K_c}$ are the cross-view embedding matrices of videos $\boldsymbol{V}_i$ and $\boldsymbol{V}_j$, respectively, and $K_c$ is the dimension of the cross-view embedding. We use a temperature-adaptive softmax operation \cite{hinton2015distilling} to compute the matching matrix $\boldsymbol{M}^{ij}$ as $\boldsymbol{M}^{ij}_{ab}=\frac{\exp(\tau \boldsymbol{A}^{ij}_{ab})}{\sum_{b'=1}^{A_c} \exp(\tau \boldsymbol{A}^{ij}_{ab'})}$, where $a$, $b$, and $A_c$ denote the row index, column index, and number of columns of $\boldsymbol{A}^{ij}$, respectively; the adaptive temperature $\tau$ is calculated from two predefined parameters $\epsilon$ and $\gamma$ as $\tau=\frac{1}{\epsilon} \log[\frac{\gamma(A_c-1)+1}{1-\gamma}]$. All entries of $\boldsymbol{M}^{ij}$ that are less than or equal to the cross-view matching threshold $\delta_c$ are set to 0. As in single-view tracking, we generate the permutation matrix $\boldsymbol{P}^{ij}_c$ by applying the Hungarian algorithm to $\boldsymbol{M}^{ij}$.
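The cross-view association step is summarized by the sketch below (illustrative; it assumes L2-normalized embeddings and omits the bookkeeping that merges pairwise matches into global IDs).
\begin{verbatim}
# Sketch of cross-view matching: similarity matrix, temperature-
# adaptive softmax, thresholding, and Hungarian assignment.
import numpy as np
from scipy.optimize import linear_sum_assignment

def cross_view_match(E_i, E_j, delta_c=0.5, eps=0.5, gamma=0.5):
    """E_i: (n_i, K_c), E_j: (n_j, K_c) L2-normalized embeddings."""
    A = E_i @ E_j.T                      # association matrix A^{ij}
    n_cols = A.shape[1]
    tau = np.log((gamma * (n_cols - 1) + 1) / (1 - gamma)) / eps
    expA = np.exp(tau * A)
    M = expA / expA.sum(axis=1, keepdims=True)
    M[M <= delta_c] = 0.0                # cross-view matching threshold
    rows, cols = linear_sum_assignment(-M)   # maximize total score
    return [(r, c) for r, c in zip(rows, cols) if M[r, c] > 0]
\end{verbatim}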
\section{Experiments}
\label{sec:exp}
\subsection{Experiment Settings and Tasks}
In our dataset, the 60 video sequences are in chronological order, with 20 groups in 10 scenes. The 30 videos from the first 10 groups are used for training, and the rest are used for testing. DIVOTrack can be used for research on object detection, single-view tracking, and cross-view tracking. In this paper, we mainly introduce the settings and tasks of single-view and cross-view tracking.
\subsubsection{Single-View Tracking Setting}
We treat each video sequence independently for single-view trackers.
We employ ID F1 measure (IDF1) \cite{ristani2016performance}, higher order tracking accuracy (HOTA) \cite{luiten2020hota}, multiple object tracking accuracy (MOTA), multiple object tracking precision (MOTP), mostly tracked targets (MT), mostly lost targets (ML), association accuracy (AssA), fragments (FM) and identity switches (IDSw) \cite{milan2016mot16} as the evaluation metrics of the tracking performance, which are widely used in MOT.
\subsubsection{Cross-View Tracking Setting}
Cross-view trackers, unlike single-view trackers, process multiple views within each batch of synchronized videos.
If the same object appears in different recordings from the same group, the object should have the same ID.
As for evaluation, we use the metrics introduced in \cite{han2020cvmht} as the standardized measurements, in which cross-view ID F1 metric (CVIDF1) and cross-view matching accuracy (CVMA) are proposed based on IDF1 and MOTA metrics. Specifically, CVIDF1 and CVMA are defined as follows,
\begin{equation}
\textrm{CVIDF1} = \frac{ 2\textrm{CVIDP} \times \textrm{CVIDR}}{\textrm{CVIDP}+\textrm{CVIDR}},
\end{equation}
\begin{equation}
\textrm{CVMA} = 1-\frac{\sum_t (\textrm{m}_t+\textrm{fp}_t+2\,\textrm{mme}_t)}{\sum_t \textrm{gt}_t},
\label{eq:cvma}
\end{equation}
where CVIDP and CVIDR denote the cross-view object matching precision and recall, respectively. $\textrm{m}_t$, $\textrm{fp}_t$, $\textrm{mme}_t$, and $\textrm{gt}_t$ are the numbers of misses, false positives, mismatched pairs, and the total number of objects in all views at time $t$, respectively.
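As a worked illustration, CVMA reduces to the following computation over per-frame error counts (a sketch of Eq.~(\ref{eq:cvma}), assuming the counts have already been accumulated over all views).
\begin{verbatim}
# Sketch of CVMA from per-frame counts over all views, Eq. (8).
def cvma(misses, false_positives, mismatches, gts):
    errors = sum(m + fp + 2 * mme for m, fp, mme in
                 zip(misses, false_positives, mismatches))
    return 1.0 - errors / sum(gts)
\end{verbatim}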
\begin{table*}[!t]
\small
\caption{Comparison between single-view tracking baseline methods. The best performance is shown in bold.}
\centering
\begin{tabular}{l|ccccccccc}
\toprule
Methods & HOTA$\uparrow$ & IDF1$\uparrow$ & MOTA$\uparrow$ & MOTP$\uparrow$ & MT$\uparrow$ & ML$\downarrow$ & AssA$\uparrow$ & IDSw$\downarrow$ & FM$\downarrow$\\
\midrule
Deepsort \cite{wojke2017simple} & 52.3 & 58.3 & 76.3 & 81.0 & 382 & 73 & 43.6 & 2,013 & 2,521 \\
CenterTrack \cite{centertrack} & 54.2 & 61.0 & 72.3 & 80.3 & \textbf{479} & 49 & 48.1 & 1,732& \textbf{2,438} \\
Tracktor \cite{tracktor_2019_ICCV} & 46.7 & 54.5 & 63.7 & 80.4 & 420 & \textbf{33} & 39.3 & 1,517 & 3,601 \\
FairMOT \cite{zhang2021fairmot} & \textbf{62.7} & \textbf{76.1} & \textbf{78.8} & 81.8 & 417 & 66 & \textbf{60.9} & \textbf{788} & 3,725\\
TraDes \cite{Wu2021TraDeS} & 57.5 & 66.0 & 72.6 & \textbf{82.0} & 467 & 53 & 52.9 & 1,341 & 2,612\\
\bottomrule
\end{tabular}
\label{tab:singelview}
\end{table*}
\begin{table*}[!t]
\small
\centering
\caption{Comparison between cross-view tracking baseline methods on DIVOTrack with CVMA (``CA'') and CVIDF1 (``C1''). The best performance is shown in bold.}\label{tab:crossview}
\begin{tabular}{L{2cm}|C{0.8cm}C{0.8cm}|C{0.8cm}C{0.8cm}|C{0.8cm}C{0.8cm}|C{0.8cm}C{0.8cm}|C{0.8cm}C{0.8cm}}
\toprule
Scenes & \multicolumn{2}{|c}{Circle} & \multicolumn{2}{|c}{Shop} & \multicolumn{2}{|c}{Moving} & \multicolumn{2}{|c}{Park} & \multicolumn{2}{|c}{Ground} \\
\midrule
Methods & CA$\uparrow$ & C1$\uparrow$ & CA$\uparrow$ & C1$\uparrow$ & CA$\uparrow$ & C1$\uparrow$ & CA$\uparrow$ & C1$\uparrow$ & CA$\uparrow$ & C1$\uparrow$ \\
\cmidrule(r){1-11}
OSNet \cite{zhou2019osnet} & 34.7 & 46.3 & 47.9 & 47.2 & 38.2 & 48.6 & 29.1 & 42.4 & 21.5 & 36.8\\
Strong \cite{Luo_2019_Strong_TMM} & 35.8 & 42.9 & 50.6 & 39.5 & 43.1 & 43.4 & 36.6 & 47.4 & 27.4 & 36.4\\
AGW \cite{pami21reidsurvey} & 55.4 & 55.8 & 54.6 & 44.3 & 42.0 & 45.6 & 52.9 & 62.3 & 46.7 & 48.1\\
MvMHAT \cite{gan2021mvmhat} & 67.5 & 65.1 & 55.2 & 49.6 & 50.7 & \textbf{55.7} & 55.5 & 63.4 & 46.6 & 48.8\\
CT \cite{wieczorek2021unreasonable} & 69.6 & 67.1& 56.2 & 49.7 & 47.5 & 53.1 & 62.8 & 69.9 & 54.7 & 53.4\\
MGN \cite{wang2018learning} & 34.7 & 41.6 & 46.8 & 37.7 & 35.4 & 36.6 & 29.1 & 40.8 & 21.7 & 29.6\\
\textbf{CrossMOT} & \textbf{72.8} & \textbf{73.3} & \textbf{58.7} & \textbf{51.0} & \textbf{53.1} & 48.7 & \textbf{75.7} & \textbf{78.7} & \textbf{61.1} & \textbf{63.6} \\
\midrule
Scenes & \multicolumn{2}{|c}{Gate1} & \multicolumn{2}{|c}{Floor} & \multicolumn{2}{|c}{Side} & \multicolumn{2}{|c}{Square} & \multicolumn{2}{|c}{Gate2} \\
\midrule
Methods & CA$\uparrow$ & C1$\uparrow$ & CA$\uparrow$ & C1$\uparrow$ & CA$\uparrow$ & C1$\uparrow$ & CA$\uparrow$ & C1$\uparrow$ & CA$\uparrow$ & C1$\uparrow$ \\
\cmidrule(r){1-11}
OSNet \cite{zhou2019osnet} & 28.9 & 48.9 & 35.9 & 42.4 & 44.3 & 53.0 & 24.4 & 42.1 & 22.0 & 47.8\\
Strong \cite{Luo_2019_Strong_TMM} & 41.9 & 56.7 & 42.4 & 38.1 & 49.6 & 54.7 & 35.5 & 45.5 & 24.7 & 50.7\\
AGW \cite{pami21reidsurvey} & 59.8 & 68.4 & 57.1 & 47.8 & 58.2 & 59.9 & 49.0 & 55.7 & 79.1 & 87.4\\
MvMHAT \cite{gan2021mvmhat} & 59.8 & 70.8 & 62.2 & 60.7 & 62.2 & 66.6 & 57.4 & 68.0 & 89.0 & 87.1\\
CT \cite{wieczorek2021unreasonable} & 67.8 & 76.8 & 62.9 & 52.4 & 65.3 & 69.6 & 58.7 & 70.5 & 87.9 & \textbf{93.5}\\
MGN \cite{wang2018learning} & 27.5 & 43.4 & 32.9 & 32.1 & 41.2 & 45.4 & 24.8 & 40.1 & 21.0 & 46.8\\
\textbf{CrossMOT} & \textbf{79.0} & \textbf{82.5} & \textbf{72.5} & \textbf{61.7} & \textbf{74.3} & \textbf{79.2} & \textbf{65.1} & \textbf{74.5} & \textbf{92.7}& 86.7 \\
\bottomrule
\end{tabular}
\end{table*}
\begin{table*}[!t]
\small
\centering
\caption{Cross-view tracking results on DIVOTrack and other existing datasets with CVMA (``CA'') and CVIDF1 (``C1''). The best performance is shown in bold.}\label{tab:existdataset}
\begin{tabular}{l|cc|cc|cc|cc|>{\columncolor{Gray}}c>{\columncolor{Gray}}c}
\toprule
~ & \multicolumn{2}{|c|}{EPFL} & \multicolumn{2}{|c|}{CAMPUS} & \multicolumn{2}{|c|}{MvMHAT}& \multicolumn{2}{|c}{WILDTRACK}& \multicolumn{2}{|>{\columncolor{Gray}}c}{\textbf{DIVOTrack}} \\
\midrule
Methods & CA$\uparrow$ & C1$\uparrow$ & CA$\uparrow$ & C1$\uparrow$ & CA$\uparrow$ & C1$\uparrow$ & CA$\uparrow$ & C1$\uparrow$ & CA$\uparrow$ & C1$\uparrow$ \\
\cmidrule(r){1-11}
OSNet \cite{zhou2019osnet} & 73.0 & 40.3 & 58.8 & 47.8 & \textbf{92.6} & \textbf{87.7} & 10.8 & 18.2 & 33.0 & 44.9 \\
Strong \cite{Luo_2019_Strong_TMM} & \textbf{75.6} & 45.2 & 63.4 & 55.0 & 49.0 & 55.1 & 28.6 & 41.6 & 39.1 & 44.7 \\
AGW \cite{pami21reidsurvey} & 73.9 & 43.2 & 60.8 & 52.8 & 92.5 & 86.6 & 15.6 & 23.8 & 54.3 & 55.3 \\
MvMHAT \cite{gan2021mvmhat} & 30.5 & 33.7 & 56.0 & 55.6 & 70.1 & 68.4 & 10.3 & 16.2 & 58.2 & 60.7 \\
CT \cite{wieczorek2021unreasonable} & 75.5 & 45.1 & 63.7 & 55.0 & 46.7 & 53.5 & 19.0 & 42.0 & 62.0 & 63.2 \\
MGN \cite{wang2018learning} & 73.3 & 42.6 & 63.3 & 56.1 &
92.3 & 87.4 & 32.6 & 46.2 & 32.0 & 38.4 \\
\textbf{CrossMOT} & 74.4 & \textbf{47.3} & \textbf{65.6} & \textbf{61.2} & 92.3 & 87.4 & \textbf{42.3} & \textbf{56.7} & \textbf{68.8} & \textbf{69.1} \\
\bottomrule
\end{tabular}
\end{table*}
\subsection{Implementation Details of CrossMOT}
We adopt DLA-34 \cite{duan2019centernet} as our backbone network and initialize it with a model pre-trained on the COCO dataset \cite{lin2014microsoft}. The feature map has a resolution of 272$\times$152, and the input image is resized to four times that size, \textit{i.e.}, 1088$\times$608. The feature dimensions of the cross-view embedding and the single-view embedding are both set to 512. Our model is trained with the Adam optimizer \cite{kingma2014adam} for 30 epochs with an initial learning rate of $10^{-4}$ and a batch size of 8; the learning rate decays to $10^{-5}$ at epoch 20. The loss balance parameters $w_1$ and $w_2$ are initialized to $-1.85$ and $-1.05$, following \cite{zhang2021fairmot}. The detection threshold, single-view distance threshold, and cross-view matching threshold are set to $\delta_d = 0.5$, $\delta_s=0.3$, and $\delta_c=0.5$, respectively. The two parameters used to calculate the adaptive temperature $\tau$ are set to $\epsilon=0.5$ and $\gamma=0.5$, following \cite{hinton2015distilling}. We train and test our model on a single NVIDIA RTX 3090 24GB GPU.
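Since $w_1$ and $w_2$ follow FairMOT's uncertainty-based loss balancing, the sketch below illustrates how such learnable weights typically enter the total loss. How CrossMOT exactly combines the detection loss with the two embedding losses is our assumption here, so the snippet should be read as an illustration rather than the actual implementation.
\begin{verbatim}
import torch
import torch.nn as nn

class UncertaintyWeighting(nn.Module):
    """FairMOT-style uncertainty weighting (illustrative only).

    w1 and w2 are learnable and initialized to -1.85 and -1.05,
    matching the values quoted in the text."""
    def __init__(self, w1=-1.85, w2=-1.05):
        super().__init__()
        self.w1 = nn.Parameter(torch.tensor(w1))
        self.w2 = nn.Parameter(torch.tensor(w2))

    def forward(self, loss_det, loss_emb):
        # loss_emb would bundle the single- and cross-view ID losses
        return 0.5 * (torch.exp(-self.w1) * loss_det
                      + torch.exp(-self.w2) * loss_emb
                      + self.w1 + self.w2)
\end{verbatim}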
\subsection{Single-View Tracking Baselines on DIVOTrack}
We compare five widely used single-view tracking methods: Deepsort \cite{wojke2017simple}, CenterTrack \cite{centertrack}, Tracktor \cite{tracktor_2019_ICCV}, FairMOT \cite{zhang2021fairmot}, and TraDes \cite{Wu2021TraDeS}; they serve as references for comparison.
We use the default configurations for training and testing these trackers. Note that we finetune the detector of Tracktor for 5 epochs and finetune TraDes for 30 epochs starting from its pre-trained model. All models are trained using four NVIDIA RTX 3090 GPUs.
The comparison of the baseline methods is shown in Table~\ref{tab:singelview}. FairMOT is a strong single-view MOT baseline and performs better than the other trackers, achieving HOTA, IDF1, and MOTA of $62.7\%$, $76.1\%$, and $78.8\%$, respectively. These results show that a feature embedding network can significantly improve tracking performance on top of the detections.
We also observe large variations in performance across methods; for example, HOTA ranges from $46.7\%$ to $62.7\%$, demonstrating that the DIVOTrack dataset can discriminate between different trackers.
\subsection{Cross-View Tracking}
\label{sec:crossview}
To evaluate cross-view tracking performance, we first obtain detection results from the trained CenterNet model, then follow different embedding networks \cite{zhou2019osnet,Luo_2019_Strong_TMM,pami21reidsurvey,gan2021mvmhat} to obtain object features, and finally perform cross-view tracking following the association framework of \cite{gan2021mvmhat}. Six feature embedding networks are adopted for the comparison: OSNet \cite{zhou2019osnet}, Strong \cite{Luo_2019_Strong_TMM}, AGW \cite{pami21reidsurvey}, MvMHAT \cite{gan2021mvmhat}, Centroids (CT) \cite{wieczorek2021unreasonable}, and MGN \cite{wang2018learning}; we evaluate them alongside our proposed CrossMOT method.
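For reference, the following minimal sketch shows how the thresholds $\delta_d$, $\delta_s$, and $\delta_c$ from the implementation details can gate the two association stages (detection filtering, then embedding-distance matching). The actual framework of \cite{gan2021mvmhat} involves additional track management, so this outline rests on our own simplifying assumptions (it assumes SciPy is available; all function names are hypothetical).
\begin{verbatim}
import numpy as np
from scipy.optimize import linear_sum_assignment

def cosine_dist(a, b):
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return 1.0 - a @ b.T

def match(cost, thr):
    # Hungarian matching; keep only pairs below the distance threshold
    rows, cols = linear_sum_assignment(cost)
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] <= thr]

def single_view_step(track_emb, det_emb, det_scores,
                     delta_d=0.5, delta_s=0.3):
    det_emb = det_emb[det_scores >= delta_d]   # detection threshold
    return match(cosine_dist(track_emb, det_emb), delta_s)

def cross_view_step(emb_view_a, emb_view_b, delta_c=0.5):
    return match(cosine_dist(emb_view_a, emb_view_b), delta_c)
\end{verbatim}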
\subsubsection{Cross-View Tracking Results on DIVOTrack}
The detailed results of all baseline methods on each scene of DIVOTrack are reported in Table~\ref{tab:crossview}.
Our proposed method CrossMOT outperforms the other cross-view tracking methods in almost all of the ten scenarios, showing its effectiveness.
For all methods, the \textit{Ground} scene yields worse performance than the other scenes, since it contains more cross-view pedestrians, as seen in Fig.~\ref{fig:cv_statistics}; the dense cross-view objects pose additional difficulties for the trackers. In \textit{Gate2}, tracking becomes significantly easier since there are fewer objects in the scene. These detailed results demonstrate the diversity of the scenes in DIVOTrack.
\subsubsection{Cross-View Tracking Results on Other Datasets}
The results of these approaches on DIVOTrack and other aforementioned existing datasets are shown in Table~\ref{tab:existdataset}.
On other datasets such as CAMPUS and WILDTRACK, CrossMOT also outperforms the other methods, which demonstrates the efficacy of our approach. On the MvMHAT dataset, OSNet and AGW achieve relatively good performance because the MvMHAT videos are collected from the same scenario and share identical subjects. On EPFL, CrossMOT achieves the best CVIDF1 and competitive CVMA, showing that it is well suited to complex real-world scenarios.
As shown in Table~\ref{tab:existdataset}, most methods obtain better results on the EPFL, CAMPUS, and MvMHAT datasets because of their limited scenes and limited numbers of subjects. In addition, WILDTRACK has only one scene and is missing many annotations, which leads to worse results; examples of its noisy annotations are shown in Fig.~\ref{fig:quality_wild}. Compared with these datasets, DIVOTrack has more diverse scenes and a larger number of annotated subjects, and existing methods still have room for improvement on it.
\begin{figure}[!t]
\begin{center}
\includegraphics[width=1\linewidth]{figs/wild_noise.pdf}
\end{center}
\caption{Some examples of noisy annotations on WILDTRACK. Yellow circles represent the missing annotations, and red circles represent incorrect boxes. ``T'' is the frame index.}
\label{fig:quality_wild}
\end{figure}
\subsubsection{Qualitative Results of CrossMOT} To better demonstrate the effectiveness of CrossMOT, we present some qualitative examples in Fig.~\ref{fig_ex}. The left and right sub-figures show results on the DIVOTrack and CAMPUS datasets, respectively. Rows represent camera views, and columns represent different methods. Blue and red arrows represent correctly matched pairs and mis-matched pairs, respectively. Compared with the other baseline methods, CrossMOT generates far fewer cross-view matching errors, demonstrating the effectiveness of our method.
\begin{figure*}[htb]
\centering
\includegraphics[width=1\textwidth]{figs/ex.jpg}
\caption{Qualitative examples of cross-view tracking performance for the proposed CrossMOT, AGW, and MvMHAT on DIVOTrack and CAMPUS datasets. Rows and columns represent camera views and different methods, respectively. Blue and red arrows represent correctly matched pairs and mis-matched pairs, respectively.}
\label{fig_ex}
\end{figure*}
\subsection{Ablation Studies of CrossMOT}
\subsubsection{Different Variants of the Model}
We compare three variants of the proposed model on the DIVOTrack, MvMHAT, and CAMPUS datasets in Table~\ref{tab:variant}: \textit{shared emb.}, \textit{w/o conflict-free}, and \textit{full model}. Specifically, \textit{shared emb.} uses a single Re-ID head for both single-view and cross-view embedding, and \textit{w/o conflict-free} uses the original cross-entropy loss without the conflict-free loss for embedding. Across the three datasets, the full model outperforms the other two variants in nearly all cases, which verifies the effectiveness of our decoupled multi-head embedding strategy and the designed conflict-free loss.
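As a rough illustration of the conflict-free idea, namely removing the conflicting identity classes from the softmax normalization so that the single-view and cross-view objectives do not penalize each other, a hypothetical masked cross-entropy could look as follows; the actual mask construction in CrossMOT is defined by the loss introduced earlier in the paper, so this is only a sketch under our own assumptions.
\begin{verbatim}
import torch
import torch.nn.functional as F

def conflict_free_ce(logits, target, conflict_mask):
    """Cross-entropy ignoring conflicting identity classes.

    logits:        (N, num_ids) identity classification scores
    target:        (N,) ground-truth identity labels (never masked)
    conflict_mask: (N, num_ids) boolean; True marks classes that
                   conflict with the sample, e.g. the IDs the same
                   person carries in other camera views."""
    masked = logits.masked_fill(conflict_mask, float('-inf'))
    return F.cross_entropy(masked, target)
\end{verbatim}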
\begin{table}[!t]
\centering
\caption{Comparison between different variants of the proposed model on DIVOTrack, MvMHAT and CAMPUS datasets. \textit{Shared emb.} represents using a single Re-ID head for both single-view and cross-view embedding. \textit{W/O conflict-free} represents using the original cross-entropy loss without conflict-free loss for embedding.}
\label{tab:variant}
\begin{tabular}{L{1.6cm}|L{2.6cm}|C{0.8cm}C{0.8cm}}
\toprule
Dataset & Variant & CVMA & CVIDF1\\
\midrule
\multirow{3}*{DIVOTrack} &
Shared Emb. & 66.9 & 68.9 \\
& W/O Conflict-free & 66.8 & 67.9\\
& \textbf{Full Model} & \textbf{68.8} & \textbf{69.1} \\
\midrule
\multirow{3}*{MvMHAT} &
Shared Emb. & 92.2 & 86.2 \\
& W/O Conflict-free & 91.9 & 87.0\\
& \textbf{Full Model} & \textbf{92.3} & \textbf{87.4}\\
\midrule
\multirow{3}*{CAMPUS} &
Shared Emb. & 64.8 & 61.9 \\
& W/O Conflict-free & 64.4 & 58.0 \\
& \textbf{Full Model} & \textbf{65.6} & \textbf{61.2}\\
\bottomrule
\end{tabular}
\end{table}
\subsubsection{Variants on the Tracking Inference Thresholds}
We vary the tracking thresholds, \textit{i.e.}, $\delta_c$ and $\delta_s$, and observe their influence on the final results. Experiments are conducted on the DIVOTrack dataset: we vary $\delta_s$ from 0.1 to 0.4 for $\delta_c \in \{0.3,0.5\}$. The other four baseline methods are shown for reference in Fig.~\ref{fig_thres}. Although the results fluctuate somewhat for different thresholds, the proposed method consistently outperforms the other methods, demonstrating its robustness.
\subsection{Benefits from DIVOTrack}
Our DIVOTrack benchmark has several benefits compared with existing benchmarks \cite{fleuret2007multicamera,xu2017cross,gan2021mvmhat,chavdarova2018wildtrack}.
\textbf{First}, one publicly accessible detector is used for all baseline methods that follow the tracking-by-detection framework. In contrast, publicly accessible detection results are missing in existing benchmarks \cite{xu2017cross,gan2021mvmhat}, which may cause unfair comparisons between methods evaluated on them.
\textbf{Second}, some existing benchmarks do not provide clear cross-view tracking results, as only single-view tracking metrics are used in their evaluation \cite{fleuret2007multicamera,xu2017cross,chavdarova2018wildtrack}.
\textbf{Third}, the detailed performance on each scene is analyzed, demonstrating the influence of different environments on tracking performance. Other benchmarks do not support this, since most previous datasets do not contain diverse scenes.
\textbf{In addition}, our benchmark releases a unified framework that can combine Re-ID embedding networks \cite{zhou2019osnet,Luo_2019_Strong_TMM,pami21reidsurvey} in cross-view tracking; researchers are free to plug in and evaluate any open-source Re-ID embedding network.
\textbf{Last}, we also provide the source code of all compared baseline methods. We summarize the benefits of our benchmark in Table~\ref{tab:datacom}: accessible public detections (Det), cross-view evaluations (CE), individual scene-based analysis (SA), an accessible cross-view framework (CF), and cross-view baseline methods (CVBM).
\begin{table}[!t]
\small
\centering
\caption{Comparison of benchmark evaluations, including EPFL \cite{fleuret2007multicamera}, CAMPUS (``CAM.'') \cite{xu2017cross}, MvMHAT (``Mv.'') \cite{gan2021mvmhat}, WILDTRACK (``WILD.'') \cite{chavdarova2018wildtrack} and DIVOTrack (``DIVO.'').}
\label{tab:datacom}
\begin{tabular}{c|ccccc}
\toprule
Benchmarks & Det & CE & SA & CF & CVBM \\
\midrule
WILD. & - & - & - & - & -\\
EPFL & - & \checkmark & \checkmark & - & -\\
CAM. & - & - & \checkmark & - & - \\
Mv. & - & \checkmark & - & \checkmark & \checkmark\\
\rowcolor{Gray}
\textbf{DIVO.} (Ours) & \checkmark & \checkmark & \checkmark & \checkmark & \checkmark\\
\bottomrule
\end{tabular}
\end{table}
\begin{figure}[!t]
\centering
\includegraphics[width=1\linewidth]{figs/threshold.jpg}
\caption{The CVIDF1 of the proposed method (solid lines) with varying thresholds on the DIVOTrack dataset. Red and blue curves represent the proposed method with $\delta_c=0.3$ and $\delta_c=0.5$, respectively. The x-axis represents the change of $\delta_s$. The performances of the other four baseline methods (dashed lines) are shown for reference.}
\label{fig_thres}
\end{figure}
\section{Conclusion and Future Work}
\label{sec:conclude}
In this paper, we propose a novel cross-view multi-object tracking dataset, namely \textit{DIVOTrack}, which is more realistic, contains more tracks and more diverse environments, and incorporates moving cameras. Accordingly, a standardized benchmark is built for cross-view tracking, with a clear split of training and testing sets, publicly accessible detections, and standard cross-view tracking evaluation metrics. We also propose a novel end-to-end cross-view tracking baseline, CrossMOT, which integrates object detection, single-view tracking, and cross-view tracking in a unified embedding model. CrossMOT adopts decoupled multi-head embedding that simultaneously learns object detection, single-view Re-ID, and cross-view Re-ID. Moreover, we design a locality-aware and conflict-free loss function for single-view embedding to address the ID conflict between cross-view and single-view embeddings.
With the proposed dataset, benchmark, and baseline, cross-view tracking methods can be fairly compared in the future, which will advance the development of cross-view tracking techniques.
In future work, we will continuously collect more videos under different weather conditions to enlarge the dataset, since weather conditions are not analyzed in the current work. For cross-view tracking methods, there are still unresolved issues: we will try to design an end-to-end joint detection and tracking framework that can handle multiple views with varying spatial-temporal relations, and we will explore how to better utilize cross-frame and cross-view geometric consistency.
\section*{Acknowledgments}
The authors would like to thank Tianqi Liu, Zining Ge, Kuangji Chen, Xubin Qiu, Shitian Yang, Jiahao Wei, Yuhao Ge, Hao Chen, Bingqi Yang, Kaixun Jin, Zeduo Yu and Donglin Gu for their work on the dataset collection and annotation.
This work is supported by National Natural Science Foundation of China (62106219).
\section{Introduction}
The study of phases of strongly interacting matter under extreme conditions \cite{Akiba:2015jwa, Fukushima:2010bq} is attracting broad interest in recent years, extending far beyond nuclear matter.
Fruitful interdisciplinary collaborations, among others, with the field of ultracold atomic gases (for a review see e.g. \cite{Adams:2012th}) have broadened the scope of how to explore the origins of the universe.
One example of extreme conditions is very high temperatures, which are particularly interesting, since they bear direct relevance to the birth and early history of our universe.
The properties of the hottest matter ever created on the Earth are being investigated at current collider facilities, such as the Relativistic Heavy-Ion Collider (RHIC) and the Large Hadron Collider (LHC) and upcoming facilities, such as NICA and FAIR. There, two heavy nuclei are collided at ultra-relativistic energies in order to create a novel deconfined state of matter known as the quark-gluon plasma (QGP) \cite{Jacak:2012dx}.
The transition from hadronic matter to the QGP is characterized by the liberation of colored degrees of freedom (quarks and gluons) otherwise confined inside hadrons.
This qualitative picture is confirmed by lattice QCD calculations of, for example, the QCD entropy, which show a significant rise around the pseudocritical temperature \cite{Borsanyi:2013bia, Bazavov:2014pvz,Borsanyi:2016ksw,Bazavov:2017dsy}.
In analogy with the Debye screening phenomenon in electromagnetic plasmas, the liberated colored particles may rearrange themselves around a test color charge such that the test charge is screened with a finite screening length \cite{McLerran:1981pb}.
Compared to the confining string-like force between color sources in the vacuum, the in-medium force in a high temperature QGP is hence drastically modified and becomes short ranged \cite{Kaczmarek:2005ui,Maezawa:2007fc,Bazavov:2016uvm}.
The in-medium modification of the QCD force has been expected to have dramatic consequences for the behavior of bound states of heavy quarks and antiquarks, so-called heavy quarkonium \cite{Brambilla:2010cs,Andronic:2015wma}.
In turn, the dynamics of quarkonium states observed in heavy-ion collisions promises a direct window onto the phase structure of the bulk matter in which they are immersed.
One classic prediction in this context is the enhanced probability of dissociation of charm quark pairs in nuclear collisions, in case that a QGP is created \cite{Matsui:1986dk}.
The qualitative behavior of $J/\psi$ and $\Upsilon$ yields observed at RHIC and $\Upsilon$ yields at LHC support this idea \cite{Adamczyk:2013poh, Adare:2014hje, Adare:2006ns, Adare:2008sh, Adare:2011yf, Adamczyk:2013tvk, Chatrchyan:2011pe, Chatrchyan:2012lxa, Khachatryan:2016xxp, Abelev:2014nua}.
However, the enhanced production rate of charm quark pairs at LHC complicates the interpretation of $J/\psi$ yields there \cite{Abelev:2013ila, Adam:2016rdg}.
The reason is that now another process to create $J/\psi$ needs to be taken into account, i.e. the non-negligible probability that initially uncorrelated charm quarks end up forming bound states at the phase boundary \cite{BraunMunzinger:2000px}, i.e. in the late stages of the collision at freezeout.
The successful prediction of $J/\psi$ yields at the LHC by means of the statistical model of hadronization \cite{Andronic:2010dt} is seen as support for the existence of such a production mechanism.
At present it is still not clear at which collision energy the two effects, i.e. the suppressed and enhanced yields of quarkonium, become comparable.
In order to interpret the collected experimental results more clearly, we thus need to better understand the dynamics of quarkonia in the QGP.
That is, the development of a unified quantum mechanical description of the real-time equilibration of quarkonia is called for.
Recently, the dynamics of heavy quark pairs in the QGP has been actively studied by various methods \cite{Aarts:2016hap}.
Among them are kinetic descriptions \cite{Rapp:2008tf, Zhao:2010nk, Zhao:2011cv, Zhou:2014kka, Song:2011nu, Emerick:2011xu, Zhou:2014hwa, Yao:2017fuc} or dynamical models involving a complex potential \cite{Strickland:2011mw, Strickland:2011aa, Krouppa:2015yoa, Krouppa:2016jcl, Krouppa:2017jlg} for quarkonium.
One more recent and promising approach is the open quantum system
formulation for heavy quark pairs \cite{Young:2010jq,
Borghini:2011ms, Akamatsu:2011se, Rothkopf:2013kya, Akamatsu:2014qsa,
Akamatsu:2015kaa, Kajimoto:2017rel, Blaizot:2015hya, Blaizot:2017ypk,
Blaizot:2018oev, Brambilla:2016wgg, Brambilla:2017zei, DeBoni:2017ocl,
Katz:2015qja}, which was developed concurrently with computations of the in-medium complex potential \cite{Laine:2006ns, Beraudo:2007ky, Brambilla:2008cx, Rothkopf:2011db, Burnier:2014ssa, Burnier:2015tda, Burnier:2016mxc}.
In the open quantum system formulation \cite{BRE02}, we distinguish the
subsystem of interest (quarkonium) from the environment (QGP), which is made possible by a hierarchy of time scales in each sector.
The dynamics of the environment is fast enough, so that we can trace it out and replace its coupling to the subsystem by the average response to a slowly changing external source.
In this way a master equation for the reduced density matrix of a heavy quark pair can be obtained, which can be expressed as arising from the contribution of three kinds of forces:
the screened potential, thermal fluctuations, and dissipation \cite{Akamatsu:2014qsa}.
The first two forces combine into a fluctuating potential force
(stochastic potential \cite{Akamatsu:2011se, Rothkopf:2013kya,
Kajimoto:2017rel}), while the last one is an irreversible force.
So far there exist only a few numerical analyses of the quantum dissipation of quarkonium in the QGP \cite{DeBoni:2017ocl, Brambilla:2017zei, Katz:2015qja}.
However, quantum dissipation is an essential ingredient to understand the long-time behavior of a heavy quark pair in the QGP.
It is particularly important to know how and when quantum dissipation influences the time evolution of the heavy quark pair because the lifetime of the QGP in heavy-ion collisions is not long enough for the heavy quark pair to get fully equilibrated.
In this paper we consider as a first step the physics of a single heavy quark immersed in a hot medium.
We numerically solve the corresponding master equation and study its equilibrium solution as well as the effects of dissipation.
The central conceptual result of this study is a stochastic unravelling prescription for the master equation, in which the wave function is evolved in terms of a stochastic Schr\"odinger equation and the mixed states of the density matrix emerge from an ensemble average.
Our results are obtained following the ``Quantum State Diffusion" approach developed by Gisin and Percival \cite{gisin1992quantum, percival1998quantum} and by subsequently solving the corresponding nonlinear stochastic Schr\"odinger equation.
Note that by applying the Quantum State Diffusion approach to a heavy quark master equation derived in the Lindblad form \cite{Akamatsu:2014qsa, lindblad1976generators} (see Section \ref{sec:QSD} for the definition) we obtain for the first time in this context a non-linear Schr\"odinger equation with a clear connection to the underlying microscopic theory.
For simplicity, we consider a heavy quark in the QGP in one spatial dimension.
It is then shown that the master equation possesses a steady state solution consistent with the Boltzmann distribution $\rho_{\rm eq}\propto e^{-\beta H}$.
Furthermore, we observe that the approach to equilibrium depends on initial conditions and cannot be captured by a single decay rate.
Finally, we analyze the effect of quantum dissipation by comparing with simulations in which the dissipative terms are dropped.
Their effect, as expected, is essential at later times but interestingly already plays an important role at rather early times if the initial wave function is well localized and decoherence by thermal fluctuations is ineffective.
This paper is organized as follows. In Sec.~\ref{sec:QSD}, we introduce the Quantum State Diffusion approach applicable to a general Lindblad master equation.
We then apply this approach to the Lindblad equation for a single heavy quark in the quark-gluon plasma at high temperature and derive the corresponding nonlinear stochastic Schr\"odinger equation.
In Sec.~\ref{sec:results}, we solve this nonlinear stochastic Schr\"odinger equation numerically and analyze the relaxation process of the heavy quark.
In addition we study the effect of quantum dissipation on the time evolution of a heavy quark, before we summarize our work in Sec.~\ref{sec:conclusion}.
\section{Quantum state diffusion for open quantum systems}
\label{sec:QSD}
\subsection{Lindblad equation and quantum state diffusion}
There is a particularly useful class of master equations for open quantum systems, which is Markovian and fulfills basic physical requirements: the reduced density matrix $\rho$ is hermitian ($\rho = \rho^{\dagger}$), correctly normalized (${\rm Tr}\rho = 1$), and positive ($\langle\alpha|\rho|\alpha\rangle\geq 0$ for any state $|\alpha\rangle$) during its time evolution.
Such master equations may be written in general as
\begin{align}
\label{eq:Lindblad}
\frac{d}{dt}\rho(t) = -i\left[H,\rho\right]
+\sum_n\left(
2L_n\rho L_n^{\dagger} - L_n^{\dagger}L_n\rho - \rho L_n^{\dagger} L_n
\right),
\end{align}
in the so called Lindblad form \cite{lindblad1976generators}.
Here the evolution of the density matrix operator is described in terms of the full dimension of the system Hilbert space.
This makes a direct numerical simulation computationally highly demanding, especially in realistic 3+1 dimensions.
There are, however, several ways to solve the Lindblad equation by what is known as stochastic unravelling.
That is, one instead carries out a stochastic evolution of system wave functions, whose ensemble average then correctly reproduces the density matrix.
One such stochastic unravelling corresponds to the quantum state diffusion (QSD) approach \cite{gisin1992quantum}.
Given the Lindblad equation \eqref{eq:Lindblad}, the corresponding QSD equation is a stochastic nonlinear Schr\"odinger equation:
\begin{align}
\label{eq:qsd}
&|d\psi\rangle = |\psi(t+dt)\rangle - |\psi(t)\rangle \nonumber \\
&=-i H|\psi(t)\rangle dt
+\sum_n\left(
\begin{aligned}
&2\langle L_n^{\dagger}\rangle_{\psi} L_n - L_n^{\dagger}L_n \\
&- \langle L_n^{\dagger}\rangle_{\psi}\langle L_n\rangle_{\psi}
\end{aligned}
\right) |\psi(t)\rangle dt \nonumber \\
& \quad +\sum_n \left(L_n - \langle L_n\rangle_{\psi}\right)|\psi(t)\rangle d\xi_n,
\end{align}
with complex white noises $d\xi_n$ whose mean and variance are given by
\begin{subequations}
\begin{align}
{\rm M}\left( d\xi_n\right)&={\rm M}\left( \Re (d\xi_n)\Im (d\xi_m)\right) = 0, \\
{\rm M}\left(\Re (d\xi_n)\Re (d\xi_m)\right)
&={\rm M}\left( \Im (d\xi_n)\Im (d\xi_m)\right) = \delta_{nm} dt.
\end{align}
\end{subequations}
Here $\langle O \rangle_{\psi}\equiv \langle\psi| O |\psi\rangle$ denotes the quantum expectation value of an operator with respect to a state $\psi$ and ${\rm M}(O)$ denotes the statistical average of $O$.
This stochastic evolution equation is solved in the It$\hat{\rm o}$ discretization scheme.
By using $\psi(t)$ everywhere in Eq.~\eqref{eq:qsd} we obtain the wave function at the next discrete time step $\psi(t+dt)$.
In the limit $dt\to 0$, the QSD equation \eqref{eq:qsd} preserves the norm of $\psi$ in each stochastic update.
The initial wave function is distributed according to the initial mixed (or pure) state of the density matrix.
The density matrix is subsequently constructed from an ensemble average of wave functions $\psi(t)$,
\begin{align}
\rho(t) = {\rm M}\left(|\psi(t)\rangle\langle\psi(t)|\right),
\end{align}
and obeys the Lindblad equation \eqref{eq:Lindblad} in the $dt\to 0$ limit.
The QSD equation can also be formulated for unnormalized wave functions $\phi$
\begin{align}
\label{eq:qsd2}
& |d\phi\rangle = |\phi(t+dt)\rangle - |\phi(t)\rangle \nonumber\\
&= -i H|\phi(t)\rangle dt
+\sum_n\left(
2\langle L_n^{\dagger}\rangle_{\phi} L_n
- L_n^{\dagger}L_n\right) |\phi(t)\rangle dt \nonumber\\
& \quad +\sum_n L_n |\phi(t)\rangle d\xi_n,
\end{align}
with the same complex noise.
Here $\langle O \rangle_{\phi} \equiv \langle\phi| O |\phi\rangle/\langle\phi|\phi\rangle$ denotes the quantum expectation value. The density matrix constructed by
\begin{align}
\rho(t)= {\rm M}\left(\frac{|\phi(t)\rangle\langle\phi(t)|}{\langle\phi(t)|\phi(t)\rangle}\right)
\end{align}
is again a solution of the master equation \eqref{eq:Lindblad}.
In our numerical simulations, we implement Eq.~\eqref{eq:qsd2}.
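As a concrete illustration of the scheme, the following self-contained sketch unravels a toy Lindblad equation, a decaying two-level system with $H=\frac{\omega}{2}\sigma_z$ and a single operator $L=\sqrt{\gamma}\,\sigma_-$, for which the convention of Eq.~\eqref{eq:Lindblad} gives an excited-state decay rate of $2\gamma$, using the unnormalized update of Eq.~\eqref{eq:qsd2}. The model and all parameters are purely illustrative and are not those of the heavy quark simulation below.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
omega, gam = 1.0, 0.1                # level splitting, coupling
dt, nstep, ntraj = 2e-3, 2000, 300

sm  = np.array([[0., 0.], [1., 0.]])       # sigma_- in basis (e, g)
H   = 0.5 * omega * np.diag([1., -1.])
L   = np.sqrt(gam) * sm
LdL = L.conj().T @ L

rho_ee = np.zeros(nstep)
for _ in range(ntraj):
    phi = np.array([1., 0.], dtype=complex)  # start in |e>
    for it in range(nstep):
        norm2 = np.vdot(phi, phi).real
        rho_ee[it] += abs(phi[0])**2 / norm2 / ntraj
        expLd = np.vdot(phi, L.conj().T @ phi) / norm2  # <L^dag>_phi
        dxi = np.sqrt(dt) * (rng.standard_normal()
                             + 1j * rng.standard_normal())
        phi = phi + (-1j * (H @ phi) * dt
                     + (2.0 * expLd * (L @ phi) - LdL @ phi) * dt
                     + (L @ phi) * dxi)      # Ito forward step

t = dt * np.arange(nstep)
# ensemble average should follow exp(-2*gam*t) up to sampling noise
print(np.abs(rho_ee - np.exp(-2.0 * gam * t)).max())
\end{verbatim}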
\subsection{Quantum state diffusion for a heavy quark in the quark-gluon plasma}
Let us now consider the theory of open quantum systems for a single heavy quark in the QGP and stochastically unravel its Lindblad equation via the QSD approach.
The Lindblad equation for heavy quarks has been derived in
\cite{Akamatsu:2014qsa} by treating the scattering between heavy quarks
and medium particles perturbatively, i.e. by assuming that the QCD coupling constant $g$ is small.
It is further assumed that the heavy quark mass $M$ is much larger than the temperature, $T/M\ll 1$, so that there exists a time scale hierarchy between heavy quarks and medium particles.
The Lindblad master equation for a single heavy quark is given by the following operators \cite{Akamatsu:2014qsa}
\begin{subequations}
\label{eq:QCDLindblad}
\begin{align}
H &= -\frac{\nabla^2}{2M} + V_{\rm ext}(x), \\
L_{k} &= \sqrt{\frac{\tilde D(k)}{2V}} e^{i\bm k \cdot \bm x/2}\left(1+\frac{i\bm k \cdot\bm\nabla}{4MT}\right)e^{i\bm k \cdot \bm x/2},\\
\tilde D(k) &= g^2 T\frac{\pi m_D^2}{k(k^2 + m_D^2)^2}, \quad
m_D = gT\sqrt{\frac{N_c}{3} + \frac{N_f}{6}},
\end{align}
\end{subequations}
with $N_c$ and $N_f$ being the numbers of colors and quark flavors, respectively.
The Lindblad operator $L_k$ describes the scattering process between a heavy quark and medium particles with momentum transfer $\bm k$, taking place with rate $\tilde D(k)$.
The term $\propto e^{i\bm k \cdot\bm x}$ in $L_k$ describes thermal fluctuations, while the term $\propto e^{i\bm k \cdot\bm x/2}\frac{i\bm k\cdot\bm\nabla}{4MT}e^{i\bm k \cdot\bm x/2}$ describes dissipation and originates in the recoil of the heavy quark during the collision. For simplicity, we ignore the effects of internal color degrees of freedom
\footnote{
Also, we assume that it is admissible to set the second order coefficient in the derivative expansion of the Feynman-Vernon influence functional $\tilde A(k) = \tilde D(k)/8T^2$. This reduces the number of Lindblad operators that need to be considered.
}.
We can calculate the parts of the QSD equation as follows.
The nonlinear term is given by
\begin{align}
\label{eq:qsdhq1}
&2\sum_{k}\langle L_{k}^{\dagger}\rangle_{\phi}L_{k}\phi(x)
= \frac{1}{\int d^3 y |\phi(y)|^2} \\
&\times \int d^3 y
\left[n_{\phi}(y) f(x-y) +\frac{i}{4T}\bm j_{\phi}(y) \cdot \bm g(x-y) \right]\phi(x), \nonumber
\end{align}
where $n_{\phi}$ and $\bm j_{\phi}$ denote the probability density and current:
\begin{subequations}
\label{eq:nandj}
\begin{align}
n_{\phi}(x) &\equiv \phi^*(x)\phi(x), \\
\bm j_{\phi}(x) &\equiv \frac{1}{2iM} \left[\phi^*(x)\bm\nabla\phi(x) - \left(\bm\nabla\phi^*(x)\right) \phi(x)\right],
\end{align}
\end{subequations}
and $f$ and $g_i$ are operators defined as
\begin{subequations}
\label{eq:fandg}
\begin{align}
f(x-y) &\equiv \left(1+ \frac{\nabla^2}{8MT}\right)D(x-y) + \frac{\bm \nabla D(x-y)}{4MT}\cdot\bm\nabla_x,\\
g_i(x-y) &\equiv \left(1+ \frac{\nabla^2}{8MT}\right)\nabla_i D(x-y) \nonumber \\
& \quad + \frac{\bm \nabla \nabla_i D(x-y)}{4MT}\cdot \bm\nabla_x.
\end{align}
\end{subequations}
The function $D(x)$ in the equations above is the inverse Fourier transform of $\tilde D(k)$.
The linear deterministic term reads
\begin{align}
\label{eq:qsdhq2}
\sum_k L_k^{\dagger} L_k\phi(x)
=&\frac{1}{2}
\left(D(0)+\frac{\nabla^2 D(0)}{4MT} + \frac{\nabla^4 D(0)}{64M^2T^2}\right) \phi(x) \nonumber \\
&+ \frac{\nabla_i\nabla_j D(0)}{32M^2T^2}\nabla_i\nabla_j\phi(x),
\end{align}
and the linear stochastic term amounts to
\begin{align}
\label{eq:qsdhq3}
&\sum_k L_k\phi(x) d\xi_k \nonumber \\
& \quad =\left[d\zeta(x)+\frac{\nabla^2d\zeta(x)}{8MT}+ \bm\nabla d\zeta(x)\cdot \frac{\bm \nabla}{4MT}
\right]\phi(x),
\end{align}
where the definition and the correlation of the complex noise field $d\zeta(x)$ is given by
\begin{subequations}
\begin{align}
&d\zeta(x)\equiv \sqrt{\frac{V}{2}} \int \frac{d^3 k}{(2\pi)^3} \sqrt{\tilde D(k)} e^{i\bm k\cdot \bm x}d\xi_k, \\
&{\rm M}\left(d\zeta(x)d\zeta^*(y)\right) = D(x-y)dt, \\
&{\rm M}\left(d\zeta(x)d\zeta(y)\right) = {\rm M}\left(d\zeta^*(x)d\zeta^*(y)\right) = 0.
\end{align}
\end{subequations}
Using Eqs.~\eqref{eq:qsdhq1}, \eqref{eq:qsdhq2}, and \eqref{eq:qsdhq3}, we may now perform the QSD simulation for a single heavy quark.
\section{Results of numerical simulation}
\label{sec:results}
In this section we explicitly check the application of the QSD approach and investigate the properties of the Lindblad equation in three simple settings.
We consider a single heavy quark in the QGP in one spatial dimension either with or without external potentials.
In particular, we study the equilibration of the heavy quark in each setting and discuss the importance of quantum dissipation.
As external potentials we deploy either the harmonic potential or the regularized Coulomb potential:
\begin{align}
\label{eq:potential}
V_{\rm ext}(x) = \frac{1}{2}M\omega^2x^2, \quad -\frac{\alpha}{\sqrt{x^2 + r_c^2}},
\end{align}
where $r_c=1/M$. The noise correlation function $D(x)$ is set to have correlation length $\sim 1/m_D$ and is approximated with a Gaussian dependence on distance
\begin{align}
\label{eq:dgaussian}
D(x) = \gamma \exp\left[-x^2/l_{\rm corr}^2\right].
\end{align}
The parameters of our numerical setup are summarized in Table \ref{tbl:setup}.
The Hamiltonian time evolution is solved with the fourth-order Runge-Kutta method, while the remaining terms are implemented via an explicit forward step according to the QSD equation \eqref{eq:qsd2} with $dt=\Delta t$.
We check that the discretization effects are negligible for both $\Delta t$ and $\Delta x$ by comparing with results obtained with smaller $\Delta t$ or $\Delta x$.
In the simulation periodic boundary conditions are employed so that the noise correlation is replaced with
\begin{subequations}
\begin{align}
&{\rm M} \left(d\zeta(x)d\zeta^*(y)\right) = D(r_{xy})\Delta t , \\
& r_{xy} = \min \{ |x-y|, N_x\Delta x - |x-y|\},
\end{align}
\end{subequations}
and the function $D(x-y)$ in the QSD equation \eqref{eq:fandg} is also replaced by $D(r_{xy})$.
We confirm, by varying $N_x$, that the volume $N_x\Delta x/l_{\rm corr} \simeq 13$ is large enough that finite volume effects can be neglected.
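The spatially correlated noise can be generated efficiently in Fourier space. A minimal one-dimensional sketch with the parameters of Table~\ref{tbl:setup} and the Gaussian correlation of Eq.~\eqref{eq:dgaussian} (in units $M=1$; the discretization conventions are our own and serve only to make the construction explicit) reads:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
N, dx, dt = 128, 1.0, 0.1          # lattice size, spacing, time step
gam, lc = 0.1 / np.pi, 10.0        # D(x): gamma = T/pi, l_corr = 1/T

x = dx * np.arange(N)
r = np.minimum(x, N * dx - x)      # periodic distance r_xy
D = gam * np.exp(-(r / lc) ** 2)   # Gaussian correlation function
Dk = np.fft.fft(D).real.clip(min=0.0)

def dzeta():
    # one realization with M(dz(x) dz*(y)) = D(x-y) dt, M(dz dz) = 0
    xi = np.sqrt(dt) * (rng.standard_normal(N)
                        + 1j * rng.standard_normal(N))
    return N * np.fft.ifft(np.sqrt(Dk / (2.0 * N)) * xi)

# Monte-Carlo check of the correlator against D(x-y) dt
acc, nsamp = np.zeros(N, dtype=complex), 20000
for _ in range(nsamp):
    z = dzeta()
    acc += z * np.conj(z[0]) / nsamp
print(np.abs(acc.real - D * dt).max())  # small up to sampling noise
\end{verbatim}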
\begin{table}[b]
\caption{Numerical setup and parameters of the potentials.
$N_x=128 (127)$ is used for the harmonic (regularized Coulomb) potential.}
\label{tbl:setup}
\begin{center}
\begin{tabular}{ccc}
\hline
\hline
$\Delta x$ & $\Delta t$ & $N_x$ \\ \hline
\ $1/M$ \ & \ $0.1M(\Delta x)^2$ \ & \ 128, 127 \ \\ \hline
\end{tabular}
\end{center}
\begin{center}
\begin{tabular}{ccc|ccc}
\hline
\hline
$T$ & $\gamma$ & $l_{\rm corr}$ & $\omega$ & $\alpha$ & $r_c$ \\ \hline
\ $0.1M$ \ & \ $T/\pi$ \ & \ $1/T$ \ & \ $0.04M$ \ & \ 0.3 \ & \ $1/M$ \ \\ \hline
\end{tabular}
\end{center}
\end{table}
\subsection{Equilibration of a heavy quark}
The Lindblad equation itself does not guarantee that the Boltzmann distribution $\rho\propto \exp(-H/T)$ is a static solution (see Appendix \ref{app:steadystate}).
In the derivation of the Lindblad equation, the fluctuation-dissipation theorem for the environment, i.e. the QGP sector, constrains the terms implementing the fluctuation and dissipation of the heavy quarks.
To be specific, the coefficient $i/4MT$ in $L_k$ is determined from the fluctuation-dissipation theorem for a thermal QGP medium.
Therefore, we expect that the equilibrium density matrix is close to the Boltzmann distribution $\rho\propto \exp(-H/T)$.
Here we analyze how well the equilibration of the density matrix is achieved and study its equilibrium properties.
\subsubsection{In the absence of an external potential}
\begin{figure}
\includegraphics[clip, angle=-90, width=0.45\textwidth]{wavefunctions.eps}
\caption{
Profiles of a normalized wavefunction at different times in one sample event.
}
\label{fig:wavefunctions}
\end{figure}
\begin{figure}
\includegraphics[clip, angle=-90, width=0.5\textwidth]{momentum_m1w0E0_128dx1_gs2t0.1w0.eps}
\caption{
Time evolution of the momentum distribution of a heavy quark.
The bars denote statistical errors.
The dashed line corresponds to the Boltzmann distribution with $T=0.1M$.
}
\label{fig:free_pdist}
\end{figure}
The initial heavy quark wave function is taken to be uniform (plane wave with zero momentum).
In Fig.~\ref{fig:wavefunctions}, we show the profiles of a normalized wavefunction at different times in one sample event.
The wave functions typically encountered here are localized, soliton-like states, which arise from the nonlinearity of the evolution equation.
Figure \ref{fig:free_pdist} on the other hand contains the time evolution of the momentum distribution of the heavy quark:
\begin{subequations}
\begin{align}
N(p,t) &\equiv {\rm M}\left(\frac{|\tilde\phi(p,t)|^2}{\langle\phi(t)|\phi(t)\rangle}\right),\\
\tilde\phi(p,t)&=\int dx e^{-ipx}\phi(x,t),
\end{align}
\end{subequations}
where $p$ takes on values available on a periodic lattice of size $N_x\times \Delta x$.
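In practice, $N(p,t)$ is obtained from the ensemble of trajectories by a discrete Fourier transform; a minimal sketch in our own conventions is:
\begin{verbatim}
import numpy as np

def momentum_distribution(phis, dx):
    # phis: complex array (ntraj, Nx) of unnormalized wave functions
    ntraj, Nx = phis.shape
    p = 2.0 * np.pi * np.fft.fftfreq(Nx, d=dx)
    tphi = dx * np.fft.fft(phis, axis=1)   # ~ int dx e^{-ipx} phi(x)
    norm = dx * np.sum(np.abs(phis) ** 2, axis=1, keepdims=True)
    Np = (np.abs(tphi) ** 2 / norm).mean(axis=0)
    return np.fft.fftshift(p), np.fft.fftshift(Np)

# example: phis = np.ones((10, 128), complex) peaks N(p) at p = 0
\end{verbatim}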
The corresponding classical dynamics of the heavy quark is a Brownian motion with a drag force
\begin{align}
\frac{dp}{dt} = -\frac{\gamma}{MTl_{\rm corr}^2} p.
\end{align}
Its typical relaxation time is $\tau_{\rm relax} = \frac{MTl_{\rm corr}^2}{\gamma}$, which for our parameters ($T=0.1M$, $l_{\rm corr}=1/T=10/M$, $\gamma=T/\pi$) evaluates to $100\pi/M$.
We can see that the momentum distribution approaches the Boltzmann distribution with temperature $T=0.1M$ over a time scale $\sim \tau_{\rm relax}$.
Note also that at late times ($t=620/M, 1240/M$) slight deviations from the Boltzmann distribution are observed for $p\gtrsim 1.5M$.
The reason lies in the poor convergence of the gradient expansion, applied in evaluating the Lindblad operators, at high momenta ($p\gtrsim M$), when one takes into account the effects of dissipation.
On the other hand, we should not rely on the nonrelativistic description for a heavy quark with such a high momentum, so this limitation is not so restrictive in practice.
\subsubsection{In the presence of external potentials}
\begin{figure}
\centering
\includegraphics[clip, angle=-90, width=0.65\textwidth]{m1w0.04_128dx1_gs2t0.1w0.04.eps}
\caption{
Time evolution of the occupation number of the eigenstates in the harmonic potential with $\omega=0.04M$.
The bars denote statistical errors.
}
\label{fig:harmonic_evolution}
\end{figure}
\begin{figure}
\centering
\includegraphics[clip, angle=-90, width=0.65\textwidth]{m1a0.3md0rc1_127dx1_gs2t0.1.eps}
\caption{
Time evolution of the occupation number of the eigenstates in the regularized Coulomb potential with $\alpha=0.3$ and $r_c=1/M$.
The bars denote statistical errors.
}
\label{fig:coulomb_evolution}
\end{figure}
Let us turn to the case with external potentials present next.
The potential here is added by hand out of pure theoretical interest and should not be confused with the potential for a quarkonium state.
One should rather think of a single heavy quark in a fictitious trap.
The initial heavy quark wave function is taken to be the ground state, the first, or the second excited state of the corresponding Hamiltonian.
The time evolution of the occupation number of these levels,
\begin{align}
N_i(t)\equiv {\rm M}\left(\frac{|\langle \psi_i|\phi(t)\rangle|^2}{\langle\phi(t)|\phi(t)\rangle}\right), \quad
H|\psi_i\rangle = E_i|\psi_i\rangle,
\end{align}
is shown in Fig.~\ref{fig:harmonic_evolution} for the harmonic potential and in Fig.~\ref{fig:coulomb_evolution} for the regularized Coulomb potential.
Independent of the initial conditions, the occupation numbers converge to their equilibrium values.
In contrast, the relaxation time depends on the initial condition.
One might expect a naive relaxation process
\begin{align}
N_i(t) = (N_i^{\rm ini}-N_i^{\rm eq})\exp(-\Gamma_i t) + N_i^{\rm eq},
\end{align}
to describe the dynamics, as motivated and applied in the rate equation approach to heavy quarks.
However, from Fig.~\ref{fig:harmonic_evolution} and Fig.~\ref{fig:coulomb_evolution} it is obvious that a single decay rate cannot capture the relaxation of the eigenstate occupation.
Note that there are even cases where the occupation number approaches equilibrium non-monotonically.
In order to investigate the properties of the equilibrium density matrix, we show in Fig.~\ref{fig:potential_equilibrium} the equilibrium distribution of the lowest ten levels as a function of the eigenenergy for the harmonic and the regularized Coulomb potentials.
In the figures, we plot the results for several different potential parameters: $\omega/M = 0.01, 0.04, 0.09$ for the harmonic potential and $\alpha = 0.2, 0.3, 0.4$ and $r_c=1/M$ for the regularized Coulomb potential.
The initial condition is chosen to be the ground state of each Hamiltonian.
The equilibrium distribution is calculated at late enough time for each
setup: $Mt=1550, 3100, 4650$ for $\omega/M=0.01, 0.04, 0.09$ and
$Mt=4650, 7750, 9300$ for $\alpha=0.2, 0.3, 0.4$, respectively.
In all cases, the distribution is close to the Boltzmann distribution with $T=0.1M$.
We also calculate the real and imaginary parts of the off-diagonal elements of the density matrix in equilibrium and check that they are consistent with zero within the statistical uncertainty.
\begin{figure}
\centering
\includegraphics[angle=-90, width=0.7\textwidth]{m1all_127-8dx1_gs2t0.1all_eq.eps}
\caption{
Equilibrium occupation of the lowest ten eigenstates for the harmonic potential (upper panel) and the regularized Coulomb potential (lower panel).
The parameters are varied as $\omega/M=0.01, 0.04, 0.09$ for the harmonic potential and $\alpha=0.2, 0.3, 0.4$ and $r_c=1/M$ for the regularized Coulomb potential.
The bars denote statistical errors.
The data are fitted by $C\cdot\exp(-E_i/T_{\rm fit})$ and the dashed lines indicate the slope of a Boltzmann distribution with $T=0.1M$.
}
\label{fig:potential_equilibrium}
\end{figure}
\subsection{The effect of dissipation}
\begin{figure}
\centering
\includegraphics[clip, angle=-90, width=0.5\textwidth]{momentum_m1w0E0_128dx1_gs2t0.1w0_nodiss.eps}
\caption{
Time evolution of the momentum distribution of a heavy quark without dissipation.
The bars denote statistical errors and the dashed line corresponds to a Boltzmann distribution with $T=0.1M$.
}
\label{fig:free_pdist_nodiss}
\end{figure}
As a last consideration let us now turn off the quantum dissipation.
Since quantum dissipation is described by the term $\propto e^{i\bm k \cdot\bm x/2}\frac{i\bm k\cdot\bm\nabla}{4MT}e^{i\bm k \cdot\bm x/2}$, we can switch it off by taking the $M\to \infty$ limit in the QSD equation everywhere, except in the Hamiltonian part.
In Fig.~\ref{fig:free_pdist_nodiss}, we show the corresponding time evolution of the momentum distribution as given by the QSD equation without dissipation.
Clearly, the distribution does not approach the equilibrium Boltzmann distribution.
Instead, the energy gained by the heavy quark from thermal fluctuations is not dissipated back to the medium, and the heavy quark overheats.
\begin{figure}
\centering
\includegraphics[angle=-90, width=0.65\textwidth]{m1all_127-8dx1_gs2t0.1all_nodiss.eps}
\caption{
Time evolution of the occupation number of the eigenstates without dissipation.
For comparison, the time evolution with dissipation is also plotted.
The bars denote statistical errors.
}
\label{fig:potential_evolution_nodiss}
\end{figure}
In Fig.~\ref{fig:potential_evolution_nodiss}, we show the time evolution of the eigenstate occupation as given by the QSD equation without dissipation.
The lowest three levels become equally occupied regardless of the energy gaps
\footnote{
For the free case, the second and the fourth excited states are used because of the degeneracy of positive and negative momentum states.
}.
It is expected that not only these but all levels would eventually be occupied equally if quantum dissipation were neglected.
We also observe that the effect of quantum dissipation at early times strongly depends on the external potential.
This dependence can be understood by analyzing the Lindblad equation as follows.
The initial decay rate of an eigenstate $\psi_i$ of the Hamiltonian is given by
\begin{align}
\Gamma_i &= -2\sum_{n}\left(
\langle L_n\rangle_{\psi_i}\langle L_n^{\dagger}\rangle_{\psi_i}
-\langle L_n^{\dagger}L_n\rangle_{\psi_i}
\right).
\end{align}
Using the Lindblad operators in Eq.~\eqref{eq:QCDLindblad}, we obtain
\begin{align}
\Gamma_i
&= D(0) - \int d^3xd^3y D(x-y)n_{\psi_i}(x) n_{\psi_i}(y) \\
& \quad +\frac{\nabla^2 D(0)}{4MT} + \frac{\nabla^4 D(0)}{64M^2T^2}
+\frac{\nabla_i\nabla_j D(0)}{16M^2T^2} \langle \nabla_i\nabla_j\rangle_{\psi_i}, \nonumber
\end{align}
in which the first (second) line arises from thermal fluctuations (dissipation).
With the Gaussian approximation \eqref{eq:dgaussian} for $D(x)$, we get for our one-dimensional model
\begin{align}
\label{eq:decayrate}
\frac{\Gamma_i}{\gamma} &= 1 - \int dxdy \exp\left[-\frac{(x-y)^2}{l_{\rm corr}^2}\right]n_{\psi_i}(x) n_{\psi_i}(y) \nonumber \\
& \quad -\frac{1}{2MTl_{\rm corr}^2} +\frac{3}{16M^2T^2l_{\rm corr}^4}
- \frac{\langle\nabla^2\rangle_{\psi_i}}{8M^2T^2l_{\rm corr}^2}.
\end{align}
With the values from Table~\ref{tbl:setup}, the effect of dissipation (the second line in Eq.~\eqref{eq:decayrate}) is $\simeq -0.048 - 0.125\langle\nabla^2\rangle_{\psi_i} /M^2$.
The ground state wave function yields
$\langle\nabla^2\rangle_{\psi_0}/M^2\simeq 0, -0.02, -0.08$ for the free
case, the harmonic potential, and the regularized Coulomb potential, respectively.
Therefore, the effect of dissipation in this case ranges from $-0.048$ to $-0.038$ and only slightly depends on the potential.
On the other hand, the decay rate due to thermal fluctuations is quite sensitive to the size of the wave function, as can be seen from the first line in Eq.~\eqref{eq:decayrate}.
In fact, the wave function sizes are $M\Delta x\simeq 3.5$ and $1.9$, both smaller than $Ml_{\rm corr}=10$, for the ground states of the harmonic and the regularized Coulomb potentials, respectively.
If the size is smaller than the correlation length $l_{\rm corr}$, the decay rate in the absence of dissipation is already comparatively small so that the relative importance of dissipation increases.
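These numbers are straightforward to verify; evaluating the dissipative terms of Eq.~\eqref{eq:decayrate} with the parameters of Table~\ref{tbl:setup} (units $M=1$) reproduces the values quoted above:
\begin{verbatim}
T, lc = 0.1, 10.0                   # temperature and l_corr for M = 1
const = -1.0 / (2*T*lc**2) + 3.0 / (16 * T**2 * lc**4)
coeff = 1.0 / (8 * T**2 * lc**2)    # multiplies -<nabla^2>/M^2
print(const, coeff)                 # -> -0.048125  0.125
for lap in (0.0, -0.02, -0.08):     # <nabla^2>/M^2, ground states
    print(const - coeff * lap)      # from about -0.048 to -0.038
\end{verbatim}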
\section{Conclusion}
\label{sec:conclusion}
In this paper, we investigate how quantum dissipation influences the time evolution of the density matrix of a heavy quark in the quark-gluon plasma (QGP).
The master equation for heavy quark systems in the QGP has been obtained in the Lindblad form \cite{Akamatsu:2014qsa, lindblad1976generators} and thus possesses particularly useful properties: the density matrix $\rho$ stays hermitian and remains correctly normalized and positive during the time evolution.
We solve this Lindblad equation for a single heavy quark by stochastic unravelling.
Applying the approach of quantum state diffusion (QSD) \cite{gisin1992quantum} to our Lindblad equation, we derive a nonlinear stochastic Schr\"odinger equation for the heavy quark wave function.
Subsequently we solve the QSD equation for a heavy quark in simple settings, i.e. in one spatial dimension and with or without external potentials (harmonic and regularized Coulomb potentials).
We find that in both cases the density matrix relaxes to $\rho_{\rm eq}\propto e^{-H/T}$ within statistical errors.
This property is expected from a constraint in the Lindblad equation introduced by the fluctuation-dissipation theorem for the QGP sector but was not explicitly guaranteed by the Lindblad equation itself (see Appendix \ref{app:steadystate}).
We also find that the relaxation process strongly depends on the initial condition, so that it is not captured by a simple rate equation.
As a further topic we study the effect of quantum dissipation by switching off the dissipative terms in the QSD equation.
Without the dissipative terms, the heavy quark is overheated because it only receives energy from the thermal medium which is not dissipated back.
It is shown that the importance of dissipation, as compared to the thermal fluctuations, strongly depends on the wave function size.
The relative importance of dissipation increases when the wave function is small.
In the future, we plan to extend our analysis to the description of heavy quarkonium in the QGP.
In that case, not only thermal fluctuations but also dissipation takes place nontrivially because the collisions of a heavy quark and those of a heavy antiquark interfere with each other.
It would be interesting and phenomenologically relevant to study the effects of quantum dissipation in that case.
The computations for quarkonium in three spatial dimensions and in evolving fluid background for heavy-ion collisions will be one of the ultimate goals of our project.
\begin{acknowledgments}
The work of Y. A. is partially supported by JSPS KAKENHI Grant Number JP18K13538. M. A. is supported in part by JSPS KAKENHI Grant Number JP18K03646.
Y.A. thanks the DFG Collaborative Research Centre SFB 1225 (ISOQUANT) for hospitality during his stay at Heidelberg University and A.R. was supported by SFB 1225 in full.
Y.A. also thanks T. Hirano for recommending \cite{percival1998quantum}.
\end{acknowledgments}
\ \\
\section{\label{sec:Introduction}Introduction}
All magnetically ordered materials, depending on the alignment of spins, are divided into two primary classes: ferro- and antiferromagnets. Ferromagnets are characterized by parallel alignment of spins which results in net magnetic moment, while spins in antiferromagnets are aligned in a mutually antiparallel way with zero net magnetization in the unperturbed state. Antiferromagnets represent the largest, but the least explored class of magnets with a potential to have a dramatic impact on spintronics and other magnetic technologies. In particular, the higher frequency ($\sim$ THz) of spin resonances in antiferromagnets can bring the clock-speed of spintronics devices into the THz range \cite{RevModPhys.90.015005, NatKimelAFMspintronics, AfmspintronicsNanoNat}.
Unfortunately, progress in both fundamental research and the development of antiferromagnetic spintronics is considerably hindered by the lack of net magnetization in antiferromagnets; even the discovery of antiferromagnetic order itself had to wait for the advent of neutron diffraction experiments in the late 1940s \cite{PhysRev.76.1256.2}. This is why approaches and mechanisms allowing efficient excitation of antiferromagnetic spins in the THz range became a subject of intense, but also challenging and intriguing research. In particular, it was recently suggested that THz magnetic fields can excite antiferromagnetically coupled spins with a significantly higher efficiency when accounting for the new, relativistic mechanism of field derivative torque (rFDT) \cite{Mondal2016}. This torque can reach strengths comparable with the conventional Zeeman torque \cite{Mondal2019}. However, the lack of methods for quantitative detection of spins in antiferromagnets prevents experimental verification of these claims and can even lead to mistakes in the interpretation of experimental results \cite{PhysRevLett.124.039901}.
Substantial progress in understanding THz light-spin coupling can be achieved by studying ferrimagnets, which are a subclass of antiferromagnets with two non-equivalent magnetic sublattices. Within each sublattice the spins are aligned ferromagnetically, while the intersublattice interaction is antiferromagnetic. The sublattice magnetizations can differ in size, and therefore the net magnetization is not necessarily zero. The latter greatly simplifies experimental studies, but it does not remove the THz resonances, called exchange modes, since antiferromagnetic order is still present. In this article, we demonstrate and explore the high-frequency response of antiferromagnetic spins in a ferrimagnet to a THz magnetic field. We experimentally reveal the orientation of the THz field which causes the largest deviation of the spins from their equilibrium. Using simulations we show that the oscillations correspond to the exchange mode of spin resonance. The applied experimental technique is shown to have great potential to facilitate quantitative conclusions. In particular, owing to the non-zero Faraday rotation in the unperturbed state ($\alpha_F$) and the calibrated dynamic Faraday rotation ($\Delta \alpha_F$), the ratio $\Delta \alpha_F / \alpha_F$ unambiguously defines the spin deviations caused by the calibrated THz magnetic field. The technique allows us to show that the conventional Zeeman torque plays the dominant role in the spin excitation, while alternative mechanisms can essentially be neglected.
The garnet structure (crystallographic space group Ia$\bar{3}$d) of rare-earth iron garnets (REIGs) gives rise to unusual magnetic properties \cite{neelferri, HoIG_spinflop}. Three of the five Fe\textsuperscript{3+} ions per formula unit (R\textsubscript{3}Fe\textsubscript{5}O\textsubscript{12}) form a sublattice with tetrahedral symmetry and are antiferromagnetically coupled to the remaining two iron ions occupying sites of octahedral symmetry. The imbalance between these iron ions results in a net magnetic moment $\mathbf{M}_{Fe}$, to which the rare-earth site magnetization $\mathbf{M}_{R}$ aligns anti-parallel. The result is a three-sublattice ferrimagnet with net magnetization $\mathbf{M} = \mathbf{M}_{R} + \mathbf{M}_{Fe}$. The antiferromagnetic exchange between the iron sublattices is large compared to any other interaction experienced by the Fe\textsuperscript{3+} spins, justifying the approximation of treating them as a single sublattice with magnetization $\mathbf{M}_{Fe}$ \cite{Levitin}. The RE sublattice experiences the exchange field generated by this iron magnetization \cite{neelferri}, while the intra-sublattice exchange interaction is weak and can be ignored, so that the RE sublattice resembles a paramagnet in the exchange field.
The REIG structure studied in this work is a $19$ $\mu$m film of Bi- and Ga-substituted thulium iron garnet Tm\textsubscript{3-x}Bi\textsubscript{x}Fe\textsubscript{5-y}Ga\textsubscript{y}O\textsubscript{12} (TmIG) with targeted composition $x = 1$, $y = 0.8$. The film was grown by liquid phase epitaxy on a $500$ $\mu$m thick ($111$)-oriented GGG substrate. The sample was doped with Bi\textsuperscript{3+} to enhance magneto-optical effects \cite{Hansen,Hibiya_1985, zvezdin1997modern}. Previous research on films grown in this way shows that the sample is characterized by a uniaxial out-of-plane anisotropy, as the thin-film shape anisotropy is overcome by stress-induced anisotropy from a lattice mismatch between substrate and sample \cite{Kubota_2012}, together with a small contribution of growth-induced anisotropy due to the site preference of bismuth ions along the growth direction \cite{TmBiFeGaO12_substitutionsgerhard, Growth_anisotropy_euIG}. Consequently, this gives an ``easy-axis'' along the [$111$] crystallographic direction. These expectations are confirmed by measurements of the static magneto-optical Faraday rotation as a function of magnetic field (Supplemental Material \footnote{\label{footnote}See Supplemental Material for magneto-optical characterization of the sample, experimental details of THz generation, pump and probe polarization dependencies, amplitude of dynamics vs THz and external magnetic field, supplemental waveforms and Fourier spectra over a wide temperature range, details on the numerical modelling and comprehensive description of the Lagrangian formalism, which includes Refs. \cite{HoIG_spinflop,PhysRevLett.123.157202,Davydova_2019, Sajadi:15, PhysRev.129.1995}.}).
In the pump$-$probe experiment, we use optical pulses from a Ti:Sapphire amplifier with a central wavelength of $800$ nm, $4$ mJ energy per pulse, $100$ fs pulse duration, and $1$ kHz repetition rate. These pulses were employed to generate single-cycle THz pulses by the tilted-front optical rectification technique in a lithium niobate crystal, as described in Ref. \cite{Hebling:02} and in detail in Ref. \cite{Hirori_beamdivergence}. The generated THz beam was tightly focused onto the sample \cite{Hirori_review_4ftheta} and spatially overlapped with a low-intensity optical probe beam that was split off from the original beam beforehand. Time-resolved measurements were obtained by varying the time delay between the THz pump and the optical probe pulse while mapping the THz-induced probe polarization changes with a balanced photo-detector. The strength of the THz electric field was calibrated using the Pockels effect in a thin ($110$)-cut GaP crystal and yields a maximum peak strength of $|\mathbf{E}_{THz}| \approx 1$ MV/cm, implying a peak magnetic field of $0.33$ T. The THz pulse waveform and the corresponding Fourier spectrum are shown in the Supplemental Material. Both the generated THz and optical probe pulses are linearly polarized. The experimental geometry is schematically depicted in Fig. \ref{fig:1}(a). The THz magnetic field is initially along the $x$-axis, but this direction can be controlled by a set of wire-grid polarizers. Note that with this approach, a polarization rotation of $\pi/2$ from the initial state always reduces the THz magnetic field by at least one half. A static external magnetic field $\mu_0\mathbf{H}_{ext}$ of at most $250$ mT was applied at an angle of $\sim 10^\circ$ with the sample plane. The static Faraday rotation $\alpha_F$ shows that this maximum field strength is sufficient to saturate the magnetization in the garnet film.
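As a consistency check of the quoted values: for a freely propagating electromagnetic pulse the peak magnetic field follows from $|\mathbf{B}_{THz}|=|\mathbf{E}_{THz}|/c \simeq 10^{8}~\mathrm{V/m}/(3\times 10^{8}~\mathrm{m/s}) \approx 0.33$ T.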
Figure \ref{fig:1}(b) shows THz-induced ultrafast dynamics of the probe polarization $\Delta \alpha_F$ and how it depends on the THz-pump polarization. By rotating the THz polarization from $\mathbf{H}_{THz}\parallel \mathbf{M}_\parallel$ to $\mathbf{H}_{THz}\perp \mathbf{M}_\parallel$, the symmetry of the high-frequency oscillations with respect to the polarity of the external magnetic field is altered. To reveal the origin of these peculiar THz-induced modulations, we performed systematic studies as a function of pump and probe polarizations, external magnetic field, THz field strength and temperature.
\begin{figure}[h!]
\centering
\includegraphics{Fig1.eps}
\caption{\small{(a) Schematic of the experimental setup. The illustration on the top-right shows the distribution of dodecahedral Tm\textsuperscript{3+} and tetrahedral/octahedral Fe\textsuperscript{3+} ions. Any magnetic moment will tend to align along the [$111$] ``easy-axis''. (b) Polarization rotation $\Delta\alpha_F$ measured as a function of the delay $\tau$ between THz pump and visible probe pulses. Depending on the THz polarization, the mapped dynamics is either odd ($\mathbf{H}_{THz}\parallel y \perp \mathbf{M}_\parallel$) or even ($\mathbf{H}_{THz}\parallel x \parallel \mathbf{M}_\parallel$) with respect to the external magnetic field. The measurements were performed at $T=6$ K.}}
\label{fig:1}
\end{figure}
The observed oscillations of the probe polarization rotation are obviously the result of a periodic modulation of the optical anisotropy (birefringence) in the sample. A THz pulse is able to induce such optical anisotropy by modifying the dielectric permittivity tensor $\epsilon_{ij}$. If one neglects dissipation, which is a safe approximation for iron garnets at the wavelength of $800$ nm \cite{Wood, zvezdin1997modern}, the tensor is Hermitian \cite{ElectrodynamicsContmedia}. A tensor of this type can be written as a sum of the symmetric (real) $\epsilon_{ij}^{(s)} = \epsilon_{ji}^{(s)}$ and antisymmetric (imaginary) $\epsilon_{ij}^{(a)}=-\epsilon_{ji}^{(a)}$ parts. Measurements of the THz-induced dynamics as a function of probe polarization angle show no dependency (Supplemental Material \cite{Note1}), indicating that the THz-induced modulations originate from the antisymmetric part of the dielectric tensor. This means that the polarization rotation $\Delta \alpha_F$ must be assigned to the magneto-optical Faraday effect. In a $[111]$ garnet crystal, this effect is a measure of the magnetization along the $z$-axis \cite{birss1964symmetry,Pisarev_1993}:
\begin{equation}
\epsilon_{xy}^{(a)} \sim M_z.
\label{anti}
\end{equation}
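To make this decomposition concrete, the following minimal Python sketch (with made-up tensor entries, not measured values) splits a Hermitian permittivity tensor into the real symmetric and imaginary antisymmetric parts discussed above:
\begin{verbatim}
# Sketch: split a Hermitian permittivity tensor into its real symmetric
# and imaginary antisymmetric parts; the entries below are illustrative.
import numpy as np

eps = np.array([[5.0,          0.1 + 0.02j, 0.0],
                [0.1 - 0.02j,  5.0,         0.0],
                [0.0,          0.0,         5.1]])
sym  = (eps + eps.T) / 2   # real symmetric part (ordinary birefringence)
anti = (eps - eps.T) / 2   # imaginary antisymmetric part, eps_xy ~ M_z
print(np.allclose(eps, sym + anti), anti[0, 1])  # True (0.02j)
\end{verbatim}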
When $\mathbf{H}_{THz} \perp \mathbf{M}_\parallel$, changing the external magnetic field polarity from $+\mathbf{H}_{ext}$ to $-\mathbf{H}_{ext}$ flips the sign of the observed dynamics (Fig. \ref{fig:1}(b), red waveforms). Moreover, by increasing the strength of the static magnetic field we found that the amplitude of the oscillations and the net magnetization saturate at the same field (Supplemental Material \cite{Note1}). This fact implies that the THz-induced dynamics must be assigned to dynamics of the magnetization $\mathbf{M}$. Due to the peculiarities of the detection technique (Eq. \eqref{anti}), the measurements are sensitive to modulations of the out-of-plane magnetization. Thus, comparing the amplitude of the oscillations $\Delta \alpha_F$ with the saturated static Faraday rotation $\alpha_F$ ($\sim 20^\circ$ at $800$ nm) allows us to quantitatively estimate the relative change of the magnetization along the $z$-axis during the oscillations: $\Delta M_z / M_z \sim 0.012$. In the other case, $\mathbf{H}_{THz}\parallel \mathbf{M}_\parallel$, the signal also saturates in line with the magnetization $\mathbf{M}$, but the phase of the oscillations is unaffected by the polarity of the external field (see Fig. \ref{fig:1}(b)).
\begin{figure}[h!]
\centering
\includegraphics{Fig2.eps}
\caption{\small{Fourier spectrum of the THz-induced signal ($\mathbf{H}_{THz}\parallel \mathbf{M}_\parallel$) measured at various temperatures. Central frequencies of the peaks deduced from the fit are plotted as a function of temperature in the inset. The dotted line denotes a fit with Eq. \eqref{fitfreq} ($\omega_0 = 400 $ GHz, $T_C = 314 $ K) and the bars denote $\pm$ half-width-half-maximum of the fitted Lorentzians. The FFT spectrum for $\mathbf{H}_{THz}\perp \mathbf{M}$ is added to the Supplemental Material \cite{Note1}.}}
\label{fig:fig2}
\end{figure}
Figure \ref{fig:fig2} shows the Fourier spectra of the THz-induced waveforms over the entire accessible temperature range for $\mathbf{H}_{THz}\parallel \mathbf{M}_\parallel$. The inset summarizes the temperature dependence of the peak frequency; this behaviour is in qualitative agreement with what is expected for an exchange mode in rare-earth iron garnets \cite{PhysRev.129.1995, PhysRevLett.105.107402}. To get a better insight into the THz-induced magnetization dynamics, we modelled the response with the help of the Landau-Lifshitz-Gilbert (LLG) equations \cite{Kirilyuk_rev}. The equations, in particular, account for the rFDT term derived in Ref. \cite{Mondal2016}:
\begin{equation}
\frac{d\mathbf{M_i}}{dt} = - \gamma_i \mathbf{M_i} \times \mathbf{B}_{i}^{eff}(t) + \frac{\alpha_i}{M_i}\mathbf{M_i} \times \Big(\frac{d\mathbf{M_i}}{dt} + \frac{a_i^3}{\mu_B}\frac{d \mathbf{H}}{d t}\Big),
\label{LLG}
\end{equation}
where $i =$ Fe, Tm. We use literature $g$-values for thulium $g_{Tm} = 7/6$ and iron $g_{Fe} = 2$ \cite{wohlfarth1986handbook}.
Based on the Ga-content, the sublattice magnetization of iron $|\mathbf{M}_{Fe}| = 4.2$ ($\mu_B$ per formula unit R\textsubscript{3}Fe\textsubscript{5}O\textsubscript{12}) \cite{wohlfarth1986handbook} is antiferromagnetically coupled to the magnetization of thulium $|\mathbf{M}_{Tm}| = 2$. The latter is taken to match the effective $g$-factor $g_{ef} \equiv (M_{Fe}-M_{Tm})/((M_{Fe}/g_{Fe}) - M_{Tm}/g_{Tm}) \approx 6$ measured in this sample (Supplemental Material \cite{Note1}). The volume of the unit cell $a_i^3$ \footnote{We have used the following set of values for calculating the magnitude of the rFDT terms: $a^3_{Fe} = 1.221 \times 10^{-28}\text{m}^3$, $a^3_{Tm} = 5.815 \times 10^{-29}\text{m}^3$ and vacuum permeability $\mu_0 = 1.257 \times 10^{-6}\text{T m A}^{-1}$. } per spin constitutes a small factor $a^3/\mu_B \sim 10^{-5}$ m/A. The effective magnetic fields $\mathbf{B}_i^{eff} \equiv - \delta \Phi/ \delta \mathbf{M}_i$ (in T) are derived from the thermodynamic potential $\Phi$ \cite{Kirilyuk_rev}, containing the exchange interaction and the Zeeman coupling to the external field and the THz magnetic field $\mathbf{H}(t)$ (in A/m). For the model we use a realistic exchange constant $\Lambda = -30$ T/$\mu_B$ \cite{TmIG_molecularfield, Molecularfields, wohlfarth1986handbook} and a THz magnetic field modelled by the Gaussian derivative function fitted to the experimental waveform (see Supplemental Material \cite{Note1}). The initial state of the net magnetization vector is taken along the external field, considering that we saturated the magnetization experimentally. The numerical solution of these equations reveals that the THz magnetic field induces dynamics of the Néel vector $\mathbf{L} \equiv \mathbf{M}_{Fe} - \mathbf{M}_{Tm}$ and the magnetization $\mathbf{M} \equiv \mathbf{M}_{Fe} + \mathbf{M}_{Tm}$. The dynamics of $\mathbf{M}_{Fe}$, which dominates the detected magneto-optical signal, is shown in Fig. \ref{fig:fig4}. The phenomenological Gilbert damping factors $\alpha_{Fe}/M_{Fe} = \alpha_{Tm}/M_{Tm} = 0.0015$ have been taken to match the experimental observations.
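As an illustration of this modelling step, the minimal Python sketch below integrates the two coupled LLG equations of Eq. \eqref{LLG} without the small rFDT term; the pulse parameters follow the Gaussian-derivative fit of the Supplemental Material, while the initial state and integration settings are simplifying assumptions rather than the actual simulation code:
\begin{verbatim}
# Minimal sketch: two-sublattice LLG dynamics (Eq. (2) without the small
# rFDT term). Units and settings are illustrative assumptions.
import numpy as np
from scipy.integrate import solve_ivp

MU_B, HBAR = 9.274e-24, 1.055e-34            # J/T, J s
g = {"Fe": 2.0, "Tm": 7.0 / 6.0}
gam = {i: g[i] * MU_B / HBAR for i in g}     # gyromagnetic ratios, rad/(s T)
LAM, ALPHA = -30.0, 0.0015                   # exchange (T/mu_B), damping

def h_thz(t, A=0.33, d=1.17e-12, w=0.2223e-12):
    """Gaussian-derivative THz magnetic pulse along y, in tesla."""
    u = (t - d) / w
    return np.array([0.0, -2.0 * A * u * np.exp(-u ** 2), 0.0])

def rhs(t, y):
    m_fe, m_tm = y[:3], y[3:]
    b_fe = LAM * m_tm + h_thz(t)             # effective field on Fe (T)
    b_tm = LAM * m_fe + h_thz(t)             # effective field on Tm (T)
    out = []
    for m, b, gi in ((m_fe, b_fe, gam["Fe"]), (m_tm, b_tm, gam["Tm"])):
        prec = -gi * np.cross(m, b)          # precession (Zeeman) torque
        damp = -ALPHA * gi / np.linalg.norm(m) * np.cross(m, np.cross(m, b))
        out.append(prec + damp)              # Gilbert damping, first order
    return np.concatenate(out)

y0 = np.array([4.2, 0, 0, -2.0, 0, 0])       # antiparallel sublattices
sol = solve_ivp(rhs, (0.0, 2e-11), y0, max_step=5e-15)
print("final M_Fe,z =", sol.y[2, -1])        # exchange-mode oscillations
\end{verbatim}
In the full model, the effective fields additionally contain the external and anisotropy fields derived from the thermodynamic potential $\Phi$.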
\begin{figure}
\centering
\includegraphics[width = \linewidth]{Fig3.eps}
\caption{\small{Dynamics in the $z$-component of iron $\mathbf{M}_{Fe}$ modeled by LLG equations.}}
\label{fig:fig4}
\end{figure}
The simulation exhibits a high-frequency magnetic resonance at around $380$ GHz, which we identify as the Kaplan-Kittel exchange mode, since its frequency depends linearly on the exchange constant \cite{doi:10.1063/1.1699018}. The dynamics of $M_{Fe,z}(t)$ in Fig. \ref{fig:fig4} is in agreement with our experimental results in Fig. \ref{fig:1}(b). It has a larger amplitude and changes sign upon reversing $\mathbf{M}$ when $\mathbf{H}_{THz} \perp \mathbf{M}_\parallel$, while the sign is conserved if $\mathbf{H}_{THz} \parallel \mathbf{M}_\parallel$. The amplitude matches the experimental values very well even if the rFDT term is not taken into account. As proposed in Ref. \cite{Mondal2019}, the contribution of this term is indeed small for low damping $\alpha_{1,2} < 0.01$. Altogether, the simulations indicate that the observed oscillations correspond to the exchange mode of spin resonance and show that the Zeeman torque plays the dominant role in the excitation of this mode by the THz magnetic field.
These conclusions can also be confirmed analytically using Lagrangian mechanics and the effective Lagrangian (see Supplemental Material \cite{Note1} for full derivation):
\begin{eqnarray}
\begin{aligned}
\label{Leff}
\mathcal{L}_{eff} &= \frac{\mathcal{M}^2}{2\delta}\Bigg[\left(\Big(\frac{\dot\phi}{\overline{\gamma}} - H\Big)\sin\theta + h_y\cos\theta\cos\phi \right)^2 \\&\quad + \left(\frac{\dot\theta}{\overline{\gamma}} + h_y\sin\phi\right)^2 \Bigg]
+ m \Big(H - \frac{\dot\phi}{\gamma_{ef}}\Big) \cos\theta \\&\quad + mh_y\sin\theta\cos\phi + K_U\sin^2\theta \sin^2\phi.
\end{aligned}
\end{eqnarray} \newline
Here $\mathcal{M} = M_{Fe} + M_{Tm}$, $m = M_{Fe} - M_{Tm}$, $\delta \equiv -4\Lambda M_{Fe} M_{Tm}$, $1/\overline{\gamma} = (M_{Fe}/\gamma_{Fe} + M_{Tm}/\gamma_{Tm})/(M_{Fe} + M_{Tm})$ and $\gamma_{ef} = g_{ef} \mu_B/\hbar$; $h_x(t)$ and $h_y(t)$ are the THz magnetic field components in the sample $x-y$ plane as in Fig. \ref{fig:1}, and $H(t) \equiv H_{ext} + h_x(t)$ is the total field along the external field $x$-direction (the $10^{\circ}$ inclination angle of the external magnetic field is ignored here). The polar angle $\theta \in [0,\pi]$ is defined with respect to the external field $x$-axis.
In this coordinate system, the net magnetization vector can be expressed as $\mathbf{M} = m(\cos\theta, \sin\theta \cos\phi, \sin\theta\sin\phi)$. Equations of motion now follow from Euler-Lagrange equations, taking into account a phenomenological damping term through a Rayleigh function \cite{Davydova_2019}. The results can be linearized about the ground state angles $\theta_0, \phi_0$, found by minimization of the thermodynamic potential $\Phi$ for which we find $\phi_0 = \pi/2$ and $\theta_0$ depending on the ratio of external field to anisotropy. This has been done for general $\theta_0$ in the Supplemental Material \cite{Note1}, yielding complex equations of motion. In the special case of zero external field, the spins lie along the easy axis of anisotropy ($\theta_0 = \pi/2$). Linearizing around the ground-state angles $\theta = \theta_0 + \theta_l$, $\phi = \phi_0 + \phi_l$ with $\theta_l, \phi_l \ll 1$, the equations of motion then take the simple form:
\begin{multline}
\label{motiontheta}
\ddot{\theta}_l + \frac{\alpha\mathcal{M}\overline{\gamma}}{\chi_\perp}\dot{\theta}_l + \frac{2K_U\overline{\gamma}^2}{\chi_\perp}\theta_l - \frac{m\overline{\gamma}^2}{\gamma_{ef}\chi_\perp}\dot\phi_l = - \overline{\gamma}\dot h_y - \frac{m\overline{\gamma}^2h_x}{\chi_\perp},
\end{multline}
\begin{multline}
\label{motionphi}
\ddot\phi_l + \frac{\alpha \mathcal{M}\overline{\gamma}}{\chi_\perp}\dot\phi_l +\frac{2K_U\overline{\gamma}^2}{\chi_\perp}\phi_l + \frac{m\overline{\gamma}^2}{\gamma_{ef}\chi_\perp}\dot\theta_l = \overline{\gamma} \dot h_x - \frac{m\overline{\gamma}^2h_y}{\chi_\perp}.
\end{multline}
Here $\chi_\perp \equiv \frac{\mathcal{M}^2}{\delta}$ is a constant inversely proportional to the exchange constant. It is seen that the large THz field-derivative term $\overline{\gamma} \dot{h}_i$ appears as the dominant driving force, in accordance with our understanding of how dynamical THz fields may excite magnons in antiferromagnets (where $m \to 0$) via the Zeeman interaction \cite{PhysRevLett.123.157202, KIMEL20201}. Moreover, each equation of motion contains a mutually orthogonal component of the field derivative $\dot{h}_{x,y}$. Noting that $\mathbf{H}_{THz} \perp \mathbf{M}_\parallel$ leads to $\dot h_x = 0$ and $\mathbf{H}_{THz} \parallel \mathbf{M}_{\parallel}$ to $\dot h_y = 0$, the symmetry with respect to the external field $\pm\mathbf{H}_{ext}$ observed experimentally can now be explained (see Supplemental Material \cite{Note1}).
Moreover, considering free precession ($\alpha \to 0$, $h_{x,y} \to 0$), the absolute eigenfrequencies of the coupled set of equations \eqref{motiontheta}-\eqref{motionphi} are:
\begin{eqnarray}
\label{KK}
\omega_{ex} &=& \frac{m\overline{\gamma}^2}{\gamma_{ef} \chi_\perp} \approx |\Lambda|(|\gamma_{Tm}|M_{Fe} - |\gamma_{Fe}|M_{Tm}),\\
\label{FM}
\omega_{FM} &=& \gamma_{ef}\frac{2K_U}{m} \equiv \gamma_{ef} H_{a}.
\end{eqnarray}
Equation \eqref{KK} corresponds to the Kaplan-Kittel exchange resonance frequency \cite{doi:10.1063/1.1699018}, while Eq. \eqref{FM} describes the conventional ferromagnetic precession of the net magnetization in the anisotropy field $H_a$. Using Eq. \eqref{KK} and Bloch's law for the spontaneous magnetization of iron, and assuming $M_{Tm}(T) \sim M_{Fe}(T)$, we fitted the temperature dependence of the oscillation frequency shown in the inset of Fig. \ref{fig:fig2} using:
\begin{equation}
\label{fitfreq}
\omega_{ex}(T) \sim \omega_0\left(1-\left(T/T_C\right)^{\frac{3}{2}}\right),
\end{equation}
where $\omega_0$ is the exchange resonance frequency at zero temperature. In reality, $M_{Tm}$ drops faster with temperature than the iron magnetization, accounting for the slight rise of the frequency at low temperatures. In general, the quality of the fit further confirms our assumption that the observed oscillations correspond to the exchange mode.
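The fit itself is a simple two-parameter regression; a possible sketch is given below, where the $(T,f)$ pairs are placeholders standing in for the peak frequencies extracted from Fig. \ref{fig:fig2}, not the actual data:
\begin{verbatim}
# Sketch: fit of the exchange-mode frequency with Eq. (7); the data
# points are placeholders, not the measured peak frequencies.
import numpy as np
from scipy.optimize import curve_fit

def f_ex(T, f0, Tc):
    return f0 * (1.0 - (T / Tc) ** 1.5)      # Bloch-law scaling

T = np.array([6, 50, 100, 150, 200, 250, 295])     # K (placeholders)
f = np.array([398, 392, 380, 360, 330, 290, 240])  # GHz (placeholders)
(f0, Tc), cov = curve_fit(f_ex, T, f, p0=(400.0, 314.0))
print(f"f0 = {f0:.0f} GHz, T_C = {Tc:.0f} K")
\end{verbatim}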
In conclusion, by investigating the response of ferrimagnets to THz fields and comparing the data with theoretical predictions from numerical solutions of the Landau-Lifshitz-Gilbert equations and analytical solutions derived from Euler-Lagrange equations of motion, we showed that the THz field excites the exchange mode in the ensemble of antiferromagnetically coupled spins. We demonstrated that the Zeeman torque plays a dominant role in the coupling of the THz field to the spins. While quantitative studies of spin dynamics in compensated antiferromagnets seem to require complex magnetometry techniques, ferrimagnets provide an excellent playground to study the dynamics of antiferromagnetically coupled spins. Finally, we point out that previous measurements of ferrimagnetic resonance \cite{GdFeCo2017, PhysRevLett.105.107402} could only reveal an effective gyromagnetic ratio. Using excitation of the exchange mode with a THz magnetic field, magneto-optical detection via the Faraday effect and comparison of the observed amplitudes of magnetization dynamics with the results of numerical simulations provides a universal technique to directly estimate the individual gyromagnetic ratios of the ions.
\begin{acknowledgments}
The authors thank S. Semin, Ch. Berkhout and P. Albers for technical support. The work was supported by de Nederlandse Organisatie voor Wetenschappelijk Onderzoek (NWO). M.V.L. acknowledges the support from the Russian Foundation for Basic Research (Nos. 18-29-27020 and 18-52-16006).
\end{acknowledgments}
\nocite{*}
\section{Static magneto-optical characterization of TmIG sample}
\begin{figure}[h!]
\centering
\vspace{-4ex}
\includegraphics[width = 0.75\textwidth]{supfig1.eps}
\vspace{-1ex}
\caption{\small{Measurements of the magneto-optical Faraday rotation using a continuous-wave helium-neon laser ($\lambda = 632.8$ nm) with both the external field and the light's wave-vector perpendicular to the sample plane. A paramagnetic contribution $\sim 0.56$ deg/T from the cryostat glass windows has been subtracted from the raw data. The data exhibit a large rotation and demonstrate a weak easy-axis type of anisotropy with a small coercive field ($< 6$ mT) and saturation field ($< 25$ mT). No compensation point was observed above liquid-nitrogen temperature ($77$ K).}}
\label{fig:staticsHeNe}
\end{figure}
\begin{figure}[h!]
\vspace{-2ex}
\centering
\includegraphics[width = 0.6\textwidth]{supfig2.eps}
\caption{\small{Static polarization rotation measurements with light at a wavelength of $800$ nm in the experimental geometry (see Fig. 1(a) in the article). The evolution of the hysteresis-loop shape can be attributed to temperature-dependent anisotropy constants. Clearly, no magnetization compensation point is observed in this temperature range. At $T = 6$ K, the saturated polarization rotation is $\sim \pm 1.65^\circ$; given the $10^\circ$ angle of the magnetic field with the sample plane, this value has been used to estimate the Faraday rotation ($\sim \pm 10^\circ$) when the magnetization is along the sample normal.}}
\label{fig:supfig2}
\end{figure}
\ \ \ \ \ \
\newpage
\begin{figure}[h!]
\centering
\includegraphics{supfig3.eps}
\caption{\small{Domain pattern seen by magneto-optical microscopy in transmission at zero field. The typical ``labyrinth''-type domains grow with decreasing temperature, which indicates a growing role of the easy-axis anisotropy and the thulium magnetization \cite{HoIG_spinflop}. When an external magnetic field is applied along the out-of-plane easy axis, the domains aligned with this field expand and the sample becomes uniformly magnetized at relatively small fields (see Suppl. Fig. \ref{fig:staticsHeNe}).}}
\label{fig:domains}
\end{figure}
\ \ \ \
\newpage
\section{Experimental setup}
The experimental setup regarding THz generation by optical rectification in lithium niobate is described in detail in \cite{PhysRevLett.123.157202,Sajadi:15}. The THz path was purged with nitrogen to avoid water absorption lines in the THz spectrum. A small part of the initial $800$ nm beam is chopped out beforehand (ratio $1:100$) and is brought to spatial and temporal overlap with the THz pump pulse. The focused spot size of the probe beam is considerably smaller than that of the THz beam. The waveform of the THz pulse was mapped using electro-optical sampling in a $50$ $\mu$m GaP [$110$] crystal, as shown in Supplemental Fig. \ref{fig:THz1}.
\begin{figure}[h!]
\centering
\includegraphics{supfig4.eps}
\caption{\small{THz waveform and corresponding Fourier spectrum measured by EO sampling in GaP. }} \label{fig:THz1}
\end{figure}
\section{Supplemental Results}
\begin{figure}[h!]
\centering
\includegraphics{supfig5.eps}
\caption{\small{THz-induced polarization rotation waveforms for two orthogonal THz pump polarizations (two panels) and for several orientations of the probe polarization. The angle depicted is the angle of the probe electric field with respect to the experimental $x$-axis (Fig. 1(a) of the article). These data imply that the THz-induced signals are Faraday rotation (see main text).}}
\label{fig:supfig5}
\end{figure}
\begin{figure}
\centering
\includegraphics{supfig6.eps}
\caption{\small{Peak-to-peak amplitudes of THz-induced waveforms as a function of external magnetic field (a) and THz field (b) for two orthogonal THz pump polarizations. The bending of the red dots at low THz fields is attributed to the fact that the THz light is not perfectly linearly polarized and to imperfections of the wire-grid polarizers.}}
\label{fig:supfig6}
\end{figure}
\newpage
\begin{figure}[h!]
\centering
\includegraphics{supfig7.eps}
\caption{\small{THz-induced Faraday rotation waveforms obtained at several angles $\alpha$ of the pump polarization and an applied external field of $\pm 250$ mT. Apart from the previous figure, this is the only graph for which a weaker THz electric field of $160$ kV/cm was used. The result shows that measuring at $\alpha = \pm45^\circ$, in between the fully symmetric $\alpha=0^\circ$ and the fully antisymmetric orthogonal $\alpha = \pm90^\circ$ configurations, results in a mixed symmetry. Moreover, the effects gradually become weaker towards $\alpha = 0^\circ$ ($\mathbf{H}_{THz} \parallel \mathbf{M}_\parallel$).}}
\label{fig:supfig7}
\end{figure}
\newpage
\begin{figure}[h!]
\centering
\includegraphics{supfig8.eps}
\caption{\small{FFT spectra of the THz-induced Faraday rotation for $\mathbf{H}_{THz} \perp \mathbf{M}_{\parallel}$. The exchange-mode frequency at about $375$ GHz shows softening similar to the case $\mathbf{H}_{THz} \parallel \mathbf{M}_\parallel$ presented in Fig. 2 of the article. At lower temperatures another high frequency ($725$ GHz) appears. It is known that crystal-field transitions may appear in this region \cite{PhysRev.129.1995}, but whether this peak can be attributed to such transitions is as yet unclear.}}
\label{fig:supfig8}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics{supfig9.eps}
\caption{\small{Experimental waveforms of THz induced Faraday rotation as a function of temperature. In both cases the same external fields (specified in the first figure) have been applied to ensure saturation of static magnetization.}}
\label{fig:supfig9}
\end{figure}
\newpage
\begin{figure}
\centering
\includegraphics{supfig10.eps}
\caption{\small{Preliminary data of THz-induced ferromagnetic resonance, used to estimate the effective $g$-factor $g_{eff} \approx 6$.}}
\label{fig:geff}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics{supfig11.eps}
\caption{\small{The dotted line shows the experimentally calibrated THz magnetic field pulse, which has been fitted using the Gaussian derivative function $G'(x) = -2A((x-d)/w) \exp\left[-((x-d)/w)^2\right]$ with $A = 404$ mT, $d = 1.17$ ps (variable, determining the arrival time of the pulse) and pulse width $w = 0.2223$ ps.}}
\label{fig:THz}
\end{figure}
\newpage
\begin{figure}
\centering
\includegraphics{supfig12.eps}
\caption{\small{Simulated dynamics of the iron magnetization $\mathbf{M}_{Fe}$ using LLG equations and plotted separately for the $x$, $y$ and $z$ components where $z$ coincides with the sample
out-of-plane axis. It shows how the symmetry with respect to external field is exactly opposite when looking at the $y$-component, to which we are not experimentally sensitive.}}
\label{fig:supfig12}
\end{figure}
\section{Equations of motion derived from Lagrangian formalism}
We start from the following Lagrangian and Rayleigh dissipation functions, which are equivalent to the LLG equations for a two-sublattice ferrimagnet \cite{Davydova_2019}:
\begin{eqnarray}
\label{Lagrangian}
\mathcal{L} &=& T - \Phi \nonumber\\
&= &-\frac{M_{Fe}}{\gamma_{Fe}}\cos\theta_{Fe}\frac{\partial\phi_{Fe}}{\partial t} -\frac{M_R}{\gamma_{R}}\cos\theta_R\frac{\partial\phi_R}{\partial t} - \Phi \\
\mathcal{R} &=& \mathcal{R}_{Fe} + \mathcal{R}_R, \text{ \ \ \ \ \ \ \ \ } \mathcal{R}_{Fe,R} = \frac{\alpha M_{Fe,R}}{2\gamma_{Fe,R}}\big(\dot{\theta}^2_{Fe,R} + \sin^2\theta_{Fe,R}\dot{\phi}^2_{Fe,R} \big),
\end{eqnarray}
where $\theta_i$ and $\phi_i$ are the polar and azimuthal angles of the iron (Fe) and rare-earth (R) sublattices in the experimental coordinate system, with the $x$-axis aligned to the external magnetic field $\mathbf{H}_{ext}$ (see Fig. \ref{fig:coords}; here we ignore the $10^\circ$ inclination of the field for simplicity).
\begin{figure}[h!]
\centering
\includegraphics[scale = 1]{supfig13.eps}
\caption{\small{Coordinate system used for Lagrangian equation.}}
\label{fig:coords}
\end{figure}
The thermodynamic potential used is:
\begin{eqnarray}
\label{phi}
\Phi &=& -(\mathbf{M}_{Fe} + \mathbf{M}_{R}) \cdot\mathbf{H}_{ef} - \Lambda \mathbf{M}_{Fe}\cdot\mathbf{M}_{R} - K_{Fe}\frac{(\mathbf{M}_{Fe}\cdot\mathbf{n})^2}{M_{Fe}^2} - K_{R}\frac{(\mathbf{M}_R\cdot\mathbf{n})^2}{M_R^2}.
\end{eqnarray}
Here $\mathbf{n} = (0,0,1)$ is the directional vector of the easy axis of anisotropy, $\Lambda < 0$ the intersublattice exchange constant and $K_{Fe,R}>0$ the uniaxial anisotropy constants. The Euler-Lagrange equations w.r.t. $\theta_i$, $\phi_i$ give rise to four coupled equations of motion (two for each sublattice), which are in general difficult and sometimes even impossible to solve. Instead, in Ref. \cite{Davydova_2019} an effective Lagrangian is obtained by assuming that the cantings of the two sublattices are equal and small. This approach generally works at fields well below the exchange field (small canting), and it is valid here as the static measurements indicate we are well below the spin-flop field.
We introduce the usual definitions of the magnetization $\mathbf{M} = \mathbf{M}_{Fe} + \mathbf{M}_R$ and antiferromagnetic (Néel) vector $\mathbf{L} = \mathbf{M}_{Fe} - \mathbf{M}_R$. These two vectors are parameterized using a set of angles $\theta,\epsilon$ and $\phi, \beta$ defined as:
\begin{eqnarray}
\theta_{Fe} &=& \theta - \epsilon, \quad \theta_{R} = \pi-\theta-\epsilon, \label{theta} \\
\phi_{Fe} &=& \phi + \beta, \quad \phi_{R} = \pi + \phi -\beta. \label{phi_angle}
\end{eqnarray}
In the quasi-antiferromagnetic approximation \cite{Davydova_2019}, the canting angles are assumed to be small, $\epsilon \ll 1$, $\beta \ll 1$. To first order, $\mathbf{M}$ and $\mathbf{L}$ are then naturally given by:
\begin{eqnarray}
\mathbf{M} &=& m(\cos\theta, \sin\theta\cos\phi, \sin\theta\sin\phi) \\
\mathbf{L} &=& \mathcal{M}(\cos\theta, \sin\theta\cos\phi, \sin\theta\sin\phi)
\end{eqnarray}
where $m \equiv M_{Fe} - M_{R}$ and $\mathcal{M} \equiv M_{Fe} + M_{R} $.
Substituting our new set of angles (\ref{theta})-(\ref{phi_angle}) into the Lagrangian (\ref{Lagrangian}) and expanding up to quadratic terms in the small variables $\epsilon, \beta$ gives for the kinetic energy part:
\begin{eqnarray}
\mathcal{L} &=& -\frac{m}{\gamma_{ef}}\dot{\phi}\cos\theta - \frac{\mathcal{M}}{\overline{\gamma}}\sin\theta \left(\dot{\phi}\epsilon + \beta \dot{\theta}\right) - \Phi \label{Lexpanded}
\end{eqnarray}
where we defined:
\begin{equation}
\frac{1}{\gamma_{ef}} \equiv \frac{M_{Fe}/\gamma_{Fe} - M_{R} / \gamma_R}{M_{Fe} - M_R} \ \ \ \ \ \ \ \ \ \text{ and } \ \ \ \ \ \ \ \ \ \ \frac{1}{\overline{\gamma}} \equiv \frac{M_{Fe}/\gamma_{Fe} + M_{R} / \gamma_R}{M_{Fe} + M_R}.
\end{equation}
The potential energy $\Phi$ can be expanded similarly. Here, we make the simplification that both sublattices experience the same effective anisotropy $K_U \equiv (K_{Fe} + K_R)/2$, in which case the anisotropy terms can be replaced by a single term $-K_U(\mathbf{l}\cdot\mathbf{n})^2$ where $\mathbf{l} = \mathbf{L}/|\mathbf{L}|$. Furthermore, the effective field $\mathbf{H}_{ef}$ in (\ref{phi}) consists of the static external field and the time-dependent THz magnetic field, $\mathbf{H}_{ef} = \mathbf{H}_{ext} + \mathbf{H}_{THz}$. The external field is chosen along the $x$-axis, $\mathbf{H}_{ext} = (H_0,0,0)$, while we assume the THz magnetic field lies in the $x-y$ plane, $\mathbf{H}_{THz} \equiv (h_x(t), h_y(t), 0)$ (see Fig. 1 of the article). Writing $\delta \equiv -4\Lambda M_{Fe} M_{R}$ and $H \equiv H_{ext} + h_x$, the potential energy becomes, after expanding in $\epsilon, \beta$:
\begin{eqnarray}
\label{Phiexpanded}
\Phi &=& -mH\cos\theta - \mathcal{M}H\epsilon\sin\theta - mh_y \sin\theta\cos\phi + \mathcal{M}h_y\beta \sin\theta\sin\phi \\
&+& \mathcal{M}h_y \epsilon \cos\theta \cos\phi - mh_y\cos\theta\sin\phi \ \epsilon\cdot\beta +\frac{\delta}{2}\left(\epsilon^2 + \beta^2 \sin^2\theta \right) - K_U\sin^2\theta \sin^2\phi.
\nonumber \end{eqnarray}
We ignore the term containing $\epsilon \cdot \beta$ as it is very small; of the quadratic terms, only those proportional to the exchange constant $\sim \delta$ survive. We eliminate the variables $\epsilon$, $\beta$ by solving the Euler-Lagrange equations $\frac{d}{dt}\frac{\partial \mathcal{L}}{\partial \dot{\epsilon}} - \frac{\partial \mathcal{L}}{\partial \epsilon} = -\frac{\partial \mathcal{R}}{\partial\dot{\epsilon}} \approx 0$ and $\frac{d}{dt}\frac{\partial \mathcal{L}}{\partial \dot{\beta}} - \frac{\partial \mathcal{L}}{\partial \beta} = -\frac{\partial \mathcal{R}}{\partial\dot{\beta}} \approx 0$, giving:
\begin{eqnarray}
\epsilon &=& \frac{\mathcal{M}}{\delta } \sin\theta \left(H - \frac{\dot\phi}{\overline{\gamma}}\right) - \frac{\mathcal{M}h_y}{\delta}\cos\theta\cos\phi, \\
\beta \sin\theta &=& -\frac{\mathcal{M}}{\delta}\left(\frac{\dot\theta}{\overline{\gamma}} + h_y \sin\phi\right).
\end{eqnarray}
Substituting in \eqref{Lexpanded}-\eqref{Phiexpanded} and rearranging terms yields the effective Lagrangian from the article:
\begin{eqnarray}
\begin{aligned}
\label{Leff}
\mathcal{L}_{eff} &= \frac{\mathcal{M}^2}{2\delta}\Bigg[\left(\Big(\frac{\dot\phi}{\overline{\gamma}} - H\Big)\sin\theta + h_y\cos\theta\cos\phi \right)^2 + \left(\frac{\dot\theta}{\overline{\gamma}}
+ h_y\sin\phi\right)^2 \Bigg]
+ m \Big(H - \frac{\dot\phi}{\gamma_{ef}}\Big) \cos\theta \\&\quad + mh_y\sin\theta\cos\phi + K_U\sin^2\theta \sin^2\phi.
\end{aligned}
\end{eqnarray}
The equations of motion are now determined by Euler-Lagrange equations:
\begin{eqnarray}
\frac{d}{dt}\Big(\frac{\partial \mathcal{L}_{eff}}{\partial \dot\theta} \Big) - \frac{\partial \mathcal{L}_{eff}}{\partial \theta} + \frac{\partial \mathcal{R}}{\partial \dot\theta} = 0, \label{Eulag1}\\
\frac{d}{dt}\Big(\frac{\partial \mathcal{L}_{eff}}{\partial \dot\phi} \Big) - \frac{\partial \mathcal{L}_{eff}}{\partial \phi} + \frac{\partial \mathcal{R}}{\partial \dot\phi} = 0. \label{Eulag2}
\end{eqnarray}
We solve these equations and linearize them around the equilibrium (ground-state) values $\theta_0$ and $\phi_0$, which are found by minimizing (\ref{phi}), yielding $\phi_0 = \pi/2$ and $\theta_0$ depending on the ratio of external field to anisotropy (i.e. when $H_{ext} = 0$ we have $\theta_0 = \pi/2$, while $\theta_0 = 0$ when $H_{ext} \gg H_{anis}$). Linearizing around these values, i.e. $\theta = \theta_0 + \theta_l$ and $\phi = \phi_0 + \phi_l$ with $\theta_l, \phi_l \ll 1$, the first equation (\ref{Eulag1}) gives:
\begin{eqnarray}
\ddot{\theta}_l + \frac{\alpha\mathcal{M}\overline{\gamma}}{\chi_\perp}\dot{\theta}_l + \Big(-\overline{\gamma}^2H^2\cos2\theta_0 + \frac{m\overline{\gamma}^2H}{\chi_\perp}\cos\theta_0 - \frac{2K_U\overline{\gamma}^2}{\chi_\perp} \cos2\theta_0\Big)\theta_l
+ \Big(\overline{\gamma} H \sin2\theta_0 - \frac{m\overline{\gamma}^2}{\gamma_{ef}\chi_\perp}\sin\theta_0\Big)\dot\phi_l \nonumber\\ + \Big(-\overline{\gamma}^2Hh_y\cos2\theta_0 + \frac{m\overline{\gamma}^2h_y}{\chi_\perp}\cos\theta_0\Big)\phi_l
= - \overline{\gamma}\dot h_y + \frac{\overline{\gamma}^2H^2}{2}\sin2\theta_0 - \frac{m\overline{\gamma}^2H}{\chi_\perp}\sin\theta_0 + \frac{\overline{\gamma}^2K_U}{\chi_\perp}\sin2\theta_0.
\label{theta_eqn}
\end{eqnarray}
Here we introduced the notation $\chi_\perp \equiv \frac{\mathcal{M}^2}{\delta}$. Similarly, the second Euler-Lagrange equation (\ref{Eulag2}) gives:
\begin{eqnarray}
\ddot\phi_l + \frac{\alpha \mathcal{M}\overline{\gamma}}{\chi_\perp}\dot\phi_l + \phi_l\Big(-\overline{\gamma}\dot h_y\cot\theta_0+ \overline{\gamma}^2h_y^2+\frac{2K_U\overline{\gamma}^2}{\chi_\perp}\Big) +\dot\theta_l\Big(-2\overline{\gamma} H \cot\theta_0 + \frac{m\overline{\gamma}^2}{\gamma_{ef}\chi_\perp\sin\theta_0}\Big) \nonumber \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \\
+ \ \theta_l\Big(-2\overline{\gamma}\dot h_x\cot\theta_0 + \overline{\gamma}^2 Hh_y(1-\cot^2\theta_0)+ \frac{\overline{\gamma}^2mh_y}{\chi_\perp}\frac{\cos\theta_0}{\sin^2\theta_0}\Big)
= \overline{\gamma} \dot h_x + \overline{\gamma}^2 Hh_y\cot\theta_0 - \frac{m\overline{\gamma}^2h_y}{\chi_\perp}\frac{1}{\sin\theta_0}. \label{motion2}
\end{eqnarray}
These equations can be drastically simplified by noting that $\frac{1}{\chi_\perp}$ is proportional to the exchange constant and is therefore relatively large. The field-derivative term $\overline{\gamma} \dot{h}_i$ is also strong, while the terms proportional to $\overline{\gamma} h_{x,y}$ inside the brackets multiply the (small) response variables and are thus negligible. Equations (\ref{theta_eqn})-(\ref{motion2}) are then approximated by:
\begin{multline}
\label{thetasimple}
\ddot{\theta}_l + \frac{\alpha\mathcal{M}\overline{\gamma}}{\chi_\perp}\dot{\theta}_l + \Big(-\overline{\gamma}^2H^2\cos2\theta_0+\frac{m\overline{\gamma}^2H}{\chi_\perp}\cos\theta_0 - \frac{2K_U\overline{\gamma}^2}{\chi_\perp} \cos2\theta_0\Big)\theta_l +\Big(\overline{\gamma} H \sin2\theta_0 - \frac{m\overline{\gamma}^2}{\gamma_{ef}\chi_\perp}\sin\theta_0\Big)\dot\phi_l \\ = - \overline{\gamma}\dot h_y - \frac{m\overline{\gamma}^2H}{\chi_\perp}\sin\theta_0 + \frac{\overline{\gamma}^2K_U}{\chi_\perp}\sin2\theta_0,
\end{multline}
\begin{equation}
\ddot\phi_l + \frac{\alpha \mathcal{M}\overline{\gamma}}{\chi_\perp}\dot\phi_l +\frac{2K_U\overline{\gamma}^2}{\chi_\perp}\phi_l + \Big(-2\overline{\gamma} H \cot\theta_0 + \frac{m\overline{\gamma}^2}{\gamma_{ef}\chi_\perp\sin\theta_0}\Big)\dot\theta_l = \overline{\gamma} \dot h_x + \overline{\gamma}^2 Hh_y\cot\theta_0 - \frac{m\overline{\gamma}^2h_y}{\chi_\perp}\frac{1}{\sin\theta_0}.
\end{equation}
The large field derivatives of the THz field $\dot h_{x,y}$ appear as a dominant driving force in these equations of motion. Interestingly, only the $y$-component $\dot h_{y}$ appears in the equation of motion for $\theta_l$, which we use here to understand the qualitative difference in dependencies on THz pump polarization assuming the field-derivative driving force is dominant.
In the experiment we saturate the magnetization with the external field at a small angle $\theta_0\approx 0$ (thus $\theta_0 \approx \pi$ for $-\mathbf{H}_{ext}$). Given that $\phi_0 = \pi/2$, the modulations in the magnetization $z$-component are $M_z(t) = M\sin\phi\sin\theta \sim \pm \theta_l$ for $\pm\mathbf{H}_{ext}$. The experiment reveals we are only sensitive to $M_z(t)$, so the detectable Faraday rotation modulations should also be proportional to $\pm \theta_l(t)$. When $\mathbf{H}_{THz} \perp \mathbf{M}$, we have $h_y, \dot h_y \neq 0$ (while $h_x = \dot h_x = 0$) and thus a strong non-zero driving force in \eqref{thetasimple}, which explains why we see immediate strong oscillations in $M_z$. Because the driving term has the same sign for both external field polarities $\pm \mathbf{H}_{ext}$, the forced oscillations must be sensitive to the polarity of the external magnetic field: $\ddot M_z(t = 0) = \frac{d^2}{dt^2} \sin\big(\theta_0 + \theta_l(t)\big)\Bigr\rvert_{t = 0} \sim \pm \ddot\theta_l(t=0) \sim \mp \overline{\gamma} \dot h_y$ (as $\theta_0 = 0,$ $\pi$ for $\pm \mathbf{H}_{ext}$), i.e. this is an $\mathbf{H}$-odd effect. After the THz pulse has left the sample, the system of equations resembles that of a harmonic oscillator in $2$D, meaning the subsequent free oscillations have opposite phases for opposite polarities of the external magnetic field.
Meanwhile, by a similar argument, it is clear why a strong response is absent in $M_z$ when $\mathbf{H}_{THz} \parallel \mathbf{M}_{\parallel}$ ($\dot h_y = h_y = 0$): in this case only the equation of motion for the in-plane dynamics $\phi_l(t)$ (Eq. \eqref{motion2}), to which we are not sensitive, has an initial non-zero driving force $\overline{\gamma} \dot h_x$, while $\theta_l(t)$ does not. Detectable oscillations in $\theta_l(t)$ are instead only driven by cross-terms like $-\frac{m\overline{\gamma}^2}{\gamma_{ef}\chi_\perp}\sin\theta_0\,\dot\phi_l$ (Eq. \eqref{thetasimple}). Here it is important that the ground state $\theta_0$ is not exactly equal to $0$ or $\pi$, i.e. $\sin\theta_0 = \pm \rho$ for $\pm \mathbf{H}_{ext}$ with a small constant $\rho > 0$ (due to the experimental canting of the external field; otherwise no dynamics would be observed in this case, consistent with both the experiment and the simulations). Thus, for opposite external field polarities, the driving force in $\theta_l$ has opposite sign, $\mp \frac{m\overline{\gamma}^2}{\gamma_{ef}\chi_\perp}\rho\dot\phi_l$, contrary to the previous case. This means that the subsequent oscillations are expected to be even with $\mathbf{H}_{ext}$, in accordance with what was seen experimentally. Because these field-even oscillations are a secondary result of the primary in-plane oscillations $\phi(t)$, this also explains why the observed effects are relatively weak for $\mathbf{H}_{THz} \parallel \mathbf{M}_{\parallel}$ compared to $\mathbf{H}_{THz} \perp \mathbf{M}$.
The eigenfrequencies in the article have been found by solving the coupled set of equations:
\begin{eqnarray}
\label{motiontheta}
\ddot{\theta}_l + \frac{2K_U\overline{\gamma}^2}{\chi_\perp}\theta_l - \frac{m\overline{\gamma}^2}{\gamma_{ef}\chi_\perp}\dot\phi_l &=& 0, \\
\ddot\phi_l +\frac{2K_U\overline{\gamma}^2}{\chi_\perp}\phi_l + \frac{m\overline{\gamma}^2}{\gamma_{ef}\chi_\perp}\dot\theta_l &=& 0.
\end{eqnarray}
Assuming $\theta_l, \phi_l \sim \exp(i\omega t)$ the frequencies can be solved by the equation
\begin{equation}
\begin{vmatrix}
-\omega^2 + \omega_K^2 & -i \omega \omega_{ex} \\
i \omega \omega_{ex} & -\omega^2 + \omega_{K}^2
\end{vmatrix} = 0,
\end{equation}
where $\omega_K^2 = 2K_U\overline{\gamma}^2/\chi_\perp$ and $\omega_{ex} = m\overline{\gamma}^2/(\gamma_{ef}\chi_\perp)$. In the case of weak anisotropy, $\omega_{ex} \gg \omega_K$, we obtain:
\begin{equation}
\omega = \pm \frac{\omega_{ex}}{2} \pm \sqrt{\frac{\omega_{ex}^2}{4} + \omega_K^2} \approx \pm \frac{\omega_{ex}}{2} \pm (\frac{\omega_{ex}}{2} + \omega_K^2/\omega_{ex}).
\end{equation}
Thus we obtain two approximate absolute frequencies:
\begin{eqnarray}
\omega_1 &=& \omega_K^2/\omega_{ex} = \gamma_{ef} \frac{2K_U}{m},\\
\omega_2 &\approx& \omega_{ex} = \frac{m\overline{\gamma}^2}{\gamma_{ef}\chi_\perp} \approx |\Lambda|(|\gamma_{R}|M_{Fe} - |\gamma_{Fe}|M_{R})
\end{eqnarray}
where in the last approximation we used $M_{Fe}M_{R} \approx (M_{Fe} + M_R)^2/4$ and $(M_{Fe}/\gamma_{Fe} + M_{R}/\gamma_{R})^2 \approx (M_{Fe}+M_{R})^2/(\gamma_{Fe}\gamma_R)$ to recover the approximate Kaplan-Kittel expression for the exchange resonance.
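As a quick numerical sanity check of this approximation, one can solve the quartic determinant condition directly; the values of $\omega_{ex}$ and $\omega_K$ below are illustrative assumptions, not fitted quantities:
\begin{verbatim}
# The determinant condition expands to w^4 - (2 wK^2 + wex^2) w^2 + wK^4 = 0.
import numpy as np

wex, wK = 2 * np.pi * 380e9, 2 * np.pi * 15e9    # rad/s, assumed values
roots = np.roots([1.0, 0.0, -(2 * wK**2 + wex**2), 0.0, wK**4])
ghz = np.unique(np.round(np.abs(roots) / (2 * np.pi * 1e9), 2))
print("exact  |w|/2pi:", ghz, "GHz")
print("approx w1:", wK**2 / wex / (2 * np.pi * 1e9), "GHz")
print("approx w2:", (wex + wK**2 / wex) / (2 * np.pi * 1e9), "GHz")
\end{verbatim}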
\bibliographystyle{ieeetr}
Electrification is considered a major development factor in modern societies. However, even though 84\% of the world population had access to electricity in 2016, this figure hides significant disparities: first, between countries, as this percentage dropped to 19\% for sub-Saharan Africa in the same year; then, between areas, as the vast majority of people without electricity access (around 80\%) lived in rural zones worldwide \cite{WEO_IEA}.\\
Rural electrification in developing countries is a significant challenge. As a matter of fact, the traditional extension of the centralized grid may be inefficient in this context for reasons such as capital scarcity, remoteness and lack of reliability \cite{schnitzer_microgrids_????}.
Autonomous microgrids thus offer an efficient alternative for rural electrification as they are less capital-intensive and offer better reliability thanks to distributed energy resources.\\
Autonomous microgrid planning consists in making investment decisions concerning an isolated microgrid on a predefined planning horizon. The isolated (or autonomous) character of such power systems implies that sufficient generation capacity should be placed to meet the total demand occurring in the system. Considering a set of $n$ nodes representing future consumption points to electrify and consumption profiles for these nodes, the problem consists in answering the following questions in such a way as to minimize the total cost (OPEX and CAPEX) of the system on the planning horizon:
\begin{itemize}
\item Which distribution and generation assets should be placed?
\item Where to place them?
\item When to place them?
\end{itemize}
The problem is inherently uncertain as consumption profiles are forecasts subject to errors. Furthermore, if RES-based generators are considered, their power output is also uncertain. Finally, uncertainty also arises from microgrid components (lines, generators) subject to contingencies. \\In this paper, we present a robust approach to autonomous microgrid planning considering load consumption uncertainty. This paper extends our previous work on a deterministic formulation of this problem \cite{martin_comparison_????} with the computation of worst-case operating scenarios as developed in \cite{capitanescu_computation_2013}.\\
This paper is organized as follows: the deterministic formulation previously developed is presented in section \ref{DetFor}, section \ref{UncMod} describes the uncertainty model that is used and section \ref{ProbScen} presents the approach used to compute problematic operating scenarios. We present the results in section \ref{Res} before concluding in section \ref{Concl}.
\section{Deterministic formulation of autonomous microgrid planning}
\label{DetFor}
Autonomous microgrid planning is a high-dimensional problem with many discrete investment decision variables. Firstly, both generation and distribution capacity have to be installed; we thus consider them simultaneously in the planning problem, as sequential planning of these two elements would be suboptimal. Secondly, autonomous microgrid planning differs from traditional expansion planning in that microgrids are often built from scratch in this context, which requires many decision variables. The joint consideration of a large number of discrete decision variables is known to lead to combinatorial explosion. Another salient feature of the problem is its non-convexity, which is caused by the power flow equations.\\
We presented in a previous work \cite{martin_comparison_????} four different convex formulations of the planning problem that could be cast as mixed-integer convex programs. The convexity of these formulations allows a single global optimum to be reached. The first three formulations are linear approximations of the non-convex problem, while the last one is a second-order cone relaxation of the original problem. In the latter case, the objective value corresponding to the global optimum of the relaxation is thus a lower bound on the optimal objective value of the non-convex problem. For this work, Benders decomposition has been successfully used to manage the dimensionality of the integer part of the problem.
Our goal is to take investment decisions (lines, generators) and operational decisions (production of generators) in order to minimize the total net present value of the system over the planning horizon. Investment decisions are taken once a year while operational decisions are taken once an hour. We simulate a limited number of representative days per year.
\noindent We consider a set of $n$ nodes representing consumption points to electrify. Data for these nodes include location (coordinates) and hourly consumption profiles for two typical days of the year. Consumptions are assumed to increase at a uniform rate each year, triggering the need for reinforcements. Available investment options consist of diesel generator sets with a linear operational cost function and overhead cables. We consider a unique size available for the lines and the generators. However, while there may be at most one generator installed at a node $i$, there may be several lines placed in parallel between nodes $i$ and $j$, which is equivalent to a bigger line.
\noindent The objective is to minimize the NPV of the system, which is the sum of discounted yearly cash flows (CAPEX and OPEX). In these expressions, ${\omega}_{ijy}$ is a binary variable equal to 1 if there is at least one conductor between nodes $i$ and $j$ at year $y$, ${\gamma}_{ijy}$ is the number of lines in parallel between nodes $i$ and $j$ at year $y$, ${\sigma}_{iy}$ is a binary variable equal to 1 if there is a generator of fixed size at node $i$ at year $y$, and $P_{Git}$ is the active power produced at node $i$ at period $t$. The parameters are the following: $C_{cond}$ is the cost of a single conductor (\$/km), $C_{pole}$ is the cost of poles (\$/km) (a unique pole is required regardless of the number of conductors), $D_{ij}$ is the distance between nodes $i$ and $j$ (km), $C_{Gen}$ is the fixed cost for installing a generator, $a$ and $b$ are the parameters of the generator linear cost function and $ra$ is the discount rate. As we only simulate a limited number of days per year, we multiply the fuel costs for these days by a suitable scaling factor $H$ to represent the yearly operation cost.
\begin{flalign}
&CAPEX_{Dist,y}=\sum_{\left(i,j\right)}{\left({\gamma }_{ijy}-{\gamma }_{ijy-1}\right){D_{ij}C}_{cond}}&
\end{flalign}
\begin{flalign*}
+\left({\omega }_{ijy}-{\omega }_{ijy-1}\right){D_{ij}C}_{pole}&
\end{flalign*}
\begin{flalign}
&CAPEX_{Gen,y}=\sum_i{\left({\sigma}_{iy}-{\sigma}_{iy-1}\right)C_{Gen}}&
\end{flalign}
\begin{flalign}
&OPEX_y=H\sum_i{\sum_{t\in y}{(a{\sigma}_{iy}+bP_{Git}})}&
\end{flalign}
\begin{flalign}
&NPV=\sum^Y_{y=1}{\frac{1}{{\left(1+ra\right)}^y}[CAPEX_{Dist,y}+CAPEX_{Gen,y}+OPEX_y]}&
\end{flalign}
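For illustration, the discounted objective of Eq. (4) amounts to the following computation; this is a minimal sketch with placeholder cash flows, not outputs of the optimization model:
\begin{verbatim}
# Sketch of the NPV objective (Eq. (4)); yearly cash flows are placeholders.
def npv(capex_dist, capex_gen, opex, ra=0.10):
    return sum((cd + cg + op) / (1.0 + ra) ** y
               for y, (cd, cg, op) in
               enumerate(zip(capex_dist, capex_gen, opex), start=1))

print(npv(capex_dist=[50e3, 0.0, 10e3],
          capex_gen=[20e3, 0.0, 20e3],
          opex=[8e3, 8.5e3, 9.2e3]))
\end{verbatim}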
Eq. (5) expresses the fact that ${\gamma}_{ijy}$ should not exceed the maximal number of parallel lines $\overline{\xi}$ and that investments are permanent (i.e. cannot be unmade in following years). Eq. (6) similarly states that investments in generation cannot be unmade, as ${\sigma}_{iy}$ must be non-decreasing through time. Eq. (7) simply expresses the symmetry of the problem regarding lines.
\begin{flalign}
\gamma_{ijy-1}\le {\gamma }_{ijy}\le \overline{\xi}&
\end{flalign}
\begin{flalign}
{\sigma }_{iy-1}\le {\sigma }_{iy} &
\end{flalign}
\begin{flalign}
\omega_{ijy}=\omega_{jiy}, \gamma_{ijy}=\gamma_{jiy}&
\end{flalign}
Eqs. (8) to (11) force the network to be at least radial (while allowing it to be meshed), where $n$ is the number of nodes and $f_{jiy}$ is a fictitious flow only used to ensure connectivity of the network. The idea behind these constraints is to have a fictitious source supplying $(n-1)$ units at node 1 and fictitious sinks at the other nodes that each consume one unit. As there is only one source, eqs. (8) to (11) ensure that there is no island in the network; a toy implementation of these constraints is sketched after the equations.
\begin{flalign}
\sum_{\left(i,j\right)\ }{{\omega }_{ijy}\ge 2\times \left(n-1\right)}&
\end{flalign}
\begin{flalign}
f_{ijy}\le {\omega }_{ijy}\times n &
\end{flalign}
\begin{flalign}
\sum_{(i,j)}{f_{1jy}=n-1} &
\end{flalign}
\begin{flalign}
\sum_{(i,j)}{f_{jiy}=1+\ }\ \sum_{(i,j)}{f_{ijy}} &
\end{flalign}
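To make the fictitious-flow idea tangible, the toy sketch referred to above implements eqs. (7)-(11) for a single year with PuLP; the four-node instance and the line-count objective are assumptions for illustration only:
\begin{verbatim}
# Toy sketch of the connectivity constraints (Eqs. (7)-(11)), one year.
import pulp

n = 4
arcs = [(i, j) for i in range(n) for j in range(n) if i != j]
prob = pulp.LpProblem("connectivity", pulp.LpMinimize)
w = pulp.LpVariable.dicts("omega", arcs, cat="Binary")  # arc built
f = pulp.LpVariable.dicts("flow", arcs, lowBound=0)     # fictitious flow

prob += pulp.lpSum(w[a] for a in arcs)                  # toy objective
prob += pulp.lpSum(w[a] for a in arcs) >= 2 * (n - 1)   # Eq. (8)
for (i, j) in arcs:
    prob += f[(i, j)] <= n * w[(i, j)]                  # Eq. (9)
    prob += w[(i, j)] == w[(j, i)]                      # Eq. (7)
prob += pulp.lpSum(f[(0, j)] for j in range(1, n)) == n - 1   # Eq. (10)
for i in range(1, n):                                   # Eq. (11)
    prob += (pulp.lpSum(f[(j, i)] for j in range(n) if j != i)
             == 1 + pulp.lpSum(f[(i, j)] for j in range(n) if j != i))
prob.solve(pulp.PULP_CBC_CMD(msg=False))
print([a for a in arcs if w[a].value() > 0.5])          # connected tree
\end{verbatim}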
Eq. (12) ensures that the active power produced at node $i$ is smaller than the generation capacity installed at this node and larger than the technical minimum. Eq. (13) represents the reactive capabilities of generators, with ${\mathrm{cos} \left(\mathrm{\Phi }\right)\ }$ being the minimal power factor (capacitive or inductive) of the generation units.
\begin{flalign}
{\sigma \ }_{iy}\ \underline{P}\ \le P_{Git}\le {\sigma }_{iy}\ \overline{P} &
\end{flalign}
\begin{flalign*}
-P_{Git}\times \mathrm{tan}\mathrm{}({{\mathrm{cos}}^{-1} \left({\mathrm{cos} \left(\mathrm{\Phi }\right)\ }\right)\ }\le Q_{Git}&
\end{flalign*}
\begin{flalign}
{\ \ Q}_{Git}\le P_{Git}\times \mathrm{tan}\mathrm{}({{\mathrm{cos}}^{-1} \left({\mathrm{cos} \left(\mathrm{\Phi }\right)\ }\right)\ } &
\end{flalign}
Eqs. (14) and (15) represent the active and reactive nodal power balance respectively, $P_{Cit}$ and $Q_{Cit}$ representing active and reactive power consumptions at node \textit{i }at period \textit{t }while$\ p_{ijt}$ and $q_{ijt}\ $represent active and reactive power flows from \textit{i }to \textit{j} at period \textit{t}.
\begin{flalign}
P_{Git}-P_{Cit}=\sum_{\left(i,j\right)\ }{p_{ijt}} &
\label{active_balance}
\end{flalign}
\begin{flalign}
Q_{Git}-Q_{Cit}=\sum_{(i,j)\ }{q_{ijt}} &
\label{reactive_balance}
\end{flalign}
\noindent We define binary variables $loi_{ijky}$ equal to 1 if the number of parallel lines between $i$ and $j$ is greater than or equal to $k$, and zero otherwise, which is expressed by eqs. (16) and (17). These variables are used to write constraints (18) and (21) for each possible level of investment in lines, such that we avoid bilinear terms.
\begin{flalign}
\sum^{\overline{\xi}}_{k=1}{loi_{ijky}={\gamma}_{ijy}}&
\end{flalign}
\begin{flalign}
{\omega}_{ijy}=loi_{ijy1}&
\end{flalign}
The last constraints express the physics of power flows. $\mathrm{\Psi }_{ijt}$ represents the squared amplitude of the line current and $\nu_{it}$ represents the squared voltage amplitude. Eq. (18) expresses active losses and is written such that the only active constraint is the one corresponding to the actual number of parallel lines between $i$ and $j$ (i.e. to the unique $k$ such that $loi_{ij,k,y}=1$ and $loi_{ij,k+1,y}=0$), in order to avoid bilinear terms. The reactive losses on the line are obtained similarly by replacing $p_{ijt}$ and $r$ by $q_{ijt}$ and $x$ respectively in Eq. (18). Parameters $r$ and $x$ are the line resistance and reactance per unit length and $M_1$ is a large enough constant. Finally, eq. (19) forces active and reactive losses to be positive on every line. While redundant, these constraints considerably tighten the resulting model.
\noindent
\begin{flalign*}
-\left(1-(loi_{ijky}-loi_{ijk+1y})\right)M_1&
\end{flalign*}
\begin{flalign*}
\le p_{ijt}+p_{jit}-\frac{rD_{ij}}{k}{\mathrm{\Psi }}_{ijt}&
\end{flalign*}
\begin{flalign}
\le \left(1-(loi_{ijky}-loi_{ijk+1y})\right)M_1&
\end{flalign}
\begin{flalign}
p_{ijt}+p_{jit}\ge 0,\ q_{ijt}+q_{jit}\ge 0&
\end{flalign}
Eq. (20) expresses the fact that power flowing in a line is the product of node voltage and line current. It is relaxed as an inequality and has the form of a (convex) rotated second order cone constraint.
\noindent
\begin{flalign}
p^2_{ijt}+q^2_{ijt}\le {\mathrm{\Psi }}_{ijt}{\nu }_{it}&
\end{flalign}
Eq. (21) expresses voltage drops and is written in a way similar to (18). Eq. (22) expresses nodal voltage bounds.
\begin{flalign*}
-\left(1-(loi_{ijky}-loi_{ijk+1y})\right)M_2&
\end{flalign*}
\begin{flalign*}
\le {\nu }_{jt}-{\nu }_{it}+2D_{ij}(\frac{r}{k}p_{ijt}+\frac{x}{k}q_{ijt}-\frac{D_{ij}}{k^2}\left(r^2+x^2\right){\mathrm{\Psi }}_{ijt})
\end{flalign*}
\begin{flalign}
\le \left(1-(loi_{ijky}-loi_{ijk+1y})\right)M_2&
\end{flalign}
\begin{flalign}
{\underline{v}}^2\le {\nu }_{it}\le {\overline{v}}^2&
\end{flalign}
Eq. (23) is the line thermal rating constraint. It is a SOC constraint.
\begin{flalign}
p^2_{ijt}+q^2_{ijt}\le {\gamma }^2_{ijy}{\overline{S}}^2&
\end{flalign}
Finally, as proposed in [4], we introduce lower and upper bounds on voltage angle differences even though this formulation doesn't include angles explicitly. As a matter of fact, these constraints significantly tighten the model. For brevity, we only present the general form of these constraints using the set of available variables. The bilinear term ${\gamma}_{ijy}{\nu}_{it}$ can be replaced by an appropriate lift-and-project relaxation and ``big M'' constraints similar to (18) and (21). The parameter ${\theta}^{\mathrm{\Delta}}$ is the maximum angle difference allowed between two nodes.
\begin{flalign*}
rD_{ij}\left(q_{ijt}+{\mathrm{tan} \left({\theta }^{\mathrm{\Delta }}\ \right)\ }p_{ijt}\right)+xD_{ij}\left({\mathrm{tan} \left({\theta }^{\mathrm{\Delta }}\ \right)\ }q_{ijt}-p_{ijt}\right)&
\end{flalign*}
\begin{flalign}
\le {\mathrm{tan} \left({\theta }^{\mathrm{\Delta }}\ \right)\ }{\nu }_{it}{\gamma }_{ijy}&
\end{flalign}
\begin{flalign*}
xD_{ij}\left(p_{ijt}+{\mathrm{tan} \left({\theta }^{\mathrm{\Delta }}\ \right)\ }q_{ijt}\right)+rD_{ij}\left({\mathrm{tan} \left({\theta }^{\mathrm{\Delta }}\ \right)\ }p_{ijt}-q_{ijt}\right)&
\end{flalign*}
\begin{flalign}
\le {\mathrm{tan} \left({\theta }^{\mathrm{\Delta }}\ \right)\ }{\nu }_{it}{\gamma }_{ijy}&
\end{flalign}
\noindent This model is computationally intractable in its Mixed-Integer Second-Order Cone (MISOC) form. We thus apply the Ben-Tal Nemirovski (BTN) relaxation [7] to the SOC constraints (Eqs. (20) and (23)). It consists in replacing the second-order cones by cutting planes in an efficient way, with arbitrary accuracy. This formulation is named MISOC BTN hereafter.
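The idea of outer-approximating a second-order cone by linear inequalities can be illustrated with a naive tangent-cut sketch; note that the genuine BTN construction achieves the same accuracy with far fewer inequalities through auxiliary variables:
\begin{verbatim}
# Naive polyhedral outer approximation of p^2 + q^2 <= S^2 by tangent cuts;
# illustrative only, the real BTN construction is much more economical.
import numpy as np

def tangent_cuts(S, n_cuts=16):
    """Half-planes cos(a)*p + sin(a)*q <= S circumscribing the disc."""
    angles = np.linspace(0.0, 2.0 * np.pi, n_cuts, endpoint=False)
    return [(np.cos(a), np.sin(a), S) for a in angles]

cuts = tangent_cuts(S=1.0)
p, q = 0.8, 0.55                       # test point with |(p, q)| < 1
print(all(cp * p + sq * q <= s for cp, sq, s in cuts))   # True
\end{verbatim}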
\section{Modeling of uncertainty in autonomous microgrid planning}
\label{UncMod}
Three sources of uncertainty can be distinguished in the autonomous microgrid planning problem: load forecast errors, RES-based generation forecast errors and contingencies. In this paper, we only consider load uncertainty, which is modelled with a rectangular uncertainty set $\Omega =\{\boldsymbol{\omega}\in \mathbb{R}^{n^{\Omega}} \; :\omega_i \in [\omega_i^L;\omega_i^U] \; \forall i \in 1,...,n^{\Omega}\}$. This means that we only consider the interval in which random load consumptions may vary, without making any assumption about the distribution of these random variables.\newline
By considering uncertainty in the problem formulation, the aim is to build a microgrid satisfying all constraints defined in the previous section not only for a single scenario, e.g. the most likely one, but for every possible realization of the random variables, i.e. every $\boldsymbol{\omega}\in\Omega$. Autonomous microgrid planning thus becomes a robust optimization (RO) problem.\newline However, such problems are difficult to solve and are generally NP-hard. Indeed, considering continuous random variables potentially leads to an infinite uncertainty space, which in turn leads to an infinite number of constraints in the RO problem \cite{calafiore_scenario_2006}. In \cite{calafiore_scenario_2006}, the authors propose a finite constraint sampling scheme to overcome this problem. They show that the probability of constraint violation rapidly decreases with the number of samples. They also provide an upper bound on the number of samples needed to obtain a predefined level of confidence concerning constraint enforcement, which allows the problem to be solved efficiently to arbitrary accuracy. In \cite{margellos_road_2014} and \cite{venzke_convex_2017}, the authors propose another method to reduce the set of constraints to a finite size. They show that for a problem with a polytopic uncertainty set $\Omega$ and convex constraints of the form $g(x)\leq 0$, the body of these constraints is always maximal on the vertices of $\Omega$. To enforce such constraints for all $\boldsymbol{\omega}\in\Omega$, it is thus sufficient to enforce them on every vertex of $\Omega$. Nonetheless, even in the simple case where $\Omega$ is a rectangular set, the number of vertices is equal to $2^{n^{\Omega}}$, which rapidly becomes intractable as $n^{\Omega}$ grows.\newline
Consequently, we adopt the approach developed in \cite{capitanescu_computation_2013}, which has been used for security planning under uncertainty in transmission networks \cite{capitanescu_cautious_2012}. This approach consists in computing a subset of the vertices of $\Omega$, i.e. a set of scenarios to incorporate in the RO problem, sufficient to guarantee constraint enforcement on the whole uncertainty set $\Omega$. The approach, described in section \ref{ProbScen}, is based on the successive and iterative computation of an adversarial problem, where the infeasibility (i.e. violation) of the constraints is maximized in order to find problematic scenarios to add to the RO problem, and a corrective problem, where we try to remove these infeasibilities thanks to remedial actions. In this paper, we consider two sorts of infeasibilities: insufficient generation capacity and line thermal rating violation.
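The combinatorial growth of the vertex set is easy to see in a short sketch (the box bounds below are arbitrary placeholders):
\begin{verbatim}
# Vertex enumeration of a rectangular uncertainty set grows as 2^n.
from itertools import product

def vertices(bounds):
    """All corner points of a box given as a list of (lo, hi) pairs."""
    return list(product(*bounds))

print(len(vertices([(0.9, 1.1)] * 10)))   # 1024 vertices for 10 loads
\end{verbatim}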
\section{Determination of problematic scenarios}
\label{ProbScen}
The scenario generation algorithm developed in \cite{capitanescu_cautious_2012} can be summarized as follows, $\mathcal{S}$ and $\mathcal{PS}$ being the set of scenarios to consider in the RO problem and the current set of problematic scenarios respectively; a schematic sketch of the overall loop is given after the list. All these steps are described in the following subsections.
\begin{enumerate}
\item Initialize $\mathcal{S}$ with the scenario corresponding to the deterministic case
\item Unfix investment variables, solve the main problem on $\mathcal{S}$ and then fix investment variables to their current optimal values
\item Reinitialize $\mathcal{PS} \leftarrow \emptyset $ and solve the adversarial problem to compute the current set $\mathcal{PS}$
\item Solve the corrective problem $\forall s^* \in \mathcal{PS}$. If there are no more infeasibilities for $s^*$, then it is not a problematic scenario: $\mathcal{PS} \leftarrow \mathcal{PS}\setminus \{s^*\}$
\item If $\mathcal{PS} = \emptyset$, END. Else, update $\mathcal{S}\leftarrow\mathcal{S} \cup \mathcal{PS}$ and go back to step 2).
\end{enumerate}
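The overall loop can be sketched as follows; the three solver callbacks are placeholders wrapping the main, adversarial and corrective problems described in the next subsections:
\begin{verbatim}
# Skeleton of the scenario-generation loop; the solver callbacks are
# placeholders for the three optimization problems of this section.
def robust_planning(det_scenario, solve_main, solve_adversarial,
                    solve_corrective, max_iter=20):
    S = [det_scenario]                       # step 1: deterministic scenario
    for _ in range(max_iter):
        investment = solve_main(S)           # step 2: fix investments
        PS = solve_adversarial(investment)   # step 3: problematic scenarios
        PS = [s for s in PS                  # step 4: keep true violations;
              if not solve_corrective(investment, s)]  # True = made feasible
        if not PS:                           # step 5: converged
            return investment, S
        S.extend(PS)
    raise RuntimeError("no convergence within max_iter")
\end{verbatim}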
We now express active and reactive consumptions as random variables $\tilde{P}_{cit}$ and $\tilde{Q}_{cit}$ in the adversarial problem while they were parameters in the deterministic formulation of section \ref{DetFor}. We thus have two uncertainty sets: $\Omega^{P}$ and $\Omega^{Q}$ for active and reactive power consumptions respectively. $n^{\Omega}$ is equal to $n \times T$ as there is a power consumption forecast for every node and every timestep of the planning horizon. A scenario $s$ thus consists of two matrices $\mathbf{\tilde{P}_{c}}$ and $\mathbf{\tilde{Q}_{c}}$ $\in \mathbb{R}^{n\times T}$ corresponding to a particular realization of random variables.
\subsection{Main problem}
The main problem is the deterministic problem described in section \ref{DetFor} with the following differences: the operational variables $P_{Gits},Q_{Gits},p_{ijts},q_{ijts}, \Psi_{ijts}$ and $\nu_{its}$ and the operational constraints (12)-(15) and (18)-(25) are now also indexed on the scenario set $\mathcal{S}$, and the total OPEX is now the expected value of the OPEX over all scenarios, considering them all equiprobable.\newline It is thus a deterministic problem where operational constraints are replicated for each $s \in \mathcal{S}$. This problem is solved at each iteration of the algorithm. It should be emphasized that, while operational variables may now be adapted for each scenario, investment variables remain common to every scenario in order to find a unique investment plan suitable for the whole set $\mathcal{S}$.
\subsection{Adversarial problem}
The goal of the adversarial problem is to maximize the infeasibility. In this problem, we consider the investment variables $\gamma_{ijt},\omega_{ijt},loi_{ijkt}$ and $\sigma_{it}$ as fixed since we want to evaluate the current investment solution obtained by solving the main problem at the current iteration. As mentioned in the previous section, we consider two types of infeasibilities: lack of generation capacity and line thermal rating violation. We consider them separately in consecutive problems.
\paragraph{Generation infeasibility}
In this subproblem, we look for the random variable values that maximize the generation infeasibility. To this end, we rewrite constraints (\ref{active_balance}) and (\ref{reactive_balance}) by including active and reactive power shedding, respectively, defined such that $P_{shed,it}\geq 0$ and $Q_{shed,it}\geq 0$.
\begin{flalign}
P_{Git}-\tilde{P}_{cit}+P_{shed,it}=\sum_{(i,j)}{p_{ijt}} &
\label{active_balance_var}
\end{flalign}
\begin{flalign}
Q_{Git}-\tilde{Q}_{cit}+Q_{shed,it}=\sum_{(i,j)}{q_{ijt}} &
\label{reactive_balance_var}
\end{flalign}
The adversarial subproblem corresponding to generation infeasibility is then written as follows:
\begin{flalign*}
\max_{\mathbf{\tilde{P}_{c}} \in \Omega^{P},\mathbf{\tilde{Q}_{c}} \in \Omega^{Q},P_{Git},Q_{Git}}\sum_{t=1}^T\sum_{i=1}^n A^{Gen}_{it}(P_{shed,it}+Q_{shed,it}) &\\
s.t. (12)-(13),(18)-(27)
\label{max_gen_infeas}
\end{flalign*}
$\mathbf{A^{Gen}}$ $\in \ \{0,1 \}^{n\times T}$ is simply a matrix that controls the indices $i$ and $t$ we want to include in the generation infeasibility maximization objective.
\paragraph{Line thermal rating infeasibility}
We now look for the random variable values that maximize the line thermal rating infeasibility. To this end, we remove constraint (23) for all indices and consider it in the objective. The corresponding adversarial subproblem is then:
\begin{flalign*}
\max_{\mathbf{\tilde{P}_{c}} \in \Omega^{P},\mathbf{\tilde{Q}_{c}} \in \Omega^{Q},P_{Git},Q_{Git}}\sum_{t=1}^T\sum_{ij} A^{Therm}_{ijt}(p_{ijt}^2+q_{ijt}^2-\gamma_{ijt}^2\overline{S}^2) &\\
s.t. (12)-(15),(18)-(22),(24)-(25)
\end{flalign*}
Similarly to the generation infeasibility adversarial problem, $\mathbf{A^{Therm}}$ $\in \ \{0,1 \}^{n\times n \times T}$ controls the indices $(i,j)$ and $t$ we want to include in the line thermal rating infeasibility maximization objective.
\subsection{Corrective problem}
In case the adversarial problem finds a problematic scenario, i.e. a scenario for which the objective of the adversarial problem is strictly positive, we now try to find corrective actions that can relieve the constraint violations previously maximized. The random variables are fixed (i.e. we fix the scenario) and we look for active/reactive generation setpoints so as to minimize constraint violations.
\paragraph{Corrective problem for generation infeasibility}
The problem is written as follows:
\begin{flalign*}
\min_{P_{Git},Q_{Git}}\sum_{t=1}^T\sum_{i=1}^n P_{shed,it}+Q_{shed,it} &\\
s.t. (12)-(13),(18)-(27)
\end{flalign*}
\paragraph{Corrective problem for line thermal rating infeasibility}
Eq. (23) is rewritten with a slack term $\delta_{ijt}\geq 0$ that represents a potential line thermal rating violation. It has to be noted that $\gamma_{ijt}$ is fixed in this problem; the introduction of $\delta_{ijt}$ in (23) thus does not remove its convexity. The problem is written as follows:
\begin{flalign*}
\min_{P_{Git},Q_{Git}}\sum_{t=1}^T\sum_{ij}\delta_{ijt}^2 \\
s.t. (12)-(15),(18)-(22),(24)-(25)&\\
p^2_{ijt}+q^2_{ijt}\le {\gamma }^2_{ijt}{\overline{S}}^2+\delta_{ijt}^2
\end{flalign*}
\subsection{Robust planning}
We present the whole algorithm for robust planning in the following flowchart (Fig.~\ref{flowchart}). For the sake of clarity, we only describe the steps of the algorithm related to generation infeasibility. However, at every iteration of the algorithm, exactly equivalent steps are performed in parallel regarding line thermal rating infeasibility. Consequently, at every iteration, scenarios producing generation infeasibilities as well as scenarios causing line rating infeasibility are added to $\mathcal{S}$.
\tikzstyle{decision} = [diamond, draw,
text width=6em, text badly centered, node distance=3cm, inner sep=0pt]
\tikzstyle{block} = [rectangle, draw,
minimum width=9em, text centered, rounded corners, minimum height=4em]
\tikzstyle{block2} = [rectangle, draw, fill=yellow!20,
text width=9em, text centered, rounded corners, minimum height=4em]
\tikzstyle{line} = [draw, -latex']
\tikzstyle{cloud} = [draw, ellipse,fill=red!20, node distance=3cm,
minimum height=4em]
\begin{figure}[!h]
\scalebox{0.65}{
\centering
\begin{tikzpicture}[node distance = 2cm, auto, scale=0.6]
\node [block,yshift=-5cm] (node1) {\begin{tabular}{l} $\mathcal{S}\leftarrow \{\text{Deterministic scenario} \}$ \\ $\mathcal{PS}\leftarrow \emptyset$ \end{tabular}};
\node [block, below of = node1] (node2) {\begin{tabular}{l} Unfix $\gamma,\omega,loi,\sigma$ \\ Solve main problem on $\mathcal{S}$\end{tabular}};
\node [block, below of = node2] (node3) {\begin{tabular}{l} Fix $\gamma,\omega,loi,\sigma$\\$\mathcal{PS}\leftarrow \emptyset$ \end{tabular}};
\node [block, below of = node3] (node4) {\begin{tabular}{l} $\forall (i^*,t^*), i^*\in \{1,...,n\}, t^* \in \{1,...,T\}$\\ $A_{it}=1$ if $(i,t)=(i^*,t^*)$ and $0$ otherwise \end{tabular}};
\node [block, below of = node4] (node5) {\begin{tabular}{l} Solve generation adversarial problem\\Identify the set of violated constraints $\mathcal{VC}$\\$\mathcal{VC}=\{(i,t): Q_{shed,it}+P_{shed,it}> 0\}$ \end{tabular}};
\node [block, below of = node5, yshift=-4.3cm] (node7) {\begin{tabular}{l} Fix $\tilde{P}_c$ and $\tilde{Q}_c$ and let $s^*=(\tilde{P}_c,\tilde{Q}_c)$\\Solve generation corrective problem \end{tabular}};
\node [block, right of = node7, xshift=2.5cm,yshift=3.7cm] (node6) {\begin{tabular}{l}$A_{it}=1$ if $(i,t)\in \mathcal{VC}$\\ and $0$ otherwise \end{tabular}};
\node [block, below of = node6] (node6bis) {\begin{tabular}{l}Solve generation \\adversarial problem \end{tabular}};
\node [block,below of = node7,xshift=2cm] (node8) {$\mathcal{PS} \leftarrow \mathcal{PS}\cup \{ s^*\}$};
\node [decision,below of = node8,xshift=-2cm] (node9) {All constraints considered?};
\node [decision,below of = node9] (node10) {$\mathcal{PS} = \emptyset$?};
\node [block,left of = node10,xshift=-2.5cm] (node12) {$\mathcal{S} \leftarrow \mathcal{S} \cup \mathcal{PS}$};
\node [block,right of = node10,xshift=2cm] (node11) {END};
\path [line] (node1) -- (node2);
\path [line] (node2) -- (node3);
\path [line] (node3)-- (node4);
\path [line] (node4) -- (node5);
\path [line] (node5) -- node{$\mathcal{VC}\supset \{(i^*,t^*)\}$}(node6);
\path [line] (node6) -- (node6bis);
\path [line] (node6bis) -- (node7);
\path [line] (node7) -- node[xshift=0.5cm,yshift=-0.3cm]{$Objective>0$}(node8);
\path [line] (node7) -- node[left]{$Objective=0$}(node9);
\path [line] (node8) -- (node9);
\path [line] (node12) |- (node2);
\path [line] (node10) -- node{N} (node12);
\path [line] (node10) -- node{Y} (node11);
\path [line] (node9) -- node{Y} (node10);
\path [line] (node9) --node{N}(-6.5,-40.5)--(-6.5,-18.5) |- (node4);
\path [line] (node5) -- node{$\mathcal{VC}= \{(i^*,t^*)\}$}(node7);
\path [line] (node5) --node{$\mathcal{VC}= \emptyset$}(-5.8,-25) -|(-5.8,-38)--(node9);
\end{tikzpicture}
}
\caption{Flowchart of the robust planning algorithm for the case of generation infeasibility}
\label{flowchart}
\end{figure}
\subsection{Extension to general probabilistic modelling of uncertainty}
As mentioned at the beginning of this section, we only consider a rectangular uncertainty set with no assumption on the distribution of the random variables. However, the proposed method can also be used with probabilistic modelling, i.e. when we consider that the joint distribution function of the random variables is known (variables may be correlated in general). Indeed, let us consider the vector of random variables $\mathbf{\omega} \in \Omega$ and the joint density function $p(\mathbf{\omega})$. If we define the two vectors of parameters $\overline{\mathbf{\omega}}^L$ and $\overline{\mathbf{\omega}}^U$, the probability that $\mathbf{\overline{\omega}}^L \leq \mathbf{\omega}\leq\mathbf{\overline{\omega}}^U$ is then expressed as the following integral:
\begin{equation}
\mathbb{P}(\mathbf{\overline{\omega}}^L \leq \mathbf{\omega}\leq\mathbf{\overline{\omega}}^U)=\int_{\mathbf{\overline{\omega}}^L}^{\mathbf{\overline{\omega}}^U}p(\mathbf{\omega})d\mathbf{\omega}
\end{equation}
As mentioned in \cite{margellos_road_2014}, this allows chance-constrained optimization to be formulated as robust optimization. Indeed, the chance-constrained paradigm consists of finding the extremum of an objective function $f(x)$ while allowing the constraints $h(x,\omega)\leq 0$ to be violated with a small probability $\epsilon$, as written hereunder.
\begin{flalign}
&\sup_{x}f(x) \\
&s.t. \quad \mathbb{P}\big(h(x,\omega)\leq 0 \big)\geq 1-\epsilon
\label{chanceconstrained1}
\end{flalign}
We can reformulate this problem as a robust (deterministic) problem on a subspace of $\Omega$ such that the probability that the random variables belong to this subspace is equal to $1-\epsilon$. This is written as follows. Note that the rectangular uncertainty interval defined by constraint (34) can be computed `offline'.
\begin{flalign}
\sup_{x}f(x)& \\
s.t.& \quad h(x,\omega)\leq 0 \quad \\
&\quad\mathbf{\overline{\omega}}^L \leq \mathbf{\omega}\leq\mathbf{\overline{\omega}}^U\\
&\quad\int_{\mathbf{\overline{\omega}}^L}^{\mathbf{\overline{\omega}}^U}p(\mathbf{\omega})d\omega = 1-\epsilon
\label{chanceconstrainedrobust}
\end{flalign}
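As an illustration of how the rectangular set in constraint (34) can be computed offline, the sketch below assumes independent Gaussian load forecasts (an assumption made purely for this illustration; the formulation itself accommodates general, possibly correlated distributions) and sizes a symmetric box of joint probability $1-\epsilon$:
\begin{verbatim}
# Offline sizing of the rectangular uncertainty set for independent
# Gaussian forecasts so that the box carries joint probability 1-eps.
import numpy as np
from scipy.stats import norm

def rectangular_set(mu, sigma, eps):
    mu, sigma = np.asarray(mu, float), np.asarray(sigma, float)
    # independence: per-component coverage (1-eps)**(1/n) gives a
    # joint box probability of exactly 1-eps
    p = (1.0 - eps) ** (1.0 / mu.size)
    z = norm.ppf(0.5 + p / 2.0)    # symmetric two-sided quantile
    return mu - z * sigma, mu + z * sigma

w_L, w_U = rectangular_set([1.0, 2.0, 1.5], [0.1, 0.3, 0.2], eps=0.05)
\end{verbatim}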
\section{Results}
\label{Res}
The approach described in the previous sections is applied to a 20-node case described in \cite{carrano_electric_2006} on a 1-year planning horizon. Data for lines and load consumptions for this case can be found in \cite{carrano_data_????-1}. Hourly consumption patterns are generated using real measurements used in \cite{navarro-espinosa_data_2015}. One representative day is considered for the whole year with 15 hourly consumption data points. We consider a unique size for generators (2~MW) and up to two lines placed in parallel between two nodes. The different models have been run on a 3.4~GHz Intel Core i7 processor with 8~GB of memory. The models are written in AMPL and solved with CPLEX 12.7 using Benders decomposition.\\ We compare the deterministic case, where load consumptions $P_{cit}$ and $Q_{cit}$ are fixed, and the uncertain case, where they may vary between 50\% and 150\% of the deterministic value: $\tilde{P}_{cit}\in[0.5P_{cit};1.5P_{cit}]$ (idem for $\tilde{Q}_{cit}$). The results are shown in Table \ref{table1}.
\begin{table}[h!]
\caption{Comparison of planning solution for deterministic base case and robust case}
\label{table1}
\centering
\begin{tabular}{|r|c|c|}
\hline
&\small{Base case}&\small{Robust case}\\
\hline
\small{OPEX[M\$]}&\small{0.19}&\small{0.27}\\
\hline
\small{CAPEX[M\$]}&\small{3.30}&\small{4.02}\\
\hline
\small{Total cost[M\$]}&\small{3.49}&\small{4.29}\\
\hline
\small{Total amount of scenarios}&\small{/}&\small{274}\\
\hline
\small{Number of iterations}&\small{/}&\small{2}\\
\hline
\small{Computation time[s]}&\small{0.34}&\small{5079}\\
\hline
\end{tabular}
\end{table}
These results show that including load uncertainty in our planning problem increases the total cost by more than 20\% for this test case. Indeed, the planning solution for the deterministic case only has 6~MW of installed generation capacity while the robust solution has 8~MW. It can also be observed that the computation time increases dramatically with the robust approach. As a matter of fact, the determination of problematic scenarios requires solving several thousand adversarial/corrective problems. Furthermore, at the second iteration, the main problem includes 274 times more operational constraints than the deterministic case, which makes it a much bigger problem to solve than its deterministic counterpart.
\section{Conclusion}
\label{Concl}
In this paper, we presented a robust second-order cone formulation for the planning of autonomous microgrids under load uncertainty. An interval representation of uncertainty was used, and it was shown that this approach can also be used to formulate a chance-constrained version of the planning problem. Preliminary results show that the inclusion of load uncertainty in the problem significantly increases the overall cost of the system, which indicates the need to include uncertainty in planning. Further research will include uncertainty related to RES-based generation, and contingencies should be included as well to deliver more realistic planning solutions.
\section{Introduction}
Both head pose estimation and face alignment have been well studied in recent years given their wide application in human computer interaction, avatar animation, and face recognition/verification. These two problems are very correlated and putting them together will enable mutual benefits. Head pose estimation from 2D images remains a challenging problem due to the high diversity of face images \cite{haj2012partial, murphy2009head}. Recent methods \cite{fanelli2011real} attempt to estimate the head pose by using depth data. On the contrary, face alignment has made significant progress and several methods \cite{cfaneccv2014,asthanaincremental,renface,xiong2013supervised} have reported good performance on images \textit{in the wild}. However, they also show some failures. When we look into their failure cases, we find that those samples share one significant property, i.e., the head (face) in such images is usually rotated from the frontal pose by large angles.
The best performing face alignment methods proposed in recent years (\cite{xiong2013supervised}, \cite{asthanaincremental} and \cite{cfaneccv2014}) share a similar cascaded pose regression framework, i.e., face alignment starts from a raw shape (a vector representation of the landmark locations), and updates the shape in a coarse to fine manner. The methods in this framework are usually initialisation dependent. Therefore, the final output of one cascaded face alignment system might change if a different initialisation is provided for the same input image. Moreover, each model has a convergence radius, i.e., if the initialisation lies within this radius of the actual shape, the model will be able to output a reasonable alignment result, otherwise it might lead the shape to a wrong location, as shown in Fig.~\ref{fig:illustration}. Methods like \cite{xiong2013supervised,asthanaincremental} perform initialisation using a mean shape within the face bounding box or a randomly selected shape from the training set. There is no guarantee that the initialisation lies within the convergence radius, especially when head pose variation is large.
\begin{figure}
\includegraphics[trim =0.0cm 0.0cm 0.0cm 0.0cm, clip = true, width=0.95\textwidth,height=0.33\textwidth]{images/bmvc_inllu.pdf}
\caption{Our proposed head pose based cascaded face alignment procedure (path in \textcolor{cyan}{cyan} color) vs. the conventional cascaded face alignment procedure (path in \textcolor{red}{red} color).}
\label{fig:illustration}
\end{figure}
In this paper, we aim to address the problems discussed above and make cascaded face alignment perform better under large head pose variations. The difference between our proposed method and the conventional cascaded procedure is illustrated in Fig.~\ref{fig:illustration}. In contrast to using a mean shape or random shapes for initialisation as other methods do, our proposed method aims to produce better initialisation schemes for cascaded face alignment based on explicit head pose estimation. This is motivated by two facts: 1) most current methods fail on face images with large head pose variation, as we will demonstrate later; 2) most recent face alignment methods work in a cascaded fashion and perform initialisation with a mean shape. More specifically,
we first estimate the head pose using a deep Convolutional Network (ConvNet) directly from the face image. Given the estimated head pose, we propose two schemes for producing the initialisations. The first scheme projects a canonical 3D face shape under the estimated head pose to the detected face bounding box. The second scheme searches for shape(s) for initialisation from the training set by a nearest neighbour method in the head pose space. We build our proposed scheme on top of the Robust Cascaded Pose Regression (RCPR) to demonstrate the effectiveness of supervised initialisation. We note that the proposed initialisation scheme can be naturally applied to any other cascaded face alignment method. In summary, we make the following contributions:
\begin{itemize}
\item We investigate the failure cases of several state of the art face alignment approaches and find that head pose variation is a common issue across those methods.
\item Based on the above observation, we propose a ConvNet framework for explicit head pose estimation. It is able to achieve an accuracy of 4$^{\circ}$ absolute mean error for head pose estimation on face images acquired in unconstrained environments.
\item We propose two initialisation schemes based on reliable head pose estimation. They enable the face alignment method (RCPR) to perform better and reduce large head pose failures by 50\% when using only one initialisation.
\end{itemize}
To summarise, we propose better initialisation schemes based on explicit head pose estimation for cascaded face alignment, to improve the performance, especially in the case of large head pose variation.
\section{Related Work}
Face alignment has made considerable progress in the past years and a large number of methods have been proposed. There are two different sources of information typically used for face alignment: face appearance (i.e., texture of the face image) and the shape information. Based on how the spatial shape information is used, the methods are usually categorized into local-based methods and holistic-based methods. The methods in the former category usually rely on discriminative local detection and use explicit deformable shape models to regularize the local outputs, while the methods in the latter category directly regress the shape (the representation of the facial landmarks) in a holistic way, i.e. the shape and appearance are modelled together.
\subsection{Local-based methods}
Local based methods usually consist of two parts. One is for local facial feature detection, which is also called local experts, and the other is for spatial shape models. The former describes what the image around each facial landmark looks like in terms of local intensity or color patterns, while the latter describes how the face shape, that is, the relative location of the face parts, varies. This captures variations such as wide forehead, narrow eyes, long nose etc.
There are three types of local feature detection. (1) Classification methods include Support Vector Machine (SVM) classifier \cite{rapp2011multiple,belhumeur2011localizing} based on various image features such as Gabor \cite{vukadinovic2005fully}, SIFT \cite{lowe2004distinctive,xiong2013supervised}, HOG \cite{yanlearn} and multichannel correlation filter responses \cite{Kiani_2013_ICCV}. (2) Regression-based approaches are also widely used. For instance, Support Vector Regressors (SVRs) are used in \cite{martinez2012local} with a probabilistic MRF-based shape model and Continuous Conditional Neural Fields (CCNF) are used in \cite{baltruvsaitis2014continuous}. (3) Voting-based approaches are also introduced in recent years, including regression forests based voting methods \cite{cootesECCV2012,dantone2012real,yangiccv2013} and exemplar based voting methods \cite{smithnonparametric,shen2013detecting}.
One typical shape model is the Constrained Local Model (CLM) \cite{cristinacce2006feature}. The CLM steps can be summarised as follows: first, sample a region from the image around the current estimate and project it into a reference frame; second, for each point, generate a ``response image" giving a cost for having the point at each pixel; third, searching for a combination of points which optimises the total cost, by manipulating the statistical shape model parameters. The methods built on CLM mainly differ from each other in terms of local experts, for instance CCNF in \cite{baltruvsaitis2014continuous} and the Discriminative Response Map Fitting (DRMF) in \cite{asthana2013robust}. There are many other local based methods either using CLM or other models such as RANSAC in \cite{belhumeur2011localizing}, graph-matching in \cite{Zhou_2013_ICCV}, Gaussian Newton Deformable Part Model (GNDPM) \cite{tzimiropoulos2014gauss} and mixture of trees \cite{devacvpr2012face}.
\subsection{Holistic-based methods}
\begin{table*}[!hbtp]
\footnotesize
\setlength{\tabcolsep}{1.5pt}
\centering
\caption{Holistic methods and their properties.}
\label{tab::holisticmethods}
\begin{tabular}{lcccccc}
\hline
Methods & SDM \cite{xiong2013supervised} & RCPR \cite{burgos2013robust} & IFA \cite{asthanaincremental} & LBF \cite{renface} & CFAN \cite{cfaneccv2014} & TCDCN \cite{zhang2014facial} \\
initialisation & mean pose & random & mean pose & mean pose & supervised & supervised \\
features & SIFT & pixel &HOG & pixel & auto-encoder & ConvNet feature\\
regressor & linear regression & random ferns & linear regression & random forests & linear regression& ConvNet \\
\hline
\end{tabular}
\end{table*}
Holistic methods have gained high popularity in recent years and most of them work in a cascaded way, like SDM \cite{xiong2013supervised} and RCPR \cite{burgos2013robust}. We list very recent holistic methods as well as their properties in Table~\ref{tab::holisticmethods}. The methods following the cascaded framework differ from each other mainly in three aspects. First, how to set up the initial shape; second, how to calculate the shape-indexed features; third, what type of regressor is applied at each iteration. For initialisation, mainly three strategies have been proposed in the literature: random, mean pose, and supervised.
In order to make it less sensitive to initialisation, previous approaches such as \cite{suncvpr2012, burgos2013robust} propose to run multiple different initialisations and pick the median of all the predictions as the final output. Each initialisation is treated independently until the output is calculated. However, such a strategy has several issues: first, the theoretical support for selecting the median value is not well understood; second, there is no guidance on how to choose the multiple initialisations; third, using multiple initialisations is computationally expensive. A similar supervised initialisation scheme was proposed in \cite{yang2015robust}, where the initialisation shapes were selected by using an additional regression forest model for sparse facial landmark estimation. A recent work \cite{yang2015mirror} proposed a re-initialisation scheme based on mirrorability to improve the face alignment performance.
\section{Data preparation}
In this section we describe how the data is prepared in order to support our further discussion. More specifically, we discuss how we provide ground truth head pose and face bounding boxes from different face detectors for the benchmark dataset.
We use face image data from the benchmark face alignment in the wild dataset, 300W \cite{sagonas300}. Since their testing samples are not publicly available, we follow the partition of recent methods \cite{renface} to set up the experiments. More specifically, we use face images from AFW \cite{devacvpr2012face}, HELEN \cite{tan2009enhanced}, LFPW \cite{belhumeur2011localizing} and iBug \cite{sagonas300}, which include 3148 training images and 689 test images in total. 3148 training images are from AFW (337 images), HELEN training set (2000 images) and LFPW training set (811 images), and 689 test images are from HELEN test set (330 images), LFPW test set (224 images) and iBug (135 images).
It is intractable to get the ground truth 3D head pose for face images collected in unconstrained conditions. In order to generate reasonable head pose (Pitch, Yaw and Roll) values, we use the pose estimator provided by the Supervised Descent Method (SDM) \cite{xiong2013supervised}. Note that, when calculating the head pose, we feed the ground truth facial landmark locations instead of using the detected landmarks. Technically, the head pose is estimated by solving the projection function from an average 3D face model (49 3D points) to the input image, given the 3D to 2D correspondences. We also use the 3D head pose estimator provided by \cite{asthana2013robust} for head pose calculation for evaluating the results. It produces very similar results to \cite{xiong2013supervised}. We calculate the head pose for all images in 300W.
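For reproducibility, this pose computation can be sketched as a standard perspective-n-point (PnP) problem. The snippet below uses OpenCV's solver with generic pinhole intrinsics; the intrinsics and the angle convention are assumptions of this sketch, since the exact camera model inside the SDM code of \cite{xiong2013supervised} may differ:
\begin{verbatim}
# Head pose from annotated landmarks as a PnP problem (illustrative
# pinhole intrinsics; model_3d: (49,3), landmarks_2d: (49,2) floats).
import cv2
import numpy as np

def head_pose(model_3d, landmarks_2d, img_w, img_h):
    K = np.array([[img_w, 0, img_w / 2],
                  [0, img_w, img_h / 2],
                  [0, 0, 1]], dtype=np.float64)
    ok, rvec, tvec = cv2.solvePnP(model_3d, landmarks_2d, K, None)
    R, _ = cv2.Rodrigues(rvec)
    pitch, yaw, roll = cv2.RQDecomp3x3(R)[0]  # Euler angles (degrees)
    return pitch, yaw, roll
\end{verbatim}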
The benchmark dataset only provides two types of face bounding boxes: one is the ground truth bounding box calculated as the tight box of the annotated facial landmarks; the other is the detection result from the model of \cite{devacvpr2012face}, which is quite similar to the ground truth face bounding box. However, several models like SDM \cite{xiong2013supervised} and RCPR \cite{burgos2013robust} are trained with different face bounding boxes, thus their performance deteriorates significantly when using the provided face bounding boxes. We therefore provide different face bounding boxes for the test images by employing the Viola-Jones detector \cite{viola2001rapid} and the HeadHunter detector \cite{mathias2014face} for fair comparison. For the input images on which the face detectors fail we manually set reasonable bounding boxes.
\begin{figure}
\includegraphics[trim =2.0cm 2.0cm 2.0cm 2.0cm, clip = true, width=0.45\textwidth,height=0.25\textwidth]{images/difficult_headpose.pdf}
\includegraphics[trim =0.0cm 0.0cm 0.0cm 0.0cm, clip = true, width=0.5\textwidth,height=0.25\textwidth]{images/difficult_headpose_hist.pdf}
\caption{Distribution of the most erroneous samples. }
\label{fig:toperror}
\end{figure}
\section{Method}
\subsection{Motivation}
\label{sec::motivation}
We first run several state of the art methods, including 6 holistic based methods (SDM \cite{xiong2013supervised}, IFA \cite{asthanaincremental}, LBF \cite{renface}, CFAN \cite{cfaneccv2014}, TCDCN \cite{zhang2014facial}, RCPR \cite{burgos2013robust}) and 3 local based methods (GNDPM \cite{tzimiropoulos2014gauss}, DRMF \cite{asthana2013robust}, CCNF \cite{baltruvsaitis2014continuous}), given their good performance and the availability of source code. For each method, we provide the \textit{best} type of face bounding boxes in order to get the best performance. For each method, we select the 50 difficult samples out of the 689 test samples that produce the biggest sample-wise alignment error. Then we plot their head poses in Fig.~\ref{fig:toperror} (left). As can be seen, most of the points are far away from the origin, i.e. they have big rotation angle(s). We further plot the histogram of the biggest absolute rotation angles of those samples in Fig.~\ref{fig:toperror} (right). The biggest absolute rotation angle is calculated as the one of the three directions with the biggest absolute value. As can be seen, those samples are distributed at big absolute angles. There are very few samples that have small rotation angles. Based on this observation, we can conclude that large head pose rotation is one of the main factors that make most of the current face alignment methods fail. Based on this fact, we develop a head pose based initialisation scheme for improving the performance of face alignment under large head pose variations.
\begin{figure}
\begin{tabular}{ccc}
\includegraphics[trim =0.0cm 0.0cm .0cm 0.0cm, clip = true, width=0.95\textwidth,height=0.25\textwidth]{images/head_pose_net.pdf}
\end{tabular}
\caption{ConvNet model for head pose estimation. }
\label{fig:headposenet}
\end{figure}
\subsection{Head Pose Estimation}
\label{sec::headposeestimation}
Given the training data from 300W with augmented head pose annotation, we train a convolutional network (ConvNet) \cite{lecun1998gradient} model for head pose estimation on the training set of 300W with 3148 images. The samples are augmented by 3 times with small perturbations on the face bounding box. The ConvNet structure is shown in Fig.~\ref{fig:headposenet}. The input of the network is a 96x96 gray-scale face image, normalised to the range between 0 and 1. The feature extraction stage contains three convolutional layers, three pooling layers, two fully connected layers and three drop-out layers. As we pose it as a regression problem, the output layer is 3x1, representing the head pose pitch, yaw and roll angles respectively. The angles are normalised between -1 and 1. We use Nesterov's Accelerated Gradient Descent (NAG) method \cite{sutskever2013importance} for parameter optimisation and we set the momentum to 0.9 and the learning rate to 0.01. The training finishes in two hours on a Tesla K40c GPU after around 1300 epochs, controlled by an early-stop strategy. The learning curve is shown in Fig.~\ref{fig::headposeresult} (left). The forward propagation of this network on the GPU only takes 0.3ms per image on average.
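The following PyTorch sketch mirrors the architecture in Fig.~\ref{fig:headposenet}. Since the exact filter counts, kernel sizes and dropout rates are given in the figure rather than the text, the values below are illustrative placeholders, and the \texttt{Tanh} output is just one simple way to keep the regressed angles in $[-1,1]$:
\begin{verbatim}
# Illustrative PyTorch version of the head pose network (filter
# counts, kernel sizes and dropout rates are placeholders).
import torch
import torch.nn as nn

class HeadPoseNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(  # 3 conv + 3 pool + 3 dropout
            nn.Conv2d(1, 32, 5), nn.ReLU(),
            nn.MaxPool2d(2), nn.Dropout(0.1),
            nn.Conv2d(32, 64, 5), nn.ReLU(),
            nn.MaxPool2d(2), nn.Dropout(0.2),
            nn.Conv2d(64, 128, 3), nn.ReLU(),
            nn.MaxPool2d(2), nn.Dropout(0.3))
        self.regressor = nn.Sequential(  # 2 fully connected layers
            nn.Flatten(), nn.LazyLinear(500), nn.ReLU(),
            nn.Linear(500, 3), nn.Tanh())  # pitch, yaw, roll in [-1,1]

    def forward(self, x):  # x: (batch, 1, 96, 96), values in [0,1]
        return self.regressor(self.features(x))

model = HeadPoseNet()
# NAG with momentum 0.9 and learning rate 0.01 as stated in the text
opt = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9,
                      nesterov=True)
\end{verbatim}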
\subsection{Pose based Cascaded Face Alignment}
\subsubsection{General Cascaded Face Alignment}
In order to make this work stand alone, we first summarise the general framework of cascaded face alignment. The face shape is often represented as a vector of landmark locations, i.e., $S=(\mathrm{x}_1,...,\mathrm{x}_k,...,\mathrm{x}_K) \in\mathbf{R}^{2K}$, where $K$ is the number of landmarks. $\mathrm{x}_k \in \mathbf{R}^2$ is the 2D coordinates of the $k$-th landmark. Most of the current holistic-based methods work in a coarse-to-fine fashion, i.e., shape estimation starts from an initial shape $S^0$ and progressively refines the shape by a cascade of $T$ regressors, $R^{1...T}$. Each regressor refines the shape by producing an update, $\Delta S$, which is added to the current shape estimate, that is,
\begin{equation}
S^t = S^{t-1} + \Delta S.
\end{equation}
The update $\Delta S$ is returned from the regressor, which takes the previous pose estimation and the image $I$ as inputs:
\begin{equation}
\Delta S = R^t(S^{t-1},I)
\end{equation}
An important aspect that differentiates this framework from the classic boosted approaches is the feature re-sampling process. More specifically, instead of using the fixed features, the input feature for regressor $R^t$ is calculated relative to the current pose estimation. This is often called pose-indexed feature as in \cite{dollar2010cascaded}. This introduces weak geometric invariance into the cascade process and shows good performance in practice. The CPR is summarized in Algorithm \ref{alg::algorithm1} \cite{dollar2010cascaded}.
\begin{algorithm}
\caption{Cascaded Pose Regression}
\label{alg::algorithm1}
\begin{algorithmic}[1]
\Require{Image $I$, initial pose $S^0$}
\Ensure{Estimated pose $S^T$}
\For {$t$=1 to $T$}
\State $f^t = h^t(I,S^{t-1})$\Comment{Shape-indexed features}
\State $\Delta S = R^t(f^t)$\Comment{Apply regressor $R^t$}
\State $S^t = S^{t-1}+\Delta S$\Comment{update pose}
\EndFor
\end{algorithmic}
\end{algorithm}
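For concreteness, Algorithm~\ref{alg::algorithm1} can be transcribed directly into Python as follows, where \texttt{h} and \texttt{R} are lists of trained shape-indexed feature extractors and stage regressors (placeholder callables in this sketch):
\begin{verbatim}
# Direct transcription of Algorithm 1 (h, R: trained callables).
def cascaded_pose_regression(image, S0, h, R):
    S = S0
    for ht, Rt in zip(h, R):
        f = ht(image, S)   # shape-indexed features
        S = S + Rt(f)      # apply regressor R^t and update pose
    return S
\end{verbatim}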
\subsubsection{Head Pose based Cascaded Face Alignment}
In section \ref{sec::headposeestimation} we have presented how a ConvNet model can be used for head pose estimation. We propose two head pose based initialisation schemes for face alignment. One is based on an average 3D face shape projection and the other is based on nearest neighbour searching.
\paragraph{Scheme 1: 3D face shape based initialisation}
Given a 3D mean face shape, represented by 68 3D facial landmark locations, as shown in Fig.~\ref{fig:illustration}, we first project this shape under the estimated head pose to a set of canonical 2D locations. More specifically, we use a constant translation and focal length in order to get a reasonable projection for all images. Then we re-scale the canonical 2D projection by the face bounding box scale of the test image to get the initialisation. We can represent the initialisation process by a function $\mathcal{F}$ as follows.
\begin{equation}
S_0 = \mathcal{F}(\theta,bb,\bar{S}^{3D})
\end{equation}
where $bb$ is the face bounding box, $\bar{S}^{3D}$ the 3D mean face shape, and $\theta$ the estimated head pose, which can be represented by:
\begin{equation}
\theta = \mathcal{G}(I, bb)
\end{equation}
where $\mathcal{G}$ is the deep convolutional model described in section \ref{sec::headposeestimation}.
\paragraph{Scheme 2: Nearest Neighbour based initialisation}
We propose a second scheme for head pose based initialisation by nearest neighbour search. Since we have provided the training samples with head pose information as well, we can easily search for samples with a head pose similar to that of a test sample. Then we calculate the similarity transformation between the two face bounding boxes in order to calculate the initialisation shape for the test sample. In this way, we can also provide $K$ initialisations by searching for the $K$ nearest neighbours in the training set.
Once we get a reliable initialisation (or several ones), we feed it to Algorithm \ref{alg::algorithm1} and apply the cascade of regressors in the same way as the baseline approach. In the case of multiple initialisations, we calculate the output in a similar fashion to \cite{burgos2013robust,suncvpr2012}, i.e., we pick the median value of their estimations. We build our proposed head pose based initialisation schemes on top of the popular Cascaded Pose Regression (CPR) method due to its simplicity and popularity. We train its recent variant, the Robust Cascaded Pose Regression (RCPR) \cite{burgos2013robust} model, by using its new interpolated feature extraction, which is re-implemented by the author of \cite{yang2014face}. We do not use its full version as occlusion status annotation is not available. We trained the baseline RCPR model on our 300W training set using Viola-Jones \cite{viola2001rapid} face detection. 20 random initialisations are used for data augmentation at training time.
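A minimal sketch of this nearest neighbour scheme is given below; for simplicity the similarity transformation is reduced to per-axis scaling and translation between bounding boxes, and the array layouts are assumptions of this illustration rather than details of our implementation:
\begin{verbatim}
# Scheme 2: k-NN search in head pose space, then bounding-box
# transfer (poses: (N,3); shapes: (N,K,2) arrays; bbs: (x,y,w,h)).
import numpy as np

def nn_initialisations(test_pose, test_bb, poses, shapes, bbs, k=1):
    idx = np.argsort(np.linalg.norm(poses - test_pose, axis=1))[:k]
    tx, ty, tw, th = test_bb
    inits = []
    for i in idx:
        x, y, w, h = bbs[i]
        scale = np.array([tw / w, th / h])
        # map the training shape into the test bounding box frame
        inits.append((shapes[i] - [x, y]) * scale + [tx, ty])
    return inits
\end{verbatim}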
\section{Evaluation}
\begin{figure*}
\includegraphics[trim =0.0cm 0.0cm 0.0cm 0.0cm, clip = true, width=0.5\textwidth,height=0.25\textwidth]{images/headpose_training_curve.pdf}
\includegraphics[trim =0.0cm 0.0cm 0.0cm 0.0cm, clip = true, width=0.2\textwidth,height=0.25\textwidth]{images/headpose_error.pdf}
\includegraphics[trim =0.0cm 0.0cm 0.0cm 0.0cm, clip = true, width=0.25\textwidth,height=0.25\textwidth]{images/head_pose_example.pdf}
\caption{Head pose estimation result. Left, learning curve of head pose network, with y axis the Root Mean Square Error (RMSE) and x axis the number of epochs; middle, absolute mean error on test set; right, example results of head pose estimation.}
\label{fig::headposeresult}
\end{figure*}
\subsection{Head Pose Estimation}
We first evaluate the performance of head pose estimation. As we discussed before, it is very difficult to get the ground truth head pose for face images acquired in uncontrolled conditions. We calculate the pose based on the annotated facial landmark locations. We apply the trained deep ConvNet model on the test images of 300W and measure the performance. The result is shown in Fig.~\ref{fig::headposeresult}.
The absolute mean errors of the head pose pitch, yaw, and roll angles are 5.1$^{\circ}$, 4.2$^{\circ}$ and 2.4$^{\circ}$, respectively. Some example results are shown on the right. Although the work by Zhu \& Ramanan \cite{devacvpr2012face} is conceptually similar to ours in terms of simultaneous head pose and facial landmark estimation, we do not compare to it here because their work can only estimate very sparse head pose yaw angles (e.g. -15$^{\circ}$, 0$^{\circ}$, 15$^{\circ}$).
\subsection{Face Alignment}
We first show the effectiveness of head pose based initialisation by comparing with the baseline strategy of the CPR framework \cite{suncvpr2012,burgos2013robust}, i.e., generating random initialisations from training samples. The comparison is shown in Fig.~\ref{fig:comparewithbaseline}. As can be seen in the left figure, by using one initialisation projected from the 3D face shape, we obtain similar performance to the baseline approach with 5 initialisation shapes, and much better performance than that using only one random initialisation shape. Similar superior performance is obtained by using the nearest neighbour initialisation scheme, as shown on the right. By using more head pose based initialisations, we gain even better results, though the improvement is minor. It is worth noting that by using our proposed initialisation schemes, we are able to decrease the number of failure cases (sample-wise average alignment error $>$ 0.1) from 130 to 69 (scheme 1) and from 130 to 72 (scheme 2), i.e. by nearly 50\%. Those samples usually have large head pose variations and are difficult for conventional face alignment methods. Moreover, by using one initialisation, the whole test procedure on one typical image takes 3.8 ms (0.3 ms for head pose estimation and 3.5 ms for cascaded face alignment).
\begin{figure}
\includegraphics[trim =0.0cm 0.0cm .0cm 0.0cm, clip = true, width=0.48\textwidth,height=0.36\textwidth]{images/3d_vs_random.pdf}
\includegraphics[trim =0.0cm 0.0cm .0cm 0.0cm, clip = true, width=0.48\textwidth,height=0.36\textwidth]{images/nn_vs_random3.pdf}
\caption{Our proposed head pose based initialisation scheme vs. random initialisation scheme. Left, our 3D face shape based scheme; right, our Nearest Neighbour (NN) based scheme.}
\label{fig:comparewithbaseline}
\end{figure}
We further compare the proposed method with recent state of the art methods including 5 holistic based methods (SDM \cite{xiong2013supervised}, IFA \cite{asthanaincremental}, LBF \cite{renface}, CFAN \cite{cfaneccv2014}, TCDCN \cite{zhang2014facial}) and 3 local based methods (GNDPM \cite{tzimiropoulos2014gauss}, DRMF \cite{asthana2013robust}, CCNF \cite{baltruvsaitis2014continuous}). SDM and DRMF are trained using the Multi-PIE \cite{gross2010multi} dataset and detect 49 and 66 facial landmarks respectively. The rest of them are models trained on the 300W dataset. When we run their models on the test images, we use the \textit{best} bounding boxes for a fair comparison. Best bounding box refers to Viola-Jones detection for SDM and RCPR, and the tight face detection provided by the 300W dataset for the rest of them. The comparison is shown in Fig.~\ref{fig:comparewithsoa}. As can be seen, our proposed method shows competitive performance. We also compare the performance on another type of common face detection, HeadHunter, given its best performance in face detection. The result is shown on the right of Fig.~\ref{fig:comparewithsoa}. We observe that the performance of most methods deteriorates significantly when testing on HeadHunter face bounding boxes. Our method provides the most stable results, despite the fact that the HeadHunter face bounding box overlaps more with the face detection from 300W (both are tight boxes of facial landmarks) than with the Viola-Jones face detection. We believe this robustness to face bounding box changes is partially due to our head pose based initialisation strategy.
\begin{figure}
\includegraphics[trim =0.0cm 0.0cm .0cm 0.0cm, clip = true, width=0.48\textwidth,height=0.36\textwidth]{images/compre_to_soa1.pdf}
\includegraphics[trim =0.0cm 0.0cm .0cm 0.0cm, clip = true, width=0.48\textwidth,height=0.36\textwidth]{images/compre_to_soa2.pdf}
\caption{Comparison with recent methods. Left, results from the \textit{best} face detection of each method; right, results from the common HeadHunter face detection. Pose-RCPR is our proposed method using only 1 initialisation from 3D. }
\label{fig:comparewithsoa}
\end{figure}
\section{Conclusion and Future Work}
In this paper we first demonstrate that most recent face alignment methods show failure cases when large head pose variation is present. Based on the fact that cascaded face alignment is initialisation dependent, we proposed supervised initialisation schemes based on explicit head pose estimation. We use deep convolutional networks for head pose estimation and produce the initialisation shape by either projecting a 3D face shape to the test image or searching for nearest neighbour shapes in the training set. We demonstrated that using a more reliable initialisation is able to improve the face alignment performance, with a decrease in failures of around 50\%. It also shows comparable or better performance when compared to recent face alignment approaches.
Although we have managed to decrease the failure cases to a certain degree, we have not fully solved this problem. There are several interesting directions for future research. First, using head pose based initialisation shapes in the training stage may further boost the performance. Second, we only test our method on RCPR; we believe the proposed scheme can be naturally applied to other cascaded face alignment methods. It also raises several interesting questions. Do we need to make the cascaded learning model better for face alignment or to make the initialisation more reliable? Do we need more uniformly distributed data or a better model in order to make face alignment work better in a wider range of head pose variations? We are going to investigate these problems in our future research.
\section*{Acknowledgement}
The work is sponsored by Cambridge VBRAD project from Jaguar-Land-Rover. We gratefully acknowledge NVIDIA for the donation of the Tesla GPU used for this research.
\section{Introduction}
\label{sectionintroduction}
Precise determinations of the top quark mass $m$ are among the most important
Standard Model measurements being carried out at the Tevatron, and being planned
at the Large Hadron Collider (LHC) and a future International Linear Collider
(ILC). A precise top mass determination is important for precision electroweak
constraints, as well as extensions to the standard model like minimal
supersymmetry~\cite{Heinemeyer:2003ud}. The present combined measurement from
the Tevatron is $m=171.4\pm 2.1$~GeV~\cite{Brubaker:2006xn,Heinson:2006yq} and
mainly relies on methods where a number of top-mass-dependent kinematical
quantities and observables are used in a global fit to determine the most likely
top quark mass. For these fitting methods~\cite{Abe:1994st,Abazov:2004cs} the
observable most sensitive to the top quark mass is the top invariant
mass distribution. It is obtained from reconstructing the total invariant mass of
the top decay products. At the Tevatron the invariant mass distribution is being
used in connection with other top mass dependent observables due to the limited
statistics.
In principle the reconstruction of the top invariant mass distribution
provides the most natural way to measure the top quark mass since the
peaked structure at the resonance is most closely related to the
notion of the mass of a propagating massive and unstable degree of
freedom. This method can be applied at the LHC and ILC where larger
statistics are available. Experimental studies have concluded that at
the LHC top mass measurements with uncertainties at the level of
1~GeV~\cite{Borjanovic:2004ce,Etienvre:2006ph} can be achieved, while
at the ILC even smaller uncertainties can be
expected~\cite{Chekanov:2002sa,Chekanov:2003cp}. However, since the
top quark is a parton carrying non-vanishing color charge, its mass is
a priori not directly observable. In fact the top mass should be
considered as a renormalization scheme-dependent coupling in the QCD
Lagrangian rather than a physical object, just like the strong
coupling $\alpha_s$. As such, the top mass obtained from
reconstruction also depends on the method and prescription that is
used to define the top invariant mass since the latter is not a unique
physical quantity. In fact the notion of a physical particle whose
squared four-momentum is the mass does not apply to the top quark if
one asks for a precision in the mass value that is comparable to the
hadronic scale. This is also reflected in a number of conceptual and
experimental issues for top quark mass determinations that are
associated with gluon radiation, underlying events, and the jet energy
scales -- effects that can never be fully eliminated for measurements
of the top quark mass from reconstruction. Moreover certain top quark
mass renormalization schemes are more suitable for precision
measurements than others since the choice of scheme can affect the
higher order behavior of the perturbative corrections as well as the
organization of power corrections. Suitable quark mass schemes are
compatible with the power counting and also lead to an optimal
behavior of the perturbative expansion. Such schemes can be identified
and defined unambiguously if the precise relation of the observable to
a given Lagrangian top quark mass scheme can be established.
For all jet based methods of top quark mass determination, and for
reconstruction in particular, these issues have been intrinsically
difficult to address in the past. Previous work has not provided a
coherent analytic framework in which perturbative and non-perturbative
effects could be described in a systematic manner. Considering the
expected precision for top quark mass measurements in the upcoming
experiments such a framework is imperative.
A top mass determination method where a systematic analytic framework
exists and where the relation between the Lagrangian top mass
parameter $m$ and the measured top mass can be established to high
precision is the threshold scan of the line-shape of the total
hadronic cross section in the top-antitop threshold region, $Q\approx
2m$, at a future Linear
Collider~\cite{Fadin:1987wz,Strassler:1990nw,Fadin:1991zw,Jezabek:1992np,Sumino:1992ai},
where $Q$ is the c.m.\,energy. In this case the system of interest is
a top-antitop quark pair in a color singlet state and the observable
is related to a comparatively simple counting measurement. The
line-shape of the cross section rises near a center of mass energy
that is related to a toponium-like top-antitop bound state with a mass
that can be computed perturbatively to very high
precision~\cite{Hoang:2000yr,Hoang:2000ib,Hoang:2001mm,Pineda:2006ri,Hoang:2004tg}
using non-relativistic QCD (NRQCD)~\cite{Bodwin:1994jh,Luke:1999kz}, an
effective field theory (EFT) for nonrelativistic heavy quark
pairs. The short lifetime of the top quark, $\tau=1/\Gamma\approx
(1.5\,\mbox{GeV})^{-1}$, provides an infrared cutoff for all kinematic
scales governing the top-antitop dynamics and leads to a strong power
suppression of non-perturbative QCD effects. Experimental studies
concluded that theoretical as well as experimental systematic
uncertainties for this method are at a level of only
$100$~MeV~\cite{Peralta:etal,Martinez:2002st}. The most suitable top
quark mass schemes are the so-called threshold
masses~\cite{Hoang:2000yr}, which can be related accurately to other
short-distance mass schemes such as the running $\overline{\rm MS}$
mass. Unfortunately, the threshold scan method cannot be used at the
LHC because the top-antitop invariant mass can only be determined with
a relative uncertainty of around 5\%~\cite{Beneke:2000hk}, which is
not sufficient to resolve the top-antitop threshold region.
In this work we use EFT's to provide, for the first time, an analytic framework
that can be applied to systematically describe the perturbative and
nonperturbative aspects of top quark invariant mass distributions obtained from
reconstruction. As a first step towards developing a detailed framework for the
LHC, we focus in this work on jets in an $e^+e^-$ Linear Collider environment at
c.m.~energies far above threshold $Q\sim 0.5-1$~TeV. For $e^+e^-$ collisions
strong interaction effects arising from the initial state can be neglected and
there is no need to identify or remove any `beam remnant' or underlying events.
Also, in the $e^+e^-$ framework it is easier to formulate shape variables like
thrust that control the jet-likeness and the soft dynamics of an event. We
consider the double differential top and antitop invariant mass distribution,
where each of the invariant masses, $M_t^2$ and $M_{\bar t}^2$, are defined from
all particles in each of the two hemispheres that are determined by the event's
thrust axis. In Fig.~\ref{fig:6topjet} we show a sketch of such an event. Other
invariant mass definitions, e.g.~based on $k_T$ algorithms and criteria to
identify jets from top and antitop decay can be employed as well. Our approach
also works for all-jet and lepton plus jet final states.
\begin{figure}
\centerline{
\hspace{2cm}\includegraphics[width=12cm]{6jet-hemi.eps}
}
\caption{ Six jet event initiated by a top quark pair, $t\bar t\to bW \bar b
W\to b qq' \bar b qq'$. The plane separating the two hemispheres is
perpendicular to the thrust axis and intersects the thrust axis at
the interaction point. The total invariant mass inside each
hemisphere is measured. Our analysis applies equally well to the
lepton+jets and the dilepton channels (not shown).}
\label{fig:6topjet}
\end{figure}
Our focus is to study the double differential invariant mass distribution in
the peak region close to the top mass, so that $M_t^2-m^2\sim m\Gamma$ and
$M_{\bar t}^2-m^2 \sim m\Gamma$. It is convenient to introduce
the
shifted variables
\begin{eqnarray}
\label{massshell}
\hat s_{t, \bar{t}}\equiv \frac{s_{t, \bar{t}}}{m}
\equiv \frac{M_{{t,\bar{t}}}^2-m^2}{m}
\sim \Gamma \ll m\,,
\end{eqnarray}
because it is only the invariant mass distribution close to the peak
that we wish to predict. Here the top width $\Gamma$ is setting a
lower bound on the width of the invariant mass distribution and the
shifted variable $\hat s_{t,\bar t}$ can also be larger than $\Gamma$
as long as $\hat s_{t,\bar t}\ll m$. However, for simplicity we will often
write $\hat s_{t,\bar t}\sim \Gamma$ as we did in Eq.~(\ref{massshell}).
There are three relevant disparate scales governing the dynamics of the system,
\begin{eqnarray} \label{threescales}
Q \gg m \gg \Gamma > \Lambda_{\rm QCD} \,.
\end{eqnarray}
This kinematic situation is characterized by energy deposits contained
predominantly in two back-to-back regions of the detector with opening angles of
order $m/Q$ associated to the energetic jets coming from the top quark decay and
collinear radiation. Frequently in this work we refer to the jets coming from
the top and antitop quark collectively as top and antitop jet, respectively, but
we stress that we do not require the jets from the top and antitop decay
products to be unresolved as pictured in Fig.~\ref{fig:6topjet} (for example one
can still identify a $W$ and do $b$-tagging). The region between the top jets is
predominantly populated by soft particles with energies of order of the hadronic
scale.
The EFT setup used to describe the dynamics in this kinematic
situation is illustrated in Fig.~\ref{fig:efts} and represents a
sequence of different EFT's. The use of different EFT's is mandatory
to separate the various relevant physical fluctuations. The high
energy dynamics for the top quarks at the scale $Q\gg m$ can be
described by quark and gluon degrees of freedom that are collinear to
the top and antitop jet axes, and by soft degrees of freedom that can
freely propagate between the jets. The appropriate EFT for this
situation is the Soft-Collinear Effective Theory
(SCET)~\cite{Bauer:2000ew,Bauer:2000yr,Bauer:2001ct,Bauer:2001yt} with
a nonzero top quark mass term~\cite{Leibovich:2003jd}, which
represents an expansion in $\lambda \sim m/Q\sim 0.2-0.3$. The
leading order soft-collinear decoupling~\cite{Bauer:2001ct} properties
of SCET allows a factorization of the process into three sectors: top
jet dynamics, antitop jet dynamics, and dynamics of the soft cross
talk between the top and antitop jets, which corresponds quite
intuitively to the situation pictured in Fig.~\ref{fig:6topjet}. In
SCET the typical fluctuation of the jet invariant masses around the
top mass are still of order $m$, $\hat s_{t, \bar{t}}\sim m$. Thus to
describe invariant masses in the peak region $\hat s_{t, \bar{t}}\sim
\Gamma$ the top and antitop jets are finally computed in Heavy-Quark
Effective Theory (HQET)~\cite{Manohar:2000dt} which represents an
expansion $\hat s/m$ and $\Gamma/m\sim 0.01$. We have in fact two
copies of HQET, one for the top and one for the antitop, plus soft
interactions between them. In these EFT's the top decay can be treated
as inclusive and is therefore described by the total top width term
$\Gamma$ that acts as an imaginary residual mass
term~\cite{Fadin:1987wz,Beneke:2004km}. Since HQET is usually
understood as being formulated close to the rest frame of the heavy
quark without the soft cross-talk interactions, we refer to these two
EFT's as boosted HQET's (bHQET's).\footnote{We adopt the acronym bHQET
in cases where we wish to remind the reader that the residual momentum
components of the heavy quark in the $e^+e^-$ c.m. frame are not
homogeneous, and that additional gluon interactions occur which are
not simply the soft gluons of standard HQET. }
\begin{figure}
\centerline{
\includegraphics[width=12cm]{EFTs-for-tops.eps}
}
\caption{Sequence of effective field theories used to compute the
top/antitop invariant mass distribution in the peak region. }
\label{fig:efts}
\end{figure}
At leading order in the expansion in $m/Q$ and $\Gamma/m$ we show that
the double differential invariant hemisphere mass distribution can be
factorized in the form
\begin{align}
\label{FactThm}
\bigg(\frac{d\sigma}{ dM_t^2\, dM_{\bar t}^2}\bigg)_{\rm hemi} &=
\sigma_0 \: H_Q(Q,\mu_Q,\mu_m) H_m\Big(m,\frac{Q}{m},\mu_m,\mu\Big)\!
\\
&\times \int\! d\ell^+ d\ell^- B_+\Big(\hat s_t- \frac{Q\ell^+}{m},\Gamma,\mu\Big)\:
B_-\Big(\hat s_{\bar t}-\frac{Q\ell^-}{m},\Gamma,\mu\Big)
S_{\rm hemi}(\ell^+,\ell^-,\mu)\,,\,\,\,\,\,\mbox{} \nn
\end{align}
where $\hat s_t$ and $\hat s_{\bar t}$ are defined in terms of
$M_{t,\bar t}^2$ in Eq.~(\ref{massshell}). The term $\sigma_0$ is a
normalization factor, and the factors $H_Q$ and $H_m$ are matching
corrections that are derived from matching and running in SCET and the
bHQET's, respectively. $H_Q$ and $H_m$ are independent of $\hat s_t$
and $\hat s_{\bar t}$ and do not affect the form of the invariant mass
distributions. The jet functions $B_\pm$ describe the QCD dynamics of
collinear radiation in the top/antitop direction, and the decay of the
top and antitop quarks near mass shell within the top/antitop jets.
They can be computed perturbatively at the scale $\mu\gtrsim\Gamma$ since
the top width $\Gamma$ provides an infrared cutoff from
hadronization. At tree level they are Breit-Wigner functions
\begin{align}
B_\pm(\hat s,\Gamma) \,
&= \,\frac{1}{\pi m} \:
\frac{\Gamma}{\hat s^2 + \Gamma^2} \,+\,\ldots \,,
\end{align}
where the ellipses indicate QCD corrections that distort the
Breit-Wigners. For the computation of the $B_{\pm}$ it is mandatory
to employ properly defined short-distance top mass schemes, to achieve
a well-behaved perturbative expansion. Finally, the soft function
$S_{\rm hemi}(\ell^+,\ell^- )$ describes the physics of the soft
nonperturbative gluons through which the top and antitop jets can
communicate. The low energy fluctuations of these soft gluons are not
cut off by the large top quark width. This can be intuitively
understood due to the lifetime dilation of the boosted top quarks. As
explained in Sec.~\ref{sectionefts}, using soft-collinear
factorization, we can show that the soft function is universal, namely
that the same function governs the low energy dynamics for massless
jets in the dijet
limit~\cite{Korchemsky:1998ev,Korchemsky:1999kt,Bauer:2002ie,Bauer:2002aj,Lee:2006nr}.
So, information on the form of $S_{\rm hemi}(\ell^+,\ell^-)$ can be
gained in a model-independent way from experimental data on massless
dijet events. The form of the factorization theorem in
Eq.~(\ref{FactThm}) is based on the same principles as the
factorization formula for massless dijet event
shapes~\cite{Korchemsky:1998ev,Korchemsky:1999kt,Bauer:2002ie,Lee:2006nr},
but it differs due to the need to treat massive quark jets and effects
related to the large top quark width. We also use our results to
derive a factorization theorem for thrust and the heavy-jet mass event
shape for $t\bar t$ production in the peak region. These distributions
can also be used to measure the top-quark mass.
The convolution in Eq.~(\ref{FactThm}) shows that the observed hemisphere mass
distributions are inevitably distorted by the nonperturbative soft momentum
distribution, and that top and antitop jets can only interact indirectly through
exchange of different light-cone momentum components that are governed by the
soft function. We can also show that for invariant masses $M_{t,\bar t}$ that
are defined through the identification of the jets from top and antitop decay,
which are determined from a $k_T$ jet algorithm, the same factorization formula
as in Eq.~(\ref{FactThm}) can be derived up to a different soft function.
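The form of these convolutions can be anticipated by a short kinematic argument
(ours; the systematic derivation appears in Sec.~\ref{section3}): adding a soft
momentum $\ell^\mu$ from the top hemisphere to the $n$-collinear jet momentum
$p_n^\mu$, whose large component is ${\bar n}\cdot p_n\sim Q$, gives
\begin{align}
(p_n+\ell)^2 \,\simeq\, p_n^2 + {\bar n}\!\cdot\! p_n\; n\!\cdot\!\ell
\,\simeq\, p_n^2 + Q\,\ell^+ \,.
\end{align}
Soft radiation therefore shifts $M_t^2$ by $Q\ell^+$, and hence
$\hat s_t\propto (M_t^2-m^2)/m$ by $Q\ell^+/m$, which is precisely the shift
appearing in the arguments of $B_\pm$ in Eq.~(\ref{FactThm}).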
We believe that the factorization approach proposed in this work, and the
factorization formula in Eq.~(\ref{FactThm}), represent advancements concerning
the following points:
\begin{itemize}
\item We give a well-defined relation between a jet observable
sensitive to the top mass and the Lagrangian mass. This allows the
definition of a short-distance top mass which we call the
``jet-mass''. Theoretically the jet-mass can be determined with a
precision better than $\Lambda_{\rm QCD}$, once the soft function
governing nonperturbative effects is known by other means. We expect
that the jet-mass will be useful for a broad range of observables
involving jets and parton showering from massive quarks.
\item The soft function appearing in the massive-jet factorization formula is
universal, and appears in massless dijet event shapes. This universality can
reduce the dependence of top-mass uncertainties from
reconstruction on parton shower Monte Carlos and hadronization models.
\item The factorization approach opens up the possibility to systematically
construct top mass observables where nonperturbative effects are
suppressed.
\end{itemize}
While the focus of this paper is on $t\bar{t}$ production at an $e^+e^-$ Linear
Collider, the main ideas and tools developed are general, and will also play an
important role for the environment of the LHC where a substantial number of top
events with large $p_T$ will be available. Other applications of our approach to
factorization of jets from massive particles may include processes such as
single top production~\cite{Cortese:1991fw,Willenbrock:1986cr}, W pair
production, or processes involving new colored unstable
particles~\cite{Butterworth:2002tt,Skiba:2007fw}. We briefly comment on these
applications in the summary.
The outline of the paper is as follows. In Sec.~\ref{sectionefts} we describe
the relevant EFT formalism for our computation. In Sec.~\ref{section3} we derive
the factorization theorem in SCET, introduce the hemisphere jet invariant
masses, and perform the factorization of mass effects in boosted HQET. The
result of the analysis in this section is the complete factorization theorem for
the double invariant mass distribution, and the extension to thrust and the
heavy-jet mass event shapes. In this section we also define the short-distance
jet-mass scheme. In Sec.~\ref{section4} we study the factorization theorem
numerically at leading order, and discuss implications for top-mass
measurements. We also display numerical results for the shape of the peak
region. In Sec.~\ref{sectionotheralgo} we discuss the relation between the
factorization theorem for the hemisphere invariant masses used in our work and a
factorization theorem for the reconstruction method based on $k_T$~jet
algorithms employed in Refs.~\cite{Chekanov:2002sa,Chekanov:2003cp}. Finally we
summarize and conclude in Sec.~\ref{section6}.
This paper concentrates on the derivation of the factorization theorem, on field
theoretic issues, and on the basic phenomenological implications of our result.
Readers only interested in the final result may skip over the analysis in
sections~\ref{sectionefts} and \ref{section3}, and go directly to
section~\ref{section4}. In a future paper we will present the computation of
$\alpha_s$ corrections to the jet invariant mass cross-section, and the
summation of large logarithms between the scales $Q, m,\Gamma$.
\section{The Effective Field Theories}
\label{sectionefts}
In this section we discuss the EFT's required to compute the double
differential invariant mass distribution $d^2\sigma/dM^2_t dM^2_{\bar
t}$ in the peak region. The relevant energy scales are:
\begin{eqnarray}
Q \gg m \gg \hat s_t\sim \hat s_{\bar t} \sim \Gamma \,,
\end{eqnarray}
where the hatted $s$-variables were defined in terms of $M_t^2$ and $M_{\bar
t}^2$ in Eq.~(\ref{massshell}). Once radiative corrections are included,
large logarithms arise through ratios of the above energy scales, some of which
are double logs, and thus can be quite large. For example $\Gamma/m\approx
1/120$, so $\ln^2(\Gamma/m)\approx 23$. It is obviously important to understand
the appearance of all large logs as accurately as possible, and to sum them
systematically. This summation is accomplished by matching onto a sequence of
EFTs and using renormalization group equations (RGE's).
Starting from QCD we first switch to the Soft Collinear Effective
Theory (SCET)~\cite{Bauer:2000yr,Bauer:2002nz,Bauer:2001yt,Bauer:2001ct}
for massive quarks and then to Heavy Quark Effective
Theory (HQET)~\cite{Eichten:1989zv,Isgur:1989vq,Isgur:1989ed,Grinstein:1990mj,Georgi:1990um}
combined with the unstable particle EFT
method~\cite{Beneke:2003xh,Beneke:2004km,Hoang:2006pd,Beenakker:1999hi}.
This scheme includes systematically effects related to the large top
quark width, as well as interactions related to the soft cross-talk:
\begin{eqnarray}
\text{QCD} \longrightarrow \text{SCET} \longrightarrow \text{boosted-HQET with
unstable heavy quarks}.
\end{eqnarray}
An intuitive picture which displays why this sequence of EFTs is relevant is
shown in Fig.~\ref{fig:efts}. We are interested in events where the top quarks
are produced close to their mass shell as characterized by the condition in
Eq.~(\ref{massshell}). At the production scale $Q$, the invariant mass of the top
and antitop quarks can still fluctuate with $\hat s_{t, \bar{t}} \sim Q$ due to
interactions with hard gluons of characteristic momentum $p_h\sim Q$. In
the first step, when switching to SCET, these hard modes are integrated out and
we expand in $m/Q\ll 1$. SCET makes it simple to
separate the physics associated with i) the top-jet, ii) the antitop jet, and
iii) the soft cross-talk between the jets. After the implementation of this
factorization theorem, each of the jets and the soft cross-talk can be studied
independently in the field theory. The factorization theorem tells us how to tie
them together. Now in SCET the invariant mass of the top quark fluctuates with
$\hat s_{t,\bar{t}} \sim m$, so we still have to remove these large momentum
fluctuations to describe the desired kinematic region where $\Gamma
\sim\hat s_{t,\bar{t}} \ll m$. Such invariant mass fluctuations are analogous to those
encountered in HQET for a bottom quark inside a $B$-meson
\begin{eqnarray}
(mv +k)^2 -m^2 = 2 m v\cdot k + k^2 \sim 2 m \Lambda_{\rm QCD} \,,
\end{eqnarray}
with the difference that for the unstable top quark $v\cdot k\to v\cdot k +
i\Gamma/2$. Since top-quarks decay before they have a chance to hadronize, the
top-width $\Gamma$ adopts the role $\Lambda_{\rm QCD}$ plays for the $B$ meson.
Keeping in mind that the tops are highly boosted and unstable, we actually match
onto two boosted versions of HQET, one for the top and one for the antitop. A
discussion of the necessary SCET and HQET theoretical ingredients is given in
the following subsections.
\subsection{SCET with Masses}\label{mass-scet}
SCET is an effective theory describing the interactions of soft and collinear
particles, which are characterized by the scaling of their momenta. In
this framework it is convenient to introduce the four-vectors
\begin{eqnarray}
n^\mu = (1,\vec n), \qquad\quad {\bar n}^\mu=(1,-\vec n),
\end{eqnarray}
where $\vec n$ can be thought of as the direction of the top jet and
$-\vec n$ as the direction of the antitop jet ($\vec n^2 =1$, $n^2=0$, ${\bar n}^2=0$).
Any momentum can then be decomposed as
\begin{eqnarray}
p^\mu = n\cdot p \>\frac{{\bar n}^\mu}{2} + {\bar n} \cdot p\> \frac{n^\mu}{2} +
p_\perp^\mu \,,
\end{eqnarray}
and we denote momentum components in this light cone basis as
$(p^+,p^-,p_\perp)=(n\cdot p,{\bar n} \cdot p, p_\perp)$. The square of the
momentum vector $p^\mu$ then reads $p^2=p^+ p^-+p_\perp^2$.
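As an explicit check (ours), take $\vec n$ along the $z$-axis, so that
$n\cdot p = p^0 - p^3$ and ${\bar n}\cdot p = p^0 + p^3$. With the spacelike
$p_\perp^\mu$, for which $p_\perp^2 = -\vec p_\perp^{\,2}$, one finds
\begin{align}
p^+ p^- + p_\perp^2 = (p^0)^2 - (p^3)^2 - \vec p_\perp^{\,2} = p^2 \,,
\end{align}
and the factors of $1/2$ in the decomposition above follow from
$n\cdot{\bar n}=2$ together with $n^2={\bar n}^2=0$.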
It is also convenient to denote the momentum of collinear particles in
the $\vec{n}$ and $-\vec{n}$ directions by the subscripts $n$ and ${\bar n}$
respectively, which correspond to the large energy modes in the
corresponding jets. Thus we have collinear labels
\begin{align}
& n\ \ \text{for the top-jet,}\ &{\bar n}& \ \ \text{for the antitop-jet} \,.
\end{align}
The momentum of soft particles that communicate between the jets will be denoted
by a subscript $s$. We also have mass-modes that are required in order to
describe certain top-quark vacuum polarization loops. The momenta of the
collinear, mass, and soft modes\footnote{ In some factorization theorems it is
necessary to distinguish between soft and ultrasoft particles, and between two
versions of SCET: called \ensuremath{{\rm SCET}_{\rm I}}\xspace and \ensuremath{{\rm SCET}_{\rm II}}\xspace. In this paper we only deal with
\ensuremath{{\rm SCET}_{\rm I}}\xspace with ultrasoft gluons. For simplicity we will therefore simply use the
term soft modes. For modes with momenta $p^\mu\sim (m,m,m)$ that are specific
to the massive SCET theory, we use the term ``mass-modes''.} have the typical
scalings shown in table~\ref{table_fields} in the SCET column, where here
$\lambda$ is the small expansion parameter.
\begin{table}[t!]
\begin{center}
\begin{tabular}{|crl|rl|}
\hline
&
\multicolumn{2}{c}{SCET [$\lambda\sim m/Q\ll 1$] }
\vline & \multicolumn{2}{c}{bHQET [$\Gamma/m\ll 1$] } \vline
\\ \hline
& $n$-collinear ($\xi_n$, $A_n^\mu$)
& \hspace{0.2cm} $p_n^\mu \!\sim\! Q(\lambda^2,1,\lambda)$
& $n$-ucollinear ($h_{v_+}$, $A_{+}^\mu$)
& \hspace{0.2cm} $k^\mu\!\sim\! \Gamma(\lambda,\lambda^{-1},1)$ \\
& ${\bar n}$-collinear ($\xi_{\bar n}$, $A_{\bar n}^\mu$)
& \hspace{0.2cm} $p_{\bar n}^\mu\!\sim\! Q(1,\lambda^2,\lambda)$
& ${\bar n}$-ucollinear ($h_{v_-}$, $A_{-}^\mu$)
& \hspace{0.2cm} $k^\mu\!\sim\! \Gamma(\lambda^{-1},\lambda,1)$ \\
& mass-modes ($q_m$, $A_m^\mu$)
& \hspace{0.2cm} $p_m^\mu\!\sim\! Q(\lambda,\lambda,\lambda)$ & &
\\
Crosstalk: & soft
($q_{s}$, $A^\mu_{s}$)
& \hspace{0.2cm} $p_s^\mu\!\sim\! Q (\lambda^2,\lambda^2,\lambda^2)$
& same soft ($q_{s}$, $A^\mu_{s}$)
& \hspace{0.2cm} $p_s^\mu\!\sim\! (\Delta,\Delta,\Delta)$
\\
\hline
\end{tabular}
\end{center}
\vskip-0.4cm
\caption{ \label{table_fields} Summary of the fields required in SCET and
bHQET. The first field in each bracket is a quark, and the second
is a gluon. The scaling of momentum components is given for
$(p^+,p^-,p^\perp)$. After
factorization, the soft fields on the last line generate a cross-talk theory
that communicates with collinear
fields in both SCET and bHQET through two kinematic variables. $\Delta$ is
the scale for the soft modes.}
\end{table}
A particle with components scaling as $(\lambda^2,1,\lambda)$ has a small
$\perp$-momentum relative to its energy, and is said to be collinear to the
$n^\mu$ direction etc. Both $\lambda$ and the hard scale $Q$ have a size that
depends on the particular process under study. For example, in $B\to X_s
\gamma$ the hard scale is the $b$-quark mass $m_b$, and the expansion parameter
is $\sqrt{{\Lambda_{QCD}}/{m_b}}$. For pair production of top jets, the hard
scale $Q$ is the center of mass energy, and the SCET expansion parameter is
\begin{eqnarray}
\lambda \sim \frac{m}{Q} \,.
\end{eqnarray}
It follows that the typical virtualities of the collinear, mass, and soft modes in SCET satisfy
\begin{eqnarray}
p_n^2 \sim p_{{\bar n}}^2 \sim m^2, \qquad p_m^2\sim m^2,
\qquad \text{and}\quad p_s^2 \sim \frac{m^4}{Q^2}.
\end{eqnarray}
Since $m^4/Q^2 \gg \Lambda _{QCD}^2$, the soft modes in this theory still
contain perturbative components as well as the underlying non-perturbative
dynamics at smaller scales. Using $m =171\, \text{GeV}$ this is true for
$Q\lesssim 40\,\text{TeV}$, i.e.\,for any conceivable c.m.\,energy of a future
Linear Collider. The soft particles correspond to modes with wavelengths that
allow cross talk between the two jets. In addition, at two-loop order the soft
gluons in SCET interact with virtual top-quarks which are described by the
mass-modes indicated in table~\ref{table_fields}. These mass-modes do not
interact directly with the collinear fields and only appear as virtual effects
for our observable, because we only consider cases where $s_{t,\bar t}\ll Q m$.
(As discussed below Eq.~(\ref{state}).) In addition we have virtual collinear
top-quarks that can interact with the collinear particles through the collinear
Lagrangian. The $n$-collinear, ${\bar n}$-collinear, mass modes, and soft modes are
described by separate quark and gluon fields which are also listed in
table~\ref{table_fields}. Hard modes involving momenta $p^\mu\sim Q$ have
already been integrated out when QCD is matched onto SCET.
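For orientation (representative numbers of our own), take $Q=1\,\text{TeV}$ and
$m=171\,\text{GeV}$. Then
\begin{align}
\lambda\sim\frac{m}{Q}\simeq 0.17\,,\qquad
\sqrt{p_n^2}\sim m \simeq 171\,\text{GeV}\,,\qquad
\sqrt{p_s^2}\sim \frac{m^2}{Q}\simeq 29\,\text{GeV}\gg \Lambda_{\rm QCD}\,,
\end{align}
and $m^2/Q$ only approaches $\Lambda_{\rm QCD}$ for
$Q\sim m^2/\Lambda_{\rm QCD}={\cal O}(40\,\text{TeV})$, which is the origin of
the bound quoted above.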
At leading order the SCET Lagrangian for collinear particles in
different directions can be written as a soft Lagrangian plus a sum of
collinear terms~\cite{Bauer:2002nz}, ${\cal L}^{(0)}={\cal
L}_s+\sum_{n_i} {\cal L}^{(0)}_{n_i} $. The sum satisfies the
constraint $n_i\cdot n_j\gg \lambda^2$ for $i\ne j$, with the choice
of $\lambda$ determining what is meant by distinct collinear
directions. The collinear particles in different sectors only interact
via soft gluon exchange or interactions in external operators.
When the $\perp$-momentum of the collinear particles is of the same
size as the quark mass the result for the leading order collinear
Lagrangian~\cite{Bauer:2000yr,Bauer:2001ct} must include the quark
mass terms derived in Ref.~\cite{Leibovich:2003jd} (see also
Ref.~\cite{Rothstein:2003wh}). The collinear quark Lagrangian for the
direction $n$ is therefore given by
\begin{eqnarray}
\label{Lscet}
{\cal L}^{(0)}_{qn} = \bar{\xi}_n\Big [ i n\cdot D_s + gn \cdot
A_n + (i D\!\!\!\!\slash _c^\perp \!-\! m) W_n\frac{1}{{\bar n}\!\cdot\! {\cal P}}W_n^\dagger
(iD\!\!\!\!\slash _c^\perp \!+\! m) \Big ] \frac{\bar n\!\!\!\slash}{2} \xi_n \,,
\end{eqnarray}
with $D_c^\perp \sim m \gg D_s^\perp$. There is also an $n$-collinear Lagrangian
for gluons~\cite{Bauer:2001yt}. Here the soft and collinear covariant
derivatives are
\begin{eqnarray}
i D_s^\mu = i \partial ^\mu + g A_s^\mu , \qquad
i D_c^\mu = {\cal P}^\mu + g A_n^\mu \,,
\end{eqnarray}
where ${\cal P}^\mu$ is a label operator picking out the large
collinear momentum of order $Q$ and $Q\lambda$ of a collinear
field~\cite{Bauer:2001ct}, while the partial derivative
acts on the residual momentum components $\partial ^\mu\sim\lambda^2$.
The term $W_n$ is the momentum space Wilson line built out of collinear
gluon fields
\begin{eqnarray}
W_n (x) = \sum _{\text{perms}} \text{exp} \>\big ( - \frac{g}{\bar{{\cal P}}}
{\bar n} \cdot A_{n}(x) \> \big ) \,.
\end{eqnarray}
We also note that Eq.~(\ref{Lscet}) is the bare Lagrangian. In particular, any
mass definition can be chosen for $m$ through an appropriate renormalization
condition without breaking the power-counting. At ${\cal O}(\alpha_s)$ these
mass-schemes are the same as those in QCD~\cite{Chay:2005ck}, because the
self-energy graphs are directly related.
\begin{figure}
\centerline{
\hspace{2cm}\includegraphics[width=10cm]{Top-Jet-hemi.eps}
}
\caption{Final state jets in SCET for stable top-quarks with invariant mass $\sim m^2$. The
invariant mass is restricted and the top-decay products become explicit by matching onto HQET. }
\label{fig:topjet}
\end{figure}
An example of an external operator that connects different collinear
sectors is the jet production current, which couples to the $\gamma^*$
or $Z^*$. In QCD the production matrix element is $\langle X | {\cal
J}_{a,v}^\mu | 0 \rangle$ where $\langle X|$ is the final state. The
required vector and axial currents are given by
\begin{align}
\label{QCDcurrents}
{\cal J}^\mu_v(x) &= \bar{\psi}(x) \gamma^\mu \psi(x) \,,
& {\cal J}^\mu_a(x) & = \bar{\psi}(x) \gamma^\mu \gamma_5 \psi(x) \,,
\end{align}
and for convenience we will adopt the short-hand notation ${\cal J}^\mu_i
=\bar\psi(x) \Gamma_i^\mu \psi(x)$. The matching relation of these QCD
currents to SCET currents
is given by the convolution formula~\cite{Bauer:2000yr}
\begin{eqnarray}
\label{currentmatch}
{\cal J}^\mu_i(0) = \int\!\! d\omega\, d\bar\omega\, C(\omega,\bar\omega,\mu)
J^{(0)\mu}_i(\omega,\bar \omega,\mu) \,,
\end{eqnarray}
where $C$ contains short-distance dynamics at the scale $Q$, while $
J_i^{(0)\mu}$ describes fluctuations at all longer distance scales. In the
presence of multiple collinear fields, as well as modes scaling like our
mass-modes and soft-modes, the construction of currents in SCET has been
discussed in great detail in Ref.~\cite{Bauer:2002nz}. Interactions between the
mass-modes and the collinear-modes produce offshell particles, which when
integrated out leave residual interactions through Wilson lines in the SCET
current. The SCET production current at leading order in $\lambda$ is given by
\begin{eqnarray}
\label{currentscet}
J^{(0)\mu}_i(\omega,\bar \omega,\mu)
= \bar \chi_{n,\omega}(0) S_n^\dagger \Gamma^\mu_i S_{\bar n} \chi_{{\bar n},\bar\omega}(0) \,,
\end{eqnarray}
where $\chi_{n,\omega}(0) = \delta(\omega- {\bar n}\!\cdot\! {\cal P}) (W_n^\dagger
\xi_n)(0)$ and $\chi_{{\bar n},\bar\omega}(0) = \delta(\bar\omega- n\!\cdot\! {\cal P})
(W_{\bar n}^\dagger \xi_{\bar n})(0)$. The mass-mode Wilson lines $S_n^\dagger$ and
$S_{\bar n}$ will be described below. Here the $(0)$ indicates that the fields are at
coordinate $x^\mu=0$, and we recall that this $x^\mu$ dependence carries
information about the residual momenta at the scale $Q\lambda^2=m^2/Q$. The
dependence on larger momenta is encoded in labels on the collinear
fields~\cite{Bauer:2001ct}, and, for example, $\delta(\omega-{\bar n}\cdot P)$ forces
the total minus-label-momentum of $(W^\dagger_n \xi_n)$ to be $\omega$. We also
use the notation $\chi_{n} = (W_n^\dagger \xi_n)$ and $\chi_{{\bar n}} =
(W_{\bar n}^\dagger \xi_{\bar n})$.
One can decouple the soft and collinear modes in ${\cal L}_{qn}^{(0)}$ by
performing a field redefinition on collinear fields~\cite{Bauer:2001yt}
\begin{eqnarray} \label{fd}
\xi _{n} \to Y_n \xi _{n} \,, \qquad
A_{n}^\mu \to Y_n\, A_{n}^{\mu}\, Y_n^\dagger \,,
\end{eqnarray}
where $Y_n$ is a soft Wilson line
\begin{align} \label{Yn}
Y_n(x) &= \overline {\rm P} \:
\exp\Big(-i g\! \int_{0}^\infty \!\!\!ds\, n\!\cdot\! A_{s}(ns\!+\! x) \Big)
\,.
\end{align}
This gives
\begin{align}
Y_n^\dagger(x) &= {\rm P} \,
\exp\Big(i g\! \int_{0}^\infty \!\!\!ds\, n\!\cdot\! A_{s}(ns\!+\!x) \Big) \,,
\end{align}
which satisfies $Y_n^\dagger Y_n=1$. For two-jet production the factorization
is most transparent~\cite{Bauer:2002ie} with the reference point $s_0=\infty$
shown in Eq.~(\ref{Yn}). The gluon fields are either antipath-ordered (for
$\overline{\rm P}$) or path-ordered (for ${\rm P}$). We use the same Wilson
line for both the quark and antiquark parts of $\xi_n$. Another possibility is
to make different field redefinitions on the particle and antiparticle parts of
the fields~\cite{Chay:2004zn}. In fact, all results are independent of the
choice of reference point in the field redefinition; the path is determined
entirely by changes the field redefinition induces on the operators and the
interpolating fields for the states~\cite{Arnesen:2005nk}.
The mass-mode Wilson line $S_n(x)$ is defined in an identical manner to
Eq.~(\ref{Yn}), but with $n\cdot A_s\to n\cdot A_m$. In order to avoid double
counting with the effects contained in the soft Wilson lines the mass-mode
$A_m^\mu$ fields are defined with zero-bin subtractions for the
soft-region~\cite{Manohar:2006nz}, and we have mass-mode top-quarks $\psi_m$
with a mass $m$. Any graphs with mass-mode gluons that do not involve a
top-bubble from $\psi_m$ fields are exactly canceled by these zero-bin
subtractions. Thus the mass-modes only contribute in these vacuum polarization
graphs. The soft-gluons can also couple to the $\psi_m$ fields, however they do
so with a multipole expansion, and therefore do not inject momentum into the
closed $\psi_m$ loop.
After the change of variable in Eq.~(\ref{fd}) the leading order SCET collinear
quark Lagrangian and current become
\begin{align}
{\cal L}^{(0)}_{qn} &= \bar{\xi}_n\Big [ i n\cdot \partial + g n \cdot
A_n + (i D\!\!\!\!\slash _c^\perp \!-\! m) W_n\frac{1}{\bar{{\cal P}}}W_n^\dagger
(iD\!\!\!\!\slash _c^\perp \!+\! m) \Big ] \frac{\bar n\!\!\!\slash}{2} \xi_n
\,,\nn\\
J^{(0)\mu}_i &= \overline \chi_{n,\omega} Y_n^\dagger
S_n^\dagger \Gamma_i^\mu S_{\bar n}
Y_{\bar n} \chi_{{\bar n},\bar\omega}(0)\,,
\end{align}
where we used the property $Y^\dagger_n \> n\cdot D_s \> Y_n = n\cdot
\partial$. The only coupling of the soft gluon to the collinear quark
was through $i n\cdot D_s$ which is no longer present (and a similar
property occurs in the collinear gluon action). These soft couplings
reappear as Wilson lines in the current as shown above. Hence we have
achieved soft-collinear decoupling in the Lagrangian and the current.
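The origin of the used property can be seen schematically (shown here in the
abelian limit, where path-ordering can be ignored): since $Y_n(x)$ depends on
$x$ only through $ns+x$, a derivative along $n$ becomes a total $s$-derivative,
and with fields vanishing at $s=\infty$,
\begin{align}
i n\!\cdot\!\partial\, Y_n(x) = -\,g\, n\!\cdot\! A_s(x)\, Y_n(x)\,,
\end{align}
so that $in\cdot D_s\,(Y_n f) = Y_n\,(in\cdot\partial f)$ for any function $f$,
which is the statement $Y_n^\dagger\, in\cdot D_s\, Y_n = in\cdot\partial$ used
above.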
In the two-jet process we wish to factorize we must also consider the
transformation property of the state $|X\rangle$ under
Eq.~(\ref{fd}). For manipulations in the factorization theorem for
two-jet production we decompose the state into collinear and soft
pieces,
\begin{align} \label{state}
\langle X | = \langle X_n X_{\bar n} X_s | \,.
\end{align}
Note that this decomposition is only valid for the states we are interested in
for describing the dijet region, not for a general state in QCD. Since there is
always at least one $n$-collinear and one ${\bar n}$-collinear particle, we do not
consider any mass-modes, $X_m$, in these states either. The presence of a mass mode
would induce an invariant mass $p_X^2 = (p_n+p_m)^2 \simeq p_n^- p_m^+ \simeq Q
m \gg m^2$, which would make it impossible to satisfy the invariant mass
condition required to study the peak region. Therefore the mass-modes will only
appear as virtual contributions. The collinear states $\langle X_n|$ and
$\langle X_{\bar n} |$ are a color triplet and color antitriplet, just like a quark
and antiquark state. Therefore, we must consider how these collinear states
transform under the change in the action induced by Eq.~(\ref{fd}). However,
because these color triplet states can be derived from the out states at large
time, $t\to \infty$, they are not affected by the field redefinition with reference
point at $\infty$~\cite{Arnesen:2005nk}. With the current at $x$, we therefore
have
\begin{align} \label{melt}
\langle X | {\rm T}\big\{ \overline \chi_{n,\omega} S_n^\dagger \Gamma S_{\bar n}
\chi_{{\bar n},\bar\omega} \big\} | 0 \rangle \to
\langle X | {\rm T} \big\{ \overline \chi_{n,\omega} Y_n^\dagger\, S_n^\dagger\,
\Gamma\, S_{\bar n}\, Y_{\bar n} \chi_{{\bar n},\bar\omega} \big\} | 0 \rangle \,.
\end{align}
Here the ${\rm T}$ reminds us to keep the proper time-ordering of the
$A^a(x)$ gluon fields in the $Y$'s. There is no ordering issue between
fields in $Y_{\bar n}$ with
those in $Y_n^\dagger$, since they are space-like separated and
commute~\cite{Bauer:2002ie}. We also need the complex conjugate of
Eq.~(\ref{melt}) for the matrix element, which is
\begin{align} \label{cmelt}
\langle 0 | {\rm T}\big\{ \overline \chi_{{\bar n},\bar\omega} S_{\bar n}^\dagger \Gamma
S_n \chi_{n,\omega} \big\} | X \rangle
\to
\langle 0 | \overline {\rm T}\big\{ \overline \chi_{{\bar n},\bar\omega}
Y_{\bar n}^\dagger\, S_{\bar n}^\dagger\, \overline
\Gamma\, S_n\, Y_n \chi_{n,\omega} \big\} | X \rangle
\,,
\end{align}
where $\overline {\rm T}$ is anti-time-ordering.
Note that
\begin{align} \label{TY}
{\rm T}\ (Y_{\bar n})^T
&= (Y_{\bar n}^\dagger)^*
= \overline {Y_{\bar n}}^\dagger
= {\rm P} \: \exp\Big( i g\! \int_{0}^{\infty} \!\!\!ds\,
{\bar n}\!\cdot\! \overline {A}_{s}({\bar n} s\!+\! x) \Big)
\,, \nn\\
{\overline {\rm T}}\ (Y_{\bar n}^\dagger)^T
&= (Y_{\bar n})^*
= \overline {Y_{\bar n}}
= \overline {\rm P}\: \exp\Big(-i g\! \int_{0}^{\infty} \!\!\!ds\,
{\bar n}\!\cdot\! \overline {A}_{s}({\bar n} s\!+\!x) \Big)
\,,
\end{align}
where $\overline {A}_{s} = A^A_{s} \overline {T}^A$, with $\overline {T}^A = -
(T^A)^T$ the generator for the $\overline 3$ representation, and the superscript
$T$ is the transpose with respect to the color indices of the fundamental
representation. If we switch to these barred Wilson lines then the
time-ordering and anti-time-ordering becomes redundant. Eq.~(\ref{TY}) applies equally
well for the $S$ Wilson lines. Considering the squared matrix element for the
cross-section we find
\begin{align} \label{melt2}
& \langle 0 | \overline {\rm T}\big\{ \overline \chi_{{\bar n},\bar\omega'}
Y_{\bar n}^\dagger\, S_{\bar n}^\dagger\, \overline
\Gamma\, S_n\, Y_n \chi_{n,\omega'} \big\} | X \rangle
\langle X | {\rm T} \big\{ \overline \chi_{n,\omega} Y_n^\dagger\, S_n^\dagger\,
\Gamma\, S_{\bar n}\, Y_{\bar n} \chi_{{\bar n},\bar\omega} \big\} | 0 \rangle\nn\\[4pt]
&= \langle 0 | \overline \chi_{{\bar n},\bar\omega'}^a (\overline {Y}_{\bar n})^{ba}
\, (\overline {S}_{\bar n})^{b'b}
(\overline\Gamma\, S_n Y_n \chi_{n,\omega'})^{b'} | X \rangle
\langle X | (\overline \chi_{n,\omega} Y_n^\dagger\, S_n^\dagger
\Gamma)^{c'}\, (\overline {S}^\dagger_{\bar n})^{cc'}
(\overline {Y}^\dagger_{\bar n})^{dc} \chi_{{\bar n},\bar\omega}^d | 0
\rangle \nn\\
&= {\cal M}(m,\mu) \
\langle 0 | \overline \chi_{{\bar n},\bar\omega'}^a (\overline {Y}_{\bar n})^{ba}
\, (\overline\Gamma\, Y_n \chi_{n,\omega'})^{b'} | X \rangle
\langle X | (\overline \chi_{n,\omega} Y_n^\dagger\,
\Gamma)^{c'}\,
(\overline {Y}^\dagger_{\bar n})^{dc} \chi_{{\bar n},\bar\omega}^d | 0
\rangle
\,,
\end{align}
where $a,b,c,d,b',c'$ are color indices. The decoupling of soft gluons in
Eq.~(\ref{melt2}) is identical to that in massless two-jet production, and
ignoring the mass-mode Wilson lines the discussion above agrees with the SCET
derivation in Ref.~\cite{Bauer:2002ie}, as well as the original derivation in
Refs.~\cite{Korchemsky:1994is,Korchemsky:2000kp}. To obtain the last line in
Eq.~(\ref{melt2}) we note that the Dirac structures $\Gamma$ and
$\overline\Gamma$ are color singlets, and that the mass-mode Wilson lines can be
separated into vacuum matrix elements since there are no mass-modes in the
states. Furthermore
\begin{align}
\langle 0 | (\overline {S}_{\bar n})^{b'b} (S_n)^{b'a} | 0 \rangle &=
\frac{\delta^{ba}}{N_c} \langle 0 | (\overline {S}_{\bar n})^{b'a'} (S_n)^{b'a'} | 0 \rangle\,,
\end{align}
with an analogous result for $\langle 0 | (S_n^\dagger)^{ac'} (\overline
{S}^\dagger_{\bar n})^{cc'} | 0\rangle$, so this contracts the color indices on
either side of the product of soft Wilson-line factors. Thus defining
\begin{align}
{\cal M}(m,\mu) &\equiv \frac{1}{N_c^2} \big| \langle 0 | \overline
{S}_{\bar n}^{ab} S_n^{ab} | 0 \rangle \big|^2 \,,
\end{align}
we are left with the matrix element shown on the last line of Eq.~(\ref{melt2}).
Here ${\cal M}(m,\mu)=1+{\cal O}(\alpha_s^2)$.
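The tree-level value is easily checked (our computation): with
$S_n^{ab}=\overline{S}_{\bar n}^{ab}=\delta^{ab}+{\cal O}(g)$ one finds
\begin{align}
{\cal M}\big|_{\rm tree} = \frac{1}{N_c^2}\,\big|\delta^{ab}\delta^{ab}\big|^2
= \frac{N_c^2}{N_c^2} = 1\,,
\end{align}
and, as noted above, graphs without a top-quark vacuum polarization bubble are
removed by the zero-bin subtractions, so the first correction arises at
${\cal O}(\alpha_s^2)$.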
The soft-collinear decoupling property is crucial to organizing the physics of
the massive two-jet problem. As we will show in Sec.~\ref{section3}, there
is a factorization theorem in SCET that decouples the soft and collinear modes
at leading order which allows us to study the physics of each jet
independently. The cross-talk is confined to a simple top mass
independent vacuum matrix element involving the
$Y$ Wilson lines,
\begin{align}\label{Yme}
\langle 0 | (\overline {Y}_{\bar n})^{ba} \,
(Y_n)^{bc} | X_s \rangle
\langle X_s | (Y_n^\dagger)^{cd}\, (\overline {Y}^\dagger_{\bar n})^{ad} | 0
\rangle
\,,
\end{align}
which agrees with the corresponding soft matrix element for massless
quark production~\cite{Korchemsky:2000kp,Lee:2006nr,Bauer:2003di} and
which will eventually determine the soft function $S_{\rm
hemi}(\ell^+,\ell^-)$ to be used in Eq.~(\ref{FactThm}). As we also
show in Sec.~\ref{section3}, the precise definition of the soft
function $S$ depends on the prescription for how the momenta
of soft particles enter the top and antitop invariant masses $\hat
s_t$ and $\hat s_{\bar t}$, respectively. In the next subsection we
describe how the matrix element in Eq.~(\ref{Yme}) is modified when we
integrate out the top-quark mass.
Finally, in SCET, because the top-quark mass $m$ and the mass of the $W$-boson,
$m_W$, are still low energy scales, the decay of an $n$-collinear top-quark is
simply described by the full electroweak interaction,
\begin{eqnarray} \label{Lnew}
{\cal L}_{ew} = \frac{g_2}{\sqrt{2}}\, \bar b W_\mu^- \gamma^\mu P_L t
+ \frac{g_2}{\sqrt{2}}\, \bar t W_\mu^+ \gamma^\mu P_L b \,,
\end{eqnarray}
where $G_F=\sqrt{2} g_2^2/(8m_W^2)$ is the Fermi constant. This treatment is
consistent since we can treat the top decay as fully inclusive up to ${\cal
O}(m^2/Q^2)$.
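For orientation (the standard leading-order result, evaluated here by us with
$|V_{tb}|\simeq 1$ and a massless $b$-quark), this interaction generates the
total top width appearing in the bHQET Lagrangians below,
\begin{align}
\Gamma = \frac{G_F\, m^3}{8\sqrt{2}\,\pi}
\Big(1-\frac{m_W^2}{m^2}\Big)^2\Big(1+\frac{2m_W^2}{m^2}\Big)
\simeq 1.4\ \text{GeV}
\end{align}
for $m=171\,\text{GeV}$, which is the number behind the estimate
$\Gamma/m\approx 1/120$ used earlier.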
The collinear $A_n$ and $A_{\bar n} $ gluons in SCET can induce
fluctuations $\hat s_t, \hat s_{\bar t}\sim m$. Once we restrict
ourselves to events with $\Gamma\sim \hat s_t, \hat s_{\bar t} \ll
m$, i.e.\,we force the top quark and antiquark to remain close to
their mass shell, the situation looks very much like two distinct
copies of HQET in boosted frames. There is nothing special in the
dynamics that sets the scale $m^2/Q$ for the soft interactions, and so
we call $\Delta$ the scale that controls the soft cross-talk. In the
field theory $\Delta$ will be defined as the scale where we model or
fit the primordial soft function. Generally we will take $m\gg
\Delta \sim \Gamma$, although any value $\Delta > \Lambda_{\rm QCD}$ can be
considered. So we must switch from SCET onto these HQET theories, and also
consider what happens to the decay interaction in Eq.~(\ref{Lnew}). We describe
the boosted HQET theories in detail in the next section, and we also discuss how
the soft cross-talk interactions remain active when the fluctuations at the
top mass scale $m$ are integrated out.
Since the above Lagrangians and currents are LO in $\lambda$, it is
natural to ask about the role of power corrections. As it turns out,
higher order Lagrangians and currents give corrections to our analysis
at ${\cal O}(\alpha_s m/Q)$, ${\cal O}(\Delta/Q)$, ${\cal
O}(m^2/Q^2)$, or ${\cal O}(\Gamma/m)$. The absence of ${\cal O}(m/Q)$
implies that the $m/Q$ expansion does not significantly modify the
top-mass determination. The leading action contains all $m/Q$
corrections that do not involve an additional perturbative gluon, so
the corrections are ${\cal O}(\alpha_s m/Q)$. We have also verified that at
tree level the $m/Q$ corrections to the SCET
current~\cite{Rothstein:2003wh} vanish when contracted with the
leptonic tensor. Furthermore, many of the higher order $m/Q$ corrections
have the form of normalization corrections, and thus do not change the
shape of the invariant mass distribution. Subleading soft
interactions are ${\cal O}(\Delta/Q)$. The interplay of our
hemisphere invariant mass variable with the top decay can induce
${\cal O}(m^2/Q^2)$ corrections, as we discuss later on. Finally there
will be power corrections of ${\cal O}(\Gamma/m)$ in bHQET.
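To get a feeling for the sizes involved (illustrative numbers of our own), take
$Q=1\,\text{TeV}$, $m=171\,\text{GeV}$, $\Gamma=1.4\,\text{GeV}$,
$\alpha_s\simeq 0.1$, and $\Delta\sim\Gamma$. Then
\begin{align}
\alpha_s\,\frac{m}{Q}\sim 2\times 10^{-2}\,,\qquad
\frac{\Delta}{Q}\sim 10^{-3}\,,\qquad
\frac{m^2}{Q^2}\sim 3\times 10^{-2}\,,\qquad
\frac{\Gamma}{m}\sim 10^{-2}\,,
\end{align}
so all of the listed power corrections enter at or below the few-percent level.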
\subsection{Boosted HQET with Unstable Particles and Soft Cross-Talk}
\label{buHQET}
{\it Boosted Heavy Quarks.}
HQET \cite{Georgi:1990um, Eichten:1989zv, Grinstein:1990mj, Isgur:1989vq,
Isgur:1989ed} is an effective theory describing the interactions of a heavy
quark with soft degrees of freedom, and also plays a crucial role for jets
initiated by massive unstable particles in the peak regions close to the heavy
particles' mass shell. The momentum of a heavy quark interacting with soft
degrees of freedom can be written as
\begin{eqnarray}
\label{HQETmomdecomp}
p ^\mu = m v^\mu + k^\mu ,
\end{eqnarray}
where $k^\mu$ denotes momentum fluctuations due to interactions with
the soft degrees of freedom and is much smaller than the heavy quark
mass $|k^\mu | \ll m$. Also typically $v^\mu\sim 1$ so that we are
parametrically close to the top quark rest frame, $v^\mu=(1,\vec
0)$.
In the top-quark rest frame we have $\Gamma \lesssim k^\mu \ll m$, where $k^\mu$
refers to momentum fluctuations of the top due to interactions with gluons
collinear to its direction, which preserve the invariant mass conditions $\Gamma
\sim \hat s_t,\hat s_{\bar t} \ll m$. For our top-quark analysis, the center of
mass frame is the most convenient one to set up the degrees of freedom. In this
frame the gluons collinear to the top-quark which preserve the invariant mass
condition will be called {\it ultra-collinear (ucollinear)} in the $n$
direction. A different set of ${\bar n}$-ucollinear gluons interacts with the antitop
quark which moves in the ${\bar n}$ direction. The leading order Lagrangian of the
EFT describing the evolution and decay of the top or antitop close to its mass
shell is given by
\begin{align}
\label{LbHQET}
{\cal L}_{+} &=
\bar{h}_{v_+} \big( i v_+ \cdot D_+ - \delta m + \frac{i}{2} \Gamma \big) h_{v_+ } ,
&{\cal L}_{-} &=
\bar{h}_{v_-} \big( i v_- \cdot D_- -\delta m+ \frac{i}{2} \Gamma \big) h_{v_-} ,
\end{align}
where the $+$ and $-$ subscripts refer to the top and antitop sectors
respectively, and $iD_\pm^\mu = i\partial^\mu+ g A_\pm^\mu$. These HQETs
represent an expansion in $\Gamma/m$. The HQET field $h_{v_+}$ annihilates top
quarks, while $h_{v_-}$ creates antitop quarks. In the c.m. frame the
components of $k^\mu$ are no longer homogeneous in size, and
$v_\pm^\mu \not\sim 1$. Instead for the $(+,-,\perp)$ components we have
\begin{eqnarray}
\label{BHQETres}
v^\mu_+ &=& \bigg( \frac{m}{Q} , \frac{Q}{m}, \mathbf{0}_\perp \bigg),
\qquad\quad
k^\mu_+ \sim \Gamma\,\bigg(\frac{m}{Q}, \frac{Q}{m}, 1 \bigg),
\\
v^\mu_- &= & \bigg( \frac{Q}{m}, \frac{m}{Q} , \mathbf{0}_\perp \bigg),
\qquad\quad
k^\mu_- \sim \Gamma\bigg(\frac{Q}{m}, \frac{m}{Q}, 1\bigg)
\nonumber .
\end{eqnarray}
Note that the $\Gamma$ in Eq.~(\ref{BHQETres}) can be replaced by a
larger scale, of order $\hat s$, as long as this scale is much less
than $m$. Eq.~(\ref{BHQETres}) is easily
obtained by boosting from the rest frame of the top and antitop
respectively with a boost factor of $Q/m$. In this naming scheme we
will continue to call the gluons that govern the cross-talk between
top and antitop jets {\it soft}. We emphasize that they are not
included in ${\cal L}_\pm$, since they have nothing to do with the
gluons in standard HQET. Soft gluon interactions will be added
below. To avoid double counting with the soft gluons, the
ultracollinear gluons are defined with zero-bin
subtractions~\cite{Manohar:2006nz}, so that for example ${\bar n}\!\cdot\!
k_+\ne 0$ and $n\!\cdot\! k_- \ne 0$. Finally, since HQET is applied for $\mu<m$
there are no analogs of the SCET mass-modes in this theory. All effects
associated with virtual top-quark loops are integrated out at the scale $m$.
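It is straightforward to verify the consistency of these assignments (a check of
our own, using $a\cdot b = \frac{1}{2}(a^+ b^- + a^- b^+) + a_\perp\!\cdot\!
b_\perp$ and $n\!\cdot\!{\bar n}=2$):
\begin{align}
v_+^2 = \frac{m}{Q}\,\frac{Q}{m} = 1\,,\qquad
v_+\!\cdot\!{\bar n} = \frac{Q}{m}\,,\qquad
v_+\!\cdot\! k_+ \sim \Gamma\,,
\end{align}
so the boosted velocities remain normalized, the boost factor $Q/m$ appears
explicitly, and the residual fluctuations
$(mv_+ + k_+)^2 - m^2 = 2m\, v_+\!\cdot\! k_+ + k_+^2 \sim m\Gamma$ are
precisely of the size needed for $\hat s_t\sim\Gamma$.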
The leading order Lagrangians ${\cal L}_\pm$ contain a residual mass
term $\delta m$ which has to be chosen according to the desired top
quark mass scheme. For a given top mass scheme $m$, the residual mass
term is determined by its relation to the pole mass $m_{\rm pole} \, =
\, m + \delta m$. Anticipating that we have to switch to a properly
defined short-distance mass
definition~\cite{Hoang:1998nz,Beneke:1998rk,Uraltsev:1998bk,Hoang:1998ng}
when higher order QCD corrections are included, we note that only
short-distance mass definitions are allowed which do not violate the
power counting of the bHQET theories, $\delta m\sim \Gamma$. This
excludes for example the use of the well known $\overline{\rm MS}$
mass, since in this scheme $\delta m\sim \alpha_s m \gg
\Gamma$. In practice, this means that using the $\overline{\rm MS}$ mass leads
to an inconsistent perturbative expansion as explained in
Sec.~\ref{sec:sdmass}. This is the
reason why the $\overline{\rm MS}$ mass can not be measured directly from
reconstruction.
The leading order Lagrangians ${\cal L}_\pm$ also contain top-width terms
$i\Gamma/2$. An effective field theory treatment of the evolution and decay of
a massive unstable particle close to its mass shell was developed
in~\cite{Fadin:1991zw,Beenakker:1999hi, Beneke:2003xh,
Beneke:2004km,Hoang:2006pd,Hoang:2004tg}. The examples treated in these
references were the resonant production of a single unstable scalar particle,
and the leading and subleading width corrections to threshold $t\bar t$
production. In our case, we deal with the energetic pair production of massive
unstable fermions, and we arrive at two copies of this unstable HQET
corresponding to the top and antitop sectors. In these two HQET theories we
treat the top and antitop decays as totally inclusive, since we do not require
detailed differential information on the decay products. So the total top width
$\Gamma$ appears as an imaginary mass term in ${\cal L}_\pm$, which is obtained
by simply matching the imaginary part of the top and antitop self-energy graphs
from SCET onto bHQET. As we show in Sec.~\ref{section3}, this inclusive
treatment of the top decay is consistent with the hemisphere invariant mass
definition we employ in this work up to power corrections of order $(m/Q)^2$. We
will come back to the role of higher order power corrections in the treatment of
the finite top lifetime at the end of this section.
{\it Soft Interactions.}
Let us consider how the soft gluons interact with our heavy quarks in each bHQET.
For a heavy quark in the boosted frame we consider interactions with soft gluons
of momentum
\begin{align}
\ell^\mu \lesssim (\Delta,\Delta,\Delta) \,.
\end{align}
Our main interest is in the case $\Delta\lesssim \Gamma$, but it is useful to keep a
more general $\Delta \ge \Lambda_{\rm QCD}$ for the moment. We wish to
demonstrate that these gluons are still entirely described by the cross-talk
matrix element in Eq.~(\ref{Yme}), and that this is true without needing to
expand in the ratio of $\Delta$ to $\Gamma$. Or in other words, that the simple
eikonal propagators for the soft-gluon attachments to the energetic tops remain
valid even below the mass of the quarks and even in the presence of the
top-width. Our demonstration assumes the reader is quite familiar with
Ref.~\cite{Bauer:2001yt}. To prove this we go back to the original SCET Lagrangian in
Eq.~(\ref{Lscet}) prior to the field redefinition, and match the soft
interactions onto the HQET theory. This gives the same Lagrangian as in
Eq.~(\ref{LbHQET}) but with replacements
\begin{align} \label{Drepl}
i D_{+}^\mu \to i {\cal D}_+^\mu &= i \tilde \partial_+^\mu + g A_{+}^\mu
+ \frac{{\bar n}^\mu}{2} g n\!\cdot\! A_s
\,,\nn\\
i D_{-}^\mu \to i {\cal D}_-^\mu &= i \tilde \partial_-^\mu + g A_{-}^\mu
+ \frac{n^\mu}{2} g {\bar n}\!\cdot\! A_s \,.
\end{align}
The new covariant derivatives ${\cal D}_\pm$ also appear in the pure gluon
action responsible for the ultracollinear gluon kinetic term. The nature of the
expansion for different momenta in $i\tilde \partial^\mu$ will depend on the
size of the soft scale $\Delta$ relative to the smallest ultracollinear
components $m\Gamma/Q$ displayed in Eq.~(\ref{BHQETres}). Note in this
comparison that the width is suppressed by a factor of $m/Q$. Physically this
factor is easy to understand: it is simply the time-dilation of the width of the
energetic top-quark from the point of view of the soft gluons. The boost factor
is encoded in $v_+\cdot {\bar n} = v_-\cdot n = Q/m$.
For our analysis we also need the effective current in the bHQET theories that
corresponds to the SCET current in Eq.~(\ref{currentscet}). It is
\begin{align}
\label{JbHQET}
J^\mu_{\rm bHQET} = (\bar h_{v_+} W_{n}) \Gamma_i^\mu (W_{{\bar n}}^{\dagger}
h_{v_-}) \,,
\end{align}
where the Wilson lines are the same as $W_n$ and $W_{\bar n}^\dagger$ in SCET, except
here we have gluons ${\bar n}\cdot A_+$ with path along ${\bar n}^\mu$ for $W_{n}$,
and $n\cdot A_-$ with path along $n^\mu$ for $W_{{\bar n}}^\dagger$. The simplest
way to derive this result is to note that the two collinear sectors in the SCET
current in Eq.~(\ref{currentscet}) do not directly interact, and neither do the
two sectors of the two bHQETs. In the rest frame of the top-quark for example
the matching is simply $(W^\dagger_n \psi_s) \to (W_n^{\dagger} h_{v_+})$, where
$\psi_s$ is a field for the top-quark near its rest-frame. Boosting this result gives
the matching for the top-quark field in Eq.~(\ref{JbHQET}), and the result for
the antitop quark is analogous. The dynamics of the $B_+$ and $B_-$ jet
functions will be defined by the two interpolating field operators
$(W_{n}^\dagger h_{v_+} )$ and $(W_{{\bar n}}^{\dagger} h_{v_-})$, and is governed by
the Lagrangians ${\cal L}_{+} $ and ${\cal L}_{-}$ respectively.
Let us now come back to the derivation of Eq.~(\ref{Yme}) from bHQET. For
convenience we start by taking both scales the same size, $m\Gamma/Q \sim
\Delta$. (Below we will show that the same result is obtained for the case where
$m\Gamma/Q \ll \Delta$, which includes the situation $\Delta\sim\Gamma$.)
For $m\Gamma \sim Q\Delta$ we can formulate the multipole expansion for the
coupling of soft gluons to the heavy quarks by splitting the momenta into
large label components of size $Q\Gamma/m$ and $\Gamma$, and residual momentum
components of size $m\Gamma/Q$. Thus\footnote{This formulation of the multipole
expansion is the same as for the coupling of ultrasoft particles to collinear
particles in SCET~\cite{Bauer:2001yt} where the two types of derivatives are
formally separated by introducing label operators, and leaving residual
momenta to be picked out by $i\partial^\mu$.}
\begin{align} \label{ipartial}
i \tilde \partial^\mu_+ &= \frac{n^\mu}{2}{\bar n}\!\cdot\! {\cal P}_c +
{\cal P}_{c\perp}^\mu
+ \frac{{\bar n}^\mu}{2} n\!\cdot\! i\partial
\,,
&i \tilde \partial^\mu_- &= \frac{{\bar n}^\mu}{2} n\!\cdot\! {\cal P}_c +
{\cal P}_{c\perp}^\mu
+ \frac{n^\mu}{2} {\bar n}\!\cdot\! i\partial \,.
\end{align}
The notation indicates that soft momenta only appear in the components
$i\partial^\mu$. On the other hand ultracollinear momenta appear in all four
components, and are picked out by the label operators ${\cal P}_c^\mu$ or by
$i\partial^\mu$. Next we make the same field redefinition on bHQET fields that we
made on the SCET fields in Eq.~(\ref{fd})
\begin{align}\label{fd22}
& h_{v_+} \to Y_n h_{v_+} \,,
& A_{+}^\mu & \to Y_n A_{+}^\mu Y_n^\dagger \,,
& h_{v_-} & \to Y_{\bar n} h_{v_-} \,,
& A_{-}^\mu & \to Y_{\bar n} A_{-}^\mu Y_{\bar n}^\dagger \,,
\end{align}
where the fields in $Y_n$ and $Y_{\bar n}$ are soft gluons. Since
\begin{align}
& (v_+\!\cdot\! {\bar n}) (in\!\cdot\! \partial \ensuremath{\! + \!} g n\!\cdot\! A_s) Y_n = 0 \,,
& & (v_-\!\cdot\! n) (i{\bar n}\!\cdot\! \partial \ensuremath{\! + \!} g {\bar n}\!\cdot\! A_s) Y_{\bar n} = 0\,,
\end{align}
this field redefinition gives back exactly Eq.~(\ref{LbHQET}) for the bHQET
Lagrangian and also gives a leading ucollinear gluon action that has no
couplings to soft gluons. In addition when making the field redefinition in the
bHQET currents, Eq.~(\ref{JbHQET}), we get exactly the same soft cross-talk
matrix element for the two-jet production
\begin{align}\label{bYme}
\langle 0 | (\overline {Y}_{\bar n})^{ba} \,
(Y_n)^{bc} | X_s \rangle
\langle X_s | (Y_n^\dagger)^{cd}\, (\overline {Y}^\dagger_{\bar n})^{ad} | 0
\rangle
\,.
\end{align}
The only difference between the SCET matrix element in Eq.~(\ref{Yme}) and the
HQET matrix element in Eq.~(\ref{bYme}) is that in the former the soft-gluons
couple to the massive $\psi_m$ fields, while there are no such couplings in the
latter. In matching renormalized soft matrix elements at a scale $\mu\simeq m$
the only effect of these couplings to $\psi_m$ fields is to induce an overall
Wilson coefficient, so that $S^{\rm SCET} = T_0(m,\mu) S^{\rm bHQET}$. Thus
the main dynamics of the soft gluons is not modified in a substantial way by
passing from SCET to the boosted HQET Lagrangian, nor by the presence of the
width term for unstable quarks.
For completeness, let us now consider the case $\Delta \gg m\Gamma/Q$ and show
that the same result is obtained. In this case a soft gluon of momentum
$\ell^\mu$, coupling to an $h_{v_+}$ with residual momentum $k_+$, has ${\bar n}\cdot
\ell \gg {\bar n}\cdot k_+$, while for $h_{v_-}$ we have $n\cdot \ell \gg n\cdot
k_-$. Thus these soft gluons knock the heavy quarks away from their mass shell,
and their interactions can not be formulated in a local manner in the same
theory as the ucollinear gluons. This is similar to how soft and collinear
gluons interact in the theory \ensuremath{{\rm SCET}_{\rm II}}\xspace as discussed in Ref.~\cite{Bauer:2001yt}.
To derive the form of the soft gluon interactions for this situation we can
construct an auxiliary intermediate theory where the ucollinear gluons and heavy
quarks are further from their mass shell and the soft interactions are local.
The form of this theory is identical to
Eqs.~(\ref{LbHQET},\ref{Drepl},\ref{ipartial}), but with $\Gamma$ in
Eq.~(\ref{BHQETres}) replaced by $Q\Delta/m$, and we can make the field
redefinition of Eq.~(\ref{fd22}) in this theory. Then we lower the offshellness
of the ucollinear particles and match onto the bHQET with scaling exactly as in
Eq.~(\ref{BHQETres}). (This is identical to the procedure used to construct
\ensuremath{{\rm SCET}_{\rm II}}\xspace operators from \ensuremath{{\rm SCET}_{\rm I}}\xspace which was devised in
Refs.~\cite{Bauer:2002aj,Bauer:2003mg}.) The result of this procedure is exactly
Eqs.~(\ref{LbHQET}) and (\ref{bYme}). Thus, the result for $\Delta \gg
m\Gamma/Q$ is the same as for $\Delta\sim m\Gamma/Q$.
We conclude that at leading order the interaction of the bHQET heavy quarks with
soft gluons are described by Eq.~(\ref{bYme}). This matrix element can be used
to define a soft function $S$, that describes the cross-talk between massive
top-quarks which have fluctuations below the mass scale $m$, and we can use
Eq.~(\ref{LbHQET}) for the remaining dynamics at LO. Thus, the dynamics
separates in the manner shown in Fig.~\ref{fig:efts}, into two decoupled HQET's
and a decoupled soft-sector. In Sec.~\ref{subsectionfactorizationtheorem} below
we will derive the same result in an alternative manner, starting from the
factorization theorem for the cross-section in SCET. In this approach the
definition of the jet functions and soft-cross talk matrix elements are first
defined in SCET, and then matched onto bHQET. In this case the soft couplings
are fully formulated by the matrix element in Eq.~(\ref{bYme}), and there is no
need to consider soft couplings to fields in the bHQET Lagrangian.
{\it Decay Product Interactions.}
It is conspicuous that in the leading order bHQET setup, gluon exchange
involving top and antitop decay products is not present. We now show that this
treatment is correct and discuss the size of possible power corrections. Since
we are interested in top/antitop invariant masses in the peak region at large
$Q$, we only have to consider ucollinear and soft gluons. Concerning ucollinear
gluons it is convenient to switch for each bHQET into the respective heavy quark
rest frame where $v_\pm^\mu=(1,0,0,0)$ and the ucollinear gluons have momenta
$k^\mu\sim\Gamma\ll m$. For the hemisphere invariant masses we can treat the top
decay as fully inclusive at leading order (see Sec.~\ref{section3}), so we can
address the issue by analyzing possible cuts from the top/antitop final states
in electroweak diagrams contributing to the bHQET matching
conditions~\cite{Hoang:2004tg}.
\begin{figure}[t!]
\centerline{
\includegraphics[width=14cm]{topSE_ward_id.eps}
}
\caption{Example of the cancellation of soft gluon attachments to the decay products.}
\label{fig:ward}
\end{figure}
At leading order in the expansion in $\Gamma/m$ there are cuts from the
top/antitop self energy which lead to the width terms in ${\cal L}_\pm$.
Subleading finite lifetime corrections to the heavy quark bilinear terms are
suppressed by $\Gamma/m$ and physically related to the lifetime-dilations coming
from residual momentum fluctuations of the heavy quark. Furthermore, due to
gauge invariance finite lifetime matching contributions can not arise for the
$v_\pm\cdot A_\pm$ couplings in the covariant derivatives of ${\cal L}_\pm$.
Diagrammatically this involves a cancellation between the graphs in
Fig.~\ref{fig:ward} including all possible cuts. Diagram a) is a vertex
correction, while diagrams b) and c) are wave-function-type contributions. Since
momenta in the cut graphs are of order $m$, at leading order we can take the
ucollinear gluons to have momentum $k^\mu=0$. In this situation the diagrams
cancel due to gauge invariance. Thus, at leading order there are no finite
lifetime effects involving ucollinear gluon exchange. Effects from the sum of
the diagrams in Fig.~\ref{fig:ward} that do not cancel are suppressed by at
least a factor $\alpha_s\Gamma/m$ relative to the leading order factorization
theorem.
Finally we consider soft gluon interactions. Using the proof above for
the universality of the soft cross-talk matrix element in
Eq.~(\ref{bYme}) and repeating the arguments made for the ucollinear
gluon interactions we find that the dominant soft gluon interactions
involving top/antitop decay products are described by possible cuts of
electroweak matching contributions of the $n\cdot A_s$ and $\bar
n\cdot A_s$ couplings in Eq.~(\ref{Drepl}). In this case the same
cancellation as for the ucollinear gluons takes place since the
average soft gluon energy in the top/antitop rest frame is still
$\Delta$ and thus much smaller than $m$. Thus interactions involving
top/antitop decays products and soft gluons are suppressed by at least
a factor $\Delta/m$. Numerical studies in Ref.~\cite{Sjostrand:1999ki}
have estimated QCD interconnection effects based on nonperturbative
models (see also Ref.~\cite{Khoze:1992rq}).
Having defined the EFT's we now turn to the derivation of the
factorization theorem.
\section{Factorized Cross-Section and Invariant Mass Definitions}\label{section3}
\subsection{The QCD Cross-Section}
We start with the general expression of the cross-section for top-antitop quark
production, $\>e^+e^-\rightarrow \gamma^*,Z^*\rightarrow t\bar t+X$. The final
state we are interested in is observed as the top and antitop jets plus soft
radiation $J(t) J(\bar t)X_s$. We remind the reader that we refer to all the
jets coming from the top and antitop quark decay collectively as top and antitop
jets, respectively. But we stress that despite the language, our analysis is
still perfectly consistent with the fact that the different jets from each of the
top and antitop decay can be resolved in the experimental analysis.
The full cross-section is
\begin{eqnarray}\label{qcdcrosssection}
\sigma &=& \sum_X^{res.} (2\pi)^4 \, \delta^4(q-p_X) \sum_{i=a,v}
L_{\mu\nu}^{i}\ \langle 0| {\cal J}^{\nu\dagger}_i(0) |X\rangle
\langle X | {\cal J}^\mu_i(0) |0\rangle
\, ,
\end{eqnarray}
where the initial state total leptonic momentum is $q=p_{e^-}+p_{e^+}$,
$Q^2=q^2$, and the QCD currents ${\cal J}_{v,a}^\mu$ are given in
Eqs.~(\ref{QCDcurrents}). The superscript $res.$ on the summation symbol
denotes a restriction on the sum over final states $X$, to give $J(t) J(\bar
t)X_s$. These final states contain top and antitop jets with invariant masses
close to the top quark mass. The explicit form of these restrictions depends on
the specific jet and invariant mass definitions used. For the hemisphere
invariant mass prescription these restrictions will be implemented explicitly in
Sec.~\ref{section_hemi} below, while other methods are discussed in
Sec.~\ref{sectionotheralgo}.
In Eq.~(\ref{qcdcrosssection}) we include photon and $Z$ boson exchange, and
imply an angular average of the leptonic tensor, to obtain the parity
conserving $L_{\mu\nu}^{i}$ with a sum over vector and axial-vector parts,
$i=v, a$. For convenience we also include the charges and boson
propagators, and the cross-section prefactor $1/(2Q^2)$, so that
\begin{align}
L_{\mu\nu}^{(v)} &= -\frac{8\pi^2 \alpha^2}{3 Q^4} \Big(g_{\mu\nu}-\frac{q_\mu
q_\nu}{Q^2}\Big)
\bigg[\, e_t^2 -
\frac{2 Q^2\, v_e v_t e_t}{Q^2-m_Z^2} +
\frac{Q^4 (v_e^2+a_e^2)v_t^2}{(Q^2-m_Z^2)^2}\, \bigg] \,,\nn\\
L_{\mu\nu}^{(a)} &= -\frac{8\pi^2 \alpha^2}{3 Q^4} \Big(g_{\mu\nu}-\frac{q_\mu
q_\nu}{Q^2}\Big)
\bigg[\, \frac{Q^4\, (v_e^2+a_e^2)a_t^2}{ (Q^2-m_Z^2)^2 } \bigg] \,.
\end{align}
Here $e_t$ is the top-quark charge, and
\begin{eqnarray}
v_f = \frac{T_3^f-2 Q_f \sin^2\theta_W}{2\sin\theta_W \cos\theta_W}\,,
\qquad\qquad
a_f = \frac{T_3^f}{2\sin\theta_W \cos\theta_W} \,,
\end{eqnarray}
where $T_3^f$ is the third component of weak isospin, and $\theta_W$ is the
weak mixing angle.
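For orientation (our numerical evaluation, using $\sin^2\theta_W\simeq 0.231$,
$T_3^t=1/2$, and $Q_t=2/3$), the top-quark couplings are
\begin{align}
v_t = \frac{1/2 - (4/3)\sin^2\theta_W}{2\sin\theta_W\cos\theta_W}\simeq 0.23\,,
\qquad
a_t = \frac{1/2}{2\sin\theta_W\cos\theta_W}\simeq 0.59\,.
\end{align}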
\subsection{The SCET Cross-Section}
We now proceed by using the fact that the states are restricted to be
dijet-like through the constraint that the top and antitop jet invariant
masses are close to the top quark mass, as illustrated in
Fig.~\ref{fig:topjet}.
In this section we reformulate the cross section by using the more specific
SCET currents of Eq.~(\ref{currentscet}) that are
suitable for this kinematic situation. We integrate out the hard production
energy scale $Q$ by matching the QCD currents onto the SCET currents, which
via the matching relation~(\ref{currentmatch}) gives
a new expression for the cross-section defined with matrix elements in SCET.
The SCET currents in Eq.~(\ref{currentscet}) correctly reproduce the long
distance physics of the QCD current, and the difference in the short distance
physics is contained in the Wilson coefficient $C(\omega,\bar\omega,\mu)$. We
will see momentarily that momentum conservation dictates that the final form of
the cross-section depends only on $C(Q,-Q,\mu)\equiv C(Q,\mu)$. In
Ref.~\cite{Manohar:2003vb} the Wilson coefficient at one loop was computed. It
is independent of the Dirac structure $\Gamma _i$ and also of whether or not the
collinear quarks are massive (the latter fact is demonstrated in
Ref.~\cite{FHMS2} where the matching computation for the corresponding vertex
diagrams is carried out explicitly for finite heavy quark mass). The result is
\begin{eqnarray}
\label{cQmu}
C(Q,\mu) = 1+\frac{\alpha _s C_F}{4\pi}\Big [ 3\log \frac{-Q^2\!-\!i0}{\mu ^2} -\log
^2\frac{-Q^2\ensuremath{\! - \!} i0}{\mu ^2}
-8 + \frac{\pi ^2}{6}\Big ].
\end{eqnarray}
At the matching scale $\mu=Q$ this Wilson coefficient does not contain any large
logarithms. The product of the Wilson coefficient $C(Q,\mu)$ and the SCET
matrix element is independent of the scale $\mu$, and renormalization group (RG)
evolution determines the Wilson coefficient at a lower scale $\mu$. This RG
evolution of the hard Wilson coefficient sums logarithms of $\mu/Q$ with $\mu\gtrsim
m$. The Wilson coefficient contains an imaginary part that arises from real
QCD intermediate states in the QCD vertex diagram that are not accounted for
in the corresponding SCET diagrams when the collinear action only contains the
two sectors for the $n$ and $\bar n$ directions (see
Sec.~\ref{mass-scet}). However, only $|C(Q,\mu)|^2$ will appear in the
final factorization theorem since we will sum over $\vec n$.
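To make these statements concrete (our own evaluation of Eq.~(\ref{cQmu})), at
$\mu=Q$ one has $\log[(-Q^2-i0)/\mu^2]=-i\pi$, so
\begin{align}
C(Q,Q) = 1+\frac{\alpha_s C_F}{4\pi}\Big[\frac{7\pi^2}{6}-8-3i\pi\Big]\,,
\qquad
|C(Q,Q)|^2 = 1+\frac{\alpha_s C_F}{2\pi}\Big[\frac{7\pi^2}{6}-8\Big]
+{\cal O}(\alpha_s^2)\,,
\end{align}
which is indeed free of large logarithms, while the imaginary part first
affects $|C|^2$ at ${\cal O}(\alpha_s^2)$.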
Using Eqs.~(\ref{currentmatch}) and (\ref{currentscet}) in
Eq.~(\ref{qcdcrosssection}), the cross-section in SCET takes the form
\begin{eqnarray}
\label{scetcross-section}
\sigma &=& \sum_{\vec n} \sum_{X_n X_{\bar n} X_s}^{res.} (2\pi)^4 \,
\delta^4(q \!-\! P_{X_n} \!-\! P_{X_{\bar n}}\!-\! P_{X_s}) \sum_{i} L_{\mu\nu}^{(i)}
\int\!\! d\omega\,d\bar\omega\,d\omega'\,d\bar\omega'\:
\\
&&\hspace{-0.5cm}
\times C(\omega,\bar\omega) C^*(\omega',\bar\omega')
\langle 0| {\rm T}\{ \bar\chi_{{\bar n},\bar\omega'} S_{\bar n}^\dagger \bar\Gamma_i^{\nu}
S_n \chi_{n,\omega'}\} |X_n X_{\bar n} X_s\rangle
\langle X_n X_{\bar n} X_s| {\rm T}\{ \overline\chi_{n,\omega} S_n^\dagger \Gamma_i^\mu
S_{\bar n} \chi_{{\bar n},\bar\omega} \} |0\rangle
\,. \nn
\end{eqnarray}
Here we have pulled out the explicit sum over the top jet label directions
$\vec n$ and keep only two collinear sectors ${\cal L}^{(0)}_n$ and ${\cal
L}^{(0)}_{\bar n}$ for the SCET description of top and antitop jets. This
allows us to explicitly carry out the integral over the top jet directions
$\vec n$ in Sec.~\ref{subsectionmomdecomp} in parallel to implementing
factorization.
In Eq.~(\ref{scetcross-section})
we have decomposed the final states $|X\rangle$ into a soft sector
$|X_s\rangle$ and collinear sectors $|X_n\rangle,|X_{\bar n} \rangle$ in the
$\vec{n}$ and $\vec{{\bar n}}$ directions respectively
\begin{eqnarray} \label{X1}
| X \rangle =| X_n X_{\bar n} X_s \rangle \,.
\end{eqnarray}
Since the hard production scale is integrated out by the matching procedure,
these states now form a complete set of final states that can be
produced by the SCET currents ${\cal J}^\mu_i$. This already implements
part of the restrictions, ``res'', in the sum over states in
Eq.~(\ref{scetcross-section}). The
momentum $P_X$ of the final state $|X\rangle$ is also decomposed into the
momentum of the collinear and soft sectors:
\begin{eqnarray}
P_X=P_{X_n} + P_{X_{\bar n}} + P_{X_s}.
\end{eqnarray}
Recall that there are no particles with $p_m^\mu\sim (m,m,m)$ scaling that can
cross the final state cut without taking the invariant mass far from the peak
region, so there are no mass-modes in this decomposition. Because the set of
hadrons observed in the detector has a well-defined set of momenta, it is
possible to impose criteria on the hadrons in the final state to associate them
with one of $X_n$, $X_{\bar n}$, or $X_s$. Thus, the hadronic two-jet state
factorizes as a direct product
\begin{eqnarray}
\label{XXX}
| X\rangle = | X_n \rangle | X_{\bar n} \rangle | X_s \rangle \,.
\end{eqnarray}
This factorization is also a manifest property of the hadronic states in SCET.
For quark and gluon states in SCET the difference from the purely
hadronic case in Eq.~(\ref{XXX}) is that the states can carry global
color quantum numbers. After having made the soft-collinear
decoupling field redefinition, the individual Lagrangians for these
sectors are decoupled, and they only organize themselves into color
singlets in the matrix elements which appear in the observable
cross-section. We can take this as a manifestation of quark-hadron
duality. Using the soft-collinear decoupling property from
Sec.~\ref{mass-scet} we can write the matrix elements in
Eq.~(\ref{scetcross-section}) as ${\cal M}(m,\mu)$ times
\begin{align} \label{spincolorindices}
&\big\langle 0 \big| \overline \chi_{{\bar n},\bar\omega'}^a (\overline {Y}_{\bar n})^{ba} \,
(\overline\Gamma\, Y_n \chi_{n,\omega'})^b \big| X_n X_{\bar n} X_s \big\rangle
\big\langle X_n X_{\bar n} X_s \big| (\overline \chi_{n,\omega} Y_n^\dagger\,
\Gamma)^c\, (\overline {Y}^\dagger_{\bar n})^{dc} \chi_{{\bar n},\bar\omega}^d \big| 0 \big\rangle
\\[5pt]
&= \big\langle 0 \big| \overline \chi_{{\bar n},\bar\omega'}^a \big| X_{\bar n} \big\rangle
\big\langle X_{\bar n} \big| \chi_{{\bar n},\bar\omega}^{a'} \big| 0 \big\rangle
\big\langle 0 \big| \chi_{n,\omega'}^b \big| X_n \big\rangle
\big\langle X_n \big| \overline \chi_{n,\omega}^{b'} \big| 0 \big\rangle \nn\\
&\quad\times \big \langle 0 \big| (\overline {Y}_{\bar n})^{ca}
(\overline\Gamma Y_n)^{cb} \big| X_s \big\rangle
\big\langle X_s \big| ( Y_n^\dagger
\Gamma)^{b'c'} (\overline {Y}^\dagger_{\bar n})^{a'c'} \big| 0 \big\rangle
\,,\nn
\end{align}
where here roman indices are for color and spin and $|X_n\rangle$ and
$|X_{\bar n}\rangle$ are color triplets. Next we rearrange the color and spinor
indices so that they are fully contracted within each of the $n$-collinear,
${\bar n}$-collinear, and soft product of matrix elements. This makes explicit the
fact that in SCET each of these contributions to the cross-section must
separately be a spin and color singlet. Although it is not absolutely necessary
to make this arrangement of indices manifest at this point, it does allow us to
avoid carrying around unnecessary indices (a similar manipulation was used for
$B\to X_s\gamma$ in Ref.~\cite{Lee:2004ja}). For color, our
$|X_{\bar n}\rangle\langle X_{\bar n}|$ forces the indices on $\overline \chi_{\bar n}^a$ and
$\chi_{\bar n}^{a'}$ to be the same, so $\big\langle 0 \big| \overline \chi_{\bar n}^a
\big| X_{\bar n} \big\rangle \big\langle X_{\bar n} \big| \chi_{\bar n}^{a'} \big| 0
\big\rangle = (\delta^{aa'}/N_c) \, \big\langle 0 \big| \overline \chi_{\bar n}^b
\big| X_{\bar n} \big\rangle \big\langle X_{\bar n} \big| \chi_{\bar n}^{b} \big| 0
\big\rangle$. A similar result holds for the $n$-collinear matrix elements.
For spin we can use the SCET Fierz formula
\begin{align} \label{Fierz}
1 \otimes 1
&= \frac{1}{2} \Big[
\big(\frac{\bar n\!\!\!\slash}{2}\big)\!\otimes\! \big(\frac{n\!\!\!\slash}{2} \big)
+ \big(\frac{-\bar n\!\!\!\slash\gamma_5}{2}\big)\!\otimes\!
\big(\frac{n\!\!\!\slash\gamma_5}{2}\big)
+ \big(\frac{-\bar n\!\!\!\slash\gamma_\perp^\alpha}{2}\big) \!\otimes\!
\big(\frac{n\!\!\!\slash\gamma^\perp_\alpha}{2}\big)
\Big] \,,
\end{align}
which is valid when the identity matrices are inserted so that the $n\!\!\!\slash$
terms on the RHS appear between $\overline\chi_{{\bar n}} \cdots \chi_{\bar n}$ without
additional $\bar n\!\!\!\slash$ factors next to these fields (or the analogous statement
with $n\leftrightarrow {\bar n}$). Combining the color and spin index
rearrangement, the matrix element in Eq.~(\ref{spincolorindices}) becomes
\begin{align} \label{factor-m-elt}
& {\rm tr}\Big[ \frac{n\!\!\!\slash}{2} \Gamma_i^\mu \frac{\bar n\!\!\!\slash}{2}
\bar \Gamma_j^\nu \Big]
\Big[ \big\langle 0 \big| \overline \chi_{{\bar n},\bar\omega'}^a \big| X_{\bar n} \big\rangle
\big\langle X_{\bar n} \big| \Big(\frac{n\!\!\!\slash}{4N_c} \chi_{{\bar n},\bar\omega}\Big)^{a}
\big| 0 \big\rangle \Big]\
\Big[ \big\langle 0 \big| \Big(\frac{\bar n\!\!\!\slash}{4N_c} \chi_{n,\omega'} \Big)^b
\big| X_n \big\rangle
\big\langle X_n \big| \overline \chi_{n,\omega}^{b} \big| 0 \big\rangle \Big]
\nn\\
&\qquad\times \Big[ \big \langle 0 \big| (\overline {Y}_{\bar n})^{ca'}
(Y_n)^{cb'} \big| X_s \big\rangle
\big\langle X_s \big| ( Y_n^\dagger)^{b'c'} (\overline {Y}^\dagger_{\bar n})^{a'c'}
\big| 0 \big\rangle
\Big] \nn\\
& \equiv {\rm tr}\Big[ \frac{n\!\!\!\slash}{2} \Gamma_i^\mu \frac{\bar n\!\!\!\slash}{2}
\bar \Gamma_j^\nu \Big]
{\rm tr} \Big( \big\langle 0 \big| \overline \chi_{{\bar n},\bar\omega'} \big| X_{\bar n} \big\rangle
\big\langle X_{\bar n} \big| \slash\!\!\!\hat n \chi_{{\bar n},\bar\omega}
\big| 0 \big\rangle \Big)
{\rm tr} \Big(\big\langle 0 \big| \slash\!\!\!\hat{\bar n} \chi_{n,\omega'}
\big| X_n \big\rangle
\big\langle X_n \big| \overline \chi_{n,\omega} \big| 0 \big\rangle\Big)
\nn\\
&\qquad\times {\rm tr} \Big( \big \langle 0 \big| \overline {Y}_{\bar n}
Y_n \big| X_s \big\rangle
\big\langle X_s \big| Y_n^\dagger \overline {Y}^\dagger_{\bar n}
\big| 0 \big\rangle \Big) \,,
\end{align}
where for convenience we defined
\begin{equation}
\slash\!\!\!\hat n \equiv n\!\!\!\slash/(4N_c)\,,\qquad
\slash\!\!\!\hat {\bar n} \equiv \bar n\!\!\!\slash/(4N_c)\,.
\end{equation}
Note that only the first term on the RHS of Eq.~(\ref{Fierz})
contributes because the collinear states give at least one matrix
element which is zero when we have a $\gamma_5$ or
$\gamma_\perp^\alpha$. This factorizes the SCET cross-section into a
product of three singlets under spin and color. For convenience we
will in the following suppress writing these explicit traces on the
matrix elements.
Using Eq.~(\ref{factor-m-elt}) in Eq.~(\ref{scetcross-section}), the
factorized SCET cross section takes the form
\begin{eqnarray}
\label{factorizedcross-section}
\sigma &=& \!\! K_0\, {\cal M}
\sum_{\vec n} \!\sum_{X_n X_{\bar n} X_s}^{res.} \!\!
(2\pi)^4 \, \delta^4(q\!-\!P_{X_n}\!-\!P_{X_{\bar n}}\!-\!P_{X_s})
\langle 0| \overline {Y}_{\bar n}\, {Y}_n |X_s \rangle
\langle X_s| {Y}^\dagger_n\, \overline {Y}_{\bar n}^\dagger |0\rangle
\\
&& \!\! \!\! \!\!\times \!\!
\int\!\! d\omega\,d\bar\omega\,d\omega'\,d\bar\omega'\:
C(\omega,\bar\omega) C^*(\omega',\bar\omega')
\langle 0| \slash\!\!\!\hat {\bar n} \CH n {\omega'} |X_n \rangle
\langle X_n |\bCH n \omega |0 \rangle
\langle 0 |\bCH {\bar n} {\bar\omega'} | X_{\bar n}\rangle
\langle X_{\bar n}| \slash\!\!\!\hat n \CH {\bar n} {\bar\omega} | 0 \rangle
\,, \nonumber
\end{eqnarray}
where ${\cal M}={\cal M}(m,\mu)$ and we defined the normalization factor
\begin{eqnarray}
K_0 &=& \sum_{i=v,a}
L_{\mu\nu}^{(i)} \textrm{Tr}
\Big[\frac{n\!\!\!\slash}{2} {\Gamma}^\mu_i
\frac{\bar n\!\!\!\slash}{2} \overline {\Gamma}^{\,\nu}_i \Big]
= -2 g_\perp^{\mu\nu}\sum_{i=v,a}
L_{\mu\nu}^{(i)} \nn\\
&=& \frac{32\pi^2 \alpha^2}{3 Q^4}
\bigg[\, e_t^2 -
\frac{2 Q^2\, v_e v_t e_t}{Q^2-m_Z^2} +
\frac{Q^4 (v_e^2+a_e^2)(v_t^2+a_t^2)}{(Q^2-m_Z^2)^2} \bigg]\,.
\end{eqnarray}
We can further simplify the form of the factorized cross-section. First we
use the identities
\begin{eqnarray} \label{Qconserved}
\langle X_n| \bCH {n} {\omega'} |0 \rangle
&=& \langle X_ n|\overline\chi_n\delta_{\omega',{\bar n}\cdot {\cal P}^\dagger} |0 \rangle
= \delta_{\omega',p^-_{X_n}}\, \langle X_n | \overline \chi_n | 0\rangle \,,
\nn\\
\langle X_{{\bar n}} | \bCH {{\bar n}} {\bar\omega'} |0 \rangle
&=& \langle X_{\bar n}|\overline\chi_{{\bar n}} \delta_{\bar\omega',n\cdot {\cal P}^\dagger} |0 \rangle
= \delta_{-\bar \omega',p^+_{X_{\bar n}}}\, \langle X_{\bar n} | \overline \chi_{\bar n} | 0\rangle \,,
\end{eqnarray}
with similar relations for the other two collinear matrix elements in
Eq.~(\ref{factorizedcross-section}). Combining this with the relation
$\delta_{\omega',p_{X_n}^-} \delta_{\omega,p_{X_n}^-}
=\delta_{\omega',\omega} \delta_{\omega,p_{X_n}^-} $, and the analogous relation for
$p_{X_{\bar n}}^+$, we can write the product of collinear matrix elements
in Eq.~(\ref{factorizedcross-section}) as
\begin{eqnarray}
&& \langle 0 | \slash\!\!\!\hat {\bar n} \CH n {\omega'} |X_n \rangle
\langle X_n |\bCH n \omega | 0 \rangle
\langle 0 |\bCH {\bar n} {\bar\omega'} |X_{\bar n} \rangle
\langle X_{\bar n} |\slash\!\!\!\hat n \CH {\bar n} {\bar\omega} | 0 \rangle
\nn\\
&&=
\delta_{\bar\omega',\bar\omega}\, \delta_{\omega',\omega}\,
\langle 0 | \slash\!\!\!\hat {\bar n} \chi_n | X_n \rangle
\langle X_n |\bCH n \omega | 0 \rangle
\langle 0 |\overline\chi_{{\bar n}} | X_{\bar n} \rangle
\langle X_{\bar n} | \slash\!\!\!\hat n \chi_{{\bar n},\bar\omega} | 0 \rangle \,.
\end{eqnarray}
Next we do the sums over $\omega' , \bar\omega'$ to arrive at the form
\begin{eqnarray}
\label{factorizedcross-section2}
\sigma &=& K_0\, {\cal M}
\sum_{\vec n}\! \sum_{X_n X_{\bar n} X_s}^{res.}
(2\pi)^4 \, \delta^4(q\!-\!P_{X_n}\!-\!P_{X_{\bar n}}\!-\!P_{X_s})
\langle 0| \overline {Y}_{\bar n}\, {Y}_n |X_s \rangle
\langle X_s| {Y}^\dagger_n\, \overline {Y}_{\bar n}^\dagger |0\rangle
\nn\\
&& \!\! \!\! \!\!\times \!\!
\int\!\! d\omega\,d\bar\omega\:
|C(\omega,\bar\omega)|^2
\big\langle 0 \big| \slash\!\!\!\hat {\bar n} \chi_n \big| X_n \big\rangle
\big\langle X_n \big|\bCH n \omega \big| 0 \big\rangle
\big\langle 0 \big| \overline\chi_{\bar n} \big| X_{\bar n} \big\rangle
\big\langle X_{\bar n} \big| \slash\!\!\!\hat n \chi_{{\bar n},\bar\omega} \big| 0 \big\rangle
\,.
\end{eqnarray}
Before proceeding, we pause to define the thrust axis which is needed to
properly define the invariant mass of jets and to state its relation to the
direction of the energetic collinear degrees of freedom. Then in order to make
the power counting manifest we decompose the final state momenta into label and
residual parts and perform some general manipulations of the phase space
integrals to setup a formula for the cross-section to be used for the
remaining calculation.
\subsection{Thrust or Jet Axis }
The thrust $T$ of any event
is defined to be
\begin{equation}
\label{thrust-1}
T = \mathop{\textrm{max}}_{\hat{{\bf t}}} \frac{\sum_i | \hat{{\bf t}} \cdot
{\bf p}_i |}{Q} \,,
\end{equation}
where the sum is over the momenta ${\bf p}_i$ of all the final state particles
produced. The thrust axis ${\bf \hat{t}}$ is chosen so that it maximizes the sum
of particle momenta projected along ${\bf \hat{t}}$. Intuitively, for a
dijet-like event the thrust axis corresponds to the axis along which most of
the momentum is deposited. Correspondingly, the thrust $T$ is close to its
maximum for a dijet-like event.
We choose $\vec n$ to point along ${\bf \hat{t}}$. For an event with exactly
two massive stable particles the maximum allowed thrust is $T= \sqrt{Q^2 - 4m^2}/Q
= 1 - 2 m^2 / Q^2 + {\cal O}(m^4/Q^4)$. Since we are interested in
thrusts in the dijet region for the top and antitop jets, it is convenient to
define a shifted thrust parameter,
\begin{align} \label{tau}
\tau &= \sqrt{1- \frac{4m^2}{Q^2} } - T
=1 - \frac{2m^2}{Q^2} - T +{\cal O}\Big(\frac{m^4}{Q^4}\Big)\,.
\end{align}
For stable top-antitop production additional jets always result in $\tau>0$.
For unstable top-quarks the values of $\tau<0$ also become allowed. Note that
for massless jet production the thrust ($T$) distribution is peaked close to
$T=1$ while for events containing a heavy quark pair it is peaked close to
$T=\sqrt{Q^2 - 4m^2}/Q$. Thus a cut on thrust can in principle be used to
discriminate between massive and massless quark
production~\cite{Chekanov:2003cp}.
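As an illustration of these definitions, the following is a minimal sketch of
our own (not part of the analysis above): it assumes the final-state
3-momenta are given as numpy arrays, and it approximates the maximization over
$\hat{\bf t}$ in Eq.~(\ref{thrust-1}) by scanning the particle directions,
which is only a crude stand-in for the true maximum:
\begin{verbatim}
import numpy as np

def thrust(momenta, Q):
    # Eq. (thrust-1): maximize sum_i |t.p_i| / Q over the axis t;
    # here we scan candidate axes along the particle directions (approximation)
    best = 0.0
    for cand in momenta:
        t_hat = cand / np.linalg.norm(cand)
        best = max(best, sum(abs(np.dot(t_hat, p)) for p in momenta) / Q)
    return best

def tau_shifted(momenta, Q, m):
    # shifted thrust of Eq. (tau); small for dijet-like heavy-quark events
    return np.sqrt(1.0 - 4.0 * m**2 / Q**2) - thrust(momenta, Q)
\end{verbatim}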
\subsection{Differential Cross-Section with Momentum Decomposition}
\label{subsectionmomdecomp}
To insert the invariant mass constraints into our cross-section in
Eq.~(\ref{factorizedcross-section}) we use the identity operator:
\begin{eqnarray}
\label{identity-1a}
1= \int\! d^4p_n \> d^4p_{\bar n}\> d^4p_s\> \delta ^4(p_n- P_{X_n}) \> \delta ^4(p_{\bar n}-
P_{X_{\bar n}}) \> \delta ^4(p_s- P_{X_s})\,,
\end{eqnarray}
which sets the total collinear and soft momenta of the states $P_{X_n}, P_{X_{\bar n}},
P_{X_s}$ to $p_n,p_{\bar n},p_s$ respectively. In Sec.~\ref{section_hemi} we will
use an additional insertion of an identity operator to define the hemisphere
invariant masses, $M_t$ and $M_{\bar t}$. In this section we carry out
manipulations that are common to any definition of the invariant masses. For now
we ensure that the invariant mass of each hemisphere is close to the top mass by
including in the restrictions, ``res'', on the states the fact that $M_t,
M_{\bar t}$ are in the region
\begin{eqnarray}
\label{sn-range}
| s_{t,\bar t} | = | M^2_{t,\bar t}-m^2 | \ll m^2 .
\end{eqnarray}
From here on we assume that in the sense of power counting $\Delta
\sim \Gamma$. We now decompose the collinear and soft momenta into
label and residual parts
\begin{align}
\label{labelresidual}
p_n &= \tilde p_n + k_n,
& p_{\bar n} & = \tilde p_{\bar n} + k_{\bar n} ,
& P_{X_n}^\perp & = K_{X_n}^\perp ,
\\
P_{X_{\bar n}}^\perp & = K_{X_{\bar n}}^\perp ,
& P_{X_n}^- & = \tilde P_{X_n}^- + K_{X_n}^- ,
& P_{X_{\bar n} }^+ & = \tilde P_{X_{\bar n}}^+ + K_{X_{\bar n}}^+
\,, \nn \\
P_{X_n}^+ &= K_{X_n}^+ \,,
& P_{X_{\bar n}}^- &= K_{X_{\bar n}}^- \,,
& P_{X_s}^\mu & = K_{X_s}^\mu \,,
& p_s^\mu &= k_s^\mu \,. \nn
\end{align}
Note that our choice of $\vec n$ along the thrust axis together with
the restrictions on the states ensures that the perpendicular momenta
of the jets relative to this axis, $P_{X_n}^\perp$ and
$P_{X_{\bar n}}^\perp$, are purely residual. The last result in
Eq.~(\ref{labelresidual}) indicates that the soft state also has a
momentum that is purely residual. The integrals in
Eq.~(\ref{identity-1a}) can be decomposed into a sum over labels and
integrals over residual momenta as \begin{eqnarray}
\label{momdecomp-1}
\int d^4 p_n \, \int d^4 p_{\bar n} &=&
\frac{1}{2} \sum_{\tilde{p}_n} \int dk_n^+ dk_n^- d^2k_n^\perp\,
\frac{1}{2} \sum_{\tilde{p}_{\bar n}} \int dk_{\bar n}^+ dk_{\bar n}^- d^2k_{\bar n}^\perp\, .
\end{eqnarray}
In the total cross-section in Eq.~(\ref{factorizedcross-section}) we
sum over the directions $\vec n$ of the thrust axis. To turn this sum
into an integral over the full solid angle, we need to combine it with
a residual solid angle integration for each $\vec n$. Therefore, we
decompose the residual measure as
\begin{align}
d^2k_n^\perp = |\vec p_n|^2\: d\phi\: d\cos(\theta_r)
= \Big(\frac{Q^2}{4} - p_{n}^2\Big) d\phi\: d\cos(\theta_r) ,
\end{align}
where $\theta_r$ is the small angle of $\vec p_n$ relative to the thrust axis $\vec n$. In the first
equality we used the fact that $\cos(\theta_r)\simeq 1$.
Since we are in the peak region we can approximate $p_n^2= m^2$ up to small
$\Gamma/m$ corrections. Combining this with the sum over $\vec n$ gives
\begin{eqnarray} \label{SA-relation}
\sum_{\vec n} d^2k_n^\perp = \Big(\frac{Q^2}{4} - m^2\Big)
\: d\phi\: d\cos(\theta) = \frac{Q^2}{4} \,
d\Omega \,,
\end{eqnarray}
where in the last equality we work to leading order in $m^2/Q^2$.
Since the angular averaged two-jet production is independent of the
thrust direction we are free to carry out the remaining integrations
in a frame where $k_n^\perp=0$, and also replace $\int d\Omega =
4\pi$. The differential cross-section now reads
\begin{align}
\label{factorizedcross-section-new}
\sigma &=\!
\frac{\pi Q^2 K_0}{4}\, {\cal M} \!\!\!\sum_{X_n X_{\bar n} X_s}^{res.} \!\!\!
(2\pi)^4 \, \delta^4(q\!-\!P_{X_n}\!-\!P_{X_{\bar n}}\!-\!P_{X_s})
\sum_{\tilde{p}_n, \tilde{p}_{\bar n} } \!\int \!\! dk_n^+ dk_n^- \!
\int \!\! dk_{\bar n}^+ dk_{\bar n}^- d^2k_{\bar n}^\perp d^4k_s
\nn\\
&\times \: \delta ^4(p_n- P_{X_n}) \> \delta ^4(p_{\bar n}- P_{X_{\bar n}})
\> \delta ^4(k_s- P_{X_s})\
\langle 0| \overline {Y}_{\bar n}\, {Y}_n |X_s \rangle
\langle X_s| {Y}^\dagger_n\, \overline {Y}_{\bar n}^\dagger |0\rangle \nn\\
& \times
\int\!\! d\omega\,d\bar\omega\:
|C(\omega,\bar\omega)|^2
\big\langle 0 \big| \slash\!\!\!\hat {\bar n} \chi_n(0) \big| X_n \big\rangle
\big\langle X_n \big|\bCH n \omega (0) \big| 0 \big\rangle
\big\langle 0 \big|\bar \chi_{\bar n} (0) \big| X_{\bar n} \big\rangle
\big\langle X_{\bar n} \big| \slash\!\!\!\hat n \chi_{{\bar n},\bar\omega} (0) \big| 0 \big\rangle
\,.
\end{align}
In the remainder of this section we will simplify this formula as much as
possible prior to specifying the exact constraints on the restricted sum of
states. First we decompose the delta functions into label and residual parts as
\begin{align} \label{decomposedelta-1}
\delta^4 (p_n \!-\! P_{X_n}) &= \delta_{\tilde{p}_n, \tilde{P}_{X_n}}
\> \delta^4 (k_n \!-\! K_{X_n})
= \delta_{\tilde{p}_n^-, \omega}
\delta_{\tilde{p}_n^\perp, 0 }\! \int\!\! \frac{d^4x}{(2\pi)^4} \>
e^{ i\left[ (k_n^+ \!-\! K_{X_n}^+)\frac{x^-}2
+ (k_n^- \!-\! K_{X_n}^-)\frac{x^+}2
- K_{X_n}^\perp\cdot \, x^\perp \right] }
, \nn \\
\delta^4 (p_{\bar n} \!-\! P_{X_{\bar n}}) &= \delta_{\tilde{p}_{\bar n}, \tilde{P}_{X_{\bar n}}} \>
\delta^4 (k_{\bar n} - K_{X_{\bar n}})
= \delta_{\tilde{p}_{\bar n}^+, -\bar\omega}
\delta_{\tilde{p}_{\bar n}^\perp, 0 }
\> \int\!\! \frac{d^4y}{(2\pi)^4} \> \:
e^{i\>(k_{\bar n} - K_{X_{\bar n}})\>\cdot \>y} , \nn \\
\delta^4 (p_s \!-\! P_{X_s}) & = \delta^4 (k_s \!-\! K_{X_s})
=
\> \int\!\! \frac{d^4z}{(2\pi)^4} \> \:
e^{i\>(k_s - K_{X_s})\>\cdot \>z}
\,,
\end{align}
where there is no $k_n^\perp$ in the first line (or below) because we
fixed $k_n^\perp=0$. In the second equality on lines 1 and 2 we
replaced $\tilde{P}_{X_n}^- , \tilde{P}_{X_{\bar n}}^+$ with the labels
$\omega ,
\bar{\omega}$ respectively using the momentum conservation delta-functions
discussed below Eq.~(\ref{Qconserved}). We also decompose
\begin{align}
\label{decomposedelta-2}
\delta ^4(q\!-\!P_{X_n}\!-\!P_{X_{\bar n}}\!-\! K_{X_s})
&= \delta _{Q,\tilde{p}_n^-} \,
\delta_{Q,\tilde{p}_{\bar n}^+} \,
\delta^4(k_n\!+\! k_{\bar n} \!+\! k_{s} )
\,,
\end{align}
where we have replaced $P_{X_n}, P_{X_{\bar n}}$ with $p_n,p_{\bar n}$ by
using the delta functions in Eq.~(\ref{decomposedelta-1}).
Next we use Eqs.~(\ref{decomposedelta-1}) and (\ref{decomposedelta-2}) in
Eq.~(\ref{factorizedcross-section-new}) and with the exponential factors of
$e^{-i K_{X_n}\cdot x}, e^{-iK_{X_{\bar n}}\cdot y}$, and $e^{-i K_{X_s}\cdot z}$ in
Eq.~(\ref{decomposedelta-1}) we translate the collinear and soft fields to the
positions $x$, $y$, and $z$, respectively. This gives
\begin{align}
\label{factorizedcross-section-new-2}
\sigma &=
\frac{\pi}{(2\pi)^8} \frac{Q^2 K_0}{4}\, {\cal M} \!\!
\sum^{res.}_{X_n X_{\bar n} X_s} \!\! \int\!\! dk_n^+ dk_n^- dk_{\bar n}^+ dk_{\bar n}^-
d^2k_{\bar n}^\perp d^4 k_s
\!\! \int\!\! d^4 x \, d^4 y \, d^4 z \> \delta^4(k_n\ensuremath{\! + \!} k_{\bar n} \ensuremath{\! + \!} k_s)
\nn\\
&\times \big |C(Q,\mu)\big |^2\:
\text{Exp} \Big [\frac{i}{2} k_n^+ x^- \ensuremath{\! + \!} \frac{i}{2} k_n^- x^+ \ensuremath{\! + \!}
ik_{\bar n}\!\cdot\! y \ensuremath{\! + \!} i k_s \!\cdot\! z \Big ] \
\big\langle 0 \big| (\overline {Y}_{\bar n}\, {Y}_n) (z) \big|X_s \big\rangle
\big\langle X_s\big| ({Y}^\dagger_n\, \overline {Y}_{\bar n}^\dagger) (0)
\big|0\big \rangle
\nn \\
&\times
\big\langle 0 \big| \slash\!\!\!\hat {\bar n} \chi_n(x) \big| X_n \big\rangle
\big\langle X_n \big|\bCH n Q (0) \big| 0 \big\rangle
\big\langle 0 \big|\overline \chi_{\bar n} (y) \big| X_{\bar n} \big\rangle
\big\langle X_{\bar n} \big| \slash\!\!\!\hat n \chi_{{\bar n},-Q} (0) \big| 0 \big\rangle
\,,
\end{align}
where here the large label momenta in the jets are fixed to be $Q$,
$\overline\chi_{n,Q}= \overline\chi_n\delta_{Q,\bar {\cal P}^\dagger}$ and
$\chi_{{\bar n},-Q}=\delta_{-Q,{\cal P}}\chi_{\bar n}$. Next we can use the fact that the
$n$-collinear graphs are independent of $k_n^-$ and $k_n^\perp$, so that the
above $n$-collinear matrix element is proportional to
$\delta(x^+)\delta(x_\perp)$~\cite{Bauer:2001yt}. Similarly the ${\bar n}$-collinear
matrix element is $\propto \delta(y^-)\delta(y_\perp)$. It is not crucial to use
these $\delta$-functions at this stage, but they do allow us to simplify the
formula by dropping $x^+$, $x_\perp$, $y^-$, and $y_\perp$ dependence in the
exponentials. Performing a few integrals we arrive at a fairly simple form for
the cross-section
\begin{align}
\label{factorizedcross-section-pre-res}
\sigma\ = & \
\sigma_0\:
\big |C(Q,\mu)\big |^2 {\cal M} \!
\int\!\! dk_n^+ \, dk_{\bar n}^-\, dk_s^+ \, dk_s^-
\\
& \times \sum^{res.}_{X_n} \frac{1}{2\pi} \int\!\! d^4 x \: e^{ik_n^+ x^-/2} \
\big\langle 0 \big| \slash\!\!\!\hat {\bar n} \chi_n(x) \big| X_n \big\rangle
\big\langle X_n \big|\bCH n Q (0) \big| 0 \big\rangle
\nn\\
&\times \sum^{res.}_{X_{\bar n}} \frac{1}{2\pi} \int\!\! d^4 y \: e^{ik_{\bar n}^- y^+/2} \
\big\langle 0 \big|\overline\chi_{\bar n} (y) \big| X_{\bar n} \big\rangle
\big\langle X_{\bar n} \big| \slash\!\!\!\hat n \chi_{{\bar n},-Q} (0) \big| 0 \big\rangle
\nn\\
&\times \sum^{res.}_{X_s} \!\! \:\frac{1}{4N_c (2\pi)^2} \int\!\! dz^+ dz^- \:
e^{\frac{i}{2} (k_s^+z^- + k_s^-z^+) }\
\langle 0| \overline {Y}_{\bar n}\, {Y}_n (z^-,z^+) |X_s \rangle
\langle X_s| {Y}^\dagger_n\, \overline {Y}_{\bar n}^\dagger (0) |0\rangle
\,. \nn
\end{align}
The result in Eq.~(\ref{factorizedcross-section-pre-res}) is a
factorized product of Fourier transforms over $n$-collinear,
${\bar n}$-collinear, and soft matrix elements. We introduced a $1/N_c$ in
front of the soft-matrix element in
Eq.~(\ref{factorizedcross-section-pre-res}), and include a
compensating factor $N_c$ in $\sigma_0$. This equation provides a good
starting point for the derivation of any differential cross-section
(for massive or massless dijet events). The new normalization factor
$\sigma_0$ is just the total Born cross-section
\begin{eqnarray}
\sigma_0 &=& N_c \frac{Q^2}{8\pi} \, K_0
= N_c\, \frac{4\pi \alpha^2}{3 Q^2}
\bigg[\, e_t^2 -
\frac{2 Q^2\, v_e v_t e_t}{Q^2-m_Z^2} +
\frac{Q^4 (v_e^2+a_e^2)(v_t^2+a_t^2)}{(Q^2-m_Z^2)^2} \bigg] \,.
\end{eqnarray}
For massive quarks
$\sigma_0$ depends on $\beta_m=(1-4m^2/Q^2)^{1/2}$ through an extra
multiplicative factor of $\beta_m (3-\beta_m^2)/2 = 1 - 6
m^4/Q^4+\ldots$. This is only a 1\% correction to $\sigma_0$ for
$Q/m\sim 5$.
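This estimate is simple to verify numerically (a cross-check of ours, with the
illustrative choice $Q/m=5$):
\begin{verbatim}
import numpy as np

m_over_Q = 0.2                        # illustrative: Q/m ~ 5
beta = np.sqrt(1.0 - 4.0 * m_over_Q**2)
exact = beta * (3.0 - beta**2) / 2.0  # massive-quark factor in sigma_0
approx = 1.0 - 6.0 * m_over_Q**4      # its expansion quoted above
print(exact, approx)                  # both ~0.990, i.e. a ~1% correction
\end{verbatim}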
To proceed further we now need to make explicit the prescription for how the
$n$- and $\bar n$-collinear and the soft particles enter the invariant masses
$s_t$ and $s_{\bar t}$. This removes the implicit restrictions on the sums
over states indicated in Eq.~(\ref{factorizedcross-section-pre-res}). In the
next subsection we implement the prescriptions
for the hemisphere jet invariant masses. In Sec.~\ref{sectionotheralgo} we
briefly discuss how the implementation changes for other invariant mass
prescriptions.
\subsection{Factorization for Hemisphere invariant masses in SCET}
\label{section_hemi}
In the hemisphere mass case all the final state particles are assigned
to belong to one of two hemispheres defined with respect to the thrust
axis. The boundary between the two hemispheres is perpendicular to the
thrust axis and centered at the $e^+e^-$ collision point, see
Fig.~\ref{fig:topjet}. Thus the top and antitop jets we consider
correspond to all the particles in the respective two hemispheres and
the invariant mass of each jet is defined to be the total invariant
mass of all the final state particles in each hemisphere. As we show
explicitly below, the requirement that these jet invariant masses are
both close to the top mass, automatically restricts the final state to
be dijet-like, and eliminates the need to introduce any additional
event-shape constraint. We stress that some mechanism to control the
soft particles is absolutely crucial for establishing the
factorization theorem and the unique definition of the soft function
$S$. Here this is accomplished by the fact that all soft particles
enter the invariant mass variables $M^2_{t,\bar t}$.
The invariant mass of each hemisphere includes contributions from both
soft and collinear particles. The total momentum of the collinear
particles in the $n$-hemisphere is $P_{X_n}$ and in the
${\bar n}$-hemisphere is $P_{X_{\bar n}}$. The total final state soft momentum
$K_{X_{s}}$ is split between the two hemispheres and can be divided
as:
\begin{eqnarray}
K_{X_s} = k^a_s + k^b_s
\end{eqnarray}
where $k^a_s$ and $k_s^b$ correspond to the total momenta of all the soft
partons in the $n$ and ${\bar n}$ hemispheres, respectively. It is useful to think of these
hemisphere momenta as the result of hemisphere projection operators
$\hat{P}_a,\hat{P}_b$:
\begin{eqnarray} \label{PaPb}
\hat{P}_a\> | X_s\rangle = k^a_s \>| X_s\rangle,
\>\>\>\> \hat{P}_b \>|X_s\rangle = k_s^b \>| X_s\rangle.
\end{eqnarray}
In other words, these projection operators act on each state
$|X_s\rangle$, pick out the soft partons in the respective hemisphere
and add up their total momentum. Note that the eigenvalues are
dependent on the state $X_s$, so $k_s^a=k_s^a[X_s]$ and
$k_s^b=k_s^b[X_s]$. We can now define the invariant mass of each jet
as $(P_{X_n} + k^a_s)^2$ and $(P_{X_{\bar n}} + k^b_s)^2$ for the $n$ and
${\bar n}$ hemispheres respectively. The delta functions $\delta ^4 (p_n -
P_{X_n})\, \delta ^4 (p_{\bar n} - P_{X_{\bar n}})$ in the second line of
Eq.~(\ref{factorizedcross-section-new}) allow us to define the jet
invariant masses in terms of $p_n, p_{\bar n}$ as $(p_{n} + k^a_s)^2$ and
$(p_{{\bar n}} + k^b_s)^2$ for the $n$ and ${\bar n}$ hemispheres respectively.
Note that this implements a very simple form of a jet algorithm. For a different
jet algorithm we would change the definitions of the operators $\hat P_a$ and
$\hat P_b$. When running a jet algorithm in inclusive $e^+e^-$
mode~\cite{Catani:1991hj,Butterworth:2002xg}, each soft parton is still accounted for, having a
certain probability of being assigned to either the top or the antitop invariant
mass. We discuss other algorithms in Sec.~\ref{sectionotheralgo}.
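For orientation, the action of these projectors on a list of final-state
momenta can be sketched as follows (our own illustration, not a definitive
implementation; it assumes four-vectors stored as numpy arrays
$[E,p_x,p_y,p_z]$ and a unit thrust axis \texttt{t\_hat}, and the helper
\texttt{hemisphere\_masses} is hypothetical):
\begin{verbatim}
import numpy as np

def hemisphere_masses(particles, t_hat):
    # assign each particle by the sign of its momentum along the thrust axis,
    # mimicking P_a, P_b of Eq. (PaPb), then form M_t^2 and M_tbar^2
    P_a = np.zeros(4)   # total momentum in the n (top) hemisphere
    P_b = np.zeros(4)   # total momentum in the nbar (antitop) hemisphere
    for p in particles:
        if np.dot(p[1:], t_hat) > 0:
            P_a += p
        else:
            P_b += p
    msq = lambda P: P[0]**2 - np.dot(P[1:], P[1:])
    return msq(P_a), msq(P_b)
\end{verbatim}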
If the top quark were a stable particle, these invariant mass definitions would
be obvious because $n$- and $\bar n$-collinear particles would be fully radiated
into the $n$- and $\bar n$-hemispheres, respectively. Due to the finite
lifetime of the top quark, however, we need to convince ourselves that this
invariant mass definition still works if the $n$- and $\bar n$-collinear momenta
of the top and antitop quarks, respectively, are distributed among their decay
products. So let us consider the top quark in the $n$-hemisphere. Since the top
rest frame is boosted with respect to the $e^+e^-$ c.m.\,frame with a boost
factor $Q/m$, top decay events can have final state particles appearing in the
$\bar n$-hemisphere of the antitop quark only if these final state particles
have an angle (defined in the top rest frame) smaller than $m/Q$ with respect to
the antiparticle direction. On the other hand, the top spin is only about $20\%$
polarized (for unpolarized $e^+e^-$ beams and upon averaging over the directions
of the thrust axis)~\cite{Parke:1996pr}, and thus the top decay products in the
top rest frame are distributed isotropically to a rather good approximation. The
fraction of events in this kinematical situation is therefore suppressed by
$(m/Q)^2$ and can be neglected at leading order in the power counting. Of course
the analogous conclusion also applies to the antitop quark in the $\bar n$
hemisphere. So at leading order in the power counting it is consistent to
employ the invariant mass definition of the previous paragraph.
The jet invariant mass definitions can be implemented into the cross-section of
Eq.~(\ref{factorizedcross-section-pre-res}) by inserting underneath the
$\sum_{X_s}$ the identity relation
\begin{eqnarray}
\label{identity-2}
1&=& \int \!\! dM_t^2 \> \delta \big((p_n + k_s^{a})^2-M_t^2\big)
\int d M^2_{\bar t}\> \delta \big((p_{\bar n} +k_s^{b} )^2 -M_{\bar t}^2 \big)
\nn\\
&=& \int \!\! dM_t^2 \> \delta \big((p_n + k_s^{a})^2-m^2-s_t\big)
\int d M^2_{\bar t}\> \delta \big((p_{\bar n} +k_s^{b} )^2 -m^2 - s_{\bar t} \big),
\end{eqnarray}
where $s_t=s_t(M_t)$ and $s_{\bar t}=s_{\bar t}(M_{\bar t})$ from
Eq.~(\ref{massshell}), i.e. it should be understood that $s_{t,\bar
t}$ are functions of $M_{t,\bar t}^2$. In the second line $m$ is
defined as the pole mass. It is straightforward to switch the final
result to a suitable short distance mass definition, as we explain in
Sec.~\ref{sec:sdmass}. Decomposing the $\delta$-functions at leading
order gives
\begin{eqnarray} \label{decomposedelta-3}
\delta ((p_n+k_s^a)^2-m^2-s_t) &=& \frac{1}{Q} \> \delta \Big(k_n^+ +k_s^{+a} -
\frac{m^2 + s_t}{Q}\Big) \,, \nn \\
\delta ((p_{\bar n}+k_s^b)^2-m^2-s_{\bar t}) &= & \frac{1}{Q} \> \delta \Big(k_{\bar n}^-
+k_s^{-b}- \frac{m^2 + s_{\bar t}}{Q} \Big) \,,
\end{eqnarray}
where we set $\tilde{p}_n^-=\tilde{p}_{\bar n}^+=Q$ due to
$\delta$-functions from Eq.~(\ref{decomposedelta-2}). Carrying out
the integration over $k_s^+$ and $k_s^-$ in
Eq.~(\ref{factorizedcross-section-pre-res}) sets the arguments of the
soft function to $z^\pm=0$. Inserting the identity relation
\begin{eqnarray}
1 = \int\!\! d\ell^+ d\ell^- \delta(\ell^+ - k_s^{+a}) \delta(\ell^- - k_s^{-b})
\end{eqnarray}
the differential cross-section then reads
\begin{align}
\label{factorizedcross-section-new-1}
\frac{d^2\sigma}{dM_t^2 dM_{\bar t}^2} &=
\frac{\sigma_0}{Q^2} \:
\big |C(Q,\mu)\big |^2 {\cal M} \!
\int\!\! dk_n^+ \, dk_{\bar n}^-\, d\ell^+ \, d\ell^-
\delta \Big(k_n^+ \!+\ell^+\! -\! \frac{m^2 \!+\! s_t}{Q}\Big)
\delta \Big(k_{\bar n}^- \! +\ell^-\! -\! \frac{m^2 \!+\! s_{\bar t}}{Q} \Big)
\nn\\
& \times \sum_{X_n} \frac{1}{2\pi} \int\!\! d^4 x \: e^{ik_n^+ x^-/2} \
{\rm tr} \big\langle 0 \big| \slash\!\!\!\hat {\bar n} \chi_n(x) \big| X_n \big\rangle
\big\langle X_n \big|\bCH n Q (0) \big| 0 \big\rangle
\nn\\
&\times \sum_{X_{\bar n}} \frac{1}{2\pi} \int\!\! d^4 y \: e^{ik_{\bar n}^- y^+/2} \
{\rm tr} \big\langle 0 \big|\overline\chi_{\bar n} (y) \big| X_{\bar n} \big\rangle
\big\langle X_{\bar n} \big| \slash\!\!\!\hat n \chi_{{\bar n},-Q} (0) \big| 0 \big\rangle
\nn\\
&\times \sum_{X_s} \!\! \:\frac{1}{ N_c}
\delta(\ell^+ - k_s^{+a}) \delta(\ell^- - k_s^{-b})
{\rm tr} \langle 0| \overline {Y}_{\bar n}\, {Y}_n (0) |X_s \rangle
\langle X_s| {Y}^\dagger_n\, \overline {Y}_{\bar n}^\dagger (0) |0\rangle
\,,
\end{align}
where we have dropped the ``res.'' label on the sums, because all restrictions
are now explicitly implemented.
To see that the hemisphere definition of $s_t$ and $s_{\bar t}$ can be used to
select dijet-like events, we can check that Eq.~(\ref{sn-range}) plus the
$\delta$-functions in Eq.~(\ref{factorizedcross-section-new-1}) are sufficient
to constrain the thrust to the dijet region. At leading order the total thrust
of an event is given by
\begin{align}
Q T = | p_n^z | + | p_{\bar n}^z | + |k_s^{a\,z}| + |k_s^{b\,z}| \,,
\end{align}
where $2|p_n^z| = Q + k_n^- - k_n^+$ and $2|p_{\bar n}^z| = Q + k_{\bar n}^+ - k_{\bar n}^-$.
So the shifted thrust defined in Eq.~(\ref{tau}) is
\begin{align} \label{tauss}
\tau &= -\frac{2m^2}{Q^2} + \frac{1}{2Q}\Big[ k_n^+ - k_n^- + k_{\bar n}^- - k_{\bar n}^+
+ k_s^{a\,+} - k_s^{a\,-} + k_s^{b\,-} - k_s^{b\,+} \Big] \nn\\
&= -\frac{2m^2}{Q^2} + \frac{1}{Q}\Big[ k_n^+ + k_s^{a\,+}
+ k_{\bar n}^- + k_s^{b\,-} \Big] \nn\\
&= \frac{s_t + s_{\bar t} }{Q^2} \,.
\end{align}
To obtain the second line we used the separate conservation of the
$+$ and $-$ momentum components to eliminate
$k_n^- + k_s^{b\,-}$ and $k_{\bar n}^+ + k_s^{a\,+}$. For the last line we used
the $\delta$-functions in Eq.~(\ref{decomposedelta-3}) to get $s_t$ and $s_{\bar
t}$. Thus, the restriction to small hemisphere invariant masses $s_{t,\bar
t}$ automatically gives small $\tau$ and restricts the events to the dijet
region. The presence of a third hard jet takes us away from the
dijet region and directly shows up by a substantial positive shift of $s_t +
s_{\bar t}$ away from the peak region.
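An order-of-magnitude estimate (with illustrative values $m=172$\,GeV,
$\Gamma=1.5$\,GeV, and $Q=1$\,TeV) shows how strongly the peak-region
constraint pins down $\tau$:
\begin{verbatim}
m, Gamma, Q = 172.0, 1.5, 1000.0     # GeV; illustrative values
s_peak = m * Gamma                   # |s_t|, |s_tbar| ~ m*Gamma near the peak
print(2.0 * s_peak / Q**2)           # tau ~ 5e-4: deep in the dijet region
\end{verbatim}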
Next we simplify the form of the cross-section by defining the massive
collinear jet functions $J_n, J_{\bar n}$ as
\begin{align}
\label{jetfunc-1}
\sum_{X_n} {\rm tr}\,
\big\langle 0 \big| \slash\!\!\!\hat {\bar n} \chi_n(x) \big| X_n \big\rangle
\big\langle X_n \big|\bCH n Q (0) \big| 0 \big\rangle
&=
Q\int \frac{d^4r_n}{(2\pi)^3} e^{-i r_n\cdot x}
J_{n}(Qr_n^+ -m^2,m)
\\
&
= Q\, \delta(x^+) \delta ^2(x_\perp ) \int \!\!{dr^+_n}\:
e^{-\frac{i}{2} r_n^+ x^-}\: J_{n}(Qr_n^+ -m^2,m)\,,
\nn\\
\sum_{X_{\bar n}} {\rm tr}\,
\big\langle 0 \big|\overline\chi_{\bar n} (y) \big| X_{\bar n} \big\rangle
\big\langle X_{\bar n} \big| \slash\!\!\!\hat n \chi_{{\bar n},-Q} (0) \big| 0 \big\rangle
&=
Q \int \frac{d^4r_{\bar n}}{(2\pi)^3} e^{- i r_{\bar n}\cdot y}
J_{{\bar n}}(Qr_{\bar n}^- -m^2,m)
\nn \\
&
= Q\, \delta(y^-) \delta ^2(y_\perp ) \int \!\! {dr^-_{\bar n}}\:
e^{-\frac{i}{2} r_{\bar n}^- y^+}\: J_{{\bar n}}(Qr_{\bar n}^--m^2,m)
\,. \nn
\end{align}
Here $m$ is the pole mass just as in Eq.~(\ref{identity-2}) and we do
not display the $\mu$ dependence. Note that the subscript $Q$ on the
LHS does not change the mass-dimension of a $\chi$-field away from
$3/2$, since $\delta_{Q,\bar {\cal P}}$ is dimensionless. We remind the reader
that $\slash\!\!\!\hat n =n\!\!\!\slash/(4N_c)$, $\slash\!\!\!\hat {\bar n} =
\bar n\!\!\!\slash/(4N_c)$ and that tr is a trace over both color and spin
indices. The arguments of the jet functions, $J_{n}$ and $J_{{\bar n}}$,
in Eq.~(\ref{jetfunc-1}) are just the off-shellness of the jets,
$p_n^2-m^2$ and $p_{\bar n}^2-m^2$, respectively, but given in expanded
form. Here the labels $Q$ on the $\overline\chi_n$ and $\chi_{{\bar n}}$
fields ensure that there is only a contribution from the required
``quark'' and ``antiquark'' cut since $Q>0$. To see this recall that
the sign of the label $p$ on $\xi_{n,p}$ picks out the quark
annihilation, or antiquark production part of the
field~\cite{Bauer:2001ct}. We note that the sums over collinear states
in the collinear jet functions are unrestricted since the restrictions
are now implemented automatically through the amount the jet invariant
mass differs from $m^2$. Thus, the jet functions can be written as the
discontinuity of a forward scattering amplitude after summing over the
collinear states:
\begin{eqnarray}
\label{jetfunc2}
J_{n}(Qr_n^+ - m^2,m)
&=&
\frac{-1}{2\pi Q}\, \textrm{Disc}\! \int\!\! d^4 x \: e^{i r_n\cdot x} \,
\langle 0|\text{T}\{ \bCH n Q (0)\slash\!\!\!\hat {\bar n} \chi_n(x)\}|0 \rangle \, ,
\nn
\\
J_{{\bar n}}(Qr_{\bar n}^- - m^2,m)
&=&
\frac{1}{2\pi Q }\, \textrm{Disc}\! \int\!\! d^4 x \: e^{i r_{\bar n}\cdot x} \,
\langle 0|\text{T}\{ \bar \chi_{\bar n} (x) \slash\!\!\!\hat n \chi_{{\bar n},-Q}(0)\} |0 \rangle \, .
\end{eqnarray}
The collinear fields in the SCET jet functions $J_n$ and $J_{\bar n}$ are defined
with zero-bin subtractions~\cite{Manohar:2006nz}, which avoids double counting
with the soft-function. Using Eq.~(\ref{jetfunc-1}) and performing all the
remaining integrals in the cross-section of
Eq.~(\ref{factorizedcross-section-new-1}) we arrive at the SCET result for the
double differential hemisphere invariant mass cross-section
\begin{align}
\label{SCETcross-hem}
\frac{d^2\sigma }{dM^2_t\>dM^2_{\bar t}} &=
\sigma_0
\> H_Q(Q,\mu)\: {\cal M}(m,\mu) \\
&\ \times \int_{-\infty}^{\infty}\!\!\!d\ell^+ d\ell^-
\> J_n(s_t - Q\ell^{+},m,\mu) J_{\bar n}(s_{\bar t} - Q\ell^{-},m,\mu)
S_{\rm hemi}(\ell^+,\ell^-,\mu,m)
\,, \nn
\end{align}
where the hard function $H_Q(Q,\mu) = | C(Q,\mu)|^2$. Here the hemisphere soft
function in SCET is
defined by
\begin{align} \label{SSS}
S_{\rm hemi}(\ell^+,\ell^-,\mu,m) &= \frac{1}{N_c} \sum _{X_s}
\delta(\ell^+ - k_s^{+a}) \delta(\ell^- - k_s^{-b})
\langle 0| \overline {Y}_{\bar n}\, {Y}_n (0) |X_s \rangle
\langle X_s| {Y}^\dagger_n\, \overline {Y}_{\bar n}^\dagger (0) |0\rangle \,.
\end{align}
At tree level for stable top quarks $H_Q=1$, $J_{n}(s_t) = \delta(s_t)$,
$J_{{\bar n}}(s_{\bar t}) = \delta(s_{\bar t})$, and $S_{\rm hemi}(\ell^+,\ell^-) =
\delta(\ell^+)\delta(\ell^-)$, and integrating Eq.~(\ref{SCETcross-hem}) over
$s_t$ and $s_{\bar t}$ gives the total tree-level Born cross-section $\sigma_0$.
This provides a check for the normalization of Eq.~(\ref{SCETcross-hem}). The
argument $m$ on the soft-function in Eq.~(\ref{SSS}) and ${\cal M}(m,\mu)$ in
Eq.~(\ref{SCETcross-hem}) account for massive top-quark bubbles that are
perturbative and start at ${\cal
O}(\alpha_s^2(m))$~\cite{Kniehl:1989kz,Burgers:1985qg,Hoang:1995ex}. Note that
Eq.~(\ref{SCETcross-hem}) extends the SCET computation of the massless dijet
cross-section in Refs.~\cite{Bauer:2002ie,Bauer:2003di} to all orders in
perturbation theory for the jet-functions.
In the factorization theorem in Eq.~(\ref{SCETcross-hem}) the jet-functions
$J_n$ and $J_{\bar n}$ describe the dynamics of the top and antitop jets. In the
next section we will see that these jet functions can be computed in
perturbation theory and at the tree level are just Breit-Wigner distributions.
The soft matrix elements $ \langle 0| \overline {Y}_{\bar n}\, {Y}_n (0)|X_s \rangle
\langle X_s| {Y}^\dagger_n\, \overline {Y}_{\bar n}^\dagger (0)|0\rangle$, on the other hand,
depend on the scale $\Lambda _{QCD}$, and thus the soft function $S_{\rm
hemi}(\ell^+,\ell^-)$ is governed by non-perturbative QCD effects. The
momentum variables $\ell^\pm$ represent the light cone momentum of the soft
particles in each of the two hemispheres, and $S_{\rm hemi}(\ell^+,\ell^- )$
describes the distribution of soft final state radiation.
Eq.~(\ref{SCETcross-hem}) already demonstrates that the invariant mass spectrum
for unstable top quarks is not a Breit-Wigner function even at tree level
because the convolution with the soft function $S_{\rm hemi}$ modifies the
observed distribution. The effects of the convolution on the observable
invariant mass distribution are discussed in Sec.~\ref{section4}.
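The effect of this smearing can be made concrete with a toy numerical sketch.
It is entirely illustrative: we use the tree-level Breit-Wigner line shape in
$s_t$ and an ad hoc exponential model with a $\sim 1$\,GeV hadronic scale in
place of the actual $S_{\rm hemi}$, and we look at a one-dimensional slice
along $\ell^+$ of the convolution in Eq.~(\ref{SCETcross-hem}):
\begin{verbatim}
import numpy as np

Q, m, Gamma = 1000.0, 172.0, 1.5   # GeV; illustrative values
BW = lambda s: (m * Gamma / np.pi) / (s**2 + (m * Gamma)**2)  # tree-level shape
S = lambda l: np.exp(-l)           # toy soft model, ~1 GeV scale (not S_hemi)

ell = np.linspace(0.0, 10.0, 2001) # soft light-cone momentum l^+ in GeV
dl = ell[1] - ell[0]
observed = lambda s: np.sum(BW(s - Q * ell) * S(ell)) * dl

s_vals = np.linspace(-2000.0, 6000.0, 801)
peak = s_vals[np.argmax([observed(s) for s in s_vals])]
print(peak)  # peak pushed to s_t > 0 by an amount of order Q * (hadronic scale)
\end{verbatim}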
To sum large logs in Eq.~(\ref{SCETcross-hem}) the SCET production current can
be run from $\mu=Q$ down to $\mu= m$, which then characterizes the typical
virtuality of the collinear degrees of freedom in massive SCET. In the process,
large logarithms of $Q/m$ are summed into the hard function $H_Q(Q,\mu)$. In
the next section we integrate out the scale $m$ and match these SCET jet
functions onto bHQET jet functions.
\subsection{Factorization of Jet mass effects in HQET}
\label{subsectionfactorizationtheorem}
The main result of the last subsection is the factorization of the
scales $Q$ and $m$ in the differential cross section of
Eq.~(\ref{SCETcross-hem}). In this section we further factorize the
scale $m$ from the low energy scales $\Gamma$, $\hat s$, and
$\Delta$. This will allow us to sum large logs of $\Gamma/m$ and $\hat
s_{t,\bar t}/m$ in the jet functions, and lower the scale of the soft
functions to $\Delta$. This step is also important for treating the
width effects. As explained earlier, one can formulate width effects
in a gauge invariant way with a natural power counting in HQET,
whereas doing so in a relativistic theory such as SCET is notoriously
difficult.
To perform the scale separation and sum the logarithms requires us to match and
run below the scale $\mu=m$. This can be done in a standard way, by matching and
running of the bHQET current in Eq.~(\ref{JbHQET}), as we described in
Sec.~\ref{buHQET}. However, due to the factorization properties of SCET
which leads to a decoupling of the $n$-collinear, ${\bar n}$-collinear, and soft
sectors, the matching and running below the scale $\mu=m$ can also be done
independently for $J_n$, $J_{\bar n}$, and $S$. In the following we explain this
second method.
As discussed in Sec.~\ref{buHQET} the soft function above and below the scale
$m$ is identical, except for certain vacuum polarization effects from graphs
with top-quark bubbles that only exist in SCET. For the soft-function in bHQET
we have
\begin{align}
S_{\rm hemi}(\ell^+,\ell^-,\mu) &= \frac{1}{N_c} \sum _{X_s}
\delta(\ell^+ - k_s^{+a}) \delta(\ell^- - k_s^{-b})
\langle 0| \overline {Y}_{\bar n}\, {Y}_n (0) |X_s \rangle
\langle X_s| {Y}^\dagger_n\, \overline {Y}_{\bar n}^\dagger (0) |0\rangle \,,
\end{align}
where there is no $m$ dependence. The matching condition between the soft
functions in the two theories is
\begin{align}
S_{\rm hemi}(\ell^+,\ell^-,\mu,m ) = T_0(m,\mu) S_{\rm hemi}(\ell^+,\ell^-,\mu)
\,,
\end{align}
where $T_0(m,\mu)$ is a Wilson coefficient. Large logarithms in the soft
function can be summed by computing the anomalous dimension of the soft function
and using RG evolution to run between $\Delta$ and $m$ and between $m$ and $Q$
as illustrated by the line labeled $U_S$ in Fig.~\ref{fig:theory}.
\begin{figure}
\centerline{
\includegraphics[width=15cm]{ttbar_scales.eps}
}
\caption{Scales and functions appearing in the formula for the
invariant mass distribution. The result is determined by matching at the
physical scales and running to sum large logs as shown. We show both the
top-down and bottom-up approach to the running. The evolution for $U_H$ and
$U_C$ is local, while all other evolution functions involve convolutions. Note
that the evolution functions obey $U_H=U_{J_-}\otimes U_{J_+}\otimes U_S$ and
$U_C=U_{B_-}\otimes U_{B_+}\otimes U_S$ where $\otimes$ indicates
convolutions. }
\label{fig:theory}
\end{figure}
For the SCET collinear degrees of freedom the power counting for the
virtuality is $p_c^2 \sim m^2$. Thus, $J_n$ and $J_{\bar n}$ describe the
physics of jets with an invariant mass up to $M^2\sim \mu^2 \sim
m^2$. However, the restriction of being in the peak region means that
$M^2-m^2\ll m^2$. This disparity gives rise to the large
logarithms in the collinear jet functions. Intuitively, this can also
be understood by noting that if one starts out with a top quark that
is close to its mass shell, a typical collinear SCET gluon will knock
the top far offshell so that $p_{c}^2- m^2 \sim m^2 $. By restricting
the jet functions to $p_{c}^2-m^2 \ll m^2$ we forbid such real
radiation contributions, but not virtual contributions. The latter must be
integrated out explicitly by switching to the description of the jet
functions in the boosted unstable HQET theories discussed in
Sec.~\ref{buHQET}. In these HQETs the only fluctuations are due to low
energy ultracollinear gluons that preserve the condition $M^2 -m^2
\ll m^2$.
To determine the definitions of the bHQET jet functions we follow the
same procedure as for the bHQET current in Eq.~(\ref{JbHQET}), namely
boost the SCET jet function in Eq.~(\ref{jetfunc2}) to the heavy quark
rest frame, giving $\bar \psi(x) W(x) W(0)\psi(0)$, then match onto
HQET $\psi(x)\to h_v(x)$. We then boost back to the moving frame where
$v\to v_\pm$. The spin structure can also be simplified to give
\begin{align}
\frac{1}{Q} \overline\chi_{n,Q} \slash\!\!\!\hat {\bar n}\, \chi_n \to
\frac{1}{Q}\, \bar h_{v_+} \slash\!\!\!\hat {\bar n}\, h_{v_+} =
\frac{v_+\cdot{\bar n}}{4N_c Q}\, \bar h_{v_+} h_{v_+}
= \frac{1}{4N_c m}\, \bar h_{v_+} h_{v_+} \,.
\end{align}
Thus the bHQET jet functions are defined as
\begin{eqnarray}
\label{hqetjet}
B_+(2v_+\!\cdot\! k) &=&
\frac{-1}{8\pi N_c m} \, \textrm{Disc}\! \int\!\! d^4 x \: e^{i k \cdot x} \,
\,
\langle 0|{\rm T}\{\bar{h}_{v_+}(0) W_n(0) W_n^{\dagger}(x) h_{v_+} (x)\}|0 \rangle\, ,
\nn
\\
B_{-}(2v_-\!\cdot\! k) &=&
\frac{1}{8\pi N_c m} \, \textrm{Disc}\! \int\!\! d^4 x \: e^{i k\cdot x} \,
\langle 0|{\rm T}\{\bar{h}_{v_-}(x) W_{\bar n}(x) W_{\bar n}^{\dagger}(0) h_{v_-} (0)\}|0 \rangle .
\end{eqnarray}
These bHQET jet functions can be calculated using the usual Feynman rules of
HQET except that the gluons have ucollinear scaling as in Eq.~(\ref{BHQETres}).
The $W$-Wilson lines in $B_\pm$ also contain these boosted gluons. Since $p_n^2
-m^2=2m v_+ \cdot k$ and $p_{\bar n}^2-m^2 = 2 m v_-\cdot k$, we can identify
the arguments of the bHQET jet functions as
\begin{eqnarray}
2 v_+\cdot k = \frac{s_t}{m} = \hat s_t \,,\qquad\qquad
2 v_-\cdot k = \frac{s_{\bar t}}{m} = \hat s_{\bar t} \,.
\end{eqnarray}
In the factorization theorem these arguments are shifted by the soft gluon
momenta as shown in Eq.~(\ref{bHQETcross-hem}) below. Recall that the fields
$h_{v_+}$ and $h_{v_-}$ are defined with zero-bin subtractions on their
ultracollinear momenta. For Eq.~(\ref{hqetjet}) these subtractions can be
thought of as being inherited from the SCET fields in the matching. They remove
the light-cone singularities $n\cdot k \to 0$ and ${\bar n}\cdot k\to 0$ in $B_+$ and
$B_-$ respectively, and are important to ensure that the width $\Gamma$ is
sufficient to make $B_{\pm}$ infrared finite.
In general the matching of the jet functions in SCET onto those in bHQET could
take the form
\begin{eqnarray}\label{matchscetbhqet}
J_{n,{\bar n}}(m\hat s,m,\Gamma,\mu )
= \int_{-\infty}^\infty \!\! d\hat{s}' \: \>T_{\pm} (\hat{s},\hat{s}',m,\mu )\>
B_\pm(\hat{s}',\Gamma,\mu ),
\end{eqnarray}
where the convolution takes into account the fact that depending on
the definition, the observable $\hat s$ could be sensitive to scales
of ${\cal O}(m)$ and ${\cal O}(\Gamma)$. In such a case, since $\hat
s'$ does not know about the scale $m$, it cannot be identical to
$\hat s$. The convolution with $T_{\pm}(\hat s,\hat s',m,\mu)$ then
compensates for this difference. In our case (and most reasonable
cases) the definition of the invariant mass is not sensitive to $m$,
so we have $T_\pm(\hat s,\hat s',m,\mu) =
\delta(\hat s-\hat s') T_\pm(m,\mu)$ and the matching equations are simply
\begin{align} \label{matchscetbhqet2}
J_n(m\hat s,m,\Gamma,\mu_m) &= T_+(m,\mu_m)\: B_+(\hat{s},\Gamma,\mu_m) \,,\nn\\
J_{\bar n}(m\hat s,m,\Gamma,\mu_m) &= T_-(m,\mu_m)\: B_-(\hat{s},\Gamma,\mu_m) \,.
\end{align}
Since there are no mass-modes in bHQET the function ${\cal M}$ also appears as
part of the Wilson coefficient. From this we define a hard-coefficient that
contains the mass corrections as\footnote{In explicit computations scheme
dependence may affect the manner in which the mass-corrections are divided up
between $T_\pm$, $T_0$, and ${\cal M}$, however this dependence will cancel
out in the product that gives $H_m$.}
\begin{align} \label{Cm}
H_m(m,\mu_m) = T_+(m,\mu_m)T_-(m,\mu_m) T_0(m,\mu_m) {\cal M}(m,\mu_m) \,.
\end{align}
By charge conjugation we know that the jet functions for the top and antitop
have the same functional form, and that $T_+ = T_-$. When we sum large logs into
the coefficient $H_m$ it develops an additional dependence on $Q/m$ through its
anomalous dimension which depends on $v_+\cdot {\bar n} = v_-\cdot n =Q/m$. Note that
in principle $H_m(m,\mu)$ and the factors in Eq.~(\ref{Cm}) can also have $Q/m$
dependence at NNLL order. For related discussions see
Refs.~\cite{Becher:2007cu,Chiu:2007yn}.
Since the
functions $T_\pm$ are independent of the top width $\Gamma$, we are
free to set $\Gamma=0$ (i.e.\,use stable top quarks) for the matching
calculations at any order in perturbation theory. At tree level we
need to compute the
\begin{figure}
\centerline{
\includegraphics[width=12cm]{BHQETtree.eps}
}
\caption{Tree level top-quark jet functions in a) SCET and b) bHQET. }
\label{fig:Bjet}
\end{figure}
discontinuity of the graphs in Fig.~\ref{fig:Bjet} which have a trace over spin
and color indices. For $\Gamma=0$ this gives
\begin{align}
B_{+}^{\rm tree}(\hat s,{\Gamma=0})
&= \frac{-1}{8\pi N_c m} (-2N_c)\: {\rm Disc} \Big( \frac{i}{v_+ \cdot k +
i0} \Big)
= \frac{1}{4\pi m} {\rm Im} \Big( \frac{-2 }{v_+ \cdot k + i0} \Big) \nn\\
&= \frac{1}{m}\: \delta(2v_+\cdot k)
= \frac{1}{m}\: \delta(\hat s) = \delta(s) \,,
\end{align}
which is identical to the result for the corresponding SCET jet function, so
at tree level $T_+=T_-=1$.
Plugging Eq.~(\ref{matchscetbhqet2}) into Eq.~(\ref{SCETcross-hem}), and
incorporating renormalization group evolution, the form for the differential cross
section is
\begin{align}
\label{bHQETcross-hem}
\left( \frac{d^2\sigma }{dM^2_t\>dM^2_{\bar t}} \right)_{\rm hemi} \!\!\! &=
\sigma_0
\> H_Q(Q,\mu_m) H_m\Big(m,\frac{Q}{m},\mu_m,\mu\Big) \\
&\quad\times \int_{-\infty}^{\infty}\!\!\!d\ell^+ d\ell^-
\> B_+\Big(\hat s_t- \frac{Q\ell^{+}}{m},\Gamma,\mu\Big) \:
B_-\Big(\hat s_{\bar t} - \frac{ Q\ell^{-}}{m},\Gamma,\mu\Big)\:
S_{\rm hemi}(\ell^+,\ell^-,\mu)
. \nn
\end{align}
Eq.~(\ref{bHQETcross-hem}) is our final result in terms of the pole mass $m$.
The analogous result for a short distance mass is given in the next section.
Here $H_m(m,Q/m,\mu_m,\mu)$ is the hard coefficient $H_m(m,\mu_m)$ run down from
$\mu_m$ to $\mu$, and we still have $H_Q(Q,\mu_m) = | C(Q,\mu_m)|^2$, and the
soft function with Wilson lines evaluated at $x=0$,
\begin{align}
S_{\rm hemi}(\ell^+,\ell^-,\mu) &= \frac{1}{N_c} \sum _{X_s}
\delta(\ell^+ \ensuremath{\! - \!} k_s^{+a}) \delta(\ell^- \ensuremath{\! - \!} k_s^{-b})
\langle 0| (\overline {Y}_{\!{\bar n}})^{ca'} ({Y}_n)^{cb'} |X_s \rangle
\langle X_s| ({Y}^\dagger_n)^{b'c'} (\overline {Y}_{\!{\bar n}}^\dagger)^{a'c'}
|0\rangle
\,.
\end{align}
For completeness we wrote out the color indices from Eq.~(\ref{factor-m-elt}).
It is interesting to note that in the result in Eq.~(\ref{bHQETcross-hem}) the
final matrix elements only involve Wilson lines (since the coupling of gluons to
a heavy quark field $h_{v_+}$ in $B_+$ is the same as to a Wilson line $W_{v_+}$).
To conclude this section we finally repeat the computation of the tree level
bHQET jet functions, but now for the realistic case with $\Gamma\ne 0$ in the
HQET propagators. The computation is done at a scale $\mu\gtrsim \Gamma$, but the
$\mu$ dependence does not show up at tree level. Fig.~\ref{fig:Bjet}b gives
\begin{align}
\label{Bpmtree}
B_\pm^{\rm tree} (\hat s,\Gamma)
&= \frac{-1}{8\pi N_c m} (-2N_c)\: {\rm Disc} \Big( \frac{i}{v_\pm \cdot k +
i\Gamma/2} \Big)
= \frac{1}{4\pi m} {\rm Im} \Big( \frac{-2 }{v_\pm \cdot k + i\Gamma/2} \Big) \nn\\
&= \frac{1}{\pi m}\: \frac{\Gamma}{\hat s^2 + \Gamma^2} \,.
\end{align}
Thus we see that $B_\pm(\hat s)$ are equal to Breit-Wigners at lowest
order in $\alpha_s$. At higher orders in perturbation theory the width
will cut off the IR divergences that would otherwise occur at $\hat
s=0$. The functions $B_\pm$ at the scale $\mu$ can therefore be
computed perturbatively to any desired order in $\alpha_s$. In general
the perturbative ``matching'' corrections will lead to distortions of
the tree-level Breit-Wigner distributions shown in
Eq.~(\ref{Bpmtree}), as does the potential separate running between
$\mu_\Delta$ and $\mu_\Gamma$ discussed below in Sec.~\ref{sec:RGE}
and shown in Fig.~\ref{fig:theory}.
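As a small check of Eq.~(\ref{Bpmtree}) (with illustrative values for $m$ and
$\Gamma$), the tree-level Breit-Wigner is unit-normalized when integrated over
$s=m\hat s$, consistent with $B_\pm\to\delta(s)$ for $\Gamma\to 0$:
\begin{verbatim}
import numpy as np

m, Gamma = 172.0, 1.5                                   # GeV; illustrative
shat = np.linspace(-400.0, 400.0, 400001)
B = (1.0 / (np.pi * m)) * Gamma / (shat**2 + Gamma**2)  # Eq. (Bpmtree)
print(np.sum(B) * m * (shat[1] - shat[0]))              # ~1 up to clipped tails
\end{verbatim}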
\subsection{A Short-Distance Top-Mass for Jets} \label{sec:sdmass}
The derivation of the factorization formulae (\ref{bHQETcross-hem}) in
the previous section was given in the pole mass scheme\footnote{In
Eq.~(\ref{bHQETcross-hem}) we used $m$ for the pole mass, but in this
section we write $m_{\rm pole}$, and reserve ``m'' for a generic
mass-scheme.}, $m_{\rm pole}$. It is, however, well known that the
pole mass definition leads to an artificially enhanced sensitivity to
small momenta in Feynman diagrams (see Ref.~\cite{Beneke:1998ui} for a
review) and, as a consequence, to artificially large perturbative
corrections. This behavior is particularly important for observables
that have a strong dependence on the heavy quark
mass~\cite{Hoang:1998nz,Beneke:1998rk,Hoang:1998ng,Uraltsev:1998bk,Hoang:2000yr}.
From a nonperturbative point of view, this feature is related to an
intrinsic ambiguity in the heavy quark pole mass parameter of order
the hadronization scale $\Lambda_{\rm QCD}$, and is sometimes referred
to as the ${\cal O}(\Lambda_{\rm QCD})$-renormalon problem of the pole
mass. Heavy quark mass definitions that do not have such an ${\cal
O}(\Lambda_{\rm QCD})$ ambiguity are called short-distance mass
schemes.\footnote{In practice, determining the pole mass from the
analysis of experimental data leads to values that depend strongly on
the order of perturbation theory that has been employed for the
theoretical predictions. This makes the treatment of theoretical
errors difficult.} In the factorization formulae in
Eq.~(\ref{bHQETcross-hem}), the top-mass appears in the hard function
$H_m$ and in the two jet functions $B_+(\hat s_t)$ and $B_-(\hat
s_{\bar t})$. The most important sensitivity to the top-mass scheme
is in $\hat s_t= (M_t^2-m^2)/m$ and $\hat s_{\bar t}= (M_{\bar
t}^2-m^2)/m$, where $M_{t}^2$ and $M_{\bar t}^2$ are scheme
independent observables.
A specific short-distance top quark mass scheme ``$m$'' can be defined by a
finite residual mass term $\delta m\neq 0$, as
\begin{eqnarray} \label{chgscheme}
m_{\rm pole} \, = \, m + \delta m \,,
\end{eqnarray}
where $\delta m$ starts at ${\cal O}(\alpha_s)$ or higher, and must be
strictly expanded perturbatively to the same order as other ${\cal
O}(\alpha_s)$ corrections. (This strict expansion does not apply to
powers of $\alpha_s$ times logs that are summed up by renormalization
group improved perturbation theory.) Let $B_+(\hat s,\mu,\delta m)$
denote the jet-function in the short-distance mass scheme specified by
$\delta m$. We can calculate $B_+(\hat s,\mu,\delta m)$ in two
equivalent ways. i) Use the pole-mass scheme initially by
setting $\delta m=0$ in Eq.~(\ref{LbHQET}). In this case the
mass-dependence appears in $\hat s_{\rm pole}=(M^2-m_{\rm
pole}^2)/m_{\rm pole}$ in $B_+$ and we change the scheme with
Eq.~(\ref{chgscheme}). Alternatively, ii) treat $\delta m\ne 0$ in
Eq.~(\ref{LbHQET}) as a vertex in Feynman diagrams, and take $\hat s$ to
be defined in the short-distance mass scheme right from the start, so
$\hat s= (M^2-m^2)/m$.
As discussed in Sec.~\ref{buHQET}, it is necessary that the residual mass
term is consistent with the bHQET power counting, i.e.~
\begin{eqnarray} \label{massscheme}
\delta m\sim \hat s_t\sim \hat s_{\bar t}\sim\Gamma \,.
\end{eqnarray}
Eq.~(\ref{massscheme}) restricts us to a suitable class of
short-distance mass schemes for jets. In any short-distance mass scheme which
violates Eq.~(\ref{massscheme}) the EFT expansion breaks down, and thus the
notion of a top-quark Breit Wigner distribution becomes invalid. The most
prominent example for an excluded short-distance mass scheme is the
$\overline{\mbox{MS}}$ mass scheme, $\overline m$, for which $m_{\rm
pole}-\overline m=\delta \overline m$. Here $\delta \overline m \simeq
8\,{\rm GeV} \gg \Gamma$, or parametrically $\delta \overline m\sim\alpha_s
\overline m\gg\Gamma$. Using Eq.~(\ref{Bpmtree}) and converting to the
$\overline{\mbox{MS}}$ scheme with the ${\cal O}(\alpha_s)$ residual mass term
we have
\begin{eqnarray}
B_+(\hat s, \mu,\delta \overline m\,)
&=& \frac{1}{\pi\overline m}\, \Bigg\{ \frac{\Gamma}{\big[\frac{(M_t^2-\overline
m^2)^2}{\overline m^2} + \Gamma^2 \big]}
\, + \,
\frac{(4\, \hat s\, \Gamma)\, \delta \overline m}
{\big[\frac{(M_t^2-\overline m^2)^2}{\overline m^2} + \Gamma^2\big]^2}
\Bigg\} \,.
\end{eqnarray}
Here the first term is $\sim 1/(\overline m \Gamma)$ and is swamped by the
second term $\sim \alpha_s/\Gamma^2$, which is supposed to be a perturbative
correction. This means that the $\overline{\mbox{MS}}$ mass is never
directly measured from any reconstruction mass-measurement that uses a top
Breit-Wigner at some level of the analysis. We stress that this statement
applies to any top mass determination that relies on the reconstruction of the
peak position of an invariant mass distribution.
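The mismatch is easy to quantify with a rough estimate (illustrative inputs
$\Gamma\simeq 1.5$\,GeV and $\delta\overline m\simeq 8$\,GeV, comparing the
two terms at the peak $\hat s\sim\Gamma$):
\begin{verbatim}
Gamma, delta_mbar = 1.5, 8.0            # GeV; illustrative inputs
lo  = Gamma / (2.0 * Gamma**2)          # first term at shat = Gamma
nlo = 4.0 * Gamma**2 * delta_mbar / (2.0 * Gamma**2)**2  # "correction" term
print(nlo / lo)                         # = 2*delta_mbar/Gamma ~ 11 >> 1
\end{verbatim}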
To define a short distance scheme for jet reconstruction measurements, $m_J$, we
choose the residual mass term $\delta m_J$ such that, order-by-order, the jet
functions $B_\pm$ have their maximum at $\hat s_t=\hat s_{\bar t}=0$, where
$B_+(\hat s)$ is the gauge invariant function defined in Eq.~(\ref{hqetjet}). So
order-by-order in perturbation theory the definition is given by the solution to
\begin{eqnarray}
\frac{ dB_+(\hat s,\mu,\delta m_J)}{d\hat s} \bigg|_{\hat s=0} = 0 \,.
\end{eqnarray}
We call this mass definition the {\it top quark jet-mass}, $m_J(\mu)=m_{\rm
pole}-\delta m_J$. Since the bHQET jet functions have a nonvanishing
anomalous dimension, the top jet-mass depends on the renormalization scale
$\mu$, at which the jet functions are computed perturbatively. Thus the jet-mass
is a running mass, similar to the $\overline{\mbox{MS}}$ mass, and different
choices for $\mu\gtrsim \Gamma$ can in principle be made.
For simplicity we will use the notation $\tilde B_\pm(\hat s,\mu)$ for
the bHQET jet-functions in the jet-mass scheme. At next-to-leading
order in $\alpha_s$,
\begin{eqnarray} \label{Bshift}
\tilde B_\pm(\hat s,\mu)
& = &
B_\pm(\hat s,\mu ) +
\frac{1}{\pi m_J} \, \frac{(4\,\hat s\,\Gamma)\, \delta m_J}{(\hat s^2 + \Gamma^2)^2}
\,,
\end{eqnarray}
where $m_J=m_J(\mu)$ and $B_\pm$ are the pole-mass jet functions to ${\cal
O}(\alpha_s)$. Here we dropped all corrections that are power suppressed by
$\Gamma/m$. The one-loop relation between the pole and jet-mass
is~\cite{FHMS2}
\begin{eqnarray} \label{mJmpole}
m_J(\mu) = m_{\rm pole} - \Gamma \frac{\alpha_s(\mu)}{3} \Big[
\ln\Big(\frac{\mu}{\Gamma}\Big) + \frac32 \Big] \,.
\end{eqnarray}
For $\mu=\Gamma$ we have $\delta m_J \simeq 0.26\, {\rm GeV}$, so the
jet-mass is quite close to the one-loop pole mass.
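As a numerical check, the following sketch evaluates Eq.~(\ref{mJmpole}); the
value of $\alpha_s$ at $\mu\sim\Gamma$ is an assumed input, chosen here only to
illustrate the size of the shift:
\begin{verbatim}
import math

# One-loop shift between the pole mass and the jet-mass, Eq. (mJmpole):
#   delta m_J = m_pole - m_J(mu) = Gamma*alpha_s(mu)/3 * [ln(mu/Gamma) + 3/2]
Gamma = 1.43      # GeV
alpha_s = 0.36    # assumed alpha_s(mu ~ Gamma); illustrative only

def delta_mJ(mu):
    return Gamma * alpha_s / 3.0 * (math.log(mu / Gamma) + 1.5)

print(f"delta m_J at mu = Gamma: {delta_mJ(Gamma):.2f} GeV")  # ~0.26 GeV
\end{verbatim}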
Equation~(\ref{mJmpole}) also shows that the jet-mass is substantially
different from the short-distance masses that are employed for $t\bar
t$-threshold analyses~\cite{Hoang:2000yr}, where $\delta m\sim
\alpha_s^2 m\sim 2\,{\rm GeV}$ is of order the binding energy of the
$t\bar t$ quasi-bound state. Nevertheless, in some of the threshold
mass schemes~\cite{Beneke:1998rk,Uraltsev:1998bk} $\delta m$ is
proportional to a cutoff scale that could in principle be adapted such
that they are numerically close to the jet-mass we are proposing. A
detailed discussion on the impact of switching from the pole to the
jet-mass scheme at the one-loop level and at higher orders will be
given in Refs.~\cite{FHMS2} and \cite{FHMS3}, respectively. We remark
that many other schemes satisfying Eq.~(\ref{massscheme}) can in
principle be defined, but the existence of one such scheme suffices.
However, for any suitable short-distance mass scheme the
renormalization scale in $\alpha_s$ contained in $\delta m$ has to be
equal to the scale $\mu$ used for the computation of the bHQET jet
functions.
The other function that must be modified in the factorization theorem is
$H_m(m,Q/m,\mu_m,\mu_\Delta)$. However this function only depends
logarithmically on $m$, and
\begin{eqnarray}
\ln\Big(\frac{m_{\rm pole}}{\mu}\Big) = \ln\Big(\frac{m_J}{\mu}\Big)
+ {\cal O}\Big(\frac{\alpha_s\Gamma}{m_J}\Big) \,.
\end{eqnarray}
So dropping these perturbatively suppressed power corrections we can simply
replace $m\to m_J$ in $H_m$. We note that any $\mu$ dependence from $m_J(\mu)$
in $H_m$ is also power suppressed.
Thus our final result for the cross-section in terms of the short-distance
jet-mass is
\begin{align}
\label{final-cross}
\left( \frac{d^2\sigma }{dM^2_t\>dM^2_{\bar t}} \right)_{\rm hemi} \!\!\! &=
\sigma_0
\> H_Q(Q,\mu_m) H_m\Big(m_J,\frac{Q}{m_J},\mu_m,\mu\Big) \\
&\times\! \int_{-\infty}^{\infty}\!\!\!d\ell^+ d\ell^-
\> \tilde B_+\Big(\hat s_t- \frac{Q\ell^{+}}{m_J},\Gamma,\mu\Big) \:
\tilde B_-\Big(\hat s_{\bar t} - \frac{ Q\ell^{-}}{m_J},\Gamma,\mu\Big)\:
S_{\rm hemi}(\ell^+,\ell^-,\mu)
\,, \nn
\end{align}
where the running jet-mass is $m_J=m_J(\mu)$.
\subsection{Renormalization-Group Evolution} \label{sec:RGE}
In order to explain the $\mu$-dependence of the factorization theorem in
Eq.~(\ref{final-cross}) we give a brief discussion of the renormalization group
evolution. A more detailed discussion is given in Ref.~\cite{FHMS2}.
Equation~(\ref{final-cross}) depends on two renormalization scales, $\mu_m$ and
$\mu$. The matching scale $\mu_m\sim m$ was the endpoint of the evolution of
the hard function $H_Q(Q,\mu_m)$. From the matching at $m$ we get the dependence
on $\mu_m$ in $H_m$, and from running below $m$ we get an additional dependence
on $\mu$ as well as $Q/m$ (which is discussed in more detail in
Ref.~\cite{FHMS2} and signifies the presence of a cusp term in the anomalous
dimension, see Ref.~\cite{Korchemsky:1991zp}). The $\mu$-dependence in $H_m$ cancels
against the $\mu$-dependence in the bHQET jet functions and the soft function.
To sum the remaining large logarithms we have in principle two choices. We can
either run the Wilson coefficient $H_m$, or we can run the individual functions
$\tilde B_\pm$ and $S$. The first option essentially corresponds to running the bHQET
top pair production current of Eq.~(\ref{JbHQET}), and we will call this method
{\it ``top-down''}. The relation
\begin{equation}
H_m\Big(m,\frac{Q}{m},\mu_m,\mu\Big) = H_m(m,\mu_m) U_{H_m}\Big(\mu_m,\mu,\frac{Q}{m}\Big)
\end{equation}
defines the corresponding evolution factor $U_{H_m}$ that is shown in
Fig.~\ref{fig:theory}. The second option means running the jet functions
$\tilde B_\pm$ and the soft function $S_{\rm hemi}$ independently with the evolution
factors $U_{B_\pm}(\mu,\mu_m)$ and $U_S(\mu,\mu_m)$ respectively, and is also
illustrated in Fig.~\ref{fig:theory}. This running involves
convolutions, such as
\begin{align} \label{Brun}
\mu\frac{d}{d\mu} \tilde B_+(\hat s,\mu) &= \int\!\! d\hat s' \: \gamma_{B_+}(\hat
s-\hat s') \: \tilde B_+(\hat s',\mu) \,,\nn\\
\tilde B_+(\hat s,\mu_m)
& = \int \!\! d\hat s'\: U_{B_+}(\hat s-\hat s',\mu_m,\mu)\: \tilde B_+(\hat s',\mu)
\,,
\end{align}
and analogously for $\tilde B_-$ and $S_{\rm hemi}$. Since this method for the running
usually involves taking the functions $B_\pm$ and $S_{\rm hemi}$ as an input at
the low scale (to avoid the appearance of large logs) we will call this option
{\it ``bottom-up''}. Because the running of $H_m$ is local (i.e.~has no
convolution), this RG evolution only affects the normalization of the cross
section and {\em does not change} the dependence on $s_t$ and $s_{\bar t}$ in a
non-trivial way. This is more difficult to discern from the bottom-up running,
but when the convolutions for $B_\pm$ and $S$ are combined they must become
local. These cancellations are discussed in detail in Ref.~\cite{FHMS2}
where also the full leading log evolution is derived.
Generically, we may wish to run the soft function and jet function to slightly
different low energy scales. Let us examine the case shown in
Fig.~\ref{fig:theory} where we run the soft function to $\mu_\Delta$, but run
the bHQET jet functions to a slightly lower scale $\mu_\Gamma$. (The opposite
case could of course also be realized.) In this case the running
is local up to the scale $\mu_\Delta$, and below this scale we have convolution
running for $B_\pm$. Using Eq.~(\ref{Brun}) the factorization formula for split
low energy renormalization scales is
\begin{align}
\label{bHQETcross-hem2}
\frac{d^2\sigma }{dM_t^2\>dM^2_{\bar t}} &=
\sigma_0
\> H_Q(Q,\mu_m) H_m\Big(m_J,\frac{Q}{m_J},\mu_m,\mu_\Delta\Big)\!\! \\
&\times
\int_{-\infty}^{\infty}\!\!\!\!\! d\hat s_t'\: d\hat s_{\bar t}' \:
\: U_{B_+}(\hat s_t\ensuremath{\! - \!} \hat s_t',\mu_\Delta,\mu_\Gamma)
\: U_{B_-}(\hat s_{\bar t}\ensuremath{\! - \!} \hat s_{\bar t}',\mu_\Delta,\mu_\Gamma)
\nn\\
&\times
\int_{-\infty}^{\infty}\!\!\!d\ell^+ d\ell^-
S_{\rm hemi}(\ell^+,\ell^-,\mu_\Delta)
\> \tilde B_+\Big(\hat s_t' - \frac{Q\ell^{+}}{m_J},\Gamma,\mu_\Gamma\Big) \:
\tilde B_-\Big(\hat s_{\bar t}' - \frac{Q\ell^{-}}{m_J},\Gamma,\mu_\Gamma\Big)
\,, \nn
\end{align}
where parametrically $\mu_\Delta \sim \mu_\Gamma$ and here we take
$m_J=m_J(\mu_\Gamma)$. In this paper we will use common low energy
scales for our numerical analysis, as shown in
Eq.~(\ref{final-cross}), and leave the discussion of the more general
case in Eq.~(\ref{bHQETcross-hem2}) to Ref.~\cite{FHMS2}.
\subsection{Thrust and Other Event Shape Variables} \label{sec:eventshapes}
Starting from the two-dimensional distribution, $d^2\sigma/dM_t^2dM_{\bar t}^2$
in Eq.~(\ref{final-cross}) it is straightforward to derive results for other
event shape variables. For example, for the thrust $T$ defined in
Eq.~(\ref{thrust-1}), we have $1-T=(M_t^2+M_{\bar t}^2)/Q^2$ which follows using
Eq.~(\ref{tauss}) with Eqs.~(\ref{massshell}) and (\ref{tau}). Inserting the
identity
\begin{align}
1 = \int\!\! dT\ \delta\Big(1-T - \frac{M_t^2+M_{\bar t}^2}{Q^2}\Big)
\end{align}
into Eq.~(\ref{final-cross}) and integrating over $M_t^2$ and $M_{\bar t}^2$ we find
\begin{align} \label{Tfactorization}
\frac{d\sigma}{dT} &=
\sigma_0^H(\mu)
\int_{-\infty}^{\infty}\!\!\!ds_t\: ds_{\bar t}\:
\> \tilde B_+\Big( \frac{s_t}{m_J},\Gamma,\mu\Big) \:
\tilde B_-\Big( \frac{s_{\bar t}}{m_J},\Gamma,\mu\Big)\:
S_{\rm thrust}\Big(1-T-\frac{(2m_J^2 + s_t+s_{\bar t})}{Q^2},\mu\Big)
\,,
\end{align}
where $\sigma_0^H(\mu) = \sigma_0 H_Q(Q,\mu_m) H_m(m_J,Q/m_J,\mu_m,\mu)$.
Here the thrust soft-function is simply a projection of the hemisphere soft function,
\begin{align} \label{Sthrust}
S_{\rm thrust}(\tau,\mu) & = \int_0^\infty\!\!\! d\ell^+\: d\ell^-
\delta\Big(\tau - \frac{(\ell^+ + \ell^-)}{Q}\Big) S_{\rm hemi}(\ell^+,\ell^-,\mu)
\\
&= \frac{1}{N_c} \sum_{X_s} \delta\Big(\tau - \frac{k_s^{+a}+k_s^{-b}}{Q}\Big)
\langle 0| \overline {Y}_{\bar n}\, {Y}_n (0) |X_s \rangle
\langle X_s| {Y}^\dagger_n\, \overline {Y}_{\bar n}^\dagger (0) |0\rangle
\nn \,.
\end{align}
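The projection in the first line of Eq.~(\ref{Sthrust}) is straightforward to
carry out numerically. In the sketch below the soft-function model is a purely
illustrative factorized exponential, not the fitted model used later in the
paper; doing the $\ell^-$ integral with the delta function leaves a single
one-dimensional integral:
\begin{verbatim}
import numpy as np

# S_thrust(tau) = \int dl+ dl- delta(tau - (l+ + l-)/Q) S_hemi(l+, l-)
#              = Q \int_0^{Q tau} dl+  S_hemi(l+, Q*tau - l+)
Q = 745.0  # GeV

def S_hemi(lp, lm, Lam=0.55):
    # illustrative model: factorized exponentials, normalized to unit area
    return np.exp(-(lp + lm) / Lam) / Lam**2

def S_thrust(tau, n=400):
    lp = np.linspace(0.0, Q * tau, n)
    return Q * np.trapz(S_hemi(lp, Q * tau - lp), lp)

print([round(S_thrust(t), 2) for t in (0.001, 0.002, 0.005)])
\end{verbatim}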
Another well known distribution, which is also frequently analyzed for massless
jets, is the heavy jet mass. It can be defined by the dimensionless variable
\begin{align} \label{rho}
\rho = \frac{1}{Q^2}\: {\rm Max} \big\{ M_t^2, M_{\bar t}^2 \big\} \,.
\end{align}
Using the same steps as above for $\rho$, the factorization theorem for top
initiated jets is
\begin{align}\label{HJMfactorization}
\frac{d\sigma}{d\rho} &= \sigma_0^H(\mu) \int_{-\infty}^{\infty}\!\!\!ds_t\:
ds_{\bar t}\: \> \tilde B_+\Big( \frac{s_t}{m_J},\Gamma,\mu\Big) \: \tilde
B_-\Big( \frac{s_{\bar t}}{m_J},\Gamma,\mu\Big)\:
S_{\rm HJM}(\rho-\frac{m_J^2}{Q^2},s_t,s_{\bar t}) \,,
\end{align}
where the relevant soft-function is
\begin{align}
S_{\rm HJM}(\rho,s_t,s_{\bar t}) &= \! \int_0^\infty\!\!\!\! d\ell^+\: d\ell^-\:
\delta\Big(\rho - \frac{1}{Q^2}
{\rm Max}\big\{ Q\ell^+ \ensuremath{\! + \!} s_t, Q\ell^- \ensuremath{\! + \!} s_{\bar t}\big\} \Big)
S_{\rm hemi}(\ell^+,\ell^-,\mu) \,.
\end{align}
Factorization theorems for other event shapes that are related to
$d^2\sigma/dM_t^2 dM_{\bar t}^2$ can be derived in an analogous manner. As
should be obvious from the definitions of thrust and the heavy jet mass
distribution in Eqs.~(\ref{Tfactorization}) and (\ref{HJMfactorization}), these
event shape distributions are also characterized by a peak at shape parameter
values that are sensitive to the short-distance top-quark mass. It is therefore
possible to use these event shapes to measure the top-mass with a precision
comparable to the invariant mass distribution discussed in the previous
subsection. A brief numerical analysis of the thrust distribution is given in
Sec.~\ref{section40}.
\section{Analysis of the Invariant Mass Distribution}\label{section4}
\subsection{A Simple Leading Order Analysis}\label{section40}
The main result of this paper is the formula in Eq.~(\ref{final-cross}) for the
double invariant mass distribution with a short distance top-quark mass suitable
for measurements using jets. In this section we discuss the implications of
Eq.~(\ref{final-cross}) for top-mass measurements. For convenience we rewrite
the cross-section in terms of dimension one invariant mass variables
\begin{align} \label{sigmaMM}
\frac{ d^2\sigma }{dM_t\, dM_{\bar t}}
& = \frac{4 M_t M_{\bar t} \: \sigma_0^H}{ (m_J\Gamma)^2}\
F(M_t,M_{\bar t},\mu) \,,
\end{align}
where $\sigma_0^H= \sigma_0 H_Q(Q,\mu_m) H_m(m_J,Q/m_J,\mu_m,\mu)$ is the
cross-section normalization factor with radiative corrections, $Q$ is the
c.m. energy, and we have defined a dimensionless function
\begin{align} \label{F}
F(M_t,M_{\bar t},\mu) &= (m_J\Gamma)^2\!
\int_{-\infty}^\infty\!\!\!\! d\ell^+\, d\ell^-
\tilde B_+\Big(\hat s_t - \frac{Q\ell^+}{m_J}, \Gamma,\mu \Big)
\tilde B_-\Big(\hat s_{\bar t} - \frac{Q\ell^-}{m_J}, \Gamma,\mu\Big) S_{\rm
hemi}(\ell^+,\ell^-,\mu)
.
\end{align}
In terms of $M_t$ and $M_{\bar t}$ the variables $\hat s_{t,\bar t}$ are
\begin{align} \label{ssM}
\hat s_t = 2 M_t -2 m_J \,,\qquad\quad
\hat s_{\bar t} = 2 M_{\bar t} -2 m_J
\,,
\end{align}
up to small $\Gamma/m$ power corrections. In Eqs.~(\ref{sigmaMM}-\ref{ssM}) the
jet hemisphere invariant masses are $M_t$ and $M_{\bar t}$ and the
short-distance top-quark mass that we wish to measure is $m_J$. In
$d^2\sigma/dM_tdM_{\bar t}$ the function $F$ dominates the spectrum, while $4
M_t M_{\bar t}\, \sigma_0^H/(m_J\Gamma)^2$ acts as a normalization constant
(since $M_t M_{\bar t}$ is essentially constant in the peak region of interest).
A measurement of the normalization is not optimal for determining $m_J$; it only
has logarithmic dependence on the short-distance mass, and has larger
theoretical uncertainties. On the other hand, the spectrum is very sensitive to
$m_J$, so henceforth we focus on $F(M_t,M_{\bar t},\mu)$.
From Eq.~(\ref{F}) $F$ is given by the convolution of the computable $\tilde
B_\pm$ functions, with a non-perturbative hemisphere soft-function, $S_{\rm
hemi}$, that describes soft final-state radiation. The majority of the
important features of Eq.~(\ref{F}) can be explained without discussing
perturbative corrections, so we focus here on the leading order result. From
Eq.~(\ref{Bpmtree}), $\tilde B_\pm$ are simply Breit-Wigners at leading order,
\begin{align} \label{Bpmtree2}
\tilde B_+(\hat s_t) &= \frac{1}{\pi (m_J\Gamma)}\: \frac{1}{(\hat
s_t/\Gamma)^2+1} \,,
&
\tilde B_-(\hat s_{\bar t}) &=
\frac{1}{\pi (m_J\Gamma)}\: \frac{1}{(\hat s_{\bar t}/\Gamma)^2+1} \,.
\end{align}
For our numerical analysis we use the two-loop standard model prediction for the
top-width $\Gamma=1.43\,{\rm GeV}$~\cite{Czarnecki:1998qc} and we take the short
distance jet-mass to be fixed at $m_J=172\,{\rm GeV}$. As demonstrated in
Secs.~\ref{sectionefts} and \ref{section3}, $S_{\rm hemi}$ is the same
function that controls the soft radiation for massless dijets, which was studied
in Refs.~\cite{Korchemsky:1998ev,Korchemsky:1999kt,Korchemsky:2000kp}. Hence,
it is convenient for our analysis to adopt the model used to fit the massless
dijet data~\cite{Korchemsky:2000kp},
\begin{align} \label{SM1}
S_{\rm hemi}^{\rm M1}(\ell^+,\ell^-) = \theta(\ell^+)\theta(\ell^-)
\frac{ {\cal N}(a,b) }{\Lambda^2}
\Big( \frac{\ell^+\ell^-}{\Lambda^2}\Big)^{a-1} \exp\Big(
\frac{-(\ell^+)^2-(\ell^-)^2-2 b \ell^+\ell^-}{\Lambda^2} \Big).
\end{align}
Here the normalization constant ${\cal N}(a,b)$ is defined so that $\int d\ell^+
d\ell^- S(\ell^+,\ell^-) = 1$, the parameter $\Lambda \sim \Lambda_{\rm QCD}$
sets the scale for $\ell^\pm$ and hence the soft radiation, and the parameter
$a$ controls how fast the soft-function vanishes at the origin. The
dimensionless parameter $b>-1$ controls the correlation of energy flow into the
two hemispheres. Any $b\ne 0$ implies cross-talk between the two hemispheres. A
fit to the heavy jet mass distribution using $e^+e^-$ dijet data from LEP and
SLD with $Q=m_Z$ gives~\cite{Korchemsky:2000kp}
\begin{align}\label{abL}
a=2 \,, \qquad\quad
b=-0.4 \,, \qquad\quad
\Lambda=0.55\,{\rm GeV} \,.
\end{align}
These values were shown to yield accurate predictions for the heavy jet-mass and
$C$-parameter event shapes for a wide range of energies, $Q=35$--$189\,{\rm
GeV}$~\cite{Korchemsky:2000kp}, as well as available thrust distributions with
$Q=14$--$161\,{\rm GeV}$~\cite{Korchemsky:1999kt}. We adopt Eq.~(\ref{abL}) as
the central values for our analysis, but will also discuss how our predictions vary
with changes to these model parameters.
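To make this leading-order construction fully explicit, the following sketch
(Python; grid sizes and integration ranges are ad hoc choices on our part)
assembles Eqs.~(\ref{F}), (\ref{Bpmtree2}) and (\ref{SM1}) with the parameters
of Eq.~(\ref{abL}) and locates the peak of the diagonal slice
$M_t=M_{\bar t}$ numerically, reproducing the upward shift relative to $m_J$
visible in Fig.~\ref{fig:plot3D}:
\begin{verbatim}
import numpy as np

mJ, Gamma, Q = 172.0, 1.43, 745.0     # GeV
a, b, Lam = 2.0, -0.4, 0.55           # Eq. (abL)

def BW(shat):                          # Eq. (Bpmtree2)
    return 1.0 / (np.pi * mJ * Gamma) / ((shat / Gamma)**2 + 1.0)

l = np.linspace(1e-4, 6.0 * Lam, 300)  # soft momenta l+ and l- (GeV)
LP, LM = np.meshgrid(l, l, indexing="ij")
S = (LP*LM/Lam**2)**(a-1) * np.exp(-(LP**2 + LM**2 + 2*b*LP*LM)/Lam**2)
S /= np.trapz(np.trapz(S, l, axis=1), l)   # unit normalization, Eq. (SM1)

def F(Mt, Mtbar):                      # Eq. (F), with shat = 2(M - mJ)
    st, stbar = 2.0*(Mt - mJ), 2.0*(Mtbar - mJ)
    integrand = BW(st - Q*LP/mJ) * BW(stbar - Q*LM/mJ) * S
    return (mJ*Gamma)**2 * np.trapz(np.trapz(integrand, l, axis=1), l)

M = np.linspace(mJ - 2.0, mJ + 6.0, 400)
vals = [F(m, m) for m in M]
print(f"diagonal peak at Mt = {M[int(np.argmax(vals))]:.2f} GeV")
# the peak lies above mJ = 172 GeV, as discussed in the text
\end{verbatim}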
\begin{figure}[t!]
\centerline{
\includegraphics[width=10cm]{plot3Db.eps}
}
\caption{Plot of $F(M_t,M_{\bar t})$, which is the
double differential hemisphere invariant mass
cross-section $d^2\sigma/dM_t dM_{\bar t}$ in units of
$4 \sigma_0^H/\Gamma^2$. The observed peak position (intersection of
the magenta
lines) is not given by the true top-quark mass, $m=m_J=172\,{\rm GeV}$ (red
lines). This peak shift depends on the energy $Q$, the width $\Gamma$, and the
soft-radiation function. The result is shown for $Q/m_J=4.33$ and the
parameters in Eq.~(\ref{abL}). }
\label{fig:plot3D}
\end{figure}
In Fig.~\ref{fig:plot3D} we plot $F(M_t,M_{\bar t})$ using
Eqs.~(\ref{Bpmtree2}-\ref{abL}) and taking $Q= 745\,{\rm GeV}$. The key feature
to note is that the observed peak position {\em is not given by} the
short-distance top-quark mass $m_J$, but is instead shifted upward by $\simeq
1.5\,{\rm GeV}$. The positive sign of this shift is a prediction of
Eq.~(\ref{F}) irrespective of the choice of parameters. The precise value for
this shift depends on $Q/m_J$, $\Gamma$, as well as the parameters of the soft
function. A less obvious feature of Fig.~\ref{fig:plot3D} is that the width of
the observed peak has also increased beyond the width $\Gamma$ of
Eq.~(\ref{Bpmtree2}). Physically, the reason for this behavior is that soft
radiation contributes to the invariant masses, while the Breit-Wigner is {\em
only} a leading order approximation for the spectrum of the top-quark and
accompanying collinear gluons. Thus the arguments of $\tilde B_\pm$ in
Eq.~(\ref{F}) subtract the dominant soft momentum component from $\hat s_{t,\bar
t}$. If we approximate $S_{\rm hemi}(\ell^+,\ell^-)$ as a very narrow Gaussian
centered at $\ell^\pm=\ell_0^\pm$, then the observed peak simply occurs at
$M_{t,\bar t} \sim m_J + Q\ell^\pm_0 /(2m_J)$. Although this model is too naive,
we demonstrate in the next section that the linear dependence of the peak shift
on $Q/m_J$ is in fact generic and independent of the soft-function parameters.
The peak width also increases linearly with $Q/m_J$.
The presence of the shift is due to the inclusion of soft radiation in the
definition of the invariant masses $M_t$ and $M_{\bar t}$. Although we adopted a
hemisphere mass definition, the same type of shift will be present for any jet
algorithm that groups all the soft radiation into the jets identified for the
top and antitop decay products, as we discuss in Sec.~\ref{sectionotheralgo}.
The numerical analysis performed in this section applies equally well to these
situations, though the appropriate definition and model for the soft
functions $S$ for such analyses will in general be different than that in
Eq.~(\ref{SM1}) with Eq.~(\ref{abL}). We are not aware of studies
where models for such soft functions were discussed.
It is important to emphasize that the shift of the observed peak
position away from $m_J$ is not an artifact of the
mass-scheme. At the order used to make Fig.~\ref{fig:plot3D} we could set
$m_J=m^{\rm pole}$ since as explained in Sec.~\ref{sec:sdmass} they differ by
${\cal O}(\alpha_s \Gamma)$.\footnote{In general use of $m^{\rm pole}$ is not a
good idea, since in fits it would induce an unphysical change in the required
parameters $a,b,\Lambda$ order-by-order in perturbation theory.} In a generic
short distance top-quark jet-mass scheme there is a small shift $\sim
\alpha_s\Gamma$ in the peak position due to perturbative corrections in the
matrix element defining $\tilde B_\pm$ (as discussed in detail in
Ref.~\cite{FHMS2}). In Sec.~\ref{sec:sdmass} we defined $m_J$ using a
jet-mass scheme which keeps the peak of $\tilde B_\pm$ fixed order-by-order in
perturbation theory. In this scheme the shift in the peak location relative to
the short-distance mass is entirely due to the non-perturbative soft radiation.
\begin{figure}[t!]
\centerline{
\includegraphics[width=10cm]{Thrust.eps}
}
\caption{Plot of the thrust distribution, $d\sigma/dT$ in units of $\sigma_0^H$, for
top-initiated events in the peak region. We use $Q/m_J=5$, $m_J=172\,{\rm GeV}$ and
the soft function parameters in Eq.~(\ref{abL}). }
\label{fig:thrust}
\end{figure}
Although $m_J$ is not determined by the peak-position, the shape of the
cross-section is very sensitive to $m_J$, and hence for precision $\delta m_t
\lesssim 1\,{\rm GeV}$ the top-quark mass should be determined by a fit to $F$
in Eq.~(\ref{sigmaMM}). In Sec.~\ref{sec:eventshapes} factorization theorems
for related event shape variables were derived, including thrust $d\sigma/dT$,
and the heavy-jet mass $d\sigma/d\rho$. These event shapes also exhibit a peak.
They are sensitive to the top-quark mass parameter $m_J$, and can be used for
top-mass measurements. As an example, in Fig.~\ref{fig:thrust} we plot
$d\sigma/dT$ using $Q/m_J=5$, $m_J=172\,{\rm GeV}$, and the parameters in
Eq.~(\ref{abL}). The expected peak in the thrust distribution is at $1-T\simeq
2m^2/Q^2 = 0.08$, and is shifted to the right by $\Delta(1-T)=1.3\times 10^{-3}$
by the soft-radiation. Again the direction of the shift is a prediction, but the
precise amount of the shift depends on the soft-model parameters in
Eq.~(\ref{abL}) as well as $Q/m_J$. An analysis of any other event shape
distributions that are related to $d^2\sigma/dM_t^2dM_{\bar t}^2$ can be made in a
similar fashion.
In Sec.~\ref{section4a} we explore the functional dependence of the peak
shift for $d^2\sigma/dM_tdM_{\bar t}$ in greater detail. In
Sec.~\ref{section4b} we discuss the implications of our results for fits to
determine the short-distance mass.
\subsection{Analysis of the Peak Shift and Broadening } \label{section4a}
In this section we analyze the parameter dependence of the peak shift and
broadening of the width, and demonstrate that they have a linear dependence on
$Q$. The main analysis is carried out assuming that the soft-function model
parameters have been determined from massless jet observables with small
uncertainties and adopting the parameters in Eq.~(\ref{abL}). It is,
however, also instructive to study the dependence of the invariant
mass distribution on variations of the model parameters, anticipating
that the soft function is different when the definition of the invariant
masses is modified (see the discussion in Sec.~\ref{sectionotheralgo}).
We carry out such an analysis near the end of this section.
In Fig.~\ref{fig:plot1dBW}a we plot the peak location, $M_t^{\rm peak}$, for
nine values of $Q$. $M_t^{\rm peak}$ is obtained from the two-dimensional
distribution, and corresponds to the intersection of the magenta lines in
Fig.~\ref{fig:plot3D}. Since $d^2\sigma/dM_tdM_{\bar t}$ is symmetric the value
of $M_{\bar t}^{\rm peak}$ is the same. Note that for $Q\simeq 2m_J$ where the
tops are near $t\bar t$ production threshold, our effective theory expansions do not apply. The
straight blue line in Fig.~\ref{fig:plot1dBW}a is a linear fit to the points
with $Q/m_J \ge 4$, and clearly shows that the peak location grows linearly with
$Q$. In Fig.~\ref{fig:plot1dBW}b we plot the ``Peak Width'', defined as the
full-width at half-max of $d^2\sigma/dM_tdM_{\bar t}$ in the top-variable $M_t$,
while fixing the antitop $M_{\bar t}=M_{\bar t}^{\rm peak}$. The red solid line
is a linear fit for $Q/m_J \ge 4$.
\begin{figure}[t!]
\centerline{
\begin{minipage}{7.2cm}
\includegraphics[width=7cm]{plot2dpeak.eps} \\[15pt]
\hspace{.2cm}\includegraphics[width=7cm]{plot2dwidth.eps}
\end{minipage}
\hspace{0.2cm}
\raisebox{-3.5cm}{\includegraphics[width=9cm]{plot1dBW.eps} }
}
\caption{Effect of a change in $Q$ on the invariant mass distribution. Results on
the left are generated from $d^2\sigma/dM_tdM_{\bar t}$, a) shows the
peak position versus $Q/m_J$, and b) gives the full width
at half-max versus $Q/m_J$.
In c) we show $d\sigma/dM_t$ in units of $2\sigma_0^H/\Gamma$ for
different values of $Q/m_J$. The curves use $m_J=172\,{\rm GeV}$,
$\Gamma=1.4\,{\rm GeV}$, and the parameters in Eq.~(\ref{abL}).}
\label{fig:plot1dBW}
\end{figure}
This figure demonstrates that we also have linear growth with $Q$ for
the width of the measured invariant mass distribution. Note that the
values for the peak position and peak width shown are consistent with
our power counting since $\hat s_{t,\bar t}$ can be order $\Gamma$ as
well as greater than $\Gamma$.
To get a better picture of how the distribution changes with $Q$ we plot the
single invariant mass distribution $d\sigma/dM_t$ in Fig.~\ref{fig:plot1dBW}c.
In particular we plot
\begin{align} \label{F1}
F_1(M_t) &= \frac{2}{\Gamma} \int_{M_{\rm lower}}^{M_{\rm upper}} dM_{\bar t}\
\: F(M_t,M_{\bar t}) ,
\end{align}
which gives $d\sigma/dM_t$ in units of $2\sigma_0^H/\Gamma$. In the numerical
analysis we center the integration interval $[M_{\rm lower},M_{\rm upper}]$ on
$M_{\bar t}^{\rm peak}$ with a size that is twice the measured peak width. Hence
the size of the interval depends on $Q$, but keeps the number of events
collected at each $Q$ constant for the comparison. For different choices of $Q$
we find that the peak position and width of $F_1(M_t)$ behave in an identical
manner to Figs.~\ref{fig:plot1dBW}a,b, including having essentially the same
slopes. In order to keep the area under the curves constant the peak height
drops as $Q$ is increased. Note that for values $Q/m_J\simeq 8$--$10$ the
observed peak location may be as much as $2.0$--$2.5\,{\rm GeV}$ above the value
of the Lagrangian mass $m_J$ one wants to measure. In our analysis $m_J$ is held
fixed as shown by the dashed line in Fig.~\ref{fig:plot1dBW}c.
To gain an analytic understanding of this linear behavior we consider the effect of
$Q$ on the mean of the cross-section, which is a good approximation to the peak
location. Taking the first moment with respect to $\hat s_t/2 =(M_t-m_J)$ over
an interval of size $2L\gg Q\Lambda$ and the zeroth moment in $\hat s_{\bar
t}/2=(M_{\bar t}-m_J)$ gives
\begin{align} \label{mom1}
F^{[1,0]} &\equiv \frac{1}{m_J^2\Gamma^2}
\int _{-L }^{L}\!\!\!\! ds_{t}\, \frac{\hat s_t}{2}
\! \int _{-\infty}^{\infty} \!\!\!\!\! ds_{\bar{t}}\ F(M_t,M_{\bar t})
= \!\! \int_{-\infty}^{\infty}\!\!\!\! d\ell^+\!\!
\int _{-L }^{L}\!\!\!\! ds_{t}\: \frac{\hat s_t}{2} \:
\tilde B_+\Big(\hat s_t \ensuremath{\! - \!} \frac{Q\ell^+}{m_J}\Big)\!
\int_{-\infty}^{\infty}\!\!\!\!\! d\ell^- S_{\rm hemi}(\ell^+,\ell^-)
\nn\\
&\simeq \frac12 \int_{-\infty}^{\infty}\!\!\! d\ell^+
\int _{-L }^{L}\!\!\! d s_{t}\ \Big(\hat s_t + \frac{Q\ell^+}{m_J}\Big)\
\tilde B_+(\hat s_t )
\int_{-\infty}^{\infty}\!\!\! d\ell^- S_{\rm hemi}(\ell^+,\ell^-)
\nn\\
& = \frac{Q}{2m_J} S_{\rm hemi}^{[1,0]} \ .
\end{align}
Thus the mean grows linearly with $Q/m_J$ with a slope determined by the
first-moment of the soft function, $S_{\rm hemi}^{[1,0]}=\int d\ell^+d\ell^-\:
\ell^+ S_{\rm hemi}(\ell^+,\ell^-)$. In the first equality of Eq.~(\ref{mom1})
the $\tilde B_-$ function drops out because we integrate over all $\hat s_{\bar
t}$. The approximation in Eq.~(\ref{mom1}) is that terms of $\sim 1/L$ are
dropped. We can also directly consider the location of the peak in $M_t$, again
integrating over $M_{\bar t}$ for convenience. We use the fact that the
tree-level $\tilde B_+(\hat s_t)$ is symmetric, and solve for $M_t^{\rm
peak}=m_J+\hat s_t^{\rm peak}/2$ by setting
\begin{align}
0 &=\frac{1}{m_J^2\Gamma^2} \int_{-\infty}^\infty\!\!\! d\hat s_{\bar t}\
\frac{dF(M_t,M_{\bar t})}{d\hat s_t}
= \int_{-\infty}^{\infty}\!\!\! d\ell^+
\tilde B_+^\prime\Big(\hat s_t - \frac{Q\ell^+}{m_J}\Big)
\int_{-\infty}^{\infty}\!\!\! d\ell^- S_{\rm hemi}(\ell^+,\ell^-)
\nn\\
&= \int_{-\infty}^{\infty}\!\!\! d\ell^+
\bigg[ \Big(\hat s_t \ensuremath{\! - \!} \frac{Q\ell^+}{m_J}\Big) \tilde B_+^{\prime\prime}(0)
\ensuremath{\! + \!} \frac{1}{3!} \Big(\hat s_t \ensuremath{\! - \!} \frac{Q\ell^+}{m_J}\Big)^3 \tilde B_+^{(4)}(0)
\ensuremath{\! + \!} \ldots \bigg]
\int_{-\infty}^{\infty}\!\!\! d\ell^- S_{\rm hemi}(\ell^+,\ell^-)
\,.
\end{align}
For $Q\Lambda\gg m\Gamma$ we can keep only the first term which yields
\begin{align} \label{MtpeakLinear}
M_t^{\rm peak}\simeq m_J + \frac{Q}{2m_J}\, S_{\rm hemi}^{[1,0]}.
\end{align}
Thus we find the same shift as for the moment in Eq.~(\ref{mom1}). Our
default model in Eq.~(\ref{abL}) gives $S_{\rm
hemi}^{[1,0]}/2=0.31\,{\rm GeV}$ for the slope in $Q/m_J$. This can be
compared with the fit to the two-dimensional peak position,
Fig.~\ref{fig:plot1dBW}a, which gives a slope of $0.26\,{\rm
GeV}$. The fit to the peak position of $F_1(M_t)$ in
Fig.~\ref{fig:plot1dBW}c has a similar slope, $0.25\,{\rm GeV}$. Finally,
the first moments of $F_1(M_t)$ also display linear behavior in
$Q/m_J$ with a slope of $0.28\,{\rm GeV}$. We see that $S_{\rm
hemi}^{[1,0]}/2$ accounts for the largest portion of these slopes,
with the remainder being accounted for by other moments. Note that the
linear behavior in $Q/m_J$ observed in Fig.~\ref{fig:plot1dBW} is much
more accurate than the statement that $S^{[1,0]}_{\rm hemi}/2$
determines the proper slope at lowest order. We want to point out
that a first measurement of the short distance mass could be made
using a couple of different $Q$ values and a simple extrapolation with
Eq.~(\ref{MtpeakLinear}).
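For reference, the first moment of the default model can be evaluated with a
few lines of numerics (a sketch; the integration range is an ad hoc choice):
\begin{verbatim}
import numpy as np

# First moment S^[1,0] = <l+> of the model Eq. (SM1) with Eq. (abL).
a, b, Lam = 2.0, -0.4, 0.55
l = np.linspace(1e-4, 8.0 * Lam, 600)
LP, LM = np.meshgrid(l, l, indexing="ij")
S = (LP*LM/Lam**2)**(a-1) * np.exp(-(LP**2 + LM**2 + 2*b*LP*LM)/Lam**2)
S /= np.trapz(np.trapz(S, l, axis=1), l)
S10 = np.trapz(np.trapz(LP * S, l, axis=1), l)
print(f"S^[1,0]/2 = {S10/2:.2f} GeV")  # ~0.31 GeV, the slope in Q/m_J
\end{verbatim}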
\begin{figure}[t]
\centerline{
\begin{minipage}{8.4cm}
\includegraphics[width=8.4cm]{models.eps} \\[0pt]
\includegraphics[width=8.4cm]{Fmodels.eps}
\end{minipage}
\hspace{0.2cm}
\begin{minipage}{8cm}
\includegraphics[width=7.6cm]{moment1.eps} \\[4pt]
\includegraphics[width=7.6cm]{peak1.eps}
\end{minipage}
}
\caption{
Dependence of the invariant mass distribution on the soft-function
model parameters. In a) we show 9 models with different $a$ and $b$
parameters, and in b) we show the resulting $d\sigma/dM_t$ in units
of $2\sigma_0^H/\Gamma$. In c) we plot the first moment of the
invariant mass distribution, $F^{[1,0]}$, versus the first moment of
the soft-function $\ell^+$, $S^{[1,0]}$. In d) we plot the peak
position of $d\sigma/dM_t$ versus $S^{[1,0]}$. These plots are made
with $Q/m_J=5$ and $m_J=172\,{\rm GeV}$. Note that these scans are
only relevant experimentally if the universality of $S_{\rm hemi}$
with massless dijet events is not used to determine the parameters.}
\label{fig:models}
\end{figure}
Finally we consider the effect on the invariant mass shift from a scan over
model parameters. $F(M_t,M_{\bar t})$ depends on the parameters
\begin{align}
m_J ,\quad \Gamma,\quad \beta = \frac{Q\Lambda}{m_J \Gamma},
\quad a, \quad b \,.
\end{align}
Here the scale $\Lambda$ for the soft-function only shows up along
with $Q/m_J$ in the effective boost parameter $\beta$. To demonstrate
that it is $\beta$ that appears in $F(M_t,M_{\bar t})$, switch
integration variables to $x=\ell^+/\Lambda$ and $y=\ell^-/\Lambda$,
and let $\hat s_{t,\bar t} =z_{t,\bar t} \Gamma$. This yields a soft
function $\Lambda^2 S_{\rm hemi}(\Lambda x,\Lambda y)$ that is
independent of $\Lambda$, and $\tilde B_+(\hat s_t -Q\ell^+/m_J)=
\tilde B_+(\Gamma(z_t- \beta x))$ which is only a function of
$(z_t-\beta x)$ times $(\Gamma m_J)^{-1}$. Hence $F(M_t,M_{\bar t})$
is only a function of $\beta$, $z_{t,\bar t} = (M_{t,\bar
t}-m_J)/\Gamma$, $(m_J\Gamma)$, and the model parameters $a$ and $b$.
Hence changing $\Lambda$ has the same effect as changing
$Q/m_J$. Below we will only consider variations of the model
parameters $a$ and $b$, while keeping $\Lambda=0.55\,{\rm GeV}$.
We generate 9 soft-function models from the intersection of
$a=\{1,2,3\}$ and $b=\{-0.9,0.0,0.9\}$, and in Fig.~\ref{fig:models}a
give the profile of these models by plotting $S(\ell^+)=\int d\ell^-
S_{\rm hemi}(\ell^+,\ell^-)$. Increasing $a$ shifts the distribution
to larger average momenta. For each model the result for the single
invariant mass distribution $F_1(M_t)$ is shown in
Fig.~\ref{fig:models}b with curves of a matching color. We again take
the $M_{\rm \bar t}$ integration interval, $[M_{\rm lower},M_{\rm
upper}]$, to be centered on the measured peak, with a size that is
twice the measured peak width. The peak positions for $F_1(M_t)$ are
ordered in the same manner as the peak positions for the $S(\ell^+)$
models. We note that models with $b=0,0.9$ generate smaller peak
shifts than those with $b\sim -0.9$. To examine the peak shifts more
quantitatively we plot the first moment $F^{[1,0]}$ versus the first
moment $S^{[1,0]}_{\rm hemi}$ in Fig.~\ref{fig:models}c. To compute
$F^{[1,0]}$ we restrict the two integrals in Eq.~(\ref{mom1}) to the
same interval choice $[M_{\rm lower},M_{\rm upper}]$. From the figure
we observe that the means of the invariant mass distributions for
different models fall close to a straight line. In Fig.~\ref{fig:models}d we
plot the peak position $M_t^{\rm peak}$ for each model versus the
first moment $S^{[1,0]}_{\rm hemi}$, and observe that the behavior is
also quite linear. We conclude that the main effects of $a,b$ on the peak
shift are controlled by the first moment parameter $S^{[1,0]}_{\rm
hemi}$.
\subsection{Implications for top-quark mass measurements} \label{section4b}
In this section we take a step back to consider the more general implications of
our method for top-quark mass measurements. In a realistic top-quark mass
analysis at a hadron collider with $pp$ or $p\bar p$ collisions, the set of
issues that affect the accuracy of an $m_t$-measurement and that can potentially be
improved by theoretical progress includes: i) the choice of the observable to be
measured, ii) the top mass definition, iii) hadronization effects, iv) color
reconnection, v) final state radiation, vi) initial state radiation, vii)
underlying events, viii) cuts to remove the beam remnant, and ix) parton
distribution functions. In our analysis we treat $e^+e^-$ collisions which
allows us to investigate strong interaction effects in categories i)-v). We
briefly discuss what our result for $d^2\sigma/dM_tdM_{\bar t}$ implies for
these uncertainties.
The main advantage of the factorization approach is that it keeps careful track
of how changing the observable affects corrections from the other categories.
For example, switching from invariant mass variables to thrust gives a different
function for the non-perturbative soft radiation, but the soft-functions in
these observables are related by Eq.~(\ref{Sthrust}), and one model can be used
to fit both of them. In our analysis, the inclusive nature of the hemisphere
invariant mass observable reduces the uncertainty from hadronization effects. In
particular it yields jet-functions which sum over hadronic states with invariant
mass up to $\sim m\Gamma$, and remain perturbatively computable due to the
low-momentum cutoff provided by the top-width. Final state gluon radiation from
the decay products contributes to the width, while soft-gluon emission in
the c.m. frame organizes itself with radiation from the top-quark to give a
single universal soft-radiation function. Thus, non-trivial color reconnection
effects between the decay products are power suppressed. The level of control
provided by the factorization theorem therefore leads to a significant reduction
in the associated uncertainties. Of course the nature of this control is
observable dependent, and will undoubtedly change in the hadronic collider
environment, in particular, hemisphere masses are not suitable for mass
measurements at the Tevatron or LHC. Nevertheless, we expect the control
provided by the factorization approach to find useful applications in this
case as well. Finally, as discussed in detail in Sec.~\ref{sec:sdmass}, the
inherent theoretical ambiguity in the pole mass, $\delta m^{\rm pole}\sim
\Lambda_{\rm QCD}$, can be avoided by switching to a short-distance jet-mass
$m_J$. This mass definition is suitable for reconstruction measurements in
$e^+e^-$ collisions, and in principle also for $p\bar p$, and $pp$
collisions.
For the case studied in detail here, a measurement of $d^2\sigma/dM_tdM_{\bar
t}$ from energetic top-jets in $e^+e^-$ collisions, there are at least two
ways the result in Eq.~(\ref{sigmaMM}) can be used to fit for the short-distance
mass $m_J$. In the first, one takes the soft-function model parameters $a$, $b$,
and $\Lambda$ from a fit to massless dijet event shapes, and then analyzes
$d^2\sigma/dM_tdM_{\bar t}$ to fit for $m_J$. This method makes use of the
universality of the soft-hemisphere function $S_{\rm hemi}(\ell^+,\ell^-)$
between massive and massless jets. Alternatively, one can vary $Q$ and do a
simultaneous fit to $m_J$, $a$, $b$, and the effective boost parameter $\beta$,
to determine the soft-parameters from the same data used to determine $m_J$.
This may be advantageous if the energy resolution, jet energy scale, or other
experimental effects have non-trivial interactions with the soft radiation that
are particular to $t\bar t$ decays. In the next section we consider how the
factorization theorem is modified by the use of an inclusive $k_T$ algorithm
rather than using hemisphere masses.
To conclude this section we note that in our analysis we have not accounted
for QED effects since in this work we are interested in treating the effects
of the strong interactions. For a realistic description of experimental data
obtained at a future Linear Collider QED effects will of course have to be
included. Effects from QED can contribute to the categories ii), v), vi)
as well as to the QED analogues of categories iv) and ix).
As such the treatment of these QED effects is straightforward and can be
included naturally in our factorization approach.
To be more concrete, the QED analogue of the parton distribution functions
entails accounting for initial state radiation, beamstrahlung and the beam
energy spread through a luminosity spectral function which has to be convoluted
with the QCD cross section. The luminosity spectrum is obtained from analyzing
Bhabha scattering~\cite{Boogert:2002jr}.
The effect of soft photons showing up in the two hemispheres can be
incorporated as additional perturbative contributions in the soft function,
and the effects of collinear photons can be incorporated as additional
perturbative contributions to the jet functions in our factorization theorem.
Finally, the effect of hard photons not aligned with the top and antitop
jets is analogous to the production of additional hard jets, which leads to
contributions in the hemisphere masses away from the resonance region.
Compared to the QCD effects treated in this work the above QED corrections
lead to changes of the invariant mass distribution that are
suppressed by the small QED coupling.
\section{Factorization for Masses Based on Jet Algorithms}
\label{sectionotheralgo}
Up to now we have defined the top and antitop invariant masses as
the invariant masses of all particles in the two hemispheres defined through the
thrust axis of each event, see Fig.~\ref{fig:topjet}. In past experimental
studies, on the other hand, a $k_T$ algorithm was employed so that each event
results in exactly six jets for the all-hadronic decay mode, $e^+e^-\to t\bar
t+X\to 6~\mbox{jets}$~\cite{Chekanov:2002sa,Chekanov:2003cp}. Of these six jets,
three jets were combined to form the top and the other three the antitop invariant
mass. We remind the reader that jet algorithms for $e^+e^-$ collisions do not
need to remove any `beam remnants', so every final state particle of an event is
eventually either assigned to the top or the antitop invariant mass. It is this
crucial aspect of jet algorithms for $e^+e^-$ collisions that makes them share a
number of important properties with the hemisphere invariant masses that we have
analyzed so far. One of these properties is that having both invariant masses
in the peak region close to the top quark mass automatically ensures that
the event is dijet-like, such that the EFT setup discussed in the previous
sections can be applied in the same way as it was for the hemisphere masses.
In this section we show that using a jet algorithm with the property mentioned
above for the top and antitop invariant mass reconstruction, the double
differential top and antitop invariant mass distribution in the peak region can
be written in the factorized form of Eq.~(\ref{final-cross}), but with a
different soft function $S(\ell^+,\ell^-)$ which depends on the jet algorithm.
All other ingredients, the jet functions $B_{\pm}$ and the matching and
evolution factors, are identical. For the proof we assume that the top and
antitop decay jets\footnote{ For this discussion we deal with the case that the
top quarks decay all-hadronically into jets. However, our arguments also work
in principle for final states with leptons plus jets. } obtained from the
jet algorithm can be assigned unambiguously to the top and the antitop, i.e.~we
neglect the combinatorial background. This simplification is possible at leading
order in $m/Q$ because hard jets from the top decay only have a very small
probability of order $(m/Q)^2$ to show up in the hemisphere of the antitop
quark, as was already pointed out in Sec.~\ref{section_hemi}. The analogous
statement is of course also true for hard jets from the antitop decay. Moreover
we assume that the jet algorithm uses simple addition of four-vectors as its
recombination scheme for merging final state objects.
The proof can be carried out in the EFT setups described in
Sec.~\ref{subsectionfactorizationtheorem} and Fig.~\ref{fig:theory}. The crucial
point that has to be shown is that, at leading order in the power counting,
the total $n$-collinear momentum $P_{X_n}$ enters exclusively the top invariant
mass, while the total $\bar n$-collinear momentum enters exclusively the antitop
invariant mass, just as for the hemisphere mass definitions explained in
Sec.~\ref{subsectionmomdecomp}. Furthermore the prescription to determine the
soft function has to be provided for a given jet algorithm. This corresponds to
defining appropriate projection operators $\hat P_a$ and $\hat P_b$ in
Eq.~(\ref{PaPb}) or equivalently the momenta $k_s^a$ and $k_s^b$ for each state
$|X_s\rangle$. Apart from that, the derivation of the factorization theorem goes
along the same lines as for the hemisphere case described in detail in the
previous sections.
Concerning the assignment of $n$- and $\bar n$-collinear momenta it is
easy to see that the top and antitop collinear momenta are attributed
correctly to the top and antitop invariant masses since we can
neglect combinatorial background for the assignment of top and antitop
decay jets at leading order. Assuming for example a $k_T$ jet
algorithm similar to Refs.~\cite{Chekanov:2002sa,Chekanov:2003cp}
where all final state particles are combined to exactly six jets, one
can also conclude that at leading order in $m/Q$ the $n$-collinear
gluons are properly assigned to the top invariant mass, since these
gluons are radiated into the $n$-hemisphere and therefore assigned to
one of the three hard jets from the top quark decay. The analogous
conclusion, of course, also applies to the $\bar n$-collinear
gluons. This shows that the top mass reconstruction based on a jet
algorithm treats $n$- and $\bar n$-collinear momenta essentially in
the same way as the hemisphere method. It also means that the double
differential top and antitop invariant mass distribution based on a
jet algorithm can be derived in complete analogy to the hemisphere
case and has the form shown in Eq.~(\ref{final-cross}). The soft
function depends on the jet algorithm that is employed, and in
particular on the distance measure implemented in the algorithm.
Whether a soft gluon of a given energy ends up contributing to the top
or the antitop invariant mass depends on its relative angles with
respect to the hard jets coming from the top and antitop decay. So
upon averaging over the hard jet-configurations, a soft gluon with a
given energy and a given angle with respect to the thrust axis
contributes either to the top or to the antitop invariant mass
governed by a probability function that is determined by the jet
algorithm. This means that a soft gluon in, say, the
$n$-hemisphere has in general a nonvanishing probability to be
eventually assigned to the antitop invariant mass. The equivalence of
the top-down and the bottom-up approaches to the RG evolution in the
EFT's used to derive the factorization theorem further ensures that
the RG running of the soft function for a given jet algorithm agrees
with the running of the hemisphere soft function $S_{\rm hemi}$,
although their scale-independent terms differ. (We assume that the
jet algorithm is symmetric in its treatment of top and antitop final
states.) The explicit one-loop expressions for the RG running and the
scale-independent contributions of the soft function for a general jet
algorithm and for the hemisphere masses will be given in
Ref.~\cite{FHMS2}.
\section{Conclusions} \label{section6}
The reconstruction of top quark invariant mass distributions is one of the major
methods for measuring the top-mass $m$ at present and future collider
experiments. Using a sequence of effective theories to separate effects at
different mass scales we presented an analytic factorization approach for
the top invariant mass distribution in the peak region. To be definite, we
derived the double differential top/antitop invariant mass distribution
$d^2\sigma/dM_tdM_{\bar t}$ in $e^+e^-$ collisions for c.m.\,energies $Q\gg m$,
where $M_{t,\bar t}$ are defined as the total invariant masses of all particles
in the two hemispheres determined with respect to the event thrust axis. The
factorization formula is given in Eq.~(\ref{final-cross}) and represents the
leading order result in an expansion in $m/Q$ and $\Gamma/m$, where $\Gamma$ is
the top quark total width.
The factorization formula consists of two jet functions for top and antitop
quarks, which depend strongly on the top quark Lagrangian mass, and can be
computed perturbatively order-by-order in $\alpha_s$. It also involves a
nonperturbative soft function that describes the momentum distribution of soft
final state radiation. Using alternative invariant mass prescriptions, for which
the soft particles are assigned differently to $M_t$ and $M_{\bar t}$, the same
factorization formula applies, but with a different soft function. The
observable invariant mass distribution is obtained from a convolution of the
perturbative jet functions with the nonperturbative soft function. Through this
convolution the location of the maximum and the width of the observed distribution
are dependent on the c.m.~energy $Q$. For a lowest order analysis see
Figs.~\ref{fig:plot3D} and \ref{fig:plot1dBW}, and the accompanying discussion.
A very important outcome of the derivation is that the soft function for the
hemisphere mass prescription also governs event shape distributions for massless
dijet events for which plenty of data has been collected at LEP and previous
$e^+e^-$ experiments. Since the soft function can be determined from these data,
it is possible to predict the top invariant mass distribution based on the
hemisphere prescription as a function of the c.m.~energy $Q$, the strong
coupling $\alpha_s$ and the Lagrangian top mass in different mass schemes
without hadronization uncertainties at leading order in the expansion in
$m/Q$, $\Gamma/m$ and $\Lambda_{\rm QCD}/m$. In principle, this allows one to
measure a short-distance top quark mass from reconstruction with a precision
better than $\Lambda_{\rm QCD}$.
We have proposed a new short-distance mass scheme called {\it top quark jet
mass} which can be measured with minimized theoretical uncertainties from data
obtained at a future Linear Collider and which can be reliably related to other
known short-distance masses such as the threshold or the $\overline{\mbox{MS}}$
masses. We also expect that, quite generally, the jet-mass scheme will provide an
appropriate mass scheme for jet related observables involving heavy quarks.
The factorization approach developed in this work can be applied to
determine the top quark mass from reconstruction of the
top/antitop quark invariant mass distributions at a future $e^+e^-$ Linear
Collider. At present the most precise method to measure the top quark mass at
a future Linear Collider is the threshold scan method. It relies on the
determination of the hadronic $R$-ratio for c.m.\,\,energies around twice the
top mass and will provide a short-distance top quark mass measurement with
theoretical and experimental uncertainties at the level of $100$~MeV. Compared
to the measurement of the $R$-ratio for the threshold scan, the reconstruction
of the top/antitop invariant mass distribution is without doubt substantially
more complicated. But it has the advantage that it can be carried out at any
c.m.\,\,energy above threshold and that substantially more luminosity can be
spent on it. Given that our factorization approach allows us to control the
perturbative and nonperturbative effects contributing to the invariant mass
distributions we believe that it could eventually become a method that
competes with the threshold scan.
The factorization ideas proposed in this work can be applied to mass
distributions of other final state particles produced in $e^+e^-$ collisions
in a straightforward manner. Notable examples include single top quark
production, the production of $W$ bosons or of new heavy colored unstable
particles such as squarks or gluinos in certain supersymmetric new physics
scenarios. They will also be relevant for predicting invariant mass
distributions at hadron colliders. However, at hadron colliders there are
additional complications that still need to be
resolved as discussed in Sec.~\ref{section4b}. These include initial
state radiation and the incorporation of parton distribution functions, which lead to a
distribution for $Q$ and require modifications of the concept of event shapes,
the large $p_T$ cuts needed to get clear signals away from the beam remnant, and
the effects of underlying events, which need to be taken into account. Finally,
the algorithm for defining and measuring the invariant mass of jets that contain
the top-decay products is different in the LHC environment. We plan to address
these issues in future work.
\acknowledgments
We would like to thank A.~Juste, S.~Kluth, S.~Menke, and M.~Wise
for helpful discussions. We also thank C.~Bauer and M.~Dorsten for collaboration
in an early stage of this work. S.F. and S.M. thank the visitor program of the
Max-Planck-Institute for Physics for support. This work was supported in part by
the Offices of Nuclear and Particle Physics of the U.S.\ Department of Energy
under DE-FG02-94ER40818, DE-FG03-92ER40701, and DE-FG02-06ER41449, and in part
by the EU network contract MRTN-CT-2006-035482 (FLAVIAnet). I.S. and S.F.~were
supported in part by the DOE OJI program, and I.S.~was supported in part by the
Sloan Foundation.
\section{Application}
\label{sec:app}
Exploring image collections has been one of the main applications for distance preserving grids, with a positive impact on image retrieval tasks~\cite{similarity2011schoeffmann}. In this section, we present a strategy using DGrid to explore photo collections that allows users to control the semantics of the similarity between images and to navigate collections at different levels of detail.
Figure~\ref{fig:appoverview} overviews our approach. First, the photo collection is processed to extract different sets of features $f_1, f_2, \ldots, f_k$, representing different elements of a photo, such as color, the presence of objects, and so on. From each set of features we extract samples of photos $s_1, s_2, \ldots, s_k$ that are joined to compose our complete sample $S = s_1 \cup s_2 \cup \ldots \cup s_k$. For sampling, we use the k-means technique to cluster the data, getting the medoid of each cluster as a sample. By drawing samples using the different sets of features, we seek to guarantee that we have images that contain the different traits captured by the different features. After that, projections ${p'}_1, {p'}_2, \ldots, {p'}_k$ are generated considering the images in $S$ but using the different types of features ${f'}_1, {f'}_2, \ldots, {f'}_k$. The input projection for building the sample grid is then created as a convex combination of these projections, that is, $P' = \alpha_1 {p'}_1 + \alpha_2 {p'}_2 + \ldots + \alpha_k {p'}_k$, where $\sum \alpha_i = 1$. By changing $\alpha_1, \alpha_2, \ldots, \alpha_k$ users can control the contribution of each projection to the combined projection, implicitly controlling the importance of each type of feature to the semantics of the employed similarity. After defining the most appropriate weights for the convex combination, the projection $P$ used to create the complete grid with all photos is defined as a combination of projections ${p}_1, {p}_2, \ldots, {p}_k$ of the complete dataset but considering the different features, that is, $P = \alpha_1 {p}_1 + \alpha_2 {p}_2 + \ldots + \alpha_k {p}_k$.
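A minimal sketch of the combination step (in Python; the function name and placeholder data are ours, and we assume the per-feature projections have been normalized to a common coordinate range beforehand so that the convex combination is meaningful):
\begin{verbatim}
import numpy as np

# Convex combination of per-feature projections, P = sum_i alpha_i * p_i.
# Each p_i is an (n x 2) array holding 2D coordinates of the same n images.
def combine_projections(projections, alphas):
    alphas = np.asarray(alphas, dtype=float)
    alphas = alphas / alphas.sum()       # enforce sum(alpha_i) = 1
    return sum(a * p for a, p in zip(alphas, projections))

# usage with four feature types (e.g. color, texture, borders, objects)
rng = np.random.default_rng(0)
projs = [rng.normal(size=(800, 2)) for _ in range(4)]  # placeholders
P = combine_projections(projs, alphas=[0.5, 0.1, 0.1, 0.3])
\end{verbatim}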
\begin{figure}[htb]
\centering
\includegraphics[width=.9\linewidth]{Figs/appoverview3.pdf}
\caption{Overview of our strategy to compose a photo grid that allows users to control the semantics of the similarity between images. Based on a small sample, users can interactively combine different features, seeking the combination that best matches their point of view regarding similarity. This combination is then propagated to the entire dataset to compose the complete grid, which can then be explored at different levels of detail.}\label{fig:appoverview}
\vspace{-0.4cm}
\end{figure}
Here, we use the Photographer dataset~\cite{ThomasK15}, composed of $180,193$ photos taken by $41$ well-known photographers. We extract $4$ different sets of features from this dataset, representing: (1) color, using LaB color features; (2) texture, using Gabor filters~\cite{Chen:2004}; (3) borders, using the HoG technique~\cite{Dalalhog}; and (4) the presence of objects, using a pre-trained convolutional neural network, called CaffeNet~\cite{Jia2014}, that classifies images into $1,000$ different object categories. Also, we create a set of features to represent the similarity between photographers using an external source. We download Wikipedia articles about each photographer and construct a bag-of-words vector representation for each. All photos of the same photographer are then represented by his/her vector representation. Consequently, the similarity among photos is defined as the similarity between texts describing the photographers.
In this paper, to create the sample $S$ we get $200$ photos for each different set of features, ensuring we have at least $5$ photos of each photographer ($800$ photos in total). Figure~\ref{fig:teaser} shows the resulting sample grid. In this grid, the similarity mostly reflects the color and a small amount of information about the photographers and the objects contained in the photos. To visually support this combination, we develop a widget, shown in the bottom-right of the figure, inspired by the idea presented in~\cite{10.1016/j.neucom.2014.07.072}. Using this widget, $\alpha_1, \alpha_2, \ldots, \alpha_k$ are defined based on the closeness of the ``orange'' dial and the anchors representing each feature. To help the perception of the weights, we change the transparency level of the anchors and fonts accordingly. Since the similarity between images is in the eye of the viewer, allowing users to control the semantics of the employed similarity based on combinations of different features provides a powerful mechanism.
We combine projections instead of features to allow a real-time exploration of the different combinations. In this strategy, the projections for each set of features are calculated once, and only the grid is re-derived when the combination is changed. Since grids are built from projections almost instantly for reasonable amounts of data (see Figure~\ref{fig:time}), our approach allows exploration at interactive rates. If we opted to combine features instead, a new projection would have to be produced after each change of the combination, and, currently, there are no good quality projection techniques that are fast enough to attain interactive rates. Notice that this application is only possible given the high efficiency of DGrid in deriving grids in real-time, a trait not found in the current state-of-the-art techniques in distance preserving grid layouts.
Once the proper combination has been defined from the users' point of view, a grid representing the complete photo collection is constructed. Considering the dataset size and setting $\Delta=11/8.5$ to match the aspect ratio of the visual area (paper size), the resulting photo grid has $482$ rows and $374$ columns. Since this is too much information to present at once, we allow compressing the grid. In this process, we convolve the grid with an $R \times S$ mask, merging the covered cells into one single cell, thus dividing the number of rows by $R$ and the number of columns by $S$. In the compact layout, each cell is represented by the photo closest to the center of the $R \times S$ mask. Figure~\ref{fig:photogrid} presents the resulting compressed photo grid. In this example, we use a $5 \times 5$ mask, resulting in a grid with $96$ rows and $75$ columns.
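A sketch of the compression step (in Python; taking the central cell of each block as its representative is a simplification of ``closest to the center'', and the handling of leftover rows and columns, e.g.\ ceiling division, which would yield the $96 \times 75$ grid quoted above, is an implementation choice):
\begin{verbatim}
import numpy as np

# Compress a (rows x cols) grid of photo indices with an R x S mask:
# each R x S block collapses into one cell, represented by one photo.
def compress_grid(grid, R, S):
    rows, cols = grid.shape
    out = np.empty((rows // R, cols // S), dtype=grid.dtype)
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            block = grid[i*R:(i+1)*R, j*S:(j+1)*S]
            out[i, j] = block[R // 2, S // 2]  # central representative
    return out

grid = np.arange(482 * 374).reshape(482, 374)
print(compress_grid(grid, 5, 5).shape)   # (96, 74) with truncation
\end{verbatim}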
\begin{figure*}[]
\centering
\includegraphics[width=\linewidth]{Figs/mask5weighted3small.png}
\caption{Photo grid of the Photographers dataset. Since a considerably larger weight is assigned to the color features (see Figure~\ref{fig:teaser}), a clear global separation between gray and colored photos can be observed. We also add the presence of objects and photographer style into the feature combination, so locally these features influence, up to an extent, the grid organization, as noted in the zoomed-in part in the top-right corner.}\label{fig:photogrid}
\end{figure*}
The mask $R \times S$ defines the level of detail of the compact representation. Changing its size allows navigating the photo collection at different levels of abstraction, from a coarse representation to a more detailed view. Another possibility of navigation is to allow users to select a particular photo, expanding the compressed grid to show all the photos it represents. In this case, by expanding all the cells belonging to the same row or same column of the selected photo, we define a multilevel process that preserves the context. In~\cite{isomatch2015fried} a similar application was presented. However, they use a hierarchical clustering approach to group the instances, showing representatives of the groups. The user can then navigate by clicking on the representatives, displaying new grids containing the elements (or representatives) inside the groups, therefore losing the context whenever a zoom-in operation is executed.
\section{Discussion and Limitations}
\label{sec:discussion}
The space partition strategy presented in Algorithm~\ref{alg:dgrid} shares similarities with the kd-tree technique~\cite{kdtree1975bentley}. In both cases, the recursive process of bisecting the space into partitions and sub-partitions constructs complete binary trees. The difference is that the kd-tree only considers the spatial position of the points in this process, whereas our approach also incorporates the number of rows and columns. As a result, our technique can create grids with an arbitrary number of rows and columns, while the kd-tree can only create square power-of-two grids~\cite{ssm2014strong}. The same applies to the NMAP technique~\cite{nmap2014duarte}, as discussed in Section~\ref{sec:related}.
Regarding the computational complexity, DGrid has two distinct steps: the projection and the space partition. Different techniques can be used in the first step, with different complexities. Here we use the t-SNE, which is O($N^2$), and the LAMP, which is O($\min\{m^2N^\frac{3}{2}, mN^2\}$), where $m$ is the number of dimensions of the dataset. Recall that every time a partition is bisected, its instances are sorted according to the $x$ or $y$ coordinates. If an O($N \log N$) sorting algorithm is used, the computational complexity of the space partition step is O($N \log^2 N$). Therefore, the overall computational complexity is dominated by the projection process, which is confirmed by the running times (see Figure~\ref{fig:time}). If a faster approach is required, changing the projection technique is an option, for instance, PLMP~\cite{paulovich2010plmp} is O($N$), but the quality of the obtained grids will probably be penalized.
The way DGrid was conceived only allows it to generate orthogonal regular grids. KS and IsoMatch are more flexible. Since they are based on assignment processes, they can map data instances into non-orthogonal domains. However, they are computationally expensive, O($N^3$). For orthogonal grids, DGrid is much faster, attaining similar or even better results considering different quality metrics, rendering it a very attractive technique for processing large datasets.
Finally, in our space partition process, we consider a simple splitting method, dividing the space so that the resulting partitions contain approximately half of the instances. Defining a better way to partition the space is an aspect that deserves to be investigated more deeply, for instance, guiding the partition according to the distribution of the points on the plane. However, finding the best partitioning considering both the data distribution and the grid dimension constraint is not a trivial task.
\section{Conclusion}
\label{sec:conclusion}
In this paper, we proposed a novel approach for generating grid layouts that preserve distance information, called \textit{Distance-preserving Grid (DGrid)}. DGrid is a two-step approach that combines a projection technique with an assignment strategy to create orthogonal regular grids. The set of comparisons we provide shows that DGrid outperforms the existing state-of-the-art techniques considering different quality metrics, being almost two orders of magnitude faster than the fastest existing technique. The quality of the produced layouts combined with the low computational cost render DGrid one of the most attractive methods for generating distance preserving grid layouts.
\section{Introduction}
\label{sec:intro}
Distance preserving visualization techniques comprise a family of strategies that seek to map data instances into graphical elements so that the pairwise distances calculated between instances are preserved as much as possible between graphical elements. Among the existing strategies, the multidimensional projections have emerged as one of the fundamental tools for data analysis~\cite{lamp2011joia}. Through the definition of a proper function to compute the distance between instances, projection techniques produce layouts where the dissimilarity patterns are ``visible,'' allowing the execution of tasks that involve the identification and analysis of distance relationships.
Despite their popularity, with applications varying from fiber tracking analysis~\cite{fiber2012poco} to visual text mining~\cite{exemplar2009chen, wordcluod2010cui, projcloud2012paulovich}, projection techniques present limitations when the graphical elements are used to convey information. Given their nature of approximating distances, projection techniques tend to produce layouts with overlapped graphical elements, so suffering from occlusion problems. To address such limitation, some approaches employ post-processing strategies~\cite{projsnippet2014erick, rwordles2012strobelt} or put constraints on the projection process~\cite{incboard2010pinho} to remove the overlapping. However, they make poor use of the visual space, creating layouts with void areas. Aiming at making better use of the available space, distance preserving grid techniques have been devised to arrange the graphical elements into grids, using as much as possible the visual space.
Currently, the state-of-the-art approaches to produce distance-preserving grids solve assignment problems~\cite{isomatch2015fried,kernelized2010quadrianto} or use permutations to optimize cost functions~\cite{ssm2011strong,ssm2014strong}. Although precise, such strategies are computationally expensive, limited to small datasets, or dependent on specialized hardware to speed up the process. In this paper, we introduce a novel approach, called \textit{Distance-preserving Grid (DGrid)}, that combines multidimensional projection techniques with a space-partitioning strategy to create orthogonal regular distance-preserving grids. Despite its simplicity, the quality of the produced layouts and the running times render DGrid a very attractive method for large datasets.
In summary, the main contributions of this paper are:
\squishlist
\item A novel distance-preserving grid layout technique that presents high quality regarding distance and neighborhood preservation while running in a fraction of the time of the current state-of-the-art techniques; and
\item A framework to explore image collections that allows real-time tuning of the semantics of the similarity between images to match users expectations and the navigation of large collections into different levels of detail.
\squishend
\section{Proposed Methodology}
\label{sec:method}
The \textit{Distance-preserving Grid (DGrid)} employs a two-step approach to generate uniform grids that preserve, as much as possible, the distance relationships of a given dataset. In the first step, the data instances ${\mathcal{D}=\{d_1,d_2,\ldots,d_N\}}\in\mathbb{R}^m$ are mapped into points on the plane using a multidimensional projection technique, obtaining their two-dimensional Cartesian coordinates ${\mathcal{P}=\{{p_1=(x_1,y_1)},{p_2=(x_2,y_2)},\ldots,{p_N=(x_N,y_N)}\}}\in\mathbb{R}^2$. Then, a grid ${\mathcal{G}=\{g_{1,1},g_{1,2},\ldots,g_{1,s},\ldots,g_{r,1},g_{r,2},\ldots,g_{r,s}\}}$ with $r$ rows and $s$ columns, where $r \times s \geq N$, is created, assigning each projected instance $p_i$ to a grid cell $g_{p,q}$.
The reasoning behind our approach is that if the projection $\mathcal{P}$ precisely preserves the distance relationships in $\mathcal{D}$, and if $\mathcal{G}$ preserves the geometry of $\mathcal{P}$, then $\mathcal{G}$ will preserve the distance relationships in $\mathcal{D}$. Consider that $\mathcal{P}$ has been obtained from $\mathcal{D}$ (this is later discussed in Section~\ref{sec:projection}). If the points in $\mathcal{P}$ are uniformly distributed over the plane and are arranged following the number of rows and columns of the target grid, like in Figure~\ref{fig:grid0} for a grid with $5$ rows and $4$ columns, the process to assign $\mathcal{P}$ to $\mathcal{G}$ is trivial. First, we vertically split the space into $4$ partitions, with $5$ instances in each, defining the column index of the instances in each partition (the horizontal numbers in Figure~\ref{fig:grid1}). Then, we horizontally split the partitions so that each instance is placed in its partition, defining the row index of each instance (the vertical numbers in Figure~\ref{fig:grid2}). In this example, the instance colored in red is mapped to the grid cell $g_{2,2}$, that is, the cell occupying the third row and third column.
\begin{figure}[h]
\vspace{-0.2cm}
\centering
\subfigure[Initial Projection.]{\includegraphics[width=.275\linewidth]{Figs/grid0.pdf}\quad\label{fig:grid0}}
\subfigure[Vertical split.]{\includegraphics[width=.275\linewidth]{Figs/grid1.pdf}\quad\label{fig:grid1}}
\subfigure[Horizontal split.]{\includegraphics[width=.275\linewidth]{Figs/grid2.pdf}\label{fig:grid2}}\vspace{-0.2cm}
\caption{Process of assigning a projection to a grid when the projection is uniformly distributed over the plane and follows the grid pattern.}
\label{fig:grid}
\vspace{-0.2cm}
\end{figure}
In practice, the assumption that the projection is uniformly distributed over the plane and follows the grid pattern seldom holds. Seeking to approximate such constraints, we recursively bisect the projection into non-overlapping partitions until the obtained partitions individually obey, as much as possible, such constraints. Then, (sub)grids are derived from each partition. For the first bisection, consider $\mathcal{P}$ as the input projection, and $(r,s)$ the dimension of the target grid. If $r > s$, we split $\mathcal{P}$ horizontally, obtaining two partitions $\mathcal{P}=\mathcal{P}_1\cup\mathcal{P}_2$, so that the upper partition $\mathcal{P}_1$ contains enough instances to completely fill half of the desired grid, that is, $|\mathcal{P}_1| = \lceil{r/2}\rceil \times s$. Otherwise, we split $\mathcal{P}$ vertically, so that the left partition $\mathcal{P}_1$ contains enough instances to completely fill half of the desired grid, that is, $|\mathcal{P}_1| = r \times \lceil{s/2}\rceil$. Figure~\ref{fig:bisect0} presents the result of applying this process to a slightly modified version of Figure~\ref{fig:grid0}, an example where the bisectors cannot be straight lines in $\mathbb{R}^2$ but arbitrary curves, so the simple process of Figure~\ref{fig:grid} cannot be directly used.
\begin{figure}[h]
\centering
\vspace{-0.2cm}
\subfigure[Horizontal bisection.]{\includegraphics[width=.325\linewidth]{Figs/bisect1new.pdf}\label{fig:bisect0}}
\subfigure[Vertical bisection.]{\includegraphics[width=.325\linewidth]{Figs/bisect0new.pdf}\label{fig:bisect1}}
\subfigure[Horizontal bisection.]{\includegraphics[width=.325\linewidth]{Figs/bisect2new.pdf}\label{fig:bisect2}}\vspace{-0.2cm}
\caption{Process of bisecting the projection and calculating the top-left corner indexes of the resulting grids. This process is applied until each data instance is assigned to a grid cell.}
\label{fig:bisect}
\vspace{-0.2cm}
\end{figure}
Figure~\ref{fig:bisect1} and Figure~\ref{fig:bisect2} show the bisecting process recursively applied. To compute the cells' indexes from the partitions, we calculate during the bisecting process the indexes of the top-left corner cells of each (sub)grid resulted from each partition. For instance, on Figure~\ref{fig:bisect0}, the index of the top-left corner cell of the upper partition is $[0,0]$, indicating that the grid generated from it starts at row $0$ and column $0$. For the lower partition, the index of the top-left corner cell is $[3,0]$, indicating that the grid generated from it starts at row $3$ and column $0$. Consider that the input projection $\mathcal{P}$ is split into $\mathcal{P}_1$ and $\mathcal{P}_2$, where $\mathcal{P}_1$ is the upper partition for a horizontal cut or the left partition for a vertical cut. Also, let $(i,j)$ be the index of the top-left corner cell of $\mathcal{P}$. By construction, the index of the top-left corner cell of $\mathcal{P}_1$ is $(i,j)$, and the index of $\mathcal{P}_2$ is $(i+\lceil{r/2}\rceil,j)$ for a horizontal cut, and $(i,j+\lceil{s/2}\rceil)$ for a vertical cut.
As mentioned before, this process of bisecting and calculating the top-left corner indexes is successively applied until the resulting partitions obey the uniform distribution and grid pattern constraints. However, using this as a stopping criterion would penalize the computational cost of the overall algorithm since it is an O($N^2$) procedure for a partition containing $N$ instances. Instead, we execute the bisecting and corner computation process until each partition contains only one instance. Notice that this returns the same grid as the process presented in Figure~\ref{fig:grid} if the input projection obeys the uniform distribution and grid pattern constraints. Therefore, it is not necessary to test if such constraints hold in any step of the algorithm, rendering a much faster and simpler process to implement.
Algorithm~\ref{alg:dgrid} puts all these pieces together, showing the overall process adopted by our approach to assigning a projection to a grid. The function \textsc{Split}$_y(\mathcal{P}, k)$ performs the horizontal bisection. In this process, $\mathcal{P}$ is sorted according to the $y$-coordinates ($\mathcal{P}$ is viewed as a list), and the first $k$ instances are assigned to $\mathcal{P}_1$ and the remaining to $\mathcal{P}_2$. The function \textsc{Split}$_x(\mathcal{P}, k)$ performs the vertical bisection using the same process, but sorting $\mathcal{P}$ according to the $x$-coordinates. Notice that since the bisecting process always assigns to the upper and left partitions enough elements to result in a filled grid, spaces are not opened in the interior of the final grid. All empty cells are grouped on the bottom-right corner.
\begin{algorithm}{}
\algrenewcommand\algorithmicindent{1.0em}%
\begin{algorithmic}
\Function{DGrid}{$\mathcal{G}$, $\mathcal{P}$, $(r,s)$, $(i,j)$}
\If{$\mathcal{P} \neq \emptyset$}
\If{$|\mathcal{P}| = 1$} \Comment{$\mathcal{P}$ has one instance}
\State $g_{i,j} \gets p$ \Comment{cell $g_{i,j}\in\mathcal{G}$ receives the only instance $p$ in $\mathcal{P}$}
\Else
\If{$r > s$}
\State $\mathcal{P}_1,\mathcal{P}_2 \gets$ \textsc{Split}$_y$($\mathcal{P}$, $\lceil{r/2}\rceil\times s$)
\State \textsc{DGrid}($\mathcal{G}$, $\mathcal{P}_1, (\lceil{r/2}\rceil, s), (i,j)$)
\State \textsc{DGrid}($\mathcal{G}$, $\mathcal{P}_2, (r-\lceil{r/2}\rceil, s), (i+\lceil{r/2}\rceil, j)$)
\Else
\State $\mathcal{P}_1,\mathcal{P}_2 \gets$ \textsc{Split}$_x$($\mathcal{P}$, $r \times\lceil{s/2}\rceil$)
\State \textsc{DGrid}($\mathcal{G}$, $\mathcal{P}_1, (r, \lceil{s/2}\rceil), (i,j)$)
\State \textsc{DGrid}($\mathcal{G}$, $\mathcal{P}_2, (r, s-\lceil{s/2}\rceil), (i, j+\lceil{s/2}\rceil)$)
\EndIf
\EndIf
\EndIf
\EndFunction
\end{algorithmic}
\caption{Process of assigning a projection to a grid.}\label{alg:dgrid}
\end{algorithm}
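For reference, a direct Python transcription of Algorithm~\ref{alg:dgrid} is sketched below; points carry their ids so that the output maps grid indexes to instances.
\begin{verbatim}
import math

def dgrid(grid, P, r, s, i=0, j=0):
    # grid: dict mapping (row, col) -> instance id
    # P: list of (x, y, id) tuples, the projected instances
    if not P:
        return
    if len(P) == 1:
        grid[(i, j)] = P[0][2]  # the only instance fills cell g_{i,j}
        return
    if r > s:
        half = math.ceil(r / 2)
        P = sorted(P, key=lambda p: p[1])  # Split_y: order by y
        k = half * s
        dgrid(grid, P[:k], half, s, i, j)
        dgrid(grid, P[k:], r - half, s, i + half, j)
    else:
        half = math.ceil(s / 2)
        P = sorted(P, key=lambda p: p[0])  # Split_x: order by x
        k = r * half
        dgrid(grid, P[:k], r, half, i, j)
        dgrid(grid, P[k:], r, s - half, i, j + half)
\end{verbatim}
Since each recursion level sorts disjoint sublists, the partition step costs O($N \log N$) per level and O($N \log^2 N$) overall, in line with the analysis in Section~\ref{sec:discussion}.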
A different interpretation of this binary partition method is to consider it a translation process that removes void spaces so that the distribution of the projected points in the vertical and horizontal directions is as uniform as possible and follows the grid dimensions, that is, the desired number of rows and columns. Since translation is a rigid transformation that preserves relative distances, in the worst-case scenario half of the distance relationships are fully kept when a projection is bisected. Typically, more than that is preserved, so that the geometry of $\mathcal{P}$ is preserved, up to an extent, by $\mathcal{G}$, and, consequently, the produced grid preserves the distance relationships in $\mathcal{D}$.
\subsection{Grid Dimension}
\label{sec:griddimension}
The process of assigning a projection to a grid defined in the previous section is very flexible. Since it focuses on splitting the partitions considering the number of rows and columns, instead of the number of data instances, it is not limited to any particular grid shape. The only constraint is that the number of grid cells should be larger than or equal to the number of data instances, that is, $r \times s \geq N$.
In this paper, we allow the control of the grid shape by defining its aspect-ratio $\Delta$. Here, the aspect-ratio can be the ratio between the number of rows and the number of columns of the final grid, or the ratio between the height and width of the visual space. Given $\Delta$, the target grid dimension is calculated as
\begin{equation}
\begin{array}{l}
r = \lfloor{\sqrt{N * \Delta}}\rfloor\\
s = \lceil{N/r}\rceil
\end{array}
\end{equation}
If $\Delta=1$, the resulting grid will be as square as possible. If ${0<\Delta<1}$, the resulting grid will present more columns than rows. The opposite holds if $\Delta>1$.
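In code, this computation is immediate; as a sanity check, for the photo collection application ($N=180{,}193$ and $\Delta=11/8.5$) it yields the $482 \times 374$ grid reported in that example.
\begin{verbatim}
import math

def grid_dimension(N, delta):
    # r = floor(sqrt(N * delta)), s = ceil(N / r); ensures r * s >= N
    r = math.floor(math.sqrt(N * delta))
    s = math.ceil(N / r)
    return r, s

print(grid_dimension(180193, 11 / 8.5))  # -> (482, 374)
\end{verbatim}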
\subsection{Projecting the Dataset}
\label{sec:projection}
We aim to preserve on the produced grid the distance relationships of a given dataset $\mathcal{D}$ by assigning similar data instances to close grid cells, and dissimilar ones to far apart cells. In this context, the projection $\mathcal{P}$ plays a central role since it guides the grid geometry. Therefore, $\mathcal{P}$ should preserve, as much as possible, the distance relationships in $\mathcal{D}$. In the current literature, there are several multidimensional projection techniques to derive $\mathcal{P}$ from $\mathcal{D}$. Typically, the most precise techniques, such as the classical multidimensional scaling~\cite{mds1965torgerson} or the t-SNE~\cite{maaten2008visualizing}, are computationally expensive, whereas the less precise out-of-sample methods, such as LAMP~\cite{lamp2011joia} or the PLMP~\cite{paulovich2010plmp}, can handle very large datasets in a reasonable amount of time. Thereby, the choice of what technique to use relies mostly on the trade-off between the size of the dataset and the desired precision.
\section{Related Work}
\label{sec:related}
Different approaches have been proposed to create visual representations for conveying distance information. Dimension reduction or projection techniques are well-known examples, such as the classical scaling~\cite{mds1965torgerson}, t-SNE~\cite{maaten2008visualizing}, ISOMAP~\cite{isomap2000tenenbaum}, and LAMP~\cite{lamp2011joia}. Other examples are the techniques that arrange geometric objects on the plane preserving similarity relations while avoiding overlaps, such as RWordle~\cite{rwordles2012strobelt}, IncBoard~\cite{incboard2010pinho}, ProjSnippet~\cite{projsnippet2014erick}, and the UnTangle Maps~\cite{7091015}.
It is beyond the scope of this paper to survey all possible techniques for creating distance layouts. Here we focus on strategies that arrange data into orthogonal regular grids preserving similarity relationships.
One technique that can generate distance preserving grids is the Self-Organizing Map (SOM)~\cite{som98kohonen}. SOM is an unsupervised neural network that creates a discretized lower dimension representation of the data by arranging it into a two-dimensional regular spacing hexagonal or rectangular grid. The main drawback is that it can map several data instances to a single grid cell, opening spaces and overlapping instances on the composed grid~\cite{kernelized2010quadrianto, isomatch2015fried}. The same occurs with its probabilistic counterpart, the Generative Topographic Mapping (GTM)~\cite{gtm98bishop}. Spectral Hashing (SH)~\cite{NIPS2008_3383} can also be used (or adapted) to create distance preserving grids. SH creates hashing codes so that the Hamming distance between two codes approaches the Euclidean distance between the instances they represent. Thereby, by splitting the code into bins corresponding to the rows and columns of a grid, SH codes can be used to assign data instances to grid cells preserving distances. Though a promising strategy, it suffers from the inherent problem of hashing techniques: collisions. Consequently, the produced grids also present open spaces and overlapping instances. Our technique can also be viewed as a process to assign data instances to grid cells (indexes), but we ensure a non-overlapping constraint so that each instance is mapped to a single cell.
Another technique that can generate distance preserving grids is the NMAP~\cite{nmap2014duarte}. NMAP is a space-filling technique that, starting from an input projection, creates rectangular arrangements through a bisecting and scaling recursive process. NMAP was designed for creating distance preserving Treemaps~\cite{treemap1992shneiderman}, but it can be adapted to produce grids if the rectangles' sizes (weights) are all the same. However, due to its binary partition nature, it can only build square power-of-two grids. In fact, the number of rows and columns are not input parameters for NMAP, and there is no guarantee of producing orthogonal grids with cells of the same size. Our technique relies on a similar binary partition process but, different from NMAP, we use the projection to impose a distance-based ordering among the data instances. This ordering is then used to assign instances to grid cells or indexes. Our output is indexes, while NMAP returns rectangles and their positions on the plane. Consequently, we can generate orthogonal grids of any dimension, covering more realistic scenarios where the dataset is not power-of-two in size.
Starting with a random assignment of the data instances to grid cells, the Self-Sorting Map (SSM)~\cite{ssm2014strong, ssm2011strong} technique uses a permutation process, swapping instances between grid cells, aiming at maximizing a cross-correlation function between the distances among data instances and distances among the cells' positions. Since the number of possible permutations is a function of the factorial of the dataset size, the SSM technique searches for a locally optimal organization. In this process, the grid is split into quadrants and sub-quadrants, and swaps are performed between cells in different (sub)quadrants. Given the binary nature of this process, strategies need to be used to support grids that are not square powers of two~\cite{ssm2014strong}. Also, if the number of cells exceeds the number of data instances, it is not possible to control the position of the empty cells. They are (randomly) spread over the grid. In our technique, the empty cells are grouped in one corner of the grid, thus avoiding empty spaces inside the grid that would reduce its overall quality.
The Kernelized Sorting (KS)~\cite{kernelized2010quadrianto} technique creates distance preserving grids finding a locally optimal solution for a quadratic assignment problem~\cite{assigment57tjalling}. KS establishes a matrix containing the pairwise distances between the data instances and a matrix containing the pairwise distances between the grid positions. Then a permutation process is applied on the second matrix to approximate, as much as possible, the first one, resulting in a one-to-one matching between instances and the grid cells. The IsoMatch~\cite{isomatch2015fried} also employs an assignment strategy for constructing distance preserving grids. First, it projects the data into the plane using the ISOMAP~\cite{isomap2000tenenbaum} technique and builds a complete bipartite graph between the projection and grid positions. Then, using the Hungarian algorithm~\cite{hungarian1955kuhn}, it calculates a bipartite matching of this graph, assigning each instance to a grid position to minimize the aggregate displacement when transforming the projection positions into the grid positions. Different from the previous techniques, the KS and the IsoMatch are not limited to rectangular grids. They can create grids with arbitrary shapes. However, since they solve assignment problems, they are computationally expensive, not being able to handle large datasets or even small datasets in real-time. Our technique, although limited to orthogonal grids, can process much larger datasets in a fraction of the time without requiring special hardware.
Other techniques share similarities with the approach proposed in this paper. For instance, the work by Meulemans et al.~\cite{smallmultgaps2017meulemans} positions small multiples into a grid, intentionally adding spaces to improve user perception. Other examples are the well-known Cartograms~\cite{cartograms2016nusrat}, more specifically the Rectangular Cartograms~\cite{rectcartograms2004vankreveld,rectcartograms1934raisz}, which scale areas of geographical regions to rectangular shapes in proportion to some statistic, or the Tile Maps~\cite{tilemaps2017mcneill}, which display geographic regions as a grid of identical cell sizes. Although visually similar, they were not designed to create distance preserving grids, and are thus out of the scope of this paper.
\section{Evaluation and Comparison}
\label{sec:resu}
\subsection{Quantitative Analysis}
In this section we present a quantitative evaluation of the \textit{Distance-preserving Grid (DGrid)} technique, comparing it against the state-of-the-art distance preserving grid techniques, viz., Kernelized Sorting (KS)~\cite{kernelized2010quadrianto}, Self-Sorting Map (SSM)~\cite{ssm2014strong}, and IsoMatch~\cite{isomatch2015fried}. In this comparison we use three different quality metrics: the $k$-neighborhood preservation index~\cite{loch2015fadel}, the cross-correlation~\cite{ssm2014strong}, and the energy function~\cite{isomatch2015fried}. The $k$-neighborhood preservation index was originally developed to evaluate projections, but here we use it to measure how much the neighborhood in the dataset $\mathcal{D}$ is preserved in the grid $\mathcal{G}$. It is calculated as
\begin{equation}
NP_k = \frac{1}{N}\sum^N_i\frac{|N^{\mathcal{D}}_{k_i} \cap N^{\mathcal{G}}_{k_i}|}{k}
\end{equation}
where $N^{\mathcal{D}}_{k_i}$ is the set containing the indexes of the $k$-nearest neighbors of $d_i$ in $\mathcal{D}$, and $N^{\mathcal{G}}_{k_i}$ is the set containing the indexes of the $k$-nearest neighbors of $g_i$ in $\mathcal{G}$. $NP_k$ ranges in $[0,1]$; the larger the value, the better the result. The cross-correlation measures how well the placements of the data instances in the grid correlate to the dissimilarities among them, given by
\begin{equation}
CC=\sum_i^N\sum_j^N\frac{(\lambda(g_i, g_j) - \overline{\lambda}) (\delta(d_i,d_j) - \overline{\delta})
}{\sigma_{\lambda} \sigma_{\delta}}
\end{equation}
where $\delta(d_i,d_j)$ is the dissimilarity between pairs of instances, $\lambda(g_i, g_j)$ is the distance between the cells the instances are assigned to, $\overline{\lambda}$ is the mean distance between any two cells, $\overline{\delta}$ is the mean distance between any two pairs of instances, and $\sigma_{\lambda}$ and $\sigma_{\delta}$ are the corresponding standard deviations. The cross-correlation ranges in $[-1,1]$; the larger, the better. In this paper, we normalize the cross-correlation to $[0,1]$ using $CC'=(CC+1)/2$ to ease the comparison among the different metrics. Finally, the energy function measures how well the pairwise distances between the data instances are preserved by the corresponding distances in the grid. This function is computed as
\begin{equation}
E_p = \left(\sum_i^N\sum_j^N \frac{|c \cdot \delta(d_i,d_j)-\lambda(g_i, g_j)|^p}
{\sum_r^N\sum_s^N \lambda(g_r, g_s)^p}\right)^{\frac{1}{p}}\end{equation}
where $p$ defines the employed norm, and $c$ is a scaling constant (see~\cite{isomatch2015fried} for more details). As suggested in~\cite{isomatch2015fried}, we set $p=1$ since it favors solutions which preserve the smaller distances more than the larger ones. Also, we invert the original equation using ${E'}_p = 1 - E_p$ so that it ranges in $[0,1]$ with larger values rendering better results.
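As an illustration, a brute-force computation of the $k$-neighborhood preservation index could be written as follows; the quadratic distance evaluation makes it suitable only for small datasets.
\begin{verbatim}
import numpy as np

def neighborhood_preservation(D, G, k):
    # D: (N, m) data matrix; G: (N, 2) grid positions per instance
    def knn(X):
        d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
        np.fill_diagonal(d, np.inf)  # an instance is not its own neighbor
        return np.argsort(d, axis=1)[:, :k]
    nD, nG = knn(D), knn(G)
    overlap = [len(set(a) & set(b)) for a, b in zip(nD, nG)]
    return np.mean(overlap) / k
\end{verbatim}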
For the tests we have selected all datasets from the \textit{UCI Machine Learning Repository}~\cite{uci2013lichman} with real-valued attributes and sizes varying between $100$ and $2,500$ instances, allowing the comparison of the techniques in different scenarios. We only use real-valued datasets so that (Euclidean) distances can be properly calculated, and we limited their sizes due to the high computational complexities and running times of KS and IsoMatch. Also, we have discarded the datasets with missing values, resulting in $38$ datasets. The first part of Table~\ref{tab:datasets} details these datasets, presenting their names, sizes, and number of dimensions. For all tests, we have normalized the datasets so that the columns (attributes) have zero mean and standard deviation equal to one. All the results were generated on an Intel Core i7 CPU with $16GB$ of RAM. For the SSM, KS, and IsoMatch techniques we used the original codes, made available by the authors.
Figure~\ref{fig:boxplots} presents boxplots summarizing the results of each technique considering all the $38$ datasets. For the DGrid, we report results using the t-SNE and LAMP techniques to generate the input projections. Although the IsoMatch originally employs the ISOMAP as input, we also report results using the t-SNE and LAMP, so it is possible to compare the DGrid and the IsoMatch while isolating the projection contribution to the quality of the produced results. The DGrid, IsoMatch, and KS techniques are deterministic, so we only run each technique once for each dataset. Given the random initialization of the SSM technique, we run it $30$ times for each dataset. In Figure~\ref{fig:boxplots}, the boxplots in red represent the results of the projections (LAMP and t-SNE) used as input by the DGrid and IsoMatch techniques. They serve only as baselines to show the correlation between projection quality and grid quality. Notice that the drop in precision between the projections and the produced grids is expected since the techniques we use do not create uniformly distributed projections (see Section~\ref{sec:method}). Also note that direct comparisons only make sense among grid layouts, not among grids and projections.
\begin{table}[!h]
\centering
\caption{Datasets employed in the evaluations. We have selected all datasets from the \textit{UCI Machine Learning Repository} with real-valued attributes and sizes up to a limit, allowing the comparison of the techniques in different scenarios.}
\label{tab:datasets}
\footnotesize{\begin{tabular}{|l|l|l|}
\hline
\textbf{Name} & \textbf{Size} & \textbf{Dimensions} \\ \hline \hline
Concrete Slump Test & 103 & 10 \\
Breast Tissue & 106 & 10 \\
LSVT Voice Rehabilitation & 126 & 309 \\
Iris & 150 & 4 \\
Urban Land Cover & 168 & 148 \\
Planning Relax & 182 & 13 \\
Parkinsons & 197 & 23 \\
Connectionist Bench & 208 & 60 \\
Seeds & 210 & 7 \\
Glass Identification & 214 & 10 \\
Yacht Hydrodynamics & 308 & 7 \\
Vertebral Column & 310 & 6 \\
Ecoli & 336 & 8 \\
Leaf & 340 & 16 \\
Libras Movement & 360 & 91 \\
PEMS & 440 & 138,672 \\
Forest Fires & 517 & 11 \\
Vowel Recognition & 528 & 10 \\
Istanbul Stock Exchange & 536 & 8 \\
Climate & 540 & 18 \\
WDBC & 569 & 32 \\
DrivFace & 606 & 6,400 \\
Hill-Valley & 606 & 100 \\
Blood Transfusion & 748 & 5 \\
Gene Expression & 801 & 20,531 \\
Arcene & 900 & 10,000 \\
MicroMass & 931 & 1,300 \\
Cloud & 1,024 & 10 \\
Concrete Compressive Strength & 1,030 & 9 \\
Geographical Original of Music & 1,059 & 68 \\
Banknote Authentication & 1,372 & 5 \\
Yeast & 1,484 & 8 \\
Airfoil Self-Noise & 1,503 & 5 \\
Plant species leaves & 1,600 & 64 \\
Drug Consumption & 1,885 & 32 \\
Cardiotocography & 2,126 & 23 \\
Image Segmentation & 2,100 & 19 \\
Statlog (Image Segmentation) & 2,310 & 19 \\
\hline
\hline
HTRU2 & 17,898 & 9 \\
Default of credit card & 30,000 & 23 \\
Online News Popularity & 39,644 & 61 \\
Facebook Comments & 40,949 & 54 \\
Tamilnadu Electricity Board & 45,781 & 4 \\
Sensorless Drive Diagnosis & 58,509 & 49 \\
Corel Image Features & 68,040 & 89 \\
Blog Feedback & 56,497 & 281 \\
FMA: A Dataset For Music Analysis & 106,574 & 518 \\
MiniBooNE Particle Identification & 130,065 & 50 \\
\hline
\end{tabular}}
\vspace{-0.4cm}
\end{table}
Regarding the $k$-neighborhood preservation index, Figure~\ref{fig:nn}, the best result was attained by the DGrid with t-SNE as input ($\overline{NP}=0.52$), better than the other more costly counterparts, IsoMatch ($\overline{NP}=0.36$) and KS ($\overline{NP}=0.50$). DGrid presents not only the largest mean but also the smallest spread regarding the best and worst results. Comparing the different flavors of DGrid, the results produced using the t-SNE are also considerably superior to the results produced using the LAMP. This is an expected outcome since the formulation of t-SNE favors the preservation of small neighborhoods instead of a global distance preservation as conveyed by the LAMP, which is confirmed by the boxplots of the projections in red. This indicates the impact of the input projection on the produced grid, and also shows that our strategy for assigning the projection to grid cells satisfactorily preserves the input geometry.
In this example, we approximate the neighborhood size $k$ to $5\%$ of the dataset size, setting $k=(\lfloor\sqrt{0.05*N}\rfloor)^2$. We use this approximation instead of $5\%$ to match the grid topology when calculating the neighborhoods.
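In code, this choice of $k$ reads:
\begin{verbatim}
import math

def neighborhood_size(N):
    # k approximated to 5% of the dataset size, snapped to a
    # square number to match the grid topology
    return math.floor(math.sqrt(0.05 * N)) ** 2
\end{verbatim}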
\begin{figure}[!h]
\centering
\subfigure[$k$-neighborhood preservation index]{\includegraphics[width=.9\linewidth]{Figs/neighborhood-vf.pdf}\label{fig:nn}}\\\vspace{-0.2cm}
\subfigure[cross-correlation]{\includegraphics[width=.9\linewidth]{Figs/crossCorrelation-vf.pdf}\label{fig:cc}}\\\vspace{-0.2cm}
\subfigure[energy function]{\includegraphics[width=.9\linewidth]{Figs/energy-vf.pdf}\label{fig:ef}}\vspace{-0.2cm}
\caption{Boxplots of $k$-neighborhood preservation index, cross-correlation, and energy function. In all these aspects, DGrid surpasses (on average) the current state-of-the-art techniques, indicating its quality in preserving distance relationships. The boxplots in red summarize the results of the input projections and serve as baselines to show the correlation between projection and grid properties. They are not intended for direct comparisons.}\label{fig:boxplots}
\vspace{-0.4cm}
\end{figure}
Figure~\ref{fig:cc} shows the cross-correlation results. On average, DGrid with LAMP ($\overline{CC'}=0.80$) presents better results than IsoMatch ($\overline{CC'}=0.78$) and KS ($\overline{CC'}=0.78$), with a smaller spread regarding the best and worst results. Different from the $k$-neighborhood preservation index, which is a local measure, the cross-correlation is global, explaining why the results considering LAMP as input are better than the ones considering the t-SNE, which is also confirmed by the projection boxplots in red. This renders exceptional flexibility to our technique since it allows selecting a projection technique that fulfills specific needs, considering different global or local geometry properties, generating grids that satisfactorily preserve them.
The quality of our approach is also confirmed by the energy function metric (Figure~\ref{fig:ef}). The DGrid with the LAMP ($\overline{{E'}_p}=0.65$) again outperforms the other techniques, IsoMatch ($\overline{{E'}_p}=0.63$) and KS ($\overline{{E'}_p}=0.63$). Since the energy function is a global measure, the LAMP technique presents better results than the t-SNE (see the red boxplots), which is reflected in the produced grids. In~\cite{isomatch2015fried}, the authors show that the energy function strongly correlates with human performance in search tasks, indicating that this is a good measure of grid organization. The same holds for the cross-correlation measure. Therefore, the attained results provide evidence to place the combination of DGrid with the LAMP as one of the best choices for tasks that involve the analysis of similarity relationships based on grids.
\begin{figure*}[]
\centering
\includegraphics[width=.8\linewidth]{Figs/neighComp.png}
\caption{Resulting grids colored according to the \textbf{$\mathbf{k}$-neighborhood preservation index}. The SSM technique groups bad quality cells close to the empty cells, showing the negative impact of empty spots on the produced layouts.}\label{fig:nngrid}
\vspace{-0.4cm}
\end{figure*}
\begin{figure*}[]
\centering
\includegraphics[width=.8\linewidth]{Figs/corComp.png}
\caption{Resulting grids colored according to the \textbf{cross-correlation}. Cross-correlation is a global measure, so the use of a global projection technique (LAMP) as input resulted in better grids in that aspect. This renders exceptional flexibility to our approach since it allows selecting a projection technique that fulfills specific geometry properties, generating grids that satisfactorily preserve them.}\label{fig:ccgrid}
\vspace{-0.4cm}
\end{figure*}
\begin{figure*}[]
\centering
\includegraphics[width=.8\linewidth]{Figs/energyComp.png}
\caption{Resulting grids colored according to the \textbf{energy function}. The energy function strongly correlates with human performance in search tasks, placing DGrid among the best choices for tasks that involve the analysis of similarity relationships based on grids.}\label{fig:efgrid}
\vspace{-0.4cm}
\end{figure*}
To complement the statistical analysis conveyed by the boxplots, providing more detailed information, we show in Figures~\ref{fig:nngrid},~\ref{fig:ccgrid},~and~\ref{fig:efgrid} the resulting grids for some selected datasets. Aiming at showing different aspects of each technique, we choose datasets with varied distance distributions, from a dataset with most instances similar among themselves (Forest) to a dataset with most instances dissimilar among themselves (MicroMass). In these figures, the cells are colored according to different quality metrics calculated for each cell. The cells colored in black are empty. They exist because we have more cells than instances in these examples. Notice that the DGrid, KS, and IsoMatch place all empty cells on the grid borders, whereas the SSM opens spaces inside the grid. The quality metric values are shown below each grid, and the best results are highlighted using a bold font. Although KS is marginally better in one case and presents the same quality as DGrid in other cases, it is an O($N^3$) technique and cannot address problems involving large datasets. DGrid is much less expensive (see Section~\ref{sec:discussion}), so not only can small examples be processed in a fraction of the time, but it can also address larger problems that neither KS nor IsoMatch are capable of handling.
Regarding the $k$-neighborhood preservation index grids (Figure~\ref{fig:nngrid}), the KS and the DGrid with t-SNE, which attained the best results on average (see Figure~\ref{fig:boxplots}), do not present spots concentrating bad quality cells; the error is uniformly spread over the grid. Conversely, SSM tends to group the bad quality cells close to the empty cells, showing their negative impact on the produced layouts. The cross-correlation grids (Figure~\ref{fig:ccgrid}) report an intriguing pattern produced by all techniques. For the Forest dataset, there is a clear spot with bad quality cells, concentrated in the border of two different regions. A close examination explains the reason. Since the Forest dataset is composed of two very distinct groups of instances, approaching them in the produced layout increases the error in the border cells, an inevitable aspect of grid layouts. The energy function grids (Figure~\ref{fig:efgrid}) also present an unusual pattern regarding the Forest dataset. The two different groups of instances can also be identified, but in these grids, the larger group presents significantly worse results if compared to the smaller one (compare with Figure~\ref{fig:ccgrid}). In this case, the problem is related to the size of the groups. Since one group is much larger than the other, considering the groups individually defines two different scenarios regarding distance distribution. For the smaller group, most instances are dissimilar among themselves, but for the larger, most instances are similar among themselves. Since the energy function is global and measures the distance preservation, the differences in distribution affect the quality of the grid cells.
Given the process we develop to derive grids from projections, our approach can obtain better results if the number of rows and columns, controlled by $\Delta$, is defined considering the distribution of the input projections. Figure~\ref{fig:delta} shows the impact of varying $\Delta$ on the quality of the produced grids. To have better control of this test, we artificially generate a projection with three times more points in the vertical direction than in the horizontal direction. As expected, the best results are attained when $\Delta=3$ (the dashed line). In all examples in this section we have used square grids, setting $\Delta=1$. Since most techniques we are comparing to do not depend on projections, we prefer to set a fixed value instead of using the best possible value of $\Delta$ so as not to bias the evaluation in favor of our approach.
\begin{figure}[htb]
\centering
\includegraphics[width=.9\linewidth]{Figs/aspect-ratio-rectangle.pdf}
\caption{Impact of varying the grid dimensions on the quality of the produced layouts. The best results are attained when the grid dimensions are related to the distribution of the input projection.}\label{fig:delta}
\vspace{-0.4cm}
\end{figure}
Finally, we have compared DGrid with SSM regarding the running times. We have removed the other techniques from this comparison since they are computationally expensive and not capable of processing large datasets. In this test we have selected $10$ datasets from the \textit{UCI Machine Learning Repository}, varying the sizes up to $130,000$ instances. The employed datasets are detailed in the second part of Table~\ref{tab:datasets}. Figure~\ref{fig:time} summarizes the results. To allow a fair comparison, DGrid and SSM are both implemented in Java. Besides the boxplots for the DGrid and the SSM techniques, the figure shows individual boxplots for the projection and the grid assignment steps. In this example, we are using the LAMP to project the data. On average, the DGrid is almost two orders of magnitude faster than the SSM, but better results can be obtained if a faster projection technique is employed (the projection step dominates the process). Considering the tested techniques, DGrid presents the best tradeoff between running times and quality of the produced grids, placing it among the state-of-the-art techniques for generating distance preserving grids from large datasets.
\begin{figure}[htb]
\centering
\includegraphics[width=.8\columnwidth]{Figs/plotTime.pdf}
\caption{Running times boxplots. DGrid is almost two orders of magnitude faster than the SSM technique, and the projection phase dominates its running times. We have removed the other techniques from this comparison since they are not capable of processing large datasets.}
\label{fig:time}
\vspace{-0.4cm}
\end{figure}
\subsection{Qualitative Analysis}
\section{Introduction}\label{sec:intro}
In the present work we consider the one-dimensional shallow water system with transverse velocity and Coriolis force. This system is also known as 1D rotating shallow-water equations (RSW) and is given by
\begin{equation} \label{eq:RSW1D}
\begin{cases}
\partial_t h + \partial_x (hu) = 0, \\
\partial_t (hu) + \partial_x\left(hu^2 + \frac{gh^2}{2}\right) = fhv - gh\partial_x z,\\
\partial_t (hv) + \partial_x (huv) = -fhu,
\end{cases}
\end{equation}
where $h(x,t)$ denotes the fluid height, $u(x,t)$ and $v(x,t)$ are the two components of the horizontal velocity, $z(x)$ designates the topography and is a given function, $g$ is the constant gravitational acceleration and $f$ the Coriolis parameter.
This system can be written under the more compact form $\partial_t w + \partial_x f(w) = s(w,z)$ with
$$w =\begin{pmatrix} h \\ hu \\ hv \end{pmatrix}, \quad
f(w) = \begin{pmatrix}
hu \\ hu^2+\frac{gh^2}{2} \\ huv
\end{pmatrix},
$$
and $s(w,z) = s_{cor}(w) + s_{topo}(w)\partial_x z$ where we have set
$$
s_{cor}(w) = \begin{pmatrix}
0 \\ fhv \\ -fhu
\end{pmatrix}
\text{ and }
s_{topo}(w) = \begin{pmatrix}
0 \\ -gh \\ 0
\end{pmatrix}.$$
The first source term is related to the Coriolis force and the second one to the topography.
The vector $w$ must belong to the convex set of admissible states $$\Omega = \{ w=(h,hu,hv)^T \in \mathbb{R}^3 ; h>0 \}.$$
This 1D system can be obtained from the two-dimensional RSW equations,
\begin{equation} \label{eq:RSW2D}
\begin{cases}
\partial_t h + \partial_x (hu) + \partial_y (hv) = 0,\\
\partial_t (hu) + \partial_x\left(hu^2 + \frac{gh^2}{2}\right) + \partial_y(huv) = fhv - gh\partial_x z,\\
\partial_t (hv) + \partial_x (huv) + \partial_y\left(hv^2+\frac{gh^2}{2}\right) = -fhu - gh\partial_y z,
\end{cases}
\end{equation}
in which the variations in the $y$ direction are neglected.
The RSW system takes into account the force due to the Earth's rotation through the Coriolis term and can therefore model large-scale oceanic or atmospheric fluid flows. One of the remarkable behaviours of geophysical flows is the geostrophic equilibrium, which has received great attention in the literature in recent years, see \cite{28BouchutSommerZeitlin2004,36bookZeitlinBouchut2007,19LukacovaNoelleKraft2007,30AudusseKleinNguyen2011,25Lahaye2014,17Audusse2017,20GouzienLahayeZetilinDubos2017,29ChertockDudzinski2018} for instance. Most oceanic and atmospheric circulations are perturbations of the geostrophic equilibrium, which expresses the balance between the Coriolis force and the horizontal pressure force, as follows in 2D
$$g\nabla (h+z) = f\begin{pmatrix} v \\ -u \end{pmatrix}.$$
In 1D, the geostrophic equilibrium writes
\begin{equation} \label{eq:geostrophic steady state}
\begin{cases}
u = 0,\\
g\partial_x (h+z) = fv,
\end{cases}
\end{equation}
which is a steady solution of \eqref{eq:RSW1D} with no tangential velocity. Let us notice that in 1D, all the steady solutions of \eqref{eq:RSW1D} with no tangential velocity are described by the geostrophic equilibrium \eqref{eq:geostrophic steady state}. With a zero velocity $v$, we recover the lake at rest solution of the classical shallow-water model.
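As an illustration, a discrete geostrophic state can be built from any smooth free-surface profile by setting $u=0$ and solving \eqref{eq:geostrophic steady state} for $v$; the Python sketch below uses central differences and illustrative values of $g$ and $f$.
\begin{verbatim}
import numpy as np

g, f = 9.81, 1.0e-4  # gravity and Coriolis parameter (illustrative)

def geostrophic_v(h, z, dx):
    # v such that g d/dx (h + z) = f v, via central differences
    eta = h + z
    v = np.empty_like(eta)
    v[1:-1] = g * (eta[2:] - eta[:-2]) / (2.0 * dx * f)
    v[0], v[-1] = v[1], v[-2]  # one-sided copies at the boundaries
    return v
\end{verbatim}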
From a numerical point of view, it is well-known since the pioneering works \cite{37bermudez1994,38greenberg1996,39gosse2000,40jin2001} that numerical schemes should capture accurately the steady solutions. In the last few decades, a large literature was devoted to designing well-balanced schemes able to preserve steady solutions at rest in different contexts. For the classical shallow-water equations, we can mention the hydrostatic reconstruction method proposed in \cite{16Audusse2004HYDRO} and numerous other works using various methods, including \cite{liang2009numerical,fjordholm2011well,fernandez2008consistent}.
Concerning the RSW system, some authors have developed numerical schemes which preserve exactly the geostrophic equilibrium \eqref{eq:geostrophic steady state}, for instance in \cite{28BouchutSommerZeitlin2004,29ChertockDudzinski2018,19LukacovaNoelleKraft2007,22LiuChertock2019}.
More recently, some numerical schemes able to preserve all the steady states, including the moving ones, were derived. Let us emphasize that it is in general a very challenging task to derive such fully well-balanced schemes. The first attempt was in \cite{43castro2007}, where a scheme that captures all the steady states of the shallow-water equations with topography was presented. However, this scheme was not able to preserve the positivity of the water height. In \cite{bouchut2010subsonic}, the authors obtain a scheme that preserves all the subsonic steady states. The first fully well-balanced and positive preserving scheme was derived by Berthon-Chalons \cite{32BerthonChalons2016}. Later, fully well-balanced schemes were also derived for the shallow-water equations with both topography and friction in \cite{47Berthon_Dansac_Manning_Friction} and for the blood flow equations in \cite{46Berthon_Blood_flow}.
For the 1D RSW equations, the steady solutions are described by
\begin{equation} \label{eq:steadystates}
\begin{cases}
\partial_x (hu) = 0, \\
\partial_x\left(hu^2 + \frac{gh^2}{2} \right) = fhv - gh\partial_x z, \\
(hu) \partial_x v = -fhu.
\end{cases}
\end{equation}
Up to our knowledge, no fully well-balanced scheme was proposed for the 1D RSW equations. In this system, there is an additional difficulty due to the complex structure of the steady states. Indeed, dividing the second equation of \eqref{eq:steadystates} by $h$ and the third one by $hu$, and using $\partial_x(hu)=0$, the steady solutions with nonzero tangential velocity satisfy
\begin{equation} \label{eq:moving_steady_states}
\begin{cases}
\partial_x (hu) = 0, \\
\partial_x\left(\frac{u^2}{2} + g(h+z)\right) = fv, \\
\partial_x v = -f.
\end{cases}
\end{equation}
Thus the steady solutions with no tangential velocity described by \eqref{eq:geostrophic steady state} cannot be obtained by setting $u=0$ in \eqref{eq:moving_steady_states}. It leads to two different families of steady states. This is a discrepancy with the standard shallow-water model, where the lake at rest can be obtained by setting $u=0$ in the moving steady states equations.
The first aim of this paper is therefore to derive a fully well-balanced and positive preserving scheme for the one-dimensional RSW equations.
Another issue arises with the 1D RSW equations when we try to increase the order of precision, while preserving the well-balanced property. For other systems with source terms, well-balanced second-order extensions exist. The reader is referred for instance to \cite{15BouchutLIVRE,34DansacBerthonClainFoucher2016} for the shallow-water system with topography, \cite{47Berthon_Dansac_Manning_Friction} for the shallow-water system with both topography and friction and \cite{46Berthon_Blood_flow} for the blood flow equations. In all these extensions, the main ingredient lies in a reconstruction procedure that preserves the discrete steady states. Unfortunately, such a procedure is not possible in the case of the 1D RSW, once again due to the complex structure of the steady solutions.
In \cite{47Berthon_Dansac_Manning_Friction} and \cite{46Berthon_Blood_flow}, a discrete steady state detection procedure is performed. The purpose is to modify the limitation procedure in order to recover the well-balanced first-order scheme near steady states and keep the high-order scheme far from steady states. We propose to adapt this technique for the 1D RSW equations. However, in order for this method to work in this context, we must complement this technique with some new manipulations of the space steps.
The paper is organized as follows. In \cref{sec:GTS}, we start by recalling some general notions about Godunov-type schemes and we choose the discretisation of the continuous steady solutions the scheme will have to preserve. Next, \cref{sec:approximate_Riemann_solver} is devoted to the derivation of an approximate Riemann solver that leads to a fully well-balanced and positive preserving scheme, as stated in \cref{thm:first-order_scheme}. In \cref{sec:secondorder}, we recall the principle of the classical second-order MUSCL extension and we explain why it cannot give a fully well-balanced scheme for the RSW system. Therefore, we present a new strategy based on a discrete steady state detection to recover this property. We also check that this modification does not create non-positive fluid height values. In \cref{sec:numerical_results}, we show some numerical examples that illustrate the fully well-balanced property and the accuracy of both first-order and second-order schemes. Finally, we give some concluding remarks in \cref{sec:conclusions}.
All along this paper, for any quantity $X$ which has a left value $X_L$ and a right value $X_R$, we will use the following notations
$$[X]=X_R-X_L,\qquad \overline{X}=\frac{X_L+X_R}{2}.$$
\section{Godunov-type scheme} \label{sec:GTS}
The numerical scheme we will derive to approximate system \eqref{eq:RSW1D} is a Godunov-type scheme. In this section, we recall the framework of this family of finite volume schemes.
\subsection{Principle}\label{sec:principle}
In the following, we consider a space discretisation made of cells $K_i = (x_{i-1/2},x_{i+1/2})$, with constant length $\Delta x$. The center of the cell $K_i$ is denoted by $x_i$.
The topography is discretized by
$$z_i=\frac{1}{\Delta x}\int_{K_i}z(x)dx.$$
At time $t^n$, we assume known an approximation of the solution of \eqref{eq:RSW1D} constant on each cell, $$w_{\Delta x}(x,t^n) = w_i^n, \text{ if } x \in K_i.$$
In order to simplify the notations, we set $\widetilde{w} = (w, z)$, which belongs to the set
$$\widetilde{\Omega}=\{ \widetilde{w}=(h,hu,hv,z)^T \in \mathbb{R}^4 ; h>0 \}.$$
Since $z$ does not depend on time, we have $\widetilde{w}_i^n=(w_i^n,z_i)$.
We aim to update this approximation at time $t^{n+1} = t^n+\Delta t$, with a step $\Delta t$ chosen according to a CFL condition.
Godunov-type schemes are mainly based on Riemann problems, which are Cauchy problems for system \eqref{eq:RSW1D} with an initial data of the form
\begin{equation} \label{eq:riemann_pb}
\widetilde{w}(x,0) = \begin{cases}
\widetilde{w}_L & \text{ if } x<0, \\
\widetilde{w}_R & \text{ if } x>0.
\end{cases} \end{equation}
We denote by $\mathcal{W}_R(\frac{x}{t},\widetilde{w}_L,\widetilde{w}_R)$ the exact solution of \eqref{eq:RSW1D}--\eqref{eq:riemann_pb}.
This exact solution is usually very difficult to compute. Therefore, we prefer to use an approximate Riemann solver $\widehat{\mathcal{W}}_R\left(\frac{x}{t},\widetilde{w}_L,\widetilde{w}_R\right)$ instead. According to \cite{31HLL1983}, the approximate Riemann solver has to satisfy the following consistency property:
\begin{equation*}
\frac{1}{\Delta x} \int_{-\frac{\Delta x}{2}}^{\frac{\Delta x}{2}} \widehat{\mathcal{W}}_R\left(\frac{x}{\Delta t},\widetilde{w}_L,\widetilde{w}_R\right) dx
= \frac{1}{\Delta x}\int_{-\frac{\Delta x}{2}}^{\frac{\Delta x}{2}} \mathcal{W}_R\left(\frac{x}{\Delta t},\widetilde{w}_L,\widetilde{w}_R\right) dx.
\end{equation*}
The average of the exact Riemann solution can be computed and the previous condition is equivalent to
\begin{multline} \label{eq:strong_consistency}
\frac{1}{\Delta x} \int_{-\frac{\Delta x}{2}}^{\frac{\Delta x}{2}} \widehat{\mathcal{W}}_R\left(\frac{x}{\Delta t},\widetilde{w}_L,\widetilde{w}_R\right) dx = \frac{w_L+w_R}{2} - \frac{\Delta t}{\Delta x}(f(w_R)-f(w_L)) \\
+ \frac{1}{\Delta x} \int_0^{\Delta t} \int_{-\frac{\Delta x}{2}}^{\frac{\Delta x}{2}} s\left(\mathcal{W}_R\left(\frac{x}{t},\widetilde{w}_L,\widetilde{w}_R\right), z(x) \right) dxdt.
\end{multline}
In the absence of a source term, we can enforce this equality to ensure the consistency of the approximate Riemann solver. However, it is not always possible to compute exactly the average of the source term. Therefore, it is usual to use a relevant approximation (see for instance \cite{15BouchutLIVRE,32BerthonChalons2016,12Ripa2016})
$$ S(\widetilde{w}_L,\widetilde{w}_R)\approx\frac{1}{\Delta t} \int_0^{\Delta t} \int_{-\frac{\Delta x}{2}}^{\frac{\Delta x}{2}} s\left(\mathcal{W}_R\left(\frac{x}{t},\widetilde{w}_L,\widetilde{w}_R\right), z(x)\right) dxdt.$$
This numerical source term should be consistent with the continuous source term $s$ in the following sense.
\begin{definition} \label{def:source_term_consistency}
The numerical source term $S$ is consistent with the continuous source term $s(\widetilde{w}) = s_{cor}(w) + s_{topo}(w)\partial_x z$ if it satisfies
\begin{equation} \label{eq:general_source_consistency}
S((w,z_L),(w,z_R)) = s_{cor}(w) \Delta x + s_{topo}(w) [z].
\end{equation}
\end{definition}
Given a consistent numerical source term, the approximate Riemann solver can in general only satisfy a weaker version of \eqref{eq:strong_consistency}. This leads to the definition of a weakly consistent approximate Riemann solver.
\begin{definition}\label{def:weak_consistency}
The approximate Riemann solver $\widehat{\mathcal{W}}_R$ is weakly consistent if there exists a consistent numerical source term $S$ such that
\begin{equation} \label{eq:weak_consistency}
\frac{1}{\Delta x}\int_{-\frac{\Delta x}{2}}^{\frac{\Delta x}{2}} \widehat{\mathcal{W}}_R\left(\frac{x}{\Delta t},\widetilde{w}_L,\widetilde{w}_R\right) dx = \frac{w_L+w_R}{2} - \frac{\Delta t}{\Delta x}(f(w_R)-f(w_L)) + \frac{\Delta t}{\Delta x} S(\widetilde{w}_L,\widetilde{w}_R).
\end{equation}
\end{definition}
The following section will be devoted to deriving a weakly consistent approximate Riemann solver. For now, we show how to obtain a numerical scheme from an approximate Riemann solver. A Godunov-type scheme is built in two steps:
\begin{itemize}
\item firstly, we consider the juxtaposition of approximate Riemann solvers at each interface $x_{i+1/2}$,
$$
w_{\Delta x}(x,t^n+t) = \widehat{\mathcal{W}}_R\left(\frac{x-x_{i+1/2}}{t},\widetilde{w}^n_{i},\widetilde{w}_{i+1}^n \right), \text{ if }x \in (x_{i},x_{i+1});
$$
\item secondly, the update at time $t^{n+1}$ is obtained by averaging the previous function on each cell
$$ w_i^{n+1} = \frac{1}{\Delta x}\int_{x_{i-1/2}}^{x_{i+1/2}} w_{\Delta x}(x,t^n+\Delta t)dx,
$$
or equivalently
\begin{equation} \label{eq:scheme_update}
w_{i}^{n+1} = \frac{1}{\Delta x} \int_0^{\frac{\Delta x}{2}} \widehat{\mathcal{W}}_R\left(\frac{x}{\Delta t},\widetilde{w}_{i-1}^n,\widetilde{w}_i^n\right)dx + \frac{1}{\Delta x} \int_{-\frac{\Delta x}{2}}^0 \widehat{\mathcal{W}}_R\left(\frac{x}{\Delta t},\widetilde{w}_i^n,\widetilde{w}_{i+1}^n \right)dx.
\end{equation}
\end{itemize}
In order to prevent the approximate Riemann solvers from interacting with each other, we must enforce the CFL restriction
\begin{equation} \label{eq:CFL_order1}
\frac{\Delta t}{\Delta x} \max_{i \in \mathbb{Z}}| \lambda^\pm(\widetilde{w}_i^n,\widetilde{w}_{i+1}^n) | \leq \frac{1}{2},
\end{equation}
where $\lambda^\pm(\widetilde{w}_L,\widetilde{w}_R)$ denotes the minimum and maximum speeds of the waves that appear in $\widehat{\mathcal{W}}_R(\frac{x}{t},\widetilde{w}_L,\widetilde{w}_R)$.
Under this condition, we can write the Godunov-type scheme as a finite volume scheme (see for instance \cite{32BerthonChalons2016})
\begin{equation} \label{eq:godunov_type_scheme}
w_i^{n+1} = w_i^n - \frac{\Delta t}{\Delta x}(F_{i+1/2}^n - F_{i-1/2}^n) + \frac{\Delta t}{2 \Delta x} (S_{i+1/2}^n + S_{i-1/2}^n),
\end{equation}
with $F_{i+1/2}^n = F(\widetilde{w}_i^n,\widetilde{w}_{i+1}^n)$ and $S_{i+1/2}^n = S(\widetilde{w}_i^n,\widetilde{w}_{i+1}^n)$, where the numerical flux is given by
\begin{multline} \label{eq:numerical flux general def}
F(\widetilde{w}_L,\widetilde{w}_R) = \frac{f(w_L)+f(w_R)}{2} - \frac{\Delta x}{4\Delta t}(w_R-w_L) \\
+ \frac{1}{2\Delta t} \left( \int_0^{\frac{\Delta x}{2}}\widehat{\mathcal{W}}_R\left(\frac{x}{\Delta t},\widetilde{w}_L,\widetilde{w}_R \right)dx - \int_{-\frac{\Delta x}{2}}^0 \widehat{\mathcal{W}}_R\left(\frac{x}{\Delta t},\widetilde{w}_L,\widetilde{w}_R \right) dx \right),
\end{multline}
and the numerical source term $S(\widetilde{w}_L,\widetilde{w}_R)$ is the same as introduced in \cref{def:weak_consistency}.
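For illustration purposes, the update \eqref{eq:godunov_type_scheme} can be transcribed into a few lines of Python. This is only a sketch: the functions \texttt{flux} and \texttt{source} are placeholders for the maps $F$ and $S$ defined above, and the array layout as well as the treatment of the boundary cells are assumptions of this snippet, not part of the scheme derivation.
\begin{verbatim}
import numpy as np

def godunov_update(w, z, dx, dt, flux, source):
    # w: array of shape (N, 3) storing (h, hu, hv) in each cell,
    # z: array of shape (N,) storing the discretised topography z_i,
    # flux(wL, zL, wR, zR) and source(wL, zL, wR, zR) stand for F and S.
    w_new = w.copy()
    for i in range(1, len(w) - 1):  # boundary cells left untouched here
        F_right = flux(w[i], z[i], w[i+1], z[i+1])
        F_left = flux(w[i-1], z[i-1], w[i], z[i])
        S_right = source(w[i], z[i], w[i+1], z[i+1])
        S_left = source(w[i-1], z[i-1], w[i], z[i])
        w_new[i] = (w[i] - dt/dx*(F_right - F_left)
                    + dt/(2*dx)*(S_right + S_left))
    return w_new
\end{verbatim}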
At this point, the only requirement on the scheme is the weak consistency of the approximate Riemann solver. We now list some other properties the scheme should satisfy.
\subsection{Numerical scheme properties}\label{sec:numerical_scheme_properties}
We present two important features of numerical schemes in this context: robustness and well-balancing. Godunov-type schemes have the advantage of inheriting these properties from the approximate Riemann solver $\widehat{\mathcal{W}}_R$. First, we study the preservation of fluid height positivity.
\begin{lemma}\label{lem:robust}
If the approximate Riemann solver $\widehat{\mathcal{W}}_R$ satisfies the robustness condition
\begin{equation} \label{eq:RS_robustness}
\forall (\widetilde{w}_L,\widetilde{w}_R) \in \widetilde{\Omega}^2,\, \forall \xi \in \mathbb{R},\, \widehat{\mathcal{W}}_R\left(\xi,\widetilde{w}_L,\widetilde{w}_R\right) \in \Omega,
\end{equation}
then under the CFL condition \eqref{eq:CFL_order1}, the Godunov-type scheme \eqref{eq:godunov_type_scheme} preserves the positivity of the fluid height:
$$\forall i \in \mathbb{Z}, h_i^n>0 \Rightarrow \forall i \in \mathbb{Z}, h_{i}^{n+1}>0.$$
\end{lemma}
\begin{proof}
Assuming $w_i^n \in \Omega$ for all $i \in \mathbb{Z}$, the robustness condition \eqref{eq:RS_robustness} ensures that the state $w_i^{n+1}$ defined by \eqref{eq:scheme_update} is an average of elements of the convex set $\Omega$, and thus belongs to $\Omega$.
\end{proof}
Similarly, the Godunov-type scheme is well-balanced as soon as the approximate Riemann solver is. To be more specific, we have to introduce the notion of local steady state.
\begin{definition}\label{def:local steady state}
A couple of states $(\widetilde{w}_L,\widetilde{w}_R)$ defines a local steady state for the system \eqref{eq:RSW1D} if it satisfies
\begin{equation} \label{eq:steady states discretisation}
\begin{cases}
h_Ru_R=h_Lu_L=q, \\
\left[ \frac{u^2}{2} + g(h+z) \right] = \Delta x f\overline{v}, \\
q[v] = - \Delta x fq,
\end{cases}
\end{equation}
or equivalently if the local steady state indicator
\begin{equation} \label{eq:steady state indicator epsLR}\mathcal{E}(\widetilde{w}_L,\widetilde{w}_R,\Delta x) = \sqrt{\Big\vert \left[ hu \right] \Big\vert^2 + \Bigg\vert \left[\frac{u^2}{2}+g (h+z) \right] - \Delta x f\overline{v} \Bigg\vert^2 + \Big\vert \overline{hu} ( [v] + f \Delta x ) \Big\vert^2},
\end{equation}
is equal to zero.
\end{definition}
Throughout this paper, we write $\mathcal{E}_{LR}$ rather than $\mathcal{E}(\widetilde{w}_L,\widetilde{w}_R,\Delta x)$ if no ambiguity is possible.
Let us notice that \eqref{eq:steady states discretisation} is actually a discretisation of the equations \eqref{eq:steadystates} that define the continuous steady states. Other choices of discretisation would be possible, especially in the choice of the mean value $\overline{v}$.
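In practice, this indicator is straightforward to evaluate. A minimal Python sketch, in which the state layout $(h,hu,hv)$ and the default parameter values are illustrative assumptions, could read:
\begin{verbatim}
import numpy as np

def steady_indicator(wL, zL, wR, zR, dx, f=1.0, g=9.81):
    # Local steady state indicator E(wL~, wR~, dx); it vanishes
    # exactly on the local steady states defined above.
    hL, huL, hvL = wL
    hR, huR, hvR = wR
    uL, vL = huL/hL, hvL/hL
    uR, vR = huR/hR, hvR/hR
    e1 = huR - huL                                   # [hu]
    e2 = (0.5*uR**2 + g*(hR + zR)
          - 0.5*uL**2 - g*(hL + zL)
          - dx*f*0.5*(vL + vR))                      # Bernoulli-type jump
    e3 = 0.5*(huL + huR)*((vR - vL) + f*dx)          # transverse balance
    return np.sqrt(e1**2 + e2**2 + e3**2)
\end{verbatim}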
The definition of a well-balanced Riemann solver follows.
\begin{definition}\label{def:ARS WB}
An approximate Riemann solver is said to be well-balanced if
$$\widehat{\mathcal{W}}_R\left(\frac{x}{t},\widetilde{w}_L,\widetilde{w}_R\right) =
\begin{cases}
w_L & \text{ if } x<0, \\
w_R & \text{ if } x>0,
\end{cases} $$ as soon as $(\widetilde{w}_L,\widetilde{w}_R)$ is a local steady state.
\end{definition}
Similarly, we define a discrete steady state and a well-balanced scheme.
\begin{definition}\label{def:approx at steady state} $~$
\begin{enumerate}
\item A sequence $(\widetilde{w}_i^n)_{i \in \mathbb{Z}}$ defines a discrete steady state if the couples $(\widetilde{w}_i^n,\widetilde{w}_{i+1}^n)$ are local steady states for all $i \in \mathbb{Z}$.
\item A numerical scheme is said to be well-balanced if for any discrete steady state $(\widetilde{w}_i^n)_{i \in \mathbb{Z}}$, we have $$ w_i^{n+1} = w_i^n, \; \forall i \in \mathbb{Z}.$$
\end{enumerate}
\end{definition}
Now we prove that the well-balanced property of the approximate Riemann solver extends to the numerical scheme.
\begin{lemma}\label{lem:WB}
If the approximate Riemann solver $\widehat{\mathcal{W}}_R$ is well-balanced, then the associated Godunov-type scheme \eqref{eq:godunov_type_scheme} is well-balanced too.
\end{lemma}
\begin{proof}
We consider a discrete steady state $(\widetilde{w}_i^n)_{i \in \mathbb{Z}}$. Since the approximate Riemann solver is well-balanced, we get from \eqref{eq:scheme_update} that
$$w_i^{n+1} = \frac{1}{\Delta x} \int_0^{\frac{\Delta x}{2}} w_i^n dx + \frac{1}{\Delta x} \int_{-\frac{\Delta x}{2}}^0 w_i^n dx = w_i^n, \ \forall i \in \mathbb{Z},$$
and the proof is complete.
\end{proof}
To summarize, the approximate Riemann solver that we will derive in the next section has to satisfy the following properties:
\begin{itemize}
\item \label{item:consistency with exact solver} the weak consistency condition \eqref{eq:weak_consistency},
\item \label{item:robust} the robustness condition \eqref{eq:RS_robustness},
\item \label{item:fully WB} the fully well-balanced property given by \cref{def:ARS WB}.
\end{itemize}
\section{Approximate Riemann solver} \label{sec:approximate_Riemann_solver} Here, we propose an approximate Riemann solver for the system \eqref{eq:RSW1D} that satisfies the three previous properties. Throughout its construction, we carefully choose the relations we use in order to meet these requirements. We adapt to the RSW system the strategy proposed in \cite{34DansacBerthonClainFoucher2016,47Berthon_Dansac_Manning_Friction,46Berthon_Blood_flow} for different systems.
\subsection{Source term discretisation}\label{sec:numerical source term}
The aim of this section is to propose a numerical source term $S(\widetilde{w}_L,\widetilde{w}_R)=(0,S^{hu}(\widetilde{w}_L,\widetilde{w}_R),S^{hv}(\widetilde{w}_L,\widetilde{w}_R))^T$ which is consistent with the continuous source term $s$ in the sense of \cref{def:source_term_consistency}. Moreover, this choice of numerical source term has to be compatible with the required well-balanced property.
To this end, we start by considering a Riemann data $(\widetilde{w}_L,\widetilde{w}_R)$ which is a local steady state according to \cref{def:local steady state}. Since we want the approximate Riemann solver to be both weakly consistent and well-balanced, the condition \eqref{eq:weak_consistency} enforces
\begin{equation} \label{eq:source_term_flux}
S(\widetilde{w}_L,\widetilde{w}_R) = f(w_R) - f(w_L),
\end{equation}
or equivalently
\begin{align*}
& S^{hu}(\widetilde{w}_L,\widetilde{w}_R) = h_Ru_R^2 + \frac{gh_R^2}{2} - h_Lu_L^2 - \frac{gh_L^2}{2},\\
& S^{hv}(\widetilde{w}_L,\widetilde{w}_R) = h_Ru_Rv_R - h_Lu_Lv_L.
\end{align*}
Using the discretisation of the local steady states chosen in \cref{def:local steady state}, these relations can be rewritten as
\begin{align}
\label{eq:Shu at steady state, alpha_LR}& S^{hu}(\widetilde{w}_L,\widetilde{w}_R) = \left( g\overline{h} - \frac{q^2}{h_Lh_R}\right) [h],\\
\label{eq:Shv at steady state}& S^{hv}(\widetilde{w}_L,\widetilde{w}_R) = -\Delta x fq.
\end{align}
The expression \eqref{eq:Shu at steady state, alpha_LR} cannot be used to define the numerical source term in the general case since it would not be consistent in the sense of \cref{def:source_term_consistency}.
Hence, we continue to develop this expression for a local steady state.
First, from the second equality of \eqref{eq:steady states discretisation}, we get
\begin{equation}
\frac{q^2}{2h_R^2} + g(h_R+z_R) - \frac{q^2}{2h_L^2} -g(h_L+z_L) = \Delta x f \overline{v},
\end{equation}
which leads to
\begin{equation} \label{eq:equilibrium}
[h]\left(1-\frac{q^2 \overline{h}}{gh_L^2h_R^2}\right) = \Delta x f \overline{v}/g - (z_R - z_L).
\end{equation}
It follows
\begin{equation} \label{eq:relation at steady states between hR-hL and zR-zL}
[h] = \frac{\Delta x f\overline{v}/g - [z]}{1-\text{Fr}},
\end{equation}
where $\text{Fr} = \frac{\overline{h}|u_Lu_R|}{gh_Lh_R}$ is a discrete Froude number.
Injecting this relation into \eqref{eq:Shu at steady state, alpha_LR}, we get
\begin{equation}\label{eq:Shu at steady state0}
S^{hu}(\widetilde{w}_L,\widetilde{w}_R) = \Delta x f\overline{h}\overline{v}-g\overline{h}[z] + \frac{g\text{Fr} [h]^2}{4\overline{h}} \frac{ (\Delta x f\overline{v}/g - [z])}{(1-\text{Fr})}.
\end{equation}
We inject one more time \eqref{eq:relation at steady states between hR-hL and zR-zL} in the above equality to obtain a more convenient expression
\begin{equation} \label{eq:Shu at steady state, zR-zL}
S^{hu}(\widetilde{w}_L,\widetilde{w}_R) = \Delta x f\overline{h}\overline{v}-g\overline{h} [z]+ \frac{g\text{Fr} [h]}{4\overline{h}} \frac{ (\Delta x f\overline{v}/g - [z])^2}{(1-\text{Fr})^2}.
\end{equation}
This expression is \emph{a priori} not well-defined when $\text{Fr} = 1$. However, combining \eqref{eq:relation at steady states between hR-hL and zR-zL} and \eqref{eq:Shu at steady state0} allows us to rewrite $S^{hu}(\widetilde{w}_L,\widetilde{w}_R)$ in the form
$$S^{hu}(\widetilde{w}_L,\widetilde{w}_R) = g\overline{h}[h](1-\text{Fr}) + \frac{g}{4\overline{h}}\text{Fr} [h]^3.$$
Therefore $ S^{hu}(\widetilde{w}_L,\widetilde{w}_R)$ admits the following limit when $\text{Fr}$ goes to $1$
\begin{equation} \label{eq:Shu at steady state + Fr = 1}
\lim_{\text{Fr} \to 1} S^{hu}(\widetilde{w}_L,\widetilde{w}_R) = \frac{g}{4\overline{h}} [h]^3.
\end{equation}
At this point, \eqref{eq:Shu at steady state, zR-zL} and \eqref{eq:Shv at steady state} are suitable definitions for the numerical source terms $S^{hu}$ and $S^{hv}$ when a local steady state is considered.
However, let us point out that the limit \eqref{eq:Shu at steady state + Fr = 1} is only valid for a local steady state. Therefore, the right-hand side of \eqref{eq:Shu at steady state, zR-zL} is not well-defined when $\mathcal{E}_{LR}\neq 0$ and $\text{Fr}=1$. To deal with this issue, we add the nonnegative term $\mathcal{E}_{LR}$ to the denominator as follows
\begin{equation} \label{eq:Shu generalised not well-defined}
S^{hu}(\widetilde{w}_L,\widetilde{w}_R) = \Delta x f\overline{h}\overline{v}-g\overline{h} [z]+ \frac{g \text{Fr} [h]}{4\overline{h}} \frac{ (\Delta x f\overline{v}/g - [z])^2}{(1- \text{Fr})^2+\mathcal{E}_{LR}}.
\end{equation}
Indeed, the denominator in \eqref{eq:Shu generalised not well-defined} can only vanish when $\mathcal{E}_{LR}=0$, which means the Riemann data $(\widetilde{w}_L,\widetilde{w}_R)$ is a local steady state. But then the source term can be defined by the limit \eqref{eq:Shu at steady state + Fr = 1} as mentioned before.
To generalise \eqref{eq:Shv at steady state} away from local steady states, we need to define a general discharge $\widetilde{q}$ which coincides with $q$ as soon as a local steady state is considered or as soon as $w_L=w_R=w$. There are several possible definitions, $\widetilde{q} = \overline{hu}$ for instance.
We finally obtain the following definitions for the numerical source terms
\begin{multline} \label{eq:Shu_general_definition}
S^{hu}(\widetilde{w}_L,\widetilde{w}_R) = \\
\begin{cases}
\displaystyle \Delta x f\overline{h}\overline{v}-g\overline{h}[z]+\frac{g \text{Fr} [h]}{4\overline{h}} \frac{ (\Delta x f\overline{v}/g - [z])^2}{(1-\text{Fr})^2 + \mathcal{E}_{LR} } & \text{if } \text{Fr} \neq 1 \text{ or } \mathcal{E}_{LR} \neq 0,\\[1.5em]
\displaystyle \frac{g}{4\overline{h}}[h]^3 & \text{if } \text{Fr} = 1 \text{ and } \mathcal{E}_{LR} = 0.
\end{cases}
\end{multline}
\begin{equation} \label{eq:Shv_general_definition}
S^{hv}(\widetilde{w}_L,\widetilde{w}_R) = -\Delta x f \widetilde{q}.
\end{equation}
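As an illustration, the definitions \eqref{eq:Shu_general_definition} and \eqref{eq:Shv_general_definition} translate into the following Python sketch, where \texttt{eps\_lr} stands for the precomputed indicator $\mathcal{E}_{LR}$ and the choice $\widetilde{q}=\overline{hu}$ is assumed:
\begin{verbatim}
def numerical_source(wL, zL, wR, zR, dx, eps_lr, f=1.0, g=9.81):
    # Numerical source term (0, S^{hu}, S^{hv}).
    hL, huL, hvL = wL
    hR, huR, hvR = wR
    uL, uR = huL/hL, huR/hR
    hbar = 0.5*(hL + hR)
    vbar = 0.5*(hvL/hL + hvR/hR)
    dh, dz = hR - hL, zR - zL
    froude = hbar*abs(uL*uR)/(g*hL*hR)    # discrete Froude number
    if froude == 1.0 and eps_lr == 0.0:
        s_hu = g*dh**3/(4*hbar)
    else:
        s_hu = (dx*f*hbar*vbar - g*hbar*dz
                + g*froude*dh/(4*hbar)
                * (dx*f*vbar/g - dz)**2/((1 - froude)**2 + eps_lr))
    q_tilde = 0.5*(huL + huR)             # one admissible choice for q~
    return 0.0, s_hu, -dx*f*q_tilde
\end{verbatim}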
To conclude, we prove these numerical source terms are consistent.
\begin{lemma}\label{lem:numerical source term definition}
The numerical source term $S(\widetilde{w}_L,\widetilde{w}_R)=(0,S^{hu}(\widetilde{w}_L,\widetilde{w}_R),S^{hv}(\widetilde{w}_L,\widetilde{w}_R))^T$ defined by \eqref{eq:Shu_general_definition} and \eqref{eq:Shv_general_definition} is consistent in the sense of \cref{def:source_term_consistency}.
\end{lemma}
\begin{proof}
The consistency is immediate for $S^{hv}$, as for $S^{hu}$ in the case $\text{Fr} \neq 1$ or $\mathcal{E}_{LR} \neq 0$. In the case $\text{Fr} = 1$ and $\mathcal{E}_{LR} = 0$, let us notice that according to \eqref{eq:equilibrium}, we have $\Delta x f\overline{v}/g = [z]$. Therefore the source term $S^{hu}$ can be written under the form
$$ S^{hu}(\widetilde{w}_L,\widetilde{w}_R) = \Delta x f\overline{h}\overline{v}-g\overline{h}[z] + \frac{g[h]^3}{4\overline{h}},$$
and the consistency follows.
\end{proof}
\subsection{Construction of the approximate Riemann solver} \label{sec:ARS}
The numerical source term being well-defined, we now turn to the construction of a weakly consistent approximate Riemann solver which is fully well-balanced and preserves the positivity of the fluid height.
Let us notice that the well-balanced property of the approximate Riemann solver strongly depends on the choice made in \cref{def:local steady state} to discretise the steady states. However, the following procedure applies to any discretisation of the steady states. This is not the case for the source term discretisation of the previous section, which should be adapted to the chosen steady state discretisation.
We consider a Riemann data $(\widetilde{w}_L,\widetilde{w}_R)\in\widetilde{\Omega}^2$. We choose to build an approximate Riemann solver $\widehat{\mathcal{W}}_R$ with four constant states separated by three discontinuities with respective speeds $\lambda_L < 0$, $\lambda_0 = 0$ and $\lambda_R>0$, as described in \cref{fig:Approximate Riemann solver}. This approximate Riemann solver writes
\begin{equation}\label{eq:ARS}
\widehat{\mathcal{W}}_R\left(\frac{x}{t},\widetilde{w}_L,\widetilde{w}_R\right) =
\begin{cases}
w_L & \text{ if } \frac{x}{t}< \lambda_L, \\
w_L^\star &\text{ if } \lambda_L<\frac{x}{t}<0, \\
w_R^\star &\text{ if } 0<\frac{x}{t}<\lambda_R, \\
w_R &\text{ if } \frac{x}{t}>\lambda_R.
\end{cases}
\end{equation}
This leads to two intermediate states $w_L^\star$ and $w_R^\star$ and thus six unknowns. We therefore look for as many equations as unknowns.
\begin{figure}[h!]
\centering
\begin{tikzpicture}[scale=0.5]
\draw[-] (-6,0) -- (6,0);
\draw[-] (-3,5.5) -- (0,0);
\draw[-] (0,5.5) -- (0,0);
\draw[-] (4.5,5.5) -- (0,0);
\draw[black] (-4.5,2) node {$w_L$};
\draw[black] (-1.,4) node {$w_L^\star$};
\draw[black] (1.5,4) node {$w_R^\star$};
\draw[black] (4.5,2) node {$w_R$};
\draw[black] (-3.,5) node [above left] {$\lambda_L$};
\draw[black] (0,5.5) node [above] {$0$};
\draw[black] (4.5,5.) node [above right] {$\lambda_R$};
\end{tikzpicture}
\caption{Approximate Riemann solver $\widehat{\mathcal{W}}_R$}\label{fig:Approximate Riemann solver}
\end{figure}
In order to simplify the subsequent notations, we introduce the intermediate state of the HLL approximate Riemann solver (see \cite{31HLL1983})
\begin{equation} \label{eq:def_HLL}
w^{HLL} = \frac{\lambda_R w_R - \lambda_L w_L}{\lambda_R - \lambda_L} - \frac{f(w_R) - f(w_L)}{\lambda_R - \lambda_L}.
\end{equation}
Let us notice that its first component can be written as
$$h^{HLL} = \frac{u_L-\lambda_L}{\lambda_R-\lambda_L}h_L + \frac{\lambda_R-u_R}{\lambda_R-\lambda_L}h_R.$$
As a consequence, as soon as the speeds $\lambda_L$ and $\lambda_R$ satisfy
\begin{equation} \label{eq:lambda}
\lambda_L<u_L\qquad\text{and}\qquad \lambda_R>u_R,
\end{equation}
we have $h^{HLL}>0$.
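A minimal Python sketch of the state \eqref{eq:def_HLL}, computed componentwise on $(h,hu,hv)$ and with an illustrative argument layout, reads:
\begin{verbatim}
import numpy as np

def physical_flux(w, g=9.81):
    # Physical flux f(w) of the RSW system, w = (h, hu, hv).
    h, hu, hv = w
    return np.array([hu, hu**2/h + 0.5*g*h**2, hu*hv/h])

def hll_state(wL, wR, lam_L, lam_R, g=9.81):
    # Intermediate state of the HLL solver, assuming lam_L < 0 < lam_R.
    fL, fR = physical_flux(wL, g), physical_flux(wR, g)
    return (lam_R*np.asarray(wR) - lam_L*np.asarray(wL)
            - (fR - fL))/(lam_R - lam_L)
\end{verbatim}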
First, after a standard computation, the weak consistency condition \eqref{eq:weak_consistency} writes
\begin{align}
\label{eq:Riemann solver relations consistency 1} & \lambda_R h_R^\star - \lambda_L h_L^\star = (\lambda_R-\lambda_L) h^{HLL}, \\
\label{eq:Riemann solver relations consistency 2} & \lambda_R h_R^\star u_R^\star - \lambda_L h_L^\star u_L^\star = (\lambda_R - \lambda_L) (hu)^{HLL} + S^{hu}(\widetilde{w}_L,\widetilde{w}_R), \\
\label{eq:Riemann solver relations consistency 3} & \lambda_R h_R^\star v_R^\star - \lambda_L h_L^\star v_L^\star = (\lambda_R - \lambda_L) (hv)^{HLL} + S^{hv}(\widetilde{w}_L,\widetilde{w}_R).
\end{align}
It provides three relations that will ensure the weak consistency of the approximate Riemann solver.
The three missing relations will come from the fully well-balanced constraint. In other words, we have to choose three additional relations such that the solution of the system formed by these relations and equations \eqref{eq:Riemann solver relations consistency 1}, \eqref{eq:Riemann solver relations consistency 2} and \eqref{eq:Riemann solver relations consistency 3} satisfies
$$ w_R^\star = w_R \quad \text{and} \quad w_L^\star = w_L$$
as soon as $(\widetilde{w}_L,\widetilde{w}_R)$ is a local steady state.
We will deal with each variable separately.
First, for the variable $hu$, the simplest choice is to enforce the relation
\begin{equation} \label{eq:link steady state source term intermediate 1}
h_L^\star u_L^\star = h_R^\star u_R^\star = q^\star.
\end{equation}
The system \eqref{eq:Riemann solver relations consistency 2}--\eqref{eq:link steady state source term intermediate 1} can be solved immediately to obtain the intermediate discharge
\begin{equation} \label{eq:qstar}
q^\star = (hu)^{HLL} + \frac{S^{hu}(\widetilde{w}_L,\widetilde{w}_R)}{\lambda_R - \lambda_L}.
\end{equation}
Concerning the variable $h$, let us notice that when $(\widetilde{w}_L,\widetilde{w}_R)$ is a local steady state, we have according to \eqref{eq:Shu at steady state, alpha_LR}
$$\alpha_{LR}(h_R-h_L)=S^{hu}(\widetilde{w}_L,\widetilde{w}_R),$$
where $\alpha_{LR} = g\overline{h} - |u_Lu_R|$. A simple choice for the additional equation would be
$$\alpha_{LR}(h_R^\star - h_L^\star) = S^{hu}(\widetilde{w}_L,\widetilde{w}_R).$$
Together with equation \eqref{eq:Riemann solver relations consistency 1}, this leads to a simple linear system. However this system does not admit a unique solution when $\alpha_{LR}$ vanishes. We suggest the following modification
\begin{equation*}
(\alpha_{LR}^2+\mathcal{E}_{LR})(h_R^\star - h_L^\star) = \alpha_{LR}S^{hu}(\widetilde{w}_L,\widetilde{w}_R).
\end{equation*}
The coefficient $\alpha_{LR}^2+\mathcal{E}_{LR}$ can still vanish, namely when $\mathcal{E}_{LR}=0$ and $\alpha_{LR}=0$. To get rid of this problem, we introduce the following quantity
$$\Delta_{LR}^h =
\begin{cases} \displaystyle\frac{\alpha_{LR}S^{hu}(\widetilde{w}_L,\widetilde{w}_R)}{\alpha_{LR}^2+ \mathcal{E}_{LR}} & \text{if } \mathcal{E}_{LR}\neq 0, \\
h_R-h_L & \text{if } \mathcal{E}_{LR}=0,
\end{cases} $$
and we choose the following additional equation
\begin{equation} h_R^\star - h_L^\star = \Delta_{LR}^h. \label{eq:additional_eq_h}
\end{equation}
Solving the system \eqref{eq:Riemann solver relations consistency 1}--\eqref{eq:additional_eq_h}, we obtain
$$ h_L^\star = h^{HLL} - \frac{\lambda_R}{\lambda_R-\lambda_L} \Delta_{LR}^h, $$
$$ h_R^\star = h^{HLL} - \frac{\lambda_L}{\lambda_R-\lambda_L}\Delta_{LR}^h. $$
Nothing ensures that these intermediate fluid heights are positive. To address this issue, we adapt the cut-off procedure suggested in \cite{53Audusse_Chalons_Ung,34DansacBerthonClainFoucher2016,46Berthon_Blood_flow}. Let us introduce the threshold
\begin{equation}\delta=\min(\varepsilon, h_L,h_R,h^{HLL}) \label{eq:delta},
\end{equation}
where $\varepsilon>0$ is a small parameter.
If $h_L^\star<\delta$, we set $h_L^\star=\delta$, and $h_R^\star$ is modified according to \eqref{eq:Riemann solver relations consistency 1}. In this case, we have
$$h_R^\star = \left(1-\frac{\lambda_L}{\lambda_R}\right)h^{HLL}+\frac{\lambda_L}{\lambda_R} h_L^\star\ge \delta,$$
so both intermediate water heights $h_L^\star$ and $h_R^\star$ are positive. We proceed similarly if $h_R^\star<\delta$. Taking into account this procedure, the intermediate fluid heights write
\begin{equation} \label{eq:hLstar}
h_L^\star = \min\left(\max\left(h^{HLL}-\frac{\lambda_R}{\lambda_R-\lambda_L}\Delta_{LR}^h,\delta\right), \left(1-\frac{\lambda_R}{\lambda_L}\right)h^{HLL}+\frac{\lambda_R}{\lambda_L}\delta\right),
\end{equation}
\begin{equation} \label{eq:hRstar}
h_R^\star = \min\left(\max\left(h^{HLL}-\frac{\lambda_L}{\lambda_R-\lambda_L}\Delta_{LR}^h,\delta\right), \left(1-\frac{\lambda_L}{\lambda_R}\right)h^{HLL}+\frac{\lambda_L}{\lambda_R}\delta\right).
\end{equation}
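For illustration, the formulas \eqref{eq:hLstar}--\eqref{eq:hRstar} can be sketched in Python as follows, where \texttt{delta\_h} stands for $\Delta_{LR}^h$ and \texttt{eps} for the small parameter $\varepsilon$; this is a sketch under the stated conventions, not a reference implementation.
\begin{verbatim}
def intermediate_heights(hL, hR, h_hll, delta_h, lam_L, lam_R, eps=1e-10):
    # Intermediate heights h_L*, h_R* with the cut-off procedure.
    delta = min(eps, hL, hR, h_hll)
    h_L_star = h_hll - lam_R/(lam_R - lam_L)*delta_h
    h_L_star = min(max(h_L_star, delta),
                   (1 - lam_R/lam_L)*h_hll + lam_R/lam_L*delta)
    h_R_star = h_hll - lam_L/(lam_R - lam_L)*delta_h
    h_R_star = min(max(h_R_star, delta),
                   (1 - lam_L/lam_R)*h_hll + lam_L/lam_R*delta)
    return h_L_star, h_R_star
\end{verbatim}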
Finally, we proceed similarly in order to derive the additional relations for the variable $hv$. We introduce the quantity
$$\Delta_{LR}^{v} =
\begin{cases}
\displaystyle\frac{\widetilde{q} \; S^{hv}(\widetilde{w}_L,\widetilde{w}_R)}{{\widetilde{q}}^2+\mathcal{E}_{LR}} & \text{if } \mathcal{E}_{LR}\neq 0, \\
v_R - v_L & \text{if } \mathcal{E}_{LR}=0,
\end{cases} $$
and we enforce the following additional equation
\begin{equation} \label{eq:additional_eq_v}
v_R^\star - v_L^\star = \Delta_{LR}^v.
\end{equation}
The system \eqref{eq:Riemann solver relations consistency 3}--\eqref{eq:additional_eq_v} then leads to
\begin{equation}
v_L^\star = \frac{(hv)^{HLL}}{h^{HLL}} + \frac{1}{(\lambda_R-\lambda_L)h^{HLL}}\left(S^{hv}(\widetilde{w}_L,\widetilde{w}_R)-\lambda_R h_R^\star\Delta_{LR}^v\right), \label{eq:vLstar}
\end{equation}
\begin{equation}
v_R^\star = \frac{(hv)^{HLL}}{h^{HLL}} + \frac{1}{(\lambda_R-\lambda_L)h^{HLL}}\left(S^{hv}(\widetilde{w}_L,\widetilde{w}_R)-\lambda_L h_L^\star\Delta_{LR}^v\right). \label{eq:vRstar}
\end{equation}
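Similarly, a Python sketch of \eqref{eq:vLstar}--\eqref{eq:vRstar}, with \texttt{delta\_v} standing for $\Delta_{LR}^v$ and \texttt{s\_hv} for $S^{hv}(\widetilde{w}_L,\widetilde{w}_R)$, reads:
\begin{verbatim}
def intermediate_v(hv_hll, h_hll, h_L_star, h_R_star, s_hv, delta_v,
                   lam_L, lam_R):
    # Transverse velocities v_L*, v_R* of the intermediate states.
    den = (lam_R - lam_L)*h_hll
    v_L_star = hv_hll/h_hll + (s_hv - lam_R*h_R_star*delta_v)/den
    v_R_star = hv_hll/h_hll + (s_hv - lam_L*h_L_star*delta_v)/den
    return v_L_star, v_R_star
\end{verbatim}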
The approximate Riemann solver is then completely defined by relations \eqref{eq:link steady state source term intermediate 1}, \eqref{eq:qstar}, \eqref{eq:hLstar}, \eqref{eq:hRstar}, \eqref{eq:vLstar} and \eqref{eq:vRstar}. Let us notice that it is automatically weakly consistent thanks to the choice of the first three equations \eqref{eq:Riemann solver relations consistency 1}, \eqref{eq:Riemann solver relations consistency 2} and \eqref{eq:Riemann solver relations consistency 3}. The cut-off procedure does not alter the weak consistency, since equation \eqref{eq:Riemann solver relations consistency 1} is still enforced when it is applied. Moreover, thanks to the cut-off procedure, both intermediate fluid heights are positive as soon as the speeds $\lambda_L$ and $\lambda_R$ satisfy \eqref{eq:lambda}.
We now prove the approximate Riemann solver is also well-balanced.
\begin{lemma} \label{lem:ARS_WB}
The approximate Riemann solver $\widehat{\mathcal{W}}_R$ is well-balanced.
\end{lemma}
\begin{proof}
Let us consider a local steady state $(\widetilde{w}_L,\widetilde{w}_R)$. To prove the result, we only need to show that $w_L^\star=w_L$ and $w_R^\star=w_R$. First, let us assume the cut-off procedure does not apply.
Since $(w_L^\star,w_R^\star)$ is defined as the unique solution of the system of equations \eqref{eq:Riemann solver relations consistency 1}, \eqref{eq:Riemann solver relations consistency 2}, \eqref{eq:Riemann solver relations consistency 3}, \eqref{eq:link steady state source term intermediate 1}, \eqref{eq:additional_eq_h}, \eqref{eq:additional_eq_v}, it is sufficient to prove that $(w_L,w_R)$ is solution of this system.
Since we have $\mathcal{E}_{LR}=0$, it is immediate that $(w_L,w_R)$ is solution of \eqref{eq:link steady state source term intermediate 1}, \eqref{eq:additional_eq_h}, \eqref{eq:additional_eq_v}. The approximate solver being weakly consistent and $(\widetilde{w}_L,\widetilde{w}_R)$ being a local steady state, equation \eqref{eq:weak_consistency} enforces
$$ S(\widetilde{w}_L,\widetilde{w}_R)=f(w_R)-f(w_L),$$
so the intermediate state of the HLL solver defined by \eqref{eq:def_HLL} satisfies
$$(\lambda_R-\lambda_L)w^{HLL}= \lambda_R w_R - \lambda_L w_L - S(\widetilde{w}_L,\widetilde{w}_R).$$
As a consequence, equations \eqref{eq:Riemann solver relations consistency 1}, \eqref{eq:Riemann solver relations consistency 2} and \eqref{eq:Riemann solver relations consistency 3} rewrite
\begin{align*}
& \lambda_R h_R^\star - \lambda_L h_L^\star = \lambda_R h_R - \lambda_L h_L, \\
& \lambda_R h_R^\star u_R^\star - \lambda_L h_L^\star u_L^\star = \lambda_R h_R u_R - \lambda_L h_L u_L, \\
& \lambda_R h_R^\star v_R^\star - \lambda_L h_L^\star v_L^\star = \lambda_R h_R v_R - \lambda_L h_L v_L.
\end{align*}
We deduce $(w_L,w_R)$ is a solution of these equations and thus $w_L^\star=w_L$ and $w_R^\star=w_R$.
Finally, we check that the cut-off procedure does not apply in this case: thanks to the definition \eqref{eq:delta} of $\delta$, the intermediate fluid heights computed before the cut-off procedure satisfy $h_L^\star=h_L\ge\delta$ and $h_R^\star=h_R\ge\delta$.
\end{proof}
The approximate Riemann solver $\widehat{\mathcal{W}}_R$ thus satisfies all the required properties.
\subsection{The final scheme}\label{sec:the_final_scheme}
We summarize in this section the full scheme and its properties.
\begin{theorem}\label{thm:first-order_scheme}
The approximate Riemann solver \eqref{eq:ARS} where the intermediate states are given by \eqref{eq:link steady state source term intermediate 1}, \eqref{eq:qstar}, \eqref{eq:hLstar}, \eqref{eq:hRstar}, \eqref{eq:vLstar} and \eqref{eq:vRstar} leads to a Godunov-type scheme that can be written under the form \eqref{eq:godunov_type_scheme}. The numerical flux
$$F(\widetilde{w}_L,\widetilde{w}_R)= \left( F^{h}(\widetilde{w}_L,\widetilde{w}_R), F^{hu}(\widetilde{w}_L,\widetilde{w}_R), F^{hv}(\widetilde{w}_L,\widetilde{w}_R) \right)^T,$$
is given by
\begin{align*}
& F^h(\widetilde{w}_L,\widetilde{w}_R) = \overline{hu} + \frac{\lambda_R}{2}(h_R^\star - h_R) + \frac{\lambda_L}{2}(h_L^\star-h_L),\\
& F^{hu}(\widetilde{w}_L,\widetilde{w}_R) =\overline{hu^2+\frac{gh^2}{2}} + \frac{\lambda_R}{2}(h_R^\star u_R^\star - h_Ru_R) +\frac{\lambda_L}{2}(h_L^\star u_L^\star - h_L u_L), \\
& F^{hv}(\widetilde{w}_L,\widetilde{w}_R) = \overline{huv} + \frac{\lambda_R}{2}(h_R^\star v_R^\star - h_Rv_R) + \frac{\lambda_L}{2} (h_L^\star v_L^\star-h_Lv_L),
\end{align*}
and the numerical source term
$$S(\widetilde{w}_L,\widetilde{w}_R)= \left( 0, S^{hu}(\widetilde{w}_L,\widetilde{w}_R), S^{hv}(\widetilde{w}_L,\widetilde{w}_R) \right)^T$$
is defined by \eqref{eq:Shu_general_definition} and \eqref{eq:Shv_general_definition}. \\
Under the CFL restriction \eqref{eq:CFL_order1} and if the speeds $\lambda_L$ and $\lambda_R$ are chosen according to \eqref{eq:lambda}, this scheme is fully well-balanced and preserves the positivity of $h$.
\end{theorem}
\begin{proof}
The expression of the numerical flux is obtained from a straightforward computation in \eqref{eq:numerical flux general def}.
Assume that $w_i^n$ belongs to $\Omega$ for all $i \in \mathbb{Z}$. Since $h^{HLL} > 0$, the cut-off procedure ensures $h_L^\star > 0$ and $h_R^\star > 0$. Thus the variable $h$ remains positive in the approximate Riemann solver. According to \cref{lem:robust}, the scheme preserves the positivity of $h$.
The well-balanced property of the scheme is a direct consequence of \cref{lem:WB,lem:ARS_WB}.
\end{proof}
\section{Second-order scheme} \label{sec:secondorder}
In this section, we propose to improve the scheme precision using the MUSCL method. Our goal is to build a second-order scheme in space that preserves the good properties of the first-order one, namely the positivity of $h$ and the well-balanced property. The second-order in time is obtained with the usual Runge-Kutta method. We do not describe it here, but the reader can refer to \cite{shu1988efficient,gottlieb1998total,08Ordre2Berthon2001}.
We start with a description of the standard MUSCL method, and we explain why it is not adapted to obtain the fully well-balanced property for the RSW system. Indeed, no conservative reconstruction can preserve the complex structure of all the steady states defined by \eqref{eq:steadystates}. We explain in \cref{sec:Fully well-balanced recovering} how to recover the fully well-balanced property by adapting the ideas proposed in \cite{47Berthon_Dansac_Manning_Friction} and \cite{46Berthon_Blood_flow} to our generalised MUSCL scheme.
Up to this point, for the sake of conciseness, we did not mention explicitly the dependence on $\Delta x$ of the numerical fluxes and source terms, nor of the definition of local steady states. However, in the following, we will consider half-cells, which requires making these dependencies explicit, in particular to determine whether the scheme is fully well-balanced. It will also be useful to consider the $\Delta x$ that appears in the approximate Riemann solver $\widehat{\mathcal{W}}_R$ and the $\Delta x$ that appears in the numerical scheme definition \eqref{eq:godunov_type_scheme} as two separate parameters. For $d > 0$, the approximate Riemann solver is given by \begin{equation*}
\widehat{\mathcal{W}}_R\left(\frac{x}{t},\widetilde{w}_L,\widetilde{w}_R ,d\right) =
\begin{cases}
w_L & \text{ if } \frac{x}{t}< \lambda_L, \\
w_L^\star(d) & \text{ if } \lambda_L<\frac{x}{t}<0, \\
w_R^\star(d) & \text{ if } 0<\frac{x}{t}<\lambda_R, \\
w_R & \text{ if } \frac{x}{t}>\lambda_R.
\end{cases}
\end{equation*}
According to \cref{sec:principle}, the resulting Godunov-type scheme writes \begin{multline}\label{eq:godunov_type_scheme_d}
w_i^{n+1} = w_i^n - \frac{\Delta t}{\Delta x}\left( F \left(\widetilde{w}_i^n,\widetilde{w}_{i+1}^n,d \right) - F \left(\widetilde{w}_{i-1}^n,\widetilde{w}_{i}^n, d \right) \right) \\
+ \frac{\Delta t}{2\Delta x} \left( S \left(\widetilde{w}_{i-1}^n,\widetilde{w}_i^n,d \right) + S \left(\widetilde{w}_i^n,\widetilde{w}_{i+1}^n,d \right) \right),
\end{multline}
provided the CFL condition \eqref{eq:CFL_order1} is satisfied. Notice that for $d = \Delta x$, we recover the fully well-balanced scheme derived in \cref{sec:approximate_Riemann_solver}. Moreover, we establish the following lemma, which will be useful for the forthcoming proof.
\begin{lemma} \label{lem:schema_d_robust}
Under the CFL condition \eqref{eq:CFL_order1} and if the wave speeds $\lambda_L$ and $\lambda_R$ of the approximate Riemann solver satisfy \eqref{eq:lambda}, then the Godunov-type scheme \eqref{eq:godunov_type_scheme_d} preserves the positivity of $h$, for all $d > 0$.
\end{lemma}
\begin{proof}
Independently of the parameter $d$, the cut-off procedure leads to positive intermediate states $h_L^\star$ and $h_R^\star$ according to definitions \eqref{eq:hLstar}--\eqref{eq:hRstar}, since the wave speeds $\lambda_L$ and $\lambda_R$ satisfy the condition \eqref{eq:lambda}. Then, we apply \cref{lem:robust} to conclude that the scheme \eqref{eq:godunov_type_scheme_d} preserves the positivity of $h$ for all $d > 0$.
\end{proof}
\subsection{Standard MUSCL method}\label{sec:standard_MUSCL_method}
The main idea of the MUSCL method is to reach second-order by considering a linear reconstruction of the solution on each cell, instead of a constant one. We recall here the standard reconstruction procedure.
Starting from a piecewise approximation at time $t^n$,
$$\widetilde{w}_{\Delta x}(x,t^n) = \widetilde{w}_i^n \text{ if } x \in K_i,$$
we reconstruct a linear approximation on each cell
$$\widehat{w}_{\Delta x}(x,t^n) = \sigma_i^n(\widetilde{w}) (x-x_i) + \widetilde{w}_i^n, \text{ if } x \in K_i, $$
where $\sigma_i^n(\widetilde{w})$ is a slope vector to determine. Let us emphasize that this procedure includes the topography.
The reconstructed states correspond to the value of $\widehat{w}_{\Delta x}$ at the interfaces of each cell and read as
\begin{equation}\label{eq:reconstruction}
\widetilde{w}_i^{n,\pm} = \widetilde{w}_i^n \pm \frac{\Delta x}{2} \sigma_i^n(\widetilde{w}).
\end{equation}
To avoid spurious oscillations, it is well-known that a limitation procedure must be applied to the slopes.
In this paper, we consider the minmod limiter function defined by
$$\mathrm{minmod}(\sigma_L,\sigma_R) =
\begin{cases}
\min(\sigma_L,\sigma_R) & \text{if } \sigma_L>0 \text{ and } \sigma_R > 0, \\
\max(\sigma_L,\sigma_R) & \text{if } \sigma_L<0 \text{ and } \sigma_R < 0, \\
0 & \text{otherwise}.
\end{cases}
$$
Then the slope vector is defined by $\sigma_i^n (\widetilde{w}) = \mathrm{minmod} \left( \frac{\widetilde{w}_i^n - \widetilde{w}_{i-1}^n}{\Delta x}, \frac{\widetilde{w}_{i+1}^n - \widetilde{w}_i^n}{\Delta x} \right)$. Other limiters can be considered, see \cite{14Toro2009,53Leveque2003} for instance. As usual, we enforce an additional limitation procedure on the first component of the slope vector $\sigma_i^n(\widetilde{w})$ in order to ensure that the fluid heights $h_i^{n,\pm}$ remain positive.
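For completeness, a Python sketch of the minmod limiter and of the resulting limited slopes is given below; the per-cell layout $(h,hu,hv,z)$ is an assumption of this sketch, and the additional positivity limitation mentioned above is not included.
\begin{verbatim}
import numpy as np

def minmod(a, b):
    # Componentwise minmod limiter.
    return np.where((a > 0) & (b > 0), np.minimum(a, b),
                    np.where((a < 0) & (b < 0), np.maximum(a, b), 0.0))

def limited_slopes(wt, dx):
    # wt: array of shape (N, 4) storing (h, hu, hv, z) in each cell;
    # returns the limited slopes sigma_i (zero on the boundary cells).
    sigma = np.zeros_like(wt)
    sigma[1:-1] = minmod((wt[1:-1] - wt[:-2])/dx,
                         (wt[2:] - wt[1:-1])/dx)
    return sigma
\end{verbatim}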
The standard MUSCL extension is obtained as follows.
For a first-order scheme under the form \eqref{eq:godunov_type_scheme},
the second-order scheme is defined by
\begin{multline} \label{eq:second-order scheme not WB}
w_i^{n+1} = w_i^n - \frac{\Delta t}{\Delta x} (F(\widetilde{w}_i^{n,+},\widetilde{w}_{i+1}^{n,-},d) - F(\widetilde{w}_{i-1}^{n,+},\widetilde{w}_i^{n,-},d) ) \\
+ \frac{\Delta t}{2\Delta x} ( S(\widetilde{w}_{i-1}^{n,+},\widetilde{w}_i^{n,-},d) + S_{c}(\widetilde{w}_i^{n,-},\widetilde{w}_i^{n,+},d) + S(\widetilde{w}_i^{n,+},\widetilde{w}_{i+1}^{n,-},d) ),
\end{multline}
where $S_{c}(\widetilde{w}_L,\widetilde{w}_R,d)$ is a centered source term, added to take into account the source term when its jumps at interfaces are small (see \cite{15BouchutLIVRE}). Several possibilities exist to define it.
In the present work, it will be determined naturally by considering the MUSCL scheme \eqref{eq:second-order scheme not WB} as a convex combination between two first-order schemes applied on the reconstructed states and on half-cells, as described in \cref{fig:MUSCL}. More precisely, we define
\begin{multline*}
w_i^{n+1,+} = w_i^{n,+} - \frac{\Delta t}{\Delta x/2} \left( F \left(\widetilde{w}_i^{n,+},\widetilde{w}_{i+1}^{n,-},\frac{\Delta x}{2} \right) - F \left(\widetilde{w}_{i}^{n,-},\widetilde{w}_i^{n,+}, \frac{\Delta x}{2} \right) \right) \\
+ \frac{\Delta t}{\Delta x} \left( S \left(\widetilde{w}_{i}^{n,-},\widetilde{w}_i^{n,+},\frac{\Delta x}{2}\right) + S \left(\widetilde{w}_i^{n,+},\widetilde{w}_{i+1}^{n,-},\frac{\Delta x}{2} \right) \right),
\end{multline*}
and
\begin{multline*}
w_i^{n+1,-} = w_i^{n,-} - \frac{\Delta t}{\Delta x/2} \left( F \left(\widetilde{w}_i^{n,-},\widetilde{w}_{i}^{n,+},\frac{\Delta x}{2} \right) - F \left(\widetilde{w}_{i-1}^{n,+},\widetilde{w}_{i}^{n,-}, \frac{\Delta x}{2} \right) \right) \\
+ \frac{\Delta t}{\Delta x} \left( S \left(\widetilde{w}_{i-1}^{n,+},\widetilde{w}_i^{n,-},\frac{\Delta x}{2}\right) + S \left(\widetilde{w}_i^{n,-},\widetilde{w}_{i}^{n,+},\frac{\Delta x}{2} \right) \right).
\end{multline*}
Taking the average of these two states, we obtain
\begin{multline}\label{eq:second-order scheme cvx}
w_i^{n+1} = \frac{w_i^{n,-}+w_i^{n,+}}{2} - \frac{\Delta t}{\Delta x}\left( F \left(\widetilde{w}_i^{n,+},\widetilde{w}_{i+1}^{n,-},\frac{\Delta x}{2} \right) - F \left(\widetilde{w}_{i-1}^{n,+},\widetilde{w}_{i}^{n,-}, \frac{\Delta x}{2} \right) \right) \\
+ \frac{\Delta t}{2\Delta x} \left( S \left(\widetilde{w}_{i-1}^{n,+},\widetilde{w}_i^{n,-},\frac{\Delta x}{2}\right) + 2S \left(\widetilde{w}_{i}^{n,-},\widetilde{w}_i^{n,+},\frac{\Delta x}{2} \right) + S \left(\widetilde{w}_i^{n,+},\widetilde{w}_{i+1}^{n,-},\frac{\Delta x}{2} \right) \right).
\end{multline}
Assuming the reconstruction is conservative, namely $w_i^n=\frac{w_i^{n,-}+w_i^{n,+}}{2}$, we notice that the scheme \eqref{eq:second-order scheme cvx} can be written under the form \eqref{eq:second-order scheme not WB} with $d = \frac{\Delta x}{2}$ and by defining the centered source term as
$$ S_{c}(\widetilde{w}_i^{n,-},\widetilde{w}_i^{n,+},d) = 2 S\left(\widetilde{w}_i^{n,-},\widetilde{w}_i^{n,+},\frac{\Delta x}{2}\right).$$
An advantage of this procedure is that the MUSCL scheme \eqref{eq:second-order scheme cvx} automatically preserves the positivity of $h$ as soon as the associated first-order scheme does, up to a half CFL restriction.
\begin{figure}
\centering
\begin{tikzpicture}[scale=0.8]
\draw[<->] (-6,0) -- (6,0);
\draw[->] (-6.4,1) -- (-6.4,3);
\draw[-] (-2.5,0.5) -- (-2.5,3.5);
\draw[-] (2.5,0.5) -- (2.5,3.5);
\draw[dashed] (0,1) -- (0,3);
\draw[dashed] (-5,1) -- (-5,3);
\draw[dashed] (5,1) -- (5,3);
\draw[-] (2.5,-0.1) -- (2.5,0.1);
\draw[-] (-2.5,-0.1) -- (-2.5,0.1);
\draw[-] (5,-0.1) -- (5,0.1);
\draw[-] (-5,-0.1) -- (-5,0.1);
\draw[-] (0,-0.1) -- (0,0.1);
\draw[-] (-6,1) -- (6,1);
\draw[-] (-6,3) -- (6,3);
\draw[black] (-2.5,0) node [below] {$x_{i-1/2}$};
\draw[black] (2.5,0) node [below] {$x_{i+1/2}$};
\draw[black] (-6.4,1) node [below] {$t^n$};
\draw[black] (-6.4,3) node [above] {$t^{n+1}$};
\draw[black] (-5,1) node [below] {$w_{i-1}^n$};
\draw[black] (0,1) node [below] {$w_{i}^n$};
\draw[black] (5,1) node [below] {$w_{i+1}^n$};
\draw[black] (-3.75,1) node [above] {$w_{i-1}^{n,+}$};
\draw[black] (-1.25,1) node [above] {$w_{i}^{n,-}$};
\draw[black] (1.25,1) node [above] {$w_{i}^{n,+}$};
\draw[black] (3.75,1) node [above] {$w_{i+1}^{n,-}$};
\draw[black] (-1.25,3) node [below] {$w_{i}^{n+1,-}$};
\draw[black] (1.25,3) node [below] {$w_{i}^{n+1,+}$};
\draw[black] (0,3) node [above] {$w_{i}^{n+1}$};
\end{tikzpicture}
\caption{MUSCL second-order scheme}
\label{fig:MUSCL}
\end{figure}
However, the well-balanced property is not reached as easily. Indeed, in order for the MUSCL scheme \eqref{eq:second-order scheme cvx} to be well-balanced, the reconstruction would have to satisfy for any discrete steady state $(\widetilde{w}_i^n)_{i\in\mathbb{Z}}$, $$\mathcal{E}(\widetilde{w}_i^{n,+},\widetilde{w}_{i+1}^{n,-},\Delta x/2) = \mathcal{E}(\widetilde{w}_i^{n,-},\widetilde{w}_i^{n,+},\Delta x/2)=0, \text{ for all } i \in \mathbb{Z}.$$
Unfortunately, we cannot provide such a reconstruction in our case. Indeed, we have to reconstruct four variables, including the conservative ones $h,hu,hv$ which leaves only one free variable to reconstruct. Moreover, according to definition \eqref{eq:steady states discretisation}, the moving steady states involve three expressions among which two are not conservative quantities. Therefore, we would need to reconstruct two free variables in order to preserve steady states.
We propose in the next section to modify the MUSCL method and the reconstruction to get around this problem and recover a fully well-balanced second-order scheme.
\subsection{Fully well-balanced recovering}\label{sec:Fully well-balanced recovering}
We suggest a modification based on an idea introduced in \cite{47Berthon_Dansac_Manning_Friction} and \cite{46Berthon_Blood_flow}. The main principle is to use the second-order scheme \eqref{eq:second-order scheme not WB} far from steady states and to recover the first-order scheme \eqref{eq:godunov_type_scheme} near a steady state, which guarantees that the scheme is well-balanced.
The difficulty lies in the definition of being far from/close to a steady state.
For this purpose, we consider a smooth increasing function $\theta$, valued in $[0,1]$ and such that $\theta(0)=0$ and $\theta(x)\approx 1$ far from $0$. We choose the following function
$$\theta(x) = \frac{x^2}{x^2 + \Delta x^2}.$$
We set $\theta_i^n = \theta(\mathcal{E}_i^n)$, where $\mathcal{E}_i^n = \mathcal{E}(\widetilde{w}_{i-1}^n,\widetilde{w}_i^n,\Delta x) + \mathcal{E}(\widetilde{w}_i^n,\widetilde{w}_{i+1}^n,\Delta x)$ detects if both couples $(\widetilde{w}_{i-1}^n,\widetilde{w}_i^n)$ and $(\widetilde{w}_i^n,\widetilde{w}_{i+1}^n)$ are local steady states simultaneously.
The reconstructed states are now defined as a convex combination between the linear reconstructed states and the first-order states
\begin{equation}\label{eq:second-order_reconstruction}
\widetilde{w}_i^{n,\pm} = (1-\theta_i^n) \widetilde{w}_i^n + \theta_i^n \left( \widetilde{w}_i^n \pm \frac{\Delta x}{2} \sigma_i^n(\widetilde{w}) \right) = \widetilde{w}_i^n \pm \theta_i^n \frac{\Delta x}{2} \sigma_i^n(\widetilde{w}).
\end{equation}
This reconstruction amounts to considering an additional limitation that involves the steady state detector $\theta_i^n$.
For a discrete steady state, we have $\theta_i^n = 0$ and we recover the first-order states $\widetilde{w}_i^{n,\pm} = \widetilde{w}_i^n$. Far from steady states and for a smooth solution, a direct computation shows that $\widetilde{w}_i^{n,\pm} = \widetilde{w}_i^n \pm \frac{\Delta x}{2} \sigma_i^n(\widetilde{w}) + O(\Delta x^3)$ when $\Delta x$ tends to $0$, which means the perturbation added to the usual second-order reconstruction is small enough to preserve the expected order.
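A Python sketch of the detector $\theta_i^n$ and of the blended reconstruction \eqref{eq:second-order_reconstruction} could read as follows; the argument names are illustrative.
\begin{verbatim}
def theta(e, dx):
    # Steady state detector: close to 0 near a discrete steady state,
    # close to 1 far from it.
    return e**2/(e**2 + dx**2)

def blended_reconstruction(wt_i, sigma_i, theta_i, dx):
    # Convex combination between the first-order state and the
    # MUSCL-reconstructed states.
    wt_minus = wt_i - theta_i*0.5*dx*sigma_i
    wt_plus = wt_i + theta_i*0.5*dx*sigma_i
    return wt_minus, wt_plus
\end{verbatim}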
Next, we define the scheme as
\begin{multline}\label{eq:second-order scheme WB}
w_i^{n+1} = w_i^n - \frac{\Delta t}{\Delta x}\left( F \left(\widetilde{w}_i^{n,+},\widetilde{w}_{i+1}^{n,-},\Delta x_1 \right) - F \left(\widetilde{w}_{i-1}^{n,+},\widetilde{w}_{i}^{n,-}, \Delta x_1 \right) \right) \\
+ \frac{\Delta t}{2\Delta x} \left( S \left(\widetilde{w}_{i-1}^{n,+},\widetilde{w}_i^{n,-},\Delta x_1\right) + 2S \left(\widetilde{w}_i^{n,-},\widetilde{w}_{i}^{n,+},\Delta x_2\right) + S \left(\widetilde{w}_i^{n,+},\widetilde{w}_{i+1}^{n,-},\Delta x_1 \right) \right),
\end{multline}
where the coefficients $\Delta x_1$ and $\Delta x_2$ have to be adapted, depending on whether we apply the first-order or the second-order scheme. Far from steady states, we need $\Delta x_1 = \Delta x_2 = \frac{\Delta x}{2}$ in \eqref{eq:second-order scheme WB} to recover the second-order scheme \eqref{eq:second-order scheme not WB}. For a discrete steady state, the scheme reads as
\begin{multline}
w_i^{n+1} = w_i^n - \frac{\Delta t}{\Delta x}\left( F \left(\widetilde{w}_i^n,\widetilde{w}_{i+1}^n,\Delta x_1 \right) - F \left(\widetilde{w}_{i-1}^n,\widetilde{w}_{i}^n, \Delta x_1 \right) \right) \\
+ \frac{\Delta t}{2\Delta x} \left( S \left(\widetilde{w}_{i-1}^n,\widetilde{w}_i^n,\Delta x_1\right) + 2S \left(\widetilde{w}_i^n,\widetilde{w}_{i}^n,\Delta x_2\right) + S \left(\widetilde{w}_i^n,\widetilde{w}_{i+1}^n,\Delta x_1 \right) \right).
\end{multline}
We notice that $S(\widetilde{w},\widetilde{w},0) = 0$ according to the source term consistency \eqref{eq:general_source_consistency}. Therefore, we have to set $\Delta x_1 = \Delta x$ and $\Delta x_2 = 0$ to recover the first-order scheme \eqref{eq:godunov_type_scheme}.
In order to satisfy both these requirements, coefficients $\Delta x_1$ and $\Delta x_2$ are set as convex combinations as follows
\begin{equation} \label{eq:def_dx1_dx2}
\Delta x_1 = \Delta x \left( 1-\frac{\theta_i^n}{2} \right)\quad\text{and}\quad \Delta x_2 = \theta_i^n \frac{\Delta x}{2}.
\end{equation}
We prove in the following theorem that the resulting second-order scheme is fully well-balanced, and that it preserves the positivity of $h$ under the classical second-order CFL restriction.
\begin{theorem}\label{thm:second-order_scheme}
Under the CFL condition
$$ \frac{\Delta t}{\Delta x} \max_{i \in \mathbb{Z}} \left( \vert \lambda^\pm(\widetilde{w}_i^{n,-},\widetilde{w}_i^{n,+})\vert, \vert \lambda^\pm(\widetilde{w}_i^{n,+},\widetilde{w}_{i+1}^{n,-}) \vert \right) \leq \frac{1}{4} ,$$
and if the speeds $\lambda_L$ and $\lambda_R$ of the approximate Riemann solver satisfy the condition \eqref{eq:lambda}, then the second-order scheme \eqref{eq:second-order_reconstruction}-\eqref{eq:second-order scheme WB}-\eqref{eq:def_dx1_dx2} is fully well-balanced and preserves the positivity of $h$.
\end{theorem}
\begin{proof} First, we consider a discrete steady state $(\widetilde{w}_i^n)_{i \in \mathbb{Z}}$. By definition, we have $\theta_i^n = 0$ for all $i \in \mathbb{Z}$. Hence, the scheme \eqref{eq:second-order_reconstruction}-\eqref{eq:second-order scheme WB}-\eqref{eq:def_dx1_dx2} gives
\begin{multline}
w_i^{n+1} = w_i^n - \frac{\Delta t}{\Delta x}\left( F \left(\widetilde{w}_i^n,\widetilde{w}_{i+1}^n,\Delta x \right) - F \left(\widetilde{w}_{i-1}^n,\widetilde{w}_{i}^n, \Delta x \right) \right) \\
+ \frac{\Delta t}{2\Delta x} \left( S \left(\widetilde{w}_{i-1}^n,\widetilde{w}_i^n,\Delta x\right) + S \left(\widetilde{w}_i^n,\widetilde{w}_{i+1}^n,\Delta x \right) \right),
\end{multline}
which is nothing but the fully well-balanced first-order scheme \eqref{eq:godunov_type_scheme}.
Now we prove the positivity-preserving property. We assume that $h_i^n $ is positive for all $i \in \mathbb{Z}$.
The update of variable $h$ with the scheme \eqref{eq:second-order_reconstruction}-\eqref{eq:second-order scheme WB}-\eqref{eq:def_dx1_dx2} writes
\begin{align*}
h_i^{n+1} & = h_i^n - \frac{\Delta t}{\Delta x}\left( F^h \left(\widetilde{w}_i^{n,+},\widetilde{w}_{i+1}^{n,-},\Delta x_1 \right) - F^h \left(\widetilde{w}_{i-1}^{n,+},\widetilde{w}_{i}^{n,-}, \Delta x_1 \right) \right)\\
& = \frac{1}{2}\left( h_i^{n,-} - \frac{\Delta t}{\Delta x/2} \left( F^h \left(\widetilde{w}_i^{n,-},\widetilde{w}_{i}^{n,+},\Delta x_1 \right) - F^h \left(\widetilde{w}_{i-1}^{n,+},\widetilde{w}_{i}^{n,-},\Delta x_1 \right) \right) \right) \\
& \quad + \frac{1}{2} \left( h_i^{n,+} - \frac{\Delta t}{\Delta x/2} \left( F^h \left(\widetilde{w}_i^{n,+},\widetilde{w}_{i+1}^{n,-},\Delta x_1 \right) - F^h \left(\widetilde{w}_{i}^{n,-},\widetilde{w}_i^{n,+}, \Delta x_1 \right) \right) \right).
\end{align*}
Then $h_i^{n+1}$ is a convex combination of first-order schemes applied on half-cells with parameter $d = \Delta x_1$. As proved in \cref{lem:schema_d_robust}, the first-order scheme preserves the positivity of $h$ independently of the value of the parameter $d$. Therefore, we conclude $h_i^{n+1} > 0 $ for all $i \in \mathbb{Z}$.
\end{proof}
\section{Numerical results}\label{sec:numerical_results}
This section is devoted to numerical experiments. For the sake of simplicity, the initial discretisation will be defined as
$$w_i^0 = w_0(x_i).$$
For a continuous steady solution, the initial discretisation may exactly satisfy \cref{def:approx at steady state} of a discrete steady state. In this case, both our first-order and second-order schemes were proved to preserve the initial condition. This will be illustrated in \cref{testcase1}.
However, it is also possible that the initial discretisation of a continuous steady state does not lead to a discrete steady state according to \cref{def:approx at steady state}. The behaviour of our numerical schemes in such a case will be investigated in \cref{testcase2}.
In order to measure how close a given discretisation $(w_i^n)_{i\in \mathbb{Z}}$ at time $t^n$ is to a discrete steady state, we will use the steady state distance $$ \mathcal{E}^n_{\infty,j} = \max_{1 \leq i \leq N} \mathcal{E}(\widetilde{w}_i^n,\widetilde{w}_{i+1}^n,\Delta x),$$ where $j=1$ for the first-order scheme and $j=2$ for the second-order scheme.
In \cref{testcase3}, we test the long-time convergence towards a steady state on a topography with a bump, using the same distance $\mathcal{E}_{\infty,j}^n$.
Finally, in \cref{testcase4}, we consider a particular solution constant in space, but not in time, and for which we compute the errors in time.
\subsection{Moving steady state} \label{testcase1}
We consider here a simple moving steady state.
As initial data, we take (see \cref{fig:MovingSS})
$$h_0(x) = e^{2 x}, \ u_0(x) = e^{-2 x} \text{ and } v_0(x) = -fx,$$
and the topography is given by
$$z(x) = -\frac{1}{2}f^2 x^2 - e^{2 x} - \frac{1}{2} e^{-4 x}.$$
We compute this test on the domain $[0,1]$ with $N=200$ cells and the parameters $f=g=1$.
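For reproducibility, the initialisation of this test case can be sketched in Python as follows, the evaluation being performed at the cell centers $x_i$:
\begin{verbatim}
import numpy as np

def initial_data(x, f=1.0):
    # Moving steady state test case: initial data and topography.
    h = np.exp(2*x)
    u = np.exp(-2*x)
    v = -f*x
    z = -0.5*f**2*x**2 - np.exp(2*x) - 0.5*np.exp(-4*x)
    return h, u, v, z
\end{verbatim}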
The initial discretisation is a discrete steady state in the sense of \cref{def:approx at steady state}. Indeed, the steady state distance at time $t^0 = 0$ is
$$ \mathcal{E}_{\infty,1}^0 = \mathcal{E}_{\infty,2}^0 = 8.87 \times 10^{-16}.$$ At final time $T_{\max} = 0.5$, the steady state is still preserved by both the first-order and second-order schemes, even though small computational errors have accumulated. Indeed, the computation of the steady state distance at the end of the simulations gives
$$ \mathcal{E}_{\infty,1}^{T_{\max}} = 5.19 \times 10^{-14} \quad\text{and}\quad \mathcal{E}_{\infty,2}^{T_{\max}} = 8.86 \times 10^{-15}.$$
\begin{figure}
\centering
\includegraphics[scale=0.75]{SSWV_h+z.eps}
\caption{Initial free surface $h+z$ for the moving steady state}\label{fig:MovingSS}
\end{figure}
\subsection{Geostrophic steady state} \label{testcase2}
Next, we test the numerical schemes on a geostrophic steady state introduced in \cite{29ChertockDudzinski2018}. The computational domain is $[-5,5]$ with a flat topography ($z \equiv 0$). We set $f=10$, $g=1$, and we consider
$$h_0(x)=\frac{2}{g}-e^{-x^2}, u_0(x)=0, v_0(x)=\frac{2g}{f} xe^{-x^2},$$
as initial condition, which is a continuous steady state, see \cref{fig:GeosSS_Init}. The initial data discretisation is not exactly a discrete steady state, since we have for $N=200$ discretisation points $$ \mathcal{E}_{\infty,1}^0 = \mathcal{E}_{\infty,2}^0 = 4.06 \times 10^{-5}.$$
Therefore, \cref{thm:first-order_scheme,thm:second-order_scheme} do not guarantee the behaviour of the numerical schemes on this test case. However, at final time $T_{\max} = 200$, the first-order and second-order schemes lead to the steady state distances
$$\mathcal{E}_{\infty,1}^{T_{\max}}= 1.12 \times 10^{-7} \quad \text{and}\quad \mathcal{E}_{\infty,2}^{T_{\max}} = 2.53 \times 10^{-12}.$$
Both schemes seem to converge numerically to the steady state as $t$ goes to infinity.
Let us notice that according to \cref{sec:Fully well-balanced recovering}, the second-order scheme in space gives back the first-order scheme when the approximation is detected to be close to a steady state. The observed difference between the steady state distances at final time is due to the order of the time scheme.
\begin{figure}
\centering
\includegraphics[scale=0.75]{GeosSS_Init_h.eps}
\includegraphics[scale=0.75]{GeosSS_Init_hv.eps}
\caption{Initial data for the geostrophic steady state}\label{fig:GeosSS_Init}
\end{figure}
Let us now check the convergence of both schemes when $\Delta x$ tends to $0$.
We define the $L_1$ discrete error in space at time $t^n $ between the exact solution $w_0$ and the numerical approximation by
$$E^n = \Delta x \sum_{i=1}^N \vert w_0(x_i) - w_i^{n} \vert.$$
We present in \cref{tab:GeoSS_error} the discrete errors for the variables $h$ and $hv$ at final time $T_{\max}$ for the first-order and second-order schemes. As one can see, both schemes converge to the steady state and reach second-order accuracy in space, whereas only first-order accuracy is expected for the first-order scheme.
This behaviour can be formally explained by the fact that the initial discretisation satisfies the local steady state \cref{def:local steady state} up to second-order. Indeed, a straightforward expansion shows that
the initial discretisation satisfies
$$ g(h_0(x+\Delta x) - h_0(x)) - \Delta x f \frac{v_0(x)+v_0(x+\Delta x)}{2}
= O(\Delta x^2).$$
\begin{table}
\caption{$L^1$ error in space for the geostrophic steady state at time $T_{\max} = 200$}\label{tab:GeoSS_error}
\centering
\subtable[first-order scheme]{
\begin{tabular}{|c|c|c|c|c|}
\hline $N$ & \multicolumn{2}{c|}{$h$ (error / order)} & \multicolumn{2}{c|}{$hv$ (error / order)} \\
\hline 200 & $5.25 \times 10^{-5}$ & & $2.11 \times 10^{-4}$ & \\
\hline 400 & $1.31 \times 10^{-5}$ & 2.00 & $5.30 \times 10^{-5}$ & 1.99 \\
\hline 800 & $3.30 \times 10^{-6}$ & 1.99 & $1.38 \times 10^{-5}$ & 1.94 \\
\hline 1600 & $8.58 \times 10^{-7}$ & 1.94 & $3.73 \times 10^{-6}$ & 1.88 \\
\hline 3200 & $2.30 \times 10^{-7}$ & 1.91 & $1.02 \times 10^{-6}$ & 1.87 \\
\hline 6400 & $6.01 \times 10^{-8}$ & 1.93 & $2.73 \times 10^{-7}$ & 1.90 \\
\hline
\end{tabular}}
\subtable[second-order scheme]{
\begin{tabular}{|c|c|c|c|c|}
\hline $N$ & \multicolumn{2}{c|}{$h$ (error / order)} & \multicolumn{2}{c|}{$hv$ (error / order)} \\
\hline 200 & $5.26 \times 10^{-5}$ & & $2.11 \times 10^{-4}$ & \\
\hline 400 & $1.31 \times 10^{-5}$ & 2.00 & $5.27 \times 10^{-5}$ & 2.00 \\
\hline 800 & $3.29 \times 10^{-6}$ & 2.00 & $1.32 \times 10^{-5}$ & 2.00 \\
\hline 1600 & $8.22 \times 10^{-7}$ & 2.00 & $3.30 \times 10^{-6}$ & 2.00 \\
\hline 3200 & $2.05 \times 10^{-7}$ & 2.00 & $8.25 \times 10^{-7}$ & 2.00 \\
\hline 6400 & $5.14 \times 10^{-8}$ & 2.00 & $2.06 \times 10^{-7}$ & 2.00 \\
\hline
\end{tabular}}
\end{table}
\subsection{Convergence towards a steady flow over a bump} \label{testcase3}
This test case aims to study the convergence towards a steady flow over a bump. It is a classical test for the shallow water equations, adapted to include the Coriolis source term in \cite{36bookZeitlinBouchut2007}. The topography is given by
$$z(x) =
\begin{cases}
0.2 - 0.05(x-10)^2 & \text{ if } 8 < x < 12, \\
0 & \text{ otherwise.}
\end{cases}
$$
We consider the following initial data
$$h_0(x) = 0.33,\quad u_0(x) = 0.18/0.33 ,\quad v_0(x) = 0.$$
We run the scheme on the domain $[0,25]$ with $N=200$ cells and we set $f = \frac{2 \pi}{50}$ and $g = 9.81$. The boundary conditions are set as
$$(hu)(x=0) = 0.18,\quad h(x=25) = 0.33,\quad v(x=0) = 0.$$
The numerical solution at time $T_{\max} = 200$ is represented in \cref{fig:SFOB_solution}. The time evolution of the steady state distance $\mathcal{E}_{\infty,j}^n$ is shown for both schemes in \cref{fig:SFOB_eps}. We can see these distances decrease in time, which means both schemes indeed converge towards a steady state.
\begin{figure}
\centering
\includegraphics[scale=0.65]{Bump_h.eps}
\includegraphics[scale=0.65]{Bump_u.eps}
\includegraphics[scale=0.65]{Bump_v.eps}
\caption{Approximate solution of the steady flow over a bump test case at time $T_{\max} = 200$}\label{fig:SFOB_solution}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale=0.75]{Bump_epsLR.eps}
\caption{Steady flow over a bump: steady state distance $\mathcal{E}^n_{\infty,j}$ in logarithmic scale}\label{fig:SFOB_eps}
\end{figure}
\subsection{Stationary state in space} \label{testcase4}
This test case is based on a particular exact solution of the RSW equations without topography. For a fixed constant initial condition $(h_0,u_0,v_0)$, the exact solution of the RSW equations writes
$$\begin{aligned}
& h(x,t) = h_0, \\
& u(t) = u_0 \cos(ft) + v_0 \sin(ft),\\
& v(t) = v_0 \cos(ft) - u_0 \sin(ft).
\end{aligned}$$
For any fixed time $t \geq 0$, the solution remains constant in space. We run the scheme on the domain $[0,1]$ until time $T_{\max} = 1$. We choose $$h_0=1,\quad u_0=1,\quad v_0 = 1$$ as initial data, with the parameters $f=g=1$, and we use periodic boundary conditions.
The solution is well-captured by the scheme as one can see in \cref{fig:Stationary state in space}, where we represent $hu$ and $hv$ with respect to time. Since the exact solution is known and constant in space, we can check the scheme's accuracy in time. We introduce the discrete $L^1$ error in time between the exact solution $w_{ex}$ and the numerical approximation at point $x_i$
$$ E_i = \sum_{n} (t^{n+1}-t^n) |w_{ex}(x_i,t^n)-w_i^n |. $$
Let us notice that the choice of the point $x_i$ is irrelevant since the solution is constant in space. We recover the expected order of accuracy in time, as one can see in \cref{tab:CosSin_error}.
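The expected first-order behaviour of this error can be reproduced with a stand-in time integrator. The sketch below does not use the scheme of this paper: it simply applies the explicit Euler method to the Coriolis system $u'(t) = fv$, $v'(t) = -fu$, whose exact solution is written above, and accumulates the discrete $L^1$ error in time.
\begin{verbatim}
import numpy as np

f, T, u0, v0 = 1.0, 1.0, 1.0, 1.0

def u_exact(t):
    return u0 * np.cos(f * t) + v0 * np.sin(f * t)

for n_steps in (100, 200, 400):
    dt = T / n_steps
    u, v, err = u0, v0, 0.0
    for n in range(n_steps):
        err += dt * abs(u_exact(n * dt) - u)    # discrete L1 error on u
        u, v = u + dt * f * v, v - dt * f * u   # explicit Euler step
    print(n_steps, err)   # the error halves when dt halves: order 1
\end{verbatim}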
\begin{figure}\label{fig:Stationary state in space}
\centering
\includegraphics[scale=0.75]{CosSinT_u.eps}
\includegraphics[scale=0.75]{CosSinT_v.eps}
\caption{Stationary state in space at time $T_{\max} = 1$}
\end{figure}
\begin{table}\label{tab:CosSin_error}
\centering
\caption{$L^1$ error in time for the stationary in space test case at time $T_{\max} = 1$}
\subtable[first-order scheme]{
\begin{tabular}{|c|c|c|c|c|}
\hline $N$ & \multicolumn{2}{c|}{$hu$} & \multicolumn{2}{c|}{$hv$} \\
\hline & error & order & error & order \\
\hline $200$ & $3.82 \times 10^{-4}$ & 0.99 & $8.06 \times 10^{-5}$ & 0.99 \\
\hline $400$ & $1.91 \times 10^{-4}$ & 0.99 & $4.03 \times 10^{-5}$ & 0.99 \\
\hline $800$ & $9.56 \times 10^{-5}$ & 0.99 & $2.01 \times 10^{-5}$ & 0.99 \\
\hline $1600$ & $4.78 \times 10^{-5}$ & 0.99 & $1.01 \times 10^{-5}$ & 0.99 \\
\hline $3200$ & $2.39 \times 10^{-5}$ & 0.99 & $5.04 \times 10^{-6}$ & 0.99 \\
\hline $6400$ & $1.20 \times 10^{-5}$ & 0.99 & $2.52 \times 10^{-6}$ & 0.99 \\
\hline
\end{tabular}
}
\subtable[second-order scheme]{
\begin{tabular}{|c|c|c|c|c|}
\hline $N$ & \multicolumn{2}{c|}{$hu$} & \multicolumn{2}{c|}{$hv$} \\
\hline & error & order & error & order \\
\hline $200$ & $7.71 \times 10^{-9}$ & 1.99 & $3.58 \times 10^{-8}$ & 1.99 \\
\hline $400$ & $1.92 \times 10^{-9}$ & 1.99 & $8.95 \times 10^{-9}$ & 1.99 \\
\hline $800$ & $4.82 \times 10^{-10}$ & 2.00 & $2.24 \times 10^{-9}$ & 1.99 \\
\hline $1600$ & $1.20 \times 10^{-10}$ & 2.00 & $5.60 \times 10^{-10}$ & 1.99 \\
\hline $3200$ & $3.01 \times 10^{-11}$ & 2.00 & $1.40 \times 10^{-10}$ & 1.99 \\
\hline $6400$ & $7.52 \times 10^{-12}$ & 2.00 & $3.50 \times 10^{-11}$ & 1.99 \\
\hline
\end{tabular}
}
\end{table}
\section{Conclusions}\label{sec:conclusions}
In this work, we have built a second-order fully well-balanced scheme for the RSW system. In the first part, we have developed a fully well-balanced approximate Riemann solver by carefully selecting the numerical source term definitions and the relations used to define the intermediate states $w_L^\star$ and $w_R^\star$. The positivity of the variable $h$ has been recovered thanks to a cut-off procedure. We have proved in \cref{thm:first-order_scheme} that the resulting Godunov-type scheme satisfies all the required features: consistency, positivity preservation and the fully well-balanced property.
In the second part, we have proposed a way to extend the Godunov-type scheme to second order. We have explained the limitations of the classical MUSCL method with respect to the fully well-balanced property in the case of the RSW equations. Then we have adapted an idea proposed in \cite{46Berthon_Blood_flow,34DansacBerthonClainFoucher2016}, which consists in recovering the standard second-order MUSCL scheme far from steady states and the first-order fully well-balanced scheme near steady states. This procedure preserves the positivity of $h$, as proved in \cref{thm:second-order_scheme}.
Finally, we have presented some numerical experiments to illustrate the robustness and the efficiency of both first-order and second-order schemes.
This work can be easily extended to the two-dimensional RSW equations through a standard convex combination of 1D schemes per interface.
Additionally, the Coriolis parameter has been assumed constant throughout this paper.
It would be an interesting development of this work to consider a space-dependent Coriolis force, since it would be more realistic for large-scale simulations.
\section*{Acknowledgments}
The authors would like to thank C. Berthon and V. Michel-Dansac for the fruitful discussions.
\bibliographystyle{siamplain}
|
2,869,038,154,002 | arxiv | \section{Introduction}
In the vicinity of the equilibrium state of the nuclear Fermi-liquid
drop the stiffness coefficients are positive and the system is
stable with respect to particle density and surface distortions. With
decreasing bulk density or increasing internal excitation energy
(temperature) the liquid drop reaches the regions of mechanical
or thermodynamical instabilities with respect to small particle
density and shape fluctuations and to separation into liquid and
gas phases. The development of the instability is a complicated
process, and we will discuss some of its aspects.
In particular, we will study the influence of a sharp liquid-drop
boundary on the instability with respect to small particle density
fluctuations. In actual nuclear processes (heavy-ion reactions, nuclear
fission etc.) nuclear matter is not static, and consequently the
development of instability depends not only on the equation of state,
but also on the dynamical effects such as the dynamical Fermi-surface
distortion or the relaxation processes. We will take into account
these aspects in studying the stability of the Fermi-liquid drop
in both regimes of the first- and zero sound modes.
\section{Bulk instability of the Fermi-liquid drop}
Let us consider small density fluctuations $\delta \rho ({\bf r}, t)$
starting from the nuclear fluid dynamic approach \cite{RiScb,book}.
The linearized equation of motion reads (see Ref. \cite{KiKoSh}),
\begin{equation}
m\,{\partial^2\over \partial t^2}\,\delta \rho =
\vec{\nabla}\rho_{eq}\,\vec{\nabla}{\delta E\over \delta \rho} +
\nabla_\nu\nabla_\mu P_{\nu \mu}^\prime,
\label{eq1}
\end{equation}
where $\rho_{eq}$ is the equilibrium density, $E$ is the total energy
and the pressure tensor, $P_{\nu \mu}^\prime$, represents the deviation
of the pressure from its isotropic part due to the Fermi surface
distortions.
The variational derivative $\delta E/\delta \rho$ in Eq. (\ref{eq1})
implies a linearization with respect to the density variation
$\delta \rho$:
\begin{equation}
{\delta E\over \delta \rho} = \Big({\delta E\over \delta \rho}
\Big)_{eq} + \hat{L}[\rho_{eq}]\,\delta \rho + {\cal O}\Big(\delta \rho
\Big)^2.
\label{de1}
\end{equation}
We point out that the first term on the r.h.s. of Eq. (\ref{de1})
does not enter Eq. (\ref{eq1}) because of the equilibrium condition
$(\delta E/\delta \rho)_{eq} = \lambda_F = {\rm const}$, where
$\lambda_F$ is the chemical potential. The operator $\hat{L}$ can be
derived from the equation of state $E = E[\rho ]$. We will use the
extended Thomas-Fermi approximation for the internal kinetic energy
\cite{BhRo} and the Skyrme-type forces for the interparticle interaction
\cite{EnBrGo}. In the special case of a spin saturated and charge
conjugated nucleus, and neglecting spin-orbit and Coulomb effects,
the equation of state reads
$$
E[\rho ] = \int d{\bf r} \,\,\Big\{{\hbar^2\over 2\,m}\,
\left[{3\over 5}\,\Big({3\pi^2\over 2}\Big)^{2/3} \,\rho^{5/3} +
{1\over 4}\,\eta \,{(\vec{\nabla}\rho)^2\over \rho}\right]
$$
\begin{equation}
+ {3\over 8}\,t_0\,\rho^2 + {1\over 16}\,t_3\,\rho^3 +
{1\over 64}\,(9\,t_1 - 5\,t_2) \,(\vec{\nabla}\rho)^2\Big\}.
\label{e1}
\end{equation}
The effective forces used in Eq. (\ref{e1}) lead to an overestimate
of the incompressibility coefficient. This is a well-known feature of
Skyrme forces which can be overcome by taking non-integer powers of
$\rho$ in the potential energy density in Eq. (\ref{e1}). For our
purposes we shall, however, be content with the form (\ref{e1}). To
make quantitative estimates of the finite size effects on the bulk
instability of the liquid drop, we will assume a sharp surface
behaviour of $\rho_{eq}({\bf r})$ having a bulk density $\rho_0$ and
an equilibrium radius $R_0$. Taking into account Eqs. (\ref{de1}) and
(\ref{e1}), the operator $\hat{L}[\rho_{eq}]$ is then reduced to the
following form
\begin{equation}
\hat{L}[\rho_{eq}]\,\delta\rho =
{K\over 9}\,\nabla^2\,\delta\rho - 2\,(\beta + t_s\,\rho_0)\,
\nabla^2\,\nabla^2\,\delta\rho\,\,\,\,\,\, {\rm at}\,\,\,\,r < R_0,
\label{hatl}
\end{equation}
where
$$
\beta = {\hbar^2\over 8\,m}\,\eta,\,\,\,\,\,\,
t_s = {1\over 64}\,(9\,t_1 - 5\,t_2)
$$
and $K$ is the incompressibility coefficient
\begin{equation}
K = 6\,e_F\,(1 + F_0)\,\left(1 + {1\over 3}\,F_1\right)^{-1}.
\label{K1}
\end{equation}
The Landau parameters $F_l$ are given by
\begin{equation}
F_0 = {9\,\rho_0\over 8\,e_F}\,
\left[t_0 + {3\over 2}\,t_3\,\rho_0\right]\,{m^\ast\over m}
+ 3\,\left(1 - {m^\ast\over m}\right),\,\,\,
F_1 = 3\,\left({m^\ast\over m} - 1\right),
\label{Fl}
\end{equation}
where
$$
{m\over m^\ast} = 1 + {m\,\rho_0\over 8\,\hbar^2}\,
(3\,t_1 + 5\,t_2)
$$
and $e_F$ is the Fermi energy.
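Equations (\ref{K1}) and (\ref{Fl}) translate directly into a short numerical evaluation. The sketch below is illustrative only: the values of $\hbar^2/m$ and of the SIII parameters are the ones commonly quoted in the literature and should be checked against the original parameterization, and $e_F$ is taken here as the effective-mass Fermi energy $\hbar^2 k_F^2/2m^\ast$, a convention that must be matched to the one used in the text.
\begin{verbatim}
import numpy as np

HB2M = 41.44                                      # hbar^2/m, MeV fm^2 (assumed)
t0, t1, t2, t3 = -1128.75, 395.0, -95.0, 14000.0  # SIII-like values (assumed)
rho0 = 0.3 * 0.1453                               # fm^-3

kF = (3 * np.pi ** 2 * rho0 / 2) ** (1.0 / 3.0)
m_over_mstar = 1.0 + rho0 * (3 * t1 + 5 * t2) / (8 * HB2M)
mstar_over_m = 1.0 / m_over_mstar
eF = 0.5 * HB2M * kF ** 2 * m_over_mstar          # hbar^2 kF^2 / (2 m*)

F0 = (9 * rho0 / (8 * eF)) * (t0 + 1.5 * t3 * rho0) * mstar_over_m \
     + 3 * (1 - mstar_over_m)                     # Eq. (6)
F1 = 3 * (mstar_over_m - 1)                       # Eq. (6)
K = 6 * eF * (1 + F0) / (1 + F1 / 3)              # Eq. (5); K < 0 is unstable
print(F0, F1, K)
\end{verbatim}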
The pressure tensor $P_{\nu \mu}^\prime$ can be expressed through the
displacement field $\vec{\chi}({\bf r}, t)$ \cite{KoMaPl}. Assuming
also $\delta \rho \sim e^{-i\omega t}$, the pressure tensor
$P_{\nu \mu}^\prime$ is given by \cite{KiKoSh}
\begin{equation}
P_{\nu\mu}^\prime = {i\omega \tau
\over {1 - i\omega \tau}} P_{eq} \Lambda_{\nu\mu},
\label{p2}
\end{equation}
where $\tau$ is the relaxation time and we used the symbol
\begin{equation}
\Lambda_{\nu\mu} = \nabla_\nu \chi_{\mu} +
\nabla_\mu \chi_{\nu} - {2 \over 3}\delta_{\nu\mu}\nabla_
{\lambda}\chi_{\lambda}
\label{lambda}
\end{equation}
for this combination of gradients of the Fourier transform $\chi_{\nu}$
of the displacement field $\vec{\chi}({\bf r}, t)$. The equilibrium
pressure of a Fermi gas, $P_{eq}$, in Eq. (\ref{p2}), is given by
$$
P_{eq} = {1\over 3m}\int {d{\bf p}\over (2\pi \hbar)^3}\,\,
p^2\,f_{eq}({\bf r}, {\bf p}) \approx \rho_0\,p_F^2/5\,m,
$$
where $f_{eq}({\bf r}, {\bf p})$ is the equilibrium phase-space
distribution function and $p_F$ is the Fermi momentum. We point out
that Eq. (\ref{p2}) is valid for arbitrary relaxation time $\tau$ and
thus describes both the zero- and the first-sound limit as well as the
intermediate case.
Taking into account the continuity equation and Eqs. (\ref{hatl}),
(\ref{p2}) and (\ref{lambda}), the equation of motion (\ref{eq1}) can
be reduced in the nuclear interior to the following form (we consider
the isoscalar mode):
\begin{equation}
- m\,\omega^2 \,\delta\rho = \left({1\over 9}\,K -
{4\over 3}{i\omega \tau
\over {1 - i\omega \tau}} (P_{eq}/\rho_0)\right)
\nabla^2\delta \rho
- 2\,(\beta + t_s\,\rho_0)\,\nabla^2 \nabla^2 \delta\rho.
\label{0eq3}
\end{equation}
The solution of Eq. (\ref{0eq3}) for a fixed multipolarity $L$ is
given by
\begin{equation}
\delta \rho ({\bf r}, t) = \rho_0 \,j_L (qr)\,Y_{LM}(\theta, \phi)\,
\alpha_{LM}(t),
\label{drhoL}
\end{equation}
where $q$ is the wave number and $\alpha_{LM}(t)$ is the amplitude of
the density oscillations. We will distinguish between stable and
unstable regimes of density fluctuations. In the case of a stable mode
at $K > 0$, a solution of Eq. (\ref{0eq3}) of the form (\ref{drhoL})
has the following dispersion relation
\begin{equation}
\omega^2 = u^2\,q^2 - i\,\omega\,
{\gamma (\omega )\over m}\,q^2 + \kappa_s \,q^4.
\label{disp1}
\end{equation}
Here, $u$ is the sound velocity
\begin{equation}
u^2 = u_1^2 + \kappa_v,
\label{c}
\end{equation}
where $u_1$ is the velocity of the first sound
\begin{equation}
u_1^2 = {1\over 9\,m}\,K,
\label{c1}
\end{equation}
$\gamma (\omega )$ is the viscosity coefficient
\begin{equation}
\gamma (\omega ) = {4\over 3}{\rm Re} \Big({\tau
\over {1 - i\,\omega \,\tau }}\Big) {P_{eq}\over \rho_{eq}}
\label{gamma}
\end{equation}
and
\begin{equation}
\kappa_v = {4\over 3}{\rm Im} \Big({\omega \tau
\over {1 - i\,\omega \,\tau }}\Big) {P_{eq}\over m\,\rho_{eq}},\,\,\,\,
\,\,\,\,\,\,\,\kappa_s = {2\over m}\,(\beta + t_s\,\rho_0).
\label{kappa}
\end{equation}
The quantities $\kappa_v$ and $\gamma (\omega )$ appear due to the
Fermi-surface distortion effect. The dispersion relation (\ref{disp1})
determines both the real and the imaginary part of the eigenfrequency
$\omega$.
The equation of motion (\ref{0eq3}) has to be augmented by the boundary
condition. This is given by a condition of the balance of the surface
pressure $\delta P_{surf}$ with the volume sound pressure
$\delta P_{sound}$ on a free surface of the liquid drop, see Refs.
\cite{Lamb,BoMo}. It reads
\begin{equation}
m\,u^2\,\rho_0\,j_L(qR_0) = {1\over q^2\,R_0^2}\,
(L-1)\,(L+2)\,\sigma\,{\partial j_L(qr)\over
\partial r}\Big\vert_{r=R_0}.
\label{sec1}
\end{equation}
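Given $u^2$, $\rho_0$, $\sigma$ and $R_0$, the eigenvalues $q_L$ follow from a one-dimensional root search on Eq. (\ref{sec1}). A minimal sketch, with dimensionless stand-ins for all physical constants (the actual values must be taken from the text), is:
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq
from scipy.special import spherical_jn

mu2rho, sigma, R0 = 1.0, 0.2, 5.0   # stand-ins for m u^2 rho_0, sigma, R_0

def secular(q, L):
    # m u^2 rho_0 j_L(qR0) - (L-1)(L+2) sigma q j_L'(qR0) / (q R0)^2
    lhs = mu2rho * spherical_jn(L, q * R0)
    rhs = ((L - 1) * (L + 2) * sigma * q
           * spherical_jn(L, q * R0, derivative=True) / (q * R0) ** 2)
    return lhs - rhs

for L in range(2, 6):
    qs = np.linspace(0.05, 2.0, 400)
    vals = np.array([secular(q, L) for q in qs])
    i = np.flatnonzero(np.sign(vals[:-1]) != np.sign(vals[1:]))[0]
    qL = brentq(secular, qs[i], qs[i + 1], args=(L,))
    print(L, qL)   # the lowest root q_L increases with L, cf. Table 1
\end{verbatim}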
Let us consider now the volume instability regime, $K < 0$, and
introduce a growth rate $\Gamma = -i\,\omega$ ($\Gamma$ is real,
$\Gamma > 0$), see Ref. \cite{PeRa1}. Using Eq. (\ref{disp1}), one
obtains
\begin{equation}
\Gamma^2 = |u_1|^2 \,q^2 - \zeta (\Gamma )\,q^2 - \kappa_s\,q^4,
\label{0disp}
\end{equation}
where
\begin{equation}
\zeta (\Gamma ) = {4\over 3\,m}{\Gamma \tau \over
{1 + \Gamma \tau}}{P_{eq}\over \rho_0}.
\label{zeta}
\end{equation}
Equation (\ref{0disp}) is valid for arbitrary relaxation time $\tau$.
From it one can obtain the leading order terms in the different limits
mentioned above.
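Since Eq. (\ref{0disp}) is implicit in $\Gamma$, the growth rate for arbitrary $\tau$ is most easily obtained by a numerical root search. The following sketch uses illustrative dimensionless parameters, not the values used in the figures:
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

u1sq = 1.0       # |u_1|^2, with u_1^2 < 0 (unstable bulk mode)
kv = 0.4         # (4/3) P_eq / (m rho_0)
kappa_s = 0.25   # gradient-term coefficient
tau = 2.0        # relaxation time

def residual(G, q):
    # F(G) = 0 reproduces the dispersion relation for the growth rate
    zeta = kv * G * tau / (1.0 + G * tau)
    return G ** 2 - (u1sq - zeta) * q ** 2 + kappa_s * q ** 4

for q in np.linspace(0.2, 1.8, 9):
    hi = np.sqrt(u1sq) * q            # G <= |u1| q is always an upper bound
    if residual(0.0, q) < 0.0:        # unstable only below q_crit
        print(q, brentq(residual, 0.0, hi, args=(q,)))
    else:
        print(q, 0.0)
\end{verbatim}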
{\it (i) Frequent collision regime: $\tau \to 0$.}\\
The contribution from the dynamic distortion of the Fermi surface,
$\kappa_v$, can be neglected in this case and we have from Eqs.
(\ref{0disp}) and (\ref{gamma}),
\begin{equation}
\Gamma^2 = |u_1|^2 \,q^2 - \Gamma\,(\tilde{\gamma}/m)\,q^2 - \kappa_s\,q^4,
\label{0disp1}
\end{equation}
where $\tilde{\gamma}=(8/15)\,e_F\,\tau$ is the viscosity
coefficient.
In the case of a small viscosity coefficient $\tilde{\gamma}$, one
has from Eq. (\ref{0disp1})
\begin{equation}
\Gamma^2 \approx |u_1|^2\,q^2 - \kappa_s\,q^4 -
{\tilde{\gamma}\over m}\,q^2\,\sqrt{|u_1|^2\,q^2 - \kappa_s\,q^4}.
\label{0disp2}
\end{equation}
The amplitude of the density oscillations, $\delta \rho_L({\bf r}, t)$,
grows exponentially if $\Gamma > 0$. Expression (\ref{0disp2})
determines two characteristic values of the wave number $q$, namely,
$q_{max}$ where the growth rate reaches a maximum of $\Gamma_{max}$, and
$q_{crit}$ where $\Gamma$ goes to zero, i.e., (see also \cite{PeRa2}),
\begin{equation}
\Gamma = \Gamma_{max}\,\,\,\, {\rm at}\,\,\,\,
q = q_{max} < q_{crit},\,\,\,\,
{\rm and}\,\,\,\,\Gamma = 0\,\,\,\,{\rm at}\,\,\,\, q = q_{crit}.
\label{def1}
\end{equation}
The values of $q_{max}$ and $q_{crit}$ are obtained from, see Eq.
(\ref{0disp1}),
\begin{equation}
{\partial \Gamma \over \partial q}\Big\vert_{q=q_{max}} = 0
\,\,\,\,\,\,\,\,{\rm and}\,\,\,\,\,\,\,
q_{crit}^2 = {|u_1|^2\over \kappa_s},
\,\,\,\, {\rm at}\,\,\,\,u_1^2 < 0.
\label{crit}
\end{equation}
Thus, the critical wave number $q_{crit}$ does not depend on
the viscosity. However, the presence of viscosity reduces the
instability, see also Fig. 1 below.
{\it (ii) Rare collision regime: $\tau \to \infty$.}\\
In this case, we have from Eqs. (\ref{0disp}), (\ref{kappa})
and (\ref{gamma})
\begin{equation}
\Gamma^2 = |u_1|^2\,q^2 - \kappa_v^\prime \,q^2 -\kappa_s\,q^4,
\label{0disp3}
\end{equation}
where
\begin{equation}
\kappa_v^\prime = {4\over 3\,m}{P_{eq}\over \rho_{eq}}.
\label{0kappa1}
\end{equation}
The critical value $q_{crit}$ and the value $q_{max}$ are given by
\begin{equation}
q_{crit}^2 = {{|u_1|^2 - \kappa_v^\prime}\over \kappa_s},
\,\,\,\,\,\,\,\,\,\,\,\, q_{max}^2 = {1\over 2}\,q_{crit}^2.
\label{crit1}
\end{equation}
Thus, the distortion of the Fermi-surface leads to a decrease of the
critical value $q_{crit}$, i.e., the Fermi-liquid drop becomes more
stable with respect to the volume density fluctuations due to the
dynamic Fermi-surface distortion effects.
\section{Numerical results and discussion}
In Fig. 1 we have plotted the instability growth rate $\Gamma$ as
obtained from Eq. (\ref{0disp}). The calculation was performed for the
Skyrme force SIII. The relaxation time was taken in the form
$\tau = \hbar\,\alpha/T^2$ \cite{AbKh} with $\alpha = 9.2\,$MeV and
$\alpha = 2.6\,$MeV \cite{KoPlSh1} and the bulk density $\rho_0$ was
taken as $\rho_0 = 0.3\,\rho_{sat}$, where $\rho_{sat} = 0.1453\,\mathrm{fm}^{-3}$
is the saturation density. We also show the results for the
nonviscous infinite nuclear matter and the nonviscous finite liquid drop
neglecting Fermi surface distortion effects. In a finite system, the
non-monotonic behaviour of the instability growth rate as a function of
the wave number $q$ is due to the anomalous dispersion term in Eq.
(\ref{disp1}) created by the gradient terms in the equation of state.
We point out that the finite system becomes more stable with respect to
short-wave-length density fluctuations at $q > q_{max}$. We can also see
that the presence of viscosity decreases the instability. The strong
decrease of instability in a Fermi liquid drop (FLD), when compared with
the corresponding result for the usual liquid drop (LD), is because of
the Fermi surface distortion effects. In Fig. 2, this peculiarity of the
FLD can be seen in a transparent way for both the infinite nuclear matter
and the finite Fermi liquid drop.
For a saturated nuclear liquid one has for the force parameters
$t_0 < 0,\,\,\, t_3 > 0$ and $t_s > 0$. Thus, the critical value
$q_{crit}$, Eq. (\ref{crit}), increases with decreasing bulk density
$\rho_0$ at $u_1^2 < 0$, see also Eq. (\ref{Fl}). The existence of the
critical wave number $q_{crit}$ for an unstable mode is a feature of
the finite system. The growth rate $\Gamma$ depends on the multipolarity
$L$ of the nuclear density distortion and on the position of the
eigenvalue, $q_L$, in the interval $0 < q < q_{crit}$
\cite{PeRa2}. For a given $R_0$, the value of $q_L$ increases with $L$
for $L \geq 2$ because of the boundary condition (\ref{sec1}), see
Table 1. That means that if $q_L < q_{max}$ the instability increases
with $L$ and the nucleus becomes more unstable with respect to an
internal clusterization into small pieces (high multipolarity regime)
rather than to binary fission (low multipolarity regime). In contrast,
the binary fission is preferable if $q_{max} < q_L < q_{crit}$.
We give in Table 1 the values $q_L/k_F$ for two nuclei, $^{208}Pb$
and $^{40}Ca$ as obtained from Eq. (\ref{sec1}). The calculations
were performed with the surface tension parameter
$4\,\pi\,r_0^2\,\sigma = 17.2\,$MeV. We point out that the value of
$q_{max}$ is given here by $q_{max}/k_F = 0.69$. In Fig. 3 we have
plotted the instability growth rate at $T=6\,$MeV and
$\alpha = 9.2\,$MeV as function of the multipolarity $L$ of the
particle density fluctuations for two nuclei $^{208}Pb$ and $^{40}Ca$.
As is seen from Fig. 3, only the lowest multipolarities $L \leq 3$
contribute to the instability growth rate $\Gamma$ for the nucleus
$^{40}Ca$. Thus, the nucleus $^{40}Ca$ is unstable with respect to
fission under the conditions considered above. In contrast, the
instability growth rate of the nucleus $^{208}Pb$ includes
multipolarities up to $L \leq 8$, and this nucleus should be unstable
with respect to multifragmentation.
\section{Summary and conclusion}
Starting from the fluid dynamic equation of motion for the Fermi liquid
drop with a sharp surface, we have derived the dispersion relations
(\ref{disp1}) and (\ref{0disp}) for both the stable and the unstable
regime. The dispersion relations are influenced strongly by the
Fermi-surface distortion effect and the anomalous dispersion caused by
the finiteness of the system. The presence of the Fermi surface distortion
enhances the stiffness coefficient for a stable mode and reduces the
instability growth rate for an unstable one.
We have shown that the instability growth rate in an unstable finite
system is a non-monotonic function of the wave number $q$ because of the
anomalous dispersion term. This is in contrast with the infinite nuclear
matter case where the instability growth rate increases with $q$. The
non-monotonic behaviour of the instability growth rate $\Gamma (q)$ in
a finite Fermi liquid drop is associated with two characteristic wave
numbers $q_{max}$ and $q_{crit}$, see Eqs. (\ref{def1}) and (\ref{crit}).
The distortion of the Fermi-surface leads to a decrease of the critical
value $q_{crit}$. The decay mode of an unstable Fermi liquid drop depends
on the location of the eigen wave number $q$ on the slope of the curve
$\Gamma (q)$. The Fermi liquid drop is more unstable with respect to
multifragmentation if $q < q_{max}$ and the binary fission is preferable
if $q > q_{max}$. This is because the eigen wave number $q_L$, derived
from the secular equation (\ref{sec1}), increases with the multipolarity
$L$ of the particle density fluctuations. As an example, we have
demonstrated this phenomenon in the case of the hot nuclei $^{40}Ca$ and
$^{208}Pb$. The nucleus $^{40}Ca$ is more unstable with respect to
short-wave fluctuations and prefers to decay through the binary fission
channel. The multifragmentation channel is preferred as the instability
develops in the heavy nucleus $^{208}Pb$.
\section{Acknowledgements}
This work was supported in part by the US Department of Energy under
grant \# DOE-FG03-93ER40773 and the INTAS under grant \# 93-0151. We are
grateful for this financial support. One of us (V.M.K.) thanks the
Cyclotron Institute at Texas A\&M University for the kind hospitality.
\newpage
|
2,869,038,154,003 | arxiv | \section{Introduction}
Observations of the solar surface have revealed that the Sun harbors flows at a wide range of spatial and temporal scales \citep[see][for a review]{2005LRSP....2....6G}. These range from megameter-scaled convective cells referred to as granulation, to meridional circulations that span the expanse of the Sun and are believed to play an active part in angular momentum and flux transport processes \citep{2005LRSP....2....1M}. Flows at the largest scales also happen to be the ones that are understood poorly, and considerable efforts have been put into improving inference schemes in recent times to address this shortcoming in our understanding. Helioseismology enables us to relate surface measurements of seismic waves to convective flows within the Sun, therefore supplying us with an acoustic probe into the electromagnetically opaque solar interior. Seismic techniques have been applied to detect large-scale flows such as differential rotation \citep{1998ApJ...505..390S} and associated global features such as torsional oscillations and the near-surface shear layer, meridional circulation \citep{1996ApJ...460.1027H}, and photospheric signatures of giant cells that are believed to be associated with deep convection \citep{2013Sci...342.1217H}, although --- aside from rotational features --- a consensus on their properties has not yet been achieved. Meridional flows are of particular importance, as they are believed to play a key role in flux-transport dynamo models by conveying magnetic flux equator-wards at the bottom of the solar convection zone \citep{Dikpati_1999}; therefore, understanding their subsurface profiles stands as an outstanding challenge for helioseismology. Presently their subsurface profiles are fairly uncertain, with \citet{2012ApJ...749L..13H} and \citet{2015ApJ...805..133J} suggesting a shallow return flow, \citet{2013ApJ...778L..38S,2013ApJ...774L..29Z} suggesting multiple cells in radius --- consistent with a shallow profile if the deeper cells remain undetected, while \citet{2015ApJ...813..114R} and \citet{2018ApJ...863...39M} find a single cell spanning the entire solar convection zone. The latitudinal profiles inferred also differ, with \citet{2013ApJ...778L..38S} suggesting multiple latitudinal cells, contrary to the other results. A careful study of the systematics involved in the analysis techniques might be necessary to unravel the differences in the conclusions reached by the various authors.
The standard solar model \citep[Model S,][]{1996Sci...272.1286C} is taken to be spherically symmetric, therefore seismic normal mode wavefunctions may be labelled by spherical harmonic degrees. Departures from spherical symmetry induced by convective flows lead to power being transferred between different wave modes, and a comparison of the deviation from a reference symmetric model would allow us to pinpoint the magnitude of the subsurface inhomogeneity. Various techniques have been used in the past to achieve this, ranging from time-distance helioseismology \citep{1993Natur.362..430D} that uses differences in seismic wave travel-times to estimate subsurface flows, ring-diagram analysis \citep{1988ApJ...333..996H} that uses shifts in the seismic power spectrum, to mode-couplings in Fourier space \citep{2007ApJ...668.1189W} that uses direct correlations between the wave modes for the estimation \citep[See][for a review]{2005LRSP....2....6G}. In this work we focus on time-distance helioseismology to frame an inverse problem and relate surface observations of seismic wave travel-times to subsurface flows.
An inference about the solar interior is usually drawn through an inverse problem that relates seismic wave parameters --- such as the travel-times of seismic waves --- to subsurface inhomogeneities, and the function that relates the two is referred to as the sensitivity kernel. This function encapsulates the physics of the solar model as well as the measurement procedure, and an accurate estimation of subsurface flows therefore requires a computation of the kernel that correctly accounts for the physics of wave propagation and the systematic effects associated with the measurement. A second challenge that needs to be overcome is that of an ill-conditioned inverse problem, given that the number of parameters to infer often vastly outnumbers the measurements available. Such an inference may be aided by rephrasing the inverse problem in terms of an alternate, smaller set of parameters. Luckily such a set is readily available --- that of the reciprocal space, which --- in spherical geometry --- is spanned by spherical harmonics. Large scale flows on the Sun may be described in terms of a limited set of low-degree spherical harmonics. Additionally, improving the signal-to-noise ratio of the measured seismic parameters often involves careful averaging, which necessitates multiple evaluations of the kernel. The computation of the sensitivity kernel therefore needs to be computationally efficient as well. In this work we present an approach to compute sensitivity kernels that is able to address each of these issues.
The set of seismic eigenfunctions in the Sun forms a complete basis, therefore the kernel may be expanded in this basis and expressed as a sum of normal modes. This approach naturally incorporates the geometry of the Sun through the profile of the eigenfunctions. Initial attempts at computing finite-frequency sensitivity kernels had assumed a Cartesian background medium \citep{2007AN....328..228B,2007ApJ...671.1051J,2015SSRv..196..201B}, however large-scale flows sense the curvature of the Sun, so an analysis to infer them needs to be carried out accounting for spherical geometry. Such an approach had been used by \cite{2016ApJ...824...49B} to compute kernels for seismic wave travel times derived from cross-covariances, and a variant was used by \cite{2017ApJ...842...89M} to compute kernels for travel times that were derived directly from wave velocities. \cite{Gizon2017A&A...600A..35G} proposed an alternate approach where the kernels are computed numerically assuming azimuthal symmetry. This approach does not necessitate spherical symmetry, and is therefore more flexible than its predecessors. All of these approaches are however computationally intensive, as was demonstrated by \cite{2018A&A...616A.156F}, where the authors explored an alternate approach: compute the spherical-harmonic coefficients of the kernel instead of its spatial profile, and parameterize the inverse problem in terms of these coefficients. The work presented in our paper follows a similar approach. We show that it is possible to include line-of-sight projections and differences in line-formation heights into the modelled cross-covariances, thereby potentially alleviating systematic trends that exist in seismic measurements. The fundamentals of the analysis technique were largely developed by \citet{2020ApJ...895..117B} in the context of subsurface sound-speed perturbations, and this work extends the analysis to flows. Finally, such an approach need not be confined to travel-time analysis. \citet{2017A&A...599A.111N} had demonstrated that it is straightforward to include amplitudes of seismic wave covariances to constrain the inverse problem, which --- used alongside travel times --- might lead to more accurate results.
\section{Vector Spherical Harmonics}
\subsection{Helicity basis}
The analysis of vector fields in spherical-polar coordinates is convenient in a basis that is a complex linear combination of the basis vectors $\mathbf{e}_r$, $\mathbf{e}_\theta$ and $\mathbf{e}_\phi$, given by
\begin{equation}
\begin{aligned}
\mathbf{e}_{+1} & =-\frac{1}{\sqrt{2}}\left(\mathbf{e}_{\theta}+i\mathbf{e}_{\phi}\right),\\
\mathbf{e}_{0} & =\mathbf{e}_{r}, \\
\mathbf{e}_{-1} & =\frac{1}{\sqrt{2}}\left(\mathbf{e}_{\theta}-i\mathbf{e}_{\phi}\right).
\end{aligned}
\label{eq:helicity_spherical}
\end{equation}
We follow \citet{1988qtam.book.....V} and refer to this basis as the ``helicity" basis.
\subsection{Definition of the harmonics}
Vector spherical harmonics (VSH) --- which are vector eigenfunctions of the Laplacian $\nabla^{2}$ on the unit sphere --- form a complete basis to expand vector fields in spherical geometry. We refer the readers to \citet{1988qtam.book.....V} and \citet{1976RSPTA.281..195J} for a detailed introduction to these functions, and to \cite{2020ApJ...895..117B} for an introduction to the specific functions used here; we only state the important results that we use in this work. We use two linearly related bases in our analysis that may be defined at a point $\hat{n}=(\theta,\phi)$ in terms of the spherical harmonic $Y_{\ell m}\left(\hat{n}\right)$ as:
\begin{enumerate}
\item Hansen VSH \citep{PhysRev.47.139,1957ApJ...126..457C}, defined as
\begin{align}
\Hansenlm{\left(-1\right)}\left(\hat{n}\right) & =Y_{\ell m}\left(\hat{n}\right)\mathbf{e}_{r},\nonumber \\
\Hansenlm{\left(0\right)}\left(\hat{n}\right) & =\frac{-i}{\sqrt{\ell\left(\ell+1\right)}}\mathbf{e}_{r}\times\grad_{\Omega}Y_{\ell m}\left(\hat{n}\right),\\
\Hansenlm{\left(1\right)}\left(\hat{n}\right) & =\frac{1}{\sqrt{\ell\left(\ell+1\right)}}\grad_{\Omega}Y_{\ell m}\left(\hat{n}\right).\nonumber
\end{align}
\item Phinney-Burridge (PB) VSH \citep{1973GeoJ...34..451P}, that may be expressed as a linear combination of the Hansen VSH basis as
\begin{equation}
\begin{aligned}\PBlm{+1} & =\frac{1}{\sqrt{2}}\left(\Hansenlm{\left(1\right)}-\Hansenlm{\left(0\right)}\right),\\
\PBlm{-1} & =\frac{1}{\sqrt{2}}\left(\Hansenlm{\left(1\right)}+\Hansenlm{\left(0\right)}\right),\\
\PBlm 0 & =\Hansenlm{\left(-1\right)},
\end{aligned}
\label{eq:PB_Hansen_conversion}
\end{equation}
The two bases are related through a rotation by $\pi/4$ about $\mathbf{e}_r$.
\end{enumerate}
The analysis scheme hinges on the fact that the Green functions are expressed most easily in the Hansen basis, whereas their components
in the spherical-polar basis are easier to represent in the PB basis. The contravariant components of the PB basis vectors $\PBlm{\gamma}$ in the helicity
basis are
\begin{equation}
\left[\PBlm{\gamma}\right]^\alpha =\sqrt{\frac{2\ell+1}{4\pi}}\,d_{m\alpha}^{\ell}\left(\theta\right)e^{im\phi} \delta_{\alpha,\gamma},
\end{equation}
where $d_{m\alpha}^{\ell}\left(\theta\right)$ is an element of the Wigner d-matrix and $\delta_{\alpha,\gamma}$ is the Kronecker delta function. We follow \cite{DahlenTromp} and refer to the diagonal components $\left[\PBlm{\gamma}\right]^\gamma$ as generalized spherical harmonics, defined as
\begin{equation}
\sgshlm{\gamma} \left(\theta,\phi\right) = \sqrt{\frac{2\ell+1}{4\pi}} d_{m\gamma}^{\ell}\left(\theta\right)e^{im\phi}.
\end{equation}
The fact that the PB VSH are diagonal in the helicity basis plays a pivotal role in the analysis presented in this work, and allows seamless conversions between a basis of VSH and the spherical-polar one.
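The Wigner d-matrix elements, and hence the generalized spherical harmonics, are straightforward to evaluate symbolically. A small sketch (assuming SymPy's \texttt{Rotation.d}, which follows the standard conventions) is:
\begin{verbatim}
import sympy as sp
from sympy.physics.quantum.spin import Rotation

theta, phi = sp.symbols('theta phi', real=True)
l, m, gamma = 2, 1, 0

d = Rotation.d(l, m, gamma, theta).doit()   # d^l_{m gamma}(theta)
Y = sp.sqrt((2 * l + 1) / (4 * sp.pi)) * d * sp.exp(sp.I * m * phi)
print(sp.simplify(Y))

# check against the tabulated d^2_{10}(theta) = -sqrt(3/2) sin(theta) cos(theta);
# this should print 0 if the sign conventions agree
print(sp.simplify(d + sp.sqrt(sp.Rational(3, 2)) * sp.sin(theta) * sp.cos(theta)))
\end{verbatim}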
\subsection{Derivatives of vector spherical harmonics}
The derivative of the VSH may be computed in the PB VSH following the relations described by \citet{1973GeoJ...34..451P} and \citet{DahlenTromp}, and we choose to retain the notation used by the latter. We may expand a function $f\left(\mathbf{x}\right)$ in the PB VSH basis as
\[
\mathbf{f}\left(\mathbf{x}\right)=\sum_{\ell m\alpha}f_{\ell m}^{\alpha}\left(r\right)\PB{\alpha}{\ell m}\left(\hat{n}\right).
\]
The gradient of $\mathbf{f}\left(\mathbf{x}\right)$ may be expressed as a sum over the gradients of the components. In the helicity basis,
we obtain
\begin{align}
\grad\left[f_{\ell m}^{\alpha}\left(r\right)\PBlm{\alpha}\left(\hat{n}\right)\right] & =\left(\frac{d}{dr}f_{\ell m}^{\alpha}\left(r\right)\right)\mathbf{e}_{0}\PBlm{\alpha}\left(\hat{n}\right)+\nonumber \\
& \frac{1}{r}f_{\ell m}^{\alpha}\left(r\right)\left[\Omega_{\ell}^{\alpha}\sgshlm{-1+\alpha}\left(\hat{n}\right)\mathbf{e}_{-1}\mathbf{e}_{\alpha}-\sgshlm{\alpha}\left(\hat{n}\right)\mathbf{e}_{-1}\mathbf{e}_{\alpha+1}\right.\nonumber \\
& \left.+\Omega_{\ell}^{-\alpha}\sgshlm{1+\alpha}\left(\hat{n}\right)\mathbf{e}_{+1}\mathbf{e}_{\alpha}-\sgshlm{\alpha}\left(\hat{n}\right)\mathbf{e}_{+1}\mathbf{e}_{\alpha-1}\right],\label{eq:grad_VSH_PB}
\end{align}
where $\mathbf{e}_{\alpha}=0$ for $\left|\alpha\right|>1$, and $\Omega_{\ell}^{\alpha}=\sqrt{\left(\ell+\alpha\right)\left(\ell-\alpha+1\right)/2}$.
\subsection{Integral of the three-term product}
One of the key steps in the analysis is evaluating the angular integral
\begin{equation}
I_{\ell_{1}m_{1}\ell_{2}m_{2}\ell_{3}m_{3}}^{n_{1}n_{2}n_{3}}\left(f_{\ell_{3}}^{n_{3}}\left(r\right)\right)=\int d\hat{n}\PB{n_{1}}{\ell_{1}m_{1}}\left(\hat{n}\right)\cdot\left[\PB{n_{2}}{\ell_{2}m_{2}}\left(\hat{n}\right)\cdot\grad\left(f_{\ell_{3}}^{n_{3}}\left(r\right)\PB{n_{3}}{\ell_{3}m_{3}}\left(\hat{n}\right)\right)\right].
\end{equation}
We evaluate the integral in Appendix \ref{sec:Appendix_VSH-triple-integral}, and show that it may be expressed in the form
\begin{align}
I_{\ell_{1}m_{1}\ell_{2}m_{2}\ell_{3}m_{3}}^{n_{1}n_{2}n_{3}}\left(f_{\ell_{3}}^{n_{3}}\left(r\right)\right) & =\left(-1\right)^{m_{2}}C_{\ell_{1}m_{1}\ell_{3}m_{3}}^{\ell_{2}-m_{2}}J_{\ell_{1}\ell_{2}\ell_{3}}^{n_{2}n_{1}n_{3}}\left(f_{\ell_{3}}^{n_{3}}\left(r\right)\right),\label{eq:triple_int}
\end{align}
where
\begin{align}
J_{\ell_{2}\ell_{1}\ell_{3}}^{n_{2}n_{1}n_{3}}\left(f\left(r\right)\right) & =\eta_{\ell_{2}}^{\ell_{1}\ell_{3}}\left(-1\right)^{n_{3}}\left[\delta_{n_{2},0}\left(\frac{d}{dr}f\left(r\right)\right)C_{\ell_{1}-n_{3}\ell_{3}n_{3}}^{\ell_{2}0}\delta_{n_{1},-n_{3}}\right.\nonumber \\
& +\delta_{n_{2},1}\frac{1}{r}f\left(r\right)\left\{ \Omega_{\ell}^{n_{3}}C_{\ell_{1}-n_{3}\ell_{3}n_{3}-1}^{\ell_{2}-1}\delta_{n_{1},-n_{3}}+C_{\ell_{1}-n_{3}-1\ell_{3}n_{3}}^{\ell_{2}-1}\delta_{n_{1},-n_{3}-1}\delta_{n_{3}}^{-1,0}\right\} \nonumber \\
& \left.+\delta_{n_{2},-1}\frac{1}{r}f\left(r\right)\left\{ \Omega_{\ell}^{-n_{3}}C_{\ell_{1}-n_{3}\ell_{3}n_{3}+1}^{\ell_{2}1}\delta_{n_{1},-n_{3}}+C_{\ell_{1}-n_{3}+1\ell_{3}n_{3}}^{\ell_{2}1}\delta_{n_{1},-n_{3}+1}\delta_{n_{3}}^{0,1}\right\} \right],\label{eq:J}\\
\eta_{\ell_{2}}^{\ell_{1}\ell_{3}} & =\sqrt{\frac{\left(2\ell_{1}+1\right)\left(2\ell_{3}+1\right)}{4\pi\left(2\ell_{2}+1\right)}},
\end{align}
$C_{\ell_{1}m_{1}\ell_{3}m_{3}}^{\ell_{2}-m_{2}}$ is the Clebsch-Gordan coefficient that connects the sum of the angular momenta $(\ell_1, m_1)$ and $(\ell_3, m_3)$ to $(\ell_2, -m_2)$, and $\delta_{a}^{b,c}=\delta_{a,b}+\delta_{a,c}$ is the sum of two Kronecker delta functions. The function $J_{\ell_{2}\ell_{1}\ell_{3}}^{n_{2}n_{1}n_{3}}\left(f\left(r\right)\right)$ satisfies the symmetry relation
\begin{equation}
J_{\ell_{2}\ell_{1}\ell_{3}}^{-n_{2}-n_{1}-n_{3}}\left(f\left(r\right)\right)=\left(-1\right)^{\ell+j_{1}+j_{2}}J_{\ell_{2}\ell_{1}\ell_{3}}^{n_{2}n_{1}n_{3}}\left(f\left(r\right)\right).\label{eq:J_symmetry}
\end{equation}
We note that the $J_{\ell_{2}\ell_{1}\ell_{3}}^{n_{2}n_{1}n_{3}}\left(f\left(r\right)\right)$ is non-zero only for the values of $\ell_{1}$, $\ell_{2}$ and $\ell_{3}$ that satisfy the triangle inequality $\left|\ell_{1}-\ell_{2}\right|\leq\ell_{3}\leq\ell_{1}+\ell_{2}$.
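These selection rules are inherited from the Clebsch--Gordan coefficients and are easy to verify symbolically, for instance with SymPy:
\begin{verbatim}
from sympy.physics.quantum.cg import CG

# allowed coupling: |1-1| <= 2 <= 1+1
print(CG(1, 0, 1, 0, 2, 0).doit())   # sqrt(6)/3
# violates the triangle inequality |1-4| <= 2 <= 1+4 is fine, but 3 > 2: zero
print(CG(1, 0, 4, 0, 2, 0).doit())   # 0
# all m = 0 with l1 + l2 + l3 odd: zero by parity
print(CG(1, 0, 2, 0, 2, 0).doit())   # 0
\end{verbatim}
The last evaluation illustrates the parity rule that also underlies the symmetry relation (\ref{eq:J_symmetry}).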
\section{Seismic measurements on the Sun}
Acoustic waves in the Sun are excited by vigorous transonic, non-adiabatic convective flows near the photosphere, and these waves subsequently traverse the solar interior to re-emerge and be detected at the surface of the Sun. A key seismic measurement is that of the line-of-sight projected wave velocity inferred from Doppler shifts of atmospheric absorption lines in the Sun. We choose to work in the temporal-frequency domain, bearing in mind that the background medium is temporally stationary. We work in spherical polar coordinates with the origin at the center of the Sun. A point $\mathbf{x}$ in the Sun may be described by its radial coordinate $r$, its co-latitude $\theta$ and azimuth $\phi$. We also use the notation $\hat{n}=(\theta,\phi)$ to denote a point on a shell at a fixed radius $r$. The isotropic background solar model at equilibrium may be described in terms of the radial profiles of the density $\rho$, pressure $p$, gravitational acceleration $\mathbf{g}$ and sound-speed $c$. The equation governing the propagation of seismic waves in temporal frequency domain at a point $\mathbf{x}=(r,\theta,\phi)$ in the Sun, given a source distribution $\mathbf{S}(\mathbf{x},\omega)$, may be represented in terms of the wave displacement $\bm{\xi}(\mathbf{x},\omega)$ as
\begin{equation}
-\rho\omega^{2}\bm{\xi}-2i\omega\gamma\bm{\xi} - \grad\left(\rho c^{2}\grad\cdot\bm{\xi}-\rho\bm{\xi}\cdot\mathbf{e}_{r}g\right)-g\mathbf{e}_{r}\grad\cdot\left(\rho\bm{\xi}\right) = \mathbf{S}(\mathbf{x},\omega),
\label{eq:waveeqn}
\end{equation}
where the frequency-dependent constant $\gamma$ denotes the attenuation experienced by the wave, and we have suppressed the coordinate-dependence on the left-hand side to simplify the notation. We follow the approach of \citet{2020ApJ...895..117B} and consider the damping constant $\gamma$ to be a polynomial function of the temporal frequency.
We condense the notation by referring to the terms on the left-hand side of Equation \eqref{eq:waveeqn} collectively as $\mathcal{L}\bm{\xi}(\mathbf{x},\omega)$, where the frequency-dependent wave operator $\mathcal{L}$ incorporates the spatial derivatives.
Doppler measurements of seismic waves on the Sun are sensitive to the line-of-sight projected component of the velocity. We assume that seismic observations are carried out at a point $\mathbf{x}_\mathrm{obs}=(r_\mathrm{obs},\theta_\mathrm{obs},\phi_\mathrm{obs})$. This is a great simplification of the actual process of line-formation, since spectral lines form over a broad range of heights in an unsteady atmosphere, therefore observations are not limited to a specific spatial location. We may interpret the radial coordinate $r_\mathrm{obs}$ as an average line-formation height, which is around $150$ km above the photosphere at the disk center for the Fe $6173\,\mbox{\normalfont\AA}$ line \citep{2011SoPh..271...27F} that the Helioseismic and Magnetic Imager \citep[HMI,][]{2012SoPh..275..207S} is sensitive to.
The line-of-sight projected velocity may be expressed in the frequency domain in terms of the line-of-sight vector $\bm{l}\left(\mathbf{x}_\mathrm{obs}\right)$ and the wave displacement $\bm{\xi}\left(\mathbf{x}_\mathrm{obs},\omega\right)$ as
\begin{equation}
v\left(\mathbf{x}_\mathrm{obs},\omega\right)=i\omega\,\bm{l}\left(\mathbf{x}_\mathrm{obs}\right)\cdot\bm{\xi}\left(\mathbf{x}_\mathrm{obs},\omega\right).
\label{eq:v_Doppler}
\end{equation}
The radial coordinate $r_\mathrm{obs}$ that an observation is sensitive to depends on the angular distance of the observation point from the disk center \citep{2015ApJ...808...59K}, which introduces a weak angular dependence on $r_\mathrm{obs}$. We note that the actual measured value will be a convolution of the projected velocity with the point-spread function of the detector, however we do not consider this in the present analysis.
The position-dependence of the line-of-sight vector $\bm{l}\left(\mathbf{x}\right)$ is weak owing to the fact that the distance between the Sun and the Earth is significantly larger than the solar radius $\left(R_{\odot}\approx0.0046\,\mathrm{AU}\right)$, so in practice the line-of-sight direction might be assumed to be identical at all points on the Sun without incurring significant errors. We retain the dependence in subsequent analysis as it does not pose any additional algebraic challenge. Despite the notation used in this work, the line-of-sight vector actually depends on two spatial points -- the point $\mathbf{x}$ on the Sun where seismic wave velocities are measured, as well as the spatial location of the detector. This implies that if we change only the measurement point keeping the detector location fixed, the line-of-sight direction does not transform as a vector field. This issue, however, does not pose a challenge to us as the vector may be trivially recomputed at each measurement point.
Waves on the Sun are excited stochastically by near-surface convection, and the wave sources may be modeled as a Gaussian random process. We follow \citet{2016ApJ...824...49B} and assume that the wave sources are purely radial. This is a simplifying assumption motivated by the fact that the highest flow velocities at the surface are detected in granular downdrafts, however our analysis does not depend fundamentally on this assumption. We denote the source distribution by $\mathbf{S}\left(\mathbf{x},\omega\right)=S_{r}\left(\mathbf{x},\omega\right)\mathbf{e}_{r}$, where the radial component $S_{r}$ has a mean of zero, and a covariance that may be modeled to be isotropic and limited to a shell of radius $r_{\mathrm{src}}$:
\begin{equation}
\left\langle S_{r}^{*}\left(\mathbf{x}_{1};\omega\right)S_{r}\left(\mathbf{x}_{2};\omega\right)\right\rangle =P\left(\omega\right)\delta\left(\mathbf{x}_{1}-\mathbf{x}_{2}\right)\frac{1}{r_{\mathrm{src}}^{2}}\delta\left(\left|\mathbf{x}_{1}\right|-r_{\mathrm{src}}\right),
\label{eq:S_cov}
\end{equation}
where $P(\omega)$ represents the frequency dependence of the source covariance, and the angular brackets denote an ensemble average. We assume $P(\omega)$ to be a Gaussian in this work with a mean of $\omega_0 = 2\pi\times 3\,\mathrm{mHz}$ and a width of $\Delta\omega = 2\pi\times 0.4\,\mathrm{mHz}$. The amplitude of $P(\omega)$ has been arbitrarily chosen to be $1$ as this does not affect travel-time measurements, however this needs to be calibrated for a full-waveform inversion. We choose the source to be located at $75$ km below the photosphere.
This model of the source covariance is inspired by simulations such as those by \citet{1991LNP...388..141N}, where it has been demonstrated that the excitation of waves take place in regions of high non-adiabatic pressure as well as turbulent pressure fluctuations, which occur in the Sun in a thin layer of width around a hundred kilometers below the photosphere. A more realistic model might include a radial profile of the source covariance, however this would significantly increase the computational cost and is beyond the scope of the present work.
A Gaussian source also implies that the wave displacement is a zero-mean Gaussian random variable. The fundamental measurement that interests us therefore is the two-point covariance of seismic waves $C(\mathbf{x}_1,\mathbf{x}_2,\omega)$ \citep{1993Natur.362..430D}. A change in the solar model affects the propagation of seismic waves in the Sun, and consequently alters the measured cross-covariance. In the following sections we develop the formalism to relate changes in the solar model to that of seismic wave travel-times projected from the cross-covariance, focusing specifically on changes introduced by flows in the solar interior.
\subsection{Green function}
Propagation of seismic waves in the Sun is governed by Equation \eqref{eq:waveeqn}, which may be rewritten in terms of the Green function $\mathbf{G}\left(\mathbf{x}_{\mathrm{obs}},\mathbf{x}_{\mathrm{src}},\omega\right)$ that describes the impulse response of the wave equation given an excitation at $\mathbf{x}_\mathrm{src}$ and a measurement at $\mathbf{x}_\mathrm{obs}$. The wave displacement $\bm{\xi}(\mathbf{x},\omega)$ is related to the sources $\mathbf{S}(\mathbf{x},\omega)$ through the Green function as
\begin{align}
\bm{\xi}\left(\mathbf{x}_{\mathrm{obs}},\omega\right) & =\int d\mathbf{x}_{\mathrm{src}}\,\mathbf{G}\left(\mathbf{x}_{\mathrm{obs}},\mathbf{x}_{\mathrm{src}},\omega\right)\cdot
\mathbf{S}\left(\mathbf{x}_{\mathrm{src}},\omega\right).\label{eq:xi_G_S}
\end{align}
We refer the readers to \cite{2020ApJ...895..117B} where the authors had described the numerical computation of the Green function. We may expand the Green function in the PB VSH basis as
\begin{equation}
\mathbf{G}\left(\mathbf{x}_{\mathrm{obs}},\mathbf{x}_{\mathrm{src}},\omega\right)=\sum_{\alpha,\beta=\pm1}\sum_{jm}\gfnomega{\alpha}{\beta}j\left(r_{\mathrm{obs}},r_{\mathrm{src}}\right)\PBjm{\alpha}\left(\hat{n}_{\mathrm{obs}}\right)\PB{\beta*}{jm}\left(\hat{n}_{\mathrm{src}}\right).\label{eq:G_PB_VSH}
\end{equation}
The components of the Green function satisfy the symmetry relations $\gfnomega{\pm\alpha}{\pm\beta}j=\gfnomega{\alpha}{\beta}j$ owing to the fact that the seismic eigenfunctions in the Sun lack a toroidal component. The Green tensor therefore has four independent components, and without loss of generality we choose these to be $G_{0}^{0}$, $G_{0}^{1}$, $G_{1}^{0}$ and $G_{1}^{1}$.
The Green function satisfies the reciprocity relation $\mathbf{G}\left(\mathbf{x}_{\mathrm{obs}},\mathbf{x}_{\mathrm{src}},\omega\right)=\mathbf{G}^{T}\left(\mathbf{x}_{\mathrm{src}},\mathbf{x}_{\mathrm{obs}},\omega\right)$, which may be expressed in terms of the components as
\begin{equation}
G_{\beta}^{\alpha}\left(r_{\mathrm{src}},r_{\mathrm{obs}},\omega\right)=G_{\alpha}^{\beta}\left(r_{\mathrm{obs}},r_{\mathrm{src}},\omega\right).\label{eq:reciprocity}
\end{equation}
We denote the components of the Green function corresponding to a radial source by the symbol $\mathbf{G}_{r}$, which is defined by restricting Equation (\ref{eq:G_PB_VSH}) to $\beta=0$. We obtain
\begin{equation}
\mathbf{G}_{r}\left(\mathbf{x}_{\mathrm{obs}},\mathbf{x}_{\mathrm{src}},\omega\right)=\sum_{\alpha=\pm1}\sum_{jm}\gfnomega{\alpha}0j\left(r_{\mathrm{obs}},r_{\mathrm{src}}\right)\PBjm{\alpha}\left(\hat{n}_{\mathrm{obs}}\right)Y_{jm}^{*}\left(\hat{n}_{\mathrm{src}}\right).
\end{equation}
We compute the radial profiles of $G^\alpha_{\beta,j\omega}(r,r_\mathrm{src})$ numerically using a finite-difference scheme following \citet{2020ApJ...895..117B}.
\subsection{Cross-covariance}
The line-of-sight projected velocity $v(\mathbf{x}_\mathrm{obs},\omega)$ from Equation \eqref{eq:v_Doppler} is usually modelled as a zero-mean random variable, so its covariance represents the fundamental measurement in time-distance seismology. The covariance of the Doppler signal may be expressed in terms of the wave displacement as
\begin{align}
C\left(\mathbf{x}_{1},\mathbf{x}_{2},\omega\right) & =\left\langle v^{*}\left(\mathbf{x}_{1},\omega\right) \,v\left(\mathbf{x}_{2},\omega\right)\right\rangle =\omega^{2}\left\langle \bm{l}\left(\mathbf{x}_{1}\right)\cdot\bm{\xi}^{*}\left(\mathbf{x}_{1},\omega\right)\,\bm{l}\left(\mathbf{x}_{2}\right)\cdot\bm{\xi}\left(\mathbf{x}_{2},\omega\right)\right\rangle .
\end{align}
Using Equation (\ref{eq:xi_G_S}) and our model for the source covariance from Equation (\ref{eq:S_cov}), we may express the covariance in terms of the Green function as
\begin{equation}
C\left(\mathbf{x}_{1},\mathbf{x}_{2},\omega\right)=\omega^{2}P\left(\omega\right)\int d\Omega_{\mathrm{src}}\,\bm{l}\left(\mathbf{x}_{1}\right)\cdot\mathbf{G}_{r}^{*}\left(\mathbf{x}_{1},\mathbf{x}_{\mathrm{src}};\omega\right)\bm{l}\left(\mathbf{x}_{2}\right)\cdot\mathbf{G}_{r}\left(\mathbf{x}_{2},\mathbf{x}_{\mathrm{src}};\omega\right),
\end{equation}
where the integral is carried out over the angular distribution of the sources. We may evaluate the angular part of this integral analytically using the separation of variables of the Green function in the PB VSH basis (Equation \ref{eq:G_PB_VSH}), to obtain
\begin{equation}
C\left(\mathbf{x}_{1},\mathbf{x}_{2},\omega\right)=\omega^{2}P\left(\omega\right)\sum_{\alpha,\beta=-1}^{1}\sum_{jm}\gfnomega{\alpha*}0j\left(r_{1},r_{\mathrm{src}}\right)\gfnomega{\beta}0j\left(r_{2},r_{\mathrm{src}}\right)\bm{l}\left(\mathbf{x}_{1}\right)\cdot\PBjm{\alpha*}\left(\hat{n}_{1}\right)\bm{l}\left(\mathbf{x}_{2}\right)\cdot\PBjm{\beta}\left(\hat{n}_{2}\right).
\label{eq:Comegaexpr}
\end{equation}
We may recast the expression as
\begin{equation}
C\left(\mathbf{x}_{1},\mathbf{x}_{2},\omega\right)=\bm{l}\left(\mathbf{x}_{1}\right)\bm{l}\left(\mathbf{x}_{2}\right):\mathbf{C}\left(\mathbf{x}_{1},\mathbf{x}_{2},\omega\right),
\label{eq:Ctensor}
\end{equation}
where the $3\times3$ rank-$2$ tensor
\begin{equation}
\mathbf{C}\left(\mathbf{x}_{1},\mathbf{x}_{2},\omega\right) = \omega^{2}P\left(\omega\right)\sum_{\alpha,\beta=-1}^{1}\sum_{jm}\gfnomega{\alpha*}0j\left(r_{1},r_{\mathrm{src}}\right)\gfnomega{\beta}0j\left(r_{2},r_{\mathrm{src}}\right)\PBjm{\alpha*}\left(\hat{n}_{1}\right)\PBjm{\beta}\left(\hat{n}_{2}\right)
\end{equation}
captures the covariance between the various components of the velocity of seismic waves, and the colon indicates a double contraction $\left(\mathbf{A}:\mathbf{B}=A_{ij}B_{ij}\right)$. We plot the cross-covariance as a function of time in Figure \ref{fig:Ctlosheight} for two different combinations of observation heights, both including and ignoring the line-of-sight projection. The results are sensitive to these systematic effects; precise modelling of the cross-covariances therefore needs to account for them.
\begin{figure}
\includegraphics[scale=0.9]{C_heights_los.eps}
\caption{Cross-covariance as a function of time measured between the observation points $\mathbf{x}_1=(R_\odot+200\,\mathrm{km},\pi/2,0)$ and $\mathbf{x}_2=(R_\odot+r_2,\pi/2,\pi/3)$. The legend indicates the height $r$ above the photosphere (in km) at which the observations are carried out for the point $\mathbf{x}_2$. The solid lines represent covariances between the radial components of the wave velocities, whereas the dashed lines represent covariance between line-of-sight projected wave velocities.}
\label{fig:Ctlosheight}
\end{figure}
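For the simplest configuration (radial components only, i.e. $\alpha=\beta=0$ in Equation \eqref{eq:Comegaexpr}), the sum over $m$ reduces, through the addition theorem, to a Legendre series in the cosine of the angular separation. The sketch below assembles such a schematic covariance; the per-degree Green function is a mock single-pole resonance with an invented dispersion relation, not the numerically computed Green function used in this work.
\begin{verbatim}
import numpy as np
from numpy.polynomial.legendre import legval

nw = 512
dnu = 5e-3 / nw                                    # frequency resolution (Hz)
w = 2 * np.pi * np.arange(nw) * dnu
P = np.exp(-0.5 * ((w - 2 * np.pi * 3e-3) / (2 * np.pi * 0.4e-3)) ** 2)
Delta, gamma = np.pi / 6, 2 * np.pi * 50e-6        # separation, damping

C = np.zeros(nw, dtype=complex)
for j in range(1, 101):
    wj = 2 * np.pi * 3e-3 * np.sqrt(j / 50.0)      # mock dispersion relation
    G = 1.0 / (wj ** 2 - w ** 2 - 1j * w * gamma)  # mock G^0_{0,jw}(r, r_src)
    Pj = legval(np.cos(Delta), np.eye(j + 1)[j])   # Legendre P_j(cos Delta)
    C += (2 * j + 1) / (4 * np.pi) * np.conj(G) * G * Pj
C *= w ** 2 * P                # weight by w^2 P(w), cf. the expression above
Ct = np.fft.irfft(C)           # schematic covariance as a function of time
print(Ct.shape)
\end{verbatim}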
The advantage of rewriting the expression in the form of Equation \eqref{eq:Ctensor} is that the tensor $\mathbf{C}$ is a function only of the measurement points $\mathbf{x}_1$ and $\mathbf{x}_2$, and does not depend on the detector location. This also means that under rotation of the observation points on the surface of an isotropic model of the Sun, the covariance $\mathbf{C}$ transforms as a scalar, that is $\left[\mathbf{C}(\mathbf{x}_1^\prime,\mathbf{x}_2^\prime,\omega)\right]^{\alpha \beta} = \left[\mathbf{C}(\mathbf{x}_1,\mathbf{x}_2,\omega)\right]^{\alpha \beta}$ where $\mathbf{x}_i^\prime$ is related to $\mathbf{x}_i$ through a rotation. The projection operator may be thought of as a final step carried out following the modeling of the covariance tensor of seismic wave velocities in the Sun. We demonstrate this rotational symmetry in Figure \ref{fig:Crot} for the points $\mathbf{x}_1\,=\,(R_\odot+200\,\mathrm{km},\pi/2,\pi/12)$ and $\mathbf{x}_2\,=\,(R_\odot+200\,\mathrm{km},\pi/2,\pi/3)$, where we compute the line-of-sight projected cross-covariance in two ways: (1) by using Equation \eqref{eq:Comegaexpr} directly for $\mathbf{x}_1$ and $\mathbf{x}_2$, and (2) by computing the tensor $\mathbf{C}(\mathbf{x}_1^\prime,\mathbf{x}_2^\prime,\omega)$ for $\mathbf{x}_1^\prime=(R_\odot+200\,\mathrm{km},\pi/2,\pi/6)$ and $\mathbf{x}_2^\prime=(R_\odot+200\,\mathrm{km},\pi/2,5\pi/12)$, and using the fact that it transforms as a scalar under rotation. We find a close match with the difference being almost entirely numerical, demonstrating the ease of transforming tensors between pairs of points on the sphere that are related by a rotation. We note that such a rotational transformation crucially assumes a separation between the observation height and the angular coordinates; it might therefore not lead to accurate results if the angle of rotation is large and the center-to-limb difference in line-formation height is significant.
\begin{figure}
\includegraphics[scale=0.8]{Ct_flows_rot.eps}
\caption{Line-of-sight projected cross-covariance as a function of time. The solid line is computed directly using Equation \eqref{eq:Comegaexpr} for $\mathbf{x}_1=(R_\odot+200\,\mathrm{km},\pi/2,\pi/12)$ and $\mathbf{x}_2=(R_\odot+200\,\mathrm{km},\pi/2,\pi/3)$, and the dots are computed for $\mathbf{x}^\prime_1=(R_\odot+200\,\mathrm{km},\pi/2,\pi/6)$ and $\mathbf{x}^\prime_2=(R_\odot+200\,\mathrm{km},\pi/2,5\pi/12)$ followed by rotating the coordinate system by $\pi/12$ about $\mathbf{e}_z$ to align $\mathbf{x}^\prime_i$ with $\mathbf{x}_i$. The close match demonstrates the transformation of the cross-covariance as a scalar under rotation.}
\label{fig:Crot}
\end{figure}
\section{Flows as a perturbation}
Model S \citep{1996Sci...272.1286C} --- which is often used as a standard solar model --- is spherically symmetric and does not explicitly account for the advection of seismic waves by flows present in the Sun. Weak flows in the Sun are therefore treated as perturbations about this model, and their magnitudes and profiles may be inferred in the first Born approximation \citep{2002ApJ...571..966G}. We denote the flow velocity at a point $\mathbf{x}$ within the Sun by the symbol $\mathbf{u}\left(\mathbf{x}\right)$. The advection of seismic waves by the underlying velocity fields is represented by the operator $\delta\mathcal{L}\left(\mathbf{x};\omega\right)=2i\omega\rho\mathbf{u}\left(\mathbf{x}\right)\cdot\grad$ to a linear order in the flow velocity. This advection alters the local wave speed and changes the measured seismic signal at the solar surface.
We expand the velocity field in the PB VSH basis as
\begin{equation}
\mathbf{u}\left(\mathbf{x}\right)=u_{00}^{0}\left(r\right)\PB 0{00}\left(\hat{n}\right)+\sum_{\gamma=-1}^{1}\sum_{\ell=1}^{\infty}\sum_{m=-\ell}^{\ell}u_{\ell m}^{\gamma}\left(r\right)\PBlm{\gamma}\left(\hat{n}\right).\label{eq:u_PB_VSH}
\end{equation}
The first term is purely radial and spherically symmetric, and we may choose to drop the term depending on the type of flow that we
are interested in. We use the shorthand $\sum_{\ell m\gamma}$ to denote $\sum_{\ell=0}^{\infty}\sum_{m=-\ell}^{\ell}\sum_{\gamma=-\min\left(1,\ell\right)}^{\min\left(1,\ell\right)}$ --- where $\min\left(1,\ell\right)$ chooses the minimum of $1$ and $\ell$ and restricts $\gamma$ to $0$ for $\ell=0$ --- and rewrite the flow field as
\begin{equation}
\mathbf{u}\left(\mathbf{x}\right)=\sum_{\ell m\gamma}u_{\ell m}^{\gamma}\left(r\right)\PBlm{\gamma}\left(\hat{n}\right).\label{eq:u_PB_VSH_aux}
\end{equation}
\subsection{A change in the Green function}
The Green function $\mathbf{G}\left(\mathbf{x}_{i},\mathbf{x}_{\mathrm{src}};\omega\right) $ dictates the propagation of seismic waves having a frequency $\nu=\omega/2\pi$ between the points $\mathbf{x}_{src}$ and $\mathbf{x}_i$ in the Sun. A shift in wave propagation properties may therefore be described in terms of an altered Green function, one that differs from the original by $\delta\mathbf{G}\left(\mathbf{x}_{i},\mathbf{x}_{\mathrm{src}};\omega\right) $. Our goal is to connect a variation in wave propagation to a corresponding difference in the background model of the Sun. A change in the wave operator by $\delta\mathcal{L}\left(\mathbf{x};\omega\right)$ leads to a variation in the Green function that may be computed in the first Born approximation to be
\begin{align}
\delta\mathbf{G}\left(\mathbf{x}_{i},\mathbf{x}_{\mathrm{src}};\omega\right) & =-\int d\mathbf{x}\mathbf{G}\left(\mathbf{x}_{i},\mathbf{x};\omega\right)\cdot\left[\delta\mathcal{L}\left(\mathbf{x};\omega\right)\mathbf{G}\left(\mathbf{x},\mathbf{x}_{\mathrm{src}};\omega\right)\right],\label{eq:first_born_approx}
\end{align}
The integral is carried out over all the scattering points in the solar interior. We evaluate the angular part of the integral analytically using Equation (\ref{eq:triple_int}), and cast Equation (\ref{eq:first_born_approx}) in the form
\begin{align}
\delta\mathbf{G}\left(\mathbf{x}_{i},\mathbf{x}_{\mathrm{src}};\omega\right) & =\sum_{\ell m\gamma}\int r^{2}dr\,u_{\ell m}^{\gamma}\left(r\right)\sum_{j_{1}j_{2}}\sum_{\alpha_{1}\beta_{2}}J_{\ell j_{1}j_{2}\omega;\alpha_{1}\beta_{2}}^{-\gamma}\left(r,r_{i},r_{\mathrm{src}}\right)\PB{j_{1}j_{2}\alpha_{1}\beta_{2}}{\ell m}\left(\hat{n}_{i},\hat{n}_{\mathrm{src}}\right),\label{eq:dG_PB_VSH}
\end{align}
where
\begin{equation}
J_{\ell j_{1}j_{2}\omega;\alpha_{1}\beta_{2}}^{\gamma}\left(r,r_{i},r_{\mathrm{src}}\right)=-2i\omega\rho\,\sum_{\alpha_{2}\beta_{1}}\gfnomega{\beta_{1}}{\alpha_{1}}{j_{1}}\left(r,r_{i}\right)J_{\ell j_{1}j_{2}}^{\gamma\beta_{1}\alpha_{2}}\left(\gfnomega{\alpha_{2}}{\beta_{2}}{j_{2}}\left(r,r_{\mathrm{src}}\right)\right),\label{eq:Jgamma_defn}
\end{equation}
with $J_{\ell j_{1}j_{2}}^{\gamma\beta_{1}\alpha_{2}}$ as defined in Equation (\ref{eq:J}), and the angular function $\PB{j_{1}j_{2}\alpha_{1}\beta_{2}}{\ell m}\left(\hat{n}_{i},\hat{n}_{\mathrm{src}}\right)$ is a bipolar spherical harmonic that couples the angular momenta $j_{1}$ and $j_{2}$ with $\ell$, defined as
\begin{equation}
\PB{j_{1}j_{2}\alpha_{1}\beta_{2}}{\ell m}\left(\hat{n}_{i},\hat{n}_{\mathrm{src}}\right)=\sum_{m_{1}m_{2}}C_{j_{1}m_{1}j_{2}m_{2}}^{\ell m}\PB{\alpha_{1}}{j_{1}m_{1}}\left(\hat{n}_{i}\right)\PB{\beta_{2}}{j_{2}m_{2}}\left(\hat{n}_{\mathrm{src}}\right).
\end{equation}
We derive the relation in Appendix \ref{subsec:Appendix_Green-function}. The radial component $J_{\ell j_{1}j_{2}\omega;\alpha_{1}\beta_{2}}^{\gamma}$ satisfies the following symmetry relations:
\begin{equation}
\begin{aligned}J_{\ell j_{1}j_{2}\omega;\alpha_{1}\beta_{2}}^{-\gamma} & =\left(-1\right)^{\ell+j_{1}+j_{2}}J_{\ell j_{1}j_{2}\omega;\alpha_{1}\beta_{2}}^{\gamma},\\
J_{\ell j_{1}j_{2}\omega;\pm\alpha_{1},\pm\beta_{2}}^{\gamma} & =J_{\ell j_{1}j_{2}\omega;\alpha_{1}\beta_{2}}^{\gamma}.
\end{aligned}
\label{eq:Jgamma_symemtry}
\end{equation}
The first equation tells us that $J_{\ell j_{1}j_{2}\omega;\alpha_{1}\beta_{2}}^{0}$ is non-zero only if $\ell+j_{1}+j_{2}$ is even.
Under the radial-source assumption, we only need to evaluate the terms $J_{\ell j_{1}j_{2}\omega;\alpha_{1}0}^{\gamma}$ for $\gamma=0$ and $\gamma=1$, bearing in mind that the $\gamma=-1$ term is related to the $\gamma=1$ term through Equation (\ref{eq:Jgamma_symemtry}).
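A minimal Julia sketch of how this parity relation halves the storage requirement is shown below; \texttt{Jplus} is an assumed radial array holding $J_{\ell j_{1}j_{2}\omega;\alpha_{1}\beta_{2}}^{+1}$.
\begin{verbatim}
# J^{-gamma} = (-1)^(l + j1 + j2) J^{gamma}: only gamma = 0, +1 are stored
Jminus(Jplus::AbstractVector, l::Int, j1::Int, j2::Int) =
    iseven(l + j1 + j2) ? Jplus : -Jplus

# The same parity forces J^0 to vanish for odd l + j1 + j2
J0_allowed(l, j1, j2) = iseven(l + j1 + j2)
\end{verbatim}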
We define the terms
\begin{align}
N_{\ell}^{j_{1}j_{2}} & =\sqrt{\frac{\left(2j_{1}+1\right)\left(2j_{2}+1\right)}{4\pi\left(2\ell+1\right)}},\\
\zeta_{\ell}^{j_{1}j_{2}} & =\frac{\left(\Omega_{j_{1}}^{0}\right)^{2}+\left(\Omega_{j_{2}}^{0}\right)^{2}-\left(\Omega_{\ell}^{0}\right)^{2}}{\Omega_{j_{1}}^{0}\Omega_{j_{2}}^{0}},
\end{align}to rewrite the radial function $J_{\ell j_{1}j_{2}\omega;\alpha_{1}\beta_{2}}^{\gamma}\left(r,r_{i},r_{\mathrm{src}}\right)$ as
\begin{align}
J_{\ell j_{1}j_{2}\omega;\alpha_{1}\beta_{2}}^{\gamma}\left(r,r_{i},r_{\mathrm{src}}\right) & =-2i\omega\rho N_{\ell}^{j_{1}j_{2}}C_{j_{1}0j_{2}-\gamma}^{\ell-\gamma}\,\left(\frac{\Omega_{j_{2}}^{0}}{r}\right)^{\left|\gamma\right|}\mathcal{G}_{\ell j_{1}j_{2}\omega;\alpha_{1}\beta_{2}}^{\gamma}\left(r,r_{i},r_{\mathrm{src}}\right),\label{eq:G_radial}
\end{align}
and list the values of $\mathcal{G}_{\ell j_{1}j_{2}\omega;\alpha_{1}\beta_{2}}^{0}\left(r,r_{i},r_{\mathrm{src}}\right)$ and $\mathcal{G}_{\ell j_{1}j_{2}\omega;\alpha_{1}\beta_{2}}^{1}\left(r,r_{i},r_{\mathrm{src}}\right)$ in Table \ref{tab:Expressions-for-J}. We note that $\mathcal{G}_{\ell j_{1}j_{2}\omega;\alpha_{1}\beta_{2}}^{-1}=\mathcal{G}_{\ell j_{1}j_{2}\omega;\alpha_{1}\beta_{2}}^{1}$. We list the Clebsch-Gordan relations involved in the evaluation of the terms $\mathcal{G}_{\ell j_{1}j_{2}\omega;\alpha_{1}\beta_{2}}^{\gamma}$ in Appendix \ref{sec:Appendix_VSH-triple-integral}.
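These geometric factors reduce to a few lines of Julia; the sketch below assumes the standard definition $\Omega_{\ell}^{0}=\sqrt{\ell(\ell+1)/2}$ for the spin factor.
\begin{verbatim}
Omega(l) = sqrt(l * (l + 1) / 2)   # assumed definition of Omega_l^0

N(l, j1, j2) = sqrt((2j1 + 1) * (2j2 + 1) / (4 * pi * (2l + 1)))

zeta(l, j1, j2) = (Omega(j1)^2 + Omega(j2)^2 - Omega(l)^2) /
                  (Omega(j1) * Omega(j2))

# Example: factors entering an l = 2 kernel from the j1 = j2 = 10 modes
N(2, 10, 10), zeta(2, 10, 10)
\end{verbatim}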
\begin{table}
\renewcommand{\arraystretch}{1.5}%
\begin{tabular}{|c|c|}
\hline
Term & Expression\tabularnewline
\hline
\hline
$\mathcal{G}_{\ell j_{1}j_{2}\omega;\alpha_{1}\beta_{2}}^{0}\left(r,r_{i},r_{\mathrm{src}}\right)$ & $\gfnomega 0{\alpha_{1}}{j_{1}}\left(r,r_{i}\right)\frac{d}{dr}\gfnomega 0{\beta_{2}}{j_{2}}\left(r,r_{\mathrm{src}}\right)+\zeta_{\ell}^{j_{1}j_{2}}\gfnomega 1{\alpha_{1}}{j_{1}}\left(r,r_{i}\right)\frac{d}{dr}\gfnomega 1{\beta_{2}}{j_{2}}\left(r,r_{\mathrm{src}}\right)$\tabularnewline
\hline
$\mathcal{G}_{\ell j_{1}j_{2}\omega;\alpha_{1}\beta_{2}}^{1}\left(r,r_{i},r_{\mathrm{src}}\right)$ & $\begin{array}{c}
\gfnomega 0{\alpha_{1}}{j_{1}}\left(r,r_{i}\right)\gfnomega 0{\beta_{2}}{j_{2}}\left(r,r_{\mathrm{src}}\right)+\zeta_{\ell}^{j_{1}j_{2}}\gfnomega 1{\alpha_{1}}{j_{1}}\left(r,r_{i}\right)\gfnomega 1{\beta_{2}}{j_{2}}\left(r,r_{\mathrm{src}}\right)\\
-\frac{1}{\Omega_{j_{1}}^{0}}\gfnomega 1{\alpha_{1}}{j_{1}}\left(r,r_{i}\right)\gfnomega 0{\beta_{2}}{j_{2}}\left(r,r_{\mathrm{src}}\right)-\frac{1}{\Omega_{j_{2}}^{0}}\gfnomega 0{\alpha_{1}}{j_{1}}\left(r,r_{i}\right)\gfnomega 1{\beta_{2}}{j_{2}}\left(r,r_{\mathrm{src}}\right)
\end{array}$\tabularnewline
\hline
\end{tabular}
\caption{Expressions for $\mathcal{G}_{\ell j_{1}j_{2}\omega;\alpha_{1}\beta_{2}}^{\gamma}\left(r,r_{i},r_{\mathrm{src}}\right)$ that contribute towards the change in the Green function in the PB VSH basis.\label{tab:Expressions-for-J}}
\renewcommand{\arraystretch}{1}
\end{table}
\subsection{Change in the cross-covariance}
The presence of flows in the background model alters properties of seismic waves such as the local propagation speed. Such a difference manifests itself in the surface measurements of wave velocity, and consequently in the two-point cross-covariances. We may express the resultant change in the cross-covariance in terms of the changes in the Green function as
\begin{equation}
\delta C\left(\mathbf{x}_{1},\mathbf{x}_{2};\omega\right)=\bm{l}\left(\mathbf{x}_{1}\right)\bm{l}\left(\mathbf{x}_{2}\right):\delta\mathbf{C}\left(\mathbf{x}_{1},\mathbf{x}_{2};\omega\right),\label{eq:dC}
\end{equation}where
\begin{align}
\delta\mathbf{C}\left(\mathbf{x}_{1},\mathbf{x}_{2};\omega\right) & =\omega^{2}P\left(\omega\right)\int d\Omega_{\mathrm{src}}\,\left[\delta\mathbf{G}_{r}^{*}\left(\mathbf{x}_{1},\mathbf{x}_{\mathrm{src}};\omega\right)\mathbf{G}_{r}\left(\mathbf{x}_{2},\mathbf{x}_{\mathrm{src}};\omega\right)+\left(1\leftrightarrow2\right)^{\dagger}\right],\label{eq:deltaC}
\end{align}and the subscript $r$ indicates that the second index of $\mathbf{G}$ is chosen to coincide with the radial direction at $\mathbf{x}_{\mathrm{src}}$. The term $\left(1\leftrightarrow2\right)^{\dagger}$ is obtained by switching the observation points $\mathbf{x}_{1}$ and $\mathbf{x}_{2}$ in the first term, followed by evaluating its conjugate-transpose. Substituting Equation (\ref{eq:dG_PB_VSH}) into Equation (\ref{eq:deltaC}) and integrating over the angular distribution of the sources, we obtain
\begin{equation}
\delta\mathbf{C}\left(\mathbf{x}_{1},\mathbf{x}_{2};\omega\right)=\sum_{\ell m\gamma}\int r^{2}dr\,u_{\ell m}^{\gamma}\left(r\right)\,\sum_{j_{1}j_{2}}\sum_{\alpha_{1}\alpha_{2}}\mathcal{C}_{\ell j_{1}j_{2}\omega;\alpha_{1}\alpha_{2}}^{\gamma}\left(r,r_{1},r_{2},r_{\mathrm{src}}\right)\PB{j_{1}j_{2},\alpha_{1}\alpha_{2}}{\ell m}\left(\hat{n}_{1},\hat{n}_{2}\right),\label{eq:dC_biposh}
\end{equation}where
\begin{align}
\mathcal{C}_{\ell j_{1}j_{2}\omega;\alpha_{1}\alpha_{2}}^{\gamma}\left(r,r_{1},r_{2},r_{\mathrm{src}}\right) & =\omega^{2}P\left(\omega\right)\left(J_{\ell j_{1}j_{2}\omega;\alpha_{1}0}^{-\gamma*}\left(r,r_{1},r_{\mathrm{src}}\right)\gfnomega{\alpha_{2}}0{j_{2}}\left(r_{2},r_{\mathrm{src}}\right)\right.\nonumber \\
& \left.\quad+\gfnomega{\alpha_{1}*}0{j_{1}}\left(r_{1},r_{\mathrm{src}}\right)J_{\ell j_{2}j_{1}\omega;\alpha_{2}0}^{\gamma}\left(r,r_{2},r_{\mathrm{src}}\right)\right),\label{eq:C_components_defn}
\end{align}
and $J_{\ell j_{1}j_{2}\omega;\alpha0}^{\gamma}$ is defined in Equation (\ref{eq:Jgamma_defn}). We derive the expression in Equation (\ref{eq:deltaC}) in Appendix \ref{sec:Appendix_source_angle_int}. The function $\mathcal{C}_{\ell j_{1}j_{2}\omega;\alpha_{1}\alpha_{2}}^{\gamma}$ obeys the symmetry relations in Equation \eqref{eq:Jgamma_symemtry}, as well as
\begin{equation}
\mathcal{C}_{\ell j_{1}j_{2}\omega;\alpha_{1}\alpha_{2}}^{\gamma}\left(r,r_{1},r_{2},r_{\mathrm{src}}\right)=\mathcal{C}_{\ell j_{2}j_{1}\omega;\alpha_{2}\alpha_{1}}^{-\gamma*}\left(r,r_{2},r_{1},r_{\mathrm{src}}\right).
\end{equation}
We define the line-of-sight-projected bipolar spherical harmonic
\begin{equation}
P_{\ell m}^{j_{1}j_{2},\alpha_{1}\alpha_{2}}\left(\mathbf{x}_{1},\mathbf{x}_{2}\right)=\bm{l}\left(\mathbf{x}_{1}\right)\bm{l}\left(\mathbf{x}_{2}\right):\PB{j_{1}j_{2},\alpha_{1}\alpha_{2}}{\ell m}\left(\hat{n}_{1},\hat{n}_{2}\right),
\end{equation}
and collect the terms summed over in Equation (\ref{eq:dC_biposh}) to define
\begin{equation}
\mathcal{C}_{\ell m}^{\gamma}\left(r,\mathbf{x}_{1},\mathbf{x}_{2};\omega\right)=\sum_{j_{1}j_{2}}\sum_{\alpha_{1}\alpha_{2}}\mathcal{C}_{\ell j_{1}j_{2}\omega;\alpha_{1}\alpha_{2}}^{\gamma}\left(r,r_{1},r_{2},r_{\mathrm{src}}\right)P_{\ell m}^{j_{1}j_{2},\alpha_{1}\alpha_{2}}\left(\mathbf{x}_{1},\mathbf{x}_{2}\right)
\end{equation}
in order to simplify the notation. We rewrite Equation \eqref{eq:dC} in terms of this as
\begin{equation}
\delta C\left(\mathbf{x}_{1},\mathbf{x}_{2};\omega\right)=\sum_{\ell m\gamma}\int r^{2}dr\,u_{\ell m}^{\gamma}\left(r\right)\,\mathcal{C}_{\ell m}^{\gamma}\left(r,\mathbf{x}_{1},\mathbf{x}_{2};\omega\right).\label{eq:dC_integrated}
\end{equation}
\section{Sensitivity kernel\label{subsec:Kernels}}
A change in the cross-covariance of seismic waves by $\delta C\left(\mathbf{x}_{1},\mathbf{x}_{2};\omega\right)$ as measured at the points $\mathbf{x}_1$ and $\mathbf{x}_2$ in turn results in a variation in the time $\tau_{12}$ that the wave takes to travel between these two points. At linear order, this change in travel time $\delta\tau_{12}$ may be related to the change in the cross-covariance through
\begin{equation}
\delta\tau_{12}=\int_{0}^{\infty}\frac{d\omega}{2\pi}\,2\Re\left[h^{*}\left(\mathbf{x}_{1},\mathbf{x}_{2},\omega\right)\delta C\left(\mathbf{x}_{1},\mathbf{x}_{2},\omega\right)\right],
\label{eq:dtau_freqdomain}
\end{equation}
\citep{2002ApJ...571..966G}. Substituting Equation (\ref{eq:dC_integrated}) into Equation (\ref{eq:dtau_freqdomain}), we obtain a relation between the travel-time shifts and the components of the background flow velocity field:
\begin{equation}
\delta\tau_{12}=\sum_{\ell m\gamma}\int r^{2}dr\,K_{\gamma,\ell m}\left(r,\mathbf{x}_{1},\mathbf{x}_{2}\right)u_{\ell m}^{\gamma}\left(r\right),\label{eq:travel-time-kernel_velocity-integral}
\end{equation}
where $K_{\gamma,\ell m}\left(r;\mathbf{x}_{1},\mathbf{x}_{2}\right)$, defined as
\begin{equation}
K_{\gamma,\ell m}\left(r;\mathbf{x}_{1},\mathbf{x}_{2}\right)=\int_{0}^{\infty}\frac{d\omega}{2\pi}\,\left[h^{*}\left(\mathbf{x}_{1},\mathbf{x}_{2},\omega\right)\mathcal{C}_{\ell m}^{\gamma}\left(r,\mathbf{x}_{1},\mathbf{x}_{2};\omega\right)+h\left(\mathbf{x}_{1},\mathbf{x}_{2},\omega\right)\left(-1\right)^{m}\mathcal{C}_{\ell-m}^{-\gamma*}\left(r,\mathbf{x}_{1},\mathbf{x}_{2};\omega\right)\right],\label{eq:kernel_components}
\end{equation}
is the covariant component of the sensitivity kernel corresponding to the component of the flow velocity denoted by $\gamma$ in the PB VSH basis. We see that $K_{-\gamma,\ell-m}=\left(-1\right)^{m}K_{\gamma,\ell m}^{*}$, reaffirming the vector nature of the kernel. Specifically, the components of the kernel for $\gamma=-1$ and $\gamma=1$ are related through $K_{-1,\ell-m}=\left(-1\right)^{m}K_{1,\ell m}^{*}$. This also tells us that the kernel $K_{0,\ell0}$ --- which corresponds to axisymmetric radial flows --- is purely real. Equation \eqref{eq:travel-time-kernel_velocity-integral} sets up the inverse problem that we need to solve to compute the velocity components. We may further use the condition $u_{\ell m}^{\gamma*} = (-1)^m u_{\ell-m}^{-\gamma}$ --- arising from the fact that the velocity $\mathbf{u}(\mathbf{x})$ is real --- to limit the number of terms that appear in Equation (\ref{eq:travel-time-kernel_velocity-integral}).
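As a concrete illustration, the Julia sketch below evaluates the sum in Equation \eqref{eq:travel-time-kernel_velocity-integral} over a half-space of indices, pairing each $(\gamma,m)$ term with its complex conjugate at $(-\gamma,-m)$; \texttt{term} is a hypothetical function returning $\int r^{2}dr\,K_{\gamma,\ell m}\left(r\right)u_{\ell m}^{\gamma}\left(r\right)$.
\begin{verbatim}
# The (gamma, m) and (-gamma, -m) terms are complex conjugates, so only
# half the indices need to be evaluated explicitly
function traveltime(term, lmax)
    s = 0.0
    for l in 0:lmax
        s += real(term(0, l, 0))                 # self-conjugate term
        l > 0 && (s += 2 * real(term(1, l, 0)))  # pairs with (-1, 0)
        for m in 1:l, gamma in -min(1, l):min(1, l)
            s += 2 * real(term(gamma, l, m))     # pairs with (-gamma, -m)
        end
    end
    return s
end
\end{verbatim}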
We may use the symmetry relations from Equation (\ref{eq:Jgamma_symemtry}) to obtain the expression for $K_{\gamma,\ell m}\left(r;\mathbf{x}_{1},\mathbf{x}_{2}\right)$ in the PB VSH basis to be
\begin{align}
K_{\gamma,\ell m}\left(r;\mathbf{x}_{1},\mathbf{x}_{2}\right) & =\int_{0}^{\infty}\frac{d\omega}{2\pi}\,\sum_{j_{1}j_{2}}\sum_{\alpha_{1}\alpha_{2}}\mathcal{K}_{\ell j_{1}j_{2}\omega;\alpha_{1}\alpha_{2}}^{\gamma}\left(r,\mathbf{x}_{1},\mathbf{x}_{2}\right)P_{\ell m}^{j_{1}j_{2},\alpha_{1}\alpha_{2}}\left(\mathbf{x}_{1},\mathbf{x}_{2}\right),\label{eq:K_sepvar}
\end{align}
where we have defined
\begin{align}
\mathcal{K}_{\ell j_{1}j_{2}\omega;\alpha_{1}\alpha_{2}}^{\gamma}\left(r,\mathbf{x}_{1},\mathbf{x}_{2}\right) & =2\Re\left[h^{*}\left(\mathbf{x}_{1},\mathbf{x}_{2},\omega\right)\mathcal{C}_{\ell j_{1}j_{2}\omega;\alpha_{1}\alpha_{2}}^{\gamma}\left(r,r_{1},r_{2};\omega\right)\right].\label{eq:K_components_defn}
\end{align}
The function $\mathcal{K}_{\ell j_{1}j_{2}\omega;\alpha_{1}\alpha_{2}}^{\gamma}$ satisfies symmetry relations analogous to $\mathcal{C}_{\ell j_{1}j_{2}\omega;\alpha_{1}\alpha_{2}}^{\gamma}$. Specifically, we use
\begin{equation}
\mathcal{K}_{\ell j_{1}j_{2}\omega;\alpha_{1}\alpha_{2}}^{-\gamma}=\left(-1\right)^{\ell+j_{1}+j_{2}}\mathcal{K}_{\ell j_{1}j_{2}\omega;\alpha_{1}\alpha_{2}}^{\gamma},
\end{equation}
to see that $\mathcal{K}_{\ell j_{1}j_{2}\omega;\alpha_{1}\alpha_{2}}^{0}$ is non-zero only for even values of $\ell+j_{1}+j_{2}$, and the combinations $\mathcal{K}_{\ell j_{1}j_{2}\omega;\alpha_{1}\alpha_{2}}^{1}+\mathcal{K}_{\ell j_{1}j_{2}\omega;\alpha_{1}\alpha_{2}}^{-1}$ and $\mathcal{K}_{\ell j_{1}j_{2}\omega;\alpha_{1}\alpha_{2}}^{1}-\mathcal{K}_{\ell j_{1}j_{2}\omega;\alpha_{1}\alpha_{2}}^{-1}$ are non-zero for even and odd values of $\ell+j_{1}+j_{2}$ respectively.
We may compute the three-dimensional profile of the kernel by summing up over the kernel components and using $K^\gamma_{\ell m}=K^*_{\gamma,\ell m}$ to obtain
\begin{equation}
\mathbf{K}\left(\mathbf{x};\mathbf{x}_{1},\mathbf{x}_{2}\right)=\sum_{\gamma\ell m}K_{\gamma,\ell m}^{*}\left(r;\mathbf{x}_{1},\mathbf{x}_{2}\right)\PBlm{\gamma}\left(\hat{n}\right).\label{eq:K_3D}
\end{equation}We may use the expansion of the PB VSH in the spherical polar basis and obtain the appropriately directed components of the kernel to be
\begin{align}
K_{r}\left(\mathbf{x};\mathbf{x}_{1},\mathbf{x}_{2}\right) & =\sum_{\ell m}K_{0,\ell m}^{*}\left(r;\mathbf{x}_{1},\mathbf{x}_{2}\right)Y_{\ell m}\left(\hat{n}\right),\\
K_{\theta}\left(\mathbf{x};\mathbf{x}_{1},\mathbf{x}_{2}\right) & =-\sqrt{2}\sum_{\ell m}\Re\left[K_{1,\ell m}^{*}\left(r;\mathbf{x}_{1},\mathbf{x}_{2}\right)\sgshlm{+1}\left(\hat{n}\right)\right],\\
K_{\phi}\left(\mathbf{x};\mathbf{x}_{1},\mathbf{x}_{2}\right) & =\sqrt{2}\sum_{\ell m}\Im\left[K_{1,\ell m}^{*}\left(r;\mathbf{x}_{1},\mathbf{x}_{2}\right)\sgshlm{+1}\left(\hat{n}\right)\right].
\end{align}
We plot the cross-sections of the three-dimensional profile of the kernel in Figure \ref{fig:Ku_3D} choosing the observation points to be $\mathbf{x}_1=(R_\odot+200\,\mathrm{km},\pi/2,0)$ and $\mathbf{x}_2=(R_\odot+200\,\mathrm{km},\pi/2,\pi/3)$. The panel on the left shows a longitudinal slice through $\phi=\pi/6$ --- midway between the azimuths at which the measurements are carried out --- whereas the one on the right shows a latitudinal section through the Equator, passing through the observation points. The kernels have been computed by summing up over VSH modes of the flow velocity with angular degrees in the range $0\leq\ell\leq160$, where the upper bound arises from the limits of the numerical accuracy in evaluating Clebsch-Gordan coefficients. We describe the numerical evaluation in section \ref{sec:Numerical}.
\begin{figure*}
\includegraphics[scale=0.85]{Ku_3D_sections.eps}
\caption{\label{fig:Ku_3D} Longitudinal (left) and Equatorial (right) sections of the three-dimensional profile of the kernel for two observation points located on the Equator separated by 60 degrees at a height of $200$ km above the photosphere. The points have been marked in yellow in the panel on the right. The kernel has been multiplied by the radial sound-speed profile, and the color scale has been saturated to highlight the deeper layers. The kernel has been computed in CGS units.}
\end{figure*}
The kernels for $m=0$ are of particular interest as these correspond to azimuthally symmetric flow profiles such as meridional flows and differential rotation. We shall look at these in the following sections.
\subsection{Kernels for axisymmetric flows}
Components $u_{\ell0}^{\left(\alpha\right)}\left(r\right)$ of axisymmetric flows in the Hansen VSH basis have a geometrical interpretation arising from the fact that the Hansen basis vector $\Hansen{\left(1\right)}{\ell0}\left(\hat{n}\right)$ is directed along $\mathbf{e}_{\theta}$ whereas $\Hansen{\left(0\right)}{\ell0}\left(\hat{n}\right)$ is directed along $\mathbf{e}_{\phi}$. This implies that spheroidal velocity profiles such as meridional flows may be expressed in terms of the two sets of components $u_{\ell0}^{\left(-1\right)}\left(r\right)$ and $u_{\ell0}^{\left(1\right)}\left(r\right)$ whereas toroidal profiles may be expressed in terms of $u_{\ell0}^{\left(0\right)}\left(r\right)$. We may use the relationship between the Hansen and the PB VSH bases from Equation (\ref{eq:PB_Hansen_conversion}) alongside the conjugation relation $u_{\ell0}^{\alpha*}=u_{\ell0}^{-\alpha}$ to obtain
\begin{equation}
\begin{aligned}u_{\ell0}^{\left(1\right)}\left(r\right) & =\sqrt{2}\Re\left[u_{\ell0}^{1}\left(r\right)\right],\\
u_{\ell0}^{\left(0\right)}\left(r\right) & =-\sqrt{2}i\Im\left[u_{\ell0}^{1}\left(r\right)\right].
\end{aligned}
\label{eq:u_axisym_comp_PB_Hansen}
\end{equation}
This further implies that the tangential components of axisymmetric flows may be expanded in terms of just the PB VSH components $u_{\ell0}^{1}\left(r\right)$. We develop the following analysis in terms of the real and imaginary components of $u_{\ell0}^{1}\left(r\right)$ to demonstrate that the kernels are manifestly real.
We may use $K_{-1,\ell0}=K_{1,\ell0}^{*}$ and rewrite the expression for the travel-time shift from Equation (\ref{eq:travel-time-kernel_velocity-integral})
in the form
\begin{equation}
\delta\tau_{12}=\sum_{\ell}\int r^{2}dr\,\left[K_{0,\ell0}\left(r\right)u_{\ell0}^{0}\left(r\right)+2\Re\left[K_{1,\ell0}\left(r\right)\right]\Re\left[u_{\ell0}^{1}\left(r\right)\right]-2\Im\left[K_{1,\ell0}\left(r\right)\right]\Im\left[u_{\ell0}^{1}\left(r\right)\right]\right],\label{eq:dtau_axisym_PB}
\end{equation}where we have suppressed the explicit dependence of the kernel components on the observation points $\mathbf{x}_{1}$ and $\mathbf{x}_{2}$ for brevity. We define
\begin{equation}
\begin{aligned}K_{r,\ell0}\left(r\right) & =K_{0,\ell0}\left(r\right),\\
K_{\theta,\ell0}\left(r\right) & =2\Re\left[K_{1,\ell0}\left(r\right)\right],\\
K_{\phi,\ell0}\left(r\right) & =-2\Im\left[K_{1,\ell0}\left(r\right)\right],
\end{aligned}
\label{eq:K_r_theta_phi_l0}
\end{equation}
and rewrite Equation (\ref{eq:dtau_axisym_PB}) as \begin{equation}
\delta\tau_{12}=\sum_{\ell}\int_{0}^{R_{\odot}}r^{2}dr\,\left[K_{r,\ell0}\left(r\right)u_{\ell0}^{0}\left(r\right)+K_{\theta,\ell0}\left(r\right)\Re\left[u_{\ell0}^{1}\left(r\right)\right]+K_{\phi,\ell0}\left(r\right)\Im\left[u_{\ell0}^{1}\left(r\right)\right]\right].\label{eq:dtau_axisym_PB_geom}
\end{equation}
The first term in the expression corresponds to a radial flow, the second to a poloidal flow, whereas the last term corresponds to a toroidal flow. We may use information about the geometrical orientation of the flow field --- if available --- to further restrict the number of coefficients. We also note that the components of the kernel as defined here are related to those in the Hansen basis through a scaling.
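A minimal Julia sketch of this assembly, assuming the kernel components and the hypothetical flow arrays \texttt{u0} and \texttt{u1} are sampled on a common radial grid \texttt{r} (in cm), reads:
\begin{verbatim}
# Trapezoidal quadrature over the radial grid
trapz(r, f) = sum((f[i] + f[i+1]) / 2 * (r[i+1] - r[i])
              for i in 1:length(r)-1)

function dtau_axisym(r, K0l0, K1l0, u0, u1)
    Kr   = real.(K0l0)          # K_{0,l0} is purely real
    Kth  = 2 .* real.(K1l0)
    Kphi = -2 .* imag.(K1l0)
    integrand = r.^2 .* (Kr .* u0 .+ Kth .* real.(u1) .+
                         Kphi .* imag.(u1))
    return trapz(r, integrand)
end
\end{verbatim}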
\subsection{Kernels for meridional flows}
Meridional flows are restricted to the $\mathbf{e}_{r}-\mathbf{e}_{\theta}$ plane by definition, and are assumed to be azimuthally symmetric. Under these assumptions we need to solve only for the $m=0$ component, and may further use the fact that the flow components are real and satisfy $u_{\ell0}^{+1}=u_{\ell0}^{-1}$. Equation (\ref{eq:dtau_axisym_PB_geom}) tells us that a change in travel time may be related to the flow coefficients through
\begin{align}
\delta\tau_{12} & =\sum_{\ell}\int r^{2}dr\,\left[K_{r,\ell0}\left(r\right)u_{\ell0}^{0}\left(r\right)+K_{\theta,\ell0}\left(r\right)u_{\ell0}^{1}\left(r\right)\right].
\end{align}
We therefore need to compute the components $K_{r,\ell0}\left(r;\mathbf{x}_{1},\mathbf{x}_{2}\right)$ and $K_{\theta,\ell0}\left(r;\mathbf{x}_{1},\mathbf{x}_{2}\right)$. We use Equations (\ref{eq:K_sepvar}) and (\ref{eq:K_r_theta_phi_l0}) to obtain
\begin{align}
K_{r,\ell0}\left(r;\mathbf{x}_{1},\mathbf{x}_{2}\right) & =\int_{0}^{\infty}\frac{d\omega}{2\pi}\,\sum_{j_{1}j_{2}}\sum_{\alpha_{1}\alpha_{2}}\mathcal{K}_{\ell j_{1}j_{2}\omega;\alpha_{1}\alpha_{2}}^{0}\left(r,\mathbf{x}_{1},\mathbf{x}_{2}\right)\Re\left[P_{\ell0}^{j_{1}j_{2},\alpha_{1}\alpha_{2}}\left(\mathbf{x}_{1},\mathbf{x}_{2}\right)\right],\label{eq:Kr_l0}\\
K_{\theta,\ell0}\left(r;\mathbf{x}_{1},\mathbf{x}_{2}\right) & =\int_{0}^{\infty}\frac{d\omega}{2\pi}\,\sum_{j_{1}j_{2}}\sum_{\alpha_{1}\alpha_{2}}\left(1+\left(-1\right)^{\ell+j_{1}+j_{2}}\right)\mathcal{K}_{\ell j_{1}j_{2}\omega;\alpha_{1}\alpha_{2}}^{1}\left(r,\mathbf{x}_{1},\mathbf{x}_{2}\right)\Re\left[P_{\ell0}^{j_{1}j_{2},\alpha_{1}\alpha_{2}}\left(\mathbf{x}_{1},\mathbf{x}_{2}\right)\right].\label{eq:Ktheta_l0}
\end{align}
We find that the contribution towards $K_{\theta,\ell0}$ comes only from the modes for which $\ell+j_{1}+j_{2}$ is even. The same constraint also implicitly holds for $K_{r,\ell0}$ as $\mathcal{K}_{\ell j_{1}j_{2}\omega;\alpha_{1}\alpha_{2}}^{0}$ is non-zero only for even values of $\ell+j_{1}+j_{2}$. The geometric orientation of the flow field would further reduce the number of $\ell$s contributing towards the travel time; for example, meridional flows may be represented in terms of even $\ell$s.
We plot $K_{r,\ell0}$ and $K_{\theta,\ell0}$ for different values of $\ell$ in Figure \ref{fig:Kernels-merid_flow}, choosing the observation points to be $\mathbf{x}_1=(R_\odot+200\,\mathrm{km},\pi/2,0)$ and $\mathbf{x}_2=(R_\odot+200\,\mathrm{km},\pi/4,0)$. We may simplify the inverse problem further if we assume mass conservation, and solve for kernels corresponding to the $\phi$-component of the stream function. We describe this procedure in Section \ref{subsec:Mass-conservation:-kernels}.
\begin{figure}
\includegraphics[scale=1]{Kl0rtheta.eps}
\caption{Kernels for radial and tangential components of meridional flow for two observation points at $\mathbf{x}_1=(R_\odot+200\,\mathrm{km},\pi/2,0)$ and $\mathbf{x}_2=(R_\odot+200\,\mathrm{km},\pi/4,0)$. The kernels are in units of $\mathrm{s}/(\mathrm{cm}/\mathrm{s})/\mathrm{cm}^3$.}
\label{fig:Kernels-merid_flow}
\end{figure}
\subsection{Kernels for rotation \label{subsec:Kernels-for-rotation}}
Rotations of the Sun may be assumed to be azimuthally symmetric $\left(m=0\right)$ and directed along $\mathbf{e}_{\phi}$. In this case the flow components are imaginary and satisfy $u_{\ell0}^{+1}=-u_{\ell0}^{-1}$. This implies that we need to solve for the kernel functions $K_{\phi,\ell0}\left(r;\mathbf{x}_{1},\mathbf{x}_{2}\right)$ that relate a change in travel time to the background flow through
\begin{align}
\delta\tau_{12} & =\sum_{\ell}\int r^{2}dr\,K_{\phi,\ell0}\left(r;\mathbf{x}_{1},\mathbf{x}_{2}\right)\Im\left[u_{\ell0}^{1}\left(r\right)\right].
\end{align}
We may rewrite the expression for $K_{\phi,\ell0}\left(r;\mathbf{x}_{1},\mathbf{x}_{2}\right)$ as
\begin{align}
K_{\phi,\ell0}\left(r;\mathbf{x}_{1},\mathbf{x}_{2}\right) & =\int_{0}^{\infty}\frac{d\omega}{2\pi}\,\sum_{j_{1}j_{2}}\sum_{\alpha_{1}\alpha_{2}}\left(\left(-1\right)^{\ell+j_{1}+j_{2}}-1\right)\mathcal{K}_{\ell j_{1}j_{2}\omega;\alpha_{1}\alpha_{2}}^{1}\left(r,\mathbf{x}_{1},\mathbf{x}_{2}\right)\Im\left[P_{\ell0}^{j_{1}j_{2},\alpha_{1}\alpha_{2}}\left(\mathbf{x}_{1},\mathbf{x}_{2}\right)\right].\label{eq:Kphi_l0}
\end{align}
We find that the contributions only come from the modes for which $\ell+j_{1}+j_{2}$ is odd. The transformation of the Hansen VSH under coordinate inversion indicates that we only need to solve for the coefficients $u_{\ell0}^{1}\left(r\right)$ for odd values of $\ell$ \citep{1991ApJ...369..557R}, with $\ell=1$ corresponding to uniform or radially differential rotation, and $\ell\geq3$ corresponding to latitudinal differential rotation. We compute the function $K_{\phi,\ell0}$ for the observation points $\mathbf{x}_1=(R_\odot+200\,\mathrm{km},\pi/2,0)$ and $\mathbf{x}_2=(R_\odot+200\,\mathrm{km},\pi/2,\pi/3)$, and plot their radial profiles in Figure \ref{fig:Kernels-for-rotation} for different values of $\ell$.
\begin{figure*}
\includegraphics{Kl0phi.eps}
\caption{Radial profiles of the imaginary part of the kernels for rotation for two observation points at $\mathbf{x}_1=(R_\odot+200\,\mathrm{km},\pi/2,0)$ and $\mathbf{x}_2=(R_\odot+200\,\mathrm{km},\pi/2,\pi/3)$, for various spherical harmonic degrees $\ell$ and the azimuthal order $m=0$. The kernels are in units of $\mathrm{s}/(\mathrm{cm}/\mathrm{s})/\mathrm{cm}^3$.}
\label{fig:Kernels-for-rotation}
\end{figure*}
\subsection{Mass conservation: kernels for the stream function \label{subsec:Mass-conservation:-kernels}}
A temporally-stationary, mass-conserving flow field $\mathbf{u}\left(\mathbf{x}\right)$ satisfies the continuity relation $\grad\cdot\left(\rho\mathbf{u}\right)=0$, and may be represented in terms of a stream function $\bm{\psi}\left(\mathbf{x}\right)$ as
\begin{equation}
\mathbf{u}\left(\mathbf{x}\right)=\frac{1}{\rho}\grad\times\bm{\psi}\left(\mathbf{x}\right).\label{eq:u_psi}
\end{equation}
The choice of stream function is not unique for a specified flow field $\mathbf{u}(\mathbf{x})$, as the transformation $\bm{\psi}\rightarrow\bm{\psi}+\grad\xi$ for a scalar field $\xi(\mathbf{x})$ leads to the same flow velocity. This ambiguity may be eliminated by imposing a suitable constraint on $\bm{\psi}$, also referred to as gauge fixing. However, this is not critical to our analysis, firstly because we are interested in the existence and not in the uniqueness of the stream function, and secondly because in the interesting special case of meridional flows, the stream function is toroidal, and consequently free from such an ambiguity.
We may evaluate the kernel for the stream function by substituting Equation \eqref{eq:u_psi} into
$
\delta\tau=\int d\mathbf{x}\,\mathbf{K}_{\mathbf{u}}\left(\mathbf{x}\right)\cdot\mathbf{u}\left(\mathbf{x}\right)
$
and integrating by parts, to obtain
\begin{equation}
\delta\tau=\int d\mathbf{x}\left(\bm{\nabla}\times\left(\frac{1}{\rho}\mathbf{K}_{\mathbf{u}}\left(\mathbf{x}\right)\right)\right)\cdot\bm{\psi}\left(\mathbf{x}\right)-\int dS\,\left.\frac{1}{\rho}\mathbf{K}_{\mathbf{u}}\left(\mathbf{x}\right)\cdot\bm{\psi}\left(\mathbf{x}\right)\right|_{S},
\end{equation}
where we have suppressed the explicit dependence of the kernel on the observation points to simplify the notation. The second term is a surface integral over the boundary of the domain, and may be dropped if the stream function $\bm{\psi}(\mathbf{x})$ goes to zero at the extremities. In such a case the kernel for the stream function is related to that for the flow through
\begin{equation}
\mathbf{K}_{\psi}\left(\mathbf{x}\right)=\grad\times\left(\frac{1}{\rho}\mathbf{K}_{\mathbf{u}}\left(\mathbf{x}\right)\right).
\label{Kpsi_Ku_vec}
\end{equation}
We may split Equation \eqref{Kpsi_Ku_vec} into components in the PB VSH basis as
\begin{align}
K_{\psi,0,\ell m}\left(r\right) & =-\frac{i\Omega_{\ell}^{0}}{\rho r}\left(K_{-1,\ell m}\left(r\right)-K_{+1,\ell m}\left(r\right)\right),\\
K_{\psi,\pm1,\ell m}\left(r\right) & =\pm i\left(\frac{1}{r}\frac{d}{dr}\left(\frac{r K_{\pm1,\ell m}\left(r\right)}{\rho}\right)-\frac{\Omega_{\ell}^{0}}{\rho r}K_{0,\ell m}\left(r\right)\right).
\end{align}
\citep[see][ for the components of the curl]{DahlenTromp}. In the special case of meridional flow --- where the velocity field is entirely in the $\mathbf{e}_{r}-\mathbf{e}_{\theta}$ plane --- the stream function is directed along $\mathbf{e}_{\phi}$. In addition, an axisymmetric flow field would necessitate a stream function that is azimuthally symmetric as well. Drawing an analogy with section \ref{subsec:Kernels-for-rotation} and using $\psi_{\ell0}^{+1}\left(r\right)=-\psi_{\ell0}^{-1}\left(r\right)$, we compute the kernel component
\begin{align}
K_{\psi_{\phi},\ell0}\left(r\right) & = -\frac{1}{r}\frac{d}{dr}\left(\frac{r K_{\theta,\ell0}\left(r\right)}{\rho}\right) + 2\Omega_{\ell}^{0}\frac{K_{r,\ell0}\left(r\right)}{\rho r}.
\end{align}
A change in travel time would be related to the stream function component $\psi_{\ell0}^{+1}$ through
\begin{align}
\delta\tau_{12} & =\sum_{\ell}\int_{0}^{R_{\odot}}r^{2}dr\,K_{\psi_{\phi},\ell0}\left(r;\mathbf{x}_{1},\mathbf{x}_{2}\right)\Im\left[\psi_{\ell0}^{+1}\left(r\right)\right].
\end{align}
Once we evaluate the stream function, we may compute the flow coefficients from it using
\begin{align}
u_{\ell0}^{0}\left(r\right) & =\frac{2\Omega_{\ell}^{0}}{\rho r}\Im\left[\psi_{\ell0}^{+1}\left(r\right)\right],\quad u_{\ell0}^{\pm1}\left(r\right)=\frac{1}{\rho r}\frac{d}{dr}\left(r \Im\left[\psi_{\ell0}^{+1}\left(r\right)\right]\right).
\end{align}
We may further compute the flow velocity in spherical polar coordinates as
\begin{equation}
\begin{aligned}u_{r}\left(\mathbf{x}\right) & =\sum_{\ell}u_{\ell0}^{0}\left(r\right)\,Y_{\ell0}\left(\hat{n}\right),\\
u_{\theta}\left(\mathbf{x}\right) & =\sum_{\ell}\frac{1}{\Omega_{\ell}^{0}}\,u_{\ell0}^{+1}\left(r\right)\,\partial_{\theta}Y_{\ell0}\left(\hat{n}\right).
\end{aligned}
\end{equation}
We demonstrate that this approach reproduces the standard spherical-polar coordinate results by choosing the specific example of meridional flows, for which the stream function is axisymmetric and directed along $\mathbf{e}_\phi$. Such a flow is more conveniently analysed in the Hansen VSH basis. We note that for $m=0$, the Hansen basis vector $\Hansen{(0)}{\ell 0}(\hat{n}) = -i\mathbf{e}_\phi \partial_\theta Y_{\ell 0}(\hat{n})/\sqrt{\ell(\ell + 1)}$. The azimuthal component of the stream function may therefore be represented as
\begin{equation}
\psi(\mathbf{x}) = \mathbf{e}_\phi\cdot\bm{\psi}(\mathbf{x})= \sum_{\ell}\Im\left[\psi^{(0)}_{\ell0}(r)\right] \frac{1}{\sqrt{\ell(\ell+1)}}\partial_{\theta}Y_{\ell0}(\hat{n}),
\label{eq:psi_legendre}
\end{equation}
where the components $\psi^{(0)}_{\ell 0}$ are related to the PB-basis components $\psi_{\ell 0}^{+1}$ through $\psi^{(0)}_{\ell 0} = -\sqrt{2}\psi_{\ell 0}^{+1}$. To simplify the notation, we define $\psi_\ell(r)=\Im[\psi^{(0)}_{\ell 0}(r)]$. The flow velocity for meridional circulation may be expressed in the Hansen basis as
\begin{equation}
\rho\mathbf{u}\left(\mathbf{x}\right)=\sum_{\ell}\left[-\sqrt{2}\Omega_{\ell}^{0}\frac{\psi_{\ell}(r)}{r}Y_{\ell0}\left(\hat{n}\right)\mathbf{e}_{r}+\frac{1}{\sqrt{2}\Omega_{\ell}^{0} r}\frac{d\left(r\psi_{\ell}(r)\right)}{dr}\,\partial_{\theta}Y_{\ell0}\left(\hat{n}\right)\,\mathbf{e}_{\theta}\right].\label{eq:u_psi_meridional}
\end{equation}
On the other hand, Equation \eqref{eq:u_psi} may be expanded in spherical polar coordinates to
\begin{equation}
\mathbf{u}\left(\mathbf{x}\right)=\frac{1}{\rho r\sin\theta}\partial_{\theta}\left(\psi\left(\mathbf{x}\right)\sin\theta\right)\mathbf{e}_{r}-\frac{1}{\rho r}\partial_{r} \left(r \psi\left(\mathbf{x}\right)\right)\,\mathbf{e}_{\theta}.
\label{eq:u_psi_meridional_spherical}
\end{equation}
We may substitute Equation \eqref{eq:psi_legendre} into Equation \eqref{eq:u_psi_meridional_spherical}, and use the fact that the $Y_{\ell 0}(\hat{n})$ are eigenfunctions of the Laplacian on a sphere corresponding to an eigenvalue of $-\ell(\ell+1)$, to reproduce Equation \eqref{eq:u_psi_meridional}. This demonstrates that an inversion for the stream function is equivalent to solving for the radial components $\psi^{(0)}_{\ell 0}$ (or equivalently $\psi^{+1}_{\ell 0}$). Such an approach has been used by \citet{2015ApJ...813..114R} and \citet{2018ApJ...863...39M} to invert for meridional circulation.
We further demonstrate that the travel times computed using the stream function are identical to those computed using the flow velocity by choosing a specific model of the stream function. We retain only the term corresponding to $\ell=2$ in Equation \eqref{eq:psi_legendre}, and choose the radial function $\psi_2(r)$ to be of the form
\begin{equation}
\psi_{2}\left(r\right)=A\,\rho\left(r\right)\exp\left(-\frac{\left(r-r_{0}\right)^{2}}{2\sigma^{2}}\right)d\left(r\right),
\label{eq:psi2}
\end{equation}
where $r_0 = 0.87R_\odot$ and $\sigma = 0.05R_\odot$, the amplitude $A$ is chosen to produce a maximum horizontal surface velocity of $20\,\mathrm{m}/\mathrm{s}$, and $d(r)$ is a decay term that ensures that the stream function falls to zero beyond the solar surface.
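A minimal Julia sketch of this model, and of the flow coefficients that follow from the relations earlier in this section, is given below. The density and decay profiles are illustrative stand-ins for the Model S quantities, and the array \texttt{Impsi} holds $\Im[\psi_{20}^{+1}(r)]$, whose radial shape follows Equation \eqref{eq:psi2} up to the $\sqrt{2}$ normalization between $\psi_{\ell 0}^{(0)}$ and $\psi_{\ell 0}^{+1}$.
\begin{verbatim}
Rsun = 6.96e10                       # cm, assumed solar radius
r    = range(0.2 * Rsun, 1.01 * Rsun, length = 2000)
rho  = exp.(-8 .* r ./ Rsun)         # toy density profile (assumption)
d    = 1 ./ (1 .+ exp.((r .- Rsun) ./ (0.005 * Rsun)))  # surface decay
A    = 1.0                           # fixed later by the surface speed
Impsi = A .* rho .* exp.(-(r .- 0.87 * Rsun).^2 ./
                          (2 * (0.05 * Rsun)^2)) .* d

# One-sided differences at the ends, central differences in the interior
deriv(r, f) = [(f[min(i+1, end)] - f[max(i-1, 1)]) /
               (r[min(i+1, end)] - r[max(i-1, 1)]) for i in eachindex(f)]

Omega2 = sqrt(3)                               # Omega_l^0 for l = 2
u0 = 2 .* Omega2 .* Impsi ./ (rho .* r)        # u^0_{20}(r)
u1 = deriv(r, r .* Impsi) ./ (rho .* r)        # u^{+1}_{20}(r)
\end{verbatim}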
We plot the travel time shifts obtained between the points $\mathbf{x}_1=(R_\odot+200\,\mathrm{km},\pi/2,0)$ and $\mathbf{x}_2=(R_\odot+200\,\mathrm{km},\theta,0)$ for several choices of the co-latitude $\theta$ in Figure \ref{fig:streamfn_traveltimes}. We find that there is a reasonable agreement between the travel-time shifts computed using the two approaches.
\begin{figure}
\includegraphics{meridionalflow_traveltimes}
\caption{Left: Longitudinal cross-section of the stream function described in Equation \eqref{eq:psi2}. The color indicates the magnitude of the $\phi$-component of the stream function, and the arrows indicate the corresponding flow velocity. Right top: Travel-time shifts experienced by seismic waves traversing through the flow in the left panel, measured between two points on the same longitude, one at the Equator and the other located at various latitudes in the northern hemisphere. The line represents measurements using the kernel for flows, whereas the squares represent the same measurement but using the kernel for the stream function. Right bottom: Relative difference between the travel-time shifts in the top panel.}
\label{fig:streamfn_traveltimes}
\end{figure}
The number of parameters may be further reduced by representing the stream-function components $\psi_{\ell0}^{+1}\left(r\right)$ in a B-spline basis, for example as used by \citet{2018ApJ...863...39M}. This might lead to a significant simplification of inverse problems for meridional flows, as well as make them better posed.
\subsection{Validating kernels for uniform rotation}
We verify our result for the kernel by comparing the wave travel times computed using two approaches: first, by measuring the change in cross-covariances arising in a rotating frame, and second, by treating the rotation as a flow about a steady background and evaluating the travel-time shift using Equation (\ref{eq:travel-time-kernel_velocity-integral}). We assume that the cross-covariance is measured between waves at two points on the equator separated azimuthally by $\Delta\phi$, both points lying at the same observation radius $r_{\mathrm{obs}}$. We also leave out line-of-sight projections for algebraic simplicity. The cross-covariance in a frame rotating uniformly about the $\hat{z}$-axis at an angular speed $\Omega_{\mathrm{rot}}$ is related to that in a fixed frame through
\begin{equation}
C_{\mathrm{rotating}}\left(r_{\mathrm{obs}},\Delta\phi,t\right)=C_{\mathrm{fixed}}\left(r_{\mathrm{obs}},\Delta\phi-\Omega_{\mathrm{rot}}t,t\right).\label{eq:C_rot}
\end{equation}
In the frequency domain, this corresponds to a Doppler shift arising from a uniformly moving receiver. The difference in cross-covariances leads to a difference in measured travel times given by
\begin{align}
\delta\tau\left(\Delta\phi\right) & =\int dt\,h\left(t\right)\left(C_{\mathrm{rotating}}\left(r_{\mathrm{obs}},\Delta\phi,t\right)-C_{\mathrm{fixed}}\left(r_{\mathrm{obs}},\Delta\phi,t\right)\right).\label{eq:dtau_cc}
\end{align}
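Evaluating Equation \eqref{eq:dtau_cc} requires the shifted cross-covariance of Equation \eqref{eq:C_rot}; a minimal Julia sketch of this shift, assuming $C$ is sampled on a uniform azimuthal grid that is periodic over $[0,2\pi)$, reads:
\begin{verbatim}
# C[i, j] = C_fixed at azimuthal separation phi[i] and time t[j];
# linear interpolation in the periodic phi direction
function C_rotating(C::AbstractMatrix, phi, t, Omegarot)
    nphi = length(phi)
    dphi = phi[2] - phi[1]
    Crot = similar(C)
    for j in eachindex(t), i in eachindex(phi)
        x = mod(phi[i] - Omegarot * t[j] - phi[1], 2 * pi) / dphi
        k = floor(Int, x)
        w = x - k
        Crot[i, j] = (1 - w) * C[mod(k, nphi) + 1, j] +
                     w * C[mod(k + 1, nphi) + 1, j]
    end
    return Crot
end
\end{verbatim}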
On the other hand, treating the uniform solid-body rotation as a flow leads to a velocity field $\mathbf{u}\left(\mathbf{x}\right)=\Omega_{\mathrm{rot}}r\sin\theta\,\mathbf{e}_{\phi}$. We may express this in the PB VSH basis as
\begin{equation}
\mathbf{u}\left(\mathbf{x}\right)=\sqrt{\frac{4\pi}{3}}i\Omega_{\mathrm{rot}}r\left(\PB{+1}{10}\left(\theta,\phi\right)-\PB{-1}{10}\left(\theta,\phi\right)\right).\label{eq:u_10_PB_VSH}
\end{equation}
We see that the only non-zero spherical harmonic components correspond to $\ell=1$ and $m=0$. The shift in travel times in the first Born approximation may be obtained from Equation (\ref{eq:dtau_axisym_PB_geom}) as
\begin{align}
\delta\tau_{12} & =\int_{0}^{R_{\odot}}r^{2}dr\,K_{\phi,10}\left(r;\mathbf{x}_{1},\mathbf{x}_{2}\right)\Im\left[u_{10}^{+1}\left(r\right)\right],\label{eq:dtau_uniform_kernel}
\end{align}
where the kernel $K_{\phi,10}\left(r;\mathbf{x}_{1},\mathbf{x}_{2}\right)$ is obtained by substituting $\ell=1$ in Equation (\ref{eq:Kphi_l0}). We find that the only contribution to $K_{\phi,10}\left(r;\mathbf{x}_{1},\mathbf{x}_{2}\right)$ comes from the modes corresponding to $j_{2}=j_{1}$, and we drop the subscript and use the symbol $j$ in subsequent analysis to refer
to the contributing wave modes. There is no contribution from $j=0$ as $j_{1}=j_{2}=0$ would restrict $\ell$ to $0$. The angular function $P_{10}^{jj,00}\left(\mathbf{x}_{1},\mathbf{x}_{2}\right)$ is equal
to the bipolar spherical harmonic $Y_{10}^{jj}\left(\hat{n}_{1},\hat{n}_{2}\right)$, which we evaluate explicitly to obtain
\begin{equation}
Y_{10}^{jj}\left(\hat{n}_{1},\hat{n}_{2}\right)=\frac{i\left(-1\right)^{j}}{4\pi\Omega_{j}^{0}}\sqrt{\frac{3\left(2j+1\right)}{2}}\partial_{\phi_{2}}P_{j}\left(\hat{n}_{1}\cdot\hat{n}_{2}\right),\label{eq:YBSH10}
\end{equation}
where $P_{j}$ represents the Legendre polynomial of degree $j$ (see Appendix \ref{sec:Appendix_Clebsch-Gordan-coefficients}). Substituting Equations (\ref{eq:YBSH10}) and $\mathcal{C}_{1jj\omega;00}^{1}$ from Equation (\ref{eq:C_components_defn}) into Equation (\ref{eq:Kphi_l0}), we obtain
\begin{align}
K_{\phi,10}\left(r;\mathbf{x}_{1},\mathbf{x}_{2}\right) & =8\sqrt{\frac{3}{4\pi}}\frac{\rho}{r}\sum_{j}\frac{\left(2j+1\right)}{4\pi}\int_{0}^{\infty}\frac{d\omega}{2\pi}\,\omega^{3}P\left(\omega\right)\Im\left[h^{*}\left(\mathbf{x}_{1},\mathbf{x}_{2},\omega\right)\right]\times\nonumber \\
& \Re\left[\gfnomega{0*}0j\left(r_{\mathrm{obs}},r_{\mathrm{src}}\right)\mathcal{G}_{1jj\omega;00}^{1}\left(r,r_{\mathrm{obs}},r_{\mathrm{src}}\right)\right]\partial_{\phi_{2}}P_{j}\left(\hat{n}_{1}\cdot\hat{n}_{2}\right),\label{eq:Kphi_10}
\end{align}
where the expression for $\mathcal{G}_{1jj\omega;00}^{1}$ in terms of the Green function components is listed in Table \ref{tab:Expressions-for-J}.
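For the equatorial geometry used here, $\hat{n}_{1}\cdot\hat{n}_{2}=\cos\Delta\phi$, so the angular factor $\partial_{\phi_{2}}P_{j}$ in Equation \eqref{eq:YBSH10} reduces to $-\sin\Delta\phi\,P_{j}^{\prime}\left(\cos\Delta\phi\right)$. A minimal Julia sketch using the standard three-term Legendre recurrence is:
\begin{verbatim}
# P_j(x) and P_j'(x) by recurrence; the derivative formula is valid
# away from x = +1 and x = -1
function legendre_P_dP(j::Int, x::Real)
    j == 0 && return (1.0, 0.0)
    p0, p1 = 1.0, x
    for n in 1:j-1
        p0, p1 = p1, ((2n + 1) * x * p1 - n * p0) / (n + 1)
    end
    dP = j * (x * p1 - p0) / (x^2 - 1)
    return p1, dP
end

dphi2_Pj(j, dphi) = -sin(dphi) * last(legendre_P_dP(j, cos(dphi)))

dphi2_Pj(20, pi / 3)   # j = 20 contribution at a 60-degree separation
\end{verbatim}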
We compute the travel times for several observation distances using Equations (\ref{eq:dtau_cc}) and (\ref{eq:dtau_uniform_kernel}), and plot them in Figure \ref{fig:Travel-time-shifts}. The close match between these values serves to validate the sensitivity kernels computed in this work.
\begin{figure}
\includegraphics[scale=0.6]{dt_v}
\caption{Top: Travel-time shifts as a function of angular separation between two observation points on the equator for waves travelling through a uniformly rotating Sun. The travel-time shifts have been computed: (1) from the difference in the measured cross-covariances (Equation \eqref{eq:dtau_cc}), and (2) by using the first Born approximation (Equation \eqref{eq:dtau_uniform_kernel}). Bottom: Relative difference between the travel times in the top panel.}
\label{fig:Travel-time-shifts}
\end{figure}
\subsection{Numerical evaluation\label{sec:Numerical}}
We follow a two-step strategy in evaluating the kernel --- in the first step we evaluate the Green function components following \citet{2020ApJ...895..117B} and save them to disk, following which we read the functions in as necessary and compute the kernel using Equation (\ref{eq:K_sepvar}).
The computationally expensive step in the evaluation of the kernel is reading in the pre-computed Green-function FITS files from the disk; efficient computation of the kernel therefore requires minimizing the number of FITS I/O operations. The expression for the kernel in
Equation (\ref{eq:K_sepvar}), while succinct, is not the most convenient form for efficient numerical evaluation. We use Equations (\ref{eq:C_components_defn}) and (\ref{eq:K_components_defn}) to rewrite the expression for the kernel as
\begin{align}
K_{\gamma,\ell m}\left(r;\mathbf{x}_{1},\mathbf{x}_{2}\right) & =\sum_{j_{1}j_{2}}\sum_{\alpha_{1}\alpha_{2}}\int_{0}^{\infty}\frac{d\omega}{2\pi}\,\omega^{2}P\left(\omega\right)\times\nonumber \\
& \left(2\Re\left[h^{*}\left(\mathbf{x}_{1},\mathbf{x}_{2},\omega\right)\gfnomega{\alpha_{2}}0{j_{2}}\left(r_{2},r_{\mathrm{src}}\right)J_{\ell j_{1}j_{2}\omega;\alpha_{1}0}^{-\gamma*}\left(r,r_{1},r_{\mathrm{src}}\right)\right]P_{\ell m}^{j_{1}j_{2},\alpha_{1}\alpha_{2}}\left(\mathbf{x}_{1},\mathbf{x}_{2}\right)\right.\nonumber \\
& \left.+2\Re\left[h^{*}\left(\mathbf{x}_{1},\mathbf{x}_{2},\omega\right)\gfnomega{\alpha_{2}*}0{j_{2}}\left(r_{1},r_{\mathrm{src}}\right)J_{\ell j_{1}j_{2}\omega;\alpha_{1}0}^{-\gamma}\left(r,r_{2},r_{\mathrm{src}}\right)\right]P_{\ell m}^{j_{1}j_{2},\alpha_{1}\alpha_{2}}\left(\mathbf{x}_{2},\mathbf{x}_{1}\right)\right).\label{eq:K_numerical}
\end{align}
Written this way, the Green function component with the source at $r_{\mathrm{src}}$ needs to be read in only for the mode $j_{2}$, whereas the components with the sources at $r_{1}$ and $r_{2}$ need to be read in for the mode $j_{1}$. Equation (\ref{eq:K_numerical}) appears to come at the expense of an additional computation of the bipolar spherical harmonic $P_{\ell m}^{j_{1}j_{2},\alpha_{1}\alpha_{2}}\left(\mathbf{x}_{2},\mathbf{x}_{1}\right)$; however, this might be mitigated to some extent by noting that
\begin{equation}
P_{\ell m}^{j_{1}j_{2},\alpha_{1}\alpha_{2}}\left(\mathbf{x}_{2},\mathbf{x}_{1}\right)=\left(-1\right)^{j_{1}+j_{2}+\ell}P_{\ell m}^{j_{2}j_{1},\alpha_{2}\alpha_{1}}\left(\mathbf{x}_{1},\mathbf{x}_{2}\right),
\end{equation}
so we may store the values of $P_{\ell m}^{j_{1}j_{2},\alpha_{1}\alpha_{2}}\left(\mathbf{x}_{1},\mathbf{x}_{2}\right)$ as they are computed, and use the pre-computed values --- if available --- to evaluate $P_{\ell m}^{j_{1}j_{2},\alpha_{1}\alpha_{2}}\left(\mathbf{x}_{2},\mathbf{x}_{1}\right)$
without an explicit loop over the component harmonics.
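A minimal Julia sketch of this caching strategy is shown below; \texttt{compute\_biposh} is a hypothetical function that evaluates the bipolar spherical harmonic directly.
\begin{verbatim}
biposh_cache = Dict{NTuple{6,Int},ComplexF64}()

# Compute-and-store on first use, reuse thereafter
function biposh(compute_biposh, l, m, j1, j2, a1, a2)
    get!(biposh_cache, (l, m, j1, j2, a1, a2)) do
        compute_biposh(l, m, j1, j2, a1, a2)
    end
end

# P_{lm}^{j1 j2, a1 a2}(x2, x1) from the cached swapped harmonic
biposh_swapped(f, l, m, j1, j2, a1, a2) =
    (-1)^(j1 + j2 + l) * biposh(f, l, m, j2, j1, a2, a1)
\end{verbatim}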
The computational expense of evaluating the kernel components for all modes is substantial, so this technique is perhaps better suited for large-scale flows where we may restrict the computation to a narrow range of angular degrees. The evaluation time depends on the grid of wave modes used --- both in angular degrees and in temporal frequencies --- as well as the spherical harmonic modes of the flow for which kernels are evaluated. In the present analysis, the code has been written in the Julia programming language \citep{julia}, and is in the form of a map-reduce operation, where the map component --- a sum over sections of the range of wave modes and frequencies --- is embarrassingly parallel. We describe the algorithm schematically in Algorithm \ref{algo:kernel}. Given a frequency grid of $N_\nu$ points, a set of $j_\mathrm{max}$ wave modes, a maximum angular degree of $\ell_\mathrm{max}$ and a maximum azimuthal order of $m_\mathrm{max}$ in the PB basis decomposition of the flow velocity, the number of terms that contribute towards the kernel is of the order $\mathcal{O}\left(N_\nu\,j^2_\mathrm{max} \ell_\mathrm{max} m_\mathrm{max}\right)$, where each term involves a sum over radial arrays. Computing the line-of-sight projected kernel would involve summing up four sets of arrays corresponding to $0\leq\alpha_1,\alpha_2\leq 1$, where we use the symmetry relations in Equation \eqref{eq:J_symmetry} to represent the terms corresponding to $\alpha_i=-1$ in terms of $\alpha_i=1$. The time required to read in FITS files from disk may also be reduced by caching the necessary Green-function arrays in memory. We use a grid of frequencies that spans $2.5$ mHz to $4.5$ mHz uniformly over $4000$ points. We also restrict ourselves to wave modes in the range $5 \leq j \leq 80$, where the lower limit arises from the fact that our radial grid does not extend all the way to the center of the Sun, and the upper limit arises from the numerical accuracy of the publicly available library SHTOOLS \citep{doi:10.1029/2018GC007529} that we use to compute Clebsch-Gordan coefficients. We carry out the computation on $56$ cores of the Dalma cluster at New York University Abu Dhabi, using $2.40$ GHz Intel Broadwell CPUs. We evaluate the kernel components for all modes $(\ell,m)$ satisfying $\ell \leq\ell_\mathrm{max}$, and we plot the computation time in Figure \ref{fig:runtimes} as a function of $\ell_\mathrm{max}$. The evaluation time is dominated by FITS input-output operations for low cutoff values of $\ell$, whereas it starts being dominated by the kernel computations for a higher cutoff in $\ell$. This shows up as a reduction in the contrast between evaluation times as the cutoff in $\ell$ increases.
A further optimization might be carried out by noting that the Green functions have power concentrated along distinct ridges corresponding to standing modes in the Sun, therefore suitable filters might eliminate regions of the spectrum that do not contribute significantly to the overall result.
We note that the computational expense involved in this analysis significantly exceeds that required in computing kernels for sound-speed \citep[][]{2020ApJ...895..117B}, as the radial functions involved in evaluating the sound-speed kernel do not depend on $\ell$, whereas for flows, the functions $J_{\ell j_{1}j_{2}\omega;\alpha_{1}0}^{\gamma}\left(r,r_{1},r_{\mathrm{src}}\right)$ need to be re-computed for each $\ell$. We demonstrate the difference in computation time in the bottom panel of Figure \ref{fig:runtimes}, where the time required to compute the kernels for sound-speed are obtained from \citet{2020ApJ...895..117B}.
\begin{figure}
\begin{algorithm}[H]
\caption{Pseudocode to numerically evaluate the kernel components}
\label{algo:kernel}
\algblock[Name]{Parallel}{EndParallel}
\begin{algorithmic}[1]
\Function{$K_{\gamma,\ell m}$}{$\mathbf{x}_{1},\mathbf{x}_{2}$,$\omega_\mathrm{array}$,$j_\mathrm{array}$,$\ell_\mathrm{max}$}
\State Evaluate $h\left(\mathbf{x}_{1},\mathbf{x}_{2},\omega\right)$ and send to all processors
\State Split $\omega_\mathrm{array}$ and $j_\mathrm{array}$ over the available processors
\Parallel
\State $\omega_\mathrm{processor}$ = local section of $\omega_\mathrm{array}$
\State $j_\mathrm{processor}$ = local section of $j_\mathrm{array}$
\State Evaluate $P_{\ell m}^{j_{1}j_{2},\alpha_{1}\alpha_{2}}\left(\mathbf{x}_{1},\mathbf{x}_{2}\right)$ and $P_{\ell m}^{j_{1}j_{2},\alpha_{1}\alpha_{2}}\left(\mathbf{x}_{2},\mathbf{x}_{1}\right)$ for necessary parameter values
\State $K_{\gamma,\ell m}$ = 0
\For{$\omega$ in $\omega_\mathrm{processor}$ and $j_2$ in $j_\mathrm{processor}$}
\State Read in $G_{j_2,\omega}(r,r_\mathrm{src})$ and $\partial_r G_{j_2,\omega}(r,r_\mathrm{src})$
\For{$j_1$ in $j_\mathrm{array}$}
\State Read in $G_{j_1,\omega}(r,r_\mathrm{1})$ and $G_{j_1,\omega}(r,r_\mathrm{2})$
\For{$\alpha_1$ in $0:1$, $\gamma$ in $0:1$ and $i$ in 1:2}
\State Evaluate and store individual terms in $\mathcal{G}_{\ell j_{1}j_{2};\alpha_{1}0}^{\gamma}\left(r,r_{i},r_{\mathrm{src}}\right)$ that are independent of $\ell$
\EndFor
\For{$\ell$ in $0:\ell_\mathrm{max}$}
\If{$|j_1 - j_2|\leq \ell \leq j_1 + j_2$}
\For{$\gamma$ in $0:1$, $\alpha_1$ in $0:1$ and $i$ in 1:2}
\State Evaluate and store $J_{\ell j_{1}j_{2}\omega;\alpha_{1}0}^{\gamma}\left(r,r_{i},r_{\mathrm{src}}\right)$
\EndFor
\For{$m$ in $0:\ell$}
\State $T_1$ = 0
\For{$\gamma$ in $0:1$}
\State $T_{\gamma}$ = Sum over $\alpha_1$ and $\alpha_2$ in Equation \eqref{eq:K_numerical}
\State $K_{\gamma,\ell m}$ += $T_{\gamma}$
\EndFor
\State $K_{-1,\ell m}$ += $(-1)^{\ell+j_1+j_2} T_{1}$
\EndFor
\EndIf
\EndFor
\EndFor
\EndFor
\EndParallel
\State $K_{\gamma,\ell m}$ = Sum over $K_{\gamma,\ell m}$ across all processors
\State Return $K_{\gamma,\ell m}$
\EndFunction
\end{algorithmic}
\end{algorithm}
\end{figure}
\begin{figure}
\includegraphics[scale=0.8]{runtimesflows.eps}
\caption{Top: Computational time required to evaluate the kernel components for all modes labelled by $(\ell,m)$ where $|m| \leq \ell$ and $\ell \leq \ell_\mathrm{max}$. The dashed line indicates a scaling $\propto \ell_\mathrm{max}^2$. Bottom: Comparison between the computational time required to compute the kernels for flows (solid line) with that required to compute the kernels for sound speed \citep[][dotted line]{2020ApJ...895..117B}.}
\label{fig:runtimes}
\end{figure}
\subsection{Exploiting spherical symmetry}
One advantage of a spherical-harmonic decomposition of the kernel is that the transformation of bipolar spherical harmonics under rotation of coordinate systems is well known --- they get coupled to other components with the same degree $\ell$ through the Wigner D-matrix. If a rotation characterized by the Euler angles $(\alpha,\beta,\gamma)$ is applied to the coordinate frame, the components $P_{\ell m}(\hat{n}_1,\hat{n}_2)$ of a two-point field on the surface of a sphere in the new coordinate frame are related to those in the old one through
\begin{equation}
P_{\ell m}\left(\hat{n}_{1}^{\prime},\hat{n}_{2}^{\prime}\right)=\sum_{m^{\prime}}D_{m^{\prime}m}^{\ell}\left(\alpha,\beta,\gamma\right)P_{\ell m^{\prime}}\left(\hat{n}_{1},\hat{n}_{2}\right),\label{eq:BSH_rot}
\end{equation}
where the Wigner D-matrix $D^\ell_{m^\prime m}(\alpha,\beta,\gamma)$ acts as the rotation matrix. The relation is valid for tensor spherical harmonics as well, where the non-$m$ indices are carried through unchanged.
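For the special case of a rotation about $\mathbf{e}_{z}$ by an angle $\alpha$, that is, Euler angles $(\alpha,0,0)$, the D-matrix reduces to a phase, $D_{m^{\prime}m}^{\ell}(\alpha,0,0)=e^{-im\alpha}\delta_{m^{\prime}m}$ in the $zyz$ convention assumed here, and the rotation of the components becomes a one-line Julia operation:
\begin{verbatim}
# K maps (l, m) to the component K_{gamma, lm}; rotation about e_z
rotate_z(K::Dict{Tuple{Int,Int},ComplexF64}, alpha) =
    Dict((l, m) => cis(-m * alpha) * v for ((l, m), v) in K)

# Example: components for observation points shifted in longitude
K  = Dict((1, 0) => 0.3 + 0.0im, (1, 1) => 0.1 - 0.2im)
K2 = rotate_z(K, pi / 12)
\end{verbatim}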
We use this relation to note that the kernel components $K_{\gamma,\ell m}$ need to be evaluated only once for each angular spacing between the two observation points, and may subsequently be obtained for other identically spaced points using Equation \eqref{eq:BSH_rot}. We demonstrate the procedure by choosing two sets of points $\hat{n}_1=(\pi/2,0)\,,\hat{n}_2=(\pi/4,0)$, and $\hat{n}_1^\prime=(\pi/2-\pi/10,0)\,,\hat{n}_2^\prime=(\pi/4-\pi/10,0)$, where the latter pair is related to the former by a rotation about the $y$-axis by $\pi/10$ radians, and all observations are assumed to be carried out at a height of $200$ km above the photosphere. To demonstrate the procedure we compute the kernels without assuming line-of-sight projections, but these may be incorporated into the analysis by simultaneously rotating the harmonics as well as the projection vectors. We compute the kernel components in two ways: (1) by computing the kernel for $(\hat{n}_1,\hat{n}_2)$ and rotating the components using Equation \eqref{eq:BSH_rot} to obtain the components for $(\hat{n}_1^\prime,\hat{n}_2^\prime)$, and (2) by directly evaluating the kernel components for $(\hat{n}_1^\prime,\hat{n}_2^\prime)$. We refer to the former approach as the ``rotated'' one, whereas the latter is the ``direct'' computation. We plot the radial profile of the real part of the kernel component $K_{11}(\hat{n}_1^\prime,\hat{n}_2^\prime)$ computed in the two approaches in Figure \ref{fig:kernelrot}. The two approaches produce identical results, thereby illustrating the promise of this method. Additionally, such an approach may allow efficient averaging of kernels over arcs in a point-arc measurement configuration, where the angular distance between the observation points stays fixed.
We note that this particular symmetry is useful only in the scenario where we do not consider center-to-limb variations in observation heights. Including this breaks spherical symmetry irreparably, and a full evaluation of the kernel might be necessary.
\begin{figure}
\includegraphics[scale=0.8]{Kulm_rotated.eps}
\caption{Radial profiles of kernel components $K_{11}(r,\mathbf{x}_1,\mathbf{x}_2)$ for $\mathbf{x}_1=(R_\odot+200\,\mathrm{km},\pi/2,0)$ and $\mathbf{x}_2=(R_\odot+200\,\mathrm{km},\pi/2,\pi/4)$, computed directly by using Equation \eqref{eq:kernel_components}, and by rotating the components computed for two points shifted by $\pi/10$ along the Equator. The kernels have been computed using the radial components of the wave velocity.}
\label{fig:kernelrot}
\end{figure}
\section{Conclusion}
We have presented a scheme that may be used to evaluate sensitivity kernels for large-scale flows in the Sun in spherical geometry, while accounting for line-of-sight projection and line-formation heights that leave systematic imprints in the measurements. Further work needs to be carried out to incorporate filters that are used on seismic data to get these kernels to correspond exactly to measurements. Time-distance analysis also usually relies on travel-time differences rather than the point-to-point travel times themselves, but this is easy to incorporate into this analysis scheme.
The computation of the kernels is carried out assuming that the observation heights are different at different points on the Sun. In this paper we have not explored the ramifications of this on the forward problem of estimating travel times given profiles of subsurface flows; however, it might be interesting to check to what extent this contributes to the systematic travel-time shifts observed by \citet{2013ApJ...774L..29Z} and \citet{2015ApJ...808...59K}. The physical origin of the center-to-limb effect is not clear, with effects such as interactions of seismic waves with granulation \citep{2012ApJ...760L...1B,2012SoPh..275..207S,2015ApJ...808...59K} and foreshortening \citep{2016SoPh..291..731Z} also potentially polluting seismic measurements, although, as the authors demonstrate, the latter does not affect travel-time differences significantly. Eliminating certain trends from first principles might help in studying the ones that remain.
The analysis presented here is computationally more efficient than previous attempts to numerically evaluate the full three-dimensional kernel; nevertheless, it remains expensive if a large number of modes is sought simultaneously. This approach is better suited to studies where a small range of modes is necessary, such as for large-scale or axisymmetric flows. Fortunately, several classes of flows of interest on the Sun fall in this category. The approach of \citet{2018A&A...616A.156F} relies on a scalar wave equation, and is therefore expected to be more efficient at computing kernels. It will be interesting to compare the trade-off between computational time and accuracy between the two approaches.
\acknowledgments
This work was supported by NYUAD Institute Grant
G1502 "NYUAD Center for Space Science".
This research was carried out on the High Performance Computing resources at New York University Abu Dhabi.
\section{Introduction}\label{sec:intro}
Semantic segmentation for urban scenes is an important yet challenging
task for a variety of vision-based applications, including autonomous
driving cars, smart surveillance systems, etc. With the success of
convolutional neural networks (CNNs), numerous successful
fully-supervised semantic segmentation solutions have been proposed in
recent years \cite{long2015fully, chen2016deeplab}. To achieve
satisfactory performance, these methods demand a sufficiently large
dataset with pixel-level labels for training. However, creating such
large datasets is prohibitively expensive as it requires human
annotators to accurately trace segment boundaries. Furthermore, it is
difficult to collect traffic scene images with sufficient variations in
terms of lighting conditions, weather, city and driving routes.
To overcome the above-mentioned limitations, one can utilize the modern
urban scene simulators to automatically generate a large number of synthetic images
with pixel-level labels. However, this introduces another
problem, \emph{i.e.,} the distribution mismatch between the source domain
(synthesized data) and the target domain (real data). Even if we
synthesize images with the state-of-the-art simulators
\cite{richter2016playing, ros2016synthia}, there still exists visible
appearance discrepancy between these two domains. The testing
performance in the target domain of a network trained solely on
source domain images is severely degraded. The domain adaptation (DA)
technique is developed to bridge this gap. It is a special example of
transfer learning that leverages labeled data in the source domain to
learn a robust classifier for unlabeled data in the target domain. DA
methods for object classification face several challenges, such as shifts
in lighting and variations in object appearance and pose.
There are even more challenges in DA methods for semantic segmentation
because of variations in the scene layout, object scales and class
distributions in images. Many successful domain-alignment-based methods
work for DA-based classification but not for DA-based segmentation.
Since it is not clear what comprises data instances in a deep segmenter
\cite{Zhang_2017_ICCV}, DA-based segmentation is still far from
mature.
In this work, we propose a novel fully convolutional tri-branch network
(FCTN) to solve the DA-based segmentation problem. In the FCTN, two
labeling branches are used to generate pseudo segmentation ground-truth
for unlabeled target samples while the third branch learns from these
pseudo-labeled target samples. An alternating re-labeling and
re-training mechanism is designed to improve the DA performance in a
curriculum learning fashion. We evaluate the proposed method using
large-scale synthesized-to-real urban scene datasets and demonstrate
substantial improvement over the baseline network and other benchmarking
methods.
\begin{figure*}[!t]
\centering
\includegraphics[width=0.7\linewidth]{./images/overview.pdf}
\vspace{0.5em}
\caption{An overview of the proposed fully convolutional tri-branch
network (FCTN). It has one shared base network denoted by $F$ followed
by three branches of the same architecture denoted by $F_1$, $F_2$ and
$F_t$. Branches $F_1$ and $F_2$ assign pseudo labels to images in the
unlabeled target domain, while branch $F_t$ is trained with supervision
from images in the pseudo-labeled target domain.}
\label{fig:overview}
\vspace{-1em}
\end{figure*}
\section{Related Work}\label{sec:related_work}
The current literature on visual domain adaptation mainly focuses on
image classification \cite{Gabriela2017DABook}. Being inspired by
shallow DA methods, one common intuition of deep DA methods is that
adaptation can be achieved by matching the distribution of features in
different domains. Most deep DA methods follow a siamese architecture
with two streams, representing the source and target models. They aim to
obtain domain-invariant features by minimizing the divergence of
features in the two domains and a classification loss
\cite{long2015learning, sun2016deep, tzeng2015simultaneous,
tzeng2017adversarial}, where the classification loss is evaluated in the
source domain with labeled data only. However, these methods assume the
existence of a universal classifier that can perform well on samples
drawn from whichever domain. This assumption tends to fail since the
class correspondence constraint is rarely imposed in the domain
alignment process. Without such an assumption, feature distribution
matching may not lead to classification improvement in the target
domain. The ATDA method proposed in \cite{saito2017asymmetric} avoids
this assumption by employing the asymmetric tri-training. It can assign
pseudo labels to unlabeled target samples progressively and learn from
them using a curriculum learning paradigm.
This paradigm has been proven effective in weakly-supervised learning tasks \cite{li2017multiple} as well.
Previous work on segmentation-based DA is much scarcer. Hoffman
\emph{et al.} \cite{hoffman2016fcns} consider each spatial unit in an activation
map of a fully convolutional network (FCN) as an instance, and extend
the idea in \cite{tzeng2015simultaneous} to achieve two objectives: 1)
minimizing the global domain distance between two domains using a fully
convolutional adversarial training and 2) enhancing category-wise
adaptation capability via multiple instance learning. The adversarial
training aims to align intermediate features from two domains. It
implies the existence of a single good mapping from the domain-invariant
feature space to the correct segmentation mask. To avoid this
condition, Zhang \emph{et al.} \cite{Zhang_2017_ICCV} proposed to
predict the class distribution over the entire image and some
representative super pixels in the target domain first. Then, they use
the predicted distribution to regularize network training. In this work,
we avoid the single good mapping assumption and rely on the remarkable
success of the ATDA method \cite{saito2017asymmetric}. In particular,
we develop a curriculum-style method that improves the cross-domain
generalization ability for better performance in DA-based segmentation.
\section{Proposed Domain Adaptation Network}\label{sec:approach}
The proposed fully convolutional tri-branch network (FCTN) model for
cross-domain semantic segmentation is detailed in this section. The
labeled source domain training set is denoted by $\mathcal{S} =
\{(x_i^s, y_i^s)\}_{i=1}^{n_s}$ while the unlabeled target domain
training set is denoted by $\mathcal{T} = \{x_i^t\}_{i=1}^{n_t}$, where
$x$ is an image, $y$ is the ground truth segmentation mask and $n_s$ and
$n_t$ are the sizes of training sets of two domains, respectively.
\subsection{Fully Convolutional Tri-branch Network Architecture}\label{ssec:architecture}
An overview of the proposed FCTN architecture is illustrated in Fig.
\ref{fig:overview}. It is a fully convolutional network that consists of
a shared base network ($F$) followed by three branch networks ($F_1$,
$F_2$ and $F_t$). Branches $F_1$ and $F_2$ are labeling branches. They
accept deep features extracted by the shared base net, $F$, as the input
and predict the semantic label of each pixel in the input image.
Although the architectures of the three branches are the same, their
roles and functions are not identical. $F_1$ and $F_2$ generate pseudo
labels for the target images based on their predictions. $F_1$ and $F_2$ learn
from both labeled source images and pseudo-labeled target images. In
contrast, $F_t$ is a target-specific branch that learns from
pseudo-labeled target images only.
We use the DeepLab-LargeFOV (also known as the DeepLab v1)
\cite{chen2015semantic} as the reference model due to its simplicity and
superior performance in the semantic segmentation task. The
DeepLab-LargeFOV is a re-purposed VGG-16 \cite{simonyan2014very} network
with dilated convolutional kernels. The shared base network $F$ contains
13 convolutional layers while the three branch networks are formed by
three convolutional layers that are converted from fully connected
layers in the original VGG-16 network. Although the DeepLab-LargeFOV is
adopted here, any effective FCN-based semantic segmentation framework
can be used in the proposed FCTN architecture as well.
\subsection{Encoding Explicit Spatial Information}
Being inspired by PFN \cite{liang2015proposal}, we attach the pixel
coordinates as two additional feature maps to the last layer of $F$. The
intuition is that the urban traffic scene images have structured layout
and certain classes usually appear in a similar location in images.
However, a CNN is translation-invariant by nature. That is, it makes
predictions based on patch features regardless of the patch
location in the original image. Assume that the last layer in $F$ has a
feature map of size $H \times W \times D$, where $H$, $W$ and $D$ are
the height, width and depth of the feature map, respectively. We
generate two spatial coordinate maps $\mat{X}$ and $\mat{Y}$ of size $H
\times W$, where values of $\mat{X}(p_x,p_y)$ and $\mat{Y}(p_x,p_y)$ are
set to be $p_x/W$ and $p_y/H$ for pixel $p$ at location $(p_x,p_y)$, respectively. We
concatenate spatial coordinate maps $\mat{X}$ and $\mat{Y}$ to the
original feature maps along the depth dimension. Thus, the output
feature maps are of dimension $H \times W \times (D+2)$. By
incorporating the spatial coordinate maps, the FCTN can learn more
location-aware representations.
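As a concrete illustration, the following sketch (in Python/NumPy; our implementation uses TensorFlow, and the helper name here is ours) shows how the two coordinate maps can be built and appended to the feature maps:
\begin{verbatim}
import numpy as np

def append_coordinate_maps(features):
    # features: (H, W, D) activation of the shared base net F.
    # Returns (H, W, D+2) with X(px,py) = px/W and Y(px,py) = py/H appended.
    H, W, _ = features.shape
    xs = np.tile(np.arange(W, dtype=float) / W, (H, 1))
    ys = np.tile((np.arange(H, dtype=float) / H)[:, None], (1, W))
    return np.concatenate([features, xs[..., None], ys[..., None]], axis=-1)
\end{verbatim}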
\subsection{Assigning Pseudo Labels to Target Images}\label{ssec:labeling}
Being inspired by the ATDA method \cite{saito2017asymmetric}, we
generate pseudo labels by feeding images in the target domain training
set to the FCTN and collect predictions from both labeling branches.
For each input image, we assign the pseudo-label to a pixel if the
following two conditions are satisfied: 1) the classifiers associated
with labeling branches, $F_1$ and $F_2$, agree in their predicted labels
on this pixel; 2) the higher confidence score of these two predictions
exceeds a certain threshold. In practice, the confidence threshold is
set very high (say, 0.95 in our implementation) because the use of many
inaccurate pseudo labels tends to mislead the subsequent network
training. In this way, high-quality pseudo labels for target images are
used to guide the network to learn target-specific discriminative
features. The pseudo-labeled target domain training set is denoted
by $\mathcal{T}_l=\{(x_i^t, \hat{y}_i^t)\}_{i=1}^{n_t}$, where
$\hat{y}$ is the partially pseudo-labeled segmentation mask. Some
sample pseudo-labeled segmentation masks are shown in Fig.
\ref{fig:pseudo_label}. In the subsequent training, the not-yet-labeled
pixels are simply ignored in the loss computation.
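A minimal sketch of this labeling rule is given below (Python/NumPy; the helper name and ignore value are ours, and the softmax outputs would come from $F_1$ and $F_2$):
\begin{verbatim}
import numpy as np

IGNORE = 255  # marks not-yet-labeled pixels, skipped in the loss

def pseudo_label(probs1, probs2, threshold=0.95):
    # probs1, probs2: (H, W, C) softmax outputs of branches F1 and F2.
    pred1 = probs1.argmax(axis=-1)
    pred2 = probs2.argmax(axis=-1)
    conf = np.maximum(probs1.max(axis=-1), probs2.max(axis=-1))
    labels = np.full(pred1.shape, IGNORE, dtype=np.int64)
    agree = (pred1 == pred2) & (conf > threshold)
    labels[agree] = pred1[agree]
    return labels
\end{verbatim}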
\begin{figure}[htb]
\centering
\includegraphics[width=.32\linewidth]{./images/vis_PL/im/aachen_000016_000019_leftImg8bit}
\includegraphics[width=.32\linewidth]{./images/vis_PL/im/dusseldorf_000181_000019_leftImg8bit}
\includegraphics[width=.32\linewidth]{./images/vis_PL/im/hamburg_000000_055414_leftImg8bit}
\includegraphics[width=.32\linewidth]{./images/vis_PL/gt/aachen_000016_000019_gtFine_labelIds}
\includegraphics[width=.32\linewidth]{./images/vis_PL/gt/dusseldorf_000181_000019_gtFine_labelIds}
\includegraphics[width=.32\linewidth]{./images/vis_PL/gt/hamburg_000000_055414_gtFine_labelIds}
\includegraphics[width=.32\linewidth]{./images/vis_PL/ps_pretrain/aachen_000016_000019_leftImg8bit}
\includegraphics[width=.32\linewidth]{./images/vis_PL/ps_pretrain/dusseldorf_000181_000019_leftImg8bit}
\includegraphics[width=.32\linewidth]{./images/vis_PL/ps_pretrain/hamburg_000000_055414_leftImg8bit}
\includegraphics[width=.32\linewidth]{./images/vis_PL/ps_epoch1/aachen_000016_000019_leftImg8bit}
\includegraphics[width=.32\linewidth]{./images/vis_PL/ps_epoch1/dusseldorf_000181_000019_leftImg8bit}
\includegraphics[width=.32\linewidth]{./images/vis_PL/ps_epoch1/hamburg_000000_055414_leftImg8bit}
\vspace{0.5em}
\caption{Illustration of pseudo labels used in the 2-round curriculum
learning in the GTA-to-Cityscapes DA experiments. The first row shows
the input images. The second row shows the ground truth segmentation
masks. The third and fourth row shows the pseudo labels used in the
first and second round of curriculum learning, respectively. Note in the
visualization of pseudo labels, white pixels indicate the unlabeled pixels. Best
viewed in color.} \label{fig:pseudo_label}
\vspace{-1em}
\end{figure}
\subsection{Loss Function}\label{ssec:loss}
\textbf{Weight-Constrained Loss.} As suggested in the standard
tri-training algorithm \cite{zhou2005tri}, the three classifiers in
$F_1$, $F_2$ and $F_t$ must be diverse. Otherwise, the training
degenerates to self-training. In our case, one crucial requirement to
obtain high-quality pseudo-labels from two labeling branches $F_1$ and
$F_2$ is that they should have different views on one sample and make decisions
on their own.
Unlike the case in the co-training algorithm \cite{blum1998combining},
where one can explicitly partition features into different sufficient
and redundant views, it is not clear how to partition deep features
effectively in our case. Here, we enforce divergence of the weights of
the convolutional layers of two labeling branches by minimizing their
cosine similarity. Then, we have the following filter weight-constrained
loss term:
\begin{equation}
L_w = \frac{\inner{\vec{w_1}}{\vec{w_2}}}{\norm{\vec{w_1}}\norm{\vec{w_2}}}
\end{equation}
where $\vec{w_1}$ and $\vec{w_2}$ are obtained by flattening and
concatenating the weights of the convolutional filters in the convolutional
layers of $F_1$ and $F_2$, respectively.
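In a training framework this term would be evaluated on the weight tensors themselves so that gradients can flow; the NumPy sketch below merely illustrates the quantity $L_w$ defined above:
\begin{verbatim}
import numpy as np

def weight_constraint_loss(filters1, filters2):
    # filters1, filters2: lists of conv-filter weight arrays of F1 and F2.
    w1 = np.concatenate([w.ravel() for w in filters1])
    w2 = np.concatenate([w.ravel() for w in filters2])
    return np.dot(w1, w2) / (np.linalg.norm(w1) * np.linalg.norm(w2))
\end{verbatim}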
\textbf{Weighted Pixel-wise Cross-entropy Loss.} In the curriculum
learning stage, we take a minibatch of samples with one half from
$\mathcal{S}$ and the other half from $\mathcal{T}_l$ at each step. We
calculate the segmentation losses separately for each half of samples.
For the source domain samples, we use the vanilla pixel-wise
softmax cross-entropy loss, denoted by $L_{\mathcal{S}}$, as the
segmentation loss function.
Furthermore, as mentioned in Sec.
\ref{ssec:labeling}, we assign pseudo labels to target domain pixels
based on predictions of two labeling branches. This mechanism tends to
assign pseudo labels to the prevalent and easy-to-predict classes, such
as the road, building, etc., especially in the early stage (this can be seen in Fig. \ref{fig:pseudo_label}). Thus, the
pseudo labels can be highly imbalanced in classes. If we treat all
classes equally, the gradients from challenging and relatively rare
classes will be insignificant and the training will be biased toward
prevalent classes. To remedy this, we use a weighted cross-entropy loss
for target domain samples, denoted by $L_{\mathcal{T}_l}$. We calculate
weights using the median frequency balancing scheme
\cite{eigen2015predicting}, where the weight assigned to class $c$ in
the loss function becomes
\begin{equation}
\alpha_c = \frac{median\_freq}{freq(c)},
\end{equation}
where $freq(c)$ is the number of pixels of class $c$ divided by the
total number of pixels in the source domain images whenever $c$ is
present, and $median\_freq$ is the median of these frequencies
$\{freq(c)\}_{c=1}^{C}$, where $C$ is the total number of classes.
This scheme works well under the assumption that the global class
distributions of the source domain and the target domain are similar.
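A sketch of this weight computation (Python/NumPy; the helper name is ours, and the masks are the source domain ground-truth label maps):
\begin{verbatim}
import numpy as np

def median_frequency_weights(masks, num_classes):
    pix = np.zeros(num_classes)  # pixels of class c
    tot = np.zeros(num_classes)  # pixels of all images in which c appears
    for m in masks:
        counts = np.bincount(m.ravel(), minlength=num_classes)[:num_classes]
        present = counts > 0
        pix[present] += counts[present]
        tot[present] += m.size
    freq = np.where(tot > 0, pix / np.maximum(tot, 1), 0.0)
    median_freq = np.median(freq[freq > 0])
    return np.where(freq > 0, median_freq / np.maximum(freq, 1e-12), 0.0)
\end{verbatim}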
\textbf{Total Loss Function.} There are two stages in our training
procedure. We first pre-train the entire network using minibatches from
$\mathcal{S}$ so as to minimize the following objective function:
\begin{equation}\label{eq:pretrain}
L = \alpha L_w + L_{\mathcal{S}}
\end{equation}
Once the curriculum learning starts, the overall objective function
becomes
\begin{equation}\label{eq:curriculum}
L = \alpha L_w + L_{\mathcal{S}} + \beta L_{\mathcal{T}_l}
\end{equation}
where $L_{\mathcal{S}}$ is evaluated on $\mathcal{S}$ and averaged over
predictions of $F_1$ and $F_2$ branches, $L_{\mathcal{T}_l}$ is
evaluated on $\mathcal{T}_l$ and averaged over predictions of all three
top branches, and $\alpha$ and $\beta$ are hyper-parameters determined
by the validation split.
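The two objectives then combine as in the following sketch ($\alpha=10^3$ and $\beta=100$ are the settings reported in Sec.~4):
\begin{verbatim}
def total_loss(L_w, L_source, L_target=None, alpha=1e3, beta=1e2):
    # Pre-training objective: alpha*L_w + L_S.
    # Once curriculum learning starts, beta*L_T is added.
    loss = alpha * L_w + L_source
    if L_target is not None:
        loss += beta * L_target
    return loss
\end{verbatim}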
\begin{table*}[htb]
\centering
\setlength\tabcolsep{3pt}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|}
\hline
\multirow{2}[4]{*}{Model} & \multicolumn{19}{c|}{per-class IoU} & \multirow{2}[4]{*}{mIoU} \\
\cline{2-20} & \begin{sideways}road\end{sideways} & \begin{sideways}sidewlk\end{sideways} & \begin{sideways}bldg.\end{sideways} & \begin{sideways}wall\end{sideways} & \begin{sideways}fence\end{sideways} & \begin{sideways}pole\end{sideways} & \begin{sideways}t. light\end{sideways} & \begin{sideways}t. sign\end{sideways} & \begin{sideways}veg.\end{sideways} & \begin{sideways}terr.\end{sideways} & \begin{sideways}sky\end{sideways} & \begin{sideways}person\end{sideways} & \begin{sideways}rider\end{sideways} & \begin{sideways}car\end{sideways} & \begin{sideways}truck\end{sideways} & \begin{sideways}bus\end{sideways} & \begin{sideways}train\end{sideways} & \begin{sideways}mbike\end{sideways} & \begin{sideways}bike\end{sideways} & \\
\hline
No Adapt & 31.9 & 18.9 & 47.7 & 7.4 & 3.1 & 16.0 & 10.4 & 1.0 & 76.5 & 13.0 & 58.9 & 36.0 & 1.0 & 67.1 & 9.5 & 3.7 & 0.0 & 0.0 & 0.0 & 21.1 \\
FCN \cite{hoffman2016fcns} & 70.4 & \textbf{32.4} & 62.1 & 14.9 & 5.4 & 10.9 & 14.2 & 2.7 & 79.2 & 21.3 & 64.6 & 44.1 & 4.2 & 70.4 & 8.0 & 7.3 & 0.0 & 3.5 & 0.0 & 27.1 \\
\hline
No Adapt & 18.1 & 6.8 & 64.1 & 7.3 & 8.7 & 21.0 & 14.9 & 16.8 & 45.9 & 2.4 & 64.4 & 41.6 & \textbf{17.5} & 55.3 & 8.4 & 5.0 & \textbf{6.9} & 4.3 & 13.8 & 22.3 \\
CDA \cite{Zhang_2017_ICCV} & 26.4 & 22.0 & 74.7 & 6.0 & \textbf{11.9} & 8.4 & 16.3 & 11.1 & 75.7 & 13.3 & \textbf{66.5} & 38.0 & 9.3 & 55.2 & \textbf{18.8} & \textbf{18.9} & 0.0 & \textbf{16.8} & \textbf{14.6} & 27.8 \\
\hline
No Adapt & 59.7 & 24.8 & 66.8 & 12.8 & 7.9 & 11.9 & 14.2 & 4.2 & 78.7 & 22.3 & 65.2 & 44.1 & 2.0 & 67.8 & 9.6 & 2.4 & 0.6 & 2.2 & 0.0 & 26.2 \\
Round 1 & 66.9 & 25.6 & 74.7 & 17.5 & 10.3 & 17.1 & 18.4 & 8.0 & 79.7 & 34.8 & 59.7 & \textbf{46.7} & 0.0 & 77.1 & 10.0 & 1.8 & 0.0 & 0.0 & 0.0 & 28.9 \\
Round 2 & \textbf{72.2} & 28.4 & \textbf{74.9} & \textbf{18.3} & 10.8 & \textbf{24.0} & \textbf{25.3} & \textbf{17.9} & \textbf{80.1} & \textbf{36.7} & 61.1 & 44.7 & 0.0 & \textbf{74.5} & 8.9 & 1.5 & 0.0 & 0.0 & 0.0 & \textbf{30.5} \\
\hline
\end{tabular}%
\caption{Adaptation from GTA to Cityscapes. All numbers are measured in \%. The last three
rows show our results before adaptation, after one and two rounds of curriculum learning using
the proposed FCTN, respectively.}\label{tab:GTA}%
\end{table*}%
\begin{figure*}[tb!h]
%
\centering
\includegraphics[width=.24\linewidth]{./images/vis_results/im/munster_000051_000019_leftImg8bit}
\includegraphics[width=.24\linewidth]{./images/vis_results/gt/munster_000051_000019_gtFine_labelIds}
\includegraphics[width=.24\linewidth]{./images/vis_results/pretrain/munster_000051_000019_leftImg8bit}
\includegraphics[width=.24\linewidth]{./images/vis_results/adapt/munster_000051_000019_leftImg8bit}
\includegraphics[width=.24\linewidth]{./images/vis_results/im/frankfurt_000001_040575_leftImg8bit}
\includegraphics[width=.24\linewidth]{./images/vis_results/gt/frankfurt_000001_040575_gtFine_labelIds}
\includegraphics[width=.24\linewidth]{./images/vis_results/pretrain/frankfurt_000001_040575_leftImg8bit}
\includegraphics[width=.24\linewidth]{./images/vis_results/adapt/frankfurt_000001_040575_leftImg8bit}
\includegraphics[width=.24\linewidth]{./images/vis_results/im/frankfurt_000001_002646_leftImg8bit}
\includegraphics[width=.24\linewidth]{./images/vis_results/gt/frankfurt_000001_002646_gtFine_labelIds}
\includegraphics[width=.24\linewidth]{./images/vis_results/pretrain/frankfurt_000001_002646_leftImg8bit}
\includegraphics[width=.24\linewidth]{./images/vis_results/adapt/frankfurt_000001_002646_leftImg8bit}
\begin{minipage}[b]{.24\linewidth}
\centering
\centerline{Input Image}\medskip
\end{minipage}
\begin{minipage}[b]{0.24\linewidth}
\centering
\centerline{}
\centerline{Ground Truth}\medskip
\end{minipage}
\begin{minipage}[b]{0.24\linewidth}
\centering
\centerline{No Adapt}\medskip
\end{minipage}
\begin{minipage}[b]{0.24\linewidth}
\centering
\centerline{Ours}\medskip
\end{minipage}
\caption{Domain adaptation results from the Cityscapes \texttt{Val} set. The
third column shows segmentation results using the model trained solely
by the GTA dataset, and the fourth column shows the segmentation results
after two rounds of the FCTN training (best viewed in color).}\label{fig:GTA_res}
\vspace{-1em}
%
\end{figure*}
\subsection{Training Procedure}\label{ssec:training}
The training process is illustrated in Algorithm \ref{alg:tri-training}.
We first pretrain the entire FCTN on the labeled source domain training
set $\mathcal{S}$ for $iters$ iterations, optimizing the loss function in
Eq. (\ref{eq:pretrain}). We then use the pre-trained model to generate
the initial pseudo labels for the target domain training set
$\mathcal{T}$, using the method described in Sec. \ref{ssec:labeling}.
We re-train the network using $\mathcal{S}$ and $\mathcal{T}_l$ for
several steps. At each step, we take a minibatch of samples with half
from $\mathcal{S}$ and half from $\mathcal{T}_l$, optimizing the terms
in Eq. (\ref{eq:curriculum}) jointly. We repeat the re-labeling of
$\mathcal{T}$ and the re-training of the network for several rounds
until the model converges.
\begin{algorithm}[H]
\caption{Training procedure for our fully convolutional tri-branch network (FCTN). }
\label{alg:tri-training}
\begin{algorithmic}
\renewcommand{\algorithmicrequire}{\textbf{Input:}}
\renewcommand{\algorithmicensure}{\textbf{Output:}}
\Require labeled source domain training set $\mathcal{S} = \{(x_i^s, y_i^s)\}_{i=1}^{n_s}$ and unlabeled target domain training set $\mathcal{T} = \{x_i^t\}_{i=1}^{n_t}$
\\ \textit{Pretraining on $\mathcal{S}$} :
\For {$i = 1$ to $iters$}
\State train $F, F_1, F_2, F_t$ with minibatches from $\mathcal{S}$
\EndFor
\\ \textit{Curriculum Learning with $\mathcal{S}$ and $\mathcal{T}$} :
\For {$i = 1$ to $rounds$}
\State $\mathcal{T}_l \gets$ \Call{Labeling}{$F, F_1, F_2, \mathcal{T}$} \Comment{See Sec. \ref{ssec:labeling}}
\For {$k = 1$ to $steps$}
\State train $F, F_1, F_2$ with samples from $\mathcal{S}$
\State train $F, F_1, F_2, F_t$ with samples from $\mathcal{T}_l$
\EndFor
\EndFor\\
\Return $F, F_t$
\end{algorithmic}
\end{algorithm}
\section{Experiments}\label{sec:experiments}
We validate the proposed method by experimenting the adaptation from the
recently built synthetic urban scene dataset GTA
\cite{richter2016playing} to the commonly used urban scene semantic
segmentation dataset Cityscapes \cite{Cordts2016Cityscapes}.
Cityscapes \cite{Cordts2016Cityscapes} is a large-scale urban scene
semantic segmentation dataset. It provides over 5,000 finely labeled
images (train/validation/test: 2,975/500/1,525) with per-pixel
category labels at a high resolution of $1024 \times
2048$. There are 34 distinct semantic classes in the dataset, but only
19 classes are considered in the official evaluation protocol.
GTA \cite{richter2016playing} contains 24,966 high-resolution labeled
frames extracted from the realistic open-world computer game Grand
Theft Auto V (GTA5). All frames are vehicle-egocentric and the class
labels are fully compatible with Cityscapes.
We implemented our method using Tensorflow\cite{abadi2016tensorflow} and
trained our model using a single NVIDIA TITAN X GPU. We initialized the
weights of shared base net $F$ using the weights of the
VGG-16 model pretrained on ImageNet. The hyper-parameter settings were
$\alpha=10^{3}, \beta=100$. We used a constant learning rate $10^{-5}$
in the training. We trained the model for $70k$, $13k$ and $20k$
iterations in the pre-training and two rounds of curriculum learning,
respectively.
We use synthetic data as source labeled training data and Cityscapes
\texttt{train} as an unlabeled target domain, while evaluating our
adaptation algorithm on Cityscapes \texttt{val} using the predictions
from the target specific branch $F_t$. Following Cityscapes official
evaluation protocol, we evaluate our segmentation domain adaptation
results using the per-class intersection over union (IoU) and mean IoU
over the 19 classes. The detailed results are listed in
Table~\ref{tab:GTA} and some qualitative results are shown in Fig.
\ref{fig:GTA_res}. We achieve the state-of-the-art domain adaptation
performance. Our two rounds of curriculum learning boost the mean IoU
over our non-adapted baseline by 2.7\% and 4.3\%, respectively.
In particular, the IoU improvements for small objects (\emph{e.g.}, pole,
traffic light, traffic sign) are significant (over 10\%).
\section{Conclusion}\label{sec:conclusion}
A systematic way to address the unsupervised semantic segmentation
domain adaptation problem for urban scene images was presented in this
work. The FCTN architecture was proposed to generate high-quality pseudo
labels for the unlabeled target domain images and learn from pseudo
labels in a curriculum learning fashion. It was demonstrated by the DA
experiments from the large-scale synthetic dataset to the real image
dataset that our method outperforms previous benchmarking methods by a
significant margin.
There are several possible future directions worth exploring. First, it
is interesting to develop a better weight constraint for the two
labeling branches so that even better pseudo labels can be generated.
Second, we may impose the class distribution constraint on each
individual image \cite{Zhang_2017_ICCV} so as to alleviate the confusion
between some visually similar classes, \emph{e.g.} road and sidewalk,
vegetation and terrain etc. Third, we can extend the proposed method to
other tasks, \emph{e.g.} instance-aware semantic segmentation.
\vfill\pagebreak
\bibliographystyle{IEEEbib}
|
2,869,038,154,005 | arxiv | \section{Introduction}
When lower mass stars evolve and come to core He burning,
their properties can become those of the ``instability strip'' in the HRD.
The atmospheric structure then reaches,
as it were, a certain ``undecidedness'',
causing rhythmic attempts to reach a stable state.
The atmosphere cannot find its ``thermal equilibrium'' in
the sense explored at a general level by Renzini et\,al. (1992)
in their gedankenexperiment of the ``gravothermal hysteresis cycle''.
The rhythmic expansion and contraction of the atmosphere of
pulsating RR\,Lyrae stars is caused by the $\kappa$-effect.
The opacity $\kappa$ in the layer in which He is ionized
has a rhythmic variation:
a higher level of ionization causes lower opacity,
leading to a higher photon throughput, causing local cooling,
which leads to recombination and thus to higher opacity,
and to reduced radial energy transport, thus to increasing inner temperatures,
coming full circle to increase ionization
(see, e.g., Cox 1980; Gautschy \& Saio 1995).
This cyclic behaviour produces hysteresis effects
in the colour indices of the emergent stellar radiation
as well as in various other observables (such as spectral line strengths).
These all are based on, of course, hysteresis in
surface gravity and effective temperature.
Using photometry of RR Lyr stars,
one can derive the variation in the parameters of the stellar surface.
The first investigations of this kind were carried out by
Oke \& Bonsack (1960), Oke et\,al. (1962), Oke (1966), and
Danziger \& Oke (1967), who used spectral scanner data
in comparison with model atmospheres to obtain $T_{\rm eff}$ and $\log g$
to calculate the change in radius.
They noted that the changes in the atmosphere imply that
one observes light from different atmospheric layers
so that the radii thus derived might not be reliable.
The information can also be obtained from Str\"omgren photometry.
After the establishment of the Str\"omgren-photometric system and its early
calibration (see, e.g., Breger 1974) this photometry was used
by van Albada \& de Boer (1975) to derive the parameters
$\Theta=5040/T_{\rm eff}$, $\log g$, and $R$ for all phases of the pulsation.
McNamara \& Feltz (1977) and Siegel (1982) also used the Str\"omgren-system.
Similar studies were performed by Lub (1977a, 1977b, 1979)
with the Walraven photometric system.
Only for a few RR\,Lyrae stars has the cycle of variation been followed
in photometric detail in the Str\"omgren system.
Thus only for a few of these stars is the variation in parameter values
over the cycle accurately known.
Photometry is normally performed sequentially
in whichever wavelength bands selected.
Sequentiality mandates good photometric conditions
to obtain accurate colour indices
for the derivation of reliable stellar parameters.
Moreover, sequential measurements in the chosen bands are
asynchronous among the bands.
Performing measurements simultaneously in well-selected photometric bands
makes it possible to obtain precise colour indices
even in poor photometric conditions.
For the research presented here, we used the
Bonn University Simultaneous CAmera, {B\sc usca} (Reif et\,al. 1999).
To transform the colour indices to astrophysical parameters, the
calibration of
the Str\"omgren system by Clem et\,al. (2004) has been used.
{B\sc usca} also allows a rapid succession of measurements
leading to a better coverage of fine structure in light curves.
The Baade-Wesselink method (Baade 1926, Wesselink 1946) requires the
measurement of the radial velocity variation to produce
a full characterization of a pulsating star.
Following van Albada \& de Boer (1975) we will again show that,
with accurate photometry, $R$ can be derived
(rather, the apparent angular extent,
which indicates the radius when the distance to the star is known)
and the changes in $R$
can be used to calculate the run of atmospheric velocities $V_{\rm pul}$
through the pulsational cycle.
Str\"omgren photometry allows, in principle,
to derive astrophysical parameters
more accurately than possible from the widely used Johnson photometry.
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{dBM-f1.ps}}
\caption[]{Lightcurves of the RR\,Lyrae stars in $y$.
Data from subsequent cycles have been combined
(see Table\,\ref{tabobs} for observing dates).
The curves have been shifted arbitrarily in $y$ to fit them into one panel.
For the stars lower in the figure, the lightcurve coverage is incomplete.}
\label{flightcurves}
\end{figure}
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{dBM-f2.ps}}
\caption[Photometry without beta]{Lightcurves of
\object{RR Gem} and \object{SY Ari}
observed with {B\sc usca} in the Str\"omgren-system.
Note the different brightness amplitude of the two stars.
At the bottom, the instrumental brightness of the mean of two
comparison stars (selected in the field of each variable) is shown.
}
\label{photn}
\end{figure}
\section{Observations and data reduction}
\label{secobs}
The observations were carried out in three sessions at the
Calar Alto Observatory of the Deutsch-Spanisches Astronomisches Zentrum (DSAZ)
in southern Spain in three successive winter seasons,
between 4 and 9 Jan.~2005, between 16 and 19 Dec.~2005,
and between 26 and 28 Nov.~2006.
The times of maximum brightness
within the observing sessions were calculated for each star from
the period $P$ and a reference epoch as
$t_{\rm max} = {\rm JD}({\rm ref.\ epoch}) + nP$ with appropriate integer $n$.
This allowed optimal phase-coverage planning of the photometry.
In the various figures to follow,
the data are plotted against phase $\Phi$ based on
the observed maximum.
For a few stars, the maximum could not be measured
so the phase was determined from early epoch maxima.
Our programme consisted of observing 18 relatively bright RR\,Lyrae stars
(for accurate positions and additional data, see Maintz 2005).
For 12 of these, (almost) complete lightcurves were obtained
(coverage $>$70\%) as given in Table\,\ref{tabobs}
and Fig.\,\ref{flightcurves} (and Fig.\,\ref{photn}).
However, for X\,CMi we missed the maximum.
Of 6 stars (GM\,And, OV\,And, GI\,Gem, TY\,Cam, TT\,Lyn, CZ\,Lac),
only smaller portions (15 to 57\%) of the light curve could be obtained;
these stars are not discussed further.
The stars selected are fundamental mode pulsators (see Fig.\,\ref{rrtype}).
They generally have lightcurve bumps.
\begin{table*}
\caption[Table of dates of observations]{The stars with basic information, the J.D. of the observations, and the phase ranges covered}
\begin{tabular}{lrrrrrr}
\hline
\hline
Star & \object{CI And} & \object{SY Ari} & \object{AR Per} & \object{BR Tau} & \object{BH Aur} & \object{TZ Aur} \\
\hline
RA & 01 55 08.29 & 02 17 34.04 & 04 17 17.19 & 04 34 42.89 & 05 12 04.26 & 07 11 35.01 \\
DEC & 43 45 56.47 & 21 42 59.26 & 47 24 00.63 & 21 46 21.72 & 33 57 46.95 & 40 46 37.13 \\
mean $y$ ; \ $E(B-V)$ \ [mag] & 11.86 ; 0.22&12.21 ; 0.25&9.40 ; 0.73&11.96 ; 0.31 & 11.43 ; 0.19 & 11.43 ; 0.19 \\
distance [pc] ; \ [M/H] & 1741 ; $-$0.83 & 2100 ; $-1.4:$\tablefootmark{a} & 612 ; $-$0.43 & 1835 ; $-0.7:$\tablefootmark{a} & 1400 ; +0.14 & 1500 ; $-$0.79 \\
period [d] & 0.484728 & 0.5666815 & 0.425551 & 0.3905928 & 0.456089 & 0.39167479 \\
exp. time [s] $v,b,y$; $u,b,y$ & 40; 40 & 70; 70 & 20; 60 & 70; 210 & 70; 210 & 30; 30\\
start of observations & & & & & & \\
\hspace*{3mm}at JD = 2453370. & +5.2914 & +5.2947 & & & & +5.6153 \\
phase covered & 0.82 to 1.26 & 0.66 to 1.05 & & & & 0.67 to 0.95 \\
& +7.4215 & +7.2612 & & & & +6.4747 \\
& 1.21 to 1.43 & 0.13 to 0.65 & & & & 0.86 to 1.60 \\
& & +8.3353 & & & & +7.5535 \\
& & 1.03 to 1.16 & & & & 0.62 to 0.97 \\
& & +9.2568 & & & & +8.3106 \\
 & 0.65 to 0.91 & & & & 0.55 to 0.89 \\
\hspace*{3mm}at JD = 2453720. & +1.32 & & & +3.2969 & +1.2980 & \\
phase covered & 0.66 to 0.85 & & & 0.53 to 1.14 & 0.52 to 1.39 & \\
& & & & & +2.5868 & \\
& & & & & 0.34 to 0.54 & \\
\hspace*{3mm}at JD = 2454060. & & & +7.2981 & +7.5013 & & \\
phase covered & & & 0.75 to 1.63 & 0.73 to 0.87 & & \\
\\[-6pt]
\hline
\hline
Star & \object{AA CMi} & \object{RR Gem} & \object{X CMi} & \object{TW Lyn} & \object{SZ Gem} & \object{AS Cnc} \\
\hline
RA & 07 17 19.17 & 07 21 33.53 & 07 21 44.62 & 07 45 06.29 & 07 53 43.45 & 08 25 42.11 \\
DEC & 01 43 40.06 & 30 52 59.45 & 02 21 26.30 & 43 06 41.56 & 19 16 23.93 & 25 43 08.80 \\
mean $y$ ; $E(B-V)$ [mag]&11.13 ; 0.16&11.03 ; 0.22&12.23 ; 0.35&11.91 ; 0.16 & 11.45 ; 0.13 & 12.58 ; 0.16\\
distance [pc] ; \ [M/H] & 1220 ; $-$0.15 & 1260 ; $-$0.29 & 1766 ; $-$0.71 & 1549 ; $-$0.66 & 1601 ; $-$1.46 & 2324 ; $-$1.89 \\
period [d] & 0.476327 & 0.397292 & 0.57138 & 0.481862 & 0.5011270 & 0.61752 \\
exp. time [s] $v,b,y$; $u,b,y$ & 40; 40 / 20; 60 & 20; 20 & 70; 70 & 40; 40 & 20; 20 & 40; 40 \\
start of observations & & & & & & \\
\hspace*{3mm}at JD = 2453370. & & +5.5257 & +8.6759 & +6.5937 & +5.5385 & +7.5695 \\
phase covered & & 0.98 to 1.49 & 1.08 to 1.17 & 1.16 to 1.51 & 0.39 to 0.77 & 0.73 to 1.05 \\
& & +6.4776 & +9.3641 & +7.2652 & +6.4800 & +8.4688 \\
& & 0.38 to 1.10 & 0.29 to 0.93 & 0.55 to 0.76 & 1.26 to 1.48 & 1.18 to 1.63 \\
& & +8.5121 & & +8.3421 & +7.6349 & +9.4268 \\
& & 0.50 to 0.88 & & 0.79 to 1.25 & 0.57 to 0.84 & 0.73 to 1.28 \\
& & & & +9.2625 & & \\
& & & & 0.70 to 0.76 & & \\
\hspace*{3mm}at JD = 2453720. & +2.4285 & & +1.5757 & & +1.5348 & +1.4625 \\
phase covered & 0.77 to 1.09 & & 1.21 to 1.32 & & 0.82 to 1.28 & 0.59 to 0.69 \\
\hspace*{3mm}at JD = 2454060. & +6.6066 & & +6.6110 & & & \\
phase covered & 1.02 to 1.32 & & 1.07 to 1.28 & & & \\
\hline
\hline
\end{tabular}
\vspace*{1mm}
\tablefoottext{a} [M/H] (in logarithmic units) estimated from our $m_1$
(see Sect.\,\ref{colourloops}).
\label{tabobs}
\end{table*}
{B\sc usca} (Reif et\,al. 1999) is operated at the 2.2m telescope of the
Calar Alto Observatory.
{B\sc usca} splits the telescope beam above the Cassegrain focus
via dichroic beam splitters into 4 wavelength channels
between 3200 and 9000 \AA, with edges at 4300, 5400 and 7300 \AA.
Each channel is equipped with a (4k)$^2$ CCD.
In each channel, a filter wheel allows the placement of
appropriate filters.
For the measurements we used the Str\"omgren $u,v,b,y$-system,
extended by the Cousins $I$ band\footnote{Since the Cousins $I$
is about 10 times wider than the bands of the Str\"omgren-system,
the $I$ measurements were mostly overexposed and have not been used
for this paper.
For a few stars we also made some measurements in the H-Balmer filters
H$\beta$W (wide) and H$\beta$N (narrow),
which are in the same channel as the Str\"omgren-$b$ filter.
Since the H$\beta$N (narrow) filter is much narrower still than the
Str\"omgren-filters, exposure times would have had to be excessively long and
coverage of the light curves in these filters poor,
so these data have not been used either.}.
The $y$ band is close to the cut-off of a {B\sc usca}-dichroic filter;
however, the transformation of $y$ into the standard system
could be performed without problems.
Because of the proximity of the wavelengths of $u$ and $v$,
their filters are in the same {B\sc usca} channel.
Since the $u$ and $v$ bands could not be measured simultaneously,
the photometry alternated between the simultaneous exposures in
$y,b,v,(I)$ and $y,b,u,(I)$.
For two stars, light curves in the four Str\"omgren-bands
are shown in Fig.\,\ref{photn}.
More data are shown in Maintz (2008a).
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{dBM-f3.ps}}
\caption[]{Plot of the amplitude in $y$ versus period of our stars
showing that they are fundamental mode RRa-type pulsators.
The band between the dotted lines marks the zone of RRa and RRb pulsators
(see Ledoux \& Walraven 1958).
}
\label{rrtype}
\end{figure}
To calibrate the data,
the instrumental brightness of each variable was obtained by comparison
with two non-variable stars in the field.
Where possible, one star with a brightness near the $y_{\rm max}$
of the RR\,Lyrae star was used
and another of brightness close to $y_{\rm min}$ of the variable.
(In the course of this, one suspected variable was quantified; see Maintz 2008b.)
The airmass correction was obtained from these comparison stars.
To secure the absolute calibration,
we observed appropriate stars from the list of Perry et\,al.\,(1987)
at moments in the light cycle of the variable
where the changes in brightness were small or monotonous.
The instrumental magnitudes of the (airmass-corrected) photometry
were related to the true magnitudes of the calibration stars.
This permitted the calibration of the RR\,Lyrae star photometry.
First, the RR\,Lyrae $y$ measurements were calibrated.
Since the instrumental $(b-y)_{\rm inst}$ from {B\sc usca} is more accurate
than $b_{\rm cal}$-$y_{\rm cal}$ (the same is true for the other indices),
the calibrated $b-y$ and $v-b$ were obtained from the instrumental index
and that of the calibration stars.
The sequentially obtained colour indices $u-y$ and $v-y$ were then used
to interpolate in time to obtain $u-v$, which was calibrated.
Using these indices,
$c_1= (u-v)-(v-b)$ and $m_1 = (v-b) - (b-y)$ were calculated.
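For reference, a short Python sketch of this reduction (the interpolation of the alternately measured $v$-channel indices onto the $u$ exposure epochs is indicated in the comment; variable names are ours):
\begin{verbatim}
import numpy as np

# u-y and v-y are measured alternately, so v-y (and v-b) are first
# interpolated in time onto the u exposure epochs t_u, e.g.
#   u_v = u_y - np.interp(t_u, t_v, v_y)

def colour_indices(u_v, v_b, b_y):
    c1 = u_v - v_b   # gauges the Balmer jump
    m1 = v_b - b_y   # gauges metal-line blanketing
    return c1, m1
\end{verbatim}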
The RR\,Lyrae star CCD photometry was performed with equal exposure times
throughout the cycle,
of 20\,s for the brightest and 70\,s for the faintest stars.
Exposure times are given in Table\,\ref{tabobs}.
For three stars the exposure times in the $u,b,y$ measurements were
$3\times$ those of the $v,b,y$ measurements (see Table\,\ref{tabobs}).
The (simultaneous) reading-out of the CCDs took about 2~min.
Thus the CCD exposures followed each other quickly,
within 2.5\,min for the brighter stars and 3.5\,min for the fainter ones.
The short exposure times did not lead to large time smearing,
not even when the star changed quickly in brightness.
In most cases, two stars were observed alternatingly in rapid succession
to achieve good light curve coverage for as many stars as possible.
This alternation explains why most light curves presented have data intervals
longer than 3.5~min.
Basic data ($d$, $E(B-V)$, [M/H]) were taken from Beers et\,al. (2000),
Fernley et\,al. (1998), and Layden (1994), in part taking averages.
Some of the stars are known to be behind gas with extinction
and we corrected for that using expressions given by Clem et\,al. (2004).
For four stars we calculated $d$ from $M_V$ based on [Fe/H]
using the formula of Fernley et\,al. (1998).
The adopted distances are given in Table\,\ref{tabobs}.
\section{Colour-index loops}
\label{colourloops}
For three stars the run through the cycle of the derived
absolutely calibrated colour indices is shown in Fig.\,\ref{photstrom}.
The $b-y$ versus $c_1$ diagram for two typical stars from our sample,
RR Gem and TZ Aur (Fig.\,\ref{colloops}, left panel),
shows clear colour index loops.
The index $m_1$ varies in line with $c_1$ (Fig.\ref{photstrom}).
Figure\,\ref{colloops} (right panel)
shows the very moderate $m_1$ loops in a colour-colour plot.
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{dBM-f4.ps}}
\caption[Photometry Stroemgren indices]{Lightcurves in the
calibrated Str\"omgren-indices of the stars \object{RR~Gem}, \object{SY Ari},
and \object{BR Tau}.
The phase coverage of the photometry
can be reconstructed from the information given in Table\,\ref{tabobs}
(for \object{BR~Tau} the coverage is only 70\%)
and can be recognised in Fig.\,\ref{flightcurves}.}
\label{photstrom}
\end{figure}
The ($c_1$,$b-y$) curves show the interplay
of the surface parameters $\log g$ and $T_{\rm eff}$ during the cycle.
The loop is essentially due to $c_1$,
which represents (above $T_{\rm eff} \simeq 6000$\,K)
the strength of the Balmer jump.
At low $T_{\rm eff}$, the Balmer level is hardly populated.
When the temperature increases (bluer $b-y$),
$T_{\rm eff}$ and $n_e$ (and thus $\log g$) of the atmosphere gas increase
and (collisional) excitation of hydrogen becomes noticeable.
When the temperature decreases the population of the Balmer level decreases
as well.
In RR\,Lyrae star atmospheres, the ionisation of hydrogen hardly plays a role
(except for an increase in $n_e$) because the highest temperatures
in the pulsational cycle are $T_{\rm eff} <9000$~K.
It is evident from the data that
the highest temperatures appear when the star is close to maximum light.
During most of the period, the measured colour indices
indicate low temperatures (large $b-y$, small $c_1$),
as visible in the clump of data points in the lower-right corner
of the left-side panels of Fig.\,\ref{colloops}.
The brightening leg and the dimming leg of the colour curves
do not overlap.
During brightening, $b-y$ changes faster than $c_1$,
which is indicative of a faster rise in $T_{\rm eff}$ than in
the excitation to the Balmer level.
In the dimming branch the converse is found (see Fig.\,\ref{colloops}, left).
Thus the colour curve exhibits hysteresis.
As for the loops in $m_1$ (Fig.\,\ref{colloops} right),
these are quite moderate and show little
in the way of hysteresis.
The change in $m_1$ over the cycle is on average 0.1 mag
($m_1$ is largest at maximum light).
For our stars, $m_1$ is larger for stars known to have larger [M/H]
(see also the calibration for red giant stars by Hilker 2000), while
for stars with [M/H] $<-1$\,dex $m_1$ is only marginally metal dependent.
At the $T_{\rm eff}$ of RR\,Lyrae stars,
cycle variations in $m_1$ related to the [M/H] are small.
Nevertheless, using these trends, we estimated [M/H] for the two stars
for which these values were unavailable from other sources
(see Table\,\ref{tabobs}).
The variation in $m_1$ over the cycle
must be caused by changes in temperature and electron density,
which both influence the level of ionisation and excitation of ions.
\begin{figure*}
\resizebox{\hsize}{!}{
\includegraphics{dBM-f5-1.ps}
\includegraphics{dBM-f5-2.ps}
}
\caption[colour loops]{
Colour loops in $b-y,c_1$ (at left)
and in $b-y,m_1$ (at right) of several stars.
The data points of the brightening phase are marked with~$\bullet$.\\
{\bf Left:}
Clear hysteresis-like behaviour in $b-y,c_1$ is shown for
\object{RR Gem} and \object{TZ Aur}.
Arrows mark the direction of time.
At phases between 0.3 and 0.85 (see Fig.\,\ref{photstrom}),
the colour index $c_1$ seems to just scatter.
The $b-y,c_1$ combination represents mostly $T_{\rm eff}$
but also Balmer excitation.\\
{\bf Right:}
Colour loops in $b-y,m_1$
(shown for \object{RR Gem}, \object{AS Cnc}, \object{SY Ari}, \object{TW Lyn},
and \object{SZ Gem}) marginally exhibit hysteresis-like behaviour.
The index $m_1$ generally represents metal content [M/H],
its change in RR\,Lyrae stars is mostly due to changing $T_{\rm eff}$.
}
\label{colloops}
\end{figure*}
\section{Behaviour of atmospheric parameters}
\subsection{Deriving parameters in the stationary case}
The parameters $T_{\rm eff}$ and $\log g$ were determined from our photometry
using the conversion grids given by Clem et\,al. (2004).
These grids are based on modelled spectral energy distributions and
are provided in steps of 0.5 dex of metal content [M/H].
Clem et\,al.\ made extensive calibrations of the Str\"omgren system
against the parameters of non-variable stars.
They show that individual Population\,II star colours
can be matched with models to within $\simeq$200\,K and $\simeq$0.1 dex,
which indicates the ultimate uncertainty of such a match.
For Population\,I stars, the effect
of small variations in metal content (0.3 dex) on the colours,
and thus on the derived $T_{\rm eff}$ and $\log g$, is small.
The dependence of colours on metal content is significant only
for $T_{\rm eff} < 6000$\,K.
For each star we used the grid for which [M/H] was closest to
its metal content.
The parameters $T_{\rm eff}$ and $\log g$ can be used to derive
additional parameters describing the behaviour of the stellar atmosphere.
The conditions in a regularly pulsating star
are in ``quasi-hydrostatic equilibrium'' (Cox 1980; Ch.\,8.3),
meaning that the use of a calibration based on stable stars is justified.
However, when an RR\,Lyrae star brightens from $V_{\rm min}$ to $V_{\rm max}$
within approximately one hour,
the atmosphere is probably not in quasi-hydrostatic equilibrium.
Doubling of Ca and H lines has been seen (e.g., Struve 1947, Sanford 1949)
and H lines have been seen in emission (e.g., Preston \& Paczynski 1964)
in that part of the RR\,Lyrae cycle,
which according to Abt (1959) originate from dynamic effects.
Since no models exist that take these effects for photometry into account
there is no other option than to use static models.
Furthermore, the light we detect comes from the photosphere, i.e.,
the layer in which, at the particular wavelength observed,
the optical depth is $\tau$ $\simeq$ $0.7$.
This means that in the course of the pulsational cycle it is not necessarily
the same gas from which the light detected emanates.
We return to these aspects later (Sect.\,\ref{sectau}).
The luminosity of a star is related to the surface parameters
$R$ and $T_{\rm eff}$ through the familiar equation
$L = 4\pi R^2\cdot \sigma T_{\rm eff}^4$,
which can be rewritten in the relative logarithmic form
\begin{equation}
\log{\Bigl(\frac{L}{L_\odot}\Bigr)} =
2 \log{\Bigl(\frac{R}{R_\odot}\Bigr)} +
4 \log{\Bigl(\frac{T}{T_\odot}\Bigr)} \ \ \ .
\label{elogLRT}
\end{equation}
The surface gravity of a star is defined as $g = {\rm G}\ {M}/{R^2}$.
This expression can be transformed into a logarithmic one
relative to solar values given by
\begin{equation}
\log{\Bigl(\frac{g}{g_\odot}\Bigr)} =
\log{\Bigl(\frac{M}{M_{\odot}}\Bigr)}
- 2 \log{\Bigl(\frac{R}{R_\odot}\Bigr)} \nonumber \ \ \ .
\label{elogMRg}
\end{equation}
Using Eqs.\,\ref{elogLRT} and \ref{elogMRg}, one can then eliminate
the radius so that the combined equation has
the stellar mass, temperature, gravity, and luminosity as variables,
\begin{equation}
\log{ \Bigl(\frac{L}{L_\odot}\Bigr)} + \log g =
4\log{T_{\rm eff}} + \log{ \Bigl(\frac{M}{M_{\odot}}\Bigr)} -10.68 \ \ \ ,
\label{elogMTgL}
\end{equation}
where $-10.68 \simeq (\log g - 4\log T_{\rm eff})_{\odot}$ in cgs units.
This equation can be used to calculate $\log g$.
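A one-line implementation (Python; the constant $-10.68$ is taken from Eq.\,(\ref{elogMTgL}) as printed):
\begin{verbatim}
import numpy as np

def log_g_TLM(L_over_Lsun, T_eff, M_over_Msun=0.7):
    # Eq. (elogMTgL) solved for log g, cgs units.
    return (4.0 * np.log10(T_eff) + np.log10(M_over_Msun)
            - 10.68 - np.log10(L_over_Lsun))
\end{verbatim}
With the cycle averages of \object{RR Gem} from Table\,\ref{ttabaver} ($\langle L \rangle = 43.5$, $\langle T_{\rm eff}\rangle = 6647$\,K) this returns $\log g \simeq 2.82$, close to the tabulated 2.84.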
\subsection{Deriving parameters in the pulsating case}
\label{parpulsating}
In a pulsating atmosphere the parameters $T_{\rm eff}$, $\log g$, and $R$
vary continuously
(note that no time subscripts are used in this paper).
\noindent
$\bullet$ $T_{\rm eff}$, $L$.\hspace*{2mm}
The time-dependent values $L$ and $T_{\rm eff}$ can be derived from the
photometry if one knows the star's distance.
$T_{\rm eff}$ is simply inferred from a calibrated $b-y$
(using the grid of Clem et\,al. 2004).
$L$ has to be determined from $y$,
the integral over the spectral energy distribution
and the distance $d$ of the star.
As for $L$, an RR\,Lyrae star has the convenient property that
the maximum of its spectral energy distribution (in $B_{\lambda}$)
lies close to the middle of the visual wavelength band,
near Johnson $V$ or Str\"omgren $y$.
This is true during the entire cycle:
$T_{\rm eff}$ actually varies only between roughly 5000 and 9000\,K.
This means that
the bolometric correction is neither large nor varies a lot.
We have performed the bolometric correction
using the classic values of Schmidt-Kaler (1982).
We fitted a quadratic function of $T_{\rm eff}$ to these corrections
in the temperature range relevant for RR\,Lyrae stars.
Distances were adopted as described in Sect.\,\ref{secobs}.
If the distance $d$ of a star were off by 20\%,
this would result in an error of 0.08 in $\log L$.
Results for $L$ and $T_{\rm eff}$ are shown in Fig.\,\ref{tefflloop}.
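A sketch of this determination (Python; the quadratic fit to the Schmidt-Kaler (1982) bolometric corrections is not reproduced here, so {\tt bc} must be supplied by the user, and $M_{\rm bol,\odot}=4.74$ is an assumed reference value):
\begin{verbatim}
import numpy as np

M_BOL_SUN = 4.74  # assumed solar reference value

def log_L(y_mag, A_y, d_pc, bc):
    # y_mag: apparent y; A_y: extinction in y; d_pc: distance in pc;
    # bc: bolometric correction at the current T_eff.
    M_y = y_mag - A_y - 5.0 * np.log10(d_pc / 10.0)
    return (M_BOL_SUN - (M_y + bc)) / 2.5
\end{verbatim}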
\noindent
$\bullet$ $R$.\hspace*{2mm}
The radius can be calculated from Eq.\,\ref{elogLRT}.
For examples of radius change see Fig.\,\ref{parcurve}.
\noindent
$\bullet$ $\log g$, $\log g_{\rm BJ}$, $\log g_{\rm eff}$.\hspace*{2mm}
The surface gravity $g$ can be calculated from Eq.\,\ref{elogMTgL}
if one knows the luminosity of the star (discussed above)
and its mass (see below).
This gravity is
like an overall equilibrium gravity and is henceforth called $g(T,L,M)$.
There is a further aspect affecting the gravity.
In a pulsating atmosphere, the actual surface gravity
($g(T,L,M)$ at the level $\tau \simeq 0.7$) is modified
due to acceleration of the atmosphere during the pulsation cycle
with respect to the ``normal'' gravity.
Thus, $g$ is modified by an acceleration term, $d^2R/dt^2$.
The total gravity is called the ``effective'' gravity
\begin{equation}
g_{\rm eff} = g(T,L,M) + \frac{d^2R}{dt^2} \ \ \ ,
\label{eloggeff}
\end{equation}
indicating the vertical gravitational force in the atmosphere.
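Given the radius curve resampled at equidistant phase steps (Sect.\,\ref{resampling}), the acceleration term can be estimated with a periodic central difference; in the sketch below (Python) the equilibrium term is written as $GM/R^2$:
\begin{verbatim}
import numpy as np

G = 6.674e-8  # gravitational constant, cgs

def g_eff(R_cm, period_s, M_g):
    # R_cm: radius at n equidistant phase steps (periodic wrap-around).
    dt = period_s / len(R_cm)
    d2R_dt2 = (np.roll(R_cm, -1) - 2.0 * R_cm + np.roll(R_cm, 1)) / dt**2
    return G * M_g / R_cm**2 + d2R_dt2
\end{verbatim}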
The time-dependent Str\"omgren photometry measures
the actual condition of a stellar atmosphere.
The gravity derived from $b-y,c_1$ is a function of gas density,
visible in the excitation of the Balmer level
and in $c_1$ representing the Balmer jump.
We will call this gravity the {\bf ``Balmer-jump'' gravity},
$\log g_{\rm BJ}$.
This gravity is independent of $\log g_{\rm eff}$ and instead
represents the pressure of the gas sampled, i.e., $P_{\rm gas}=f(T,\rho)$.
We will return to the values derived for $\log g(T,L,M)$
and $\log g_{\rm BJ}$ in Sect.\,\ref{secggeff}.
\noindent
$\bullet$ $M_V$, $M$.\hspace*{2mm}
RR\,Lyrae star distances have thus far been derived
using a reference value for $M_V$.
This value of $M_V$ does not apply to all RR\,Lyrae stars
since horizontal-branch (HB) stars
evolve over more than $\Delta \log L = 0.1$ on the HB.
If, e.g., $M_V$ of a star were to differ by 0.25 mag from the
reference value adopted, $L$ would be off by 0.10.
The mass of RR\,Lyrae stars is in the range from $0.6$ to $0.8$\,M$_{\odot}$.
We adopted a mass of $M = 0.7$\,M$_{\odot}$,
allowing for the possibility that our RR\,Lyrae stars deviate by up to 15\% from
that value.
This deviation propagates to a maximum deviation of 0.06 dex
in a calculated $\log g(T,L,M)$.
A few observational determinations of the absolute magnitude $M_V$ and
mass $M$ of horizontal-branch and RR\,Lyrae stars exist,
such as those of de Boer et\,al.{} (1995), Moehler et\,al.{} (1995),
de Boer et\,al.{} (1997),
Moehler et\,al.{} (1997), Tsujimoto et\,al.{} (1998), and Gratton (1998).
We return to the effect of incorrect values
of these reference parameters in Sect.\,\ref{errorbudget}.
\vspace*{1mm}
The run of $T_{\rm eff}$, $\log g_{\rm BJ}$, and $R$ derived as described
above are shown for three of our stars in Fig.\,\ref{parcurve}.
The shape of these curves is very similar to those presented by
van Albada \& de Boer (1975).
But the data of Fig.\,\ref{parcurve} is less noisy because
the photometry with CCDs is faster and more precise than possible
with the slower photomultipliers of that time,
and because of the simultaneity of the {B\sc usca} measurements.
\begin{table}
\caption[]{Cycle averages of several RR Lyr star parameters}
\begin{center}
\begin{tabular}{llccc}
\hline
\hline
Star \hspace*{3mm} & $\langle L \rangle$\hspace*{2mm} & $\langle T_{\rm eff}\rangle$ & $\log \langle g(T,L,M) \rangle$ & $\log \langle g_{\rm BJ} \rangle$ \\
\hline
\object{RR Gem} & 43.5 & 6647 & 2.84 & 3.33 \\
\object{TW Lyn} & 31.6 & 6196 & 2.85 & 3.61 \\
\object{AS Cnc} & 49.3 & 6592 & 2.77 & 3.46 \\
\object{SY Ari} & 50.0 & 6059 & 2.61 & 2.93 \\
\object{SZ Gem} & 52.9 & 6739 & 2.78 & 3.38 \\
\object{BH Aur} & 46.4 & 5997 & 2.63 & 3.04 \\
\object{TZ Aur} & 50.3 & 5934 & 2.58 & 2.86 \\
\hline
\object{AR Per}\tablefootmark{a} & 56.3 & 6030 & 2.55 & 3.66 \\
\object{CI And}\tablefootmark{b} & 47.7 & 6270 & 2.69 & 3.19 \\
\object{BR Tau}\tablefootmark{c} & 44. & 5980 & 2.65 & 2.92 \\
\object{AA CMi}\tablefootmark{c} & 45. & 6340 & 2.73 & 3.28 \\
\hline
\end{tabular}
\end{center}
\tablefoot{
$L$ in L$_{\odot}$; $T$ in K; $g$ in cm\,s$^{-2}$.
\ $L$ derived from $y$, $A_V$, $d$, and B.C.\\
\tablefoottext{a}{Light curve sparsely measured (see Fig.\,\ref{flightcurves}).}
\tablefoottext{b}{Light curve with a large gap in the descending branch; interpolated.}
\tablefoottext{c}{Relatively large observing gaps (see Fig.\,\ref{flightcurves}); interpolation uncertain.}
}
\label{ttabaver}
\end{table}
\subsection{Phase resampling in steps of 0.02}
\label{resampling}
To facilitate the calculation of averages over the phase
and additional analyses,
we resampled the curves of $L$, $T_{\rm eff}$, $R$, and $\log g_{\rm BJ}$
as derived from the observational data using steps of $\Delta\Phi=0.02$
(starting with $i=0$ near maximum light).
These resampled values (given henceforth with subscript 0.02)
can be easily integrated (fixed time step).
The result of these resamplings can be seen as curves in
Figs.\,\ref{tefflloop} and \ref{parcurve} (and in further figures).
Time averages over the cycle of our stars have been calculated based on the
0.02 phase stepped lightcurves.
For a few stars, the lightcurve coverage was not complete and we made
reasonable interpolations in the gaps to obtain the time averages.
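A sketch of this resampling and of the averaging used below (Python; linear interpolation with periodic wrap-around is assumed here, whereas the gaps in the coverage were in practice bridged by the interpolations just described):
\begin{verbatim}
import numpy as np

def resample_and_average(phase, values, n=50):
    # phase: observed phases folded into [0, 1); values: e.g. T_eff or L.
    grid = np.arange(n) / n                # steps of Delta Phi = 0.02
    order = np.argsort(phase)
    resampled = np.interp(grid, phase[order], values[order], period=1.0)
    return grid, resampled, resampled.mean()  # mean: sum of 50 values / 50
\end{verbatim}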
\subsection{Time-averaged quantities}
\label{timeaverage}
The structure of a ``non-pulsating RR Lyr star'' is given by the
time-averaged parameters $\langle T_{\rm eff}\rangle$,
$\langle L\rangle$ and $\langle g \rangle$.
In a purely periodic system, the time-averaged gravity, $\langle g \rangle$,
has zero contribution from $d^2R/dt^2$.
Oke and collaborators and van Albada \& de Boer felt that
since $g$ varies strongly, the ``average'' value
is best derived from the quiet part of the cycle,
i.e., from $0.3<\Phi<0.85$.
However, Fig.\,\ref{favvrad} makes clear that this has shortcomings.
In that phase interval, $\log g(T,L,M)$ is (for most stars)
lower than average by between 0.1 and 0.2 dex,
because the atmosphere is then slowly but continuously contracting
(but see Sect.\,\ref{secggeff}).
The time-averaged quantities of essential parameters were calculated from
the resampled curves in steps of $\Delta \Phi=0.02$ (see above),
the integral was derived as the sum of 50 values.
Thus
\begin{equation}
\langle T_{\rm eff} \rangle = \int T_{\rm eff} \ dt = \bigl(\Sigma_{i=1}^{50}\ (\,(T_{\rm eff})_{0.02})\,_i\bigr)\ /\ 50
\end{equation}
and
\begin{equation}
\langle L \rangle = \bigl(\Sigma_{i=1}^{50}\ (L_{0.02})\,_i\bigr)\ /\ 50 \ \ \ ,
\end{equation}
individual values of $L_{0.02}$ being calculated as described
in Sect.\,\ref{parpulsating}.
The averages of the two gravity versions are
\begin{equation}
\langle g(T,L,M) \rangle = \bigl(\Sigma_{i=1}^{50}\ (\,g(T,L,M)_{0.02})\,_i\bigr)\ /\ 50
\end{equation}
and
\begin{equation}
\langle g_{\rm BJ} \rangle = \bigl(\Sigma_{i=1}^{50}\ (\,(g_{\rm BJ})_{0.02})\,_i\bigr)\ /\ 50 \ \ \ .
\end{equation}
Time-averaged values for 11 stars are given in Table\,\ref{ttabaver}
and are for four stars included in Fig.\,\ref{tefflloop}.
It can immediately be seen that the values of
$\langle g(T,L,M) \rangle$ and $\langle g_{\rm BJ} \rangle$
are quite dissimilar.
We discuss this further in Sect.\,\ref{secggeff}.
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{dBM-f6.ps}}
\caption[Loops in Teff and L]{For four RR\,Lyrae
(\object{TW Lyn}, \object{RR Gem}, \object{AS Cnc}, and \object{SY Ari}),
the run of $T_{\rm eff}$ and $L$ is shown in an HRD.
The originally derived values are plotted as 3-pointed stars,
the resampled curve (in steps of 0.02 in phase, Sect.\,\ref{resampling})
is shown as a full line.
The mean values over the cycle (from Table\,\ref{ttabaver})
are given as~$\bullet$.
The background data (ZAMS, metal-poor ZAHB,
the evolution tracks of a 0.55 and a 0.6 M$_{\odot}$\ HB star)
are taken from de Boer \& Seggewiss (2008; Fig 10.9).
Two lines of constant $R$ are indicated.
The almost vertical dashed lines mark the approximate location
of the pulsational instability strip from Gautschy \& Saio (1995).
The stellar surface parameters of the RR\,Lyrae show hysteresis.
}
\label{tefflloop}
\end{figure}
\subsection{Error budget}
\label{errorbudget}
In our analyses, the following uncertainties had to be taken into account.
Photometric errors are, given the instrumental characteristics of {B\sc usca},
small.
We estimated the uncertainties in $y$ to be 0.03 mag.
The bolometric corrections are small and introduce errors of up to 2\%.
This leads to a total typical {\sl photometric} uncertainty in $L$ of 3\%.
In the colour indices $b-y$, $m_1$, and $c_1$, the uncertainties are 0.02 mag
(smaller than the error in $y$
because of the simultaneity of the measurements).
We used distances from the literature.
The effect of their uncertainties on $L$ is given below.
To obtain $T_{\rm eff}$ and $\log g_{\rm BJ}$,
the indices $b-y$ and $c_1$ were used with the grid of Clem et\,al. (2004).
The grid is not perfect and reading $T_{\rm eff}$ and $\log g_{\rm BJ}$ from
that grid by eye, we assessed that
(based on readings performed three times for the entire cycle for one star)
the uncertainty in $T_{\rm eff}$ is about 1\%,
and in $\log g_{\rm BJ}$ about 0.05 dex.
Since the effects of metallicity, [M/H],
are only significant for $T_{\rm eff} < 6000$\,K,
which occurs for only 3 of our less well-observed stars near minimum light,
we are confident that only at those temperatures are our error estimates
affected by a possible small mismatch in [M/H].
Given the measurement uncertainties in $L$ (3\%) and $T_{\rm eff}$ (1\%),
it follows that the uncertainty in the calculated $R$ values is 4\%.
If, in addition, a distance were incorrect by +10\% that would give
an additional offset for a given star in $R$ of +5\%.
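For reference, assuming Eq.\,\ref{elogLRT} has the standard form $L \propto R^2\,T_{\rm eff}^4$,
so that $R \propto L^{1/2}\,T_{\rm eff}^{-2}$,
linear propagation of the measurement errors gives
\begin{displaymath}
\frac{\sigma_R}{R} \simeq \frac{1}{2}\,\frac{\sigma_L}{L} + 2\,\frac{\sigma_{T_{\rm eff}}}{T_{\rm eff}}
= \frac{1}{2}\,(3\%) + 2\,(1\%) \simeq 4\% .
\end{displaymath}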
As mentioned in Sect.\,\ref{parpulsating},
we assumed a fixed mass for all stars of 0.7\,M$_{\odot}$,
and indicated a margin of error of 15\%.
This error would lead for a given star to an offset of 0.06 dex
in a calculated $\log g(T,L,M)$.
The values of $\log g(T,L,M)$\ calculated from Eq.\,\ref{elogMTgL}
have a measurement error of about 8\% or 0.03 dex.
A distance error of +10\% leads to an offset in $\log g(T,L,M)$\ of $-$0.04,
an error in the mass $M$ of +10\% to an offset in $\log g(T,L,M)$\ of +0.04.
Finally we note that the distances of RR\,Lyrae have essentially all
been derived from a reference $M_V$ (distance from distance modulus),
in some cases adjusted for metal content, [Fe/H].
For a review of the latest calibrations we refer to Sandage \& Tamman (2006).
However, in spite of all calibration efforts,
the calibrated $M_V$ has a spread of between 0.2 and 0.4 mag for a given [Fe/H],
or an uncertainty of up to 1.0 mag if [Fe/H] is unknown.
Moreover, given the evolution of RR\,Lyrae stars on the HB,
a single and universal value of $M_V$ does not exist anyway;
the evolution of an HB star may lead to an increase in $\log L$ of
0.5 dex when reaching the terminal-age HB
(see, e.g., de Boer \& Seggewiss 2008; their figure~10.9).
Therefore, if a considered star were brighter than the chosen reference value,
its derived distance modulus and thus distance
would be too small.
If, unbeknownst to us, the true $M_V$ of the star
is 0.25 mag brighter than the reference value,
the distance (as given in Table\,\ref{tabobs})
will turn out to be too small by 12\%.
This affects the derived $L$, then being too small by 0.10 dex.
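These numbers follow directly from the distance modulus: with the adopted $M_V$ too faint by
0.25~mag,
\begin{displaymath}
\Delta \log d = 0.25/5 = 0.05~{\rm dex}\;\;(10^{0.05} \simeq 1.12)
\qquad {\rm and} \qquad
\Delta \log L = 2\,\Delta \log d = 0.10~{\rm dex},
\end{displaymath}
i.e., the derived $d$ and $L$ are too small by 12\% and 0.10~dex, respectively.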
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{dBM-f7.ps}}
\caption[parameter curves]{Run of the parameters $T_{\rm eff}$,
$\log g_{\rm BJ}$, and $R$ (derived as described in Sect.\,\ref{parpulsating})
during the pulsational cycle of \object{RR Gem}, \object{AS Cnc},
and \object{TW~Lyn} using the calibration grid by Clem et\,al. (2004).
The thin dashed lines are the values resampled at $\Delta\Phi$ = 0.02
(see Sect.\,\ref{resampling}).
}
\label{parcurve}
\end{figure}
The mean values of Table\,\ref{ttabaver}
were derived from a large number of individual measurements,
so the observational scatter is basically cancelled.
What remains is the error related to the calibration grid and
also the error in the assumed distance and/or $M_V$
(affecting $\log g_{\rm BJ}$ and $L$ as just indicated, as well as $R$).
\section{Hysteresis of the parameters $T_{\rm eff}$, $R$, and $\log g_{\rm BJ}$}
The values of the atmospheric parameters for all points in the
cycle were derived from the photometry as described above.
In Fig.\,\ref{tefflloop} we show the run of $T_{\rm eff}$ and $L$
of four of our stars in an HRD.
These parameters exhibit loops through the diagram
extending over and beyond the instability strip.
These loops are reminiscent of hysteresis.
In the older literature, the lagging behind in the change of some parameter
compared to another parameter is called ``phase-lag'' (see, e.g., Cox 1980).
The behaviour of $T_{\rm eff}$ and $\log g_{\rm BJ}$ against phase
(Fig.\,\ref{parcurve}) shows remarkable features.
For all stars, $\log g_{\rm BJ}$ begins to increase
about $\Delta\Phi=0.05$ before $T_{\rm eff}$ starts to increase.
Furthermore,
$\log g_{\rm BJ}$ reaches its highest value
when $T_{\rm eff}$ has increased about halfway to its maximum value.
And when $T_{\rm eff}$ reaches its maximum,
$\log g_{\rm BJ}$ is already decreasing (at phase $\simeq 0$).
The resulting value of $R$
(that of Fig.\,\ref{parcurve} is derived from Eq.\,\ref{elogLRT})
indicates that $R$ is smallest when $\log g_{\rm BJ}$ is largest.
The run of $R$ shows that RR\,Lyrae are relatively extended stars
for about half of the cycle ($0.25 < \Phi < 0.75$),
while large changes in radius take place in the interval $0.75 < \Phi < 1.25$,
with the most rapid change occurring at $0.9 < \Phi < 1.15$.
At $\Phi \simeq 0.9$, a drastic reduction in the derived $R$ takes place
(the atmosphere apparently starts to collapse),
reaching the smallest $R$ near $\Phi\simeq 0.95$.
At smallest $R$ the atmosphere must have a high density.
Up to the highest density, $T_{\rm eff}$ is at the same level
as in the quiet phase leading up to the collapse.
Soon after $\Phi\simeq 0.95$, $R$ increases again.
In general,
when $y$ is brightest $T_{\rm eff}$ is at its highest and $R$ is very small;
but immediately after minimum light, at the high peak in $\log g_{\rm BJ}$,
$R$ is at its smallest.
\section{Spectral line strengths}
\label{speclines}
For three of our stars (\object{TZ Aur}, \object{RR Gem}, \object{RS Boo}),
spectra were obtained
(in January 2006, independently of the Str\"omgren-photometry)
with the focal reducer spectrograph at the 1\,m telescope of the
Observatory ``Hoher List'' of the AIfA.
Radial velocities could not be obtained with this instrument.
The spectra have a resolution of $\sim$3~\AA.
Because of telescope size and focal-reducer set-up,
exposure times had to be rather long,
of the order of 3\% of the period, leading to phase smearing.
We determined the equivalent widths of the lines of H$\alpha$, H$\beta$,
H$\gamma$, Na\,{\sc i}\,D, and Ca\,{\sc ii}\,K.
For RR\,Gem and TZ\,Aur, we show (Fig.\,\ref{rrgemspec})
how the strengths of the H$\gamma$ and Ca\,K lines vary during the cycle.
Results for RS\,Boo are not shown because the lightcurve coverage was poor.
\noindent
{\bf H}$\gamma$.
The strength of Balmer lines is governed by $T_{\rm eff}$ as expected
since the Balmer level can be populated
only during the higher $T_{\rm eff}$ part of the RR\,Lyrae cycle.
The temporal behaviour of $W$(H$\gamma$) and $T_{\rm eff}$
(Fig.\,\ref{rrgemspec}) is indeed very similar.
\noindent
{\bf Ca\,{\sc ii}}\,K.
The strength of Ca\,K is predominantly set by the gas density, i.e.,
the ionization balance forces Ca\,{\sc ii} to be reduced at higher density
(when $R$ is smallest).
The curves of $W$(CaK) and $R$ are very similar in terms of
their temporal behaviour (clearly for RR\,Gem).
The Na\,{\sc i}\,D absorption is very weak and provides little information,
except that Na\,{\sc i}\,D is strong when $T_{\rm eff}$ is at its lowest;
the increased strength is caused by the shift of the ionization balance
towards the neutral state at lower $T_{\rm eff}$.
\section{The variation in the pulsation velocity $V_{\rm pul}$}
\subsection{The coarse variation in $V_{\rm pul}$}
\label{varvrad}
Figure\,\ref{parcurve} shows for three stars the change in radius, $R$,
as derived from Eq.\,\ref{elogLRT}.
One can thus calculate the velocity of the stellar atmosphere
(the ``pulsation velocity'', $V_{\rm pul}$).
We used
\begin{equation}
V_{\rm pul} = \frac{\Delta R}{\Delta t} =
\frac{R_{\Phi}-R_{\Phi+\Delta\Phi}}{t_{\Phi}-t_{\Phi+\Delta\Phi}} \ \ \ ,
\label{evrad}
\end{equation}
where $R$ is sampled at fixed intervals of
$\Delta \Phi= 0.02$ (see Sect.\,\ref{timeaverage}).
To avoid being affected by peaks in the noise we applied
a running triangular (1,2,1) smoothing to the curves of $R$
before calculating $V_{\rm pul}$.
In Fig.\,\ref{favvrad}, $V_{\rm pul}$ (as well as the smoothed $R$) is given
for eight stars with the best data.
Note that $V_{\rm pul}$ plotted is that seen from the centre of the star
(and is not a heliocentric value).
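As an illustration only (variable names are hypothetical and full phase coverage is assumed,
so that the smoothing may wrap around the cycle),
the derivation of $V_{\rm pul}$ from the resampled $R$-curve (Eq.\,\ref{evrad})
can be sketched in Python:
\begin{verbatim}
# Pulsation velocity from the resampled R-curve: forward difference
# after a running triangular (1,2,1) smoothing.
import numpy as np

RSUN_KM = 6.957e5              # solar radius in km
DAY_S = 86400.0                # seconds per day

def v_pul(radius_rsun, period_days):
    """V_pul in km/s; positive values denote expansion."""
    r = np.asarray(radius_rsun, dtype=float)   # 50 samples, dPhi = 0.02
    r_smooth = (np.roll(r, 1) + 2.0*r + np.roll(r, -1)) / 4.0
    dt = 0.02 * period_days * DAY_S            # seconds per phase step
    dr = (np.roll(r_smooth, -1) - r_smooth) * RSUN_KM
    return dr / dt
\end{verbatim}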
\begin{figure}
\resizebox{\hsize}{!}{
\includegraphics{dBM-f8-1.ps}
\includegraphics{dBM-f8-2.ps}
}
\caption[spectral line strengths]
{For \object{RR Gem} and \object{TZ Aur}, the run of the equivalent widths $W$
of the spectral lines Ca\,{\sc ii}\,K and H$\gamma$ are shown and compared
with the parameters $R$ and $T_{\rm eff}$ (as in Fig.\,\ref{parcurve}).
The vertical scale is that of $R$ [R$_{\odot}$] of RR\,Gem.
The values of the other parameters ($W$ in \AA\ and $T_{\rm eff}$ in K) have
been renormalized to fit this vertical scale (see the labels to the curves).
For RR\,Gem, H$\gamma$ is proportional to $T_{\rm eff}$ and
is strongest when $T_{\rm eff}$ is largest (stronger excitation);
Ca\,K is weakest when the radius $R$ is smallest
(at constant $T$ and higher density, $n_e$ shifts the ionization balance).
The spectral observations of RR\,Gem come from two nights
in the same time frame as but independent of the Str\"omgren photometry
(explaining the overlap of data in phase).
In TZ\,Aur the relation of $R$ with Ca\,K is less clear.
The spectra of TZ\,Aur did not cover the entire light curve
(gap near $\Phi = 0.15$).
}
\label{rrgemspec}
\end{figure}
\begin{figure*}
\resizebox{\hsize}{!}
{\includegraphics{dBM-f9-1.ps}
\includegraphics{dBM-f9-2.ps}
\includegraphics{dBM-f9-3.ps}
\includegraphics{dBM-f9-4.ps}}
\resizebox{\hsize}{!}
{\includegraphics{dBM-f9-5.ps}
\includegraphics{dBM-f9-6.ps}
\includegraphics{dBM-f9-7.ps}
\includegraphics{dBM-f9-8.ps}}
\caption[curves with average vrad]
{The change in $V_{\rm pul}$ is shown together with the change in $R$
(note that the scale of $R$ differs from panel to panel).
{\sl Top row}: the ``well behaved'' and well-observed stars.
{\sl Bottom row}: the stars with less well behaved $V_{\rm pul}$-curves
(partly due to poor light-curve coverage and to interpolation).
{\bf Top curve and data in each panel}:
The values of $R$ were calculated from photometric data
using Eqs.\,\ref{elogLRT} and \ref{elogMRg} (as in Fig.\,\ref{parcurve}).
The resampled data ($\Delta\Phi=0.02$)
were used to derive the variation in $R$ over these intervals.
The curves shown give the resampled values
after an additional running triangular (1,2,1) smoothing.
Note that, for several stars,
the period was not fully covered by our observations.
{\bf Bottom curves in each panel}:
The run of $V_{\rm pul}$ as derived from the smoothed $R$-curve is shown.
To guide the eye, a line at $V_{\rm pul}$=0 km\,s$^{-1}$ is added.
The average of the $V_{\rm pul}$ curves of the
four well-observed stars (the stars shown in the top row)
has also been added here as a dashed line
(for each star a small appropriate phase shift was applied).
Clearly, the wiggles in $V_{\rm pul}$ are not noise but are systematic
$V_{\rm pul}$-variations.
This is confirmed by the stars of the bottom row,
whose $V_{\rm pul}$ curves were {\sl not} included in the calculated average.
The average $V_{\rm pul}$ curve shows a rhythm with a period of about $P/7$.
\label{favvrad}
}
\end{figure*}
The calculated variation in $V_{\rm pul}$ over the cycle
(see Fig.\,\ref{parcurve}) has a general pattern.
There is a pronounced minimum close to $\Phi\simeq 0.95$
where $V_{\rm pul} \simeq -150$ to $-400$ km\,s$^{-1}$, followed by
a steep rise to a level of $V_{\rm pul} \simeq +50$ to 100 km\,s$^{-1}$.
In the course of the cycle, $V_{\rm pul}$ then slowly decreases and slowly
becomes negative reaching a level of $V_{\rm pul} \simeq -50$ km\,s$^{-1}$.
A sudden additional decrease then sets in to reach the most negative value
of $V_{\rm pul}$, the value this description started with.
It shows that the atmosphere really collapses between $\Phi=0.9$ and $1.0$.
Thereafter, the expansion is drastic
but beyond $\Phi \simeq 0.05$ it is far more gradual
(at $V_{\rm pul} \simeq 50$ km\,s$^{-1}$) as it decelerates.
The behaviour of $V_{\rm pul}$ is similar to that of a model
atmosphere as presented in the gedankenexperiment by Renzini et\,al. (1992).
That gedankenexperiment had a star with a core and an envelope,
in which the luminosity offered to the base of the envelope,
$L_{\rm B}$, was manipulated.
A gentle decrease in this $L_{\rm B}$ from a high $L_{\rm B}$ state
could be accommodated by the envelope
until some critical value of $L_{\rm B}$ at which the envelope collapsed.
Then, in the gedankenexperiment,
$L_{\rm B}$ was raised again and the now compact atmosphere
slowly expanded until some other critical value of $L_{\rm B}$ was reached,
now leading to a run-away expansion.
In the case of RR\,Lyrae, the run of $L$ is controlled by the opacity
in the He$^+$-He$^{++}$ ionization boundary layer.
With increasing transmission of energy through the envelope, $L$ increases
and the star expands until at some limit the envelope has become so tenuous
that (since $L$ can no longer increase) it must collapse.
Having determined the run of $V_{\rm pul}$ one can derive $\frac{d^2R}{dt^2}$.
These acceleration values change considerably during the pulsational cycle.
However, because of the uncertainty in all $V_{\rm pul}$ values
due to opacity effects to be discussed in Sect.\,\ref{companddepth},
we refrain from commenting on these acceleration values.
\subsection{The detailed variation in $V_{\rm pul}$ and a correlation with $P$}
\label{avvrad}
The small-scale variations in the curves of $V_{\rm pul}$
(Fig.\,\ref{favvrad}) might be regarded as due to photometric noise.
However, on closer inspection there is significant structure.
We selected the well-behaved $V_{\rm pul}$-curves
to determine an average $V_{\rm pul}$-curve for an RR\,Lyr star.
For that, we chose the stars RR\,Gem, TW\,Lyn, AS\,Cnc, and SY\,Ari
(the top row panels in Fig.\,\ref{favvrad}).
Before the curves could be added for averaging, small phase shifts were applied
because the chosen $y_{\rm max}$ was not perfectly aligned with the
epoch relation and the resampling at 0.02 intervals in phase
may also have introduced small phase shifts.
Figure\,\ref{favvrad} shows the results for these stars.
It is evident that the wiggles in the $V_{\rm pul}$-curves
are very similar for these four stars.
For the four less well-observed stars not included in the averaging,
a comparison with the average $V_{\rm pul}$-curve shows
that these stars have also the same behaviour in $V_{\rm pul}$.
For all stars, the wiggles in the average $V_{\rm pul}$ indicate
that after the collapse the envelope oscillates
(continues to oscillate) with a rhythm of about 1/7 of the period $P$.
This rhythm at $P/7$ is the same for all our stars,
independent of their actual period.
It is as if the envelope has a residual oscillation triggered by the collapse.
The collapse itself occurs from $V_{\rm pul}$$\simeq 0$ through the highest
collapse velocity back to $V_{\rm pul}$$\simeq 0$ km\,s$^{-1}$,
which is a time equal to about $P/7$.
One might suspect that $P/7$ is due to our resampling at 0.02 phase intervals,
the $P/7$ rhythm spanning (almost) exactly 7 of our data points.
However,
we note that Papar\'o et\,al. (2009) reported a specific role of the
7th harmonic in one well-studied {\sc CoRoT} RR\,Lyr star.
The {\sc CoRoT} data do not have the 0.02 phase resampling rhythm.
Barcza (2003) found in the RR\,Lyr star SU\,Dra from photometry
an ``undulation'' of $P/5$.
It is unclear what the cause of these periodicities is.
\section{Comparison with literature data and optical depth effects}
\label{companddepth}
We derived the time-dependent radius of each star from our photometry
and from its change the velocity of the atmosphere.
The analysis above was performed without considering
analyses of other RR\,Lyrae data presented in the literature.
A well-known method for deriving the radius of RR\,Lyrae stars
is the Baade-Wesselink method,
where the radial velocities observed over the cycle are integrated
to infer the variation in radius (see, e.g., Liu \& Janes 1990).
This method has been applied to several RR\,Lyrae stars.
\subsection{Comparison with other results}
For two of our stars (\object{RR Gem} and \object{AR Per}), cycle velocities
were presented by Liu \& Janes (1989).
Their velocities were obtained by cross-correlating their RR\,Lyrae star
spectra with spectra of two standard stars,
one of spectral type F6\,IV and one of A2\,IV.
The values they derived from the two standard stars are very similar,
so their final velocity curves are based on just the standard of type A2\,IV.
Our velocity curve for RR\,Gem has a shape quite different from the one
of Liu \& Janes (1989).
Note that in their figure\,5 they give the {\sl observed radial} velocity,
whereas in our Fig.\,\ref{favvrad}
the velocity as seen from the stellar centre is given.
Their velocity curve shows a large speed away from the stellar centre
at $\Phi\simeq 0$
followed by a gradual slowing down to approach the minimum in velocity
at $\Phi\simeq 0.7$.
The velocity curves of other RR\,Lyrae stars of Liu \& Janes
show the same behaviour.
The amplitude in their velocity curves is $\simeq$60 km\,s$^{-1}$.
Adopting the usual geometry factor $p=1.32$
(see Liu \& Janes 1990; but see Fernley 1994)
to convert from observed radial velocity to pulsational velocity,
their velocities translate
into an amplitude in $V_{\rm pul}$ of $\simeq$80 km\,s$^{-1}$.
On the other hand, our stars have amongst themselves similar behaviour.
During the cycle, their outer layers experience
only a modest change in velocity,
except for a rapid fall toward the stellar centre starting at $\Phi \simeq 0.85$,
reaching the fastest centre approach at $\Phi \simeq 0.95$,
followed by a rapid reversal
to a modest expansion velocity near $\Phi \simeq 0.05$.
In our data, the full amplitude of the velocity $V_{\rm pul}$
is $\simeq$300 km\,s$^{-1}$.
\begin{figure*}
\resizebox{\hsize}{!}{
\includegraphics{dBM-f10-1.ps}
\includegraphics{dBM-f10-2.ps}
\includegraphics{dBM-f10-3.ps}
\includegraphics{dBM-f10-4.ps}
\includegraphics{dBM-f10-5.ps}
\includegraphics{dBM-f10-6.ps}
}
\caption[logg curves]
{Run of the two aspects of $\log g$\
(resampled data, see Sect.\,\ref{resampling})
for the well observed stars.
{\bf Top curves}: The slowly varying curve is $\log g(T,L,M)$\ derived from
Eqs.\,\ref{elogLRT} and \ref{elogMRg}
(thus using $T$ and $L$ derived from $y$,$b-y$),
and the full horizontal line gives the average of $ \langle g(T,L,M) \rangle$
over the cycle (in logarithmic form; see Table\,\ref{ttabaver}).
The peaked curve shows the gravity, $\log g_{\rm BJ}$,
derived from the $b-y,c_1$ photometry and the grid calibrated
to obtain $T_{\rm eff}$ and $\log g_{\rm BJ}$.
{\bf Bottom curves}:
the ratio $g(T,L,M)/g_{\rm BJ}$ is shown on a logarithmic scale.
Clearly, when $\Delta = \log (g(T,L,M)/g_{\rm BJ}) < 0$
the atmosphere gas must be quite compressed.
The curve of $V_{\rm pul}$ has been added
(in units as in Fig.\,\ref{favvrad}).
}
\label{glog}
\end{figure*}
The difference between our velocity curves and those of Liu \& Janes
is mostly caused by the drastic reduction of $R$ we find at $\Phi\simeq 0.9$.
Eliminating this behaviour from our $R$ curves
(eliminating the $R$ data for $0.85 < \Phi < 0.05$) results in
a velocity amplitude of $V_{\rm pul}$ of up to $\simeq$90 km\,s$^{-1}$,
quite similar to the one of Liu \& Janes mentioned above.
In short, the radius change of an RR\,Lyrae star based on the Baade-Wesselink
radial velocity curves does {\sl not} chime with the pronounced change in $R$
we derived (using Eq.\,\ref{elogLRT}) from the photometry.
This difference is almost certainly caused by effects of optical depth.
\subsection{Effects of optical depth $\tau$}
\label{sectau}
When analysing our data we tacitly assumed that the detected light originates,
throughout the cycle,
in the same gas and thus from the same atmospheric layer.
Such assumptions have been made in almost all analyses of variable star data
(also with the Baade-Wesselink method), but they need not be correct.
Furthermore, Abt (1959) noted that
the continuum light originates in layers with $\tau \simeq 0.7$
while the spectral lines are instead formed in layers with $\tau \simeq 0.3$.
During the cycle, compaction of the atmosphere and/or changes in
level of ionization may lead to changes in $\tau$ in that gas,
thus to different layers being sampled in the photometry.
This is also the case for spectral lines:
changes in the density and temperature of a layer due to vertical motion
will lead to changes in the local ionization balance and the excitation state
of ions, thus to a varying strength of the relevant absorption lines.
During the cycle a spectral line may therefore form in different gas layers.
Furthermore, the strength of a line formed at greater depths is also
influenced by the level of photon scattering filling in the spectral line.
The phase lag between lines from metals and hydrogen discovered by
van Hoof \& Struve (1953) in a $\beta$\,Cepheid star
has been attributed to similar optical depth effects.
For RR\,Lyrae stars, this ``van Hoof effect'' was studied by, e.g.,
Mathias et\,al. (1995) and Chadid \& Gillet (1998).
Our sketchy spectral data (Fig.\,\ref{rrgemspec}, Ca\,{\sc ii} and H$\gamma$)
exhibit a phase lag too,
which can easily be attributed to changes in the gas conditions
that also cause the hysteresis in $T_{\rm eff}$ and $\log g_{\rm BJ}$.
In spectra of several RR\,Lyrae stars, line {\sl doubling} has been reported
near $\Phi=0$ (see, e.g., Sanford 1949).
Considerable increases in line width may also occur near that phase
(e.g., Fe\,{\sc ii} lines; see, e.g., Chadid 2000).
Spectral line doubling implies that there are gas layers in the same line of
sight with the same ions but which are at different (radial) velocities.
This doubling is seen mostly in the lower level Balmer series lines
but also in other intrinsically strong lines such as Ca H\&K.
Thus, the spectral absorption in those lines comes from
different geometric depths in the atmosphere.
Note that when template spectra are used for the determination of radial
velocities, any velocity differences between ionic species are averaged out.
\subsection{Photometry and spectroscopy sample different layers}
Consider an outward moving dense layer of the stellar atmosphere.
Upon its expansion and cooling, its continuum optical depth becomes smaller.
Thus, the level of $\tau\simeq 0.7$ from which the continuum light emerges
moves physically downward through the gas,
leading to smaller photometrically derived $R$ values.
If this intrinsic $\tau$-level moves rapidly,
the concomitant rapid decrease in derived $R$
is naturally interpreted as an extremely rapid downward velocity,
even when the gas itself hardly has a vertical motion.
The level from which the (metal) spectral lines emerge
(near $\tau \simeq 0.3$)
will exhibit this effect later, i.e.,
only after the gas has been lifted further
to larger radii (cooled and rarefied further to lower\,$\tau$).
This happens later than the change in the gas levels
releasing the continuum light.
In this case, geometrically deeper levels may then be
(spectroscopically) sampled suggesting a decrease in $R$
even if the gas itself still moves outward.
We note that one can see a difference in that sense between lines of
Fe\,{\sc ii} and Fe\,{\sc i} (Chadid \& Gillet 1998).
Finally, the intrinsically strong lines,
such as the lower Balmer series lines and a few other lines with
large intrinsic absorption capability, will exhibit this effect yet later.
However, in deeper layers (at different radial velocity)
the local Balmer absorption may already show up with its own radial velocity
($\lambda$-shifted with respect to the Balmer line in higher layers),
thus leading to line-doubling from lower levels of the Balmer series.
The presented data do not allow us to assess
the details of these optical depth effects.
We emphasize that the parameters derived from the photometry
always refer to the conditions in the layer with $\tau \simeq 0.7$,
the layer from which the measured continuum light emanates.
It may be that some of our interpretations need to be revised once
more detailed measurements of the particular gas layers become available.
The differences between our run of $R$ (and of $V_{\rm pul}$ derived from it)
and the Baade-Wesselink run of $V_{\rm rad}$ (and of $R$ derived from it) can
be understood as being caused by the sampling of layers with different $\tau$.
The Baade-Wesselink method samples layers with $\tau$ smaller
than $\tau$ of the layers sampled with continuum photometry.
It thus is unsurprising that the run of $R$
as derived from these methods differs in important ways,
all due to the physical condition ($\tau$) of the gas layer from which
the measured radiation (be it spectral lines or the continuum) emerges.
At the end of Sect.\,\ref{avvrad} it was noted that oscillations
are visible in $V_{\rm pul}$ with $P/7$.
Perhaps these variations are also caused by small temporal
optical depth effects (affecting $T_{\rm eff}$ and $\log g_{\rm BJ}$)
and do not represent fluctuations in gas velocity.
\section{The variations in $\log g(T,L,M)$\ and $\log g_{\rm BJ}$, and verifications of distance and mass}
\label{secggeff}
\label{distmass}
The run of $\log g(T,L,M)$ was derived from Eq.\,\ref{elogMTgL}.
The actual surface gravities, $\log g_{\rm BJ}$, were derived from $b-y,c_1$.
In Fig.\,\ref{glog} we plot three parameters: $\log g(T,L,M)$,
$\log g_{\rm BJ}$ and the logarithmic ratio $\log (g(T,L,M)/g_{\rm BJ})$.
When $\log g(T,L,M) - \log g_{\rm BJ} <0$ (i.e., $g(T,L,M)/g_{\rm BJ} <1$)
the atmosphere is clearly compressed (collapsed).
The data in Fig.\,\ref{glog} indicate that in the quiet part of the cycle
($0.4<\Phi<0.8$) the gravity $\log g(T,L,M)$ is, for several stars,
not equal to $\log g_{\rm BJ}$.
However, in those quiet parts of the cycle,
all parameters are apparently stable
(there are most likely no optical depth effects at this point) and
so we would expect $\log g(T,L,M)$ and $\log g_{\rm BJ}$ to be equal there.
We can explore how one might adjust $d$ and $M$ to make $\log g(T,L,M)$ from
Eq.\,\ref{elogMTgL} equal to $\log g_{\rm BJ}$ in that phase interval.
For RR\,Lyrae stars,
that phase ($\Phi\simeq 0.6$) has $T_{\rm eff} \simeq 6000$~K,
only a little higher than $(T_{\rm eff})_{\odot}$.
The distances of our stars were defined as described in Sect.\,\ref{secobs}.
They are taken from the literature and they
are primarily determined from an adopted reference value of $M_V$.
For some stars we calculated $d$ from $M_V$ (including effects of [Fe/H]).
We recall from Sect.\,\ref{errorbudget} (on the error budget) that
the sum of the measurement errors in $\log g(T,L,M)$ and $\log g_{\rm BJ}$
may be as large as 0.08 dex.
If these two gravities were to be equal
and if we assumed the full margin of error of 0.08, then, e.g.,
$\log g(T,L,M)$ for RR Gem should be reduced by $\simeq$0.22 dex,
that of AS Cnc by $\simeq$0.12,
while that of TW Lyn should be larger by $\simeq$0.07 dex.
If the adopted $M_V$ (or $d$) and $M$ of our stars were wrong
and had to be changed,
this would produce (when using Eq.\,\ref{elogMTgL})
the tabulated effects:
\begin{center}
\begin{tabular}{ccc|ccc}
\hline
\multicolumn{3}{c}{assumed change} & \multicolumn{3}{c}{effects} \\
\hline
$M_V$ & ($d$) & $M$ & $\log g(T,L,M)$ & $L$ & $R$\\
\hline
$-0.20$ & $+10$\% & & $-0.04$ & $+20$\% & +10\%\\
& & +10\% & +0.04 & - & - \\
\hline
\end{tabular}
\end{center}
\noindent
We note again that, for the stars observed, changing the distance
is only an implicit change;
distances of RR\,Lyrae stars have (except for those
in the papers referred to in Sect.\,\ref{parpulsating})
not been determined by means other than through $M_V$.
Thus only $M_V$ or the mass $M$ can be changed.
For our stars a change in either $M_V$ or $M$
(or perhaps a combination thereof)
is needed to make $\log g(T,L,M)$ and $\log g_{\rm BJ}$ match.
The changes in $M_V$ indicated below are within the range permitted by the
non-uniqueness of $M_V$, as mentioned in Sect.\,\ref{errorbudget}.
For the stars shown in Fig.\,\ref{glog} the changes are: \\
\noindent
- \object{RR Gem}: $M_V$ 1.0 mag brighter, $M$ 50\% smaller.
The optimum would be to change $M_V$
leading to $L$ larger by a factor 2.5 (0.4 dex in $\log L$).
Reducing the mass by 50\% is not an option; the mass would then be too low.
A combination of changes might also fit.\\
- \object{TW Lyn}: $M_V$ 0.4 mag fainter, $M$ 20\% larger.
Changing $M_V$ is not the right option because its $L$ value
would, in Fig.\,\ref{tefflloop}, be even further below the ZAHB.
Hence $M$ should be 20\% higher at 0.85 M$_{\odot}$.
Note that (in Fig.\,\ref{tefflloop})
the mean value of $L$ can be made larger
when adopting a yet larger mass.\\
- \object{AS Cnc}: $M_V$ 0.6 mag brighter, $M$ 30\% lower.
Changes similar to but smaller than those for RR Gem would be needed.\\
- \object{SY Ari}: no changes are needed within the errors.\\
- \object{SZ Gem}: $M$ smaller by 0.1 dex, to give $M= 0.56$ M$_{\odot}$.\\
- \object{BH Aur}: no changes are needed within the errors. \\
- \object{TZ Aur}: to make the two gravities match,
$\log g(T,L,M)$ should be $\simeq$0.3 dex lower.
This would be achieved if $M$ were considerably lower.
Alternatively, $M_V$ should (with $M=0.7$\,M$_{\odot}$) be so much brighter
that $L$ is doubled.
In both cases, TZ\,Aur would then be a well-evolved HB star.
We do not comment on the less well-observed stars
(the bottom four in Table\,\ref{ttabaver}).
\section{The lightcurve bump}
The stars in our programme were primarily selected from the class said to have
lightcurve ``bumps''.
These bumps in the run of brightness need not be very pronounced
(for $y$-magnitudes see Fig.\,\ref{flightcurves}).
One goal of the observations was to investigate
whether these bumps in brightness affect the colours of the stars.
Examples of the bump region for a few stars are given in Fig.\,\ref{bump}.
First, we note that most of our stars with good light-curve coverage
hardly show the light-curve bump in the Str\"omgren colour indices.
Among these are \object{TW Lyn}, \object{BH Aur}, \object{X CMi},
\object{TZ Aur}, \object{BR Tau}, and \object{RR Gem}, which
have either a prolonged low level bump or no clear colour-index bump at all.
While \object{SY Ari} has a bump in $y$ that has no effect
in its colour indices, \object{AS Cnc} does show an effect.
For the other stars, light-curve coverage was either poor or missing in this
range of phases.
In \object{AS Cnc} the brightening and dimming are clearly recognisable
and take place in a short timespan.
We note that the colour indices $u-b$ and $u-v$ change
before $y$ brightens.
Both indices become bluer meaning that the Balmer continuum radiation
brightens without an increase in $T_{\rm eff}$.
In this part of the cycle, the atmosphere is cool and
the Balmer continuum brightening could mean that the gas cools even further
leading to an additional reduction in opacity
based on the lower Balmer level excitation.
Since $T$ has not yet changed,
the changes should be due to a reduction in gas density.
Does extra light escape because of the lower $\tau$?
At this point in the cycle, the atmosphere is quite extended
and about to shrink.
This happens at $\Phi \simeq -0.1$ (see Fig.\,\ref{bump} and earlier ones).
Bono \& Stellingwerf (1994) attribute the bump feature to shockfronts.
\begin{figure}
\resizebox{\hsize}{!}
{\includegraphics{dBM-f11-1.ps}\includegraphics{dBM-f11-2.ps}\includegraphics{dBM-f11-3.ps}}
\caption[]{
Three examples of Str\"omgren-photometry light curves of RR\,Lyrae stars
near the possible ``bump'' in brightness.
\object{TW Lyn} either does not show a bump or its bump is rather extended
in phase;
there is no or little effect on the colour indices.
\object{SY Ari} does exhibit a bump in $y$
but there are no effects in its colour indices.
For \object{AS Cnc}, the colour indices $u-b$ and $u-v$ clearly change
before $y$ changes,
indicating a change in the Balmer jump before a brightness change.
The index $b-y$ (representing temperature) changes somewhat later.
}
\label{bump}
\end{figure}
\section{Further remarks}
\subsection{Comparison with models}
Numerous models have been constructed for the pulsation
and for various other detailed aspects of RR\,Lyrae
(see, e.g., Fokin et\,al. 1999).
A comparison with models is of limited value since
most models are one-dimensional.
More importantly,
it is not always clear how theory has been transformed to observables.
Do the theoretical predictions for $V$, $R$, spectral line strengths,
and velocities all refer to information from the layer
with the relevant $\tau$ (thus to what one really would observe)
or to a matter-defined gas layer?
Parameters of RR\,Lyrae stars have been determined
using different kinds of data and with different modelling.
We refer to the analysis of S\'odor et\,al. (2009) for results
based on the Baade-Wesselink method and its refinements
as well as to additional literature.
\subsection{Blazhko effect}
A good portion of RR\,Lyrae stars shows cycle-to-cycle variations
in their light curves, the so-called Blazhko effect
(for references see, e.g., Jurcsik et\,al. 2009).
Some of the stars of our sample are also known to be Blazhko stars
but they exhibit these variations only at a moderate level.
However, these variations may affect the stellar parameters derived.
The data we acquired cover too short a time span to address and assess
possible cycle-to-cycle variations in the stellar parameters.
\subsection{Relation between stellar mass and kinematics}
Six of our stars were part of the study of kinematics of RR\,Lyrae stars
(Maintz \& de Boer 2005).
Of these, five (\object{CI And}, \object{AR Per}, \object{TZ Aur},
\object{RR Gem}, and \object{TW Lyn})
are stars of the disk population (according to their kinematics),
while \object{SZ Gem} is a star with halo kinematics.
Halo RR\,Lyrae are understood to be older than disk RR\,Lyrae,
so should (on average) have a lower mass.
For \object{TW Lyn}, we found that its mass should be higher than the
adopted reference value and instead be $\simeq$0.85 M$_{\odot}$,
which chimes with it being a star of the disk.
We note that S\'odor et\,al. (2009) found
a mass of $\simeq$0.82 M$_{\rm \odot}$ for \object{RR Gem},
whereas our analysis indicated $\simeq$0.7 M$_{\odot}$.
For \object{SZ Gem}, we found a mass of $\simeq$0.56\,M$_{\odot}$,
this low mass being in line with the expected
(larger age and) lower mass of halo stars.
\object{BH Aur} and \object{SY Ari} are found to correspond to the reference
value of $M=0.7$ M$_{\odot}$, which is in line with their being disk stars.
\section{Conclusions}
Based on simultaneous Str\"omgren photometry and a comparison
of the photometric indices with a calibrated $T_{\rm eff}$, $\log g$ grid,
accurate atmospheric parameters of RR\,Lyrae stars have been determined.
Curves of phase related runs of $\log g_{\rm BJ}$
(the gravity from the Balmer jump) and $T_{\rm eff}$ were derived.
One can then calculate the change in radius of the stars over the cycle.
By including additionally obtained spectra, one arrives at the following
description of the behaviour of RR\,Lyrae stars.
The straightforward interpretation of the data derived for $R$ is that
at phase $\Phi \simeq 0.9$ the atmosphere begins to collapse
soon reaching a high gas density
(when the star has its greatest brightness).
This manifests itself in a large Balmer jump (large $\log g_{\rm BJ}$).
Within an interval $\Delta\Phi\simeq 0.1$ the atmosphere then expands again.
An alternative interpretation is that at phase $\Phi \simeq 0.9$
the optical depth of the extended outer layers has become so small
that the level of $\tau \simeq 0.7$ sampled in the photometry
rushes through the gas inward mimicking a drastic reduction in radius.
The contraction, which takes place anyway, then elevates the density at the
level having $\tau \simeq 0.7$, raising the optical depth there so that
the photometry subsequently samples layers higher up in the atmosphere again,
suggesting a rapid expansion.
The sudden increase in density manifests itself
in large values of $\log g_{\rm BJ}$.
During all these changes the atmosphere exhibits oscillation ripples
with a rhythm of $P/7$.
A possible mismatch of the observed $\log g_{\rm BJ}$ and the calculated
$\log g(T,L,M)$ in the quiet, descending part of the light curve can be used
to assess the applicability of the mean parameters adopted for the stars,
i.e., the absolute brightness $M_V$ (no geometric distances are known)
and the mass~$M$.
One can then determine the individual values of $M_V$ and $M$ for a star.
Extending this kind of simultaneous Str\"omgren-photometry
while avoiding multiplexing two stars
would provide a much denser coverage of the light curves
and thus a more accurate determination of the cycle variation in $R$.
If spectra were then taken simultaneously,
in which a range of spectral lines is included
(lower and higher ionisation stages,
strong and weak lines of the Balmer series)
and of a nature to allow velocity determinations,
one may hope to distinguish the effects caused by the sampling
of light from layers at different optical depth.
\acknowledgements{We thank Oliver Cordes, Klaus Reif and the
AIfA electronics group for their dedication to {B\sc usca},
and the staff at the Calar Alto Observatory
and the Observatorium Hoher List for their technical support.
We thank K. Kolenberg, M. Papar\'o and K. Werner for advice.
We are grateful that the referee, Dr. \'A. S\'odor,
graciously gave many suggestions for improvement
and asked pertinent questions.
These stimulated us to carry the interpretation a step further.
We thank C. Halliday for linguistic advice.}
\section{INTRODUCTION}
\label{introduction}
Protoplanetary disks are
crucial objects in low-mass star formation, possessing three vital functions: they
(i) aid the dissipation of angular momentum away from the young stellar system,
(ii) allow the efficient accretion of matter onto the young star and
(iii) contain all material, dust and gas,
which may end up in a planetary system orbiting the main-sequence star.
In this work, we investigate the chemistry and molecular composition of a
protoplanetary disk surrounding a young star which
will evolve into a main-sequence star resembling our Sun.
At the low temperatures encountered in many astrophysical regions
($\sim$~10~K to $\sim$~100~K),
molecules are readily excited into higher rotational energy states and
subsequently emit radiation at (sub)millimeter wavelengths.
Early observations of T Tauri stars at these
wavelengths revealed the presence of molecular material in a flattened disk-like
structure (e.g.\ \citet{dutrey94}) and in Keplerian rotation about the parent star
(e.g.\ \citet{guilloteau94}).
Since then, molecular rotational line emission originating from a disk
has been observed in several
T Tauri systems including
TW Hydrae \citep{kastner97,vanzadelhoff01,vandishoeck03,ceccarelli04,thi04,qi04,qi06,qi08},
DM Tauri \citep{dutrey97,ceccarelli04,ceccarelli05,guilloteau06,dutrey07,pietu07} and
LkCa 15 \citep{vanzadelhoff01,aikawa03,qi03,thi04,dutrey07,pietu07}.
Most species detected are small simple molecules, molecular ions and radicals such as
CO, HCO$^+$, CN, HCN, CS, C$_2$H and N$_2$H$^+$, along with several associated
isotopologues (e.g.\ $^{13}$CO, C$^{18}$O, DCO$^+$, H$^{13}$CN, H$_2$D$^+$ and C$^{34}$S).
The most complex species observed to date is the small organic molecule,
formaldehyde, H$_2$CO \citep{dutrey97,aikawa03, dutrey07}
with methanol, CH$_3$OH, thus far eluding detection (e.g.\ \citet{thi04}).
\citet{ceccarelli05} report a detection of
deuterated water, HDO, in the disk of DM Tau, although this result
has since been disputed by \citet{guilloteau06}.
Infra-red emission has also been observed originating from disks
embedded in young stellar objects and arising from
vibrational transitions in gas-phase molecules capable of survival
in the warmest regions ($>$~350~K).
Thus, infra-red emission probes not only a different physical region of
the disk to that probed by (sub)mm emission, but also uses
different molecules as tracers, hence providing complementary chemical
information.
The molecules detected thus far at infra-red wavelengths are CO, HCN, OH, H$_2$O,
CO$_2$ and C$_2$H$_2$ \citep{carr04,lahuis06,carr08,salyk08,pascucci09}
with an upper limit determined for CH$_4$ \citep{gibb07}.
Observations of molecular line emission from disks, to date, have been
hampered by the small angular size of these objects on the sky and
the limitations of existing facilities, explaining why the species
detected are those which are abundant and possess
relatively simple rotational energy spectra (e.g.\ CO).
Single-dish facilities which operate at (sub)mm wavelengths
such as the 15~m James Clerk Maxwell Telescope (JCMT) in Hawaii and
the IRAM~30~m telescope in Spain, have been predominantly
used in the detections of the molecular species in the T Tauri systems
listed.
With beam-sizes much larger than the typical source size,
usually a single molecular line profile is generated characterising
emission from the entire disk.
In order to spatially resolve the emission and hence trace the
radial and vertical physical and chemical structure, interferometry
must be employed and indeed, \citet{qi08} report spatially
resolved emission arising from molecular rotational transitions
in the disk of TW Hya using the Sub-Millimeter Array (SMA).
The discipline of (sub)mm astronomy is scheduled for a revolutionary transformation
with the first light of the Atacama Large Millimeter Array (ALMA) in Chile
expected in 2012 (see \url{http://www.almaobservatory.org}).
ALMA, with its 50 12~m telescopes and fully variable configuration, will have the
spatial resolution necessary to observe molecular line emission from protoplanetary
disks on sub-milli-arcsecond scales and enable the
tracing of the molecular content of disks to within $\approx$~0.1~AU of
the parent star at its highest operational frequencies.
It is anticipated that the sensitivity and high spectral resolution of ALMA
will lead to the potentially overwhelming detection of many further
molecular species, including complex organic molecules considered the
building blocks of life, in many astrophysical sources including protoplanetary disks.
Motivated by the impending completion of ALMA, we
have constructed a high resolution combined chemical and physical
model of a protoplanetary disk surrounding a typical T Tauri star using
as comprehensive a chemical network as computationally possible.
In the work presented here, our objectives were (i) to calculate
the chemical structure of protoplanetary disks on small
(sub-milli-arcsecond in the inner disk) scales,
(ii) to investigate the influence of various chemical
processes, such as non-thermal desorption and grain-surface chemistry,
thought to be important in disks, and
(iii) to subsequently determine potential molecular tracers of each process.
We also used our model to (i) compute molecular line emission profiles
for rotational transitions which have been observed in disks using
existing facilities, (ii) compare our modelled line profiles and intensities with existing
observations and (iii) produce molecular line emission maps at the expected
spatial resolution of ALMA for disks at various distances and inclinations.
This second study and corresponding set of results will be covered in a
subsequent paper (Walsh et al. in preparation).
Our study also aims to help answer some
fundamental questions concerning the evolution of stars, planets and ultimately, life.
Is it possible for primordial (possibly organic) material created
in a young star's protoplanetary
disk to survive the assimilation into planets and other planetary system objects?
Is our solar system's chemical and thus, planetary composition unique?
How intrinsically linked
are star formation, planet formation and the origin of life in the universe?
These questions are ever more important as we move into the era of exoplanet research
and the hunt for planets and the signatures of life in external stellar systems.
In Section~\ref{diskmodel} we
describe the theoretical foundation and generation
of the physical model used to characterise our protoplanetary disk (Section~\ref{physicalmodel}) and
the chemical network we have
collated and used in our calculation of the disk chemical evolution (Section~\ref{chemical model})
including gas-phase chemistry (Section~\ref{gasphasechemistry}),
photochemistry (Section~\ref{photochemistry}), gas-grain interactions
(Section~\ref{gasgraininteractions}) and
grain-surface chemistry (Section~\ref{grainsurfacechemistry}).
The results of our chemical evolution calculations are
covered in Section~\ref{results} where we discuss the
chemical structure and stratification in the disk
(Section~\ref{chemicalstructure}),
the effects of our included chemical processes
(Sections~\ref{nonthermaldesorptioneffects} and \ref{grainsurfacechemistryeffects}),
the disk ionisation fraction (Section~\ref{diskionisationfraction}) and the radial
molecular column densities (Section~\ref{columndensities}).
We briefly discuss our work in relation to similar projects by other research groups
in Section~\ref{comparison} and finally,
in Section~\ref{summary}, we summarise our work and outline our main conclusions
and further work we intend to undertake.
\section{PROTOPLANETARY DISK MODEL}
\label{diskmodel}
\subsection{Physical Model}
\label{physicalmodel}
The physical model of a protoplanetary disk we use in this work is from
\citet{nomura05} with the addition of X-ray heating as described in
\citet{nomura07}.
They self-consistently modelled the density and temperature
profiles of gas and dust in a protoplanetary disk accounting for
UV and X-ray irradiation by the central star and
subsequently computed molecular hydrogen line
emission at ultraviolet and infrared wavelengths.
Here, we have used this model to compute
the chemical structure of a protoplanetary disk with the
ultimate aim to expand on their work by
calculating molecular line emission from disks at (sub)mm wavelengths.
In the remainder of this section, we give a brief overview of our physical model and
we refer readers to the original papers for the mathematical and
computational details.
We consider an axisymmetric disk surrounding
a typical T Tauri star with mass, $M_\ast$~=~0.5~$M_\odot$, radius, $R_\ast$~=~2~$R_\odot$
and temperature, $T_\ast$~=~4000~K \citep{kenyon95}.
The density and temperature distributions are determined through iteratively solving
the equations for hydrostatic equilibrium in the vertical direction and the
local thermal balance between the heating and cooling of the gas.
The theoretical foundation of this model comes from the
\emph{standard accretion disk model} of \citet{lynden74} and \citet{pringle81} which
defines a surface density distribution for the disk given the parent star's
mass and radius and a disk mass accretion rate, $\dot{M}$.
The kinematic viscosity in the disk is parameterised according to the
work of \citet{shakura73}, the so-called \emph{$\alpha$-prescription}.
We adopt a viscous parameter,
$\alpha$~=~0.01 and a mass accretion rate, $\dot{M}$~=~10$^{-8}$~$M_\odot$~yr$^{-1}$.
The heating mechanisms included are grain photo-electric heating by
far-ultraviolet photons and X-ray heating due to hydrogen ionisation by
X-ray photons with cooling via gas-grain collisions and line transitions.
We use a model
spectrum created by fitting the observed XMM-Newton X-ray spectrum of the classical
T Tauri star, TW Hya (e.g.\ \citet{kastner02}) with a two-temperature thin thermal plasma model
(MEKAL model; see e.g.\ \citet{liedahl95}).
The X-ray luminosity is $L_{X}$~$\sim$~$10^{30}$~erg~s$^{-1}$
and the resulting X-ray spectrum is given in Figure~1 of \citet{nomura07}.
The UV radiation field in disks has two sources, the star and the interstellar medium.
In this disk model, the radiation field due
to the T Tauri star has three components: black-body emission at the star's effective temperature,
optically thin hydrogenic bremsstrahlung emission and strong Lyman-$\alpha$ line emission.
All components are necessary to accurately model the excess UV emission observed towards
classical T Tauri stars thought to arise from an accretion shock as
disk material impinges upon the stellar surface
(e.g.\ \citet{valenti00, johnskrull00}).
The total FUV luminosity in our model is $L_{UV}$~$\sim$~$10^{31}$~erg~s$^{-1}$
with the calculation of the radiation field in the disk described in detail in
Appendix~C of \citet{nomura05} and the resulting spectrum shown in Figure~C.1 in that paper.
We assume the dust and gas in the disk are well mixed and adopt
the dust-size distribution model which reproduces the observational extinction
curve of dense clouds \citep{weingartner01}.
The calculation of the dust opacity in the disk is as described in
Appendix~D of \citet{nomura05} with the
resulting monochromatic absorption coefficient shown in Figure~D.1.
We note here that this is an over-simplification of the treatment of the dust-size distribution in
protoplanetary disks and we are currently working on improving our model by adding in the effects
of dust-grain settling and coagulation.
In Figure~\ref{figure1} we display the resulting number density (cm$^{-3}$), gas temperature (K)
and dust temperature (K)
as a function of disk radius and height (top, middle and bottom rows, respectively).
To illustrate the extreme vertical gradients in the physical conditions
at small radii, we display the density and temperature both within 10~AU (left panels) and
305~AU (right panels).
We describe the physical structure of our disk in the Appendix.
\begin{figure*}
\subfigure{\includegraphics[width=0.5\textwidth]{./density_map_10AU.eps}}
\subfigure{\includegraphics[width=0.5\textwidth]{./density_map.eps}}
\subfigure{\includegraphics[width=0.5\textwidth]{./gas_temp_map_10AU.eps}}
\subfigure{\includegraphics[width=0.5\textwidth]{./gas_temp_map.eps}}
\subfigure{\includegraphics[width=0.5\textwidth]{./dust_temp_map_10AU.eps}}
\subfigure{\includegraphics[width=0.5\textwidth]{./dust_temp_map.eps}}
\caption{Number density (top), gas temperature (middle) and dust temperature (bottom)
as a function of disk radius and height up to maximum radii of $r$~=~10~AU (left) and 305~AU (right).}
\label{figure1}
\end{figure*}
\subsection{Chemical Model}
\label{chemical model}
The structure of the disk described in the preceding section leads to
a multitude of different physical regimes and as such, we need
to account for every chemical process which may occur.
The axisymmetric structure results in a cold, dense midplane where even the
most volatile molecules are expected to freeze out onto dust grains creating
an icy mantle and depleting the gas of molecules.
Moving in the vertical direction, the density decreases and the temperature
increases driving the evaporation of molecules from grain surfaces
and stimulating a rich gas-phase chemistry resulting in further molecular
synthesis.
Further towards the surface, the radiation fields increase in strength
dissociating and ionising molecules into constituent radicals, atoms and ions.
A similar stratification is expected in the radial direction as the temperature
and density in the disk midplane both increase with decreasing distance from the
star.
When the midplane dust temperature reaches a value higher than the desorption temperature
of a particular molecule, it is returned to the gas phase.
This point is known as the \emph{snow line} and can occur at a unique radius
for each molecule.
At small radii, due to the high densities found in the midplane,
there is a significant column density of material shielding this region from
the intense UV and X-ray fields of the star such that molecules are expected to
survive in the midplane at radii within $\sim$~0.1~AU.
In order to investigate the chemical structure thoroughly, we used
a large gas-phase network supplemented with
gas-grain interactions, including freeze out and thermal desorption.
We considered various non-thermal desorption mechanisms, namely, cosmic-ray induced desorption,
photodesorption and X-ray desorption.
To probe the efficacy of molecular synthesis on grain-surfaces
we also added a large grain-surface reaction network.
\subsubsection{Gas-Phase Chemistry}
\label{gasphasechemistry}
Our gas-phase chemistry is extracted from the latest release of the `dipole-enhanced' version
of the UMIST Database for Astrochemistry (\url{http://www.udfa.net}), henceforth referred to as `Rate06'
\citep{woodall07}.
We include almost the entire Rate06 gas-phase network removing only those species
(and thus reactions) which contain
fluorine, F, and phosphorus, P, in order to reduce computation time.
We deemed the loss of F- and P-containing species to have a minimal impact on the remaining chemistry.
Our gas-phase network thus consists of 4336 reactions involving 378 species composed of the
elements H, He, C, N, O, Na, Mg, Si, S, Cl and Fe.
The initial elemental fractional abundances (relative to total H nuclei density)
we use are the set of oxygen-rich low-metallicity abundances from
\citet{graedel82}, listed in Table~8 of \citet{woodall07}.
We find that by $10^{6}$ years, the typical age of protoplanetary disks, the chemistry has
forgotten its origins, justifying our use of initial elemental abundances.
We intend in future models to calculate the chemical evolution of a parcel of gas as it
follows a streamline in the accretion flow in which case the input abundances should reflect
the molecular make-up of the ambient cloud material.
Our model grid has over 12,000 grid points in 129
logarithmically spaced radial steps from 0.04~AU to 305~AU.
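At each of these grid points, computing the chemical evolution amounts to integrating a stiff
system of kinetic rate equations,
${\rm d}n_i/{\rm d}t = \sum({\rm production}) - \sum({\rm loss})$.
The following toy two-species system (with hypothetical rate coefficients,
not taken from Rate06) sketches the numerical approach:
\begin{verbatim}
# Toy rate-equation integration at a single grid point.
# Network: A + A -> B (k1), B -> A + A (k2); purely illustrative.
from scipy.integrate import solve_ivp

k1, k2 = 1.0e-10, 1.0e-12      # cm^3 s^-1 and s^-1 (hypothetical)

def rhs(t, n):
    nA, nB = n
    form = k1 * nA * nA        # formation of B consumes two A
    dest = k2 * nB             # destruction of B restores two A
    return [-2.0*form + 2.0*dest, form - dest]

# integrate to 10^6 yr (~3.16e13 s) with a stiff (BDF) solver
sol = solve_ivp(rhs, (0.0, 3.16e13), [1.0e4, 0.0], method="BDF")
print(sol.y[:, -1])            # number densities at 10^6 yr
\end{verbatim}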
\subsubsection{Photochemistry}
\label{photochemistry}
In the models presented here, we have approximated our photoreaction rates at each point in the
disk, $k^{ph}(r,z)$, by scaling the rates from Rate06 (which assume the interstellar UV field) using
the wavelength-integrated UV flux calculated at each point,
$G_{FUV}(r,z) = \int G_{FUV}(\lambda,r,z) \; \mathrm{d}\lambda$, integrated from 912~\AA\ to 2000~\AA.
Hence, the rate for a particular photoreaction at each $(r,z)$ is given by
\begin{equation}
k^{ph} = \frac{G_{FUV}}{G_0}k_0 \quad \mathrm{s}^{-1},
\end{equation}
where $G_0$ is the interstellar UV flux and $k_0$ is the rate expected in the interstellar medium.
\subsubsection{Gas-Grain Interactions}
\label{gasgraininteractions}
Gas-grain interactions are important in large areas of protoplanetary disks
as the dust temperature can reach values lower than the freeze-out temperatures
of molecules.
If the freeze out of gas-phase species is allowed, then the evaporation of molecules from
dust grains must also be included.
In this work, we consider both the thermal and non-thermal desorption of molecules from dust grains.
For the thermal desorption of a particular molecule to occur, the dust-grain temperature must exceed the
freeze-out temperature of that molecule.
Non-thermal desorption requires an input of energy from an external source and is thus independent
of dust-grain temperature.
As protoplanetary disks are irradiated by UV and X-ray photons from the central star
as well as UV photons and cosmic-rays originating from the interstellar medium,
the non-thermal desorption mechanisms we investigate are cosmic-ray induced desorption,
photodesorption, and X-ray desorption.
Our gas-phase chemical network has thus been supplemented with an additional
1154 gas-grain interactions involving 149 surface species.
The accretion rate (or freeze-out rate), $k_i^a$, of species $i$ onto dust-grain surfaces is treated
using the standard prescription \citep{hasegawa92},
\begin{equation}
k_i^a = S_i \sigma_d \left< v_i \right> n_d \quad \mathrm{s}^{-1},
\label{accretionrate}
\end{equation}
where $S_i$ is the sticking coefficient, here assumed to equal unity for all species,
$\sigma_d = \pi a^2$ is the geometrical cross section of a dust grain with radius, $a$,
$\left< v_i \right> = (k_B T/m_i)^{1/2}$ is the thermal velocity of species $i$ at a temperature,
$T$ and with mass, $m_i$, $k_B$ is Boltzmann's constant,
and $n_d$ is the number density of dust grains.
The thermal desorption rate, $k_i^d$, of species $i$ is dependent on dust-grain temperature,
$T_d$ \citep{hasegawa92}, and is given by
\begin{equation}
k_i^d = \nu_0(i) \exp \left( \frac{-E_d (i)}{T_d}\right) \quad \mathrm{s}^{-1},
\label{thermaldesorption}
\end{equation}
where $E_d(i)$ is the binding energy of species $i$ to the dust grain in units of K.
The characteristic vibrational frequency of each adsorbed species in its potential well,
$\nu_0 (i)$, is represented by a harmonic oscillator relation \citep{hasegawa92},
\begin{equation}
\nu_0(i) = \sqrt{\frac{2n_s E_d(i)}{\pi^2 m_i}} \quad \mathrm{s}^{-1},
\label{vibrationalfrequency}
\end{equation}
where, here, $E_d(i)$ is in units of erg and $n_s = 1.5 \times 10^{15}$~cm$^{-2}$ is the number
density of surface sites on each dust grain.
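A compact Python sketch of Equations~(\ref{thermaldesorption}) and (\ref{vibrationalfrequency}) is given below, with $E_d$ supplied in K and converted to erg for the frequency; the dust temperature chosen is an arbitrary illustration.
\begin{verbatim}
import math

K_B = 1.380649e-16    # Boltzmann constant [erg K^-1]
AMU = 1.66053907e-24  # atomic mass unit [g]
N_S = 1.5e15          # surface-site density [cm^-2]

def nu0(E_d_K, mass_amu):
    """Characteristic vibrational frequency; E_d in K is converted
    to erg via k_B before use."""
    E_d_erg = E_d_K * K_B
    return math.sqrt(2.0 * N_S * E_d_erg / (math.pi**2 * mass_amu * AMU))

def thermal_desorption_rate(E_d_K, mass_amu, T_d):
    """Thermal desorption rate with E_d expressed in K."""
    return nu0(E_d_K, mass_amu) * math.exp(-E_d_K / T_d)

# Example: CO from Table 1 (E_d = 960 K, 28 amu) on 20 K grains.
print(thermal_desorption_rate(960.0, 28.0, 20.0))  # ~ 1e-9 s^-1
\end{verbatim}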
The binding energies, $E_d$, for several important molecules (mainly following those collated by
\citet{hasegawa92} and \citet{willacy98}) are listed in Table~\ref{table1}.
We intend to conduct a review of our set of desorption energies in light of
more recent experimental results e.g.\ a recent investigation into the desorption of
methanol by \citet{brown07} determined a binding energy of $\approx$~5000~K, as opposed to
the theoretical value of 2140 K used here (see Table~\ref{table1}).
This binding energy was determined for pure methanol ice as opposed to methanol adsorbed
onto, or mixed with, water ice.
We find throughout our model that the ratio of methanol to water ice is less than 1\%.
Similar experiments for methanol adsorbed onto water ice (Brown, private communication)
suggest that the binding energy of methanol in this complex is comparable with that determined for
pure methanol but due to overlapping desorption features the results are difficult to analyse.
Recent work by \citet{bottinelli10} comparing laboratory data with observations of methanol in young stellar
objects (YSOs)
suggests that methanol ice in these environments likely exists as pure ice or mixed with CO and/or CO$_2$ ice
which is consistent with its formation via hydrogenation of CO on dust grains.
Considering the latter molecules are non-polar, it is possible that the binding energy of methanol in
astrophysical ices is lower than that determined in the laboratory experiments.
Increasing the binding energy of methanol to a value of $\approx$~5000~K
will increase the desorption temperature from $\approx$~30--40~K to $\sim$~100~K.
This will push the `snow line' for methanol closer to the star but should have little effect on the
outer disk methanol abundances where the dust temperature is $<$~30~K.
We expect non-thermal desorption to dominate over thermal desorption in the upper layers of the disk.
\begin{deluxetable}{lcc}
\tablecaption{Molecular Binding Energies \label{table1}}
\tablewidth{0pt}
\tablehead{\colhead{Molecule} & \colhead{Binding Energy (K)} & \colhead{Reference}}
\startdata
CO & 960 & 1\\
N$_2$ & 710 & 2\\
HCN & 4170 & 2\\
CO$_2$ & 2690 & 3\\
H$_2$O & 4820 & 4\\
NH$_3$ & 3080 & 4\\
CH$_4$ & 1080 & 2\\
C$_2$H$_2$ & 2400 & 2\\
H$_2$CO & 1760 & 5\\
CH$_3$OH & 2140 & 5
\enddata
\tablerefs{(1) \citet{sandford88}, (2) \citet{yamamoto83}, (3) \citet{sandford90},
(4) \citet{sandford93}, (5) \citet{hasegawa93}}
\end{deluxetable}
To calculate the cosmic-ray induced desorption rate for each species, $k_i^{crd}$,
we use the method of \citet{leger85} and \citet{hasegawa93}.
They assume that dust grains with a radius of 0.1~$\mu$m are impulsively heated by the impact of
relativistic Fe nuclei with energies of 20 to 70 MeV nucleon$^{-1}$ which deposit, on average,
an energy of 0.4~MeV into each dust grain.
Assuming that the majority of molecules desorb around 70~K, the cosmic-ray induced desorption
rate can be approximated by
\begin{equation}
k_i^{crd} \approx f(70\;\mathrm{K})k_i^d(70\;\mathrm{K}) \quad \mathrm{s}^{-1},
\label{cosmicraydesorption}
\end{equation}
where $k_i^d(70\;\mathrm{K})$ is the thermal desorption rate of species $i$ at a temperature
of 70~K, calculated using Equation~(\ref{thermaldesorption}).
The parameter, $f(70\;\mathrm{K})$, is the fraction of time spent by grains in the vicinity of 70~K
and can be loosely defined as the ratio of the desorption cooling time ($\approx 10^{-5}$~s) to
the time interval between successive heatings to 70~K (3.16~$\times 10^{13}$~s), so that
$f(70\;\mathrm{K})\approx 3.16 \times 10^{-19}$ (for further details see \citet{hasegawa93}).
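The following self-contained Python sketch combines Equations~(\ref{thermaldesorption}), (\ref{vibrationalfrequency}) and (\ref{cosmicraydesorption}); the CO parameters are taken from Table~\ref{table1}.
\begin{verbatim}
import math

K_B = 1.380649e-16    # Boltzmann constant [erg K^-1]
AMU = 1.66053907e-24  # atomic mass unit [g]
N_S = 1.5e15          # surface-site density [cm^-2]
F_70K = 3.16e-19      # fraction of time grains spend near 70 K

def crd_rate(E_d_K, mass_amu):
    """Cosmic-ray induced desorption rate: the thermal rate at 70 K
    scaled by the duty cycle f(70 K)."""
    nu = math.sqrt(2.0 * N_S * E_d_K * K_B / (math.pi**2 * mass_amu * AMU))
    return F_70K * nu * math.exp(-E_d_K / 70.0)

# Example for CO (E_d = 960 K, 28 amu):
print(crd_rate(960.0, 28.0))  # ~ 3e-13 s^-1
\end{verbatim}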
Note that the method of calculating the cosmic-ray induced desorption rates is species
dependent and a function of surface binding energy.
In contrast, the photodesorption rates are indiscriminate, based on the
experimental results of \citet{westley95} and \citet{oberg07}.
Their results suggest each photon absorbed by the grain mantle returns a particular
number of molecules, independent of binding energy, to the gas phase
so that the desorption rate of each species varies according to its fractional abundance on
dust-grain surfaces.
The overall photodesorption rate is calculated, similar to the work of \citet{willacy00} and
\citet{willacy07}, using
\begin{equation}
k^{pd} = F_{UV}Y_{UV}\sigma_{d}x_{d} \quad \mathrm{s}^{-1},
\label{photodesorption1}
\end{equation}
where $F_{UV}$ is the UV radiative flux in units of photons~cm$^{-2}$~s$^{-1}$, $Y_{UV}$
is the experimentally determined photodesorption yield in units of molecules photon$^{-1}$,
$\sigma_{d}$ is the geometrical dust-grain cross section in cm$^2$ and $x_{d}$ is the
fractional abundance of dust grains.
Note that the attenuation of UV radiation is accounted for in our calculation of $F_{UV}$.
Hence, the photodesorption rate for a specific species, $k_i^{pd}$, is calculated using
\begin{equation}
k_i^{pd} = k^{pd} \frac{n_i^s}{n_{tot}^s} \quad \mathrm{s}^{-1},
\label{photodesorption2}
\end{equation}
where $k^{pd}$ is given by Equation~(\ref{photodesorption1}), $n_i^s$ is the number density
of species $i$ frozen out onto grain surfaces and $n_{tot}^s$ is the total
number density of grain-surface species.
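A Python sketch of Equations~(\ref{photodesorption1}) and (\ref{photodesorption2}) is given below; the flux, yield and abundances are illustrative placeholders, not values from our model or from the cited experiments.
\begin{verbatim}
import math

def photodesorption_rate(F_uv, Y_uv, x_d, n_i_s, n_tot_s, a=1.0e-5):
    """Total rate k_pd = F_UV * Y_UV * sigma_d * x_d, shared among ice
    species in proportion to their fractional surface abundance."""
    sigma_d = math.pi * a**2            # grain cross section [cm^2]
    k_pd = F_uv * Y_uv * sigma_d * x_d  # total rate [s^-1]
    return k_pd * (n_i_s / n_tot_s)     # per-species rate [s^-1]

# Placeholder example: attenuated FUV flux of 1e7 photons cm^-2 s^-1,
# yield of 1e-3 molecules photon^-1, dust abundance 1e-12, and an ice
# mantle that is 10% the species of interest.
print(photodesorption_rate(1.0e7, 1.0e-3, 1.0e-12, 0.1, 1.0))
\end{verbatim}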
More recent experiments by \citet{oberg09a,oberg09b} suggest that
photodesorption rates are also dependent on the depth of the ice layer on grain
surfaces with the molecular yield also dependent on ice composition.
We intend to explore these experimental results in future models.
For the X-ray desorption rates, we follow the same formulation as for photodesorption,
covered in the theory of \citet{leger85} and \citet{najita01}.
At this point, it is worth noting that X-ray desorption is the least
theoretically or experimentally
constrained of all the non-thermal desorption mechanisms considered here.
The overall X-ray desorption rate, $k^{xr}$, is given by
\begin{equation}
k^{xr} = F_{XR}Y_{XR}P_{abs}\sigma_d x_d \quad \mathrm{s}^{-1},
\label{xraydesorption}
\end{equation}
where $F_{XR}$ is the X-ray photon flux in units of photons~cm$^{-2}$~s$^{-1}$,
$Y_{XR}$ is the desorption yield in units of molecules~photon$^{-1}$ and the
product, $P_{abs}\sigma_{d}$, is the effective cross section with
$P_{abs}$, the probability of X-ray absorption by the dust grain.
The X-ray desorption rate for each
individual species, $k_i^{xr}$, is calculated according to
the fractional abundance of species $i$ on the dust grains
following Equation~(\ref{photodesorption2}).
Here, we adopt a value $Y_{XR} = 200$ from the investigations of
\citet{najita01} and for the effective grain cross section we
use values from the work of \citet{dwek96} regarding energy deposition into grains
by energetic photons in the energy range 10~eV to 1~MeV.
\citet{najita01} consider X-ray desorption from grains of
various compositions and morphologies and conclude that both have a significant influence
on the X-ray desorption yields, calculating values for
$Y_{XR}$ ranging between 10 and $\approx$~4000 molecules photon$^{-1}$.
In this work, we adopt a conservative estimate of the yield
of 200 molecules photon$^{-1}$ as this is the value for the dust morphology
which most closely matches our simple dust-grain model.
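Structurally, Equation~(\ref{xraydesorption}) can be evaluated in the same way as the photodesorption rate; in the Python sketch below the photon flux and absorption probability are placeholders, while $Y_{XR}=200$ follows the choice discussed above.
\begin{verbatim}
import math

def xray_desorption_rate(F_xr, x_d, n_i_s, n_tot_s,
                         Y_xr=200.0, P_abs=1.0, a=1.0e-5):
    """k_xr = F_XR * Y_XR * P_abs * sigma_d * x_d, weighted per species
    as in the photodesorption case. P_abs = 1 is a placeholder; in the
    model the effective cross section follows Dwek & Smith (1996)."""
    sigma_d = math.pi * a**2
    k_xr = F_xr * Y_xr * P_abs * sigma_d * x_d
    return k_xr * (n_i_s / n_tot_s)

# Placeholder X-ray photon flux of 1e5 photons cm^-2 s^-1:
print(xray_desorption_rate(1.0e5, 1.0e-12, 0.1, 1.0))
\end{verbatim}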
Given the large X-ray luminosities of T Tauri stars (see e.g.\ \citet{kastner97,kastner02}), we plan
a more thorough study on
the effects of X-ray desorption in protoplanetary disks taking into consideration the
X-ray energy spectrum as a function of disk radius and height and
investigating the full parameter space considered in the work of \citet{najita01}.
\subsubsection{Grain-Surface Chemistry}
\label{grainsurfacechemistry}
We use the grain-surface network from \citet{hasegawa92} and \citet{hasegawa93}
which has 221 reactions involving an additional 9 surface species which do not have
a gas-phase equivalent (e.g.\ CH$_3$O).
To calculate the reaction rate coefficients, we use the theory outlined
in detail in \citet{hasegawa92}.
The rate coefficient for a grain-surface reaction between species $i$ and $j$ can be defined as
\begin{equation}
k_{ij} = \kappa_{ij}\left( R_{diff}(i) + R_{diff}(j)\right) \left( 1/n_d \right) \quad \mathrm{cm}^{3}\;\mathrm{s}^{-1}.
\label{grainsurface1}
\end{equation}
Here, $\kappa_{ij}$ is the probability that the reaction happens upon encounter and is equal to unity
for an exothermic reaction without an energy barrier.
For reactions with an activation energy, $E_A$, and at least one light reactant,
i.e.\ H or H$_2$, $\kappa_{ij} = \exp\left(-\frac{2b}{\hbar}\sqrt{2\mu E_A}\right)$, where
$b$ is the barrier thickness and $\mu = m_im_j/(m_i+m_j)$ is the reduced mass
of the reaction system.
This expression is the exponential part of the quantum mechanical probability for
tunneling through a rectangular barrier of thickness, $b$.
The term, $R_{diff}$, is the diffusion rate of an adsorbed species and
is the inverse of the diffusion time, $t_{diff}$, defined as $t_{diff} = N_s t_{hop}$ s,
where $N_s$ is the total number of surface sites per dust grain and $t_{hop}$ is the timescale
for an adsorbed species to `hop' from one surface site to another.
The expression for $t_{hop}$ depends on the mass of the species and is given by
\begin{equation}
t_{hop} =
\begin{cases}
{\nu_0(i)}^{-1} \exp\left( \frac{2b}{\hbar}\sqrt{2m_iE_b(i)}\right) \; \mathrm{s} \;
& \; \mbox{H/H$_2$} \\
{\nu_0(i)}^{-1} \exp\left( \frac{E_{b}(i)}{k_BT_d} \right) \; \mathrm{s}
& \; \mbox{other species}
\end{cases}
\label{grainsurface2}
\end{equation}
where $E_b(i)\approx 0.3 E_d(i)$ is the energy barrier between surface sites.
All other parameters have been defined previously.
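A Python sketch of Equations~(\ref{grainsurface1}) and (\ref{grainsurface2}) follows; the vibrational frequencies, barrier thickness $b$~=~1~\text{\AA}, site energies and dust density are illustrative assumptions, with $E_b$ handled in units of K as for $E_d$ above.
\begin{verbatim}
import math

K_B = 1.380649e-16    # erg K^-1
HBAR = 1.0545718e-27  # erg s
AMU = 1.66053907e-24  # g
A_GRAIN = 1.0e-5      # grain radius [cm]
N_SITES = 1.5e15 * 4.0 * math.pi * A_GRAIN**2  # sites per grain

def t_hop(nu, E_b_K, mass_amu, T_d, light=False, b=1.0e-8):
    """Hopping time: quantum tunnelling for H/H2, thermal otherwise.
    E_b is supplied in K (converted to erg for the tunnelling branch);
    b = 1 Angstrom is an assumed barrier thickness."""
    if light:
        E_b_erg = E_b_K * K_B
        arg = (2.0 * b / HBAR) * math.sqrt(2.0 * mass_amu * AMU * E_b_erg)
        return math.exp(arg) / nu
    return math.exp(E_b_K / T_d) / nu

def surface_rate(nu_i, nu_j, Eb_i, Eb_j, m_i, m_j, T_d, n_d,
                 kappa=1.0, light_i=False, light_j=False):
    """k_ij = kappa * (R_diff(i) + R_diff(j)) / n_d, with
    R_diff = 1 / (N_s * t_hop)."""
    R_i = 1.0 / (N_SITES * t_hop(nu_i, Eb_i, m_i, T_d, light_i))
    R_j = 1.0 / (N_SITES * t_hop(nu_j, Eb_j, m_j, T_d, light_j))
    return kappa * (R_i + R_j) / n_d

# Illustrative: H (light, E_b ~ 100 K) + CO (E_b ~ 0.3 * 960 K) at 10 K.
print(surface_rate(1.0e12, 1.0e12, 100.0, 288.0, 1.0, 28.0,
                   T_d=10.0, n_d=1.0e-4, light_i=True))
\end{verbatim}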
\section{RESULTS}
\label{results}
We calculate the chemical abundances in the disk as a function of disk radius,
height and time.
The results displayed here are extracted at a time of 10$^6$~yr, the
typical age of visible T Tauri stars with accompanying protoplanetary disks.
Throughout this section, fractional abundance refers to the abundance of each species
with respect to total particle number density.
In Section~\ref{chemicalstructure}, we display and discuss results from
model PH+CRH only, to illustrate the global chemical structure and
stratification in the disk.
Table~\ref{table2} lists the names and ingredients of each model for which we present results.
Our `fiducial' model is model PH+CRH since most current chemical models
of protoplanetary disks include photodesorption and cosmic-ray induced desorption by default.
In model CRH we remove photodesorption to investigate the influence of cosmic-ray induced desorption,
in model XD we look at the effects of X-ray desorption only, and in model GR, we investigate the
addition of grain-surface chemistry to our fiducial model.
Of course, there are many more permutations of the ingredients
which are worthwhile considering in the future e.g. X-ray desorption plus grain-surface
chemistry.
\begin{deluxetable*}{lccccc}
\tablecaption{Chemical Models \label{table2}}
\tablewidth{0pt}
\tablehead{&\colhead{0}&\colhead{CRH}&\colhead{PH+CRH}&\colhead{XD}&\colhead{GR}}
\startdata
Thermal desorption & \checkmark & \checkmark & \checkmark & \checkmark & \checkmark \\
Cosmic-ray induced desorption & & \checkmark & \checkmark & & \checkmark \\
Photodesorption & & & \checkmark & & \checkmark \\
X-ray desorption & & & & \checkmark & \\
Grain-surface chemistry & & & & & \checkmark
\enddata
\end{deluxetable*}
\subsection{Chemical Structure}
\label{chemicalstructure}
Figure~\ref{figure2} displays the fractional abundances of those molecules observed in disks
(CO, HCO$^+$, HCN, CN, CS, C$_2$H, H$_2$CO and N$_2$H$^+$)
as a function of disk radius and height, up to maximum radii of 10~AU (left column) and 305~AU
(right column).
The global abundance distribution of molecules is governed by the binding energy of each
molecule to dust grains and the UV radiation field strength.
We see most molecules existing predominantly in a molecular layer of varying thickness
at a height $z/r$~$\approx$~0.3 to 0.5 in the outer disk and $\approx$~0.2 to 0.3 in the inner disk,
with freeze out causing depletion
in the midplane and photolysis causing destruction in the upper layers.
CO is an exception to this and is abundant
(x(CO)~$\approx$~$10^{-4}$) throughout the majority of the
depth of the outer disk ($>$~50~AU) with depletion due to freeze out
in the disk midplane only occurring beyond a radius of $\approx$~250~AU.
In the inner disk ($r$~$<$~50~AU), gas-phase CO is abundant in the disk midplane due to its
low binding energy to the dust grains.
In this region, however, we see most molecules confined to the `molecular layer'.
HCO$^+$ has a fractional abundance of
$\sim$~10$^{-10}$ to $\sim$~$10^{-9}$ throughout most of the outer disk, mirroring the
distribution of CO.
Within $\approx$~50~AU, it is confined to a thin layer at a
height $z/r$~$\approx$~0.3 with x(HCO$^{+}$)~$\sim$~10$^{-6}$ which coincides
with the transition zone where the gas composition changes from molecular
to atomic hydrogen.
Gas-phase HCN has a peak fractional abundance of x(HCN)~$\sim$~$10^{-7}$
existing in the molecular layer throughout the disk.
HCN can remain frozen out onto dust grains down to radii of $\approx$~1~AU
from the parent star, demonstrating the effects of the vastly different desorption energies
of CO and HCN (960~K and 4170~K, respectively).
The distribution of CN is complementary to that of HCN as it is
predominantly formed via the photodissociation of the latter molecule.
Hence, throughout the disk, CN exists in a layer above that of
HCN with a fractional abundance $\sim$~10$^{-6}$.
In the outer disk, CN can survive in the surface region, however, the
increasing UV field strength in the inner disk means that
CN is also destroyed by photodissociation in the disk surface.
The distributions of the radicals CS and C$_2$H are similar to that of CN
since both are formed predominantly via the UV photolysis of larger precursor
molecules (e.g.\ H$_2$CS and C$_2$H$_2$).
H$_2$CO reaches its maximum fractional abundance (x(H$_2$CO)~$\sim$~$10^{-8}$) in the outer disk,
although, within 10~AU, this value is reduced to $\sim$~$10^{-10}$
and H$_2$CO is confined to the molecular layer.
H$_2$CO is returned to the gas phase in the disk midplane within $\approx$~1~AU.
The fractional abundance distribution for N$_2$H$^+$ differs from that of
any of the molecules considered thus far.
N$_2$H$^+$ reaches its maximum fractional abundance of $\approx$~$10^{-10}$ in the outer
disk only and is present where gas-phase CO is depleted e.g.\
in the disk midplane beyond a radius of 250~AU.
In dense regions, the main destruction mechanism of N$_2$H$^+$ is via reaction
with CO.
In the upper layers, N$_2$H$^+$ increases in abundance due to the increased
abundance of both N$_2$ and cations, such as, H$_3$$^+$.
Within 10~AU, x(N$_2$H$^+$) remains less than $\approx$~10$^{-13}$.
\begin{figure*}
\subfigure{\includegraphics[width=0.5\textwidth]{./CO_map_10AU.eps}}
\subfigure{\includegraphics[width=0.5\textwidth]{./CO_map.eps}}
\subfigure{\includegraphics[width=0.5\textwidth]{./HCO+_map_10AU.eps}}
\subfigure{\includegraphics[width=0.5\textwidth]{./HCO+_map.eps}}
\subfigure{\includegraphics[width=0.5\textwidth]{./HCN_map_10AU.eps}}
\subfigure{\includegraphics[width=0.5\textwidth]{./HCN_map.eps}}
\subfigure{\includegraphics[width=0.5\textwidth]{./CN_map_10AU.eps}}
\subfigure{\includegraphics[width=0.5\textwidth]{./CN_map.eps}}
\captcont{Fractional abundances of several molecules observed in disks as a function of
disk radius and height up to maximum radii of 10~AU (left) and 305~AU (right). }
\label{figure2}
\end{figure*}
\begin{figure*}
\subfigure{\includegraphics[width=0.5\textwidth]{./CS_map_10AU.eps}}
\subfigure{\includegraphics[width=0.5\textwidth]{./CS_map.eps}}
\subfigure{\includegraphics[width=0.5\textwidth]{./C2H_map_10AU.eps}}
\subfigure{\includegraphics[width=0.5\textwidth]{./C2H_map.eps}}
\subfigure{\includegraphics[width=0.5\textwidth]{./H2CO_map_10AU.eps}}
\subfigure{\includegraphics[width=0.5\textwidth]{./H2CO_map.eps}}
\subfigure{\includegraphics[width=0.5\textwidth]{./N2H+_map_10AU.eps}}
\subfigure{\includegraphics[width=0.5\textwidth]{./N2H+_map.eps}}
\caption{(Continued.)}
\end{figure*}
Figure~\ref{figure3} displays the fractional abundances of those additional molecules observed
at infrared wavelengths, H$_2$O (top), OH (second),
CO$_2$ (third) and C$_2$H$_2$ (bottom), as a function of disk radius and height up to a maximum
radius of 10~AU.
We display results from within 10~AU only as infrared emission
originates from the inner hot, dense disk material.
Gas-phase H$_2$O is confined to the molecular layer with a fractional
abundance x(H$_2$O)~$\sim$~$10^{-4}$. As above, freeze out is responsible for
depletion in the midplane and photolysis for depletion in the upper layers.
H$_2$O is returned to the gas phase in the midplane at a radius $\approx$~1 to 2~AU.
The distribution of OH is complementary to that of H$_2$O, residing throughout the disk in a
layer above that of
gas-phase H$_2$O and reaching a peak fractional abundance of $\sim$~$10^{-4}$.
The distribution of CO$_2$ is similar to that of CO in the inner disk, existing only
in the midplane with a maximum value of $10^{-4}$ within a few AU of the star.
The snow-line for CO$_2$, however, is at the much smaller radius of $\sim$~10~AU
(as opposed to $\approx$~250~AU).
Acetylene, C$_2$H$_2$, reaches a peak fractional abundance of $\sim$~$10^{-8}$,
in the molecular layer.
C$_2$H$_2$ and similar molecules are formed in hotter regions where oxygen is
depleted from the gas phase
via the freeze out of oxygen-containing molecules onto dust grains, driving a
carbon chemistry and hydrocarbon synthesis.
\begin{figure*}
\subfigure{\includegraphics[width=0.5\textwidth]{./H2O_map_10AU.eps}}
\subfigure{\includegraphics[width=0.5\textwidth]{./OH_map_10AU.eps}}
\subfigure{\includegraphics[width=0.5\textwidth]{./CO2_map_10AU.eps}}
\subfigure{\includegraphics[width=0.5\textwidth]{./C2H2_map_10AU.eps}}
\caption{Fractional abundances of H$_2$O (top left), OH (top right),
CO$_2$ (bottom left) and C$_2$H$_2$ (bottom right)
as a function of disk radius and height up to a radius of 10~AU.}
\label{figure3}
\end{figure*}
In Figure~\ref{figure10} (online only) we display the fractional abundances of the gas-phase
molecules discussed above, along with constituent atoms and
grain-surface analogues where applicable, as a
function of disk height at radii $r$~=~0.1~AU, 1~AU, 10~AU
and 100~AU.
\subsection{Effects of Non-thermal Desorption}
\label{nonthermaldesorptioneffects}
In this section, we discuss the effects of each of our non-thermal desorption mechanisms,
cosmic-ray induced desorption, photodesorption and X-ray desorption, on the disk chemical
structure (models CRH, PH+CRH and XD in Table~\ref{table2}, respectively).
We call our control model, which includes thermal desorption only, model 0.
First, we show in Figure~\ref{figure4} those regions of the disk in which molecules
are depleted if we take into account thermal desorption only,
presenting, as an example, the gas-phase CO fractional abundance as a function of disk radius and
height from model 0.
We discuss how efficiently each non-thermal desorption mechanism works against
depletion throughout the disk in Sections~\ref{cosmicraydesorptioneffects}
to \ref{xraydesorptioneffects}.
We display the fractional abundances of several gas-phase species as a function of disk radius and
height comparing results from each of our non-thermal desorption models in
Figure~\ref{figure11} (online only).
In Figure~\ref{figure4} there are three notable areas where CO is depleted from the gas phase,
(i) in the midplane beyond a radius of $\approx$~250~AU, (ii) in a layer at a
height of $z/r$~$\approx$~0.3 and (iii) in the disk surface between a radius
of a few AU to $\approx$~50~AU.
In region (i), the depletion is due to freeze out of CO onto dust grains as the dust temperature
here is below the desorption temperature of CO. The absence
of any non-thermal desorption means that CO is completely removed from the gas phase.
For region (ii), the depletion is due to the destruction of CO by UV radiation.
At this point, the
UV field is strong enough to dissociate CO into its constituent atoms (atomic carbon and oxygen),
however, the dust temperature is also low enough for the freeze out of molecules which have
binding energies larger than that of CO e.g.\ H$_2$O.
By a time of $10^6$~years, the time at which we extract our abundances, atomic carbon and oxygen
are trapped in molecules contained in the icy mantle.
Above this height, the dust temperature becomes high enough for many molecules to desorb thermally
thus replenishing the stock of C and O to reform CO.
In region (iii), a similar effect to that in region (ii) occurs
due to the decoupling of the dust and gas temperatures in
the disk surface with the dust temperature up to two orders of magnitude lower than the
gas temperature.
The thermal desorption rate is dependent only on the dust temperature ($\propto$~$\exp(-E_d/T_d)$)
whereas the accretion rate depends both on the gas temperature and the number density of dust grains
(see Equations~\ref{accretionrate} and \ref{thermaldesorption}).
Since the gas temperature can be much higher than the dust temperature in this region,
the accretion rate can supersede that of thermal desorption so that by $10^6$ years,
molecules which can survive the intense UV radiation field are able to freeze out onto dust grains.
Specifically, H$_2$O molecules are able to survive as ice on dust grains thereby depleting
the gas phase of oxygen-bearing molecules such as CO.
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{./CO_0.eps}
\caption{Fractional abundance of gas-phase CO as a function of disk radius and height using
results from model 0. }
\label{figure4}
\end{figure}
\subsubsection{Cosmic-ray Induced Desorption}
\label{cosmicraydesorptioneffects}
The left columns of Figures~\ref{figure5} and \ref{figure6} display the fractional abundances
of several molecules and molecular
ions as a function of disk height at radii $r$~=~10~AU and 305~AU, respectively,
comparing the results from model 0 (solid lines) with model CRH (dotted lines).
Cosmic-ray induced desorption has the smallest effect on the gas-phase abundances and
only in the outer disk midplane.
The fractional abundances of gas-phase CO and H$_2$O are enhanced in the disk midplane
at 305~AU in the results for model CRH, compared with those for model 0, although
the values reached remain orders of magnitude smaller than those in the upper disk layers.
The fractional abundance of N$_2$H$^+$ is also enhanced, due in part to the release of
N$_2$ from dust grains in this region by cosmic-rays and, in fact, reaches
its maximum fractional abundance in the disk midplane.
As our model disk is truncated at 305~AU, we would expect the effects of cosmic-ray induced
desorption to continue beyond this radius.
\subsubsection{Photodesorption}
\label{photodesorptioneffects}
Photodesorption is the most experimentally constrained non-thermal desorption mechanism
we have included in our model.
Photodesorption has an effect in the upper disk layers where the UV radiation
field has reached an appreciable strength yet the temperature remains low
enough for freeze out to occur.
We compare the results from model 0 (solid lines) with those from model
PH+CRH (dotted lines) in the middle plots of Figures~\ref{figure5} and \ref{figure6}.
Photodesorption enhances the abundance of molecules in the molecular region
of the disk, counteracting the depletion effects discussed at
the beginning of Section~\ref{nonthermaldesorptioneffects}.
Clearly seen at a radius of 10~AU is the smoothing of molecular
abundances throughout the middle region of the disk with both model results
converging higher in the surface.
At a radius of 305~AU, there is a smoothing out of abundances counteracting depletion
but also a slight difference in the distribution of molecules
in the surface regions.
The fractional abundances of CO and CS are enhanced in model PH+CRH
relative to model 0, whereas those of HCO$^+$, H$_2$O, HCN and N$_2$H$^+$ are reduced.
This is due to the alteration of gas-phase chemistry when photodesorption is included, as
a significant amount of all molecules can remain in the gas phase in the mid and upper
layers of the disk. Thus, those molecules which ordinarily would be frozen out in the absence
of photodesorption are available to take part in gas-phase reactions e.g.\ N$_2$H$^+$ is
destroyed via reaction with gas-phase CO so that an enhancement in the abundance of the latter leads to a
corresponding drop in that of the former.
\subsubsection{X-ray Desorption}
\label{xraydesorptioneffects}
X-ray desorption is the least theoretically or experimentally constrained non-thermal
desorption mechanism we considered, hence, we have used
conservative estimates of molecular yields and thus X-ray desorption
rates.
The right-hand plots in Figures~\ref{figure5} and \ref{figure6} suggest that, even using conservative
estimates, X-ray desorption has the largest effect on gas-phase molecular abundances.
X-rays, with their higher energy, can penetrate deeper into the disk material
than UV photons, hence, X-rays have an effect in the disk midplane as well as
in the molecular region.
At $r$~=~10~AU (top plot), the abundances of both H$_2$O and CS are enhanced in model XD
relative to the results from models 0, CRH and PH+CRH.
X-ray desorption also acts to smooth out abundances in the upper disk, similar
to the effects of photodesorption.
At 305~AU, the effects of the inclusion of X-ray desorption are most apparent.
The fractional abundances of all molecules considered here, with the exception
of CS, are enhanced in the midplane of the disk, to values comparable with
those found in the upper disk regions.
In fact, it appears that the inclusion of X-ray desorption acts to smooth
or homogenise the fractional abundances of gas-phase
CO, H$_2$O and HCO$^+$ throughout the depth of the disk.
Again, as seen in the results for photodesorption, the gas-phase distributions
are altered in model XD compared with model 0 due to the alteration of
the gas-phase chemistry.
In the upper disk, the results for models PH+CRH and XD are similar with the
exception of the fractional abundance of CS which is enhanced in abundance between
heights of $\approx$~70~AU and $\approx$~150~AU in model XD relative to
model PH+CRH.
\begin{figure*}
\centering
\subfigure{\includegraphics[width=0.32\textwidth]{./0_vs_CRH_10AU.eps}}
\subfigure{\includegraphics[width=0.32\textwidth]{./0_vs_PH+CRH_10AU.eps}}
\subfigure{\includegraphics[width=0.32\textwidth]{./0_vs_XD_10AU.eps}}
\subfigure{\includegraphics[width=0.32\textwidth]{./0_vs_CRH_10AU_1.eps}}
\subfigure{\includegraphics[width=0.32\textwidth]{./0_vs_PH+CRH_10AU_1.eps}}
\subfigure{\includegraphics[width=0.32\textwidth]{./0_vs_XD_10AU_1.eps}}
\caption{Fractional abundances of several gas-phase molecules and molecular ions
as a function of disk height at a radius, $r$~=~10~AU comparing
results from model 0 (solid lines) with each non-thermal desorption model (dotted lines),
CRH (left), PH+CRH (middle) and XD (right).
Note that the results for N$_2$H$^+$ from model 0 are too small to appear on our plot.}
\label{figure5}
\end{figure*}
\begin{figure*}
\centering
\subfigure{\includegraphics[width=0.32\textwidth]{./0_vs_CRH.eps}}
\subfigure{\includegraphics[width=0.32\textwidth]{./0_vs_PH+CRH.eps}}
\subfigure{\includegraphics[width=0.32\textwidth]{./0_vs_XD.eps}}
\subfigure{\includegraphics[width=0.32\textwidth]{./0_vs_CRH_1.eps}}
\subfigure{\includegraphics[width=0.32\textwidth]{./0_vs_PH+CRH_1.eps}}
\subfigure{\includegraphics[width=0.32\textwidth]{./0_vs_XD_1.eps}}
\caption{Fractional abundances of several gas-phase molecules and molecular ions
as a function of disk height at a radius, $r$~=~305~AU comparing
results from model 0 (solid lines) with each non-thermal desorption model (dotted lines),
CRH (left), PH+CRH (middle) and XD (right).}
\label{figure6}
\end{figure*}
\subsection{Effects of Grain-surface Chemistry}
\label{grainsurfacechemistryeffects}
The addition of grain-surface chemistry is expected to aid the synthesis
of complex organic molecules in regions of the disk where significant freeze out has occurred.
In this discussion, we look at the abundances of small organic (saturated)
molecules in the outer disk in particular.
In model GR (see Table~\ref{table2}), in addition to grain-surface chemistry, we also
add cosmic-ray induced desorption and photodesorption.
Figure~\ref{figure7} shows the fractional abundances of several small organic molecules
as a function of disk height at radii, $r$~=~100~AU (left) and 305~AU (right), for model
PH+CRH (solid lines) compared with model GR (dotted lines).
At 100~AU, the fractional abundances of HCOOH (formic acid) and H$_2$CO (formaldehyde)
are enhanced in the disk midplane in model GR, relative to model PH+CRH, with the abundance
of HCOOH also enhanced in the upper disk layer.
Most of the organic molecules considered reach their peak fractional abundance between
25 and 40~AU.
Here, the fractional abundances of CH$_3$OH (methanol),
HCOOCH$_3$ (methyl formate) and CH$_3$OCH$_3$ (dimethyl ether) are all enhanced
to a value $\sim$~$10^{-13}$ in model GR, orders of magnitude larger than the respective values
from model PH+CRH.
At the very outer edge of our disk model, $r$~=~305~AU, the fractional abundances of
all molecules are enhanced in model GR relative to model PH+CRH.
Of note is the extreme enhancement seen in the abundances of
CH$_3$OH, HCOOCH$_3$ and CH$_3$OCH$_3$, again by several orders of magnitude, to fractional
abundances which are potentially observable ($\sim$~$10^{-11}$ to $\sim$~$10^{-10}$).
By fitting observed line intensities of rotational transitions in a selection of molecules
with a simple disk model,
\citet{thi04} estimate the column density of the relatively complex molecule, H$_2$CO, in several protoplanetary
disks as lying between $\sim$~10$^{12}$ and 10$^{13}$~cm$^{-2}$ which in the outer disk translates roughly
to a fractional abundance of $\sim$~10$^{-10}$ as the H$_2$ column density here is
$\sim$~10$^{22}$~cm$^{-2}$.
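The conversion quoted here is simply the ratio of the two column densities, as the short sketch below illustrates (using the lower end of the observed range).
\begin{verbatim}
# Observed H2CO column density (lower end of the Thi et al. 2004 range)
# divided by a representative outer-disk H2 column density:
N_h2co = 1.0e12  # cm^-2
N_h2 = 1.0e22    # cm^-2
print(N_h2co / N_h2)  # -> 1e-10
\end{verbatim}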
Complex organic molecules in hot cores and dark clouds are routinely observed with fractional abundances
$\gtrsim$~10$^{-11}$ (see e.g. \citet{herbst09}).
We display the fractional abundances of the molecules discussed in this section as a function
of disk radius and height comparing results from model PH+CRH and model GR in Figure~\ref{figure12}
(online only).
\begin{figure*}
\subfigure{\includegraphics[width=0.5\textwidth]{./organics_100AU.eps}}
\subfigure{\includegraphics[width=0.5\textwidth]{./organics_305AU.eps}}
\subfigure{\includegraphics[width=0.5\textwidth]{./organics_100AU_1.eps}}
\subfigure{\includegraphics[width=0.5\textwidth]{./organics_305AU_1.eps}}
\caption{Fractional abundances of several small organic molecules as a function of disk
height at radii, $r$~=~100~AU (left) and 305~AU (right) for model PH+CRH (solid lines) and
model GR (dotted lines).}
\label{figure7}
\end{figure*}
\subsection{Disk Ionisation Fraction}
\label{diskionisationfraction}
The ionisation fraction in protoplanetary disks is an important parameter as it is thought that
this drives the accretion flow in the disk through the coupling of the gas with
the strong magnetic fields generated by the system.
The required turbulence is generated via magneto-rotational instabilities or MRI
\citep{balbus91,hawley91}.
For effective accretion, the ionisation fraction is required to exceed a critical
level which is dependent on the nature of the star-disk system (see e.g. \citet{ilgner06}).
Regions in which the ionisation fraction falls below this critical value, and where
magneto-hydrodynamic accretion is effectively switched off, are termed `dead zones'.
Figure~\ref{figure8} shows the electron fractional abundance as a function of disk radius and
height within a radius of 10~AU (left) and 305~AU (right) using the results from model PH+CRH.
The ionisation fraction varies between a minimum value of $\sim$~$10^{-12}$ in the densest region of
the disk up to a value of $\sim$~0.1 in the hottest, most irradiated surface region closest to
the star.
We find that we attain similar electron abundances throughout the disk regardless of
chemical model. The ionisation threshold required for effective accretion is related to the magnetic
Reynolds number \citep{gammie96} which must be determined in advance of addressing the location of
any `dead zones' in our particular star-disk system.
The question of whether accretion is suppressed in our disk model will be considered in detail
in a subsequent publication in which we also investigate the effects of the recalculation
of photo-rates and direct X-ray ionisation on the disk chemical structure and ionisation
fraction.
\begin{figure*}
\subfigure{\includegraphics[width=0.5\textwidth]{./electrons_map_10AU.eps}}
\subfigure{\includegraphics[width=0.5\textwidth]{./electrons_map.eps}}
\caption{Fractional abundance of electrons as a function of disk radius and
height up to maximum radii of 10~AU (left) and 305~AU (right).}
\label{figure8}
\end{figure*}
\subsection{Radial Column Densities}
\label{columndensities}
The column density, $N_i$, at each radius, $r$, for each species, $i$,
is calculated by integrating the number density over the depth of the disk i.e.\
\begin{equation}
N_i(r) = \int_{-\infty}^{+\infty} n_i(r,z)\; \mathrm{d}z \qquad \mathrm{cm^{-2}}.
\label{columdensity}
\end{equation}
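Numerically, this is a straightforward vertical quadrature; the Python sketch below uses an artificial Gaussian CO layer purely to illustrate the operation, not a profile from our model.
\begin{verbatim}
import numpy as np

AU = 1.496e13  # cm

def column_density(z, n_i):
    """Integrate the number density n_i(r, z) [cm^-3] over the vertical
    grid z [cm] at fixed radius, using the trapezoidal rule."""
    return np.trapz(n_i, z)  # [cm^-2]

# Toy profile: a Gaussian CO layer with peak 1e4 cm^-3 and scale
# height 10 AU, on a grid spanning -50 AU to +50 AU.
z = np.linspace(-50.0, 50.0, 401) * AU
n_co = 1.0e4 * np.exp(-0.5 * (z / (10.0 * AU))**2)
print(column_density(z, n_co))  # ~ 3.7e18 cm^-2
\end{verbatim}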
The radial column densities provide an excellent means to trace the radial mass
distribution in the disk and also to compare directly results from each of our chemical models to
determine the species sensitive to each chemical process.
In Table~\ref{table3} we list the column densities of various important molecules
at radii of 1~AU, 10~AU, 100~AU and 305~AU for each chemical model and we display
the column densities of many of the molecules discussed thus far,
as a function of radius, up to maximum
radii of 10~AU (left) and 305~AU (right) in Figure~\ref{figure13} (online only).
At a radius of 1~AU, the column densities are relatively insensitive to the choice
of chemical model, unsurprising given that most molecules are in the gas phase
at this radius and the chemistry is dominated in the midplane by neutral-neutral
reactions and in the surface by photo-chemistry.
At 10~AU, the effects of the inclusion of non-thermal desorption become apparent.
The column densities for those models which include photodesorption
are consistently higher for all molecules with the column densities of
HCN, CN and CS particularly
sensitive.
X-ray desorption has a similar effect although we also see a dramatic increase in the
column densities of H$_2$O and CO$_2$ due to the penetrative power of X-rays in this
region.
This is due to the relatively strong X-ray field at this radius coupled with
the low column density of intervening absorbing material (from the disk surface
to the midplane).
Grain-surface chemistry has only a mild effect at 10~AU on the column densities of
the listed molecules.
For the molecular ions, HCO$^+$ and N$_2$H$^+$, the addition of photodesorption
causes a rise in the column densities of both molecules while the addition
of X-ray desorption produces a fall in the column density of the former
and a rise in that of the latter.
At 100~AU, we see some of the same behaviour as at 10~AU with photodesorption
increasing the column densities of CO, HCN, CN, CS, C$_2$H, H$_2$CO, H$_2$O, CO$_2$ and
C$_2$H$_2$ with HCN, CS and CO$_2$ particularly affected.
X-ray desorption further enhances the column densities of these species.
Here, we begin to see the effects of grain-surface chemistry, with
HCN, H$_2$O and CO$_2$ being casualties of the increased synthesis of more complex species.
Finally, in the outer disk, the effects of cosmic-ray induced desorption
on the column density of N$_2$H$^{+}$ become apparent, so that this molecule is
a potential observable tracer of this desorption mechanism.
We can also see how both photodesorption and X-ray desorption act to counteract
the depletion of CO onto dust grains in the outer cold disk midplane thus
enhancing its overall column density.
In the outer disk, observable in the column densities is the detrimental effect
that the addition of grain-surface chemistry
has on the column densities of HCN and CS in particular, as N and S atoms are incorporated
into larger, more complex species, via grain-surface reactions.
We can also see the dramatic effect on methanol due to the inclusion of
grain-surface chemistry with its column density enhanced by around three orders of
magnitude.
\begin{deluxetable}{lccccc}
\tablecaption{Column Densities\label{table3}}
\tablewidth{0pt}
\tablehead{\colhead{Species} & \colhead{0} & \colhead{CRH} & \colhead{PH+CRH} & \colhead{XD} &
\colhead{GR}}
\startdata
\cutinhead{1 AU}
H$_2$ & 1.9(25) & 1.9(25) & 1.9(25) & 1.9(25) & 1.9(25)\\
CO & 2.3(21) & 2.3(21) & 2.3(21) & 2.3(21) & 2.3(21) \\
HCO$^+$ & 1.7(14) & 1.7(14) & 1.7(14) & 1.7(14) & 1.7(14) \\
HCN & 2.0(17) & 2.0(17) & 2.0(17) & 2.0(17) & 2.0(17) \\
CN & 2.7(14) & 2.7(14) & 2.7(14) & 2.7(14) & 2.7(14) \\
CS & 4.5(12) & 4.5(12) & 4.5(12) & 4.5(12) & 4.5(12) \\
C$_2$H & 4.1(14) & 4.1(14) & 4.1(14) & 4.1(14) & 4.1(14) \\
H$_2$CO & 1.3(12) & 1.3(12) & 1.3(12) & 1.3(12) & 1.3(12) \\
N$_2$H$^+$ & 5.5(10) & 5.5(10) & 5.3(10) & 5.2(10) & 5.3(10) \\
OH & 1.7(16) & 1.7(16) & 1.7(16) & 1.7(16) & 1.7(16) \\
H$_2$O & 1.7(21) & 1.7(21) & 1.7(21) & 1.6(21) & 1.7(21) \\
CO$_2$ & 4.6(20) & 4.6(20) & 4.6(20) & 4.6(20) & 4.6(20) \\
C$_2$H$_2$ & 1.8(15) & 1.8(15) & 1.8(15) & 1.8(15) & 1.7(15) \\
CH$_3$OH & 2.6(14) & 2.6(14) & 2.6(14) & 2.6(14) & 2.6(14) \\
\cutinhead{10 AU}
H$_2$ & 2.6(24) & 2.6(24) & 2.6(24) & 2.6(24) & 2.6(24) \\
CO & 3.6(20) & 3.6(20) & 3.6(20) & 3.8(20) & 3.2(20) \\
HCO$^+$ & 4.9(13) & 4.9(13) & 5.4(13) & 3.5(13) & 4.4(13) \\
HCN & 2.7(11) & 2.7(11) & 7.1(14) & 8.0(14) & 7.0(14) \\
CN & 1.6(13) & 1.6(13) & 3.9(14) & 3.9(14) & 2.1(14) \\
CS & 2.0(12) & 1.5(12) & 1.0(14) & 2.5(14) & 3.4(13) \\
C$_2$H & 3.6(13) & 3.6(13) & 2.1(14) & 2.1(14) & 1.4(14) \\
H$_2$CO & 3.1(11) & 3.0(11) & 1.4(12) & 1.4(12) & 1.4(12) \\
N$_2$H$^+$ & 1.4(09) & 1.4(09) & 1.4(10) & 1.4(10) & 1.6(10) \\
OH & 8.7(15) & 8.7(15) & 8.8(15) & 8.8(15) & 8.8(15) \\
H$_2$O & 2.6(15) & 2.6(15) & 4.3(15) & 2.0(16) & 4.3(15) \\
CO$_2$ & 4.4(16) & 4.4(16) & 4.9(16) & 1.3(18) & 4.0(16) \\
C$_2$H$_2$ & 1.1(14) & 1.1(14) & 8.7(13) & 2.8(13) & 1.5(14) \\
CH$_3$OH & 2.1(07) & 2.1(07) & 7.7(07) & 1.6(08) & 8.5(07) \\
\cutinhead{100 AU}
H$_2$ & 2.0(23) & 2.0(23) & 2.0(23) & 2.0(23) & 2.0(23) \\
CO & 2.2(19) & 2.2(19) & 2.3(19) & 2.8(19) & 2.2(19) \\
HCO$^+$ & 2.2(14) & 2.2(14) & 2.2(14) & 8.3(13) & 4.9(13) \\
HCN & 1.6(12) & 1.6(12) & 2.1(14) & 3.7(14) & 2.3(13) \\
CN & 1.2(13) & 1.2(13) & 2.6(14) & 2.6(14) & 2.4(14) \\
CS & 7.0(09) & 7.0(09) & 3.2(13) & 9.7(13) & 1.7(13) \\
C$_2$H & 1.1(13) & 1.1(13) & 1.2(14) & 1.2(14) & 1.1(14) \\
H$_2$CO & 7.3(11) & 7.2(11) & 1.9(12) & 1.9(12) & 1.7(12) \\
N$_2$H$^+$ & 5.0(10) & 5.2(10) & 3.7(10) & 3.1(10) & 7.0(10) \\
OH & 4.4(14) & 4.4(14) & 2.2(14) & 2.2(14) & 2.8(14) \\
H$_2$O & 1.2(14) & 1.2(14) & 1.1(15) & 2.6(15) & 7.3(14) \\
CO$_2$ & 3.8(12) & 3.8(12) & 3.4(15) & 1.2(16) & 7.7(14) \\
C$_2$H$_2$ & 2.8(12) & 2.8(12) & 1.9(13) & 2.0(13) & 1.6(13) \\
CH$_3$OH & 5.6(06) & 5.6(06) & 7.5(07) & 1.5(08) & 4.8(08) \\
\cutinhead{305 AU}
H$_2$ & 5.7(22) & 5.7(22) & 5.7(22) & 5.7(22) & 5.7(22) \\
CO & 5.2(17) & 5.2(17) & 1.2(18) & 1.8(18) & 1.1(18) \\
HCO$^+$ & 3.2(13) & 3.2(13) & 1.2(13) & 2.6(13) & 3.6(13) \\
HCN & 2.8(13) & 2.8(13) & 9.1(13) & 1.8(14) & 7.4(12) \\
CN & 9.9(13) & 9.9(13) & 1.6(14) & 1.7(14) & 1.6(14) \\
CS & 2.0(10) & 2.0(10) & 3.1(12) & 4.3(13) & 8.3(11) \\
C$_2$H & 3.1(13) & 3.1(13) & 8.0(13) & 8.0(13) & 7.2(13) \\
H$_2$CO & 6.3(12) & 6.3(12) & 2.1(12) & 2.5(12) & 2.4(12) \\
N$_2$H$^+$ & 2.5(12) & 5.1(12) & 5.0(12) & 3.7(12) & 1.2(13) \\
OH & 7.9(14) & 8.0(14) & 8.0(13) & 1.7(14) & 2.0(14) \\
H$_2$O & 1.4(15) & 1.4(15) & 7.8(14) & 1.4(15) & 1.2(15) \\
CO$_2$ & 9.4(13) & 9.5(13) & 1.9(15) & 3.4(15) & 7.9(14) \\
C$_2$H$_2$ & 1.3(13) & 1.3(13) & 1.3(13) & 3.1(13) & 2.3(13) \\
CH$_3$OH & 1.6(08) & 1.6(08) & 5.6(07) & 7.7(08) & 1.6(11)
\enddata
\tablecomments{$a(b)$ means $a \times 10^{b}$}
\end{deluxetable}
\subsection{Comparison with Other Models}
\label{comparison}
A direct comparison with other chemical models of protoplanetary disks is difficult as no
two models are identical in either their physical basis or their chemical networks.
Given the plethora of preceding work in this field, we limit this short discussion
to more recent models which are comparable to ours in chemical complexity.
The work presented here builds upon previous investigations into the importance of
cosmic-ray induced desorption and photodesorption in disks by \citet{willacy00} and \citet{willacy07},
the latter of which uses the model of \citet{dalessio01} for their physical framework.
We obtain encouragingly similar results to \citet{willacy07} in particular, although
her primary objective was the investigation into deuterated species in disks.
As such, the reaction network of the non-deuterated species was truncated to accommodate
the additional reactions involving deuterium and deuterium-containing molecules.
Our work also differs in that our physical model includes X-ray heating and also explicitly
determines the gas temperature which can decouple from the dust-grain temperature
in regions where cooling via gas-grain collisions is inefficient.
\citet{willacy09} also use their deuterated reaction network and adapt their model
to investigate the chemical structure of a disk within 30~AU of the central star.
Again, we achieve similar results, although differences in the set of
molecular desorption energies used manifest as differences in the positions of `snow lines' for
molecules such as HCN.
Also, differing prescriptions for
the UV radiation field in the disk lead to different distributions of radicals such as C$_2$H and CN.
Photodesorption and cosmic-ray induced desorption are now routinely included in modern chemical
models of protoplanetary disks (e.g.\ \citet{woitke09, henning10}).
X-ray desorption and grain-surface chemistry have both been included in work by
other groups (e.g.\ \citet{semenov08,henning10}) although not explicitly investigated given
the theoretical uncertainty behind the exact mechanism of X-ray desorption for the
former process and the usual truncation of chemical networks for the latter.
Our work presented here and subsequent follow-up publications on
X-ray desorption and grain-surface chemistry, we hope, will go some way to
addressing this.
\section{SUMMARY}
\label{summary}
In this work, we have presented a selection of results from our high-resolution combined
chemical and physical model of a protoplanetary disk surrounding a typical T Tauri star, constructed
in order to trace the physical and thus chemical structure on small scales.
We use a protoplanetary disk model in which the gas density and temperature distributions
are obtained self-consistently and UV and X-ray irradiation by the central star is calculated
by solving the radiative transfer equation.
To this we applied a large comprehensive
chemical network including gas-phase chemistry, gas-grain interactions and grain-surface
chemistry.
We investigated the effects of each non-thermal desorption mechanism thought to
be important in disks: cosmic-ray induced desorption, photodesorption and X-ray desorption.
We also added a large grain-surface network to investigate the effectiveness of grain-surface
reactions on the synthesis of relatively complex organic molecules.
Using the results from model PH+CRH (see Table~\ref{table2}) extracted at a time
of $10^{6}$~years, we find that the disk chemical structure closely mirrors the disk physical structure
with the freeze out of molecules onto dust grains creating an icy mantle
in the cold, dense midplane and an abundance of molecules
in a layer above the midplane created through sublimation and the resulting
rich gas-phase chemistry.
In the disk surface, the molecular abundances drop as the UV and X-ray radiation fields peak in strength
dissociating molecules and ionising both molecules and atoms.
The resulting disk ionisation fraction increases with increasing disk height.
There is similar stratification in the radial direction as the increasing temperature drives the
evaporation of molecules.
The dependence of the desorption rate on the binding energy results in a unique `snow line' for each molecule,
with more volatile molecules returned to the gas in the midplane at larger radii than
less volatile ones.
In particular, both HCN and H$_2$O remain frozen onto dust grains to within $\approx$~1 to 2 AU
of the central star.
The addition of cosmic-ray induced desorption has only a small effect on
the gas-phase abundances in the outer disk midplane although since our model is truncated at
305~AU we would expect that this effect will continue at radii beyond this value.
Photodesorption, the most experimentally constrained of the mechanisms considered here,
has a larger effect although only in the molecular and surface regions of the disk where there is
an appreciable UV flux.
It is especially effective at enhancing the gas-phase abundances of non-volatile molecules such as
H$_2$O.
X-ray desorption has the largest effect, smoothing and homogenising the abundances of gas-phase
species throughout the height of the disk.
However, X-ray desorption is the least theoretically or experimentally constrained and thus our
results must be treated with caution, pending further investigation.
The addition of grain-surface chemistry also yields some encouraging results worthy
of revisiting and further study.
In the outer disk, where the freeze out of molecules is most prevalent, the abundances
of relatively complex organic molecules e.g.\ CH$_3$OH, HCOOCH$_3$ and CH$_3$OCH$_3$ are enhanced
to potentially observable values when grain-surface chemistry and photodesorption are included
in our model.
Thus, the observation of rotational transitions of these, and related, species in protoplanetary
disks should provide an excellent means of testing grain-surface chemistry theory.
Due to limitations in existing facilities, the most complex molecule observed, as yet, in disks is H$_2$CO.
ALMA, however, will have the sensitivity and spectral resolution necessary to
observe rotational transitions in these minor species.
Indeed, preliminary synthetic spectra we have calculated suggest that the rotational
transition lines of methanol are enhanced above the detection threshold of ALMA when grain-surface
chemistry is included in our model (Walsh et al., in preparation).
We have shown that running models
of this nature, in which we test experimental data and theory while
varying the different chemical ingredients, is necessary for disentangling the different physical influences
on the chemical content of protoplanetary disks.
The influence of the physical conditions and processes on the molecular content is also a strong function
of radius and so high-resolution models, which trace the chemical structure on small scales, are also preferred.
In this brief overview of our model, we have shown that X-ray desorption and grain-surface chemistry can have
a powerful effect on the molecular content of disks and we intend to expand upon the work presented here with
follow-up papers on both chemical processes.
Although in this paper we have shown that the distribution and radial column densities of particular
molecules are sensitive to the inclusion or omission of certain chemical processes, in order to test the
viability of using these molecules as tracers we must compute the radiative transfer in the disk and directly
compare our results with observations. This work will be reported in a subsequent paper (Walsh et al., in
preparation).
\acknowledgments
We wish to thank an anonymous referee for his or her constructive comments
which helped improve our paper.
C.\ Walsh acknowledges DEL for a studentship and JSPS for the award of a short-term
fellowship to conduct research in Japan. H.\ Nomura acknowledges the JGC-S Scholarship
Foundation, the Grant-in-Aid for Scientific Research 21740137 and the
Global COE Program ``The Next Generation of Physics, Spun from Universality and
Emergence'' from MEXT, Japan.
Astrophysics at QUB is supported by a grant from the STFC.
\section{Introduction}
In this paper the pseudomonotonicity of special compositions of nonlinear operators is shown.
More specifically, we consider the system
\begin{eqnarray*}
A (x,y) & = & x_0^*, \\
B (x,y) & = & y_0^*
\end{eqnarray*}
for operators $ A : X \times Y \rightarrow X ^*$ and $ B : X \times Y \rightarrow Y ^*$ on reflexive Banach spaces $ X $ and $ Y $.
Assume that for every $x\in X $ the mapping $ B_x := B (x,\>.\> ): Y \rightarrow Y ^*$ is uniquely invertible
and define $ R x:= B_x^{-1}( y_0^*)$.
Then the given system is solvable if and only if $ A (x, R x)= x_0^*$ admits a solution.\\
We provide sufficient conditions that ensure the pseudomonotonicity of the mapping $ S x:= A (x, R x)$.
Then existence results for this system can be obtained from the classical theory of pseu\discretionary{-}{}{}domono\discretionary{-}{}{}tone operators.
To this end, we introduce the subclass of semimonotone operators
(which is a variant of a respective subclass of pseu\discretionary{-}{}{}domono\discretionary{-}{}{}tone operators considered in
\cite{lions,zeid2b,papa,deim}).
The operators of this subclass enjoy a mixture of monotonicity and of compactness properties.
This can be seen as a generalization of those differential operators that
are monotone in the highest order terms and compact in the terms of lower order.\\
The conditions we presume in order to prove the pseudomonotonicity of $ S $
consist of the strong monotonicity of $ B $ in $y$,
the semimonotonicity of $ A $ in $x$,
and further assumptions on the coupling of both equations of the system.
The latter include respective Lipschitz conditions.
Furthermore, we require that when splitting $ A $ into a monotone and a compact part
the composition operator $ S $ still inherits the monotonicity property of $ A $.
This leads to a restriction on the influence both parts of the system may exert on each other
and is given as a condition on the Lipschitz and strong monotonicity constants.
{\def\xa#1#2#3{&\pboxc{0cm}{\pboxc{0.87\hsize}{$#1$}\pboxl{0.04\hsize}{#2}\pboxl{0.09\hsize}{$#3$}}&}
In our application this reduction technique is applied to
a model of phase-separation in a binary mixture incorporating elastic effects.
To be more specific, we consider
on a time interval ${\cal T}$
and on a domain $\Omega $, with ${\Gamma_D}$ and ${\Gamma_N}$ being disjoint parts of the boundary,
the following parabolic equation of fourth order in space of Cahn--Hilliard type
coupled to an elliptic equation accounting for elastic effect given by
\begin{eqnarray*}
\xa{ \partial_t u - \mathop{\rm div}(M\nabla(\mu \partial_t u+w)) \>=\> 0 }{on}{{\cal T}\times\Omega ,} \\
\xa{ w = \varphi '(u) - \mathop{\rm div}( b_1 (u,\nabla u, e ))+ b_2 (u,\nabla u, e ) }{on}{{\cal T}\times\Omega ,} \\
\xa{ \mathop{\rm div} b_0 (u,\nabla u, e ) \>=\> 0,\enspace \enspace \enspace e =\epsilon({\bf u}):=\frac12(D{\bf u}+D{\bf u}^t) }{on}{{\cal T}\times\Omega ,}
\end{eqnarray*}
together with the boundary and initial conditions
\begin{eqnarray*}
\xa{ M\nabla(\mu \partial_t u+w)\cdot\vec n \>=\> 0,\enspace \enspace \enspace b_1 (u,\nabla u, e )\cdot\vec n\>=\> 0 }{on}{{\cal T}\times\partial\Omega ,} \\
\xa{ b_0 (u,\nabla u, e )\vec n \>=\> 0 }{on}{{\cal T}\times{\Gamma_N},} \\
\xa{ {\bf u} \>=\> 0 }{on}{{\cal T}\times{\Gamma_D},} \\
\xa{ u(0) \>=\> u_0 }{on}{\Omega .}
\end{eqnarray*}
These equations model
the mass balance for the concentration $u$ of one of the components,
the related chemical potential $w$,
and a quasi-steady mechanical equilibrium, respectively,
with $ b_0 $ being the stress tensor,
which depends in a nonlinear way on $u,\,\nabla u$ and on the linearized strain tensor $ e $.
The latter is given as the symmetric part of the derivative of the displacement ${\bf u}$.
Furthermore, $M$ is the (constant) mobility matrix, and
the functions $ b_1 $ and $ b_2 $ together with the convex functional $\varphi $
determine the chemical potential $w$ and thus model the behavior of the material.
Note that $ b_0 , b_1 $ and $ b_2 $ may explicitly depend on $(t,x)\in{\cal T}\times\Omega $,
which is suppressed in the notation to enhance the readability.
The constant $\mu $ is non-negative.
If it is strictly positive, then the model includes additional contributions to the diffusion flux
resulting from the concept of microforces, cf.~Fried, Gurtin~\cite{fried1,fried2}
and Gurtin~\cite{gurtin}.
We prove the existence of solutions in an appropriate weak sense.
For this purpose, we make use of a general framework for evolution equations by Gr\"oger~\cite{konni},
which allows the inclusion of suitable (possibly degenerate) linear operators inside the time derivative.
Then, using our general result, the coupled system can be reduced to a single parabolic operator equation
involving a pseu\discretionary{-}{}{}domono\discretionary{-}{}{}tone operator.
To this end,
we have to ensure the aforementioned assumptions on the coupling.
This is done with the help of a result on $W^{1,p}$ regularity for some $p>2$ for the solution ${\bf u}$
to the mechanical equilibrium.
For different models of Cahn--Hilliard type for phase separation coupled to elastic effects
and related existence results, we refer exemplarily to \cite{miranville,garcke1,sprekels}.
In~\cite{pawlow} the elastic effects are not assumed to be quasi-steady.
This leads to a coupled system of parabolic-hyperbolic type.
A model which incorporates a damage process was considered in~\cite{heinemann}.
The remainder of this paper is organized as follows:
In Section~\ref{s2:} we introduce our notion of semimonotone operators and show that they form
a subclass of all pseu\discretionary{-}{}{}domono\discretionary{-}{}{}tone mappings.
Further, we provide conditions on $ A $ and $ B $ such that $ S $ is semimonotone.
Section~\ref{s3:} gives a short introduction into the approach to evolution equations developed by
Gr\"oger~\cite{konni} and states a corresponding existence result.
In Section~\ref{s4:} the results of the preceding sections are applied to the model above of
phase-separation with elastic effects.
We introduce an appropriate notion of weak solutions and give conditions on the functions
$ b_0 , b_1 , b_2 $ and $\varphi $ that are used in order to prove the existence of solutions.
\section{Semimonotone operators}
\label{s2:}
This section introduces the class of semimonotone operators and shows that it is a subset of all pseu\discretionary{-}{}{}domono\discretionary{-}{}{}tone operators.
Conditions are given that ensure that the composition of two operators with special properties is semimonotone.
In Section~\ref{s4:} we use this result to reformulate an elliptic-parabolic system as a single evolution
equation of pseu\discretionary{-}{}{}domono\discretionary{-}{}{}tone type.
For this equation we derive an existence result from the classical theory of pseu\discretionary{-}{}{}domono\discretionary{-}{}{}tone operators.
Before starting with our analysis, let us fix some notations.
For a Banach space $ X $, we denote by $ ||\iBlock{.}|| _{ X } $ its norm, by $ X ^*$ its dual space, and
by $ \iKlammerB{.}{.} \iSub{ X }: X ^*\times X \rightarrow {{\mathbb R}}$ its dual pairing.
In this paper, we will only consider real Banach spaces.
$ X _\omega $ indicates the space $ X $ equipped with its weak topology.
If it is clear from the context, we simply write $ ||\iBlock{.}|| $ and $ \iKlammerB{.}{.} $
for $ ||\iBlock{.}|| _{ X } $ and $ \iKlammerB{.}{.} \iSub{ X }$, respectively.
Moreover, the (in general multi-valued) duality mapping of $ X $ is given by $ J_{ X } \subset X \times X ^*$.
Here and below, we identify mappings with their graphs and, occasionally,
singletons $\{x\}$ with $x$ itself.
For a Hilbert space $ H $ we denote by $ \iKlammerA{.}{.} \iSub{ H }$ its inner product.
Then $ J_{ H } $ coincides with the canonical isomorphism from $ H $ onto $ H ^*$
given by Riesz's theorem.
The identity mapping of a set $M$ regarded as an operator from $M$ into some superset $M'\supset M$
is written as ${\rm Id}_{M\rightarrow M'}$.
Finally, for sets $ M_1 , M_2 , M_3 $, $x\in M_1 $ and $F: M_1 \times M_2 \rightarrow M_3 $
we write $F_x: M_2 \rightarrow M_3 $ for the mapping $ M_2 \ni y\mapsto F(x,y)$.
Now, let $ X $ and $ Y $ be real, reflexive Banach spaces.
We start by recalling the definition of pseu\discretionary{-}{}{}domono\discretionary{-}{}{}tone operators.
\begin{definition}[{\rm $ T $--pas, pseu\discretionary{-}{}{}domono\discretionary{-}{}{}tone operators}]\label{..}
Let $ T : X \rightarrow X ^*$ be an operator.
A sequence $( x_n )_{n\in{{\mathbb N}}}$ in $ X $ will be called a $ T $--pas if $( x_n )$
converges weakly in $ X $ to an element $»x\in X $ and it holds
\[ \mathop{\overline {\lim}}_{n\mathop{\rightarrow }\infty} \iKlammerB{ T x_n }{ x_n -»x} \>\le\> 0. \]
Furthermore, $ T $ is said to be pseu\discretionary{-}{}{}domono\discretionary{-}{}{}tone if
for every $ T $--pas $( x_n )_{n\in{{\mathbb N}}}$ converging weakly to $»x\in X $,
\[ \iKlammerB{ T »x}{»x-v} \>\le\> \mathop{\underline {\lim}}_{n\mathop{\rightarrow }\infty} \iKlammerB{ T x_n }{ x_n -v} \]
holds for every $v\in X $.
\end{definition}
Our notational shortcut of a $ T $--pas stands for a 'pseudomonotonously active sequence'.
The definition of pseudomonotonicity follows Zeidler~\cite{zeid2b}.
Note that the original definition of Br\'ezis involves nets instead of sequences
and requires the operator to satisfy a certain boundedness condition.
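To fix ideas, a standard example (cf.~\cite{zeid2b}) is the negative $p$-Laplacian on $ X =W^{1,p}_0(\Omega )$ for a bounded domain $\Omega $ and $1<p<\infty$,
\[
\iKlammerB{ T »u}{»v} \>:=\> \int_\Omega |\nabla»u|^{p-2}\,\nabla»u\cdot\nabla»v \> dx,
\]
which is monotone, bounded and radially continuous and therefore pseu\discretionary{-}{}{}domono\discretionary{-}{}{}tone; compact perturbations of lower order preserve this property.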
\begin{remark}\label{r1:Doni}
If $ T : X \rightarrow X ^*$ is pseu\discretionary{-}{}{}domono\discretionary{-}{}{}tone and if $( x_n )$ is a $ T $--pas with limit $»x$,
then by choosing $»v=»x$ we obtain $\mathop{\underline {\lim}}_n \iKlammerB{ T x_n }{ x_n -»x} \ge0$
and hence $\lim_n \iKlammerB{ T x_n }{ x_n -»x} =0$.
\end{remark}
\begin{definition}\label{}
For arbitrary vector spaces $U$ and $V$, $ u_0 \in U$ and for any multi-valued operator $T\subset U\times V$
the translation $\mathfrak{T}_{ u_0 }T\subset U\times V$ of \,$T$ is given by
$(\mathfrak{T}_{ u_0 }T)u:=\mathfrak{T}_{ u_0 }Tu:=T(u- u_0 )$ for $u\in U$.
\end{definition}
An important property of the class of pseu\discretionary{-}{}{}domono\discretionary{-}{}{}tone operators is its closedness under summation and translation.
\begin{proposition}\label{p2:PsmSum}
If $ x_0 \in X $ and if the operators $ T , T_1 , T_2 : X \rightarrow X ^*$ are pseu\discretionary{-}{}{}domono\discretionary{-}{}{}tone, so are $ T_1 + T_2 $ and $\mathfrak{T}_{ x_0 } T $.
\end{proposition}
{\em Proof}.
For the pseudomonotonicity of $ T_1 + T_2 $ see~\cite{zeid2b}, Prop~27.6, p.~586.
Let $( x_n )$ be a $\mathfrak{T}_{ x_0 } T $--pas with weak limit $»x$ and let $»v$ be arbitrary.
Then $ y_n := x_n - x_0 $ is a $ T $--pas with limit $»y:=»x- x_0 $ and by the pseudomonotonicity of $ T $ we get
for $»u:=»v- x_0 $ that
\[
\iKlammerB{\mathfrak{T}_{ x_0 } T »x}{»x-»v}
\>=\> \iKlammerB{ T »y}{»y-»u}
\>\le\> \mathop{\underline {\lim}}_{n\mathop{\rightarrow }\infty} \iKlammerB{ T y_n }{ y_n -»u}
\>=\> \mathop{\underline {\lim}}_{n\mathop{\rightarrow }\infty} \iKlammerB{\mathfrak{T}_{ x_0 } T x_n }{ x_n -»v}
\]
which finishes the proof.
\endproof
\begin{definition}\label{}
Let $ L :D( L )\rightarrow X ^*$ be a linear, closed operator with domain $D( L )$ dense in $ X $.
We set $»Z:=D( L )$ and equip it with the graph norm of $ L $, i.e.
\[
||\iBlock{»x}|| _{Z} := ( ||\iBlock{»x}|| _{ X } ^2 + ||\iBlock{ L »x}|| _{ X ^*} ^2)^{1/2}.
\]
An operator $ T : X \rightarrow X ^*$ is said to be pseu\discretionary{-}{}{}domono\discretionary{-}{}{}tone with respect to $ L $,
if $I^* T I:Z\rightarrow Z^*$ is pseu\discretionary{-}{}{}domono\discretionary{-}{}{}tone, where $I:={\rm Id}_{Z\rightarrow X }$ is the identity regarded
as a mapping from $Z$ into $ X $.\\
$ T : X \rightarrow X ^*$ is called radially continuous in $»x\in X $ if the mapping
$t\mapsto \iKlammerB{ T (»x+t»v)}{»v} $ from ${{\mathbb R}}$ into itself is continuous in $t=0$ for every $»v\in X $.
Finally, we call $ T : X \rightarrow X ^*$ coercive with respect to $ x_0 \in X $ if
\[
\lim_{ ||\iBlock{»x}|| \mathop{\rightarrow }\infty} \f{ \iKlammerB{ T »x}{»x- x_0 } }{ ||\iBlock{»x}|| } \>=\> +\infty.
\]
\end{definition}
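For instance, if $ T $ is strongly monotone, i.e.\ $ \iKlammerB{ T »x- T »y}{»x-»y} \ge\alpha\, ||\iBlock{»x-»y}|| ^2$ for some $\alpha>0$, then $ T $ is coercive with respect to every $ x_0 \in X $, since
\[
\iKlammerB{ T »x}{»x- x_0 } \>\ge\> \alpha\, ||\iBlock{»x- x_0 }|| ^2 »- ||\iBlock{ T x_0 }|| _{ X ^*}\, ||\iBlock{»x- x_0 }||
\]
grows superlinearly in $ ||\iBlock{»x}|| $.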
Pseudomonotone operators occurring in PDEs often have a special structure:
a monotone part (usually terms of highest order) together with a
compact perturbation (lower order terms).
The following notion generalizes this behavior.
\begin{definition}[{\rm Semimonotone operators}]\label{d1:Semi} \def\xa#1#2#3{#1&\hskip3mm& \hbox to 12.5cm{$\displaystyle #2$\hfil$#3$}}
We call an operator $ T : X \rightarrow X ^*$ semimonotone if $ T $ has the form $ T »x=\xwt{ T }(»x,»x)$
for a mapping $\xwt{ T }: X \times X \rightarrow X ^*$ satisfying the conditions:
\begin{eqnarray*}
\xa{(S1)}{ \iKlammerB{\xwt{ T }(»x,»x)-\xwt{ T }(»y,»x)}{»x-»y} \>\ge\> 0 }{ \forall»x,»y\in X },\\
\xa{(S2)}{ y_n \mathop{\rightharpoonup}»y \mbox{ \>is a $ T $--pas} \enspace \enspace \Longrightarrow \enspace \enspace \xwt{ T }(»x, y_n )\mathop{\rightharpoonup}\xwt{ T }(»x,»y) }{ \forall»x\in X },\\
\xa{(S3)}{ y_n \mathop{\rightharpoonup}»y \mbox{ \>is a $ T $--pas} \enspace \enspace \Longrightarrow \enspace \enspace \iKlammerB{\xwt{ T }(»x, y_n )}{ y_n -»y} \mathop{\rightarrow }0}{ \forall»x\in X },\\
\xa{(S4)}{ »x\mapsto \xwt{ T }(»x,»y) \mbox{ \> is radially continuous in the point $»x=»y$} }{ \forall»y\in X }.
\end{eqnarray*}
In this case, $\xwt{ T }$ is called a semimonotone representative of $ T $.
\end{definition}
\begin{remark}\label{..}
Different authors denote different classes of operators as being semimonotone.
Deimling~\cite{deim}, Zeidler~\cite{zeid2b} and Hu/Papageorgiou~\cite{papa}
use definitions which are more restrictive than Definition~\ref{d1:Semi},
as does Lions~\cite{lions} with his operators of 'variational type'.
We use Definition~\ref{d1:Semi} instead, since it is simpler and more general,
but nevertheless collects all the properties we need.
\end{remark}
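A simple illustration: if $ T = A + C $ with $ A : X \rightarrow X ^*$ monotone and radially continuous and $ C : X \rightarrow X ^*$ strongly continuous (i.e., mapping weakly convergent sequences to strongly convergent ones), then $\xwt{ T }(»x,»y):= A »x+ C »y$ is a semimonotone representative of $ T $:
(S1) reduces to the monotonicity of $ A $, (S2) and (S3) follow from the strong continuity of $ C $ and (S4) from the radial continuity of $ A $.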
The following proposition shows that semimonotone operators are indeed pseu\discretionary{-}{}{}domono\discretionary{-}{}{}tone.
\begin{proposition}\label{p1:SemiPsm}
If $ T : X \rightarrow X ^*$ is a semimonotone operator, then $ T $ is pseu\discretionary{-}{}{}domono\discretionary{-}{}{}tone.
\end{proposition}
{\em Proof}.
Let $( x_n )$ be a $ T $--pas with $ x_n \mathop{\rightharpoonup}»x$, $»v\in X$ and $\xwt{ T }$ be a semimonotone representative of $ T $.
We put $ w_t :=»x+t(v-»x)$ for $0<t\le1$.
The monotonicity condition (S1) applied to $ x_n $ and $ w_t $ implies
\[
\iKlammerB{\xwt{ T }( x_n , x_n )-\xwt{ T }( w_t , x_n )}{ x_n -»x+»x- w_t } \>\ge\> 0.
\]
With $»x- w_t =t(»x-»v)$, this can be rewritten as
\[
t\, \iKlammerB{ T x_n }{»x-v} \enspace \ge\enspace -\> \iKlammerB{ T x_n }{ x_n -»x} »+ \iKlammerB{\xwt{ T }( w_t , x_n )}{ x_n -»x} »+ t\, \iKlammerB{\xwt{ T }( w_t , x_n )}{»x-v} .
\]
Passing to the limit inferior on both sides and using (S2), (S3) and the fact that $( x_n )$
is a $ T $--pas, we end up with
\[
t \!\mathop{\underline {\lim}}_{n\mathop{\rightarrow }\infty} \iKlammerB{ T x_n }{»x-v} \>\ge\> t\, \iKlammerB{\xwt{ T }( w_t ,»x)}{»x-v} .
\]
Now we divide by $t$ and pass with $t\mathop{\rightarrow }0$ to the limit in order to obtain
$
\mathop{\underline {\lim}}_{n} \iKlammerB{ T x_n }{»x-v} \>\ge\> \iKlammerB{ T »x}{»x-v}
$
by the radial continuity (S4). This inequality together with $\lim_{n} \iKlammerB{ T x_n }{ x_n -»x} =0$
(cf.\ Remark~\ref{r1:Doni}) yields
\[
\mathop{\underline {\lim}}_{n\mathop{\rightarrow }\infty} \iKlammerB{ T x_n }{ x_n -v}
\enspace \ge\enspace \mathop{\underline {\lim}}_{n\mathop{\rightarrow }\infty} \iKlammerB{ T x_n }{ x_n -»x} »+ \mathop{\underline {\lim}}_{n\mathop{\rightarrow }\infty} \iKlammerB{ T x_n }{»x-v}
\enspace \ge\enspace \iKlammerB{ T »x}{»x-v} .
\]
This proves the pseudomonotonicity of $ T $.
\endproof
In order to study systems, we consider the following continuity property.
\begin{definition}[{\rm Sequential solutional continuity}]\label{..}
Suppose that $ X $ and $»Z$ are two topological spaces,
$ Y $ is an arbitrary set and that $ T : X \times Y \rightarrow »Z$.
We say that $ T $ is sequentially solutionally continuous in $»x\in X $ and $»z\in»Z$ if the equation
$ T (»x,»y)=»z$ has a unique solution $»y\in Y $, and if for every sequence $( x_n )_{n\in{{\mathbb N}}}$ converging
to $»x$ in $ X $ it holds that
\[
T ( x_n ,»y) \>\mathop{\rightarrow }\> »z \enspace \enspace \mbox{in }»Z.
\]
Furthermore, $ T $ is said to be sequentially solutionally continuous in $»z\in»Z$ if $ T $ is so in
$»x$ and $»z$ for every $»x\in X $.
\end{definition}
Next, assumptions are given that guarantee the pseudomonotonicity of the operator $ S $ from the introduction.
We suppose uniform strong monotonicity and sequential solutional continuity of $»\xwt{ B }$ as well as Lipschitz conditions.
The assumptions {\rm(A3.2)} and {\rm(A3.3)} can be seen as a counterpart to conditions (S2)--(S4)
used in the definition of semimonotone operators.
\begin{definition}[{\rm Assumptions {\rm(A1)}, {\rm(A2)} and {\rm(A3)}}]\label{a2:Semi}{\parskip3pt
\def\xa#1#2{\rucky{2cm}{\hfil{\rm #1}\hfil}{#2}}
\def\xb#1#2#3#4{\pboxc{2cm}{\rm #1}\hskip10mm\pboxl{7.5cm}{$#2$}\pboxc{1cm}{$#3$}\pboxl{2cm}{$#4$}}
\def\xc#1#2#3{\pboxc{2cm}{\rm #1}\hbox to 13.5cm{#2\hss$#3$}}
\def\xd#1#2#3#4{\pboxl{2.5cm}{#1}\pboxc{6cm}{$#2$\pboxc{7mm}{$#3$}$#4$}}
We say that {\rm(A1)} is fulfilled if the following conditions are met:
\xa{{\rm(A1.1)}}{$X,Y$ are real, reflexive Banach spaces and $ y_0^*\in Y ^*$,}
\xa{{\rm(A1.2)}}{$ A : X \times Y \rightarrow X ^*$ and $ B : X \times Y \rightarrow Y ^*$ together with
$\xwt{ A }: X \times X \times Y \rightarrow X ^*$ and $\xwt{ B }: X \times X \times Y \rightarrow Y ^*$ are mappings such that
$ A (»x,»y)=\xwt{ A }(»x,»x,»y), \enspace B (»x,»y)=\xwt{ B }(»x,»x,»y)$
for all $(»x,»y)\in X \times Y $.
}
\xa{{\rm(A1.3)}}{The mapping $»y\mapsto \xwt{ B }( x_1 , x_2 ,»y)$ from $ Y $ into $ Y ^*$ is strongly monotone uniformly in
$( x_1 , x_2 )\in X \times X $, i.e.\ there exists an $\alpha_B >0$ such that
\[ \iKlammerB{\xwt{ B }( x_1 , x_2 , y_1 )-\xwt{ B }( x_1 , x_2 , y_2 )}{ y_1 - y_2 } \iSub{ Y } \>\ge\> \alpha_B \, ||\iBlock{ y_1 - y_2 }|| _{ Y } ^2 \]
for all $ x_1 , x_2 \in X $ and $ y_1 , y_2 \in Y $.
Furthermore, $»y\mapsto \xwt{ B }( x_1 , x_2 ,»y)$ is radially continuous for every tuple $( x_1 , x_2 )\in X \times X $.}
If furthermore there are constants $\beta_A ,\beta_B \ge0$ and $\alpha_A >0$ such that\\[12pt]
\xb{{\rm(A2.1)}}{ \iKlammerB{\xwt{ A }( x_1 , x_2 ,»y) - \xwt{ A }( x_2 , x_2 ,»y)}{ x_1 - x_2 } \iSub{ X } }{\ge}{ \alpha_A \, ||\iBlock{ x_1 - x_2 }|| _{ X } ^2, } \\[0.5ex]
\xb{{\rm(A2.2)}}{ ||\iBlock{\xwt{ A }( x_1 , x_2 , y_1 ) - \xwt{ A }( x_1 , x_2 , y_2 )}|| _{ X ^*} }{\le}{ \beta_A \, ||\iBlock{ y_1 - y_2 }|| _{ Y } , } \\[0.5ex]
\xb{{\rm(A2.3)}}{ ||\iBlock{\xwt{ B }( x_1 , x_2 ,»y) - \xwt{ B }( x_2 , x_2 ,»y)}|| _{ Y ^*} }{\le}{ \beta_B \, ||\iBlock{ x_1 - x_2 }|| _{ X } } \\[12pt]
hold for all $ x_1 , x_2 \in X $ and $»y\in Y $, then we say that {\rm(A2)} is satisfied.
Finally, {\rm(A3)} is fulfilled if {\rm(A2)} \!and the following conditions are satisfied \\[2ex]
\xc{{\rm(A3.1)}}{the mapping $(»x,»y)\mapsto \xwt{ B }(»x_0 ,»x,»y)$ from $ X _\omega \times Y $ into $ Y ^*$ }{ } \\[0.5mm]
\xc{ }{is sequentially solutionally continuous in $»x=»x_0 $ and $ y_0^*$ }{ \forall»x_0 \in X , } \\[3mm]
\xc{{\rm(A3.2)}}{if $ x_n \mathop{\rightharpoonup}»x,\enspace y_n \mathop{\rightarrow }»y$ and $\mathop{\overline {\lim}}\limits_{n\mathop{\rightarrow }\infty} \iKlammerB{\xwt{ A }(»x, x_n , y_n )}{ x_n -»x} \iSub{ X }\le0$ }{ } \\[0.5ex]
\xc{ }{\xd{then it holds }{ \xwt{ A }(»x_0 , x_n , y_n ) }{ \mathop{\rightharpoonup} }{ \xwt{ A }(»x_0 ,»x,»y)\enspace } }{ } \\[0.5ex]
\xc{ }{\xd{and }{ \iKlammerB{\xwt{ A }(»x_0 , x_n , y_n )}{ x_n -»x} \iSub{ X } }{ \mathop{\rightarrow } }{ 0 } }{ \forall»x_0 \in X , } \\[3mm]
\xc{{\rm(A3.3)}}{the mapping $»x\mapsto \xwt{ A }(»x,»x_0 ,»y)$ is radially continuous in $»x_0 $ }{ \forall»x_0 \in X ,\>»y\in Y , } \\[3mm]
\xc{{\rm(A3.4)}}{ $\alpha_A \,\alpha_B \>\ge\> \beta_A \,\beta_B $ }{ }\\[2ex]
for all sequences $( x_n )_{n\in{{\mathbb N}}}$ in $ X $ and $( y_n )_{n\in{{\mathbb N}}}$ in $ Y $.
}\end{definition}
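Condition {\rm(A3.4)} balances the strength of the coupling: the product of the monotonicity constants of the leading parts has to dominate the product of the Lipschitz constants of the cross terms; in the uncoupled case $\beta_A =\beta_B =0$ it holds trivially. Lemma~\ref{p1:LipMon} below shows that it is exactly this condition which renders the reduced operator monotone in its first argument.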
In particular, under {\rm(A3)}, for every $»y\in Y $ the mapping $»x\mapsto A (»x,»y)$ is semimonotone.
Moreover, since $\xwt{ B }_{( x_1 , x_2 )}: Y \rightarrow Y ^*$ is strongly monotone and radially continuous,
the equation $\xwt{ B }_{( x_1 , x_2 )}»y= y_0^*$ has a unique solution $»y\in Y $ for every $ x_1 , x_2 \in X $.
The corresponding solution operator and its composition with $\xwt{ A }$ are denoted by
$»\xwt{ R }$ and $»\xwt{ S }$, respectively.
\begin{definition}[{\rm Operators $\xwt{ R }$ and $\xwt{ S }$}]\label{d2:RS}{\def\xa#1{\pboxl{3cm}{$#1$}}
\def\xb#1{\pboxl{8cm}{$#1$}}
\def\xc#1#2{\xa{#1} && \xb{#2}}
\def\xd#1#2{\xa{#1} \hskip7mm \xb{#2}}
Assume {\rm(A1)} and $ x_1 , x_2 \in X $.
The bijectivity of $\xwt{ B }_{( x_1 , x_2 )}$ allows us to define the operators
$\xwt{ R }$ and $\xwt{ S }$ from $ X \times X $ into $ Y $ and $ X ^*$, respectively, as
\begin{eqnarray*}
\xc{ \xwt{ R }: X \times X \rightarrow Y , }{ \xwt{ R }( x_1 , x_2 ):=\xwt{ B }_{( x_1 , x_2 )}^{\>-1}\,( y_0^*). } \\[0.5ex]
\xc{ \xwt{ S }: X \times X \rightarrow X ^*, }{ \xwt{ S }( x_1 , x_2 ):=\xwt{ A }( x_1 , x_2 ,\xwt{ R }( x_1 , x_2 )). }
\end{eqnarray*}
}\end{definition}
The following lemma provides simple Lipschitz and monotonicity properties of $\xwt{ R }$ and $\xwt{ S }$.
\begin{lemma}\label{p1:LipMon}
If {\rm(A1)} is fulfilled, then
\[
||\iBlock{ \xwt{ R } z_1 -\xwt{ R } z_2 }|| _{ Y } \enspace \le\enspace \f1{\alpha_B } \> ||\iBlock{ \xwt{ B }(»z,\xwt{ R } z_1 )-\xwt{ B }(»z,\xwt{ R } z_2 ) }|| _{ Y ^*}
\]
holds for all pairs $»z, z_1 , z_2 \in X \times X $.
If {\rm(A2)} is satisfied, then for all $ x_1 , x_2 \in X $
\begin{eqnarray*}
||\iBlock{\xwt{ R }( x_1 , x_2 )-\xwt{ R }( x_2 , x_2 )}|| _{ Y } & \>\le\> & \f{\beta_B }{\alpha_B }\, ||\iBlock{ x_1 - x_2 }|| _{ X } , \\[0.5ex]
\iKlammerB{\xwt{ S }( x_1 , x_2 )-\xwt{ S }( x_2 , x_2 )}{ x_1 - x_2 } \iSub{ X } & \>\ge\> & \f{\alpha_A \alpha_B -\beta_A \beta_B }{\alpha_B }\, ||\iBlock{ x_1 - x_2 }|| _{ X } ^2.
\end{eqnarray*}
\end{lemma}
{\em Proof}. {
For $»z, z_1 , z_2 \in X \times X $ {\rm(A1.3)} implies
\begin{eqnarray*}
||\iBlock{ \xwt{ R } z_1 -\xwt{ R } z_2 }|| _{ Y } ^2 & \le & \f{1}{\alpha_B }\> \iKlammerB{ \xwt{ B }(»z,\xwt{ R } z_1 )-\xwt{ B }(»z,\xwt{ R } z_2 )}{\xwt{ R } z_1 -\xwt{ R } z_2 } \iSub{ Y } \\[0.5ex]
& \le & \f{1}{\alpha_B }\> ||\iBlock{ \xwt{ B }(»z,\xwt{ R } z_1 )-\xwt{ B }(»z,\xwt{ R } z_2 )}|| _{ Y ^*} \, ||\iBlock{\xwt{ R } z_1 -\xwt{ R } z_2 }|| _{ Y }
\end{eqnarray*}
and hence the first inequality.
Now assume {\rm(A2)}, let $ x_1 , x_2 \in X $ and set $ »z_i :=( »x_i , x_2 )$. From the definition of $\xwt{ R }$ we have
$\xwt{ B }( »z_i ,\xwt{ R } »z_i )= y_0^*$ and therefore by the first inequality that
\[
||\iBlock{ \xwt{ R } z_1 -\xwt{ R } z_2 }|| _{ Y } \enspace \le\enspace \f1{\alpha_B }\> ||\iBlock{\xwt{ B }( z_2 ,\xwt{ R } z_2 ) - \xwt{ B }( z_1 ,\xwt{ R } z_2 )}|| _{ Y ^*} \enspace \le\enspace \f{\beta_B }{\alpha_B }\> ||\iBlock{ x_1 - x_2 }|| _{ X } ,
\]
which is the second inequality.
Together with {\rm(A2.1)} and {\rm(A2.2)}, this yields the estimate
\begin{eqnarray*}
&& \hb \iKlammerB{ \xwt{ S }( x_1 , x_2 )-\xwt{ S }( x_2 , x_2 ) }{ x_1 - x_2 } \iSub{ X } \\[0.5ex]
& = & \iKlammerB{\xwt{ A }( x_1 , x_2 ,\xwt{ R }( x_1 , x_2 ))-\xwt{ A }( x_2 , x_2 ,\xwt{ R }( x_1 , x_2 )) }{ x_1 - x_2 } \iSub{ X } \\[0.5ex]
& & +\enspace \iKlammerB{\xwt{ A }( x_2 , x_2 ,\xwt{ R }( x_1 , x_2 ))-\xwt{ A }( x_2 , x_2 ,\xwt{ R }( x_2 , x_2 )) }{ x_1 - x_2 } \iSub{ X } \\[0.5ex]
& \ge & \alpha_A \, ||\iBlock{ x_1 - x_2 }|| _{ X } ^2 »- \beta_A \, ||\iBlock{\xwt{ R }( x_1 , x_2 )-\xwt{ R }( x_2 , x_2 )}|| _{ Y } ||\iBlock{ x_1 - x_2 }|| _{ X } \\[0.5ex]
& \ge & \f{\alpha_A \alpha_B -\beta_A \beta_B }{\alpha_B } \>\> ||\iBlock{ x_1 - x_2 }|| _{ X } ^2,
\end{eqnarray*}
which finishes the proof.
}\endproof
The following lemma is crucial in order to prove the pseudomonotonicity of $ S $.
\begin{lemma}\label{p1:RRcont}{\def\xa#1#2{\pboxr{1cm}{$#1$} \enspace \mapsto \enspace \pboxl{2cm}{$#2$}}
Suppose {\rm(A1)}, $ x_0 \in X $ and that $ X _T $ denotes $ X $ equipped with some topology $T$.
If the mapping $ \xwt{ B }_{ x_0 } : X _T \times Y \rightarrow Y ^*$ is sequentially solutionally continuous in $ x_0 $ and $ y_0^*$,
then $ \xwt{ R }_{ x_0 } : X _T \rightarrow Y $ is continuous in $ x_0 $.
}\end{lemma}
{\em Proof}.
Let $( x_n )_{n\in{{\mathbb N}}}$ be a sequence in $ X $ converging to $ x_0 $ with respect to $ X _T $. Lemma~\ref{p1:LipMon}
provides the estimate
\[
||\iBlock{\xwt{ R }( x_0 , x_n )-\xwt{ R }( x_0 , x_0 )}|| _{ Y }
\enspace \le\enspace \f1{\alpha_B }\, ||\iBlock{\xwt{ B }( x_0 , x_n ,\xwt{ R }( x_0 , x_n ))-\xwt{ B }( x_0 , x_n ,\xwt{ R }( x_0 , x_0 ))}|| _{ Y ^*}
\]
for all $»n\in{{\mathbb N}}$.
Furthermore, $\xwt{ B }( x_0 , x_n ,\xwt{ R }( x_0 , x_n ))= y_0^*=\xwt{ B }( x_0 , x_0 ,\xwt{ R }( x_0 , x_0 ))$ from the definition of $\xwt{ R }$.
Hence, the sequential solutional continuity of $\xwt{ B }$ implies that $\xwt{ R }( x_0 , x_n )$ converges strongly to $\xwt{ R }( x_0 , x_0 )$ in $ Y $.
\endproof
The next theorem provides the semimonotonicity and hence the pseudomonotonicity of $ S $.
\begin{theorem}[{\rm Semimonotone Reduction}]\label{t1:SemReduct}{\def\xa#1#2{\pboxl{6mm}{{\rm #1}}{#2}}
Suppose {\rm(A3)} and $ x_0^*\in X ^*$, and let $»F$ be the mapping $( A , B )»: X \times Y \rightarrow X ^*\times Y ^*$.
Then the operators $ R $ and $ S $ of Definition~\ref{d2:RS} satisfy the following statements:\\[0.5ex]
\xa{i)}{$ »F(»x,»y) »= ( x_0^*, y_0^*) \enspace \enspace \Lral\enspace \enspace »y= R »x$ \enspace and \enspace $ S »x »= x_0^*$, } \\[0.5ex]
\xa{ii)}{$ S : X \rightarrow X ^* $ \enspace is semimonotone with the semimonotone representative $\xwt{ S }: X \times X \rightarrow X ^*$. }\\[0.5ex]
}\end{theorem}
{\em Proof}.
Part i) follows from {\rm(A1)}.
To prove ii) we show that the operator $\xwt{ S }$ satisfies the conditions (S1)--(S4).
The condition (S1) immediately follows from Lemma~\ref{p1:LipMon} combined with condition~{\rm(A3.4)}.
To show (S2) and (S3), let us consider an $ S $--pas $( x_n )_{n\in{{\mathbb N}}}$ which weakly converges
to $»x\in X $. The monotonicity property~(S1) of $\xwt{ S }$ yields
\begin{equation} \label{«cs1}
\iKlammerB{\xwt{ S }(»x, x_n )}{ x_n -»x} \iSub{ X } \>\le\> \iKlammerB{\xwt{ S }( x_n , x_n )}{ x_n -»x} \iSub{ X } »= \iKlammerB{ S x_n }{ x_n -»x} \iSub{ X }.
\end{equation}
Passing to the limit superior on both sides and using the $ S $--pas property of $( x_n )$ shows
\[
\mathop{\overline {\lim}}_{n\mathop{\rightarrow }\infty} \iKlammerB{\xwt{ A }(»x, x_n ,\xwt{ R }(»x, x_n ))}{ x_n -»x} \iSub{ X } »= \mathop{\overline {\lim}}_{n\mathop{\rightarrow }\infty} \iKlammerB{\xwt{ S }(»x, x_n )}{ x_n -»x} \iSub{ X } \>\le\> 0.
\]
The sequential solutional continuity~{\rm(A3.1)} together with Lemma~\ref{p1:RRcont} yields
\begin{equation} \label{«cs2}
\xwt{ R }(»x, x_n ) \mathop{\relbar\joinrel\rightarrow} \xwt{ R }(»x,»x) \enspace \enspace \mbox{in\enspace } Y.
\end{equation}
Thus, we can apply {\rm(A3.2)} in order to obtain
\begin{eqnarray}
& \xwt{ S }(»x, x_n ) »= \xwt{ A }(»x, x_n ,\xwt{ R }(»x, x_n )) \mathop{\relbar\joinrel\rightharpoonup} \xwt{ A }(»x,»x,\xwt{ R }(»x,»x)) »= \xwt{ S }(»x,»x), & \nonumber\\[4pt]
& \lim\limits_{n\mathop{\rightarrow }\infty} \iKlammerB{\xwt{ S }(»x, x_n )}{ x_n -»x} \iSub{ X } »= \lim\limits_{n\mathop{\rightarrow }\infty} \iKlammerB{\xwt{ A }(»x, x_n ,\xwt{ R }(»x, x_n ))}{ x_n -»x} \iSub{ X } »= 0.& \label{«cs3}
\end{eqnarray}
These are the properties (S2) and (S3).
Finally, it is easy to check that the radial continuity {\rm(A3.3)} in combination with the
Lipschitz properties {\rm(A2.2)} and \nix(\ref{«cs2}) imply that the mapping
\[
»x \>\mapsto \> \xwt{ S }(»x,»x_0 ) »= \xwt{ A }(»x,»x_0 ,\xwt{ R }(»x,»x_0 ))
\]
is radially continuous in $»x_0 \in X $. This shows (S4) and therefore completes the proof.
\endproof
\begin{remark}\label{..}
Assume $\alpha_A \alpha_B >\beta_A \beta_B $. Then, by Lemma~\ref{p1:LipMon} the operator $\xwt{ S }$ satisfies a
strong monotonicity condition in the first argument.
Hence, \nix(\ref{«cs1}) can be strengthened to
\[
\iKlammerB{\xwt{ S }(»x, x_n )}{ x_n -»x} \iSub{ X } + c\, ||\iBlock{ x_n -»x}|| _{ X } ^2 \>\le\> \iKlammerB{ S x_n }{ x_n -»x} \iSub{ X }
\]
with $c:=\f{1}{\alpha_B }(\alpha_A \alpha_B -\beta_A \beta_B )>0$.
This together with the $ S $--pas condition on $( x_n )$ and the convergence
$ \iKlammerB{\xwt{ A }(»x_0 , x_n , y_n )}{ x_n -»x} \iSub{ X } \mathop{\rightarrow } 0 $ \>from~{\rm(A3.2)}
shows that $( x_n )$ even converges strongly to $»x$.
Consequently, if $\alpha_A \alpha_B > \beta_A \beta_B $, we can relax {\rm(A3)} by requiring the desired convergence properties
in {\rm(A3.2)} only if $ x_n \mathop{\rightarrow }»x$, $ y_n \mathop{\rightarrow }»y$ and $\mathop{\overline {\lim}}\limits_{n\mathop{\rightarrow }\infty} \iKlammerB{\xwt{ A }(»x, x_n , y_n )}{ x_n -»x} \iSub{ X }\le0.$
\end{remark}
The final proposition of this section ensures the demicontinuity of $ S $.
\begin{proposition}\label{p1:Demi}
Assume {\rm(A2)} and suppose for every $ x_0 \in X $ and $»y\in Y $ that
{\def\xa#1{»x\>\mapsto \>\pboxl{1.3cm}{$#1$}}
\begin{eqnarray*}
\xa{\xwt{ R }( x_0 ,»x)} && \mbox{is continuous,} \\[0.5ex]
\xa{ A (»x,»y)} && \mbox{is demicontinuous.}
\end{eqnarray*}
Then $ S : X \rightarrow X ^*$ is demicontinuous.
\end{proposition}
{\em Proof}.
Assume that $ x_n \mathop{\rightarrow }»x$. Lemma~\ref{p1:LipMon} and the continuity of $ R $ imply
\begin{eqnarray*}
&& \hb
\lim_{n\mathop{\rightarrow }\infty} ||\iBlock{ R x_n - R »x}|| _{ Y } \\[0.5ex]
& \le & \lim_{n\mathop{\rightarrow }\infty} ||\iBlock{\xwt{ R }( x_n , x_n )-\xwt{ R }(»x, x_n )}|| _{ Y } »+ \lim_{n\mathop{\rightarrow }\infty} ||\iBlock{\xwt{ R }(»x, x_n )-\xwt{ R }(»x,»x)}|| _{ Y } \\[0.5ex]
& = & 0.
\end{eqnarray*}
Thus, condition~{\rm(A2.2)} in combination with the demicontinuity of $ A $ yields
\begin{eqnarray*}
&& \hb \lim_{n\mathop{\rightarrow }\infty} \iKlammerB{ S x_n - S »x}{»v} \iSub{ X } \\[0.5ex]
& = & \lim_{n\mathop{\rightarrow }\infty} \iKlammerB{ A ( x_n , R x_n )- A ( x_n , R »x)}{»v} \iSub{ X }
+\lim_{n\mathop{\rightarrow }\infty} \iKlammerB{ A ( x_n , R »x)- A (»x, R »x)}{»v} \iSub{ X } \\[0.5ex]
& = & 0
\end{eqnarray*}
for every $»v\in X $. This finishes the proof.
\endproof
\begin{remark}\label{«csr1}
The continuity assumption on $\xwt{ R }$ is fulfilled, for instance, if {\rm(A3.1)} holds (cf. Lemma~\ref{p1:RRcont}).
Moreover, if $ A $ is continuous in the first argument, then $ S $ is continuous.
\end{remark}
\section{Abstract evolution equations}
\label{s3:}
Before turning to a special application of Theorem~\ref{t1:SemReduct} in the next section,
we present some elements of the framework of Gr\"oger~\cite{konni} for evolution equations
which allows one to include compositions with certain linear operators under the time derivative.
Well-known embedding theorems and results on existence, uniqueness and continuous dependence on
the data also hold within this framework.
For further details and proofs we refer to~\cite{konni,jens,doni}.
Throughout this section we suppose the following.
\begin{assumption}\label{a2:Spaces}
Let $ V $ be a reflexive Banach space such that $ V $ and $ V ^*$ are strictly convex,
$ H $ a Hilbert space and $ K \in L( V ; H )$ be an operator
having dense image $ K ( V )$ in $ H $.
The operator $ E \in L( V ; V ^*)$ is given by $ E := K ^* J_H K $
«($ J_H $ is the duality mapping of~$ H $«).
Moreover, suppose that ${\cal T}=\mathop{]\kern1pt 0 ,T\kern1pt[}$ with $T>0$ and $1<p,p'<\infty$ with
$\f1p+\f1{p'}=1$ and $p\ge2$.
\end{assumption}
\begin{remark}\label{r3:E}
{\bf 1.}\enspace
The operator $ E \in L( V ; V ^*)$ is positive and symmetric
(i.e. $ \iKlammerB{ E »u}{»u} \ge 0,$
$ \iKlammerB{ E »u}{»v} = \iKlammerB{ E »v}{»u} $
$ \forall »u,»v\in V )$.
Conversely, given any positive and symmetric operator $ E \in L( V ; V ^*)$
we can choose $H$ as the completion of the
pre-Hilbert space $ V /\ker E $ with inner product $ \iKlammerA{»u}{»v} := \iKlammerB{ E »u}{»v} $
and $ K »u:=[»u]$ in order to satisfy~Assumption~\ref{a2:Spaces}.\\
{\bf 2.}\enspace
If $ K $ is injective, it is a bijection from $ V $ onto $ K ( V )\subset H $.
Therefore, $ V $ and $ H $ can be regarded as a usual evolution triple by
identifying $ V $ with $ K ( V )$,
equipping $ K ( V )$ with the norm
$ ||\iBlock{x}|| _{ K ( V )} := ||\iBlock{ K ^{-1} x}|| _{ V } $ and
considering the embeddings $ K ( V )\hookrightarrow H \cong H ^*\hookrightarrow ( K ( V ))^*$.
We use this identification of\/ $ V $ with $ K ( V )$ even if
$ V $ is a subset of $ H $ itself, cf.~Section~\ref{s4:}.
\end{remark}
Corresponding to these spaces and operators
we define ${\cal V }:=L^2({\cal T}; V )$ and ${\cal H }:=L^2({\cal T}; H )$ with standard norms
and identify ${\cal V }^*$ with $L^2({\cal T}; V ^*)$
(which we can do since $ V $ is reflexive and therefore possesses the Radon-Nikod\'ym property, cf. \cite{uhl}).
Moreover, we set
$
({\cal E}»u)(t):= E »u(t)$ and $
({\cal K }»u)(t):= K »u(t)
$
in order to obtain ${\cal E}\in L({\cal V };{\cal V }^*)$ and ${\cal K }\in L({\cal V };{\cal H })$.
The space ${\cal W}$ is the space of all $»u\in{\cal V }$ such that ${\cal E}»u\in{\cal V }^*$ possesses a
weak time derivative which again belongs to ${\cal V }^*$:
\begin{eqnarray*}
{\cal W} := \{»u\in{\cal V } \>|\> {\cal E}»u \mbox{ has a weak derivative } ({\cal E}»u)'\in{\cal V }^*\}, \enspace
||\iBlock{»u}|| _{{\cal W}} := ( ||\iBlock{»u}|| _{{\cal V }} ^2 »+ ||\iBlock{({\cal E}»u)'}|| _{{\cal V }^*} ^2)^{1/2}.
\end{eqnarray*}
Furthermore, we define the linear operator ${\cal L }\subset{\cal V }\times{\cal V }^*$ by
\[
D({\cal L }) := {\cal W}\subset{\cal V }, \enspace \enspace
{\cal L }»u:=({\cal E}»u)'\in{\cal V }^*
\]
and ${\cal I}\in L({\cal W};{\cal V })$ as the identity ${\cal I}:=\mathop{\rm Id}\rulenix_{{\cal W}\mathop{\rightarrow }{\cal V }}$
regarded as a mapping from ${\cal W}$ into ${\cal V }$.
For these spaces we obtain the following density result and a formula of integration by parts.
\begin{proposition}\label{lab.prop.fevol.banach}
The space ${\cal W}$ is a reflexive Banach space and
$\{»u|_{{\cal T}} »: »u\in C^\infty_c({{\mathbb R}}; V )\}$ is a dense subspace.
\end{proposition}
\begin{proposition}\label{p3:PartInt}
The operator ${\cal K }$ maps ${\cal W}$ continuously into the space $C(\overline {{\cal T}}; H )$, meaning that every class of
equivalent functions in ${\cal K }({\cal W})\subset L^p({\cal T}; H )$ possesses a representative that is
continuous from ${\cal T}$ into $ H $ with continuous extension onto $\overline {{\cal T}}$.
Furthermore, in this sense the following formulas hold for all $»u,»v\in{\cal W}$ and $ t_1 , t_2 \in\overline {{\cal T}}$:
\begin{eqnarray*}
& \iKlammerA{({\cal K }»u)( t_2 )}{({\cal K }»v)( t_2 )} \iSub{ H } »- \iKlammerA{({\cal K }»u)( t_1 )}{({\cal K }»v)( t_1 )} \iSub{ H } \hskip5cm &\\
&\hskip5cm \>=\> \int_{ t_1 }^{ t_2 } \big[ \iKlammerB{({\cal E}»u)'(t)}{»v(t)} \iSub{ V } »+ \iKlammerB{({\cal E}»v)'(t)}{»u(t)} \iSub{ V } \big] \,dt, &\\[4pt]
& ||\iBlock{({\cal K }»u)( t_2 )}|| _{ H } ^2 »- ||\iBlock{({\cal K }»u)( t_1 )}|| _{ H } ^2
\>=\> 2\int_{ t_1 }^{ t_2 } \iKlammerB{({\cal E}»u)'(»t)}{»u(»t)} \iSub{ V }\,dt. &
\end{eqnarray*}
\end{proposition}
In order to incorporate the treatment of initial data of evolution equations directly into
the operators and the spaces let us consider
\[
\widehat {{\cal W}} «:= {\cal W}\times H , \enspace \enspace \enspace
\widehat {{\cal V }} «:= {\cal V }\times H ,
\]
with the product norm $ ||\iBlock{(x,y)}|| _{X\times Y} :=( ||\iBlock{x}|| _{X} ^2+ ||\iBlock{y}|| _{Y} ^2)^{1/2}$ on $X\times Y$ for
two normed vector spaces $X$ and $Y$.
{\def\xa#1#2#3{&\pboxl{1.3cm}{$#1$}\pboxl{6cm}{$#2$}\pboxl{4cm}{$#3$}&}
For a given $»h\in H $ the (single-valued) operators $\widehat {{\cal L }}\subset \widehat {{\cal V }}\times \widehat {{\cal V }}^*$ and
${\cal L }_{»»h}\subset {\cal V }\times {\cal V }^* $ are defined by
\begin{eqnarray*}
\xa{ D(\widehat {{\cal L }}) }{ :=\{(»u,({\cal K }»u)(0)) \>|\> »u\in{\cal W}\}, }{ \rulenix \widehat {{\cal L }}(»u,»h):=({\cal L }»u, J_H »h), } \\[0.5ex]
\xa{ D({\cal L }_{»»h}) }{ :=\{»u\in{\cal W} »: ({\cal K }»u)(0)=»h\}, }{ {\cal L }_{»»h}:={\cal L }|_{D({\cal L }_{»»h})}. }
\end{eqnarray*}
A fundamental result is the maximal monotonicity of $\widehat {{\cal L }}$.
\begin{proposition}\label{}
The operator $\widehat {{\cal L }}\subset \widehat {{\cal V }}\times \widehat {{\cal V }}^* $ is a linear, maximal monotone operator.
\end{proposition}
\begin{corollary}\label{}
For every $»h\in H $, the operator ${\cal L }_{»»h}\subset {\cal V }\times {\cal V }^* $ is maximal monotone.
\end{corollary}
{\em Proof}.
By \cite[Theorem~32.F]{zeid2b} a monotone mapping $ T \subset X \times X ^*$ on a reflexive Banach space $ X $
with $ X $ and $ X ^*$ being strictly convex
is maximal monotone if and only if $ T + J_{ X }$ is surjective.
Therefore, let an arbitrary $»u^*\in{\cal V }^*$ be given.
Applied to $\widehat {{\cal L }}$, the theorem in question shows the existence of a
$\widehat {»u}=(»u, h_1 )\in\widehat {{\cal V }}$ such that $(\widehat {{\cal L }}+»J_{\widehat {{\cal V }}})\widehat {»u}=(»u^*,2 J_H »h)$.
Since $\widehat {»u}\in D(\widehat {{\cal L }})$, it follows that $ h_1 =({\cal K }»u)(0)$.
Moreover, it is easy to check that $ J_{\widehat {{\cal V }}}(»v,»g)=( J_{{\cal V }}»v, J_H »g)$.
This implies $({\cal L }+»J_{{\cal V }})»u=»u^*$ and $2 J_H ({\cal K }»u)(0)=2 J_H »h$.
Consequently, we conclude that $»u\in D({\cal L }_{»»h})$ and $({\cal L }_{»»h}+»J_{{\cal V }})»u=»u^*$.
\endproof
The following theorem provides conditions that ensure the solvability of evolution inclusions.
\begin{theorem}\label{t2:Exist}
Suppose Assumption~\ref{a2:Spaces} and $(»f, h )\in{\cal V }^*\times H $.
Let ${\cal A}:{\cal V }\rightarrow {\cal V }^*$ be bounded, maximal monotone and
${\cal B}:{\cal V }\rightarrow {\cal V }^*$ be bounded, demicontinuous, coercive with respect to a $ w_0 \in{\cal W}$ such that
${\cal B}$ is pseu\discretionary{-}{}{}domono\discretionary{-}{}{}tone with respect to ${\cal L }_{0}$.
Then there exists a $»u\in{\cal W}$ with
\[
({\cal L }+{\cal A}+{\cal B})»u »= »f,\enspace \enspace \enspace ({\cal K }»u)(0)= h .
\]
\end{theorem}
{\em Proof}.
1. Let us choose a $»w\in D({\cal L }_{- h })$ (note that $D({\cal L }_{- h })\neq\emp$ since ${\cal L }_{- h }$ is maximal monotone).
First, we show that there exists a $»v\in D({\cal L }_{0})$ with
\[
({\cal L }_{0}+ \mathfrak{T}_{»w}{\cal A}+ \mathfrak{T}_{»w}{\cal B})»v »= »f+{\cal L }»w.
\]
Since $\mathfrak{T}_{»w}{\cal A}$ is bounded and maximal monotone, it is pseu\discretionary{-}{}{}domono\discretionary{-}{}{}tone and demicontinuous (cf.~\cite[Lemma~1.3, p.~66]{ggz}).
In particular, $\mathfrak{T}_{»w}{\cal A}$ is pseu\discretionary{-}{}{}domono\discretionary{-}{}{}tone with respect to ${\cal L }_{0}$, and so is $\mathfrak{T}_{»w}{\cal B}$ (cf.~Proposition~\ref{p2:PsmSum}).
Consequently, $\mathfrak{T}_{»w}{\cal A}+\mathfrak{T}_{»w}{\cal B}$ is pseu\discretionary{-}{}{}domono\discretionary{-}{}{}tone with respect to ${\cal L }_{0}$, demicontinuous and
coercive with respect to $ w_0 +»w$.
An existence result by Lions~\cite[Theorem~1.1, p.~316]{lions} guarantees that
there is a $»v\in D({\cal L }_{0})$ with
$({\cal L }_{0}+\mathfrak{T}_{»w}{\cal A}+\mathfrak{T}_{»w}{\cal B})»v=»f+{\cal L }»w$.
2. Setting $»u:=»v-»w$, we obtain $»u\in D({\cal L }_{»»h})$ and ${\cal L }_{»»h}»u = {\cal L }_{0}»v - {\cal L }»w$.
This implies
\[
({\cal L }_{»»h}+{\cal A}+{\cal B})»u «= ({\cal L }_{0}+\mathfrak{T}_{»w}{\cal A}+\mathfrak{T}_{»w}{\cal B})»v - {\cal L }»w «= »f
\]
which completes the proof.
\endproof
\begin{remark}\label{}
The theorem by Lions applied in our proof assumes coercivity of the pseu\discretionary{-}{}{}domono\discretionary{-}{}{}tone operator with
respect to $0$, but it can be generalized to the case of
coercivity with respect to an arbitrary element in ${\cal W}$ without any difficulties
(cf.~also \cite[Theorem~2.6.1]{doni}).
\end{remark}
\section{Application to a model of phase separation}
\label{s4:}
In this section we show how the results of the previous »sections can be applied to prove
the existence of solutions to coupled elliptic-parabolic systems.
In order to demonstrate the ability of these techniques and the generality of Gr\"oger's framework
we consider a parabolic equation of fourth order in space of Cahn--Hilliard type
which is coupled to an elliptic equation modeling a quasi-steady mechanical equilibrium
for each point in time.
The given system is highly nonlinear and both parts are strongly coupled.
This generality imposes a restriction: solving the elliptic part and inserting the solution into the
parabolic part, we use Theorem~\ref{t1:SemReduct} to ensure the pseudomonotonicity of the resulting operator.
Therefore, we require the assumptions of Definition~\ref{a2:Semi} to hold, which means that we have to restrict
the influence both parts of the system may exert on each other.
This is necessary since changes in lower order terms of one part may affect
higher order terms in the other and
the reduced equation has to be monotone in the leading order terms.
Nevertheless, no other existence results for this very general system
seem to be known yet.
Together with initial and boundary conditions, our system reads as follows:
{\def\xa#1#2#3{&\pboxc{0cm}{\pboxc{0.87\hsize}{$#1$}\pboxl{0.04\hsize}{#2}\pboxl{0.09\hsize}{$#3$}}&}
\begin{eqnarray*}
\xa{ \partial_t »u - \mathop{\rm div}( M \nabla(\mu \partial_t »u+»w)) \>=\> 0 }{on}{{\cal T}\times\Omega ,} \\
\xa{ »w \in \partial \varphi (»u) - \mathop{\rm div}( b_1 (»u,\nabla»u, e ))+ b_2 (»u,\nabla»u, e ) }{on}{{\cal T}\times\Omega ,} \\
\xa{ \mathop{\rm div} b_0 (»u,\nabla»u, e ) \>=\> 0,\enspace \enspace \enspace e =\epsilon({\bf u}):=\frac12(D{\bf u}+D{\bf u}^t) }{on}{{\cal T}\times\Omega ,} \\[1.5ex]
\xa{ M \nabla(\mu \partial_t »u+»w)\cdot\vec n \>=\> 0,\enspace \enspace \enspace b_1 (»u,\nabla»u, e )\cdot\vec n\>=\> 0 }{on}{{\cal T}\times\partial\Omega ,} \\
\xa{ b_0 (»u,\nabla»u, e )\vec n \>=\> 0 }{on}{{\cal T}\times{\Gamma_N},} \\
\xa{ {\bf u} \>=\> 0 }{on}{{\cal T}\times{\Gamma_D},} \\
\xa{ »u(0) \>=\> u_0 }{on}{\Omega .}
\end{eqnarray*}
As a consequence of the mass balance and the boundary conditions, the mean value of the
concentration does not change over time.
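This can be seen formally by testing the first equation with the constant function $1$ and using the flux boundary condition:
\[
\frac{d}{dt}\int_\Omega »u \> dx \>=\> \int_\Omega \mathop{\rm div}\big( M \nabla(\mu \partial_t »u+»w)\big) \> dx \>=\> \int_{\partial\Omega } M \nabla(\mu \partial_t »u+»w)\cdot\vec n \> ds \>=\> 0.
\]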
Therefore, after applying a simple shift we can assume $ u_0 $ to have mean value $0$
which then transfers to all $»u(t)$ for $t\in{\cal T}$.
}
For the remainder of this paper we suppose the following.
\begin{assumption}\label{}
The domain $\Omega \subset{{\mathbb R}}^N $ is a nonempty, open, bounded and connected set with
Lipschitz boundary $\partial\Omega $.
The two open subsets ${\Gamma_D},{\Gamma_N}$ of $\partial\Omega $ are disjoint with $\partial\Omega =\overline {{\Gamma_D}}\cup\overline {{\Gamma_N}}$,
${\Gamma_D}\neq\emp$ and $»G:=\Omega \cup{\Gamma_D}$ is regular in the sense of Gr\"oger~\cite{konni1}.
${\cal T}=\mathop{]\kern1pt 0,T\kern1pt[},\enspace T>0$ is a bounded (time) interval and $0< m_1 \le m_2 $ and $ q_0 >2$ are
real constants.
\end{assumption}
The following regularity result is due to Gr\"oger~\cite{konni1} and applies to regular sets
$»G$ being the union of a domain $\Omega $ and a part ${\Gamma_D}$ of its boundary $\partial\Omega $,
where the latter serves as the Dirichlet boundary part.
Before stating his result, we introduce the spaces used in the formulation.
\begin{definition}\label{}
Let
$ H :=\{»u\in L^2(\Omega ) »: \int_\Omega u=0 \}$ with the induced $L^2$-inner product and
$ V »:= H^1(\Omega )\cap H$ with the inner product $ \iKlammerA{u}{v} \iSub{ V }:= \iKlammerA{\nabla u}{\nabla v} \iSub{L^2}$.
We define
$ U :=\{{\bf u}\in H^1(\Omega ;{{\mathbb R}}^N )»: {\bf u}|_{\Gamma_D}=0\}$,
where ${\bf u}|_{\Gamma_D}$ is understood in the sense of traces of \,${\bf u}$ on ${\Gamma_D}\subset\partial\Omega $
and with the induced norm of $H^1(\Omega ;{{\mathbb R}}^N )$.
The mapping $\epsilon$ is given by
\[
\epsilon: U \rightarrow L^2(\Omega ;{{\mathbb R}}^{N\times N}),\enspace \enspace \epsilon({\bf u}):=\displaystyle\f12\,(D{\bf u}+D{\bf u}^t).
\]
We equip the range space $ Y :=\epsilon( U )$ with the norm of $L^2(\Omega ;{{\mathbb R}}^{N\times N})$.
Moreover, for $1\le p\le\infty$, the space $ W^{1,p}_0(»G;{{\mathbb R}}^M ) $ is defined to be the closure~of
\[
\{{\bf u}|_{\Int»G} »: {\bf u}\in C^\infty_c({{\mathbb R}}^N ;{{\mathbb R}}^M ),\enspace \mathop{\rm supp}{\bf u}\cap(\overline {»G}\sm»G) = \emp \}
\]
in the usual Sobolev spaces $W^{1,p}(\Int»G;{{\mathbb R}}^M )$ and
$ W^{-1,p}(»G;{{\mathbb R}}^M ) :=(W^{1,p'}_0(»G;{{\mathbb R}}^M ))^*$ for the conjugated exponent $p'$ given by
$\frac1p+\frac1{p'}=1$ (and using the convention $\frac1\infty:=0$).
\end{definition}
Let us agree to simply write $ ||\iBlock{x}|| _{ H } $ for $ ||\iBlock{x}|| _{L^2} $ even if $»x\not\in H $.
Now we are in the position to state Gr\"oger's regularity result adapted to our situation.
\begin{proposition}\label{p4:Konni}
Let $»b:»G\times{{\mathbb R}}^{N\times N}\rightarrow {{\mathbb R}}^{N\times N}$ such that
\begin{eqnarray*}
& »x\mapsto »b(»x,0) \,\in\, L^{ q_0 }(»G;{{\mathbb R}}^{N\times N}), \enspace \enspace \enspace
»x\mapsto »b(»x,»v) \enspace \mbox{is measurable}, & \\
& \bbig(»b(»x,»v)-»b(»x,»w)\bbig)\cdot\bbig(»v-»w\bbig)
\>\ge\> m_1 |»v-»w|^2, \enspace \enspace
|»b(»x,»v)-»b(»x,»w)| \>\le\> m_2 |»v-»w|. &
\end{eqnarray*}
Corresponding to $»b$, the operator $ A : U \rightarrow U ^*$ is given by
\[
\iKlammerB{ A {\bf u}}{{\bf v}} \iSub{ U }:=\int_\Omega »b(»x,\epsilon({\bf u})):\epsilon({\bf v})\>dx.
\]
Then there exists a constant $ q_1 $ depending only on $»G, m_1 $ and $ m_2 $ with
$2< q_1 \le q_0 $ such that $ A $ maps the subspace $ W^{1, q_1 }_0(»G;{{\mathbb R}}^N ) $ of $ U $ onto the space $ W^{-1, q_1 }(»G;{{\mathbb R}}^N ) $.
\end{proposition}
\begin{remark}\label{}
Note that $ U =W^{1,2}_0(»G;{{\mathbb R}}^{N})$.
The original result of Gr\"oger~\cite{konni1} is given for scalar functions
under conditions analog to those given above.
To this end, he shows that the duality mapping of $W^{1,2}_0(»G;{{\mathbb R}})$ maps the
subspace $W^{1,p}_0(»G;{{\mathbb R}})$ onto $W^{-1,p}(»G;{{\mathbb R}})$ for some $p>2$ and
then transfers this property to nonlinear operators.
We further note that all arguments of Gr\"oger can be transferred to the vector-valued case
where the norm $ ||\iBlock{\>.\> }|| _{ U } $ of $ U $ is replaced by the
equivalent norm $ ||\iBlock{\epsilon(\>.\> )}|| _{ Y } $ (due to Korn's inequality).
\end{remark}
Throughout this section we further assume the following.
\begin{assumption}\label{}
Let $ C_P :=\sup\{ ||\iBlock{»u}|| _{ H } : »u\in V , \> ||\iBlock{»u}|| _{ V } \le1\}$ and
$ q_1 $ with $2< q_1 \le q_0 $ be given as in Proposition~\ref{p4:Konni} for $»G=\Omega \cup{\Gamma_D}$.
Furthermore, the constants $ q_2 $ and $ q_3 $ with $2\le q_2 , q_3 \le\infty$ are such that $ V $ \!is
continuously embedded
into $L^{ q_2 }(\Omega )$ and compactly embedded into $L^{ q_3 }(\Omega )$.
Finally, $ q_4 :=\f{ q_3 }2 (1-\f2{ q_1 })$.
\end{assumption}
\begin{remark}\label{..}
$ C_P $ is the operator norm of the identity as an operator from $ V $ into $ H $,
which is finite due to Poincar\'e's inequality.
Moreover, the Sobolev embedding theorem shows that we can choose
$ q_2 \ge 2$ arbitrarily in case of $N=1,2$ and
$ q_2 =\f{2N}{N-2}$ if $N\ge3$ together with any
$ q_3 $ such that $2\le q_3 < q_2 $.
\end{remark}
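For example, in the case $N=3$ one may take $ q_2 =6$ and any $ q_3 $ with $2\le q_3 <6$; since $ q_1 >2$, the exponent $ q_4 =\f{ q_3 }2 (1-\f2{ q_1 })$ appearing in {\rm(H3a)} below is then positive.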
Next, we consider conditions on the functions $ b_0 , b_1 , b_2 $ and $\varphi $.
\begin{definition}\label{}
\def\xa#1#2{\rucky{1cm}{\hfil{\rm #1}\hfil}{#2}}
\def\xb{\vrule width0pt\hskip1cm}
\def\ya#1{\iWidth{\iUserA}{#1}#1\global\iUserA=\iUserA}
\def\yb#1{\pboxl{\iUserA}{$#1$}}
\def\yc#1{\pboxl{3.8cm}{$#1$}}
\parskip0pt
Within the following conditions all inequalities are assumed to hold for all
$»t\in T ,»x\in\Omega ,»u, u_1 , u_2 \in{{\mathbb R}},»p, p_1 , p_2 \in{{\mathbb R}}^N , e , e_1 , e_2 \in{{\mathbb R}}^{N\times N}:$\vskip4pt
{\rucky{13mm}{{\rm(H0)}\hfil}{$ M \in{{\mathbb R}}^{N\times N}$ is symmetric and positive-definite and $\mu \ge0$.
If $\mu >0$, we set $\mu_0 :=0$, otherwise $\mu_0 :=1$.}}
\vskip0.8em
{\rucky{13mm}{{\rm(H1)}\hfil}{$\varphi :{{\mathbb R}}\rightarrow \overline {{{\mathbb R}}}$ is a convex, lower-\discretionary{}{}{}semi\discretionary{-}{}{}con\discretionary{-}{}{}tin\discretionary{-}{}{}u\discretionary{-}{}{}ous, proper functional.}}
\vskip.3em
{\rucky{13mm}{{\rm(H1a)}\hfil}{$\varphi \in C^1({{\mathbb R}})$ is convex and $\varphi (r)\le C(r^2+1)$ for all $r\in{{\mathbb R}}$ for some $C>0$.}}
\vskip0.8em
{\rucky{13mm}{{\rm(H2)}\hfil}{$ b_1 : T \times \Omega \times {{\mathbb R}}\times {{\mathbb R}}^N \times {{\mathbb R}}^{N\times N}\rightarrow {{\mathbb R}}^N $ \>is a Carath\'eodory function with \\[2pt]
\xb $\ya{ ( b_1 (»t,»x,»u, p_1 , e )- b_1 (»t,»x,»u, p_2 , e ))\cdot( p_1 - p_2 ) } \enspace \ge\enspace \alpha_{ b_1 ,p}| p_1 - p_2 |^2$, \\[2pt]
\xb $\yb{ | b_1 (»t,»x,»u,»p, e_1 )- b_1 (»t,»x,»u,»p, e_2 )| } \enspace \le\enspace \beta_{ b_1 ,e}| e_1 - e_2 |$, \\[2pt]
\xb $\yc{ | b_1 (»t,»x,»u,»p, e )|^2 } \enspace \le\enspace g (»t,»x) + C_{ b_1 ,u}|»u|^2+ C |»p|^2+ C_{ b_1 ,e}| e |^2$ \\[4pt]
for some constants $\alpha_{ b_1 ,p}>0,\> \beta_{ b_1 ,e}, C , C_{ b_1 ,u}, C_{ b_1 ,e}\ge0$ and $ g \in {\cal L}^1( {\cal T}\times\Omega )$. \\[2pt]
$ b_2 : T \times \Omega \times {{\mathbb R}}\times {{\mathbb R}}^N \times {{\mathbb R}}^{N\times N}\rightarrow {{\mathbb R}}$ \>is a Carath\'eodory function with \\[2pt]
\xb $\yb{ | b_2 (»t,»x,»u, p_1 , e )- b_2 (»t,»x,»u, p_2 , e )| } \enspace \le\enspace \beta_{ b_2 ,p}| p_1 - p_2 |$,\\[2pt]
\xb $\yb{ | b_2 (»t,»x,»u,»p, e_1 )- b_2 (»t,»x,»u,»p, e_2 )| } \enspace \le\enspace \beta_{ b_2 ,e}| e_1 - e_2 |$,\\[4pt]
\xb $\yc{ | b_2 (»t,»x,»u,»p, e )|^2 } \enspace \le\enspace g (»t,»x) + C_{ b_2 ,u}|»u|^2+ C |»p|^2+ C_{ b_2 ,e}| e |^2$\\[2pt]
for some constants $\beta_{ b_2 ,p},\beta_{ b_2 ,e}, C , C_{ b_2 ,u}, C_{ b_2 ,e}\ge0$ and $ g \in {\cal L}^1( {\cal T}\times\Omega )$.}}
\vskip0.8em
{\rucky{13mm}{{\rm(H3)}\hfil}{$ b_0 : T \times \Omega \times {{\mathbb R}}\times {{\mathbb R}}^N \times {{\mathbb R}}^{N\times N}\rightarrow {{\mathbb R}}^{N\times N}$ \>is a Carath\'eodory function~with\\[2pt]
\xb $\yb{ ( b_0 (»t,»x,»u,»p, e_1 )- b_0 (»t,»x,»u,»p, e_2 )):( e_1 - e_2 ) } \enspace \ge\enspace \alpha_{ b_0 ,e}| e_1 - e_2 |^2$,\\[2pt]
\xb $\yb{ | b_0 (»t,»x,»u,»p, e_1 )- b_0 (»t,»x,»u,»p, e_2 )| } \enspace \le\enspace \beta_{ b_0 ,e}| e_1 - e_2 |$,\\[2pt]
\xb $\yb{ | b_0 (»t,»x,»u, p_1 , e )- b_0 (»t,»x,»u, p_2 , e )| } \enspace \le\enspace \beta_{ b_0 ,p}| p_1 - p_2 |$,\\[2pt]
\xb $\yc{ | b_0 (»t,»x,»u,»p, e )|^2 } \enspace \le\enspace g (»t,»x) + C_{ b_0 ,u}|»u|^2+ C |»p|^2+ C_{ b_0 ,e}| e |^2$\\[2pt]
for some constants $\alpha_{ b_0 ,e}>0,\> \beta_{ b_0 ,e},\beta_{ b_0 ,p}, C , C_{ b_0 ,u}, C_{ b_0 ,e}\ge0$ and $ g \in {\cal L}^1( {\cal T}\times\Omega )$.\\
Moreover, the matrix $ b_0 (»t,»x,»u,»p, e )$ is symmetric and continuous in $»t$ uniformly in~$(»x,»u,»p, e )$.}}
\vskip.3em
{\rucky{13mm}{{\rm(H3a)}\hfil}{Condition {\rm(H3)} is satisfied and furthermore it holds\\[2pt]
\xb $\pboxl{62mm}{$ | b_0 (»t,»x, u_1 ,»p, e )- b_0 (»t,»x, u_2 ,»p, e )| $} \enspace \le\enspace \gamma_{ b_0 ,u}\,(| e |+1)\,| u_1 - u_2 |^{ q_4 }$,\\[4pt]
\xb $\pboxl{50mm}{$ | b_0 (»t,»x,»u,»p, e )|^{ q_0 } $} \enspace \le\enspace g (»t,»x) + C (|»u|^{ q_0 }+|»p|^2+| e |^{ q_0 })$,\\[4pt]
for some $\gamma_{ b_0 ,u}, C \ge0$ and $g\in{\cal L}^1( {\cal T}\times\Omega )$.
}}
\vskip0.8em
{\rucky{13mm}{{\rm(H4)}\hfil}{$\alpha_{ b_1 ,p}- C_P \beta_{ b_2 ,p}>0$. Furthermore, we have \,$ m_1 \le \alpha_{ b_0 ,e} \le \beta_{ b_0 ,e} \le m_2 $.}}
\vskip.3em
{\rucky{13mm}{{\rm(H4a)}\hfil}{There exists a constant $ c_a >0$ such that with $\varphi _a $ defined by\\[2pt]
$\varphi _a :=\bigg[ ( c_a +1) \bigg( C_{ b_1 ,u}+ \f1{\alpha_{ b_0 ,e}} C_P C_{ b_1 ,e} C_{ b_0 ,u}\bigg) »+ (1+\f1{ c_a }) C_P \bigg( C_{ b_2 ,u}+\f1{\alpha_{ b_0 ,e}} C_P C_{ b_2 ,e} C_{ b_0 ,u}\bigg) \bigg]^{1/2}$ \\[2pt]
it holds: \enspace \enspace \enspace \enspace $(\alpha_{ b_1 ,p} - C_P \,\beta_{ b_2 ,p})\alpha_{ b_0 ,e}-(\beta_{ b_1 ,e}+ C_P \,\beta_{ b_2 ,e})\beta_{ b_0 ,p} - \varphi _a «>0$.}}
\vskip0.8em
{\rucky{13mm}{{\rm(H5)}\hfil}{$ u_0 \in H^1(\Omega )$ with $\int_\Omega u_0 \,dx=0$ and \,$\varphi \circ u_0 \in L^1(\Omega )$.}}
\vskip15pt
{\rucky{13mm}{{\rm(H)}\hfil}{Conditions {\rm(H0)}, {\rm(H1)}, {\rm(H2)}, {\rm(H3)}, {\rm(H4)} and {\rm(H5)} are satisfied.}}
{\rucky{13mm}{{\rm(Ha)}\hfil}{Conditions {\rm(H0)}, {\rm(H1a)}, {\rm(H2)}, {\rm(H3a)}, {\rm(H4a)} and {\rm(H5)} are satisfied.}}
\vskip.3em
}\end{definition}
Here, by Carath\'eodory function we mean a function that is measurable as a function of $(t,x)$ and
continuous in the other arguments.
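For orientation: the quadratic potential $\varphi (r)=\f{c}2\,r^2$ with $c\ge0$ satisfies {\rm(H1a)}, whereas the double obstacle potential, $\varphi (r)=0$ for $r\in[-1,1]$ and $\varphi (r)=+\infty$ otherwise, is convex, lower semicontinuous and proper and hence covered by {\rm(H1)}, but not by {\rm(H1a)}.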
\begin{remark}\label{}
{\bf 1.}\enspace In condition {\rm(H)} we collect Lipschitz and growth conditions that are needed in order
to define the operators involved in our weak formulation of problem~{\rm(P)} and
to apply Theorem~\ref{t1:SemReduct} later on,
whereas under {\rm(Ha)} we are able to show that
the weak formulation indeed possesses a solution.\\
{\bf 2.}\enspace Under {\rm(H)} the operator $\partial \varphi $ in general is multi-valued.
The differentiability assumption in~{\rm(H1a)} could be omitted, leading to
multi-valued pseu\discretionary{-}{}{}domono\discretionary{-}{}{}tone operators later on that can be handled by generalizing
the results of the preceding »sections.
Nevertheless, for the sake of simplicity, we restrict our discussion to
single-valued pseu\discretionary{-}{}{}domono\discretionary{-}{}{}tone operators here
and set $ m_1 :=\alpha_{ b_0 ,e}$ and $ m_2 :=\beta_{ b_0 ,e}$ if~{\rm(H3a)} is satisfied.
Note that $ q_1 $ and hence $ q_4 $ depend on $ m_1 $ and $ m_2 $.
(Therefore we fix all these constants in advance.)\\
{\bf 3.}\enspace The mappings $ b_1 $ and $ b_2 $ together as well as $ b_0 $ give rise to
operators $\widetilde A$ and $\widetilde B$, respectively, which correspond to the operators
given in Section~\ref{s2:}.
Condition~{\rm(H4)} is the strong monotonicity of $\widetilde B$ and {\rm(H4a)} is a tightening of~(A3.4).\end{remark}
In order to define an appropriate weak solution to problem~{\rm(P)} in the framework of Section~\ref{s3:}
consider the following operators.
\begin{definition}\label{}
We define $»F\in L ( V ; V ^*),\enspace \iKlammerB{»F»u}{»v} \iSub{ V } »:= \iKlammerA{ M \nabla»u}{\nabla»v} _{L^2(\Omega ;{{\mathbb R}}^N )}$ for $»u,»v\in V $,
\[
I_H := {\rm Id}_{ V \rightarrow H } ,\enspace \enspace
»I := I_H ^* J_H I_H \in L ( V ; V ^*) ,\enspace \enspace
E _1 := \mu {\rm Id}_{ H } + I_H »F^{-1} I_H ^* J_H \>\in\> L ( H ; H ) .
\]
Let $ E _2 \in L( H )$ be the (positive, symmetric) root of $ E _1 $ and
\[
K := E _2 I_H \>\in\> L ( V ; H ) ,\enspace \enspace \enspace
E := K ^* J_H K \>\in\> L ( V ; V ^*) .
\]
Corresponding to these spaces and operators let ${\cal V },{\cal H },{\cal W},{\cal E},{\cal K },{\cal L }$ and ${\cal L }_{»»h}$
be given as in Section~\ref{s3:} and define ${\cal U}:=L^2({\cal T}; U )$ and ${\cal Y}:=L^2({\cal T}; Y )$.
\end{definition}
\begin{remark}\label{r4:E}
The operator $»F$ corresponds to the mapping $-\mathop{\rm div}( M \nabla\>.\> )$ with natural boundary conditions
and is positive-definite and symmetric.
These properties of $»F$ also transfer to $ E _1 $ and $ E $ and it holds
\[
E \>=\> I_H ^* E _2 ^* J_H E _2 I_H \>=\> \mu »I + »I»F^{-1}»I.
\]
Note that $ K $ has dense range.
Indeed, in the case of $\mu =0$ the operator $ E _1 $ is the composition of operators with dense range.
If $\mu >0$ then $ E _1 $ is even surjective since it is monotone, continuous and coercive.
Therefore, also $ E _2 $ and $ K $ have dense range in $ H $.
\end{remark}
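A quick computation supports the coercivity used in Remark~\ref{r4:E}: for $\mu >0$ and $»h\in H $ we have
\[
\iKlammerA{ E _1 »h}{»h} \iSub{ H } \>=\> \mu\, ||\iBlock{»h}|| _{ H } ^2 »+ \iKlammerB{ I_H ^* J_H »h}{»F^{-1} I_H ^* J_H »h} \iSub{ V } \>\ge\> \mu\, ||\iBlock{»h}|| _{ H } ^2,
\]
since $»F$ and hence $»F^{-1}$ is positive.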
As already done, we identify the dual of the space $L^2({\cal T};X)$ for a reflexive Banach space $ X $
with the space $L^2({\cal T};X^*)$.
Now, we introduce operators related to the functions $ b_0 , b_1 , b_2 $ and $\varphi $.
The Carath\'eodory property and the growth conditions ensure that these operators are indeed mappings between the given spaces.
\begin{definition}\label{..}
Suppose that conditions {\rm(H1)}, {\rm(H2)} and {\rm(H3)} hold. We set
{\def\xa#1#2#3#4{&\pboxl{6mm}{$#1$} : \pboxl{38mm}{$#2$}\enspace \enspace \enspace \pboxl{9.5cm}{\pboxl{3.9cm}{$#3$} «:= \pboxl{5.0cm}{$#4$}}&}
\def\xb#1#2#3{ &\pboxl{6mm}{$#1$} : \pboxl{38mm}{$#2$}\enspace \enspace \enspace \pboxl{9.5cm}{\pboxl{3.9cm}{$#3$} \pboxc{11mm}{} \pboxl{5.0cm}{}}&}
\begin{eqnarray*}
\xa{ \xhh{\widetilde B^Y}{B} }{ {\cal T}\times V \times V \times Y \rightarrow Y ^*, }{ \iKlammerB{\xhh{\widetilde B^Y}{B}(»t, u_1 , u_2 , e )}{ e _1 } \iSub{ Y } }{ \int_\Omega b_0 (t,»x, u_2 ,\nabla u_1 , e ): e _1 \> dx, } \\[0.5ex]
\xa{ \xhh{\widetilde B^1}{B} }{ {\cal T}\times V \times V \times Y \rightarrow V ^*, }{ \iKlammerB{\xhh{\widetilde B^1}{B}(»t, u_1 , u_2 , e )}{»v} \iSub{ V } }{ \int_\Omega b_1 (t,»x, u_2 ,\nabla u_1 , e )\cdot\nabla»v \> dx, } \\[0.5ex]
\xa{ \xhh{\widetilde B^2}{B} }{ {\cal T}\times V \times V \times Y \rightarrow V ^*, }{ \iKlammerB{\xhh{\widetilde B^2}{B}(»t, u_1 , u_2 , e )}{»v} \iSub{ V } }{ \int_\Omega b_2 (t,»x, u_2 ,\nabla u_1 , e )\,»v \> dx, } \\[0.5ex]
\xb{ \xhh{\widetilde B^X}{B} }{ {\cal T}\times V \times V \times Y \rightarrow V ^*, }{ \xhh{\widetilde B^X}{B}(»t, u_1 , u_2 , e ) «:= \xhh{\widetilde B^1}{B}(»t, u_1 , u_2 , e )+\xhh{\widetilde B^2}{B}(»t, u_1 , u_2 , e ), }
\end{eqnarray*}
together with
{\def\xa#1#2#3#4{&\pboxl{5mm}{$#1$} : \pboxl{3.2cm}{$#2$}\enspace \enspace \enspace \enspace \pboxl{1.9cm}{$#3$} «:= \pboxl{2.3cm}{$#4$}&}
\begin{eqnarray*}
\xa{ \xhh{B^X}{B} }{ {\cal T}\times V \times Y \rightarrow V ^*, }{ \xhh{B^X}{B}(»t,»u, e ) }{ \xhh{\widetilde B^X}{B}(»t,»u,»u, e ), } \\[0.5ex]
\xa{ \xhh{B^Y}{B} }{ {\cal T}\times V \times Y \rightarrow Y ^*, }{ \xhh{B^Y}{B}(»t,»u, e ) }{ \xhh{\widetilde B^Y}{B}(»t,»u,»u, e ). }
\end{eqnarray*}
Moreover, the operators $\xhh{B^X}{B}$ and $\xhh{B^Y}{B}$ will be extended by
{\def\xa#1#2#3#4{&\pboxl{5mm}{$#1$} : \pboxl{2.35cm}{$#2$}\enspace \enspace \enspace \enspace \pboxl{2.7cm}{$#3$} «:= \pboxl{4.5cm}{$#4$}&}
\begin{eqnarray*}
\xa{ \xhh{{\cal B}^X}{B} }{ {\cal V }\times{\cal Y}\rightarrow {\cal V }^*, }{ \iKlammerB{\xhh{{\cal B}^X}{B}(»u, e )}{»v} \iSub{{\cal V }} }{ \int_{\cal T} \iKlammerB{\xhh{B^X}{B}(t,»u, e )}{»v} \iSub{ V } \> dt, } \\[0.5ex]
\xa{ \xhh{{\cal B}^Y}{B} }{ {\cal V }\times{\cal Y}\rightarrow {\cal Y}^*, }{ \iKlammerB{\xhh{{\cal B}^Y}{B}(»u, e )}{ e _1 } \iSub{{\cal Y}} }{ \int_{\cal T} \iKlammerB{\xhh{B^Y}{B}(t,»u, e )}{ e _1 } \iSub{ Y } \> dt. }
\end{eqnarray*}
to operators on ${\cal V }\times{\cal Y}$.
Note that again the dependence of \>$»u,\nabla»u,»v, e $ and $ e _1 $ on $»x\in\Omega $ and $»t\in T $
was suppressed in this notation.
{\def\xa#1#2#3{\pboxc{35mm}{$#1$}\pboxl{10mm}{$#2$}:=\enspace \pboxl{6cm}{$#3$}}
Moreover, we define the functionals
\begin{eqnarray*}
\xa{ Q : V \rightarrow \overline {{{\mathbb R}}}, }{ Q (»u) }{
\begin{cases}
\textstyle\int\limits_\Omega { \varphi \circ»u }\enspace \enspace & \text{if $\varphi \circ»u\in L^1(\Omega )$},\\
+\infty & \text{otherwise.}
\end{cases} }\\[4pt]
\xa{ {\cal Q}:{\cal V }\rightarrow \overline {{{\mathbb R}}}, }{ {\cal Q}(»u) }{
\begin{cases}
\textstyle\int\limits_{\cal T}{ Q \circ»u }\enspace \enspace & \text{if $ Q \circ»u\in L^1({\cal T})$},\\
+\infty & \text{otherwise.}
\end{cases} }
\end{eqnarray*}
and the operator ${\cal A} »:= \partial{\cal Q} \>\subset\> {\cal V }\times{\cal V }^*$.
}
\end{definition}
Now we are in the position to introduce our concept of weak solutions for problem~{\rm(P)}.
\begin{definition}[{\rm Weak formulation}]\label{d4:Weak}
A tuple $(»u,{\bf u})\in{\cal W}\times{\cal U}$ is called a weak solution to problem~{\rm(P)} if
for $ e :=\epsilon({\bf u})\in{\cal Y}$ it holds
\[
{\cal L }»u+{\cal A}»u+\xhh{{\cal B}^X}{B}(»u, e ) \>\ni\> 0,\enspace \enspace \enspace \xhh{{\cal B}^Y}{B}(»u, e )»=0,\enspace \enspace \enspace ({\cal K }»u)(0)»= K u_0 .
\]
\end{definition}
\begin{remark}\label{..}
{\bf 1.}\enspace
By virtue of Proposition~\ref{p3:PartInt}, the images of functions of ${\cal W}$ under the mapping
${\cal K }$ can be regarded as elements of $C(\overline {{\cal T}}; H )$.
Therefore, $({\cal K }»u)(0)\in H $ is well defined and the condition $({\cal K }»u)(0) »= K u_0 $ is meaningful.\\
{\bf 2.}\enspace
It is not hard to show that if $»u,»w$ and ${\bf u}$ are sufficiently smooth in the sense of
Sobolev spaces, they are strong or even classical solutions to problem~{\rm(P)}.\\
{\bf 3.}\enspace
In our weak formulation of problem~{\rm(P)} we only require ${\cal E}»u$ to have generalized derivatives
within ${\cal V }^*$, not $»I»u$ itself.
This relaxation of the regularity requirements together with the linearity
of $»u'-\mathop{\rm div}( M \nabla»w)=0$ allow us to treat problem~{\rm(P)} with the techniques of Section~\ref{s3:}.
Note that ${\cal A}»u+\xhh{{\cal B}^X}{B}(»u, e )$ only contributes space derivatives up to second order.
The remaining ones are 'hidden' in the operator $ E $.
Roughly speaking, the chemical potential $»w$ only attains values in $ V ^*$,
but values in $ V $ are needed in order to use the standard weak formulation of
the diffusion equation $\partial_t »u »- \mathop{\rm div}( M \nabla(\mu \partial_t »u+»w))=0$.
Therefore, we apply the operator $»I»F^{-1}$ to this equation, eliminate $»w$ and
use the resulting equation as a new weak formulation.\end{remark}
In order to show that problem~{\rm(P)} possesses a weak solution we show
firstly that for $»t\in{\cal T}$ the equation $\xhh{B^Y}{B}(»t,»u, e )=0$ has a unique solution $ e = e (»u)\in Y $
for every $»u\in V $
and secondly that the mapping $»u\mapsto \xhh{B^X}{B}(»u, e (»u))$ is pseu\discretionary{-}{}{}domono\discretionary{-}{}{}tone.
Consequently, Theorem~\ref{t2:Exist} will guarantee the existence of weak solutions.
\begin{lemma}\label{lab.lemma.red.lip}{\def\xa#1#2#3{\pboxl{8cm}{$#1$}\pboxc{1cm}{$#2$}\pboxl{5.1cm}{$#3$}}
\def\xb#1#2#3{\pboxl{8cm}{$#1$}\pboxc{1cm}{$#2$}\pboxl{5.1cm}{$#3$}}
Let {\rm(H2)} and {\rm(H3)} be satisfied. Then it follows that
\begin{eqnarray*}
\xa{ \iKlammerB{\xhh{\widetilde B^X_t}{B}( u_1 ,»u, e )-\xhh{\widetilde B^X_t}{B}( u_2 ,»u, e )}{ u_1 - u_2 } \iSub{ V } }{ \ge }{ (\alpha_{ b_1 ,p}- C_P \beta_{ b_2 ,p}) ||\iBlock{ u_1 - u_2 }|| _{ V } ^2, }\\[0.5ex]
\xa{ \iKlammerB{\xhh{\widetilde B^Y_t}{B}( u_1 , u_2 , e _1 )-\xhh{\widetilde B^Y_t}{B}( u_1 , u_2 , e _2 )}{ e _1 - e _2 } \iSub{ Y } }{\ge }{ \alpha_{ b_0 ,e} ||\iBlock{ e _1 - e _2 }|| _{ Y } ^2, }\\[7pt]
\xb{ ||\iBlock{\xhh{\widetilde B^X_t}{B}( u_1 , u_2 , e _1 )-\xhh{\widetilde B^X_t}{B}( u_1 , u_2 , e _2 )}|| _{ V ^*} }{ \le }{ (\beta_{ b_1 ,e}+ C_P \beta_{ b_2 ,e}) ||\iBlock{ e _1 - e _2 }|| _{ Y } , }\\[0.5ex]
\xb{ ||\iBlock{\xhh{\widetilde B^Y_t}{B}( u_1 , u_2 , e _1 )-\xhh{\widetilde B^Y_t}{B}( u_1 , u_2 , e _2 )}|| _{ Y ^*} }{ \le }{ \beta_{ b_0 ,e} ||\iBlock{ e _1 - e _2 }|| _{ Y } , }\\[0.5ex]
\xb{ ||\iBlock{\xhh{\widetilde B^Y_t}{B}( u_1 ,»u, e )-\xhh{\widetilde B^Y_t}{B}( u_2 ,»u, e )}|| _{ Y ^*} }{ \le }{ \beta_{ b_0 ,p} ||\iBlock{ u_1 - u_2 }|| _{ V } }
\end{eqnarray*}
for all $»t\in T ,»u, u_1 , u_2 \in V $ \!and $ e , e _1 , e _2 \in Y $.
In case of {\rm(H3a)} we also have
\begin{eqnarray*}
\xb{ ||\iBlock{\xhh{\widetilde B^Y_t}{B}(»u, u_1 , e )-\xhh{\widetilde B^Y_t}{B}(»u, u_2 , e )}|| _{ Y ^*} }{ \le }{ C_P \beta_{ b_0 ,u} ||\iBlock{ u_1 - u_2 }|| _{ V } }
\end{eqnarray*}
}\end{lemma}
{\em Proof}.
Exemplarily, we show the strong monotonicity of $\widetilde B^X_t$ in the first argument and the Lipschitz continuity in the last component.
The other inequalities can be proven similarly.
To this end, suppose that $u,u_1,u_2\in V$ and $e\in Y$.
Due to the definition of $C_P$ we have
\[
\|u_1-u_2\|_H \le C_P\,\|\nabla(u_1-u_2)\|_H = C_P\,\|u_1-u_2\|_V.
\]
The Cauchy--Schwarz inequality and {\rm(H2)} yield
\begin{eqnarray*}
&&\hskip-2cm \langle \widetilde B^X_t(u_1,u,e)-\widetilde B^X_t(u_2,u,e),\,u_1-u_2\rangle_V\\[0.5ex]
&=& \int_\Omega \big(b_1(t,x,u,\nabla u_1,e)-b_1(t,x,u,\nabla u_2,e)\big)\cdot\nabla(u_1-u_2)\,dx\\
&& +\int_\Omega \big(b_2(t,x,u,\nabla u_1,e)-b_2(t,x,u,\nabla u_2,e)\big)(u_1-u_2)\,dx\\[0.5ex]
&\ge& \alpha_{b_1,p}\,\|\nabla(u_1-u_2)\|_H^2 - \beta_{b_2,p}\,\|\nabla(u_1-u_2)\|_H\,\|u_1-u_2\|_H\\[0.5ex]
&\ge& (\alpha_{b_1,p}-C_P\,\beta_{b_2,p})\,\|u_1-u_2\|_V^2.
\end{eqnarray*}
In order to show the Lipschitz continuity of $\widetilde B^X_t$ in the last argument we estimate
\begin{eqnarray*}
&&\hskip-2cm \langle \widetilde B^X_t(u_1,u_2,e_1)-\widetilde B^X_t(u_1,u_2,e_2),\,u\rangle_V\\[0.5ex]
&=& \int_\Omega \big(b_1(t,x,u_2,\nabla u_1,e_1)-b_1(t,x,u_2,\nabla u_1,e_2)\big)\cdot\nabla u\,dx\\
&& +\int_\Omega \big(b_2(t,x,u_2,\nabla u_1,e_1)-b_2(t,x,u_2,\nabla u_1,e_2)\big)\,u\,dx\\[0.5ex]
&\le& \beta_{b_1,e}\,\|e_1-e_2\|_Y\,\|\nabla u\|_H + \beta_{b_2,e}\,\|e_1-e_2\|_Y\,\|u\|_H\\[0.5ex]
&\le& (\beta_{b_1,e}+C_P\,\beta_{b_2,e})\,\|e_1-e_2\|_Y\,\|u\|_V
\end{eqnarray*}
for arbitrary $u,u_1,u_2\in V$ and $e_1,e_2\in Y$. Since
\[
\|\widetilde B^X_t(u_1,u_2,e_1)-\widetilde B^X_t(u_1,u_2,e_2)\|_{V^*}
= \sup_{u\in V,\,\|u\|_V\le1}\,\langle \widetilde B^X_t(u_1,u_2,e_1)-\widetilde B^X_t(u_1,u_2,e_2),\,u\rangle_V,
\]
we obtain the desired inequality.
\endproof
\begin{corollary}\label{c4:Ass}
Suppose {\rm(H)} to be satisfied.
Then $\widetilde B^X_t$ and $\widetilde B^Y_t$ (as $\widetilde A$ and $\widetilde B$) satisfy~{\rm(A1)} and {\rm(A2)} of Section~\ref{s2:}
for every $y_0^*\in Y^*$.
Moreover, the constants can be chosen as
\[
\alpha_A := \alpha_{b_1,p}-C_P\,\beta_{b_2,p},\qquad
\beta_A := \beta_{b_1,e}+C_P\,\beta_{b_2,e},\qquad
\alpha_B := \alpha_{b_0,e},\qquad
\beta_B := \beta_{b_0,p}.
\]
\end{corollary}
As in Section~\ref{s2:}, we introduce the operators $\widetilde R$ and $\widetilde S$, which now also depend on $t\in T$.
\begin{definition}\label{}
Assume {\rm(H)} to be satisfied.
Then for every $t\in T$ we define the operators $\widetilde R_t$ and $\widetilde S_t$ according to Definition~\ref{d2:RS} with $y_0^*:=0$ by
\begin{eqnarray*}
\widetilde R:\; T\times V\times V\rightarrow Y, &\quad& \widetilde R(t,u_1,u_2) := (\widetilde B^Y_{t,u_1,u_2})^{-1}(0),\\[0.5ex]
\widetilde S:\; T\times V\times V\rightarrow V^*, &\quad& \widetilde S(t,u_1,u_2) := \widetilde B^X_t(u_1,u_2,\widetilde R_t(u_1,u_2)).
\end{eqnarray*}
Moreover, let $B(t):=S_t: V\times V\rightarrow V^*$ and let ${\cal B}$ be the superposition operator
(Nemytskii operator) of $B$ given by $({\cal B}u)(t):=B(t)u(t)$.
\end{definition}
\begin{lemma}\label{p4:Growth}
Let {\rm(H)} be fulfilled. Then there exist $C>0$ and $h\in L^1(T)$ such that the following statements
hold for all $t\in T$ and $u,u_1,u_2\in V$:
\begin{enumerate}
\item the mappings $t\mapsto R_t u$ and $t\mapsto B_t u$ are continuous (and hence measurable);
\item $\|\widetilde R_t(u_1,u_2)\|_Y^2 \;\le\; h(t) + C\,\|u_1\|_V^2 + \frac{1}{\alpha_B}\,C_P\,C_{b_0,u}\,\|u_2\|_V^2$;
\item $\|\widetilde S_t(u_1,u_2)\|_{V^*}^2 \;\le\; h(t) + C\,\|u_1\|_V^2 + \varphi_a^2\,\|u_2\|_V^2$;
\item ${\cal B}$ is a bounded mapping from ${\cal V}$ into ${\cal V}^*$.
\end{enumerate}
\end{lemma}
{\em Proof}.
1. Let $u\in V$, $t_0\in T$ and $\varepsilon>0$ be given. We define $e(t):=R_t(u)$.
The continuity of $b_0$ in $t$ implies that $\widetilde B^Y_t(u,e(t_0))$ is continuous in $t$.
Hence, there exists a $\delta>0$ such that
\[
\|\widetilde B^Y_t(u,e(t_0))-B^Y_{t_0}(u,e(t_0))\|_{Y^*} = \|\widetilde B^Y_t(u,e(t_0))\|_{Y^*} < \alpha_B\,\varepsilon
\]
for all $t\in T$ with $|t-t_0|<\delta$.
The strong monotonicity of $B^Y_{t,u}$ implies the Lipschitz continuity of $(B^Y_{t,u})^{-1}$.
Hence, it holds that
\[
\|e(t)-e(t_0)\|_Y
\le \frac{1}{\alpha_B}\,\|\widetilde B^Y_t(u,e(t))-\widetilde B^Y_t(u,e(t_0))\|_{Y^*}
= \frac{1}{\alpha_B}\,\|\widetilde B^Y_t(u,e(t_0))\|_{Y^*} < \varepsilon
\]
for all $t\in T$ with $|t-t_0|<\delta$.
This proves the continuity of $t\mapsto R_t u$ and hence its measurability.
Since $B^X_t(u,e)$ satisfies the Carath\'eodory condition,
the mapping $t\mapsto B^X_t(u,\widetilde R_t u)=B_t u$ is measurable.
2.+3. Let $t\in T$ and $z=(u_1,u_2)\in V\times V$ be given and denote $y:=\widetilde R_t z$.
From the strong monotonicity of $\widetilde B^Y_t$ it follows that
\[
\|y\|_Y^2
\;\le\; \frac{1}{\alpha_B}\,\langle \widetilde B^Y_t(z,y)-\widetilde B^Y_t(z,0),\,y-0\rangle_Y
\;\le\; \frac{1}{\alpha_B}\,\|\widetilde B^Y_t(z,0)\|_{Y^*}\,\|y\|_Y
\]
because of $\widetilde B^Y_t(z,y)=y_0^*=0$.
By the growth condition on $b_0$, we obtain for some $h\in L^1(T)$
\begin{eqnarray*}
\alpha_B\,\|\widetilde R_t z\|_Y &\le& \|\widetilde B^Y_t(z,0)\|_{Y^*}
= \sup\Big\{ \langle \widetilde B^Y_t(z,0),\,e'\rangle_Y : e'\in Y,\ \|e'\|_Y\le1 \Big\}\\
&\le& \|b_0(t,\cdot,u_2,\nabla u_1,0)\|_H\\
&\le& \Big( \int_\Omega g(t,x)\,dx + C\,\|\nabla u_1\|_H^2 + C_{b_0,u}\,\|u_2\|_H^2 \Big)^{1/2}\\
&\le& \Big( h(t) + C\,\|u_1\|_V^2 + C_P\,C_{b_0,u}\,\|u_2\|_V^2 \Big)^{1/2}.
\end{eqnarray*}
With this inequality and the growth conditions on $b_1$ and $b_2$ we can similarly estimate
\begin{eqnarray*}
\|\widetilde S_t z\|_{V^*}
&=& \|\widetilde B^X_t(z,\widetilde R_t z)\|_{V^*}\\[0.5ex]
&\le& \|b_1(t,\cdot,u_2,\nabla u_1,\widetilde R_t z)\|_H + C_P\,\|b_2(t,\cdot,u_2,\nabla u_1,\widetilde R_t z)\|_H\\[0.5ex]
&\le& \Big( h(t) + C\,\|\nabla u_1\|_H^2 + C_{b_1,u}\,\|u_2\|_H^2 + C_{b_1,e}\,\|\widetilde R_t z\|_H^2 \Big)^{1/2}\\[-2pt]
&& +\, C_P\,\Big( h(t) + C\,\|\nabla u_1\|_H^2 + C_{b_2,u}\,\|u_2\|_H^2 + C_{b_2,e}\,\|\widetilde R_t z\|_H^2 \Big)^{1/2}\\
&\le& \Big( h(t) + C\,\|u_1\|_V^2 + \varphi_a^2\,\|u_2\|_V^2 \Big)^{1/2}.
\end{eqnarray*}
In the last line the inequality $\sqrt{a}+\sqrt{b}\le\sqrt{(c_a+1)\,a+(1+\frac{1}{c_a})\,b}$ for $a,b\ge0$ was used.
4. The mapping $B: T\times V\rightarrow V^*$ is measurable in $t$ and demicontinuous in $v$.
Hence, ${\cal B}u$ is measurable for every $u\in{\cal V}$.
Moreover, the growth conditions of steps~2 and~3 guarantee that ${\cal B}$ is a bounded operator from ${\cal V}$ into ${\cal V}^*$.
\endproof
With the help of this lemma and the bijectivity of $B^Y_{t,u}$,
the task of finding a weak solution to problem~{\rm(P)} can be reformulated in the following way.
\begin{corollary}\label{c4:Iff}
A pair $(u,{\bf u})\in{\cal W}\times{\cal U}$ is a weak solution to problem~{\rm(P)} if and only if
$e(t):=R(t,u(t))$ satisfies ${\bf u}(t)=\epsilon^{-1}(e(t))$ and
$u\in{\cal W}$ is a solution to
\[
({\cal L}+{\cal A}+{\cal B})u \;\ni\; 0,\qquad ({\cal K}u)(0)=Ku_0.
\]
\end{corollary}
\begin{lemma}\label{p4:Ssc}
Assume {\rm(H2)}, {\rm(H3a)} and {\rm(H4)} to be fulfilled. Then
\[
\widetilde B^X_t: V\times V_\omega\times Y\;\rightarrow\; V^*
\]
is continuous for all $t\in T$.
Furthermore, for $u_n\rightharpoonup u$ in $V$ it holds that
\[
\widetilde B^Y_t(v,u_n,e) \;\longrightarrow\; \widetilde B^Y_t(v,u,e) \quad\mbox{in $Y^*$}
\]
for all $t\in T$, $v,v_1,v_2\in V$ and every solution $e\in Y$ to $\widetilde B^Y_t(v_1,v_2,e)=0$.
\end{lemma}
{\em Proof}.
The continuity of $\widetilde B^X_t$ is a direct consequence of the growth conditions on $b_1$ and $b_2$ and
the compact embedding of $V_\omega$ into $H$.
Assume that $v,v_1,v_2\in V$ are given, $u_n\rightharpoonup u$ in $V$, and that $e$
is a solution to $\widetilde B^Y_t(v_1,v_2,e)=0$.
By {\rm(H2)}, the mapping $e'\mapsto b_0(t,x,v_2(x),\nabla v_1(x),e')$ is strongly monotone and
Lipschitz continuous from ${\mathbb R}^{N\times N}$ into itself, uniformly in $(t,x)\in T\times\Omega$.
Furthermore, due to {\rm(H3a)}, $x\mapsto b_0(t,x,v_2(x),\nabla v_1(x),0)\in L^{q_0}(\Omega;{\mathbb R}^{N\times N})$ for all $t\in T$.
Consequently, Proposition~\ref{p4:Konni} implies $e\in L^{q_1}(\Omega;{\mathbb R}^{N\times N})$ for every $t\in T$.
Moreover, the convergence $u_n\rightharpoonup u$ in $V$ yields $u_n\rightarrow u$ in $L^{q_3}(\Omega)$.
Therefore, by {\rm(H3a)} and H\"older's inequality we get for all $t\in T$ and $e'\in Y$
\begin{eqnarray*}
&&\hskip-2cm \langle \widetilde B^Y_t(v,u_n,e)-\widetilde B^Y_t(v,u,e),\,e'\rangle_Y\\[0.5ex]
&=& \int_\Omega \big[ b_0(t,x,u_n,v,e)-b_0(t,x,u,v,e)\big]: e'\,dx\\[0.5ex]
&\le& \|e'\|_Y \Big(\int_\Omega \big| b_0(t,x,u_n,v,e)-b_0(t,x,u,v,e)\big|^2\,dx\Big)^{1/2}\\[0.5ex]
&\le& \gamma_{b_0,u}\,\|e'\|_Y \Big(\int_\Omega |u_n-u|^{2q_4}\,(|e|+1)^2\,dx\Big)^{1/2}\\[0.5ex]
&\le& C\,\|e'\|_Y\,\|u_n-u\|_{L^{q_3}(\Omega)}^{q_4}\,\big(\|e\|_{L^{q_1}(\Omega)}+1\big)
\end{eqnarray*}
since
$\big(\tfrac{q_3}{2q_4}\big)^{-1} + \big(\tfrac{q_1}{2}\big)^{-1} = \tfrac{q_1-2}{q_1} + \tfrac{2}{q_1} =1$.
Hence, $\widetilde B^Y_t(v,u_n,e)$ converges to $\widetilde B^Y_t(v,u,e)$ in $Y^*$.
\endproof
\begin{corollary}\label{c4:Psm}
If {\rm(Ha)} is satisfied, then the mapping $B_t: V\rightarrow V^*$ is pseudomonotone and demicontinuous for all $t\in T$.
\end{corollary}
{\em Proof}.
Due to Corollary~\ref{c4:Ass}, $\widetilde B^X_t$ and $\widetilde B^Y_t$ satisfy the conditions {\rm(A1)} and {\rm(A2)} from~Definition~\ref{d1:Semi}
as well as {\rm(A3.4)} and $\alpha_A>0$, since
\[
\alpha_A\alpha_B \;\ge\; \alpha_A\alpha_B-\beta_A\beta_B = (\alpha_{b_1,p}-C_P\,\beta_{b_2,p})\,\alpha_{b_0,e}-(\beta_{b_1,e}+C_P\,\beta_{b_2,e})\,\beta_{b_0,p} \;>\;0.
\]
Moreover, Lemma~\ref{p4:Ssc} implies {\rm(A3.1)}--{\rm(A3.3)}.
The assertion then follows from Theorem~\ref{t1:SemReduct}, Proposition~\ref{p1:SemiPsm} and Proposition~\ref{p1:Demi}.
\endproof
\begin{proposition}\label{p4:Psm2}
Under condition {\rm(Ha)}, the operator ${\cal B}:{\cal V}\rightarrow{\cal V}^*$ is bounded, demicontinuous and pseudomonotone with respect to ${\cal L}$,
and coercive with respect to $0\in{\cal V}$.
\end{proposition}
{\em Proof}.
Remark~\ref{r4:E} guarantees the injectivity of $K$.
Therefore, we identify $V$ with $K(V)$ as in Remark~\ref{r3:E}
and prove the assertion by showing that the hypotheses of
\cite[Prop.~1, p.~440]{papa1} are fulfilled.
The measurability of $t\mapsto B(t,u)$ and the growth conditions follow from Lemma~\ref{p4:Growth},
the pseudomonotonicity and the demicontinuity of $u\mapsto B(t,u)$ from Corollary~\ref{c4:Psm}.
It therefore remains to show that there are $C>0$ and $g\in L^1(T)$ with
\[
\langle B(t,u),\,u\rangle_V \;\ge\; g(t) + C\,\|u\|_V^2
\]
for all $t\in T$, $u\in V$.
Using Lemma~\ref{p4:Growth} and Lemma~\ref{p1:LipMon} we obtain
\begin{eqnarray*}
\langle S_t u,\,u\rangle_V &=& \langle \widetilde S_t(u,u)-\widetilde S_t(0,u),\,u-0\rangle_V + \langle \widetilde S_t(0,u),\,u\rangle_V\\[0.5ex]
&\ge& \frac{\alpha_A\alpha_B-\beta_A\beta_B}{\alpha_B}\,\|u\|_V^2 - \|\widetilde S_t(0,u)\|_{V^*}\,\|u\|_V\\[0.5ex]
&\ge& \frac{\alpha_A\alpha_B-\beta_A\beta_B-\alpha_B\varphi_a}{\alpha_B}\,\|u\|_V^2 - \sqrt{h(t)}\,\|u\|_V\\[0.5ex]
&\ge& \frac{\alpha_A\alpha_B-\beta_A\beta_B-\alpha_B\varphi_a}{2\alpha_B}\,\|u\|_V^2 - C\,h(t).
\end{eqnarray*}
This shows the coercivity condition and completes the proof.
\endproof
\begin{theorem}[{\rm Existence of weak solutions}]\label{}
If {\rm(Ha)} is satisfied, then there exists a weak solution $(u,{\bf u})\in{\cal W}\times{\cal U}$ to problem~{\rm(P)}.
\end{theorem}
{\em Proof}.
By Corollary~\ref{c4:Iff}, it suffices to show the existence of a solution $u\in{\cal W}$ to
\begin{equation} \label{cc1}
({\cal L}+{\cal A}+{\cal B})\,u \;\ni\; 0,\qquad ({\cal K}u)(0)=Ku_0.
\end{equation}
Condition~{\rm(H1a)} implies that ${\cal A}:{\cal V}\rightarrow{\cal V}^*$ is bounded.
Moreover, together with $\varphi$ also $Q$ and ${\cal Q}$ are convex, lower semicontinuous and proper.
Hence, ${\cal A}$ is maximal monotone.
By Proposition~\ref{p4:Psm2}, ${\cal B}:{\cal V}\rightarrow{\cal V}^*$ is bounded, demicontinuous, pseudomonotone with respect to ${\cal L}$ and coercive
with respect to $0\in D({\cal A})\cap{\cal W}$.
Therefore, Theorem~\ref{t2:Exist} yields the existence of a solution $u\in{\cal W}$ to~(\ref{cc1}).
\endproof
\subsection*{Acknowledgment}
The author gratefully acknowledges the support of the DFG Research Training Group 1128,
``Analysis, Numerics, and Optimization of Multiphase Problems'',
during the time of his dissertation.
The results of the present paper form part of his PhD thesis~\cite{doni}.
\section{Introduction}
Consider solving the large-scale matrix equation
\begin{equation}\label{eq:matrix equations}
AXB= C,
\end{equation}
where $A \in \mathbb{R}^{m \times p}$, $B \in \mathbb{R}^{q\times n}$, $C \in \mathbb{R}^{m\times n}$, and the unknown matrix $X \in \mathbb{R}^{p\times q}$. The large-scale linear matrix equation \eqref{eq:matrix equations} arises in many applications such as signal processing \cite{Regalia1989}, photogrammetry \cite{Rauhala1980}, etc. We assume that the equation \eqref{eq:matrix equations} has a solution, i.e., $A^\dag ACBB^\dag=C$. In practice, it is usually enough to find an approximate solution that is not far away from the minimal Frobenius norm solution $X_*=A^\dag CB^\dag$. Hence, how to solve this equation effectively has become an important research topic in recent years.
Currently, there are many direct methods based on matrix factorization \cite{Chu1987,Fausett1994,Zha1995} and iteration methods \cite{Tian2017,Peng2005,Ding2005,Ding2008,Peng2010-1} for solving large-scale linear matrix equations. However, these methods were not designed with big-data matrix equation problems in mind. When applied to matrix equations arising in signal processing, machine learning, and image restoration, these methods can be infeasible because they exceed the available storage or require too much computing time. In particular, the matrix equation \eqref{eq:matrix equations} can be written in the following equivalent matrix-vector form by means of the Kronecker product
\begin{equation}\label{eq:systems}
(B^T\otimes A){\rm vec}(X)={\rm vec}(C),
\end{equation}
where the Kronecker product $(B^T\otimes A)\in \mathbb{R}^{mn \times pq}$, the right-hand side vector ${\rm vec}(C)\in \mathbb{R}^{mn \times1}$, and the unknown vector ${\rm vec}(X)\in \mathbb{R}^{pq \times1}$. Many iteration methods have been proposed to solve the matrix equation \eqref{eq:matrix equations} by applying the Kronecker product; see \cite{Zhang2011,Cvetk2008,Peng2010-2}. Recently, randomized methods have received wide attention for solving large-scale linear systems, such as the randomized SVD \cite{Wei2016,Wei2019,Wei2020}, the randomized Kaczmarz algorithm \cite{Strohmer2009,Zouzias2012,Needell2015,Ma2015,Du20}, and the randomized coordinate descent method \cite{Leventhal2008,Gower,Du2019,Du21}. The randomized Kaczmarz algorithm applied to the linear system \eqref{eq:systems} can also solve the matrix equation \eqref{eq:matrix equations}. Nevertheless, when the dimensions of the matrices $A$ and $B$ are large, the dimensions of the linear system \eqref{eq:systems} increase dramatically, which causes these randomized projection algorithms to require extra cache memory and too much time. Du et al. proposed the randomized block coordinate descent (RBCD) method for solving the matrix least-squares problem $\min_{X\in \mathbb{R}^{p\times q}}\|C-AXB\|_F$ in \cite{Du22}. This method is computationally expensive per iteration because it involves large-scale matrix-matrix products.
In this paper, we propose the global randomized block Kaczmarz (GRBK) algorithm for solving large-scale matrix equations by determining the solution of the matrix equation \eqref{eq:matrix equations} from the randomized sketched matrix equation \eqref{eq:sketched}. In practice, to avoid computing pseudoinverses, we also study a parallelized version of GRBK, in which a weighted average of independent updates is used. Before summarizing our contributions, we first present the notation that is used throughout this paper and briefly describe the properties of the Kronecker product.
\subsection{Notation}
For an integer $m\geq1$, let $[m]=\{1,2,\cdots,m\}$. We denote by $\mathbb{R}^{m\times n}$ the space of all $m\times n$ real matrices, and by $\|\cdot\|_2$ and $\|\cdot\|_F$ the 2-norm and the Frobenius norm, respectively. Given two matrices $X,Y\in \mathbb{R}^{n\times n}$, $\langle X,Y\rangle_F={\rm tr}(X^TY)$ is the Frobenius inner product of the matrices $X$ and $Y$, where ${\rm tr}(X)$ denotes the trace of $X$; in particular, $\langle X,X\rangle_F=\|X\|_F^2$. In addition, for any matrix $A\in\mathbb{R}^{m\times n}$, we use $A^T$, $A^\dag$, $A_{i,:}$, $A_{:,j}$ to denote the transpose, the pseudoinverse, the $i$th row, and the $j$th column of $A$, respectively. We also denote the maximum singular value and the minimum nonzero singular value of $X$ by $\sigma_{\max}(X)$ and $\sigma_{\min}(X)$, respectively. For index sets $I\subset[m]$ and $J\subset[n]$, let $A_{I,:}$, $A_{:,J}$ and $A_{I,J}$ denote the row submatrix indexed by $I$, the column submatrix indexed by $J$, and the submatrix that lies in the rows indexed by $I$ and the columns indexed by $J$, respectively. We use $|I|$ to denote the cardinality of a subset $I\subset[m]$. For any random variable $X$, let $\mathbb{E}[X]$ denote its expectation.
\subsection{Kronecker product}
For deriving the convergence of the algorithms in this paper, the Kronecker product is used. We briefly state a few of its useful properties here. More can be found, e.g. in \cite{Graham1981}. For all matrices $A$ and $B$, we have
\begin{equation*}
(A\otimes B)^\dag=A^\dag\otimes B^\dag,~ (A\otimes B)^T=A^T\otimes B^T, ~\|A\otimes B\|_F=\|A\|_F\|B\|_F,
\end{equation*}
and
\begin{equation*}
\sigma_{\max}(A\otimes B)=\sigma_{\max}(A)\sigma_{\max}(B), ~\sigma_{\min}(A\otimes B)=\sigma_{\min}(A)\sigma_{\min}(B).
\end{equation*}
Furthermore, we introduce the ${\rm vec}(\cdot)$ operation by stacking columns. If $X\in \mathbb{R}^{m\times n}$, then
\begin{equation*}
{\rm vec}(X)=\begin{pmatrix}X_{:,1}^T& \cdots& X_{:,n}^T \end{pmatrix}^T\in \mathbb{R}^{mn\times 1}.
\end{equation*}
If $A\in \mathbb{R}^{m \times p}$, $X\in \mathbb{R}^{p\times q}$, and $B\in \mathbb{R}^{q\times n}$, then
\begin{equation*}
{\rm vec}(AXB)=(B^T\otimes A){\rm vec}(X).
\end{equation*}
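These identities are easy to check numerically. As a quick illustration (not part of the analysis), the following Python/NumPy sketch verifies ${\rm vec}(AXB)=(B^T\otimes A){\rm vec}(X)$ on random data; note that column-stacking corresponds to Fortran (column-major) ordering:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
m, p, q, n = 4, 3, 5, 2
A = rng.standard_normal((m, p))
X = rng.standard_normal((p, q))
B = rng.standard_normal((q, n))

vec = lambda M: M.ravel(order="F")  # vec(.) stacks the columns

lhs = vec(A @ X @ B)
rhs = np.kron(B.T, A) @ vec(X)
assert np.allclose(lhs, rhs)
\end{verbatim}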
\subsection{Contributions}
We propose the global randomized block Kaczmarz (GRBK) algorithm to solve the matrix equation $AXB=C$. The GRBK method uses two randomized strategies to choose two subsets $I_k$ and $J_k$ of the constraints at each iteration, and performs the update (see \Cref{alg:GRBK}):
\begin{equation*}
X_{k+1}=X_k+A_{I_k,:}^\dag(C_{I_k,J_k}-A_{I_k,:}X_kB_{:,J_k})B_{:,J_k}^\dag.
\end{equation*}
We provide a theoretical guarantee for the GRBK method and demonstrate its performance. Furthermore, to avoid computing pseudoinverses, we propose a global randomized average block Kaczmarz method for solving the matrix equation $AXB=C$, which we denote by GRABK (see \Cref{alg:GRABK}). The update of the GRABK method is as follows:
\begin{equation*}
X_{k+1}=X_k+\alpha_k\left(\sum_{i\in I_k,j\in J_k}u_i^{k}v_j^{k}\tfrac{A_{i,:}^T(C_{ij}-A_{i,:}X_kB_{:,j})B_{:,j}^T}{\|A_{i,:}\|^2\|B_{:,j}\|^2}\right),
\end{equation*}
where the stepsize $\alpha_k\in(0,2)$ and the weights $u_i^{k},v_j^{k}\in [0,1]$ are such that $\sum_{i\in I_k}u_i^{k}=1$ and $\sum_{j\in J_k}v_j^{k}=1$. In addition, we analyze the convergence of the GRBK and GRABK methods.
\subsection{Outline}
In \Cref{Sec: GRBK}, we derive the global randomized block Kaczmarz method for solving the large-scale matrix equation. In \Cref{Sec: GRABK}, we define the global randomized average block Kaczmarz methods for solving the large-scale matrix equation and derive new convergence rates. In \Cref{Sec:NR}, we report numerical results that corroborate our theoretical findings. Finally, we state brief conclusions in \Cref{Sec:Con}.
\section{The global randomized block Kaczmarz algorithm }\label{Sec: GRBK}
In this paper, we are concerned with the randomized Kaczmarz method for solving the matrix equation $AXB=C$. Since the matrix equation \eqref{eq:matrix equations} is difficult to solve directly, our approach is to iteratively solve small randomized versions of the matrix equation \eqref{eq:matrix equations}. That is, we choose two index sets $I\subseteq[m]$ and $J\subseteq[n]$ at random, and instead solve the following sketched matrix equation:
\begin{equation}\label{eq:sketched}
A_{I,:}XB_{:,J}=C_{I,J}.
\end{equation}
The sketched matrix equation is of a much smaller dimension than the original one, and hence easier to solve. However, the equation \eqref{eq:sketched} will no longer have a unique solution. In order to construct a method, we need a way of picking a particular solution. Our method defines $X_{k+1}$ to be the solution that is closest to the current iterate $X_k$ in the Frobenius norm. Hence, the next iterate $X_{k+1}$ is the nearest point to $X_k$ that satisfies the sketched matrix equation:
\begin{equation}\label{eq:it-1}
X_{k+1}=\mathop{\arg\min}_{\scaleto{\begin{smallmatrix} A_{I,:}XB_{:,J}=C_{I,J}\\X \in \mathbb{R}^{p\times q} \end{smallmatrix}}{10pt}}\tfrac{1}{2}\|X-X_k\|_F^2.
\end{equation}
In addition, $X_{k+1}$ is the best approximation of $X_*$ in a subspace passing through $X_k$:
\begin{equation}\label{eq:it-2}
\begin{split}
X_{k+1}=&\mathop{\arg\min}_{\scaleto{\begin{smallmatrix}X \in \mathbb{R}^{p\times q}\\ Y\in\mathbb{R}^{|I|\times |J|}\end{smallmatrix}}{10pt}} \tfrac{1}{2}\|X-X_*\|_F^2\\
&{\rm subject~to~~} X=X_k+A_{I,:}^TYB_{:,J}^T,~Y {\rm ~is~free}.
\end{split}
\end{equation}
By substituting the constraint \eqref{eq:it-2} into the objective function, then differentiating with respect to $Y$ to find the stationary point
\begin{equation*}
Y_*=(A_{I,:}A_{I,:}^T)^\dag(C_{I,J}-A_{I,:}X_kB_{:,J})(B_{:,J}^TB_{:,J})^\dag,
\end{equation*}
we obtain that
\begin{eqnarray}\label{eq:it-3}
\nonumber X_{k+1}&=&X_k+A_{I,:}^T(A_{I,:}A_{I,:}^T)^\dag(C_{I,J}-A_{I,:}X_kB_{:,J})(B_{:,J}^TB_{:,J})^\dag B_{:,J}^T\\
&=&X_k+A_{I,:}^\dag(C_{I,J}-A_{I,:}X_kB_{:,J})B_{:,J}^\dag
\end{eqnarray}
is the solution to \eqref{eq:it-2}. Next, we show the equivalence of \eqref{eq:it-1} and \eqref{eq:it-2} by using Lagrangian duality. The problem \eqref{eq:it-1} has a convex quadratic objective function with linear constraints, hence strong duality holds. Introducing the Lagrange multiplier $Y\in \mathbb{R}^{|I|\times |J|}$, we define the Lagrangian $\mathcal{L}:\mathbb{R}^{p\times q}\times \mathbb{R}^{|I|\times |J|} \rightarrow \mathbb{R}$ associated with the problem \eqref{eq:it-1} as
\begin{equation}\label{eq:dual}
\mathcal{L}(X,Y)=\tfrac{1}{2}\|X-X_k\|_F^2-\langle Y, A_{I,:}XB_{:,J}-C_{I,J}\rangle_F.
\end{equation}
Clearly, the optimal value of the primal problem \eqref{eq:it-1} is
\begin{equation*}
\min_{X \in \mathbb{R}^{p\times q}}\max_{Y\in\mathbb{R}^{|I|\times |J|}}\mathcal{L}(X,Y)
= \min_{\scaleto{\begin{smallmatrix} A_{I,:}XB_{:,J}=C_{I,J}\\X \in \mathbb{R}^{p\times q} \end{smallmatrix}}{10pt}}\tfrac{1}{2}\|X-X_k\|_F^2.
\end{equation*}
The Lagrange dual function $\mathcal{G}:\mathbb{R}^{|I|\times |J|} \rightarrow \mathbb{R}$ as the minimum value of the Lagrangian $\mathcal{L}(X,Y)$ over $X$, i.e.,
\begin{equation*}
\mathcal{G}(Y)=\min_{X \in \mathbb{R}^{p\times q}}\mathcal{L}(X,Y).
\end{equation*}
Differentiating the Lagrangian $\mathcal{L}(X,Y)$ with respect to $X$ and setting the derivative to zero gives $X=X_k+A_{I,:}^TYB_{:,J}^T$. Substituting this into \eqref{eq:dual} gives
\begin{eqnarray*}
\mathcal{L}(X,Y)&=&\tfrac{1}{2}\|A_{I,:}^TYB_{:,J}^T\|_F^2-\langle Y, A_{I,:}(X_k+A_{I,:}^TYB_{:,J}^T)B_{:,J}-C_{I,J}\rangle_F\\
&=& -\tfrac{1}{2}\|A_{I,:}^TYB_{:,J}^T\|_F^2-\langle Y, A_{I,:}(X_k-X_*)B_{:,J}\rangle_F\\
&=& -\tfrac{1}{2}\|A_{I,:}^TYB_{:,J}^T+X_k-X_*\|_F^2+\tfrac{1}{2}\|X_k-X_*\|_F^2.
\end{eqnarray*}
As the term $\tfrac{1}{2}\|X_k-X_*\|_F^2$ does not depend on $X$ and $Y$, substituting $X=X_k+A_{I,:}^TYB_{:,J}^T$ into the last equation, we obtain the dual problem:
\begin{eqnarray*}
\max_{Y}\mathcal{G}(Y)
&=&\max_{\scaleto{\begin{smallmatrix}X=X_k+A_{I,:}^TYB_{:,J}^T \\ X \in \mathbb{R}^{p\times q},Y\in\mathbb{R}^{|I|\times |J|} \end{smallmatrix}}{12pt}}-\tfrac{1}{2}\|X-X_*\|_F^2\\
&=&\min_{\scaleto{\begin{smallmatrix}X=X_k+A_{I,:}^TYB_{:,J}^T \\X \in \mathbb{R}^{p\times q},Y\in\mathbb{R}^{|I|\times |J|} \end{smallmatrix}}{12pt}}\tfrac{1}{2}\|X-X_*\|_F^2.
\end{eqnarray*}
Hence, by strong duality, i.e.,
\begin{equation*}
\min_{X \in \mathbb{R}^{p\times q}}\max_{Y\in\mathbb{R}^{|I|\times |J|}}\mathcal{L}(X,Y)=\max_{Y\in\mathbb{R}^{|I|\times |J|}}\min_{X \in \mathbb{R}^{p\times q}}\mathcal{L}(X,Y),
\end{equation*}
we have the equivalence of \eqref{eq:it-1} and \eqref{eq:it-2}.
Based on \eqref{eq:it-1}, \eqref{eq:it-2} and \eqref{eq:it-3}, we can summarize the method described in this section as \Cref{alg:GRBK}, which we call the global randomized block Kaczmarz (GRBK) method.
\begin{algorithm}[!htbp]
\caption{Global Randomized Block Kaczmarz (GRBK)}
\label{alg:GRBK}
\hspace*{0.02in}{\bf Input:} {$A\in \mathbb{R}^{m \times p}$, $B\in \mathbb{R}^{q\times n}$, and $C\in \mathbb{R}^{m \times n}$, $X_0\in \mathbb{R}^{p\times q}$.}\\
\hspace*{0.02in}{\bf Output:} {Last iterate $X_{k+1}$.}
\begin{algorithmic}[1]
\For{$k=0,1,2,\cdots,$}
\State{Select an index set $I_k\subseteq[m]$ with probability $\mathbb{P}(I_k)>0$ such that $$\sum_{I_k\subseteq[m]}\mathbb{P}(I_k)=1;$$}
\State{Select an index set $J_k\subseteq[n]$ with probability $\mathbb{P}(J_k)>0$ such that $$\sum_{J_k\subseteq[n]}\mathbb{P}(J_k)=1;$$}
\State{Update $X_{k+1}=X_k+A_{I_k,:}^\dag(C_{I_k,J_k}-A_{I_k,:}X_kB_{:,J_k})B_{:,J_k}^\dag$;}
\EndFor
\end{algorithmic}
\end{algorithm}
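For concreteness, a minimal Python/NumPy sketch of one possible implementation of \Cref{alg:GRBK} is given below. The row/column partitions, the Frobenius-norm sampling probabilities (cf.~\Cref{Re:r-1} below) and the fixed iteration count are illustrative choices, not prescribed by the algorithm:
\begin{verbatim}
import numpy as np

def grbk(A, B, C, blocks_I, blocks_J, iters=1000, seed=0):
    """Sketch of GRBK: X <- X + pinv(A_I) (C_IJ - A_I X B_J) pinv(B_J).

    blocks_I / blocks_J: lists of index arrays partitioning the rows
    of A and the columns of B; a block is sampled with probability
    proportional to its squared Frobenius norm."""
    rng = np.random.default_rng(seed)
    X = np.zeros((A.shape[1], B.shape[0]))
    pI = np.array([np.linalg.norm(A[I, :], 'fro')**2 for I in blocks_I])
    pJ = np.array([np.linalg.norm(B[:, J], 'fro')**2 for J in blocks_J])
    pI, pJ = pI / pI.sum(), pJ / pJ.sum()
    for _ in range(iters):
        I = blocks_I[rng.choice(len(blocks_I), p=pI)]
        J = blocks_J[rng.choice(len(blocks_J), p=pJ)]
        R = C[np.ix_(I, J)] - A[I, :] @ X @ B[:, J]   # sketched residual
        X += np.linalg.pinv(A[I, :]) @ R @ np.linalg.pinv(B[:, J])
    return X
\end{verbatim}
In practice one would replace the fixed iteration count by a residual-based stopping rule and apply the pseudoinverses through least-squares solves rather than forming them explicitly.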
At each iteration, the current iterate $X_k$ is projected onto the solution space of the sketched matrix equation $A_{I_k,:}XB_{:, J_k}=C_{I_k, J_k}$. The index sets $I_k\subseteq[m]$ and $J_k\subseteq[n]$ are selected according to the probability distributions $\mathbb{P}(I_k)$ and $\mathbb{P}(J_k)$, respectively. Before we prove convergence, let us fix some notation. Any matrix $M$ has a singular value decomposition $M=U\Sigma V^T$, where $U\in \mathbb{R}^{m\times m}$ and $V\in \mathbb{R}^{n\times n}$ are orthogonal matrices and $\Sigma=\diag(\sigma_1,\cdots,\sigma_p)\in\mathbb{R}^{m\times n},~p=\min\{m,n\}$. We define $M^{\dag\frac{1}{2}}=V\Sigma^{\dag\frac{1}{2}}U^T$, where $\Sigma^{\dag\frac{1}{2}} = \diag(\sigma_1^{-\frac{1}{2}}, \cdots, \sigma_p^{-\frac{1}{2}}) \in\mathbb{R}^{n\times m}$. We define the orthogonal projections
\begin{equation*}
P_1=A_{I_k,:}^\dag A_{I_k,:}, P_2=B_{:,J_k}B_{:,J_k}^\dag.
\end{equation*}
Then $P_1^T=P_1$, $P_1^2=P_1$, $P_2^T=P_2$, and $P_2^2=P_2$. The expectation of $P_1$ is
\begin{eqnarray*}
\mathbb{E}[P_1] &=&\sum_{I_k\subseteq[m]}\mathbb{P}(I_k)A_{I_k,:}^\dag A_{I_k,:}\\
&=&\sum_{I_k\subseteq[m]}\mathbb{P}(I_k)A_{I_k,:}^T(A_{I_k,:}A_{I_k,:}^T)^\dag A_{I_k,:}\\
&=&\sum_{I_k\subseteq[m]}A_{I_k,:}^T(A_{I_k,:}A_{I_k,:}^T)^{\dag\tfrac{1}{2}}\mathbb{P}(I_k) (A_{I_k,:}A_{I_k,:}^T)^{\dag\tfrac{1}{2}}A_{I_k,:}\\
&=&(\Delta A)^T(\Delta A),
\end{eqnarray*}
where $\Delta=\diag (\mathbb{P}(I_k)^{\frac{1}{2}}(A_{I_k,:}A_{I_k,:}^T)^{\dag\frac{1}{2}}, I_k\subseteq [m])$ is a block diagonal matrix.
Similarly, the expectation of $P_2$ is
\begin{equation*}
\mathbb{E}[P_2]=(B\Gamma)(B\Gamma)^T,
\end{equation*}
where $\Gamma=\diag(\mathbb{P}(J_k)^{\frac{1}{2}}(B_{:,J_k}^TB_{:,J_k})^{\dag\frac{1}{2}}, J_k\subseteq [n])$ is a block diagonal matrix.
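For moderate dimensions, the block diagonal matrices $\Delta$ and $\Gamma$, and hence the convergence factor appearing in \Cref{Th:cov-1} below, can be evaluated explicitly. A possible sketch (where the helper {\tt pinv\_sqrt} computes $M^{\dag\frac{1}{2}}$ via an SVD, and the sampling probabilities are the illustrative choice of \Cref{Re:r-1}) is:
\begin{verbatim}
import numpy as np
from scipy.linalg import block_diag

def pinv_sqrt(M, tol=1e-12):
    # M^{dag 1/2} = V Sigma^{dag 1/2} U^T (zero singular values -> 0)
    U, s, Vt = np.linalg.svd(M)
    s_inv_sqrt = np.where(s > tol, 1.0 / np.sqrt(s), 0.0)
    return Vt.T @ np.diag(s_inv_sqrt) @ U.T

def delta_matrix(A, blocks_I):
    # Delta = diag( P(I)^{1/2} (A_I A_I^T)^{dag 1/2}, I in partition )
    pI = np.array([np.linalg.norm(A[I, :], 'fro')**2 for I in blocks_I])
    pI = pI / pI.sum()   # sampling probabilities of Remark 1
    return block_diag(*[np.sqrt(p) * pinv_sqrt(A[I, :] @ A[I, :].T)
                        for p, I in zip(pI, blocks_I)])
\end{verbatim}
The matrix $\Gamma$ is obtained analogously from the column blocks $B_{:,J}$, with $(B_{:,J}^TB_{:,J})^{\dag\frac{1}{2}}$ in place of $(A_{I,:}A_{I,:}^T)^{\dag\frac{1}{2}}$.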
We now analyze the convergence of the error $X_k-X_*$ for the iterates of \Cref{alg:GRBK}. This result, stated in \Cref{Th:cov-1}, shows that \Cref{alg:GRBK} converges linearly in expectation to the solution of minimal Frobenius norm.
\begin{theorem}\label{Th:cov-1}
Let $X_*$ be the minimal Frobenius norm solution of $AXB=C$, and $X_k$ be the $k$th approximation of $X_*$ generated by the GRBK method. The expected norm of the error at the $k$th iteration satisfies
\begin{equation}\label{eq:cov-1}
\mathbb{E}\|X_k-X_*\|_F^2\leq\left(1-\sigma_{\min}^2(\Delta)\sigma_{\min}^2(\Gamma)\sigma_{\min}^2(A)\sigma_{\min}^2(B)\right)^k\|X_0-X_*\|_F^2.
\end{equation}
\end{theorem}
\begin{proof}
The update step of \Cref{alg:GRBK} can be rewritten as the simple fixed point formula
\begin{equation*}
X_{k+1}-X_*=X_k-X_*-A_{I_k,:}^\dag A_{I_k,:}(X_k-X_*)B_{:,J_k}B_{:,J_k}^\dag.
\end{equation*}
Since
\begin{eqnarray*}
&&\langle X_{k+1}-X_k, X_{k+1}-X_*\rangle_F\\
&&=\langle P_1(X_*-X_k)P_2, (X_k-X_*)-P_1(X_k-X_*)P_2 \rangle_F \\
&&=\langle P_1(X_*-X_k)P_2, (X_k-X_*)\rangle_F-\langle P_1(X_*-X_k)P_2, P_1(X_k-X_*)P_2 \rangle_F\\
&&=\langle P_1(X_*-X_k), (X_k-X_*)P_2\rangle_F-\langle P_1(X_*-X_k), (X_k-X_*)P_2 \rangle_F\\
&&=0,
\end{eqnarray*}
it follows that
\begin{equation*}
\|X_{k+1}-X_*\|_F^2=\|X_k-X_*\|_F^2-\|X_{k+1}-X_k\|_F^2.
\end{equation*}
Taking conditional expectations, we get
\begin{equation}\label{eq:p-1}
\mathbb{E}[\|X_{k+1}-X_*\|_F^2|X_k]=\|X_k-X_*\|_F^2-\mathbb{E}[\|X_{k+1}-X_k\|_F^2|X_k].
\end{equation}
Note that
\begin{eqnarray*}
\|X_{k+1}-X_k\|_F^2 &=&\langle P_1(X_k-X_*)P_2, P_1(X_k-X_*)P_2 \rangle_F \\
&=&\langle P_1(X_k-X_*), (X_k-X_*)P_2 \rangle_F.
\end{eqnarray*}
Hence
\begin{eqnarray*}
\mathbb{E}[\|X_{k+1}-X_k\|_F^2|X_k] &=&
\langle \mathbb{E}[P_1](X_k-X_*), (X_k-X_*)\mathbb{E}[P_2] \rangle_F\\
&=&\left\langle (\Delta A)^T(\Delta A)(X_k-X_*), (X_k-X_*)(B\Gamma)(B\Gamma)^T \right\rangle_F\\
&=&\|(\Delta A)(X_k-X_*)(B\Gamma)\|_F^2.
\end{eqnarray*}
Using the fact that
\begin{eqnarray*}
\|(\Delta A)(X_k-X_*)(B\Gamma)\|_F^2&=&\|[(B\Gamma)^T\otimes (\Delta A)]{\rm vec}(X_k-X_*)\|_2^2\\
&=&\|(\Gamma^T\otimes \Delta)(B^T\otimes A){\rm vec}(X_k-X_*)\|_2^2\\
&\geq&\sigma_{\min}^2(\Gamma^T\otimes \Delta)\|(B^T\otimes A){\rm vec}(X_k-X_*)\|_2^2\\
&\geq&\sigma_{\min}^2(\Gamma^T\otimes \Delta)\sigma_{\min}^2(B^T\otimes A)\|X_k-X_*\|_F^2\\
&=&\sigma_{\min}^2(\Delta)\sigma_{\min}^2(\Gamma)\sigma_{\min}^2(A)\sigma_{\min}^2(B)\|X_k-X_*\|_F^2,
\end{eqnarray*}
we have
\begin{equation}\label{eq:p-2}
\mathbb{E}[\|X_{k+1}-X_k\|_F^2|X_k] \geq \sigma_{\min}^2(\Delta)\sigma_{\min}^2(\Gamma)\sigma_{\min}^2(A)\sigma_{\min}^2(B)\|X_k-X_*\|_F^2.
\end{equation}
Thus, combining \eqref{eq:p-1} and \eqref{eq:p-2}, we obtain the following estimate:
\begin{equation*}
\mathbb{E}[\|X_{k+1}-X_*\|_F^2|X_k]\leq\left(1-\sigma_{\min}^2(\Delta)\sigma_{\min}^2(\Gamma)\sigma_{\min}^2(A)\sigma_{\min}^2(B)\right)
\|X_k-X_*\|_F^2.
\end{equation*}
Taking the full expectation of both sides, we conclude that
\begin{equation*}
\mathbb{E}[\|X_{k+1}-X_*\|_F^2]\leq\left(1-\sigma_{\min}^2(\Delta)\sigma_{\min}^2(\Gamma)\sigma_{\min}^2(A)\sigma_{\min}^2(B)\right)
\mathbb{E}[\|X_k-X_*\|_F^2].
\end{equation*}
By induction, we complete the proof.
\end{proof}
\begin{remark}\label{Re:r-1}
If the sets $[m]$ and $[n]$ are partitioned by
\begin{equation*}
[m]=\{I_1, \cdots, I_s\}, ~[n]=\{J_1, \cdots, J_t\},
\end{equation*}
and the index sets $I_k\subseteq[m]$ and $J_k\subseteq[n]$ are selected according to probability distribution
\begin{equation*}
\mathbb{P}(I_k)=\tfrac{\|A_{I_k,:}\|_F^2}{\|A\|_F^2}~{\rm and}~\mathbb{P}(J_k)=\tfrac{\|B_{:,J_k}\|_F^2}{\|B\|_F^2},
\end{equation*}
respectively. Then
\begin{eqnarray*}
\sigma_{\min}(\Delta) &=&\min_{1\leq i\leq s}\tfrac{\|A_{I_i,:}\|_F}{\|A\|_F}\sigma_{\min}\left((A_{I_i,:}A_{I_i,:}^T)^{\dag\tfrac{1}{2}}\right)\\
&=&\min_{1\leq i\leq s}\tfrac{\|A_{I_i,:}\|_F}{\|A\|_F}\sigma_{\max}^{-1}(A_{I_i,:})\\
&=&\tfrac{1}{\|A\|_F}\left(\max_{1\leq i\leq s}\tfrac{\sigma_{\max}(A_{I_i,:})}{\|A_{I_i,:}\|_F}\right)^{-1}
\end{eqnarray*}
and
\begin{equation*}
\sigma_{\min}(\Gamma)=\tfrac{1}{\|B\|_F}\left(\max_{1\leq j\leq t}\tfrac{\sigma_{\max}(B_{:,J_j})}{\|B_{:,J_j}\|_F}\right)^{-1}.
\end{equation*}
Hence, the upper bound estimate of \eqref{eq:cov-1} becomes
\begin{equation}\label{eq:cov-1-1}
\mathbb{E}\|X_k-X_*\|_F^2\leq\left(1-\tfrac{\sigma_{\min}^2(A)}{\|A\|_F^2 \beta_{\max}^2(A)}
\tfrac{\sigma_{\min}^2(B)}{\|B\|_F^2 \beta_{\max}^2(B)}\right)^k\|X_0-X_*\|_F^2,
\end{equation}
where
\begin{equation}\label{eq:beta}
\beta_{\max}(A)=\max_{1\leq i\leq s}\tfrac{\sigma_{\max}(A_{I_i,:})}{\|A_{I_i,:}\|_F}{\rm~and~}
\beta_{\max}(B)=\max_{1\leq j\leq t}\tfrac{\sigma_{\max}(B_{:,J_j})}{\|B_{:,J_j}\|_F}.
\end{equation}
\end{remark}
Assume that $\beta_{\max}(A)=\tfrac{\sigma_{\max}(A_{I_{i_0},:})}{\|A_{I_{i_0},:}\|_F}$ and $\beta_{\max}(B)= \tfrac{\sigma_{\max} (B_{:,J_{j_0}})} {\|B_{:,J_{j_0}}\|_F}$. As
\begin{equation*}
\sigma_{\max}(A_{I_{i_0},:})>\sigma_{\min}(A), ~\|A\|_F>\|A_{I_{i_0},:}\|_F,
\end{equation*}
it holds that
\begin{equation*}
0<\tfrac{\sigma_{\min}^2(A)}{\|A\|_F^2 \beta_{\max}^2(A)}= \tfrac{\sigma_{\min}^2(A)\|A_{I_{i_0},:}\|_F^2}{\|A\|_F^2\sigma_{\max}^2(A_{I_{i_0},:})}<1.
\end{equation*}
Similarly, we have
\begin{equation*}
0<\tfrac{\sigma_{\min}^2(B)}{\|B\|_F^2 \beta_{\max}^2(B)}<1.
\end{equation*}
Thus, the convergence factor of inequality \eqref{eq:cov-1-1} is less than 1 and greater than 0. So the GRBK method converges to the minimal Frobenius norm solution of $AXB = C$.
\begin{remark}\label{Re:r-2}
Now, let us consider the block index sets size $|I_k|=|J_k|=1$. In this case, the indices $i_k\in [m]$ and $j_k\in [n]$ are selected according to a probability distribution $\mathbb{P}(i_k)=\frac{\|A_{i_k,:}\|^2}{\|A\|_F^2}$ and $\mathbb{P}(j_k)=\frac{\|B_{:,j_k}\|^2}{\|B\|_F^2}$, respectively. Then, the update \eqref{eq:it-3} becomes
\begin{equation}\label{eq:it-3a}
X_{k+1}=X_k+\tfrac{A_{i_k,:}^T(C_{i_kj_k}-A_{i_k,:}X_kB_{:,j_k})B_{:,j_k}^T}{\|A_{i_k,:}\|^2\|B_{:,j_k}\|^2},
\end{equation}
which is called the global randomized Kaczmarz (GRK) method. Note that
\begin{equation*}
\beta_{\max}(A)=\max_{1\leq i\leq m}\tfrac{\sigma_{\max}(A_{i,:})}{\|A_{i,:}\|_F}=1
~{\rm and}~
\beta_{\max}(B)=\max_{1\leq j\leq n}\tfrac{\sigma_{\max}(B_{:,j})}{\|B_{:,j}\|_F}=1.
\end{equation*}
Then, we get a linear convergence rate in expectation of the form
\begin{equation}\label{eq:cov-1-2}
\mathbb{E}[\|X_k-X_*\|_F^2]\leq\left(1-\tfrac{\sigma_{\min}^2(A)}{\|A\|_F^2}\tfrac{\sigma_{\min}^2(B)}{\|B\|_F^2}\right)^k\|X_0-X_*\|_F^2.
\end{equation}
\end{remark}
Comparing \eqref{eq:cov-1-2} with the convergence rate \eqref{eq:cov-1-1}, since $\beta_{\max}(A)$ and $\beta_{\max}(B)$ are less than or equal to 1, we see that the convergence factor of the GRBK method is smaller than that of the GRK method, which reveals that the GRBK method yields a significant speed-up.
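In code, the GRK update \eqref{eq:it-3a} is a simple rank-one correction; the following Python/NumPy sketch (sampling as in \Cref{Re:r-2}) illustrates a single step:
\begin{verbatim}
import numpy as np

def grk_step(X, A, B, C, rng):
    # sample a row of A and a column of B with probability
    # proportional to their squared norms
    i = rng.choice(A.shape[0], p=(A**2).sum(axis=1) / (A**2).sum())
    j = rng.choice(B.shape[1], p=(B**2).sum(axis=0) / (B**2).sum())
    a, b = A[i, :], B[:, j]
    r = C[i, j] - a @ X @ b               # scalar residual
    return X + np.outer(a, b) * (r / ((a @ a) * (b @ b)))
\end{verbatim}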
\section{The global randomized average block Kaczmarz algorithms}\label{Sec: GRABK}
In this section, we develop new variants of the global randomized block Kaczmarz algorithm for solving large-scale linear matrix equation $AXB=C$. In practice, the main drawback of \eqref{eq:it-3} is that each iteration is expensive and difficult to parallelize, since we need to compute the pseudoinverse of two submatrices. To take advantage of parallel computation and speed up the convergence of GRK, we consider a simple extension of the GRK method, where at each iteration multiple independent updates are computed in parallel and a weighted average of the updates is used. Specifically, we write the averaged GRK update:
\begin{equation}\label{eq:it-4}
X_{k+1}=X_k+\alpha_k\left(\sum_{i\in I_k,j\in J_k}u_i^{k}v_j^{k}\tfrac{A_{i,:}^T(C_{ij}-A_{i,:}X_kB_{:,j})B_{:,j}^T}{\|A_{i,:}\|^2\|B_{:,j}\|^2}\right),
\end{equation}
where the stepsize $\alpha_k\in(0,2)$ and the weights $u_i^{k},v_j^{k}\in [0,1]$ are such that $\sum_{i\in I_k}u_i^{k}=1$ and $\sum_{j\in J_k}v_j^{k}=1$. The averaged GRK method is detailed in \Cref{alg:GRABK}. If $I_k$ and $J_k$ are sets of size one, i.e. $I_k=\{i_k\}$ and $J_k=\{j_k\}$, and $u_i^{k}=v_j^{k}=1$ for $i=1,\cdots,m$, $j=1,\cdots,n$ and $k\geq0$, we recover the GRK method.
\begin{algorithm}[!htbp]
\caption{Global Randomized Average Block Kaczmarz (GRABK)}
\label{alg:GRABK}
\hspace*{0.02in}{\bf Input:} {$A\in \mathbb{R}^{m \times p}$, $B\in \mathbb{R}^{q\times n}$, $C\in \mathbb{R}^{m \times n}$, $X_0\in \mathbb{R}^{p\times q}$, weights $u_i^{k}\geq0$, $v_j^{k}\geq0$, and stepsizes $\alpha_k\geq0$.}\\
\hspace*{0.02in}{\bf Output:} {Last iterate $X_{k+1}$.}
\begin{algorithmic}[1]
\For{$k=0,1,2,\cdots,$}
\State{Select an index set $I_k\subseteq[m]$ with probability $\mathbb{P}(I_k)>0$ such that $$\sum_{I_k\subseteq[m]}\mathbb{P}(I_k)=1;$$}
\State{Select an index set $J_k\subseteq[n]$ with probability $\mathbb{P}(J_k)>0$ such that $$\sum_{J_k\subseteq[n]}\mathbb{P}(J_k)=1;$$}
\State {Update $X_{k+1}=X_k+\alpha_k\left(\sum\limits_{i\in I_k,j\in J_k} u_i^{k}v_j^{k} \tfrac{A_{i,:}^T(C_{ij}-A_{i,:}X_kB_{:,j})B_{:,j}^T} {\|A_{i,:}\|^2\|B_{:,j}\|^2}\right) $.}
\EndFor
\end{algorithmic}
\end{algorithm}
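Anticipating the compact update derived below, one GRABK step with the Frobenius-norm weights $u_i^{k}=\|A_{i,:}\|^2/\|A_{I_k,:}\|_F^2$ and $v_j^{k}=\|B_{:,j}\|^2/\|B_{:,J_k}\|_F^2$ can be sketched as follows (a minimal illustration; the blocks $I_k,J_k$ and the stepsize $\alpha$ are inputs):
\begin{verbatim}
import numpy as np

def grabk_step(X, A, B, C, I, J, alpha):
    # one GRABK update with Frobenius-norm weights:
    # X <- X + alpha * A_I^T (C_IJ - A_I X B_J) B_J^T
    #          / (||A_I||_F^2 ||B_J||_F^2)
    AI, BJ = A[I, :], B[:, J]
    R = C[np.ix_(I, J)] - AI @ X @ BJ
    denom = (np.linalg.norm(AI, 'fro') * np.linalg.norm(BJ, 'fro'))**2
    return X + alpha * (AI.T @ R @ BJ.T) / denom
\end{verbatim}
Note that the sum over $(i,j)\in I_k\times J_k$ hidden in the product $A_{I_k,:}^TR\,B_{:,J_k}^T$ consists of independent rank-one terms and is therefore straightforward to parallelize.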
Recall that the weights satisfy $0\leq u_i^{k},v_j^{k}\leq 1$ and $\sum_{i\in I_k}u_i^{k}=\sum_{j\in J_k}v_j^{k}=1$. Hence, we assume that the weights are bounded as
\begin{equation*}
0<u_{\min}\leq u_i^{k}\leq u_{\max}<1~{\rm and}~0<v_{\min}\leq v_j^{k}\leq v_{\max}<1
\end{equation*}
for all $i\in I_k, j\in J_k$ and $k\geq0$. If the weights $u_i^{k}=\tfrac{\|A_{i,:}\|^2}{\|A_{I_k,:}\|_F^2}$ and $v_j^{k}=\tfrac{\|B_{:,j}\|^2}{\|B_{:,J_k}\|_F^2}$ for all $k\geq0$, we get the following compact update:
\begin{equation*}
X_{k+1}=X_k-\alpha_k\tfrac{A_{I_k,:}^T(A_{I_k,:}X_kB_{:,J_k}-C_{I_k,J_k})B_{:,J_k}^T}{\|A_{I_k,:}\|_F^2\|B_{:,J_k}\|_F^2}.
\end{equation*}
If the weights $u_i^{k}=\tfrac{1}{|I_k|}$ and $v_j^{k}=\tfrac{1}{|J_k|}$ for all $k\geq0$, we get the following compact update:
\begin{equation*}
X_{k+1}=X_k-\alpha_k\tfrac{A_{I_k,:}^TD_{I_k}^2(A_{I_k,:}X_kB_{:,J_k}-C_{I_k,J_k})D_{J_k}^2B_{:,J_k}^T}{|I_k||J_k|},
\end{equation*}
where the diagonal matrices
\begin{equation}\label{eq:D}
\begin{split}
&D_{I_k}=\diag(\|A_{i,:}\|^{-1}, ~i\in I_k) \in \mathbb{R}^{|I_k|\times|I_k|},\\
&D_{J_k}=\diag(\|B_{:,j}\|^{-1}, ~j\in J_k) \in \mathbb{R}^{|J_k|\times|J_k|}.
\end{split}
\end{equation}
In the rest of the section, we assume that the index set $I_k\subseteq[m]$ is chosen with probability $\mathbb{P}(I_k)>0$ such that $\sum_{I_k\subseteq[m]}\mathbb{P}(I_k)= 1$, and the index set $J_k\subseteq[n]$ is chosen with probability $\mathbb{P}(J_k)>0$ such that $\sum_{J_k\subseteq[n]}\mathbb{P}(J_k )=1$. Let
\begin{equation*}
\tilde{A}_{I_k,:}=D_{I_k}A_{I_k,:} {\rm~and~} \tilde{B}_{:,J_k}=B_{:,J_k}D_{J_k}.
\end{equation*}
Then, the following equalities hold:
\begin{eqnarray*}
\mathbb{E}\left[(\tilde{A}_{I_k,:})^T\tilde{A}_{I_k,:}\right]&=&\sum_{I_k\subseteq[m]}\mathbb{P}(I_k)
\left[(D_{I_k}A_{I_k,:})^T(D_{I_k}A_{I_k,:})\right]\\
&=&(D_{_A}A)^T(D_{_A}A),
\end{eqnarray*}
and
\begin{eqnarray*}
\mathbb{E}\left[\tilde{B}_{:,J_k}(\tilde{B}_{:,J_k})^T\right]&=&\sum_{J_k\subseteq[n]}\mathbb{P}(J_k)
\left[(B_{:,J_k}D_{J_k})(B_{:,J_k}D_{J_k})^T\right]\\
&=&(BD_{_B})(BD_{_B})^T,
\end{eqnarray*}
where the block diagonal matrices
\begin{equation}\label{eq:D-1}
D_{_A}={\rm diag}\big(\mathbb{P}(I_k)^{\frac{1}{2}}D_{I_k},\,I_k\subseteq[m]\big) {\rm~and~}
D_{_B}={\rm diag}\big(\mathbb{P}(J_k)^{\frac{1}{2}}D_{J_k},\,J_k\subseteq[n]\big).
\end{equation}
Before we discuss the stepsize $\alpha_k$, let us define the following quantities:
\begin{equation}\label{eq: gamma}
\gamma_{\max}(A)=\max_{I_k\subseteq [m]}\sigma_{\max}(\tilde{A}_{I_k,:}),~
\gamma_{\max}(B)=\max_{J_k\subseteq [n] }\sigma_{\max}(\tilde{B}_{:,J_k}).
\end{equation}
By the iterative scheme \eqref{eq:it-4} and $C_{ij}=A_{i,:}X_*B_{:,j}$ for all $i\in [m]$ and $j\in[n]$, we have
\begin{equation*}
X_{k+1}-X_*=(X_k-X_*)-\alpha_k\left(\sum_{i\in I_k,j\in J_k}u_i^{k}v_j^{k}\tfrac{A_{i,:}^TA_{i,:}(X_k-X_*)B_{:,j}B_{:,j}^T}{\|A_{i,:}\|^2\|B_{:,j}\|^2}\right).
\end{equation*}
It follows that
\begin{equation}\label{eq:unfold}
\begin{split}
\|X_{k+1}-X_*\|_F^2&=\|X_k-X_*\|_F^2\\
&-2\alpha_k\left\langle \sum_{i\in I_k,j\in J_k}u_i^{k}v_j^{k} \tfrac{A_{i,:}^TA_{i,:}(X_k-X_*)B_{:,j}B_{:,j}^T}{\|A_{i,:}\|^2\|B_{:,j}\|^2}, X_k-X_*\right\rangle_F\\
&+\alpha_k^2\left\|\sum_{i\in I_k,j\in J_k}u_i^{k}v_j^{k} \tfrac{A_{i,:}^TA_{i,:}(X_k-X_*)B_{:,j}B_{:,j}^T}{\|A_{i,:}\|^2\|B_{:,j}\|^2}\right\|_F^2.
\end{split}
\end{equation}
In order to ensure strict decrease of the sequence $\{\|X_k-X_*\|_F^2\}_{k=0}^{\infty}$, we need
\begin{equation*}
\scaleto{-2\alpha_k\left\langle \sum_{i\in I_k,j\in J_k}u_i^{k}v_j^{k} \tfrac{A_{i,:}^TA_{i,:}(X_k-X_*)B_{:,j}B_{:,j}^T}{\|A_{i,:}\|^2\|B_{:,j}\|^2}, X_k-X_*\right\rangle_F
+ \alpha_k^2\left\|\sum_{i\in I_k,j\in J_k}u_i^{k}v_j^{k}\tfrac{A_{i,:}^TA_{i,:}(X_k-X_*)B_{:,j}B_{:,j}^T}{\|A_{i,:}\|^2\|B_{:,j}\|^2}\right\|_F^2<0.}{30pt}
\end{equation*}
That is to say
\begin{equation}\label{Ieq:sizestep}
0<\alpha_k<2L_k,~L_k=
\tfrac{\left\langle \sum_{i\in I_k,j\in J_k}u_i^{k}v_j^{k} \frac{A_{i,:}^TA_{i,:}(X_k-X_*)B_{:,j}B_{:,j}^T}{\|A_{i,:}\|^2\|B_{:,j}\|^2}, X_k-X_*\right\rangle_F }
{\left\|\sum_{i\in I_k,j\in J_k}u_i^{k}v_j^{k}\frac{A_{i,:}^TA_{i,:}(X_k-X_*)B_{:,j}B_{:,j}^T}{\|A_{i,:}\|^2\|B_{:,j}\|^2}\right\|_F^2}.
\end{equation}
Next, we consider the global randomized average block Kaczmarz algorithm with constant stepsize and adaptive stepsize.
\subsection{The global randomized average block Kaczmarz algorithm with constant stepsize}
In this section, we study the global randomized average block Kaczmarz algorithm with constant stepsize $\alpha_k=\alpha$ and constant weights $u_i^{k}=u_i,v_j^{k}=v_j$. Hence, the update format \eqref{eq:it-4} becomes
\begin{equation*}
X_{k+1}=X_k+\alpha\left(\sum_{i\in I_k,j\in J_k}u_iv_j\tfrac{A_{i,:}^T(C_{ij}-A_{i,:}X_kB_{:,j})B_{:,j}^T}{\|A_{i,:}\|^2\|B_{:,j}\|^2}\right).
\end{equation*}
The weights satisfy $0<u_{\min}\leq u_i\leq u_{\max}<1,~0<v_{\min}\leq v_j\leq v_{\max}<1$ for all $i,~j$ and $\sum_{i\in I_k}u_i=\sum_{j\in J_k}v_j=1$. For each iteration, we want the stepsize to be the same. Therefore, we need to find a lower bound on $L_k$. Here, we consider a constant stepsize of the form
\begin{equation}\label{eq:size}
\alpha=\eta \alpha_*
\end{equation}
for some $\eta\in(0,2)$, where $\alpha_*=\tfrac{u_{\min}v_{\min}}{u_{\max}^2v_{\max}^2\gamma_{\max}^2(A)\gamma_{\max}^2(B)}$ is a lower bound on $L_k$, and $\gamma_{\max}(A)$ and $\gamma_{\max}(B)$ are given in equation \eqref{eq: gamma}. See the proof of \Cref{Th:cov-2} for details. \Cref{Th:cov-2} establishes the convergence rate of \Cref{alg:GRABK} with constant stepsize $\alpha$, which depends explicitly on the geometric properties of the matrices $A, B$ and the submatrices $A_{I_k,:},B_{:,J_k}$.
\begin{theorem}\label{Th:cov-2}
Let $X_*$ be the minimal Frobenius norm solution of $AXB=C$, and $X_k$ be the $k$th approximation of $X_*$ generated by the GRABK method with the weights $0<u_{\min}\leq u_i\leq u_{\max}<1$ for all $i\in [m]$, and $0<v_{\min}\leq v_j\leq v_{\max}<1$ for all $j\in [n]$, and the stepsize $\alpha=\eta \alpha_*$ for some $\eta\in(0,2)$. Then, the expected norm of the error at the $k$th iteration satisfies
\begin{equation}\label{eq:cov-2}
\scaleto{
\mathbb{E}\|X_k-X_*\|_F^2\leq \left(1-\eta(2-\eta) \phi
\sigma_{\min}^2(D_{_A}) \sigma_{\min}^2(D_{_B}) \sigma_{\min}^2(A) \sigma_{\min}^2(B) \right)^k\|X_0-X_*\|_F^2,}{13pt}
\end{equation}
where $\phi=\tfrac{u_{\min}^2v_{\min}^2}{u_{\max}^2v_{\max}^2\gamma_{\max}^2(A) \gamma_{\max}^2(B)}$, and
the block diagonal matrices $D_{_A}$ and $D_{_B}$ are shown in equation \eqref{eq:D-1}.
\end{theorem}
\begin{proof}
Note that
\begin{equation*}
\sum_{i\in I_k,j\in J_k}\tfrac{A_{i,:}^TA_{i,:}(X_k-X_*)B_{:,j}B_{:,j}^T}{\|A_{i,:}\|^2\|B_{:,j}\|^2}=
(\tilde{A}_{I_k,:})^T\tilde{A}_{I_k,:}(X_k-X_*)\tilde{B}_{:,J_k}(\tilde{B}_{:,J_k})^T.
\end{equation*}
For the second term of the equation \eqref{eq:unfold}, we have
\begin{eqnarray*}
&&\left\langle \sum_{i\in I_k,j\in J_k}u_iv_j \tfrac{A_{i,:}^TA_{i,:}(X_k-X_*)B_{:,j}B_{:,j}^T}{\|A_{i,:}\|^2\|B_{:,j}\|^2}, X_k-X_*\right\rangle_F \\
&&\geq u_{\min}v_{\min} \left\langle (\tilde{A}_{I_k,:})^T\tilde{A}_{I_k,:}(X_k-X_*) \tilde{B}_{:,J_k}(\tilde{B}_{:,J_k})^T, X_k-X_*\right\rangle_F \\
&&=u_{\min}v_{\min}\left\langle \tilde{A}_{I_k,:}(X_k-X_*) \tilde{B}_{:,J_k},
\tilde{A}_{I_k,:}(X_k-X_*)\tilde{B}_{:,J_k}\right\rangle_F\\
&&=u_{\min}v_{\min}\|\tilde{A}_{I_k,:}(X_k-X_*)\tilde{B}_{:,J_k}\|_F^2.
\end{eqnarray*}
For the third term of the equation \eqref{eq:unfold}, we have
\begin{eqnarray*}
&& \left\|\sum_{i\in I_k,j\in J_k}u_iv_j \tfrac{A_{i,:}^TA_{i,:}(X_k-X_*)B_{:,j}B_{:,j}^T}{\|A_{i,:}\|^2\|B_{:,j}\|^2}\right\|_F^2\\
&&\leq u_{\max}^2v_{\max}^2 \left\|(\tilde{A}_{I_k,:})^T\tilde{A}_{I_k,:}(X_k-X_*)\tilde{B}_{:,J_k}(\tilde{B}_{:,J_k})^T\right\|_F^2\\
&&=u_{\max}^2v_{\max}^2 \left\|[\tilde{B}_{:,J_k}\otimes (\tilde{A}_{I_k,:})^T]{\rm vec}[\tilde{A}_{I_k,:}(X_k-X_*)\tilde{B}_{:,J_k}]\right\|_2^2\\
&&\leq u_{\max}^2v_{\max}^2\sigma_{\max}^2\left(\tilde{B}_{:,J_k}\otimes (\tilde{A}_{I_k,:})^T\right)
\left\|{\rm vec}[\tilde{A}_{I_k,:}(X_k-X_*)\tilde{B}_{:,J_k}]\right\|_2^2\\
&&=u_{\max}^2v_{\max}^2\sigma_{\max}^2(\tilde{B}_{:,J_k})\sigma_{\max}^2(\tilde{A}_{I_k,:})\|\tilde{A}_{I_k,:} (X_k-X_*) \tilde{B}_{:,J_k}\|_F^2\\
&&\leq u_{\max}^2 v_{\max}^2 \gamma_{\max}^2(A) \gamma_{\max}^2(B) \|\tilde{A}_{I_k,:}(X_k-X_*)\tilde{B}_{:,J_k}\|_F^2.
\end{eqnarray*}
Hence
\begin{eqnarray*}
\|X_{k+1}-X_*\|_F^2&\leq&\|X_k-X_*\|_F^2-\big(2\alpha u_{\min}v_{\min}\\
&&-\alpha^2 u_{\max}^2 v_{\max}^2 \gamma_{\max}^2(A) \gamma_{\max}^2(B) \big) \|\tilde{A}_{I_k,:}(X_k-X_*)\tilde{B}_{:,J_k}\|_F^2.
\end{eqnarray*}
In order to ensure strict decrease of the sequence $\{\|X_k-X_*\|_F^2\}_{k=0}^{\infty}$, we need
\begin{equation*}
2\alpha u_{\min}v_{\min}-\alpha^2 u_{\max}^2 v_{\max}^2 \gamma_{\max}^2(A) \gamma_{\max}^2(B)>0.
\end{equation*}
Hence, the stepsize
\begin{equation*}
0<\alpha<\tfrac{2u_{\min}v_{\min}}{u_{\max}^2v_{\max}^2\gamma_{\max}^2(A) \gamma_{\max}^2(B)}\leq2L_k,
\end{equation*}
and the optimal stepsize is obtained by maximizing
\begin{equation*}
2\alpha u_{\min}v_{\min}-\alpha^2 u_{\max}^2 v_{\max}^2 \gamma_{\max}^2(A) \gamma_{\max}^2(B)
\end{equation*}
with respect to $\alpha$,
which leads to $\alpha_*=\tfrac{u_{\min}v_{\min}}{u_{\max}^2v_{\max}^2\gamma_{\max}^2(A) \gamma_{\max}^2(B)}$.
Hence, taking stepsize $\alpha=\eta\alpha_*$ for some $\eta\in (0,2)$, we obtain
\begin{equation*}
\|X_{k+1}-X_*\|_F^2\leq\|X_k-X_*\|_F^2-\eta(2-\eta)\phi\|\tilde{A}_{I_k,:}(X_k-X_*)\tilde{B}_{:,J_k}\|_F^2.
\end{equation*}
Taking conditional expectations, we get
\begin{equation}\label{eq:p-1a}
\mathbb{E}[\|X_{k+1}-X_*\|_F^2|X_k]\leq\|X_k-X_*\|_F^2-\eta(2-\eta)\phi \mathbb{E}[\|\tilde{A}_{I_k,:}(X_k-X_*)\tilde{B}_{:,J_k}\|_F^2|X_k].
\end{equation}
We note that
\begin{eqnarray*}
&&\mathbb{E}[\|\tilde{A}_{I_k,:}(X_k-X_*)\tilde{B}_{:,J_k}\|_F^2|X_k]\\
&&=\left\langle \mathbb{E}[(\tilde{A}_{I_k,:})^T\tilde{A}_{I_k,:}](X_k-X_*),
(X_k-X_*)\mathbb{E}[\tilde{B}_{:,J_k}(\tilde{B}_{:,J_k})^T]\right\rangle_F\\
&&=\left\langle (D_{_A}A)^T(D_{_A}A)(X_k-X_*), (X_k-X_*)(BD_{_B})(BD_{_B})^T\right\rangle_F\\
&&=\|(D_{_A}A)(X_k-X_*)(BD_{_B})\|_F^2.
\end{eqnarray*}
Using the fact that
\begin{eqnarray*}
\|(D_{_A}A)(X_k-X_*)(BD_{_B})\|_F^2&=&\|[(BD_{_B})^T\otimes (D_{_A}A)]{\rm vec}(X_k-X_*)\|_2^2\\
&=&\|(D_{_B}^T\otimes D_{_A})(B^T\otimes A){\rm vec}(X_k-X_*)\|_2^2\\
&\geq&\sigma_{\min}^2(D_{_B}^T\otimes D_{_A})\sigma_{\min}^2(B^T\otimes A)\|X_k-X_*\|_F^2\\
&=&\sigma_{\min}^2(D_{_A})\sigma_{\min}^2(A)\sigma_{\min}^2(D_{_B})\sigma_{\min}^2(B)\|X_k-X_*\|_F^2,
\end{eqnarray*}
we have
\begin{equation}\label{eq:p-2a}
\scaleto{\mathbb{E}[\|\tilde{A}_{I_k,:}(X_k-X_*)\tilde{B}_{:,J_k}\|_F^2|X_k]\geq \sigma_{\min}^2(D_{_A}) \sigma_{\min}^2(A) \sigma_{\min}^2(D_{_B}) \sigma_{\min}^2(B) \|X_k-X_*\|_F^2.}{11pt}
\end{equation}
Thus, combining \eqref{eq:p-1a} and \eqref{eq:p-2a}, we obtain the following estimate:
\begin{equation*}
\scaleto{
\mathbb{E}[\|X_{k+1}-X_*\|_F^2|X_k]\leq
\left(1-\eta(2-\eta)\phi \sigma_{\min}^2(D_{_A})\sigma_{\min}^2(A)\sigma_{\min}^2(D_{_B})\sigma_{\min}^2(B)\right)\|X_k-X_*\|_F^2.}{11pt}
\end{equation*}
Taking the full expectation of both sides, we conclude that
\begin{equation*}
\scaleto{
\mathbb{E}\|X_{k+1}-X_*\|_F^2\leq \left(1-\eta(2-\eta) \phi \sigma_{\min}^2(D_{_A})\sigma_{\min}^2(A)\sigma_{\min}^2(D_{_B})\sigma_{\min}^2(B) \right)\mathbb{E}\|X_k-X_*\|_F^2.}{11pt}
\end{equation*}
By induction, we complete the proof.
\end{proof}
\begin{remark}\label{Re:r-3}
Let $[m]=\{I_1, \cdots, I_s\}$ and $[n]=\{J_1, \cdots, J_t\}$ be partitions of $[m]$ and $[n]$, respectively. The index sets $I_k\subseteq[m]$ and $J_k\subseteq[n]$ are selected according to probability distribution
\begin{equation*}
\mathbb{P}(I_k)=\tfrac{\|A_{I_k,:}\|_F^2}{\|A\|_F^2}~{\rm and}~\mathbb{P}(J_k)=\tfrac{\|B_{:,J_k}\|_F^2}{\|B\|_F^2},
\end{equation*}
respectively. In \Cref{alg:GRABK}, we take the weights $u_i^{k}=\tfrac{\|A_{i,:}\|_2^2}{\|A_{I_k,:}\|_F^2}$ and $v_j^{k}=\tfrac{\|B_{:,j}\|_2^2}{\|B_{:,J_k}\|_F^2}$ for all $k\geq0$, and the stepsize $\alpha=\eta \tfrac{1}{\beta_{\max}^2(A)\beta_{\max}^2(B)}$ for some $\eta\in(0,2)$. In this case, we have the following error estimate
\begin{equation}\label{eq:cov-2-1}
\mathbb{E}\|X_k-X_*\|_F^2\leq\left(1-\eta(2-\eta)\tfrac{\sigma_{\min}^2(A)}{\|A\|_F^2 \beta_{\max}^2(A)}\tfrac{\sigma_{\min}^2(B)}{\|B\|_F^2 \beta_{\max}^2(B)}\right)^k\|X_0-X_*\|_F^2,
\end{equation}
where $\beta_{\max}(A)$ and $\beta_{\max}(B)$ are shown in equation \eqref{eq:beta}.
\end{remark}
\begin{remark}
By \Cref{Re:r-3}, we know that $0<\alpha<\tfrac{2}{\beta_{\max}^2(A)\beta_{\max}^2(B)}$ guarantees the convergence of the error $\mathbb{E}\|X_k-X_*\|_F^2$. However, since the error estimate \eqref{eq:cov-2-1} usually is not sharp, a stepsize $\alpha$ satisfying $\tfrac{2}{\beta_{\max}^2(A)\beta_{\max}^2(B)}<\alpha<2\tfrac{\|A\|_F^2\|B\|_F^2}{\sigma_{\max}^2(A)\sigma_{\max}^2(B)}$ may also result in convergence.
\end{remark}
\begin{remark}\label{Re:r-4}
Assume that the matrix $A$ is row normalized, i.e., $\|A_{i,:}\|=1$, and that the matrix $B$ is column normalized, i.e., $\|B_{:,j}\|=1$. For the index sets $I_k\subseteq[m]$ and $J_k\subseteq[n]$, the sampled blocks have the same sizes $|I_k|=\tau_1$ and $|J_k|=\tau_2$ for all $k\geq0$.
In this case, we have $\mathbb{P}(I_k)=\frac{\tau_1}{m}$ and $\mathbb{P}(J_k)=\frac{\tau_2}{n}$. Let us consider the particular choices $\eta=1$ and the weights $u_i=\tfrac{1}{\tau_1}$ and $v_j=\tfrac{1}{\tau_2}$. Since
\begin{eqnarray*}
\gamma_{\max}^2(A)&=&\max_{I_k\subseteq [m]}\sigma_{\max}^2(A_{I_k,:})=\tau_1\beta_{\max}^2(A),\\
\gamma_{\max}^2(B)&=&\max_{J_k\subseteq [n]}\sigma_{\max}^2(B_{:,J_k})=\tau_2\beta_{\max}^2(B).
\end{eqnarray*}
Then, the convergence rates \eqref{eq:cov-2} and \eqref{eq:cov-2-1} become
\begin{equation}\label{eq:cov-2-2}
\mathbb{E}\|X_k-X_*\|_F^2\leq\left(1-\tfrac{\tau_1\tau_2}{\gamma_{\max}^2(A)\gamma_{\max}^2(B)}
\tfrac{\sigma_{\min}^2(A)}{m}\tfrac{\sigma_{\min}^2(B)}{n} \right)^k\|X_0-X_*\|_F^2,
\end{equation}
where $\gamma_{\max}(A)$ and $\gamma_{\max}(B)$ are shown in equation \eqref{eq: gamma}.
\end{remark}
Comparing \eqref{eq:cov-2-2} with the convergence rate \eqref{eq:cov-1-2}, since $\tfrac{\tau_1}{\gamma_{\max}^2(A)}$ and $\tfrac{\tau_2}{\gamma_{\max}^2(B)}$ are greater than or equal to 1, we see that the convergence factor of the GRABK method is smaller than that of the GRK method, which reveals that the GRABK method yields a significant speed-up.
\subsection{The global randomized average block Kaczmarz algorithm with adaptive stepsize}
Since the GRABK method involves a stepsize $\alpha$ that depends on the geometric properties of $A, B$ and the submatrices $A_{I_k,:}, B_{:, J_k}$, which may be difficult to compute for large-scale matrix equations, we next design a randomized average block Kaczmarz method with adaptive stepsize, which does not require the computation of $\gamma_{\max}^2(A)$, $\gamma_{\max}^2(B)$, $A_{I,:}^\dag$, or $B_{:,J}^\dag$. To simplify the notation, we define $\hat{u}_i^{k}=\frac{u_i^{k}}{\|A_{i,:}\|^2}$ and $\hat{v}_j^{k}=\frac{v_j^{k}}{\|B_{:,j}\|^2}$. Thus, the iterative formula \eqref{eq:it-4} becomes
\begin{equation*}
X_{k+1}=X_k+\alpha_k\left(\sum_{i\in I_k,j\in J_k}\hat{u}_i^{k}\hat{v}_j^{k}A_{i,:}^T(C_{ij}-A_{i,:}X_kB_{:,j})B_{:,j}^T\right).
\end{equation*}
By \eqref{Ieq:sizestep}, for each iteration, we consider the adaptive stepsize of the form
\begin{equation}\label{eq:stepsize}
\alpha_k=\eta L_k,~{\rm where}~L_k=\tfrac{\sum_{i\in I_k,j\in J_k}\hat{u}_i^{k}\hat{v}_j^{k}(C_{ij}-A_{i,:}X_kB_{:,j})^2}
{\left\|\sum_{i\in I_k,j\in J_k}\hat{u}_i^{k}\hat{v}_j^{k}A_{i,:}^T(C_{ij}-A_{i,:}X_kB_{:,j})B_{:,j}^T\right\|_F^2}
\end{equation}
for some $\eta\in(0,2)$. The convergence of the GRABK method with adaptive stepsize $\alpha_k$ is guaranteed by \Cref{Th:cov-3}. The convergence rate of the GRABK method with adaptive stepsize $\alpha_k$ depends explicitly on the geometric properties of the matrices $A, B$ and the submatrices $A_{I_k,:}, B_{:, J_k}$.
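A sketch of one adaptive-stepsize GRABK step, with the weight choice $u_i^{k}=\|A_{i,:}\|^2/\|A_{I_k,:}\|_F^2$ and $v_j^{k}=\|B_{:,j}\|^2/\|B_{:,J_k}\|_F^2$ (cf.~\Cref{Re:r-5} below), for which $\hat{u}_i^{k}=1/\|A_{I_k,:}\|_F^2$, $\hat{v}_j^{k}=1/\|B_{:,J_k}\|_F^2$ and $L_k$ simplifies accordingly, reads:
\begin{verbatim}
import numpy as np

def grabk_adaptive_step(X, A, B, C, I, J, eta=1.0):
    AI, BJ = A[I, :], B[:, J]
    wA = np.linalg.norm(AI, 'fro')**2
    wB = np.linalg.norm(BJ, 'fro')**2
    R = C[np.ix_(I, J)] - AI @ X @ BJ     # sketched residual
    G = AI.T @ R @ BJ.T                   # search direction (up to 1/(wA*wB))
    # with these weights, L_k = wA*wB*||R||_F^2 / ||G||_F^2, so the
    # update X + eta*L_k*G/(wA*wB) equals X + eta*||R||_F^2/||G||_F^2 * G
    Lk = wA * wB * np.linalg.norm(R, 'fro')**2 / np.linalg.norm(G, 'fro')**2
    return X + eta * Lk * G / (wA * wB)
\end{verbatim}
No spectral quantities or pseudoinverses are required; only one sketched residual and two Frobenius norms per step.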
\begin{theorem}\label{Th:cov-3}
Let $X_*$ be the minimal Frobenius norm solution of $AXB=C$, and $X_k$ be the $k$th approximation of $X_*$ generated by the GRABK method with the weights $0<u_{\min}\leq u_i^{k}\leq u_{\max}<1$ for all $i\in [m]$, and $0<v_{\min}\leq v_j^{k}\leq v_{\max}<1$ for all $j\in [n]$, and the stepsize $\alpha_k=\eta L_k$ for some $\eta\in(0,2)$. Then, the expected norm of the error at the $k$th iteration satisfies
\begin{equation}\label{eq:cov-3}
\scaleto{
\mathbb{E}\|X_k-X_*\|_F^2\leq \left(1-\eta(2-\eta)\psi
\sigma_{\min}^2(D_{_A}) \sigma_{\min}^2(D_{_B}) \sigma_{\min}^2(A) \sigma_{\min}^2(B) \right)^k\|X_0-X_*\|_F^2,}{13pt}
\end{equation}
where $\psi=\tfrac{u_{\min}v_{\min}}{u_{\max}v_{\max}\gamma_{\max}^2(A) \gamma_{\max}^2(B)}$, and
the block diagonal matrices $D_{_A}$ and $D_{_B}$ are shown in equation \eqref{eq:D-1}.
\end{theorem}
\begin{proof}
Let $\hat{A}_{i,:}=(\hat{u}_i^{k})^\frac{1}{2}A_{i,:}$ and $\hat{B}_{:,j}=B_{:,j}(\hat{v}_j^{k})^\frac{1}{2}$. Hence
\begin{equation*}
X_{k+1}=X_k+\alpha_k\left(\sum_{i\in I_k,j\in J_k}(\hat{A}_{i,:})^T \hat{A}_{i,:} (X_*-X_k) \hat{B}_{:,j}(\hat{B}_{:,j})^T\right),
\end{equation*}
and
\begin{equation*}
L_k=\tfrac{\sum_{i\in I_k,j\in J_k}[\hat{A}_{i,:}(X_k-X_*)\hat{B}_{:,j}]^2}
{\left\|\sum_{i\in I_k,j\in J_k}(\hat{A}_{i,:})^T \hat{A}_{i,:}(X_*-X_k)\hat{B}_{:,j}(\hat{B}_{:,j})^T\right\|_F^2}.
\end{equation*}
Using that
\begin{equation*}
\left\langle X_k-X_*, (\hat{A}_{i,:})^T \hat{A}_{i,:} (X_*-X_k) \hat{B}_{:,j}(\hat{B}_{:,j})^T\right\rangle_F
=-[\hat{A}_{i,:}(X_k-X_*)\hat{B}_{:,j}]^2,
\end{equation*}
we get
\begin{eqnarray*}
\|X_{k+1}-X_*\|_F^2&=&\|X_k-X_*\|_F^2-2\alpha_k \sum_{i\in I_k,j\in J_k}[\hat{A}_{i,:}(X_k-X_*)\hat{B}_{:,j}]^2 \\
&&+\alpha_k^2\left\|\sum_{i\in I_k,j\in J_k}(\hat{A}_{i,:})^T \hat{A}_{i,:} (X_*-X_k) \hat{B}_{:,j}(\hat{B}_{:,j})^T\right\|_F^2\\
&=&\|X_k-X_*\|_F^2-2\eta L_k\sum_{i\in I_k,j\in J_k}[\hat{A}_{i,:}(X_k-X_*)\hat{B}_{:,j}]^2\\
&&+\eta^2L_k^2\left\|\sum_{i\in I_k,j\in J_k}(\hat{A}_{i,:})^T \hat{A}_{i,:} (X_*-X_k) \hat{B}_{:,j}(\hat{B}_{:,j})^T\right\|_F^2\\
&=&\|X_k-X_*\|_F^2-\eta(2-\eta)L_k\sum_{i\in I_k,j\in J_k}[\hat{A}_{i,:}(X_k-X_*)\hat{B}_{:,j}]^2.
\end{eqnarray*}
Since
\begin{eqnarray*}
&&\left\|\sum_{i\in I_k,j\in J_k}(\hat{A}_{i,:})^T \hat{A}_{i,:}(X_*-X_k)\hat{B}_{:,j}(\hat{B}_{:,j})^T\right\|_F^2\\
&&=\left\|(\hat{A}_{I_k,:})^T\hat{A}_{I_k,:} (X_k-X_*) \hat{B}_{:,J_k}(\hat{B}_{:,J_k})^T\right\|_F^2 \\
&&=\left\|[\hat{B}_{:,J_k}\otimes(\hat{A}_{I_k,:})^T]{\rm vec}[\hat{A}_{I_k,:} (X_k-X_*) \hat{B}_{:,J_k}]\right\|_2^2 \\
&&\leq \sigma_{\max}^2\left(\hat{B}_{:,J_k}\otimes(\hat{A}_{I_k,:})^T\right) \left\|\hat{A}_{I_k,:} (X_k-X_*) \hat{B}_{:,J_k}\right\|_F^2 \\
&&\leq \sigma_{\max}^2(\hat{A}_{I_k,:})\sigma_{\max}^2(\hat{B}_{:,J_k}) \left\|\hat{A}_{I_k,:} (X_k-X_*) \hat{B}_{:,J_k}\right\|_F^2,
\end{eqnarray*}
where $\hat{A}_{I_k,:}=U_{I_k}\tilde{A}_{I_k,:}$ and $\hat{B}_{:,J_k}=\tilde{B}_{:,J_k}V_{J_k}$,
with the diagonal matrices $U_{I_k}=\diag ((u_i^{k})^{\frac{1}{2}},~i\in I_k)$ and $V_{J_k}=\diag((v_j^{k})^{\frac{1}{2}},~j\in J_k)$, and $D_{I_k}$ and $D_{J_k}$ are defined in equation \eqref{eq:D}. In addition
\begin{equation*}
\sum_{i\in I_k,j\in J_k}[\hat{A}_{i,:}(X_k-X_*)\hat{B}_{:,j}]^2=\left\|\hat{A}_{I_k,:} (X_k-X_*) \hat{B}_{:,J_k} \right\|_F^2.
\end{equation*}
Furthermore
\begin{equation*}
\sigma_{\max}^2(\hat{A}_{I_k,:})\leq u_{\max}\sigma_{\max}^2(\tilde{A}_{I_k,:})\leq u_{\max}\gamma_{\max}^2(A)
\end{equation*}
and
\begin{equation*}
\sigma_{\max}^2(\hat{B}_{:,J_k})\leq v_{\max}\sigma_{\max}^2(\tilde{B}_{:,J_k})\leq v_{\max}\gamma_{\max}^2(B).
\end{equation*}
Hence
\begin{equation*}
L_k\geq\tfrac{1}{\sigma_{\max}^2(\hat{A}_{I_k,:})\sigma_{\max}^2(\hat{B}_{:,J_k})} \geq\tfrac{1}{u_{\max}v_{\max}\gamma_{\max}^2(A)\gamma_{\max}^2(B)}.
\end{equation*}
Therefore
\begin{eqnarray*}
\|X_{k+1}-X_*\|_F^2&\leq&\|X_k-X_*\|_F^2-\tfrac{\eta(2-\eta)}{u_{\max}v_{\max}\gamma_{\max}^2(A)\gamma_{\max}^2(B)} \left\|\hat{A}_{I_k,:} (X_k-X_*) \hat{B}_{:,J_k} \right\|_F^2\\
&\leq&\|X_k-X_*\|_F^2-\tfrac{\eta(2-\eta)u_{\min}v_{\min}}{u_{\max}v_{\max}\gamma_{\max}^2(A)\gamma_{\max}^2(B)} \left\|\tilde{A}_{I_k,:} (X_k-X_*) \tilde{B}_{:,J_k} \right\|_F^2.
\end{eqnarray*}
Taking the conditional expectation and using \eqref{eq:p-2a}, we get
\begin{equation*}
\scaleto{
\mathbb{E}[\|X_{k+1}-X_*\|_F^2|X_k] \leq
\left(1-\eta(2-\eta)\psi\sigma_{\min}^2(D_{_A})\sigma_{\min}^2(A)\sigma_{\min}^2(D_{_B})\sigma_{\min}^2(B)\right)\|X_k-X_*\|_F^2.}{11pt}
\end{equation*}
Taking the full expectation of both sides, we conclude that
\begin{equation*}
\scaleto{
\mathbb{E}\|X_{k+1}-X_*\|_F^2\leq
\left(1-\eta(2-\eta)\psi \sigma_{\min}^2(D_{_A})\sigma_{\min}^2(A)\sigma_{\min}^2(D_{_B})\sigma_{\min}^2(B)\right)\mathbb{E}\|X_k-X_*\|_F^2.}{11pt}
\end{equation*}
By induction, we complete the proof.
\end{proof}
\begin{remark}\label{Re:r-5}
Under the conditions of \Cref{Re:r-3}, we take the adaptive stepsize $\alpha_k=\eta L_k$ for some $\eta\in(0,2)$ and the weights $u_i^{k}=\tfrac{\|A_{i,:}\|_2^2}{\|A_{I_k,:}\|_F^2}$ and $v_j^{k}=\tfrac{\|B_{:,j}\|_2^2}{\|B_{:,J_k}\|_F^2}$ for all $k\geq0$ in \Cref{alg:GRABK}. In this case, $L_k\geq\tfrac{1}{\beta_{\max}^2(A)\beta_{\max}^2(B)}$, and we have the following error estimate
\begin{equation}\label{eq:cov-3-1}
\mathbb{E}\|X_k-X_*\|_F^2\leq\left(1-\eta(2-\eta)\tfrac{\sigma_{\min}^2(A)}{\|A\|_F^2 \beta_{\max}^2(A)}\tfrac{\sigma_{\min}^2(B)}{\|B\|_F^2 \beta_{\max}^2(B)}\right)^k\|X_0-X_*\|_F^2.
\end{equation}
\end{remark}
\begin{remark}Under the assumptions and conditions of \Cref{Re:r-4}, the convergence rate \eqref{eq:cov-3} becomes
\begin{equation}\label{eq:cov-3-2}
\mathbb{E}\|X_k-X_*\|_F^2\leq\left(1-\tfrac{\tau_1\tau_2}{\gamma_{\max}^2(A)\gamma_{\max}^2(B)}
\tfrac{\sigma_{\min}^2(A)}{m}\tfrac{\sigma_{\min}^2(B)}{n} \right)^k\|X_0-X_*\|_F^2.
\end{equation}
We observe that this convergence rate is the same as \eqref{eq:cov-2-2}. However, \Cref{alg:GRABK} with adaptive stepsize has greater potential for acceleration.
\end{remark}
\begin{remark}
Consider the particular choice $\eta=1$: the convergence rates \eqref{eq:cov-2-1} and \eqref{eq:cov-3-1} become \eqref{eq:cov-1-1}, which implies that the GRBK and GRABK methods have the same convergence rate. However, for solving large-scale matrix equations, the GRABK method can run in parallel and does not need to compute pseudoinverses. As a result, the GRABK method requires less CPU time than the GRBK method.
\end{remark}
There is a tight connection between the constant stepsize \eqref{eq:size} and the adaptive stepsize \eqref{eq:stepsize}.
Lower bounds for $L_k$ were derived in the proofs of \Cref{Th:cov-2,Th:cov-3}. Since $\tfrac{u_{\min}}{u_{\max}}\leq 1$ and $\tfrac{v_{\min}}{v_{\max}} \leq 1$, it holds that
\begin{equation*}
L_k\geq\tfrac{1}{u_{\max}v_{\max}\gamma_{\max}^2(A)\gamma_{\max}^2(B)}\geq \tfrac{u_{\min}v_{\min}}{u_{\max}^2v_{\max}^2\gamma_{\max}^2(A)\gamma_{\max}^2(B)}=\alpha_*.
\end{equation*}
Hence, the adaptive stepsize \eqref{eq:stepsize} can be viewed as a practical approximation of the constant stepsize \eqref{eq:size}. However, the GRABK method with adaptive stepsize \eqref{eq:stepsize} has greater potential for acceleration, since the adaptive stepsize is, in general, larger than its constant counterpart.
\section{Numerical results}\label{Sec:NR}
In this section, we investigate the computational behavior of GRABK for solving various matrix equations, and compare GRABK with the RK method \cite{Strohmer2009} (i.e., the randomized Kaczmarz method applied to the linear system $(B^T \otimes A) {\rm vec}(X) = {\rm vec} (C)$) and the RBCD method \cite{Du22}. All experiments are carried out using MATLAB (version R2017b) on a personal computer with 1.60 GHz central processing unit (Intel(R) Core(TM) i5-8265U CPU), 8.00 GB memory, and Windows operating system (64 bit Windows 10). We divide our tests into three broad categories: synthetic dense data, real-world sparse data, and an application to image restoration.
To construct a matrix equation, we set $C=AXB$, where $X$ is a random matrix with entries generated from a standard normal distribution. All computations are started from the initial guess $X_0 = 0$, and terminated once the \emph{relative error} (RE) of the solution, defined by
\begin{equation*}
{\rm RE}=\tfrac{\|X_k-X_*\|_F^2}{\|X_*\|_F^2}
\end{equation*}
at the current iterate $X_k$ satisfies $\text{RE} < 10^{-6}$, or once the maximum number of iterations is exceeded, where $X_*=A^{\dag}CB^{\dag}$. We report the average number of iterations (IT) and the average CPU time in seconds (CPU) over 10 repeated runs. We consider the following GRBK and GRABK variants:
\begin{itemize}
\item GRBK: GRBK with partition sampling as in \Cref{Re:r-1}.
\item GRK: GRBK with the block index sets size $|I_k | = |J_k | = 1$ as in \Cref{Re:r-2}.
\item GRABK-c: GRABK with partition sampling and constant stepsize $\alpha=\tfrac{\eta}{\beta_{\max}^2(A)\beta_{\max}^2(B)}$ for some $\eta\in(0,2)$ as in \Cref{Re:r-3}.
\item GRABK-a: GRABK with partition sampling and adaptive stepsize $\alpha_k=\eta L_k$ for some $\eta\in(0,2)$ as in \Cref{Re:r-5}.
\end{itemize}
For the block methods, we assume that $\{I_1, \cdots, I_s\}$ and $\{J_1, \cdots, J_t\}$ are partitions of $[m]$ and $[n]$, respectively, and that the sampled blocks have the same sizes $|I_k|=\tau_1$ and $|J_k|=\tau_2$, where
\begin{equation*}
I_i=
\begin{cases}
\{(i-1)\tau_1+1, (i-1)\tau_1+2, \cdots, i\tau_1\}, &i=1,2,\cdots,s-1, \\
\{(s-1)\tau_1+1, (s-1)\tau_1+2, \cdots, m\}, &i=s,
\end{cases}
\end{equation*}
and
\begin{equation*}
J_j=
\begin{cases}
\{(j-1)\tau_2+1, (j-1)\tau_2+2, \cdots, j\tau_2\}, &j=1,2,\cdots,t-1, \\
\{(t-1)\tau_2+1, (t-1)\tau_2+2, \cdots, n\}, &j=t.
\end{cases}
\end{equation*}
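In code, the partitions above are straightforward to set up. The following Python helper is an illustrative sketch (with 0-based indices, an implementation choice):
\begin{verbatim}
import numpy as np

def uniform_partition(m, tau):
    # Split {0,...,m-1} into s blocks: the first s-1 blocks have size
    # tau and the last block collects the remaining indices, mirroring
    # the definition of I_i and J_j above (shifted to 0-based indexing).
    s = max(m // tau, 1)
    blocks = [np.arange(i * tau, (i + 1) * tau) for i in range(s - 1)]
    blocks.append(np.arange((s - 1) * tau, m))
    return blocks

row_blocks = uniform_partition(500, 100)   # {I_1, ..., I_s}
col_blocks = uniform_partition(500, 50)    # {J_1, ..., J_t}
\end{verbatim}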
\subsection{Synthetic dense data}
Synthetic dense data for this test is generated as follows:
\begin{itemize}
\item Type I: For given $m,p$, and $r_1={\rm rank}(A)$, we construct a matrix $A$ by
\begin{equation*}
A=U_1D_1V_1^T,
\end{equation*}
where $U_1\in \mathbb{R}^{m\times r_1}$ and $V_1\in \mathbb{R}^{p\times r_1}$ are matrices with orthonormal columns. The entries of $U_1$ and $V_1$ are generated from a standard normal distribution, and the columns are then orthogonalized, i.e.,
\begin{equation*}
[U_1,\sim]={\rm qr(randn}(m,r_1),0),[V_1,\sim]={\rm qr(randn}(p,r_1),0).
\end{equation*}
The matrix $D_1$ is an $r_1\times r_1$ diagonal matrix whose diagonal entries are uniformly distributed in $(1,2)$, i.e.,
\begin{equation*}
D_1=\diag(1+{\rm rand}(r_1,1)).
\end{equation*}
Similarly, for given $q,n$, and $r_2={\rm rank}(B)$, we construct a matrix $B$ by
\begin{equation*}
B=U_2D_2V_2^T,
\end{equation*}
where $U_2\in \mathbb{R}^{q\times r_2}$ and $V_2\in \mathbb{R}^{n\times r_2}$ are matrices with orthonormal columns, and the matrix $D_2$ is an $r_2\times r_2$ diagonal matrix constructed analogously (a minimal sketch of this construction is given after the list).
\item Type II: For given $m,p,q,n$, the entries of $A$ and $B$ are generated from a standard normal distribution, i.e.,
\begin{equation*}
A={\rm randn}(m,p),~B={\rm randn}(q,n).
\end{equation*}
\end{itemize}
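As a minimal NumPy sketch of the two test-data types (the seed and dimensions are illustrative):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def type_I(rows, cols, r):
    # A = U D V^T with orthonormal columns (qr(randn(.),0) in MATLAB)
    # and diagonal entries uniformly distributed in (1,2).
    U, _ = np.linalg.qr(rng.standard_normal((rows, r)))
    V, _ = np.linalg.qr(rng.standard_normal((cols, r)))
    d = 1.0 + rng.random(r)
    return U @ np.diag(d) @ V.T

A = type_I(500, 250, 150)                      # Type I
B = type_I(250, 500, 150)
A2 = rng.standard_normal((500, 250))           # Type II: randn(m,p)
B2 = rng.standard_normal((250, 500))
C = A @ rng.standard_normal((250, 250)) @ B    # consistent C = A X B
\end{verbatim}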
\begin{figure}[!htb]
\centering
\subfigure[Type I]{\includegraphics[width=5in]{fig1}}
\subfigure[Type II]{\includegraphics[width=5in]{fig2}}
\caption{The relative error of GRABK-c with block size $\tau_1=\tau_2=50$ and different stepsizes $\alpha=\tfrac{\eta}{\beta_{\max}^2(A)\beta_{\max}^2(B)}$ for two matrix equations.}
\label{fig1}
\end{figure}
\begin{figure}[!htb]
\centering
\subfigure[Type I]{\includegraphics[width=5in]{fig3}}
\subfigure[Type II]{\includegraphics[width=5in]{fig4}}
\caption{The relative error of GRABK-a with block size $\tau_1=\tau_2=50$ and different stepsizes $\alpha_k=\eta L_k$ for two matrix equations.}
\label{fig2}
\end{figure}
In \Cref{fig1}, we plot the relative error of GRABK-c with a fixed block size $\tau_1=\tau_2=50$ and different stepsizes $\alpha=\tfrac{\eta}{\beta_{\max}^2(A)\beta_{\max}^2(B)}$ for two matrix equations with Type I ($A=U_1D_1V_1^T$ with $m=500, p=250,r_1=150$ and $B=U_2D_2V_2^T$ with $n=500, q=250, r_2=150$) and Type II ($A={\rm randn}(500,250)$ and $B={\rm randn}(250,500)$). Similarly, in \Cref{fig2}, we plot the relative error of GRABK-a with a fixed block size $\tau_1=\tau_2=50$ and different stepsizes $\alpha_k=\eta L_k$ for the same two matrix equations. From \Cref{fig1,fig2}, we observe that the convergence of GRABK-c and GRABK-a first becomes faster as the stepsize increases, then becomes slower after reaching the fastest rate, and the iteration may even fail to converge. Hence, in order to ensure convergence while balancing the convergence rate, we choose the stepsize $\alpha=\tfrac{1.95}{\beta_{\max}^2(A)\beta_{\max}^2(B)}$ in GRABK-c and $\alpha_k=L_k$ in GRABK-a.
\begin{table}[!htb]
\centering
\caption{The average IT and CPU of RK, GRK, RBCD, GRBK, GRABK-c, and GRABK-a for matrix equations with Type I.}
\scalebox{0.6}{
\begin{tabular}{lllllllllllllll}
\toprule
$m$ & $p$ & $r_1$& $\tau_1$ & $q$ & $n$ & $r_2$ & $\tau_2$ & & RK & GRK & RBCD & GRBK & GRABK-c & GRABK-a \\
\midrule
50 & 20 & 10 & 10 & 20 & 50 & 20 & 10 & IT & 5386.6 & 5186.3 & 473.1 & 26.0 & 185.7 & 123.9 \\
& & & & & & & & CPU & 0.2052 & 0.1430 & 0.0168 & 0.0047 & 0.0071 & 0.0053 \\
\midrule
50 & 20 & 10 & 10 & 20 & 50 & 10 & 5 & IT & 2275.6 & 2353.4 & - & 21.3 & 206.1 & 108.3 \\
& & & & & & & & CPU & 0.0842 & 0.0631 & - & 0.0029 & 0.0072 & 0.0038 \\
\midrule
100 & 40 & 20 & 20 & 40 & 100 & 40 & 20 & IT & 19789.9 & 19764.0 & 1045.2 & 24.3 & 216.9 & 131.0 \\
& & & & & & & & CPU & 2.8006 & 0.5695 & 0.0753 & 0.0094 & 0.0111 & 0.0072 \\
\midrule
100 & 40 & 20 & 20 & 40 & 100 & 20 & 10 & IT & 9289.4 & 9577.2 & - & 23.5 & 213.8 & 118.3 \\
& & & & & & & & CPU & 1.3268 & 0.2785 & - & 0.0066 & 0.0106 & 0.0057 \\
\midrule
500 & 100 & 50 & 100 & 40 & 100 & 40 & 20 & IT & 48986.7 & 48786.7 & 1169.8 & 28.4 & 178.3 & 91.0 \\
& & & & & & & & CPU & 28.6252 & 1.9705 & 0.4446 & 0.0435 & 0.0286 & 0.0127 \\
\midrule
500 & 100 & 50 & 100 & 40 & 100 & 20 & 10 & IT & 24653.3 & 24326.1 & - & 25.7 & 171.0 & 92.9 \\
& & & & & & & & CPU & 13.9705 & 1.0288 & - & 0.0345 & 0.0274 & 0.0121 \\
\midrule
500 & 100 & 50 & 100 & 100 & 500 & 100 & 50 & IT &$>$ &$>$ & 2569.3 & 24.7 & 172.6 & 88.5 \\
& & & & & & & & CPU &$>$ &$>$ & 4.49404 & 0.0632 & 0.04739 & 0.02415 \\
\midrule
500 & 100 & 50 & 100 & 100 & 500 & 50 & 25 & IT & $>$ & $>$& - & 23.0 & 197.7 & 89.9 \\
& & & & & & & & CPU & $>$ & $>$ & - & 0.0429 & 0.0461 & 0.0200 \\
\midrule
1000 & 200 & 100 & 200 & 100 & 500 & 100 & 50 & IT &$>$ & $>$ & 2348.1 & 24.4 & 162.7 & 87.4 \\
& & & & & & & & CPU &$>$ &$>$ & 13.4265 & 0.1847 & 0.1071 & 0.0538 \\
\midrule
1000 & 200 & 100 & 200 & 100 & 500 & 50 & 25 & IT & $>$ &$>$ &- & 23.6 & 217.0 & 93.7 \\
& & & & & & & & CPU & $>$ & $>$ & - & 0.1454 & 0.1048 & 0.0453 \\
\midrule
1000 & 200 & 100 & 200 & 200 & 1000 & 200 & 100 & IT & $>$ & $>$ & 5030.0 & 24.4 & 178.8 & 89.7 \\
& & & & & & & & CPU & $>$ & $>$ & 53.9234 & 0.2553 & 0.2133 & 0.1048 \\
\midrule
1000 & 200 & 100 & 200 & 200 & 1000 & 100 & 50 & IT & $>$ &$>$ & - & 24.3 & 206.3 & 96.3 \\
& & & & & & & & CPU & $>$ &$>$ & - & 0.1701 & 0.1779 & 0.0753 \\
\midrule
2000 & 400 & 200 & 400 & 400 & 2000 & 400 & 200 & IT & $>$ &$>$ &$>$ & 23.7 & 195.5 & 95.1 \\
& & & & & & & & CPU & $>$ & $>$ & $>$ & 0.9242 & 1.4372 & 0.6825 \\
\midrule
2000 & 400 & 200 & 400 & 400 & 2000 & 200 & 100 & IT & $>$ & $>$ & - & 23.7 & 216.2 & 98.2 \\
& & & & & & & & CPU & $>$ & $>$ & - & 0.7470 & 1.2880 & 0.5734 \\
\midrule
5000 & 1000 & 750 & 500 & 1000 & 5000 & 1000 & 500 & IT & $>$ &$>$ &$>$ & 52.6 & 318.7 & 163.6 \\
& & & & & & & & CPU & $>$ & $>$ &$>$ & 13.2482 & 20.7552 & 10.2175 \\
\midrule
5000 & 1000 & 750 & 500 & 1000 & 5000 & 750 & 500 & IT & $>$ & $>$ & - & 35.3 & 279.2 & 134.5 \\
& & & & & & & & CPU & $>$ & $>$ & - & 8.6873 & 17.3383 & 8.1562\\
\bottomrule
\end{tabular}}
\label{tab1}
\end{table}
\begin{table}[!htb]
\centering
\caption{The average IT and CPU of RK, GRK, RBCD, GRBK, GRABK-c, and GRABK-a for matrix equations with Type II.}
\scalebox{0.7}{
\begin{tabular}{lllllllllllll}
\toprule
$m$ & $p$ & $\tau_1$ & $q$ & $n$ & $\tau_2$ & & RK & GRK & RBCD & GRBK & GRABK-c & GRABK-a \\
\midrule
50 & 20 & 10 & 20 & 50 & 10 & IT & 45699.0 & 45196.0 & 3909.1 & 177.3 & 1574.0 & 700.7 \\
& & & & & & CPU & 1.6867 & 1.1436 & 0.1296 & 0.0237 & 0.0532 & 0.0252 \\
\midrule
100 & 40 & 20 & 40 & 100 & 20 & IT & $>$ & $>$ & 9843.7 & 248.0 & 2855.7 & 1075.9 \\
& & & & & & CPU & $>$ & $>$ & 0.7973 & 0.1001 & 0.1465 & 0.0547 \\
\midrule
500 & 100 & 100 & 40 & 100 & 20 & IT & $>$ &$>$ & 3911.6 & 43.8 & 869.6 & 358.2 \\
& & & & & & CPU &$>$ & $>$ & 1.1979 & 0.0873 & 0.1466 & 0.0491 \\
\midrule
500 & 100 & 100 & 100 & 500 & 50 & IT & $>$ & $>$ & 4525.0 & 26.8 & 383.9 & 175.0 \\
& & & & & & CPU & $>$ & $>$ & 8.4209 & 0.0790 & 0.1067 & 0.0497 \\
\midrule
1000 & 200 & 200 & 100 & 500 & 50 & IT & $>$ & $>$ & 4988.4 & 28.7 & 429.1 & 188.4 \\
& & & & & & CPU &$>$ &$>$ & 30.0199 & 0.2781 & 0.2926 & 0.1276 \\
\midrule
1000 & 200 & 200 & 200 & 1000 & 100 & IT & $>$ & $>$ & 9893.0 & 27.3 & 415.1 & 182.2 \\
& & & & & & CPU & $>$ & $>$ & 110.7012 & 0.3463 & 0.5131 & 0.2240 \\
\midrule
5000 & 1000 & 500 & 1000 & 5000 & 500 & IT & $>$ &$>$ & $>$ & 99.2 & 614.5 & 285.8 \\
& & & & & & CPU & $>$ & $>$ &$>$ & 25.8886 & 40.5165 & 18.2868\\
\midrule
\end{tabular}}
\label{tab2}
\end{table}
In \Cref{tab1,tab2}, we report the average IT and CPU of RK, GRK, RBCD, GRBK, GRABK-c, and GRABK-a for solving matrix equations with Type I and Type II matrices. For RBCD, we use the same parameters as in reference \cite{Du22}. For GRBK, GRABK-c, and GRABK-a, we use the same block partition and block size. For GRABK-c and GRABK-a, the stepsizes $\alpha=\tfrac{1.95}{\beta_{\max}^2(A)\beta_{\max}^2(B)}$ and $\alpha_k=L_k$ are used, respectively. In the following tables, the item `$>$' indicates that the method exceeds the maximum number of iterations (50000) or the maximum CPU time (120s), and the item `-' indicates that the method does not converge. From these two tables, we observe the following.
The GRK method outperforms the RK method in terms of IT and CPU time. The IT and CPU time of both methods increase with the matrix dimension; however, the CPU time of the GRK method grows only slightly.
The GRBK, GRABK-c, and GRABK-a methods vastly outperform the RBCD method in terms of IT and CPU time. The RBCD method does not converge when the matrix $B$ does not have full row rank. When the matrix size is small, the GRBK method is competitive, because the calculation of the pseudoinverse is inexpensive and the number of iteration steps is small. When the matrix size is large, the GRABK-a method is more advantageous, because it does not need to calculate pseudoinverses and its stepsize is adaptive.
In \Cref{fig3}, we plot the relative error of GRABK-c with a fixed stepsize $\alpha=\tfrac{1.95}{\beta_{\max}^2(A)\beta_{\max}^2(B)}$
and different block sizes $\tau_1=\tau_2=\tau$ for two matrix equations with Type I ($A=U_1D_1V_1^T$ with $m=500, p=250,r_1=150$ and $B=U_2D_2V_2^T$ with $n=500, q=250, r_2=150$) and Type II ($A={\rm randn}(500,250)$ and $B={\rm randn}(250,500)$). Similarly, in \Cref{fig4}, we plot the relative error of GRABK-a with a fixed stepsize $\alpha_k=L_k$ and different block sizes $\tau_1=\tau_2=\tau$ for the same two matrix equations. From \Cref{fig3,fig4}, we observe that increasing the block size leads to a better convergence rate of the GRABK-c and GRABK-a methods. As the block size $\tau$ increases, the IT and CPU time first decrease, and then increase after reaching a minimum. This means that a proper block size $\tau$ can speed up the convergence of the GRABK-c and GRABK-a methods. If the GRABK-c and GRABK-a methods are implemented in parallel, the larger the block size $\tau$, the better.
\begin{figure}[!htb]
\centering
\subfigure[Type I]{\includegraphics[width=5in]{fig5}}
\subfigure[Type II]{\includegraphics[width=5in]{fig6}}
\caption{The relative error of GRABK-c with stepsize $\alpha=\tfrac{1.95}{\beta_{\max}^2(A)\beta_{\max}^2(B)}$ and different block sizes $\tau_1=\tau_2=\tau$ for two matrix equations.}
\label{fig3}
\end{figure}
\begin{figure}[!htb]
\centering
\subfigure[Type I]{\includegraphics[width=5in]{fig7}}
\subfigure[Type II]{\includegraphics[width=5in]{fig8}}
\caption{The relative error of GRABK-a with stepsize $\alpha_k=L_k$ and different block sizes $\tau_1=\tau_2=\tau$ for two matrix equations.}
\label{fig4}
\end{figure}
\subsection{Real-world sparse data}
We also test the RK, GRK, RBCD, GRBK, GRABK-c, and GRABK-a methods for solving matrix equations with sparse matrices from the Florida sparse matrix collection \cite{Davis2011}. In \Cref{tab3}, we report the average IT and CPU of these methods; their parameters are the same as in the previous subsection. We observe that the GRK method requires fewer iterations and less CPU time than the RK method. The GRBK, GRABK-c, and GRABK-a methods vastly outperform the RBCD method in terms of IT and CPU time. Hence, the GRBK, GRABK-c, and GRABK-a methods are competitive, because the product of a sparse matrix and a vector is inexpensive. When the matrix size is large, the GRABK-c and GRABK-a methods can be implemented in parallel.
\begin{table}[!htbp]
\centering
\caption{The average IT and CPU of RK, GRK, RBCD, GRBK, GRABK-c, and GRABK-a for matrix equations with sparse matrices from \cite{Davis2011}.}
\scalebox{0.6}{
\begin{tabular}{lllllllllllllll}
\toprule
$m$ & $p$ & $r_1$& $\tau_1$ & $q$ & $n$ & $r_2$ & $\tau_2$ & & RK & GRK & RBCD & GRBK & GRABK-c & GRABK-a \\
\midrule
\multicolumn{4}{l}{ash219} & \multicolumn{4}{l}{n4c5-b11} & IT & 19446.6 & 18920.1 & 317.6 & 663.7 & 3007.1 & 1237.9 \\
219 & 85 & 85 & 20 & 10 & 120 & 10 & 5 & CPU & 4.1239 & 0.5464 & 0.0562 & 0.0883 & 0.1173 & 0.0505 \\
\multicolumn{4}{l}{ash219} & \multicolumn{4}{l}{relat4$^T$} & IT & 37283.6 & 38512.3 & - & 408.6 & 4272.3 & 1519.3 \\
219 & 85 & 85 & 20 & 12 & 66 & 5 & 5 & CPU & 5.5010 & 1.0767 & - & 0.1077 & 0.1774 & 0.0601 \\
\multicolumn{4}{l}{rel4} & \multicolumn{4}{l}{n4c5-b11} & IT & 2457.1 & 2382.9 & 679.4 & 187.3 & 968.0 & 20355.6 \\
66 & 12 & 5 & 5 & 10 & 120 & 10 & 5 & CPU & 0.2004 & 0.0618 & 0.0264 & 0.0178 & 0.0309 & 1.7052 \\
\multicolumn{4}{l}{rel4} & \multicolumn{4}{l}{relat4$^T$} & IT & 5399.9 & 5409.5 & - & 288.8 & 2801.7 & 688.7 \\
66 & 12 & 5 & 5 & 12 & 66 & 5 & 5 & CPU & 0.3149 & 0.1268 & - & 0.0241 & 0.0826 & 0.0210 \\
\multicolumn{4}{l}{mk10-b1} & \multicolumn{4}{l}{bibd\_11\_5} & IT & $>$ & $>$ & 1214 & 1123.8 & 3242.8 & 1862.3 \\
630 & 45 & 44 & 50 & 55 & 462 & 55 & 50 & CPU & $>$ & $>$ & 1.7975 & 0.7687 & 0.3273 & 0.1882 \\
\multicolumn{4}{l}{mk10-b1} & \multicolumn{4}{l}{rel5$^T$} & IT & $>$ &$>$ & - & 595.4 & 10102.5 & 1971.6 \\
630 & 45 & 44 & 50 & 35 & 340 & 24 & 30 & CPU & $>$&$>$ & - & 0.2681 & 0.8224 & 0.1625 \\
\multicolumn{4}{l}{relat5} & \multicolumn{4}{l}{bibd\_11\_5} & IT & $>$ & $>$ & 11985 & 812 & 44387 & 4378 \\
340 & 35 & 24 & 30 & 55 & 462 & 55 & 50 & CPU & $>$ &$>$& 10.5393 & 0.4626 & 3.6697 & 0.4041 \\
\multicolumn{4}{l}{relat5} & \multicolumn{4}{l}{rel5$^T$} & IT & $>$&$>$ & - & 554.3 & 91091.1 & 4597.0 \\
340 & 35 & 24 & 30 & 35 & 340 & 24 & 30 & CPU & $>$ & $>$ & - & 0.1967 & 6.3276 & 0.3362 \\
\multicolumn{4}{l}{relat6} & \multicolumn{4}{l}{ash958$^T$} & IT &$>$ &$>$ & 122525.3 &490.5 &90579.2&7304.7 \\
2340 & 157 & 137 & 200 & 292 & 958 & 292 & 200 & CPU & $>$&$>$ &218.4782& 4.7630& 35.6329& 13.7720 \\
\bottomrule
\end{tabular}}
\label{tab3}
\end{table}
As we can see from the numerical results, the GRBK, GRABK-c, and GRABK-a methods require fewer iterations than the RBCD method for dense and sparse matrices, and they also require less CPU time. The RBCD method does not converge when the matrix $B$ does not have full row rank. The GRBK, GRABK-c, and GRABK-a methods can be implemented more efficiently than the RBCD method on many computer architectures, and the GRABK-c and GRABK-a methods can be deployed on parallel computing units to reduce the computational time.
\subsection{An application to image restoration}
In this section, we illustrate the effectiveness of the proposed methods with several examples of image restoration \cite{Hansen}. Let $X^*=(x_{ij}^*)_{p\times q}$ and $X=(x_{ij})_{p\times q}$ be the original and restored images, respectively. The quality of the restoration is assessed by the peak signal-to-noise ratio (PSNR), defined by
\begin{equation*}
{\rm PSNR}=10\log_{10}\left(\tfrac{(\max\{x_{ij}^*\})^2}{\rm MSE}\right),
\end{equation*}
where ${\rm MSE}=\tfrac{\sum_{i=1}^{p}\sum_{j=1}^{q}(x^*_{ij}-x_{ij})^2}{pq}$. The original image is represented by an array of $n\times n$ pixels. We consider the matrix equation \eqref{eq:systems}, where $C$ is the observed blurred image, and the blurring matrix $A$ is the uniform Toeplitz matrix of size $n\times n$ defined by
\begin{equation*}
a_{ij}=\begin{cases}
\tfrac{1}{2r-1}, & |i-j|\leq r \\
0, & \mbox{otherwise},
\end{cases}
\end{equation*}
and $B$ is the Gaussian Toeplitz matrix of size $n\times n$ given by
\begin{equation*}
b_{ij}=\begin{cases}
\tfrac{1}{\sigma\sqrt{2\pi}}\exp\left(-\tfrac{(i-j)^2}{2\sigma^2}\right), & |i-j|\leq r \\
0, & \mbox{otherwise}.
\end{cases}
\end{equation*}
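A short NumPy sketch of the two blurring matrices and the PSNR (illustrative, not the code used for the experiments):
\begin{verbatim}
import numpy as np

def uniform_toeplitz(n, r=3):
    # out-of-focus blur: a_ij = 1/(2r-1) for |i-j| <= r, 0 otherwise
    i, j = np.indices((n, n))
    return np.where(np.abs(i - j) <= r, 1.0 / (2 * r - 1), 0.0)

def gaussian_toeplitz(n, r=3, sigma=7.0):
    # atmospheric-turbulence blur, truncated at bandwidth r
    i, j = np.indices((n, n))
    g = np.exp(-(i - j)**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))
    return np.where(np.abs(i - j) <= r, g, 0.0)

def psnr(X_true, X):
    mse = np.mean((X_true - X) ** 2)
    return 10.0 * np.log10(X_true.max() ** 2 / mse)
\end{verbatim}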
For $r=3$ and $\sigma=7$, we apply RBCD, GRBK, GRABK-c with stepsize $\alpha=\tfrac{1.95}{\beta_{\max}^2(A)\beta_{\max}^2(B)}$, and GRABK-a with stepsize $\alpha_k=L_k$ to this example, with block sizes $\tau_1=\tau_2=\tfrac{n}{2}$. In \Cref{fig5,fig6,fig7,fig8}, the original, blurred, and de-blurred images are shown.
\begin{figure}[!htb]
\centering
\subfigure{\includegraphics[width=5in]{ex1}}
\caption{Digital image, $n=20$.}
\label{fig5}
\end{figure}
\begin{figure}[!htb]
\centering
\subfigure{\includegraphics[width=5in]{ex2}}
\caption{Digital image, $n=128$.}
\label{fig6}
\end{figure}
\begin{figure}[!htb]
\centering
\subfigure{\includegraphics[width=5in]{ex3}}
\caption{Modified Shepp-Logan, $n=256$.}
\label{fig7}
\end{figure}
\begin{figure}[!htb]
\centering
\subfigure{\includegraphics[width=5in]{ex4}}
\caption{Pumpkins, $n=512$.}
\label{fig8}
\end{figure}
The blurring of these images is caused by out-of-focus effects and atmospheric turbulence. The capability of the proposed methods is confirmed by de-blurring the blurred images. From \Cref{fig5,fig6,fig7,fig8}, we see that the PSNR of the image restored by the GRBK method is better than those of GRABK-c and GRABK-a. However, the GRBK method requires more CPU time than GRABK-c and GRABK-a, because it needs to calculate pseudoinverses. When the image is large, the GRABK-c and GRABK-a methods restore the image well while requiring less CPU time, and they can be implemented in parallel.
\section{Conclusions}\label{Sec:Con}
We have proposed the global randomized block Kaczmarz method for solving large-scale matrix equations and established its convergence theory. In addition, we have presented a class of global randomized average block Kaczmarz methods for solving large-scale matrix equations, which provide a general framework for the design and analysis of global randomized block Kaczmarz methods. Our convergence results provide a theoretical guarantee for the convergence of the global randomized average block Kaczmarz methods with constant and adaptive stepsizes. The numerical examples also illustrate the benefits of the new methods.
\section{Introduction}
Why should we use lattice Monte Carlo methods to describe a real system when
there are so many efficient and accurate methods to treat the full problem?
There are at least two good reasons. The first one takes a
pragmatic point of view: For complicated systems a full calculation simply
cannot be done on present-day computers. The second reason rests on the belief
that the physics underlying the properties of real materials is simple and can
be captured in model systems. If we succeed in this there is the additional
benefit of having identified the important general features of the material.
For the doped Fullerides we encounter just such a situation. Even for a
single C$_{60}$ molecule a full QMC calculation is still a challenge, and
simulations of Fullerides, i.e.\ solids made of C$_{60}$ molecules, are
simply out of question. For many properties it is, however, sufficient to
focus on the valence band only, removing all other degrees of freedom from
the Hamiltonian. Important features of the doped Fullerides that
have to be reflected in such a model are the degeneracy of the molecular
orbital that gives rise to the valence band, the filling of the valence
band, and the lattice structure of the solid. All these can be incorporated in
a Hubbard-like Hamiltonian, which can be treated efficiently using lattice
QMC methods.
In the following we will first show how to set up a model Hamiltonian for
the doped Fullerides. Then we discuss Monte Carlo methods for such lattice
Hamiltonians, especially the optimization of Gutzwiller functions both in
variational and fixed-node diffusion Monte Carlo.
Finally we use QMC to investigate the Mott transition in the
doped Fullerides. The interest in these questions comes from the following
situation: Density functional calculations predict that the doped Fullerides
are metals. On the other hand, one finds that the Coulomb repulsion between
electrons on the same C$_{60}$ molecule is very strong. This suggests that
correlations should be dominating, making all doped Fullerides Mott insulators.
Reality falls in between these two extremes: some doped Fullerides are metals
(and even superconductors), while others are insulators. From our QMC
calculations we find that due to the degeneracy of the valence band the
integer-doped Fullerides are close to a Mott transition, and not far into the
Mott insulator regime, as simple theories would suggest. Whether a
given compound is on the metallic or the insulating side of the transition
depends then on the crystal structure (bipartite vs.\ frustrated) and
the filling of the band.
\section{Model Hamiltonian}
\begin{figwindow}[0,r,{\epsfxsize=4cm \epsffile{k3c60_bw.eps}},%
{Schematic band structure of A$_3$C$_{60}$.}]
Solid \keyword{C$_{60}$} is characterized by a very weak inter-molecular
interaction. Therefore the discrete molecular levels merely broaden into narrow,
well separated bands (see Fig.~1) \cite{ldabands}. The valence band originates
from the lowest unoccupied molecular orbital, which is a 3-fold degenerate
\keyword{$t_{1u}$ orbital}. Doping the solid with alkali metals does not affect
the band structure close to the Fermi level very much. Only the filling of the
$t_{1u}$ band changes, since each alkali atom donates its valence electron.
To simplify the description of the doped Fullerides we want to focus on the
electrons in the $t_{1u}$ band only. To get rid of the other degrees of freedom
we use the \keyword{L\"owdin downfolding} technique \cite{lowdin}. The basic
idea is to partition the Hilbert space into a subspace that contains the
degrees of freedom that we are interested in (in our case the
`$t_{1u}$-subspace') and the rest of the Hilbert space:
${\cal H}={\cal H}_0 \oplus {\cal H}_1$.
We can then write the Hamiltonian of the system as
\end{figwindow}
\begin{equation}
H=\left(\begin{array}{cc} H_{00}& 0 \\ 0 &H_{11}\end{array}\right)
+\left(\begin{array}{cc} 0 &V_{01}\\V_{10}& 0 \end{array}\right) ,
\end{equation}
where $H_{ii}$ is the projection of the Hamiltonian onto subspace ${\cal H}_i$,
while $V_{ij}=H_{ij}$ contain the hybridization matrix elements between the two
subspaces. Writing Green's function $G=(E-H)^{-1}$ in the same way, we
can calculate the projection of $G$ onto ${\cal H}_0$ \cite{invpart}:
\begin{equation}
G_{00}=\Big(E-\underbrace{[H_{00}+ V_{01}\,(E-H_{11})^{-1}V_{10}]}_{H_{\rm eff}(E)}\Big)^{-1} .
\end{equation}
We see that the physics of the full system is described by an effective
Hamiltonian $H_{\rm eff}(E)$ that operates on the subspace ${\cal H}_0$ only.
This drastic simplification comes, however, at a price: the effective
Hamiltonian is energy dependent. In practice one approximates it with an
energy-independent Hamiltonian $H_{\rm eff}(E_0)$. This works well if we
are only interested in energies close to $E_0$.
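The construction is easy to check numerically. The following dense-matrix Python sketch (our own illustration) downfolds a Hamiltonian matrix onto a chosen subspace; an eigenvalue $E$ of $H$ then satisfies $\det(E-H_{\rm eff}(E))=0$.
\begin{verbatim}
import numpy as np

def downfold(H, idx0, E):
    # H_eff(E) = H00 + V01 (E - H11)^(-1) V10 on the subspace idx0
    n = H.shape[0]
    idx1 = np.setdiff1d(np.arange(n), idx0)
    H00 = H[np.ix_(idx0, idx0)]
    H11 = H[np.ix_(idx1, idx1)]
    V01 = H[np.ix_(idx0, idx1)]
    V10 = H[np.ix_(idx1, idx0)]
    return H00 + V01 @ np.linalg.solve(E * np.eye(len(idx1)) - H11, V10)
\end{verbatim}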
In solid C$_{60}$ we have the fortunate situation that the bands
retain the character of the molecular orbitals, since the hybridization
matrix elements are small compared to the energy separations of the orbitals.
In fact we can neglect the other bands
altogether and get the hopping matrix elements $t_{in,\,jn'}$ between the
$t_{1u}$ orbitals $n$ and $n'$ on molecules $i$ and $j$ directly from a
tight-binding parameterization \cite{TBparam,A4C60}. Figure 2 shows the
comparison of the {\it ab initio} $t_{1u}$ band structure with the band
structure obtained from the \keyword{tight-binding} Hamiltonian with only
$t_{1u}$ orbitals.
\begin{figure}
\centerline{\epsfxsize=9.5cm \epsffile{c60band.eps}}
\caption[]{Band structure ($t_{1u}$ band) of solid C$_{60}$ (fcc)
(a) as calculated {\it ab initio} using the
local density approximation \cite{ldabands} and
(b) using a tight-binding Hamiltonian with only $t_{1u}$
orbitals \cite{TBparam}.}
\end{figure}
To get a realistic description of the electrons in the $t_{1u}$ band we have
to include the correlation effects which come from the Coulomb repulsion
of electrons in $t_{1u}$ orbitals on the same molecule. The resulting
Hamiltonian which describes the interplay of the hopping of electrons and their
Coulomb repulsion has the form
\begin{equation}\label{Hamil}
H=\sum_{\langle ij\rangle} \sum_{nn'\sigma} t_{in,jn'}\;
c^\dagger_{in\sigma} c^{\phantom{\dagger}}_{jn'\sigma}
+\;U\sum_i\hspace{-0.5ex} \sum_{(n\sigma)<(n'\sigma')}\hspace{-1ex}
n_{i n\sigma} n_{i n'\sigma'} .
\end{equation}
The \keyword{on-site Coulomb interaction} $U$ can be calculated within density
functional theory \cite{calcU}. It is given by the increase in the energy of
the $t_{1u}$ level per electron that is added to one molecule of the system.
It is important to avoid double counting in the calculation of $U$. While the
relaxation of the occupied orbitals and the polarization of neighboring
molecules has to be included in the calculation, excitations within the
$t_{1u}$ band must be excluded, since they are contained explicitly in the
Hamiltonian (\ref{Hamil}).
The results are consistent with experimental estimates \cite{expU,lof}:
$U\approx 1.2-1.4\;eV$. For comparison, the width of the $t_{1u}$ band
is in the range $W\approx 0.5-0.85\;eV$.
\section{Quantum Monte Carlo}
We now turn to the question of how to calculate the ground state of a
lattice Hamiltonian like (\ref{Hamil}). To simplify the notation most
examples in the present section are for the simple \keyword{Hubbard model}
(only one orbital per site, next neighbor hopping matrix elements $t_{ij}=-t$)
on a 2 dimensional square lattice:
\begin{equation}\label{Hubbard}
H=-t\;\sum_{\langle ij\rangle\sigma} c^\dagger_{i\sigma} c^{\phantom{\dagger}}_{j\sigma} + U\sum_i n_{i\uparrow} n_{i\downarrow} .
\end{equation}
The band width for this model is $W=8\,t$.
We first introduce the Gutzwiller Ansatz as a suitable trial function $\Psi_T$
for the above Hamiltonian. Expectation values for the Gutzwiller function
can be calculated using variational Monte Carlo (VMC). Then we describe the
fixed-node diffusion Monte Carlo (FN-DMC) method that allows us to
calculate more accurate variational estimates of the ground state energy (see
the lecture notes by G.\ Bachelet for a more complete discussion of FN-DMC).
The main emphasis of our discussion will be on the optimization of the trial
function both in variational and fixed-node diffusion Monte Carlo.
\subsection{Variational Monte Carlo}
A good trial function for the Hubbard model has to balance the opposing
tendencies of the hopping term and the interaction term:
Without interaction (i.e.\ for $U=0$) the ground state of the Hamiltonian
(\ref{Hubbard}) is the Slater determinant $\Phi$ that maximizes the
kinetic energy. Without hopping ($t=0$) the interaction is minimized.
Since only doubly occupied sites, i.e.\ sites with $n_{i\uparrow}=1$
and $n_{i\downarrow}=1$, contribute to the Coulomb energy,
the electrons are distributed as uniformly as possible over the lattice
to minimize the number of double occupancies. A good compromise between
these two extremes is to start from the non-interacting wavefunction $\Phi$
but reduce the weight of configurations $R$ with large double occupancies
$D(R)$. This leads (up to normalization) to the \keyword{Gutzwiller
wavefunction} \cite{GWF}:
\begin{equation}\label{GWF}
\Psi_T(R) = g^{D(R)}\;\Phi(R) ,
\end{equation}
with $g\in(0,1]$ the Gutzwiller parameter. Figure 3 shows how decreasing the
Gutzwiller factor suppresses the configurations with a large number of double
occupancies.
\begin{figure}
\centerline{\epsfxsize=4.5cm\epsffile{gwgtp.1.00.eps}\hspace{-0.5cm}
\epsfxsize=4.5cm\epsffile{gwgtp.0.50.eps}\hspace{-0.5cm}
\epsfxsize=4.5cm\epsffile{gwgtp.0.30.eps}}
\caption{Weight of configurations with given number $D$ of double occupancies
for Gutzwiller wavefunctions $\Psi_T(R)=g^{D(R)}\,\Phi(R)$. Reducing
the Gutzwiller factor $g$ suppresses configurations with high
Coulomb energy $E_{\rm Coul}(R)=U\,D(R)$ at the expense of
increasing the kinetic energy.
The results shown here are for a Hubbard model with
$16\times16$ sites and $101+101$ electrons.}
\end{figure}
To calculate the energy expectation value for the Gutzwiller wavefunction
we have to perform a sum over all configurations $R$:
\begin{equation}\label{Evar}
E_T = {\langle\Psi_T|H|\Psi_T\rangle \over \langle\Psi_T|\Psi_T\rangle}
= {\sum_R E_{\rm loc}(R)\;\Psi_T^2(R) \over \sum_R \Psi_T^2(R)} ,
\end{equation}
where we have introduced the local energy for a configuration $R$
\begin{equation}\label{Eloc}
E_{\rm loc}(R)
= \sum_{R'} {\langle\Psi_T|R'\rangle\,\langle R'|H|R\rangle
\over \langle\Psi_T|R\rangle}
= \sum_{R'}\!'\;t\;{\Psi_T(R')\over\Psi_T(R)} + U\,D(R) .
\end{equation}
Since the number of configurations $R$ grows exponentially with system-size,
the summation in (\ref{Evar}) can be performed only for very small systems.
For larger problems we use \keyword{variational Monte Carlo} \cite{VMC}.
The idea is to perform a random walk in the space of
configurations, with transition probabilities $p(R\to R')$ chosen such
that the configurations $R_{VMC}$ in the random walk have the probability
distribution function $\Psi_T^2(R)$. Then
\begin{equation}\label{Evmc}
E_{\rm VMC} =
{\sum_{R_{\rm VMC}} E_{\rm loc}(R_{\rm VMC}) \over \sum_{R_{\rm VMC}} 1}
\approx
{\sum_R E_{\rm loc}(R)\;\Psi_T^2(R) \over \sum_R \Psi_T^2(R)}
= E_T .
\end{equation}
The transition probabilities can be determined from detailed balance
\begin{equation}\label{detailedbalance}
\Psi_T^2(R)\,p(R\to R') = \Psi_T^2(R')\,p(R'\to R)
\end{equation}
which gives $p(R\to R')={1/N}\;\min[1,\Psi_T^2(R')/\Psi_T^2(R)]$, with
$N$ being the maximum number of possible transitions.
It is sufficient to consider only transitions between configurations that
are connected by the Hamiltonian, i.e.\ transitions in which one electron
hops to a neighboring site. The standard prescription is then to propose a
transition $R\to R'$ with probability $1/N$ and accept it with probability
$\min[1,\Psi_T^2(R')/\Psi_T^2(R)]$. This works well for $U$ not too large.
For strongly correlated systems, however, the random walk will stay for long
times in configurations with a small number of double occupancies $D(R)$, since
most of the proposed moves will increase $D$ and hence be rejected with
probability $\approx 1-g^{D(R')-D(R)}$.
Fortunately there is a way to integrate-out the time the walk stays in a
given configuration. To see how, we first observe that for the local energy
(\ref{Eloc}) the ratio of the wavefunctions for all transitions induced by
the Hamiltonian have to be calculated. This in turn means that we also
know all transition probabilities $p(R\to R')$. We can therefore eliminate
any rejection (i.~e.\ accept with probability one) by proposing moves with
probabilities
\begin{equation}
\tilde{p}(R\to R') = {p(R\to R')\over\sum_{R'} p(R\to R')}
= {p(R\to R')\over 1-p_{\rm stay}(R)} .
\end{equation}
Checking detailed balance (\ref{detailedbalance}) we find that now we are
sampling configurations $\tilde{R}_{VMC}$ from the probability distribution
function $\Psi_T^2(R)\,(1-p_{\rm stay}(R))$. To compensate for this we assign
a weight $w(R)=1/(1-p_{\rm stay}(R))$ to each configuration $R$. The energy
expectation value is then given by
\begin{equation}
E_T \approx
{\sum_{\tilde{R}_{VMC}} w(\tilde{R}_{VMC})\,E_{\rm loc}(\tilde{R}_{VMC}) \over
\sum_{\tilde{R}_{VMC}} w(\tilde{R}_{VMC})} .
\end{equation}
The above method is quite efficient since it ensures that in every Monte Carlo
step a new configuration is created. Instead of staying in a configuration
where $\Psi_T$ is large, this configuration is weighted with the expectation
value of the number of steps the simple Metropolis algorithm would stay there.
This is particularly convenient for simulations of systems with strong
correlations: Instead of having to do longer and longer runs as $U$ is
increased, the above method produces, for a fixed number of Monte Carlo
steps, results with comparable error estimates.
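A minimal Python sketch of one such rejection-free step (our own illustration; the per-move wavefunction ratios are assumed to be supplied by the caller, since they are computed anyway for the local energy):
\begin{verbatim}
import numpy as np

def rejection_free_step(ratios, rng):
    # ratios: Psi_T(R')/Psi_T(R) for the N possible single-electron
    # hops out of the current configuration R.
    N = len(ratios)
    p = np.minimum(1.0, ratios ** 2) / N   # p(R -> R')
    p_stay = 1.0 - p.sum()
    weight = 1.0 / (1.0 - p_stay)          # w(R) = 1/(1 - p_stay)
    move = rng.choice(N, p=p / p.sum())    # accept one move with prob. 1
    return move, weight
\end{verbatim}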
\subsubsection*{Correlated sampling}
We now turn to the problem of optimizing the trial function $\Psi_T$.
A criterion for a good trial function is e.g.\ a low variational
energy. To find the wavefunction that minimizes the variational energy
we could do independent VMC calculations for a set of different trial
functions. It is, however, difficult to compare the energies from these
calculations since each VMC result comes with its own statistical
errors. This problem can be avoided with \keyword{correlated sampling}
\cite{corrsmpl}.
The idea is to use the same random walk for calculating the expectation value
of all the different trial functions. This reduces the {\em relative} errors
and hence makes it easier to find the minimum.
Let us assume we have generated a random walk $R_{VMC}$ using $\Psi_T$ as
a trial function. Using the same random walk, we can then estimate the energy
expectation value (\ref{Evmc}) for a different trial function $\tilde{\Psi}_T$,
by introducing the reweighting factors $\tilde{\Psi}_T^2(R)/\Psi_T^2(R)$:
\begin{equation}\label{corrsmpl}
\tilde{E}_T \approx
{\sum_{R_{VMC}} \tilde{E}_{\rm loc}(R)\,\tilde{\Psi}_T^2(R)/\Psi_T^2(R) \over
\sum_{R_{VMC}} \tilde{\Psi}_T^2(R)/\Psi_T^2(R) }
.
\end{equation}
(Since the random walk $R_{VMC}$ has only a finite number of configurations,
this will only work well as long as the reweighting factors do not deviate
too much from unity. Otherwise a few configurations with large reweighting
factors will dominate. See Fig.~4.)
\begin{figure}
\parbox[b]{6cm}{\epsfxsize=6cm \epsffile{corrvmc.88.epsi}}
\hspace{\fill}
\begin{minipage}[b]{6.0cm}
\caption{Correlated sampling for the Gutz\-willer parameter $g$. The
calculations are for a Hubbard model with $8\times8$ sites,
$28+28$ electrons, and $U=4\,t$.
The full curve shows the result starting from a calculation
with $g=1$. The predicted minimum $g_{\rm min}$ is indicated
by the dotted line. A dashed line gives the correlated sampling
curve obtained from a calculation using $g_{\rm min}$ in the trial
function. Both find the same minimum. $E(g)$ becomes unreliable for
very small $g$ due to reweighting factors much larger than unity.}
\end{minipage}
\end{figure}
We notice that (also in $\tilde{E}_{\rm loc}$) the new trial function
$\tilde{\Psi}_T$ appears only in ratios with the old trial function.
For Gutzwiller functions (\ref{GWF}) that differ only in the Gutzwiller factor
this means that the Slater determinants cancel, leaving only powers
$(\tilde{g}/g)^{D(R)}$. Since $D(R)$ is {\em integer} we can then rearrange
the sums in (\ref{corrsmpl}) into polynomials in $\tilde{g}/g$. To find the
optimal Gutzwiller parameter we then pick a reasonable $g$, perform a VMC run
for $\Psi_T(g)$ during which we also estimate the coefficients for these
polynomials. We can then calculate $E(\tilde{g})$ by simply evaluating the
ratio of the polynomials. Since there are typically only of the order of some
ten non-vanishing coefficients (cf.\ the distribution of weights in Fig.~3),
this method is very efficient. Figure 4 shows how the method performs in
practice. The idea of rewriting the sum over configurations into a polynomial
can be easily generalized to trial functions with more correlation factors of
the type $r^{c(R)}$, as long as the correlation function $c(R)$ is
integer-valued on the space of configurations.
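A sketch of the bookkeeping in Python (the data layout is illustrative): during the run at Gutzwiller parameter $g$ we accumulate the polynomial coefficients, and afterwards $E(\tilde{g})$ is a cheap ratio of two polynomials in $x=\tilde{g}/g$.
\begin{verbatim}
from collections import defaultdict

num = defaultdict(float)   # numerator coefficients:   sum_k num[k] x^k
den = defaultdict(float)   # denominator coefficients, x = g_tilde/g

def accumulate(D, hop_terms, U):
    # Call once per sampled configuration R (distribution Psi_T(g)^2).
    # D: double occupancies D(R); hop_terms: list of
    # (t * Psi_T(R')/Psi_T(R), D(R')-D(R)) for all hops R -> R'.
    den[2 * D] += 1.0
    num[2 * D] += U * D                # Coulomb part of the local energy
    for amp, dD in hop_terms:
        num[2 * D + dD] += amp         # kinetic part carries a factor x^dD

def energy(g_tilde, g):
    x = g_tilde / g
    return (sum(c * x ** k for k, c in num.items()) /
            sum(c * x ** k for k, c in den.items()))
\end{verbatim}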
\subsubsection*{Character of the Slater determinant}
So far we have always constructed the Gutzwiller wavefunction from the
ground state $\Phi$ of the non-interacting Hamiltonian ($U=0$). Alternatively
we could use the Slater determinant $\Phi(U)$ from solving the interacting
problem in the Hartree-Fock approximation. We can even interpolate between
these two extremes by doing a Hartree-Fock calculation with a fictitious
Hubbard interaction $U_0$ to obtain the Slater determinant $\Phi(U_0)$. This
introduces an additional variational parameter in the Gutzwiller wavefunction.
Increasing $U_0$ will change the character of the trial function from
paramagnetic to antiferromagnetic. This transition is also
reflected in the variational energies, as is shown in Figure 5. Clearly, for
small $U$ the paramagnetic state is favorable, while for large $U$ the
antiferromagnetic state gives a lower variational energy. We notice that for
all values of $U$ the optimal $U_0$ is much smaller than $U$.
\begin{figure}
\begin{minipage}[b]{6.1cm}
\caption{Dependence of variational (VMC) and fixed-node diffusion
Monte Carlo (FN-DMC) on the trial function. $U_0$ is the Hubbard
interaction that was used for the Slater determinant in the
Gutzwiller wavefunction $\Psi_T(R)=g^{D(R)}\;\Phi(U_0)$.
The Gutzwiller parameter has always been optimized.
The results shown here are the energies (relative to the atomic
limit) for a Hamiltonian that describes K$_3$C$_{60}$ (32 sites),
with $U$ being varied from $1.25$ (lowest curve) to $2.00\,eV$
(highest curve).}
\vspace*{1ex}
\end{minipage}
\hspace{\fill}
\parbox[b]{6cm}{\epsfxsize=6cm\epsffile{corrU0.epsi}}
\end{figure}
\subsection{Fixed-node diffusion Monte Carlo}
\keyword{Diffusion Monte Carlo} \cite{GFMC} allows us, in principle, to sample
the true ground state of a Hamiltonian. The basic idea is to use a projection
operator that has the lowest eigenstate as a fixed point. For a lattice problem
where the spectrum is bounded $E_n\in[E_0,E_{\rm max}]$, the projection is
given by
\begin{equation}\label{proj}
|\Psi^{(n+1)}\rangle = [1-\tau(H-E_0)]\;|\Psi^{(n)}\rangle
\;;\quad |\Psi^{(0)}\rangle=|\Psi_T\rangle .
\end{equation}
If $\tau<2/(E_{\rm max}-E_0)$ and $|\Psi_T\rangle$ has a non-vanishing overlap
with the ground state, the above iteration converges to $|\Psi_0\rangle$. There
is no time-step error involved.
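For a small dense Hamiltonian the projection can be carried out literally, as in the following Python sketch (our own illustration; the stochastic machinery described next is what makes the iteration feasible for many-body problems):
\begin{verbatim}
import numpy as np

def project(H, psi, E0, tau, n_iter=2000):
    # psi^(n+1) = [1 - tau (H - E0)] psi^(n); converges to the ground
    # state if tau < 2/(E_max - E0) and <psi|psi_0> != 0.
    for _ in range(n_iter):
        psi = psi - tau * (H @ psi - E0 * psi)
        psi /= np.linalg.norm(psi)   # normalize for numerical stability
    return psi
\end{verbatim}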
Because of the prohibitively large dimension of the many-body Hilbert space,
the matrix-vector product in (\ref{proj}) cannot be done exactly. Instead, we
rewrite the equation in configuration space
\begin{equation}\label{iter}
\sum |R'\rangle\langle R'|\Psi^{(n+1)}\rangle
= \sum_{R,R'} |R'\rangle
\underbrace{\langle R'|1-\tau(H-E_0)|R\rangle}_{=:F(R',R)}
\langle R|\Psi^{(n)}\rangle
\end{equation}
and perform the propagation in a stochastic sense: $\Psi^{(n)}$ is
represented by an ensemble of configurations $R$ with weights $w(R)$.
The transition matrix element $F(R',R)$ is rewritten as a transition
probability $p(R\to R')$ times a normalization factor $m(R',R)$. The iteration
(\ref{iter}) is then stochastically performed as follows: For each $R$ we pick
a new configuration $R'$ with probability $p(R\to R')$ and multiply its weight
by $m(R',R)$. Then the new ensemble of configurations $R'$ with their
respective weights represents $\Psi^{(n+1)}$. \keyword{Importance sampling}
decisively improves the efficiency of this process by replacing $F(R',R)$ with
$G(R',R)=\langle\Psi_T|R'\rangle\,F(R',R)/\langle R|\Psi_T\rangle$, so
that transitions from configurations where the trial function is small
to configurations with large trial function are enhanced:
\begin{equation}
\sum |R'\rangle\langle\Psi_T| R'\rangle\langle R'|\Psi^{(n+1)}\rangle
= \sum_{R,R'} |R'\rangle\,G(R',R)\,
\langle\Psi_T|R\rangle\,\langle R|\Psi^{(n)}\rangle .
\end{equation}
Now the ensemble of configurations represents the product $\Psi_T\,\Psi^{(n)}$.
After a large number $n$ of iterations the ground state energy is then
given by the
\keyword{mixed estimator}
\begin{equation}\label{mixedest}
E_0 = {\langle\Psi_T|H|\Psi^{(n)}\rangle \over \langle\Psi_T|\Psi^{(n)}\rangle}
\approx {\sum_R E_{\rm loc}(R)\;w(R) \over \sum_R w(R)} .
\end{equation}
As long as the evolution operator has only non-negative matrix elements
$G(R',R)$, all weights $w(R)$ will be positive. If, however, $G$ has
negative matrix elements there will be both configurations with positive and
negative weight. Their contributions to the estimator (\ref{mixedest})
tend to cancel so that eventually the statistical error dominates, rendering
the simulation useless. This is the infamous \keyword{sign problem}.
A straightforward way to get rid of the sign problem is to remove the
offending matrix elements from the Hamiltonian, thus defining a new Hamiltonian
$H_{\rm eff}$ by
\begin{equation}
\langle R'|H_{\rm eff}| R\rangle = \left\{
\begin{array}{cc}
0 & \mbox{ if $G(R',R)<0$} \\
\langle R'|H| R\rangle & \mbox{ else}
\end{array}\right.
\end{equation}
For each off-diagonal element $\langle R'|H| R\rangle$ that has been removed,
a term is added to the diagonal:
\begin{equation}
\langle R|H_{\rm eff}|R\rangle
= \langle R|H|R\rangle
+ \sum_{R'} \Psi_T(R')\langle R'|H|R\rangle/\Psi_T(R) .
\end{equation}
This is the \keyword{fixed-node approximation} for lattice Hamiltonians
introduced in Ref.~\cite{FNDMC}. $H_{\rm eff}$ is by construction free of the
sign problem and variational, i.e.\ $E_0^{\rm eff}\ge E_0$. The equality holds
if $\Psi_T(R')/\Psi_T(R)=\Psi_0(R')/\Psi_0(R)$ for all $R$, $R'$ with
$G(R',R)<0$.
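For a small cluster the effective Hamiltonian can be built explicitly. The following dense-matrix Python sketch (our own illustration, exponentially large in system size and hence only useful as a cross-check) applies the rule above, using that the off-diagonal part of $F$ is $-\tau\langle R'|H|R\rangle$:
\begin{verbatim}
import numpy as np

def fixed_node_hamiltonian(H, psi_T):
    # G(R',R) < 0 is equivalent (for tau > 0) to
    # psi_T(R') H(R',R) / psi_T(R) > 0: such off-diagonal elements are
    # removed and added to the diagonal of the column configuration R.
    n = H.shape[0]
    Heff = np.diag(np.diag(H)).astype(float)
    for R in range(n):
        for Rp in range(n):
            if Rp == R or H[Rp, R] == 0.0:
                continue
            if psi_T[Rp] * H[Rp, R] / psi_T[R] > 0.0:   # sign-violating
                Heff[R, R] += psi_T[Rp] * H[Rp, R] / psi_T[R]
            else:
                Heff[Rp, R] = H[Rp, R]
    return Heff
\end{verbatim}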
Fixed-node diffusion Monte Carlo for a lattice Hamiltonian thus means that
we choose a trial function from which we construct an effective Hamiltonian
and determine its ground state by diffusion Monte Carlo.
Because of the variational property, we want to pick the $\Psi_T$ such that
$E_0^{\rm eff}$ is minimized, i.e. we want to optimize the trial function, or,
equivalently, the effective Hamiltonian. Also here we can use the concept of
correlated sampling. For optimizing the Gutzwiller parameter $g$ we can
even exploit the idea of rewriting the correlated sampling sums into
polynomials in $\tilde{g}/g$, which we already introduced in VMC.
There is, however, a problem arising from the fact that the weight
of a given configuration $R^{(n)}$ in iteration $n$ is given by the product
$w(R^{(n)})=\prod_{i=1}^n m(R^{(i)},R^{(i-1)})$. Each individual normalization
factor $m(R',R)$ can be written as a finite polynomial, but the order of the
polynomial for $w(R^{(n)})$ increases steadily with the number of iterations.
It is therefore not practical to try to calculate the ever increasing number
of coefficients for the correlated sampling function $E^{(n)}(\tilde{g})$.
But since we still can easily calculate the coefficients for the $m(R',R)$,
we may use them to evaluate $E^{(n)}(\tilde{g})$ in each iteration on a set
of predefined values $\tilde{g}_i$ of the Gutzwiller parameter.
Figure 6 shows an example. It is interesting to note that the Gutzwiller
factor that minimizes $E_{VMC}$ is usually not the optimum Gutzwiller factor
for fixed-node DMC.
\begin{figure}
\begin{minipage}{5cm}
\caption{Correlated sampling of the Gutzwiller parameter $g$ in the trial
function to optimize the effective Hamiltonian in fixed-node
diffusion Monte Carlo. The results shown are for a Hubbard model
with $4\times4$ sites, $7+7$ electrons, and $U=4\,t$. The error bars
are the FN-DMC energies for different values of $g$, the lines
through the error bars are the corresponding correlated sampling
curves.}
\vspace*{3ex}
\end{minipage}
\hspace{\fill}
\parbox{6.5cm}{\epsfxsize=6.5cm\epsffile{dmcs.epsi}}
\end{figure}
As in VMC we can also vary the trial function by changing the character of
the Slater determinant $\Phi(U_0)$. We again find that the change from a
paramagnetic to an antiferromagnetic trial function is reflected in the
fixed-node energies (see Fig.~5), the paramagnetic state being favored
for small $U$, while the antiferromagnetic state gives the lower energy
for large $U$.
We want to use Monte Carlo methods to detect a Mott transition in the doped
Fullerides. For this we anticipate that we need an accuracy of better than
$0.025\,eV$.
To get a feeling for the accuracy of variational and fixed-node diffusion
Monte Carlo, using Gutzwiller trial functions, we compare the results of
the QMC calculations with exact results. Since exact diagonalizations can
only be done for small systems we consider a small cluster of 4 molecules.
The results for different values of the Hubbard interaction $U$ are shown
in Table 1. We find that the FN-DMC error is about an order of magnitude
smaller than the error in VMC. The FN-DMC error for our lattice
model is typically a few $meV$, which should be sufficient for the
application at hand.
\begin{table}
{\footnotesize {\it Table 1.}
Total energy (in $eV$) for a cluster of four C$_{60}$ molecules with $6+6$
electrons in the $t_{1u}$ band (hopping parameters for K$_3$C$_{60}$).
The results of variational and diffusion Monte Carlo are compared to the
exact energy.}
\vspace{1ex}
\begin{center}
\begin{tabular}{d{2}d{4}d{8}@{\hspace{1ex}}d{3}%
d{8}@{\hspace{1ex}}d{3}}
\hline\hline
\multicolumn{1}{c}{$U$} &
\multicolumn{1}{c}{$E_{\rm exact}$} &
\multicolumn{1}{c}{$E_{FN-DMC}$} &
\multicolumn{1}{c}{$\Delta E$} &
\multicolumn{1}{c}{$E_{VMC}$} &
\multicolumn{1}{c}{$\Delta E$} \\
\hline
0.25 & 0.8457 & 0.8458(1)& 0.000 & 0.8490(2) & 0.003 \\
0.50 & 4.1999 & 4.2004(1)& 0.001 & 4.2075(3) & 0.008 \\
0.75 & 7.4746 & 7.4756(2)& 0.001 & 7.4873(4) & 0.013 \\
1.00 & 10.6994 & 10.7004(2)& 0.001 & 10.7179(5) & 0.019 \\
1.25 & 13.8860 & 13.8875(3)& 0.002 & 13.9127(6) & 0.027 \\
1.50 & 17.0408 & 17.0427(4)& 0.002 & 17.0728(7) & 0.032 \\
1.75 & 20.1684 & 20.1711(5)& 0.003 & 20.2061(4) & 0.038 \\
2.00 & 23.2732 & 23.2757(10)&0.003 & 23.3125(6) & 0.039 \\
\hline\hline
\end{tabular}
\end{center}
\end{table}
\section{Mott transition in doped Fullerides}
We now apply the quantum Monte Carlo methods described above to the
Hamiltonian (\ref{Hamil}). Our aim is to understand the
\keyword{Mott transition} in the integer-doped Fullerides A$_n$C$_{60}$.
Here A stands for an alkali metal like K, Rb, or Cs.
The criterion for the metal-insulator transition is the opening of the gap
\begin{equation}\label{gap}
E_g=E(N+1)-2\,E(N)+E(N-1) .
\end{equation}
Density functional calculations predict that the doped Fullerides
A$_n$C$_{60}$ with $n=1\ldots5$ are metals \cite{ldabands}. Only
A$_6$C$_{60}$ is an insulator with a completely filled $t_{1u}$
band. On the other hand, the strong Coulomb repulsion between
two electrons on the same C$_{60}$ molecule, which is much larger
than the width of the $t_{1u}$ band, suggests that all integer-doped
Fullerides should be Mott insulators. It has therefore been
suggested that experimental samples of, say, the superconductor
K$_3$C$_{60}$ are metallic only because they are non-stoichiometric,
i.e.\ that they actually are K$_{3-\delta}$C$_{60}$ \cite{lof}.
\subsubsection*{K$_3$C$_{60}$}
\begin{figwindow}[7,r,%
{\parbox{3.3cm}{\epsfxsize=3.3cm\epsffile{hop0.eps}
\epsfxsize=3.3cm\epsffile{hopp.eps}}},%
{Degeneracy argument.}]
In a first step we investigate what consequences the degeneracy of the
$t_{1u}$-band has for the Mott transition in K$_3$C$_{60}$. The analysis
is motivated by the following simple argument \cite{Mott,degen}. In the limit
of very large $U$ we can estimate the energies needed to calculate the
gap (\ref{gap}). For half filling, all molecules will have 3 electrons in the
$t_{1u}$ orbital (Fig.~7, top). Hopping is strongly suppressed since it would
increase the energy by $U$. Therefore, to leading order in $t^2/U$, there will
be no kinetic contribution to the total energy $E(N)$. In contrast, the systems
with $N\pm1$ electrons have an extra electron/hole that can hop without
additional cost in Coulomb energy. To estimate the kinetic energy we calculate
the matrix element for the hopping of the extra charge against an
antiferromagnetic background. Denoting the initial state with extra charge on
molecule $i$ by $|1\rangle$, we find that the second moment
$\langle1|H^2|1\rangle$ is given by the number of different possibilities
for a next-neighbor hop times the single electron hopping matrix element $t$
squared. By inserting $\sum_j |j\rangle\langle j|$, where
$|j\rangle$ denotes the state with the extra charge hopped from site $i$ to
site $j$, we find $\langle1|H|j\rangle = \sqrt{3}\,t$, since, with an
antiferromagnetic background and degeneracy 3, there are 3 different ways
an extra charge can hop to a neighboring molecule (Fig.~7, bottom). Thus,
due to the 3-fold degeneracy, {\sl the hopping matrix element is enhanced by a
factor $\sqrt{3}$ compared to the single electron hopping matrix element $t$.}
For a single electron system the kinetic energy is of the order of half
the band width $W/2$. The enhancement of the hopping matrix element in the
many-body case suggests then that the kinetic energy for the extra charge
is correspondingly enhanced. Inserting the energies into (\ref{gap})
we find that for the 3-fold degenerate system our simple argument predicts
a gap
\end{figwindow}
\begin{equation}\label{Egap}
E_g=U-\sqrt{3}\,W ,
\end{equation}
instead of $E_g=U-W$ in the non-degenerate case.
Extrapolating to intermediate $U,\,$ it appears that the degeneracy
shifts the Mott transition towards larger $U$.
The above argument is, of course, not rigorous. First, it is not clear
whether the result for $E_g$ that was obtained in the limit of large $U$
can be extrapolated to intermediate $U,\,$ where the Mott transition
actually takes place. Also the analogy of the hopping in the many-body case
with the hopping of a single electron is not rigorous, since the hopping of an
extra charge against an antiferromagnetic background creates a string of
flipped spins \cite{Nagaoka}. Nevertheless the argument suggests that orbital
degeneracy might play an important role for the Mott transition.
To test this proposition, we have performed quantum Monte Carlo calculations
for the model Hamiltonian (\ref{Hamil}) with hopping matrix elements
appropriate for K$_3$C$_{60}$ \cite{Mott}. The Coulomb interaction $U$ has been
varied from $U=0\ldots 1.75\,eV$ to study the opening of the gap. Since the
Monte Carlo calculations are for finite systems, we have to extrapolate to
infinite system size.
To improve the extrapolation we correct for finite-size effects: First,
there could be a gap $E_g(U=0)$ already in the spectrum of the non-interacting
system. Further, even for a metallic system of $M$ molecules there will be a
finite-size contribution of $U/M$ to the gap. It comes from the electrostatic
energy of the extra charge, uniformly distributed over all sites. Both
corrections vanish in the limit $M\to\infty$, as they should. The finite-size
corrected gap $\tilde{E}_g=E_g - U/M - E_g(U=0)$ for systems with
$M=$ 4, 8, 16, 32, and 64 molecules is shown in Figure 8. We find that the
gap opens for $U$ between $1.50\,eV$ and $1.75\,eV.\,$ Since for the real
system $U=1.2\ldots1.4\,eV,\,$ K$_3$C$_{60}$ is close to a Mott
transition, but still on the metallic side -- even though $U$ is considerably
larger than the band width $W$. This is in contrast to simpler theories that
neglect orbital degeneracy.
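To make the finite-size correction and the $1/M$ extrapolation concrete, the following minimal Python sketch (our illustration, not part of the original calculations; all energies and gap values are made-up placeholders) forms the corrected gap from Eq.~(\ref{gap}) and fits it linearly in $1/M$:
\begin{verbatim}
import numpy as np

def corrected_gap(E_Np1, E_N, E_Nm1, U, M, Eg_U0):
    # E_g = E(N+1) - 2 E(N) + E(N-1), then subtract the
    # electrostatic finite-size term U/M and the
    # non-interacting gap E_g(U=0)
    Eg = E_Np1 - 2.0 * E_N + E_Nm1
    return Eg - U / M - Eg_U0

# linear extrapolation of made-up corrected gaps in 1/M
M  = np.array([4.0, 8.0, 16.0, 32.0, 64.0])
Eg = np.array([0.21, 0.14, 0.10, 0.08, 0.07])
slope, intercept = np.polyfit(1.0 / M, Eg, 1)
print("extrapolated gap for M -> infinity:", intercept)
\end{verbatim}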
\begin{figure}
\parbox[b]{6cm}{\epsfxsize=6cm\epsffile{Egap3.eps}}
\hspace{\fill}
\begin{minipage}[b]{5.5cm}
\caption{Finite-size corrected gap $\tilde{E}_g=E_g-U/M-E_g(U=0)$ for
increasing Coulomb interaction $U$ as a function of $1/M$,
where $M$ is the number of molecules. The calculations are for
a Hubbard model with hopping matrix elements appropriate for
K$_3$C$_{60}$. The band width varies between $W=0.58\,eV$ for
$M=4$ and $W=0.63\,eV$ in the infinite-size limit.}
\vspace*{4ex}
\end{minipage}
\end{figure}
\subsubsection*{Doping dependence}
\vspace{0.5ex}
\begin{minipage}{6.5cm}
The degeneracy argument described above for K$_3$C$_{60}$ can be generalized
to integer fillings. Away from half filling the enhancement of the hopping
matrix elements for an extra electron is different from that for an extra
hole. The effective enhancements for different fillings are given in the
adjacent table.
\end{minipage}
\hspace{\fill}
\begin{minipage}[t]{5.2cm}
\begin{tabular}{l@{\hspace{5ex}}c@{$\;\approx\,$}c}
\hline\hline
filling & \multicolumn{2}{c}{enhancement}\\
\hline
$n=\;3$ & $\sqrt{3}$ & 1.73\\[0.5ex]
$n=2,4$ & ${\sqrt{3}+\sqrt{2}\over2}$ & 1.57\\[0.5ex]
$n=1,5$ & ${\sqrt{2}+ 1 \over2}$ & 1.21\\
\hline\hline
\end{tabular}
\end{minipage}
\vspace{0.5ex}
We find that the enhancement decreases as we move away from half filling.
Therefore we expect that away from half filling, correlations become
more important, putting the system closer to the Mott transition, or maybe
even pushing it across the transition, making it an insulator. We have
analyzed the \keyword{doping} dependence of the Mott transition for the same
Hamiltonian as used for K$_3$C$_{60}$, changing the filling of the $t_{1u}$
band from $n=1$ to 5 \cite{doping}. This model describes the
Fm${\bar 3}$m-Fullerides A$_n$C$_{60}$ with fcc lattice and orientational
disorder \cite{rmp}. The critical Coulomb interaction $U_c$, at which the
transition from a metal (for $U<U_c$) to an insulator ($U>U_c$) takes place,
is shown in Figure 9 for the different integer fillings. As expected from
the degeneracy argument, $U_c$ decreases away from $n=3$.
We note, however, that $U_c$ is asymmetric around half filling.
This asymmetry is not present in the simple degeneracy argument, where we
implicitly assumed that the lattice is bipartite. In such a
situation we have electron-hole symmetry, which implies symmetry around
half-filling. For frustrated lattices like the fcc lattice electron-hole
symmetry is broken, leading to the asymmetry in $U_c$ that is seen in Fig.~9.
\begin{figure}
\begin{minipage}[b]{5.5cm}
\caption[]{Doping dependence of the Mott transition. The error bars indicate
the estimate of the critical ratio $U_c/W$ for different integer
fillings of the $t_{1u}$ band. The calculations are for doped
Fm${\bar 3}$m Fullerides with fcc lattice structure and
orientational disorder. The shaded region shows the range
of $U/W$ in which the doped Fullerides are falling.}
\vspace{5ex}
\end{minipage}
\hspace{\fill}
\parbox[b]{6cm}{\epsfxsize=6cm\epsffile{Mott.eps}}
\end{figure}
\subsubsection*{Lattice dependence}
\begin{figwindow}[1,r,%
{{\epsfxsize=2cm\epsffile{triangle.eps}}},{\hspace*{\fill}\\Triangle}]
To understand the effect of \keyword{frustration} in terms of the hopping
arguments that we have made so far, we have to consider more than just one
next-neighbor hop.
The simplest system where we encounter frustration is a triangle with hopping
between neighboring sites. In the single electron case we can form a bonding
state with energy $E_{\rm min}=2\,t,\,$ but because of frustration we
cannot form an antibonding state. Instead the maximum eigenenergy is
$E_{\rm max}=t.\;$ Hence frustration leads to an asymmetric `band' of
width $W=3\,t.$
\end{figwindow}
In the many-body case the situation is different. Like in the degeneracy
argument we look at the hopping of an extra electron against a (frustrated)
antiferromagnetic background in the large-$U$ limit. For simplicity we assume
a non-degenerate system, i.e.\ there is one electron per site on the triangle,
plus the extra electron. In this case we have to move the extra charge
{\em twice} around the triangle to come back to the many-body state we started
from. Thus in the large-$U$ limit the many-body problem is an eigenvalue
problem of a $6\times6$ matrix with extreme eigenvalues $\pm2\,t.\;$
In the degeneracy argument we have assumed that the kinetic energy of the
extra charge is given by $W/2$. On the triangle, we find, however, that the
hopping energy is larger than that by a factor $4/3$. This suggests that
for frustrated systems the single electron band width $W$ in (\ref{Egap})
should be multiplied by a prefactor larger than one. We therefore expect that
frustration alone, even without degeneracy, shifts the Mott transition to
larger $U.$
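The $6\times6$ eigenvalue problem can be verified directly: in the large-$U$ limit the six many-body states generated by carrying the extra charge twice around the triangle form a ring coupled by the matrix element $t$. A minimal numerical check (our sketch, with $t=1$) reads:
\begin{verbatim}
import numpy as np

t = 1.0
H = np.zeros((6, 6))
for i in range(6):            # the six many-body states form a ring:
    j = (i + 1) % 6           # two trips around the triangle
    H[i, j] = H[j, i] = t
evals = np.linalg.eigvalsh(H)
print(evals.min(), evals.max())   # extreme eigenvalues -2t and +2t
\end{verbatim}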
To analyze the effect of frustration on the Mott transition we have determined
the critical $U$ for a hypothetical doped Fulleride A$_4$C$_{60}$ with body
centered tetragonal (bct) structure, a lattice without frustration, having
the same band width ($W=0.6\,eV$) as the fcc-Fullerides, shown
in Figure 9. For $U=1.3\,eV$, we find a gap $E_g\approx0.6\,eV$ for the
Fulleride with bct structure, while the frustrated fcc compound is still
metallic ($E_g=0$). This difference is entirely due to the lattice structure.
Using realistic parameters for K$_4$C$_{60}$ \cite{A4C60}, which crystallizes
in a bct structure,
we find a Mott insulator with gap $E_g\approx 0.7\,eV$, which is in line with
experimental findings: $E_g=0.5\pm0.1\,eV$ \cite{Knupfer}.
\subsubsection*{Conclusion}
We have seen that, due to more efficient hopping, orbital degeneracy increases
the critical $U$ at which the Mott transition takes place. This puts
the integer-doped Fullerides close to a Mott transition. Whether they are on
the metallic or insulating side depends on the filling of the band and the
lattice structure: Since the degeneracy enhancement works best for a
half-filled band, systems doped away from half-filling tend to be more insulating.
The effect of frustration, on the other hand, is to make the system more
metallic.
\section*{Acknowledgments}
This work has been supported by the Alexander-von-Humboldt Stiftung under the
Feodor-Lynen-Program and the Max-Planck-Forschungspreis.
\chapter*{List of Abbreviations}
\begin{table}[t!]
\begin{tabular}{lll}
AP & & Affine Projection\\
AR & & Auto Regressive\\
LMS & & Least-Mean-Square\\
MSE & & Mean-Squared Error\\
NLMS & & Normalized LMS\\
PAPA & & Proportionate AP Algorithm\\
SM-AP & & Set-Membership Affine Projection\\
SM-NLMS & & Set-Membership Normalized LMS\\
SM-PAPA & & Set-Membership PAPA\\
SSM-AP & & Sparsity-aware SM-AP
\end{tabular}
\end{table}
\chapter{Introduction}
In the last decades, the volume of data to be processed and kept for storage has proliferated, mainly due to the increased availability of low-cost sensors and storage devices. As examples, we can mention the usage of multiple antennas in multiple-input and multiple-output wireless communication systems, the application of multiple audio devices in speech enhancement and audio signal processing, and the employment of echo cancellers in small or handheld communication devices. Moreover, these technological features are continuously spreading.
Our world is overwhelmed by data, and to benefit from them in our daily lives, we need to process the data correctly. A significant amount of data, however, brings about no new information, so that only part of it is particularly useful~\cite{Berberidis_censor_data_tsp2016,Wang_Big_data_GlobalSIP2014}. Therefore, we are compelled to improve our ability to evaluate the importance of the received data. This capability is called {\it data selection}. It enables the derivation of {\it data-selective adaptive filters}, which can neglect undesired data in a smart way. These filters are designed to reject the redundant data and perform their modeling tasks utilizing a small fraction of the available data.
Data-selective adaptive filters evaluate, select, and process data at each iteration of their learning process. These filters assess the data and choose only the ones bringing about some innovation. This property of the data-selective adaptive filters distinguishes them from the family of classical adaptive filters, which consider all data. In particular, these data-selective adaptive filters improve the accuracy of the estimator and decrease the computational complexity at the same time~\cite{Hamed_robustnessSMNLMS_sam2016,Hamed_robustnessSM_EURASIP2017,Markus_sparseSMAP_tsp2014}.
In this thesis, to apply the data selection, we employ the {\it set-membership filtering} (SMF)\abbrev{SMF}{Set-Membership Filtering} approach~\cite{Gollamudi_smf_letter1998,Diniz_adaptiveFiltering_book2013}. The set-membership (SM) adaptive filtering algorithm aims at estimating
the system such that the magnitude of the estimation output error is upper bounded by a predetermined positive constant called the threshold. The threshold is usually chosen based on {\it a priori} information about the sources of uncertainty. A comparison between traditional and SM adaptive filters was performed in~\cite{Diniz_adaptiveFiltering_book2013,Markus-phdthesis}, where the results showed that the algorithms employing the SMF\abbrev{SMF}{Set-Membership Filtering} strategy require lower computational resources as compared to the conventional adaptive filters. The SMF\abbrev{SMF}{Set-Membership Filtering} algorithms, however, are not so widely used since there is a lack of analysis tools and only a limited number of set-membership adaptive filtering algorithms available. This thesis introduces new algorithms employing the SMF\abbrev{SMF}{Set-Membership Filtering} approach and provides some analysis tools.
This chapter is organized as follows. Section~\ref{sec:motivation-chap1} contains the main motivations. The targets of this thesis are given in Section~\ref{sec:target-chap1}. Section~\ref{sec:profile-chap1} describes the contributions of this thesis. Finally, the notation is explained in Section~\ref{sec:notation-chap1}.
\section{Motivations} \label{sec:motivation-chap1}
The area of {\it Digital Signal Processing} has been part of our daily lives for decades now, since it is at the core of virtually every electronic gadget we have been utilizing, ranging from medical equipment to mobile phones. If we have full information about the signals,
we can apply the most suitable algorithm (a digital filter for instance) to process the signals. However, if we do not know the statistical properties of the signals, a possible solution is to utilize an adaptive filter that automatically modifies its characteristics to match the behavior of the observed data.
Adaptive filters~\cite{Diniz_adaptiveFiltering_book2013,Sayed_adaptiveFilters_book2008,Haykin_adaptiveFiltering_book2002} are utilized in several electronic and communication devices, such as smartphones, advanced headphones, DSP chips, smart antennas, and microphone arrays for teleconference systems. Also, they have application in many areas such as system identification~\cite{Raffaello_rls_dcd_eusipco2016}, channel equalization~\cite{Diniz_semiblind_ds_iscas2008}, noise reduction~\cite{Andersen_atf_taslp2016}, echo cancellation~\cite{Ruiz_acoustic_ec_its2014}, interference cancellation~\cite{Rodrigo_multi-antenna_twc2013}, signal prediction~\cite{Hamed_smtrinion-tcssII2016}, acoustic images~\cite{Ehrenfried_damas_aiaa2007}, stock market~\cite{Zheng_stock_market_icca2010}, etc. Due to the diversity of applications of adaptive signal processing, traditional adaptive filters cannot meet the needs of every application. An ideal adaptive filter would have low processing time, high accuracy in the learning process, low energy consumption, low memory usage, etc. These properties, however, conflict with each other.
An adaptive filter uses an algorithm to adjust its coefficients. An algorithm is a procedure to modify the coefficients in order to minimize a prescribed criterion. The algorithm is characterized by defining the search method, the objective function, and the error signal nature. The traditional algorithms in adaptive filtering implement coefficient updates at each iteration. However, when the adaptive filter learns from the observed data and reaches its steady state, it is desirable that the adaptive filter has the ability to reduce its energy consumption since there is less information to be learned. Here the importance of data-selective adaptive filters becomes apparent: they assess the input data and, according to the innovation the data carry, decide whether or not to perform an update.
After defining the set-membership adaptive filtering algorithms as a subset of the data-selective adaptive filters, many works have shown how effective these algorithms are in reducing the energy consumption. In some environments they can decrease the number of updates by 80$\%$~\cite{Diniz_adaptiveFiltering_book2013,Markus-phdthesis}. This thesis, however, shows that there is room for improvements regarding the reduction in the number of arithmetic operations and energy consumption, as discussed in Chapters 5 and 6.
\section{Targets} \label{sec:target-chap1}
The targets of this thesis are:
\begin{itemize}
\item To analyze the performance of some existing set-membership adaptive filtering algorithms to confirm their competitive performance as compared to the classical adaptive filtering approaches;
\item To develop data-selective adaptive filtering algorithms beyond the real and complex numbers, and examine the advantage of the set-membership technique in different mathematical number systems;
\item To improve some existing set-membership adaptive filtering algorithms to bring about improvements in performance and computational complexity;
\item To introduce some new sparsity-aware set-membership adaptive filtering algorithms with low computational burden;
\item To exploit the hidden sparsity in the linear combination of parameters of adaptive filters.
\end{itemize}
In a nutshell, in this thesis, we improve and analyze data-selective adaptive filtering algorithms.
\section{Thesis Contributions} \label{sec:profile-chap1}
In this thesis, we analyze the robustness of classical set-membership adaptive filtering algorithms and extend these conventional algorithms to the trinion and the quaternion number systems. In addition, we introduce an improved version of a set-membership adaptive filtering algorithm along with the partial updating strategy. Moreover, we develop some algorithms for sparse systems utilizing the SMF\abbrev{SMF}{Set-Membership Filtering} technique. Finally, we try to exploit the hidden sparsity in systems with lowpass or highpass frequency characteristics. To address such topics, the text is hereinafter organized as follows.
Chapter 2 introduces some conventional adaptive filtering algorithms, such as the least-mean-square (LMS), the normalized LMS (NLMS), the affine projection (AP), and the recursive least-squares (RLS) ones. Then, we review the set estimation theory in adaptive signal processing and present the set-membership filtering (SMF)\abbrev{SMF}{Set-Membership Filtering} strategy. Also, we give a short review of the set-membership normalized least-mean-square (SM-NLMS) \abbrev{SM-NLMS}{Set-Membership Normalized LMS}and the set-membership affine projection (SM-AP) algorithms.
In Chapter 3, we address the robustness, in the sense of $l_2$-stability, of the SM-NLMS \abbrev{SM-NLMS}{Set-Membership Normalized LMS}and the SM-AP\abbrev{SM-AP}{Set-Membership Affine Projection} algorithms. For the SM-NLMS \abbrev{SM-NLMS}{Set-Membership Normalized LMS}algorithm, we demonstrate that it is robust regardless of the choice of its parameters and that the SM-NLMS \abbrev{SM-NLMS}{Set-Membership Normalized LMS}enhances the parameter estimation in most of the iterations in which an update occurs, two advantages over the classical NLMS algorithm. Moreover, we also prove that if the noise bound is known, then we can set the SM-NLMS \abbrev{SM-NLMS}{Set-Membership Normalized LMS}so that it never degrades the estimate. As for the SM-AP\abbrev{SM-AP}{Set-Membership Affine Projection} algorithm, we demonstrate that its robustness depends on a judicious choice of one of its parameters: the constraint vector (CV). We prove the existence of CVs satisfying the robustness condition, but practical choices remain unknown. We also demonstrate that both the SM-AP\abbrev{SM-AP}{Set-Membership Affine Projection} and the SM-NLMS \abbrev{SM-NLMS}{Set-Membership Normalized LMS}algorithms do not diverge, even when their parameters are selected naively, provided the additional noise is bounded. Furthermore, numerical results that corroborate our analyses are presented.
In Chapter 4, we introduce new data-selective adaptive filtering algorithms for trinion and quaternion systems $\mathbb{T}$ and $\mathbb{H}$. The work advances the set-membership trinion- and quaternion-valued normalized least-mean-square (SMTNLMS\abbrev{SMTNLMS}{Set-Membership Trinion-Valued NLMS} and SMQNLMS)\abbrev{SMQNLMS}{Set-Membership Quaternion-Valued NLMS} and the set-membership trinion- and quaternion-valued affine projection (SMTAP\abbrev{SMTAP}{Set-Membership Trinion-Valued AP} and SMQAP)\abbrev{SMQAP}{Set-Membership Quaternion-Valued AP} algorithms. Also, as special cases, we obtain trinion- and quaternion-valued algorithms not employing the set-membership strategy. Prediction simulations based on recorded wind data are provided, showing the improved performance of the proposed algorithms regarding reduced computational load. Moreover, we study the application of quaternion-valued adaptive filtering algorithms to adaptive beamforming.
Usually, set-membership algorithms implement updates more regularly during the early iterations in stationary environments. Therefore, if these updates exhibit high computational complexity, an alternative solution is needed. A possible approach to partly control the computational complexity is to apply partial update technique, where only a subset of the adaptive filter coefficients is updated at each iteration. In Chapter 5, we present an improved set-membership partial-update affine projection (I-SM-PUAP)\abbrev{I-SM-PUAP}{Improved SM-PUAP} algorithm, aiming at accelerating the convergence rate, and decreasing the update rate of the set-membership partial-update affine projection (SM-PUAP)\abbrev{SM-PUAP}{Set-Membership Partial-Update AP} algorithm. To meet these targets, we constrain the weight vector perturbation to be bounded by a hypersphere instead of the threshold hyperplanes as in the standard algorithm. We use the distance between the present weight vector and the expected update in the standard SM-AP\abbrev{SM-AP}{Set-Membership Affine Projection} algorithm to construct the hypersphere. Through this strategy, the new algorithm shows better behavior in the early iterations. Simulation results verify the excellent performance of the proposed algorithm related to the convergence rate and the required number of updates.
In Chapter 6, we derive two LMS-based\abbrev{LMS}{Least-Mean-Square} algorithms, namely the simple set-membership affine projection (S-SM-AP)\abbrev{S-SM-AP}{Simple SM-AP} and the improved S-SM-AP\abbrev{IS-SM-AP}{Improved S-SM-AP} (IS-SM-AP), in order to exploit the sparsity of an unknown system while focusing on having low computational cost. To achieve this goal, the proposed algorithms apply a discard function on the weight vector to disregard the coefficients close to zero during the update process. In addition, the IS-SM-AP\abbrev{IS-SM-AP}{Improved S-SM-AP} algorithm reduces the overall number of computations required by the adaptive filter even further by replacing small coefficients with zero. Moreover, we introduce the $l_0$ norm RLS ($l_0$-RLS)\abbrev{$l_0$-RLS}{$l_0$ Norm RLS} and the RLS\abbrev{RLS}{Recursive Least-Squares} algorithm for sparse models (S-RLS)\abbrev{S-RLS}{RLS Algorithm for Sparse System}. Also, we derive the data-selective version of these RLS-based\abbrev{RLS}{Recursive Least-Squares} algorithms. Simulation results show similar performance when comparing the proposed algorithms with some existing state-of-the-art sparsity-aware algorithms while the proposed algorithms require lower computational complexity.
When our target is to detect and exploit sparsity in the model parameters, in many situations, the sparsity is hidden in the relations among these coefficients so that some suitable tools are required to reveal the potential sparsity. Chapter 7 proposes a set of least-mean-square (LMS)\abbrev{LMS}{Least-Mean-Square} type algorithms, collectively called feature LMS (F-LMS)\abbrev{F-LMS}{Feature LMS} algorithms, setting forth a hidden feature of the unknown parameters, which ultimately would improve convergence speed and steady-state mean-squared error. The fundamental idea is to apply linear transformations, by means of the so-called feature matrices, to reveal the sparsity hidden in the coefficient vector, followed by a sparsity-promoting penalty function to exploit such sparsity. Some F-LMS\abbrev{F-LMS}{Feature LMS} algorithms for lowpass and highpass systems are also introduced by using simple feature matrices that require only trivial operations. Simulation results demonstrate that the proposed F-LMS\abbrev{F-LMS}{Feature LMS} algorithms bring about several performance improvements whenever the hidden sparsity of the parameters is exposed.
Finally, Chapter 8 highlights the conclusions of the work and gives some directions for future work regarding the topics addressed in the thesis.
\section{Notation} \label{sec:notation-chap1}
In this section, we introduce most of the usual notation utilized in this thesis. However, in order to avoid confusing the reader, we refrain from presenting here the definitions of rarely used notation, introducing them only when they become necessary.
Equalities are shown by $=$, and when they refer to a definition, we use $\triangleq$.\symbl{$\triangleq$}{Definition} The real, nonnegative real, natural, integer, complex, trinion, and quaternion numbers are denoted by $\mathbb{R}$, $\mathbb{R}_+$, $\mathbb{N}$, $\mathbb{Z}$, $\mathbb{C}$, $\mathbb{T}$, and $\mathbb{H}$, respectively. \symbl{$\mathbb{R}$}{Set of real numbers} \symbl{$\mathbb{R}_+$}{Set of nonnegative real numbers} \symbl{$\mathbb{N}$}{Set of natural numbers} \symbl{$\mathbb{Z}$}{Set of integer numbers} \symbl{$\mathbb{C}$}{Set of complex numbers} \symbl{$\mathbb{T}$}{Set of trinion numbers} \symbl{$\mathbb{H}$}{Set of quaternion numbers}
Moreover, scalars are represented by lowercase letters (e.g., $x$), vectors by lowercase boldface letters (e.g., $\xbf$), and matrices by uppercase boldface letters (e.g., $\Xbf$). The
symbols $(\cdot)^T$ and $(\cdot)^H$ stand for the transposition\symbl{$(\cdot)^T$}{Transposition of $(\cdot)$} and Hermitian operators,\symbl{$(\cdot)^H$}{Hermitian transposition of $(\cdot)$} respectively. Also, all vectors are column vectors in order that the inner product between two vectors $\xbf$ and $\ybf$ is defined as $\xbf^T\ybf$ or $\xbf^H\ybf$.
We represent the trace operator by ${\rm tr}(\cdot)$.\symbl{${\rm tr}(\cdot)$}{Trace of matrix} The identity matrix and zero vector (matrix) are denoted by $\Ibf$ \symbl{$\Ibf$}{Identity matrix} and ${\bf 0}$,\symbl{${\bf 0}$}{Zero vector or zero matrix} respectively. Also, ${\rm diag}(\xbf)$ stands for a diagonal matrix with vector $\xbf$ on its diagonal and zero outside it.\symbl{${\rm diag}(\xbf)$}{Diagonal matrix with $\xbf$ on its diagonal} Furthermore, $\mathbb{P}[\cdot]$ and $\mathbb{E}[\cdot]$ denote the probability\symbl{$\mathbb{P}$}{Probability operator} and the expected value operators,\symbl{$\mathbb{E}$}{Expected value operator} respectively. Also, $\|\cdot\|$ denotes the $l_2$ norm (when the norm is not defined explicitly, we are referring to the $l_2$ norm).
\chapter{Conventional and Set-Membership Adaptive Filtering Algorithms}
The {\it point estimation theory}~\cite{Lehmann_pointEstimation_book2003} utilizes sample data for computing a single solution as the best estimate of an unknown parameter. For decades, machine learning and adaptive filtering have been grounded in the point estimation theory~\cite{Diniz_adaptiveFiltering_book2013,Sayed_adaptiveFilters_book2008,Haykin_adaptiveFiltering_book2002,Theodoridis_Pattern_Recognition_book2008,Bishop_Pattern_Recognition_book2011}. Nowadays, however, the benefit of the set estimation approach is becoming clearer as its advantages are disclosed~\cite{Combettes_foundationSetTheoreticEstimation_procIEEE1993,Markus_edcv_eusipco2013,Combettes_noise_SetTheoretic_tsp1991}.
In contrast with the world of theoretical models, in the real world we live with uncertainties originating from measurement noise, quantization, interference, modeling errors, etc. Therefore, searching for the solution utilizing point estimation theory sometimes results in a waste of energy and time. An alternative is to address the problem from the {\it set estimation theory}~\cite{Combettes_foundationSetTheoreticEstimation_procIEEE1993} point of view. In fact, in this approach, we search for a set of acceptable solutions instead of a unique point as a solution.
The adaptive filtering algorithms presented in \cite{Haykin_adaptiveFiltering_book2002,Sayed_adaptiveFilters_book2008} exhibit a trade-off between convergence rate and misadjustment after transient, particularly in stationary environments. In general, fast converging algorithms lead to high variance estimators after convergence. To tackle this problem, we can apply set-membership filtering (SMF)~\abbrev{SMF}{Set-Membership Filtering}
\cite{Diniz_adaptiveFiltering_book2013,Markus-phdthesis}, which is a representative of the set estimation theory. The SMF\abbrev{SMF}{Set-Membership Filtering} technique prevents unnecessary updates and reduces the computational complexity by updating the filter coefficients only when the estimation error is greater than a predetermined upper bound~\cite{Fogel_valueOfInformation_automatica1982,Deller_smi_asspmag1989,Gollamudi_smf_letter1998}.
In set-membership adaptive filters, we try to find a {\it feasibility set} such that any member in this set has the output estimation error limited by a predetermined upper bound. For this purpose, the objective function of the algorithm is related to a bounded error constraint on the filter output, such that the updates are contained in a set of acceptable solutions. The inclusion of {\it a priori} information, such as the noise bound, into the objective function leads to some noticeable advantages. As compared with the normalized least-mean-square (NLMS)\abbrev{NLMS}{Normalized LMS} and the affine projection (AP)\abbrev{AP}{Affine Projection} algorithms, their set-membership counterparts have lower computational cost, better accuracy, data selection, and robustness against noise~\cite{Gollamudi_smf_letter1998,Gollamudi_smUpdatorShared_tsp1998,Nagaraj_beacon_tsp1999,Diniz_sm_bnlms_tsp2003,Werner_sm_ap_letter2001,Hamed_robustnessSMNLMS_sam2016,Hamed_robustnessSM_EURASIP2017}.
This chapter presents a brief review of some adaptive filtering algorithms. An interested reader should refer to~\cite{Diniz_adaptiveFiltering_book2013} for more details. Section~\ref{sec:conventional_algorithms} describes the point estimation adaptive filtering algorithms. Section~\ref{sec:SM-AF-chap2} reviews the SMF\abbrev{SMF}{Set-Membership Filtering} approach and the main set-membership algorithms. The estimation of the threshold parameter for big data applications is discussed in Section~\ref{sec:estimate_gamma_chap2}. Finally, Section~\ref{sec:conclusion-chap2} contains the conclusions.
\section{Point Estimation Adaptive Filtering \\Algorithms} \label{sec:conventional_algorithms}
In this section, we introduce some LMS-based adaptive filtering algorithms and the recursive least-squares (RLS)\abbrev{RLS}{Recursive Least-Squares} algorithm.
\subsection{Least-mean-square algorithm} \label{sub:lms-chap2}
The update equation of the least-mean-square (LMS)\abbrev{LMS}{Least-Mean-Square} algorithm is given by~\cite{Diniz_adaptiveFiltering_book2013}
\begin{align}
\wbf(k+1)=\wbf(k)+2\mu e(k)\xbf(k),
\end{align}
where $\xbf(k)=[x_0(k)~x_1(k)~\cdots~x_N(k)]^T$ and $\wbf(k)=[w_0(k)~w_1(k)~\cdots~w_N(k)]^T$ are the input signal vector and the weight vector, respectively.\symbl{$\xbf(k)$}{Input signal vector} \symbl{$\wbf(k)$}{Coefficient vector} \symbl{$k$}{Iteration counter} The output signal is defined by $y(k)\triangleq\wbf^T(k)\xbf(k)=\xbf^T(k)\wbf(k)$,\symbl{$y(k)$}{Output signal} and $e(k)\triangleq d(k)-y(k)$ denotes the error signal,\symbl{$e(k)$}{Error signal} where $d(k)$ is the desired signal.\symbl{$d(k)$}{Desired signal} The convergence factor $\mu$ \symbl{$\mu$}{Convergence factor} should be chosen in the range $0<\mu<\frac{1}{{\rm tr}[\Rbf]}$ to guarantee the convergence, where $\Rbf\triangleq\mathbb{E}[\xbf(k)\xbf^T(k)]$ is the correlation matrix. \symbl{$\Rbf$}{Correlation matrix}
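For concreteness, a minimal Python/NumPy sketch of the LMS recursion in a system identification setting could read as follows (the function name and signals are placeholders chosen for illustration):
\begin{verbatim}
import numpy as np

def lms(x, d, N, mu):
    # w(k+1) = w(k) + 2 mu e(k) x(k)
    w = np.zeros(N + 1)
    e = np.zeros(len(x))
    for k in range(N, len(x)):
        xk = x[k - N:k + 1][::-1]   # [x(k) ... x(k-N)]^T
        e[k] = d[k] - w @ xk        # e(k) = d(k) - w^T(k) x(k)
        w = w + 2.0 * mu * e[k] * xk
    return w, e
\end{verbatim}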
\subsection{Normalized LMS algorithm} \label{sub:nlms-chap2}
To increase the convergence rate of the LMS\abbrev{LMS}{Least-Mean-Square} algorithm without using matrix $\Rbf$, we can utilize the NLMS\abbrev{NLMS}{Normalized LMS} algorithm. The recursion rule of the NLMS\abbrev{NLMS}{Normalized LMS} algorithm is described by~\cite{Diniz_adaptiveFiltering_book2013}
\begin{align}
\wbf(k+1)=\wbf(k)+\frac{\mu_n}{\xbf^T(k)\xbf(k)+\delta}e(k)\xbf(k),
\end{align}
where $\delta$ is a small regularization factor,\symbl{$\delta$}{Regularization factor} and the step size $\mu_n$ should be selected in the range $0<\mu_n<2$.
\subsection{Affine projection algorithm} \label{sub:ap-chap2}
When the input signal is correlated, it is possible to use old data signals to improve the convergence speed of the algorithm. For this purpose, let us utilize the last $L+1$ input signal vectors and form matrix $\Xbf(k)$ as
\begin{align}
\Xbf(k)=[\xbf(k)~\xbf(k-1)~\cdots~\xbf(k-L)]\in\mathbb{R}^{(N+1)\times(L+1)}.
\end{align}
Also, let us define the desired signal vector $\dbf(k)$, the output signal vector $\ybf(k)$, and the error signal vector $\ebf(k)$ as follows
\begin{align}
\dbf(k)&=[d(k)~d(k-1)~\cdots~d(k-L)]^T,\nonumber\\
\ybf(k)&\triangleq\wbf^T(k)\Xbf(k)=\Xbf^T(k)\wbf(k),\nonumber\\
\ebf(k)&\triangleq\dbf(k)-\ybf(k).
\end{align}
Then, the update rule of the affine projection (AP)\abbrev{AP}{Affine Projection} algorithm is described by~\cite{Diniz_adaptiveFiltering_book2013}
\begin{align}
\wbf(k+1)=\wbf(k)+\mu\Xbf(k)[\Xbf^T(k)\Xbf(k)]^{-1}\ebf(k),
\end{align}
where $\mu$ is the convergence factor.
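A corresponding sketch of one AP iteration is given below; the small regularization $\delta$ added before the inversion is our assumption to keep the example numerically safe:
\begin{verbatim}
import numpy as np

def ap_update(w, X, d_vec, mu, delta=1e-8):
    # w(k+1) = w(k) + mu X(k) [X^T(k) X(k)]^{-1} e(k)
    e = d_vec - X.T @ w
    A = np.linalg.solve(X.T @ X + delta * np.eye(X.shape[1]), e)
    return w + mu * X @ A
\end{verbatim}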
\subsection{Recursive least-squares algorithm} \label{sub:rls-chap2}
Here, we review the RLS\abbrev{RLS}{Recursive Least-Squares} algorithm. The goal of this algorithm is to match the output signal to the desired signal as much as possible.
The objective function of the RLS\abbrev{RLS}{Recursive Least-Squares} algorithm is given by
\begin{align}
\zeta(k)=\sum_{i=0}^k\lambda^{k-i}\varepsilon^2(i)=\sum_{i=0}^k\lambda^{k-i}[d(i)-\xbf^T(i)\wbf(k)]^2,
\end{align}
where $\lambda$ is a forgetting factor, which should be chosen in the range $0\ll\lambda\leq1$, and $\varepsilon(i)$ is called the {\it a posteriori} error.\symbl{$\varepsilon(k)$}{{\it A posteriori} error signal} Note that in the derivation of the LMS-based\abbrev{LMS}{Least-Mean-Square} algorithms we use the {\it a priori} error, whereas for the RLS\abbrev{RLS}{Recursive Least-Squares} algorithm we utilize the {\it a posteriori} error.
If we differentiate $\zeta(k)$ with respect to $\wbf(k)$ and equate the result to zero, we get the optimal coefficient vector $\wbf(k)$~\cite{Diniz_adaptiveFiltering_book2013}
\begin{align}
\wbf(k)=\Big[\sum_{i=0}^k\lambda^{k-i}\xbf(i)\xbf^T(i)\Big]^{-1}\sum_{i=0}^k\lambda^{k-i}\xbf(i)d(i)=\Rbf_D^{-1}(k)\pbf_D(k),
\end{align}
where $\Rbf_D(k)$ and $\pbf_D(k)$ are named the deterministic correlation matrix of the input signal and the deterministic cross-correlation vector between the input and the desired signals, respectively.\symbl{$\Rbf_D(k)$}{Deterministic correlation matrix of the input signal} \symbl{$\pbf_D(k)$}{Deterministic cross-correlation vector between the input and the desired signals} By using the matrix inversion lemma~\cite{Goodwin_Dynamic_system_id_book1977}, the inverse of $\Rbf_D(k)$ can be given by\symbl{$\Sbf_D(k)$}{The inverse of $\Rbf_D(k)$}
\begin{align}
\Sbf_D(k)=\Rbf_D^{-1}(k)=\frac{1}{\lambda}\Big[\Sbf_D(k-1)-\frac{\Sbf_D(k-1)\xbf(k)\xbf^T(k)\Sbf_D(k-1)}{\lambda+\xbf^T(k)\Sbf_D(k-1)\xbf(k)}\Big].
\end{align}
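A minimal sketch of the resulting RLS recursion, using the matrix inversion lemma update of $\Sbf_D(k)$ above (the initialization of $\Sbf_D$ with a large multiple of the identity is a common practical assumption), could be:
\begin{verbatim}
import numpy as np

def rls(x, d, N, lam=0.99, eps=1e2):
    w = np.zeros(N + 1)
    S = eps * np.eye(N + 1)        # S_D = R_D^{-1}, large initial value
    for k in range(N, len(x)):
        xk = x[k - N:k + 1][::-1]
        Sx = S @ xk                # matrix inversion lemma update of S_D
        S = (S - np.outer(Sx, Sx) / (lam + xk @ Sx)) / lam
        e = d[k] - w @ xk          # a priori error
        w = w + (S @ xk) * e
    return w
\end{verbatim}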
\section{Set-Membership Adaptive Filtering \\Algorithms} \label{sec:SM-AF-chap2}
In this section, we firstly introduce the set-membership filtering (SMF)\abbrev{SMF}{Set-Membership Filtering} approach. Secondly, we present the SM-NLMS\abbrev{SM-NLMS}{Set-Membership Normalized LMS} algorithm. Finally, we review the SM-AP\abbrev{SM-AP}{Set-Membership Affine Projection} algorithm.
\subsection{Set-membership filtering} \label{sub:SMF-chap2}
The SMF\abbrev{SMF}{Set-Membership Filtering} approach proposed in~\cite{Gollamudi_smf_letter1998} is suitable for adaptive filtering problems that are linear in parameters. Thus, for a given input signal vector $\xbf(k)\in\mathbb{R}^{N+1}$ at iteration $k$ and the filter coefficients $\wbf\in\mathbb{R}^{N+1}$, the output signal of the filter is obtained by
\begin{align}
y(k)=\wbf^T\xbf(k),
\end{align}
where $\xbf(k)=[x_0(k)~x_1(k)~\cdots~x_N(k)]^T$ and $\wbf=[w_0~w_1~\cdots~w_N]^T$.
For a desired signal sequence $d(k)$, the estimation error sequence $e(k)$ is computed as
\begin{align}
e(k)=d(k)-y(k).
\end{align}
The SMF\abbrev{SMF}{Set-Membership Filtering} criterion aims at estimating the parameter $\wbf$ such that the magnitude of the estimation output error is upper bounded by a constant $\gammabar\in\mathbb{R}_+$, for all possible pairs $(\xbf,d)$.\symbl{$\gammabar$}{Upper bound for the magnitude of the error signal} If the value of $\gammabar$ is suitably selected, there are various valid estimates for $\wbf$. The threshold is usually chosen based on {\it a priori} information about the sources of uncertainty. Note that any $\wbf$ leading to an output estimation error with magnitude smaller than $\gammabar$ is an acceptable solution. Hence, we obtain a set of filters rather than a single estimate.
Let us denote by ${\cal S}$ the set comprised of all possible pairs $(\xbf,d)$.\symbl{${\cal S}$}{Set comprised of all possible pairs $(\xbf,d)$} We want to find $\wbf$ such that $|e|=|d-\wbf^T\xbf|\leq\gammabar$ for all $(\xbf,d)\in{\cal S}$. Therefore, the {\it feasibility set} $\Theta$ will be defined as\symbl{$\Theta$}{Feasibility set}
\begin{align}
\Theta\triangleq\bigcap_{(\xbf,d)\in{\cal S}}\{\wbf\in\mathbb{R}^{N+1}:|d-\wbf^T\xbf|\leq\gammabar\},
\end{align}
so that the SMF\abbrev{SMF}{Set-Membership Filtering} criterion can be stated as finding $\wbf\in\Theta$.
In the case of online applications, we do not have access to all members of ${\cal S}$. Thus, we consider the practical case in which only measured data are available and develop iterative techniques. Suppose that a set of data pairs $\{(\xbf(0),d(0)),(\xbf(1),d(1)),\cdots,(\xbf(k),d(k))\}$ is available, and define the {\it constraint set} ${\cal H}(k)$ at time instant $k$ as\symbl{${\cal H}(k)$}{Constraint set at iteration $k$}
\begin{align}
{\cal H}(k)\triangleq \{\wbf\in\mathbb{R}^{N+1}:|d(k)-\wbf^T\xbf(k)|\leq\gammabar\}.
\end{align}
Also, define the {\it exact membership set} $\psi(k)$ as the intersection of the constraint sets from the beginning, i.e. the first iteration, to iteration $k$,\symbl{$\psi(k)$}{Exact membership set} or
\begin{align}
\psi(k)\triangleq \bigcap_{i=0}^k{\cal H}(i).
\end{align}
Then, $\Theta$ can be iteratively estimated via the exact membership set since $\lim_{k\rightarrow\infty}\psi(k)=\Theta$.
Figure \ref{fig:smf-chap2} shows the geometrical interpretation of the SMF\abbrev{SMF}{Set-Membership Filtering} principle. The boundaries of the constraint sets are hyperplanes, and ${\cal H}(k)$ corresponds to the region between the parallel hyperplanes in the parameter space. The exact membership set represents a polytope in the parameter space. The volume of $\psi(k)$ decreases for each $k$ in which the pairs $(\xbf(k),d(k))$ bring about some innovation. Note that $\Theta\subset\psi(k)$ for all $k$, since $\Theta$ is the intersection of all possible constraint sets.
\begin{figure}[t!]
\begin{center}
\includegraphics[width=.85\linewidth]{Figs/smf-chap2.pdf}
\caption{SMF geometrical interpretation in the parameter space $\psi(1)$ (redrawn from \cite{Markus-phdthesis}).}
\label{fig:smf-chap2}
\end{center}
\end{figure}
The target of set-membership adaptive filtering is to obtain adaptively an estimate that belongs to the feasibility set. The simplest method is to calculate a point estimate using, for example, the information
provided by ${\cal H}(k)$ similar to the set-membership NLMS\abbrev{NLMS}{Normalized LMS} algorithm described in the following subsection, or several previous ${\cal H}(k)$ like in the SM-AP\abbrev{SM-AP}{Set-Membership Affine Projection} algorithm discussed in Subsection~\ref{sub:sm-ap-chap2}.
\subsection{Set-membership normalized LMS algorithm} \label{sub:sm-nlms-chap2}
The set-membership NLMS\abbrev{NLMS}{Normalized LMS} algorithm, first proposed in \cite{Gollamudi_smf_letter1998}, implements a test to check if the previous estimate $\wbf(k)$ lies outside the constraint set ${\cal H}(k)$. If $|d(k)-\wbf^T(k)\xbf(k)|>\gammabar$, then $\wbf(k+1)$ will be updated to the closest boundary of ${\cal H}(k)$ at a minimum distance. Figure~\ref{fig:sm-nlms-chap2} depicts the updating procedure of the SM-NLMS \abbrev{SM-NLMS}{Set-Membership Normalized LMS}algorithm.
\begin{figure}[t!]
\begin{center}
\includegraphics[width=.75\linewidth]{Figs/sm-nlms-chap2.pdf}
\caption{Coefficient vector updating for the SM-NLMS algorithm (redrawn from \cite{Diniz_adaptiveFiltering_book2013}).}
\label{fig:sm-nlms-chap2}
\end{center}
\end{figure}
The SM-NLMS \abbrev{SM-NLMS}{Set-Membership Normalized LMS}algorithm has the updating rule
\begin{align}
\wbf(k+1)= \wbf(k)+\frac{\mu(k)}{\xbf^T(k)\xbf(k)+\delta}e(k)\xbf(k), \label{eq:sm-nlms-update-chap2}
\end{align}
where the variable step size $\mu(k)$ is given by
\begin{align}
\mu(k)=\left\{\begin{array}{ll}1-\frac{\gammabar}{|e(k)|}&\text{if }|e(k)|>\gammabar,\\0&\text{otherwise},\end{array}\right.
\end{align}
and $\delta$ is a small regularization factor. As
a rule of thumb, the value of $\gammabar$ is selected as about $\sqrt{\tau\sigma_n^2}$, where $\sigma_n^2$ is the variance of the additional noise~\cite{Gollamudi_smf_letter1998,Galdino_SMNLMS_gammabar_ISCAS2006}, and $1\leq\tau\leq5$.
Note that the NLMS algorithm can be obtained from the SM-NLMS algorithm. Indeed, the NLMS\abbrev{NLMS}{Normalized LMS} algorithm with unit step size is a particular case of the SM-NLMS \abbrev{SM-NLMS}{Set-Membership Normalized LMS}algorithm, obtained by adopting $\gammabar=0$.
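A minimal sketch of the SM-NLMS update of Equation~(\ref{eq:sm-nlms-update-chap2}), illustrating the data-selective behavior, is given below (the signals are assumed given; setting the threshold to zero recovers the unit step size NLMS update):
\begin{verbatim}
import numpy as np

def sm_nlms_update(w, xk, dk, gamma_bar, delta=1e-12):
    e = dk - w @ xk
    if abs(e) > gamma_bar:               # update only on innovation
        mu = 1.0 - gamma_bar / abs(e)    # variable step size mu(k)
        w = w + (mu / (xk @ xk + delta)) * e * xk
    return w
\end{verbatim}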
\subsection{Set-membership affine projection algorithm} \label{sub:sm-ap-chap2}
The exact membership set $\psi(k)$ suggests the use of more constraint sets in the update~\cite{Werner_sm_ap_letter2001}. Moreover, it is widely known that data-reusing algorithms can increase convergence speed significantly for correlated-input
signals~\cite{Diniz_adaptiveFiltering_book2013,Haykin_adaptiveFiltering_book2002,Ozeki_ap_japan1984}. This section introduces the SM-AP\abbrev{SM-AP}{Set-Membership Affine Projection} algorithm whose updates belong to the last $L+1$ constraint sets. For this purpose, let us define the input signal matrix $\Xbf(k)$, the output signal vector $\ybf(k)$, the error signal vector $\ebf(k)$, the desired signal vector $\dbf(k)$,
the additive noise signal vector $\nbf(k)$, and the constraint vector (CV)\abbrev{CV}{Constraint Vector} $\gammabf(k)$ as \symbl{$\Xbf(k)$}{Input signal matrix} \symbl{$\ybf(k)$}{Output signal vector} \symbl{$\ebf(k)$}{Error signal vector} \symbl{$\dbf(k)$}{Desired signal vector} \symbl{$\nbf(k)$}{Additive noise signal vector} \symbl{$\gammabf(k)$}{Constraint vector}
\begin{equation}
\begin{aligned}
\Xbf(k)&=[\xbf(k)~\xbf(k-1)~\cdots~\xbf(k-L)] \in\mathbb{R}^{(N+1)\times (L+1)},\\
\xbf(k)&=[x(k)~x(k-1)~\cdots~x(k-N)]^T\in\mathbb{R}^{N+1},\\
\ybf(k)&=[y(k)~y(k-1)~\cdots~y(k-L)]^T\in\mathbb{R}^{L+1},\\
\ebf(k)&=[e(k)~\epsilon(k-1)~\cdots~\epsilon(k-L)]^T \in\mathbb{R}^{L+1},\\
\dbf(k)&=[d(k)~d(k-1)~\cdots~d(k-L)]^T \in\mathbb{R}^{L+1},\\
\nbf(k)&=[n(k)~n(k-1)~\cdots~n(k-L)]^T \in\mathbb{R}^{L+1},\\
\gammabf(k)&=[\gamma_0(k)~\gamma_1(k)~\cdots~\gamma_L(k)]^T \in\mathbb{R}^{L+1},
\label{eq:pack-chap2}
\end{aligned}
\end{equation}
where $N$ is the order of the adaptive filter\symbl{$N$}{Order of the FIR adaptive filter}, and $L$ is the data-reusing factor\symbl{$L$}{Data reuse factor}, i.e.,
$L$ previous data are used together with the data from the current iteration $k$.
The output signal vector is defined as $\ybf(k)\triangleq\wbf^T(k)\Xbf(k)=\Xbf^T(k)\wbf(k)$, the desired signal vector is given by $\dbf(k)\triangleq\wbf_o^T\Xbf(k)+\nbf(k)$, where $\wbf_o$ is the optimal solution (unknown system),\symbl{$\wbf_o$}{Impulse response of the unknown system} and the error signal vector is given by $\ebf(k) \triangleq \dbf(k)-\ybf(k)$. The entries of the constraint vector
should satisfy $| \gamma_i(k) |\leq \gammabar$, for $i=0,\ldots,L$, where $\gammabar \in \mathbb{R}_+$ is the upper bound for the
magnitude of the error signal $e(k)$.
The objective function to be minimized in the SM-AP\abbrev{SM-AP}{Set-Membership Affine Projection} algorithm can be stated as follows: a coefficient update is implemented whenever $\wbf(k)\not\in\psi^{L+1}(k)$ in such a way that
\begin{align}
&\min\frac{1}{2}\|\wbf(k+1)-\wbf(k)\|^2\nonumber\\
&\text{subject to:}\nonumber\\
&\dbf(k)-\Xbf^T(k)\wbf(k+1)=\gammabf(k),
\end{align}
where $\psi^{L+1}(k)$ is the intersection of the $L+1$ last constraint sets.
Figure~\ref{fig:sm-ap-chap2} shows a usual coefficient update related to the SM-AP\abbrev{SM-AP}{Set-Membership Affine Projection} algorithm in $\mathbb{R}^2$, with $L=1$ and $|\gamma_i(k)|\leq\gammabar$, such that $\wbf(k+1)$ is not placed at the border of ${\cal H}(k)$.
\begin{figure}[t!]
\begin{center}
\includegraphics[width=.85\linewidth]{Figs/sm-ap-chap2.pdf}
\caption{Coefficient vector updating for the SM-AP algorithm (redrawn from \cite{Diniz_adaptiveFiltering_book2013}).}
\label{fig:sm-ap-chap2}
\end{center}
\end{figure}
By using the method of Lagrange multipliers, after some manipulations, the recursion rule of the SM-AP\abbrev{SM-AP}{Set-Membership Affine Projection} algorithm will be described as
\begin{align}
\wbf(k+1) =\left\{\begin{array}{ll}\wbf(k)+\Xbf(k)\Abf(k)(\ebf(k)-\gammabf(k))&\text{if}~|e(k)|>\gammabar ,
\\ \wbf(k)&\text{ otherwise,}\end{array}\right. \ \label{eq:sm-ap}
\end{align}
where we assume that $\Abf(k)\triangleq(\Xbf^T(k)\Xbf(k))^{-1} \in\mathbb{R}^{(L+1)\times (L+1)}$ exists, i.e., $\Xbf^T(k)\Xbf(k)$ is a full-rank matrix. \symbl{$\Abf(k)$}{Auxiliary matrix $\Abf(k)\triangleq(\Xbf^T(k)\Xbf(k))^{-1}$}
Otherwise, we could add a regularization parameter as explained in~\cite{Diniz_adaptiveFiltering_book2013}.
Note that the AP\abbrev{AP}{Affine Projection} algorithm can be obtained from the SM-AP\abbrev{SM-AP}{Set-Membership Affine Projection} algorithm. In other words, the AP\abbrev{AP}{Affine Projection} algorithm with unity step size, aiming at improving the convergence speed of stochastic gradient algorithms, is a particular case of the SM-AP\abbrev{SM-AP}{Set-Membership Affine Projection} algorithm obtained by selecting $\gammabar=0$.
It is worthwhile to mention that when $L=0$ and $\gamma_0(k)=\frac{\gammabar e(k)}{|e(k)|}$, the SM-AP algorithm has the SM-NLMS algorithm as a special case.
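To illustrate Equation~(\ref{eq:sm-ap}), the sketch below implements one SM-AP iteration with one common choice of CV from the literature, in which $\gamma_0(k)=\gammabar\,{\rm sign}(e(k))$ and the remaining entries of $\gammabf(k)$ keep the corresponding past errors; the regularization $\delta$ is added as mentioned above:
\begin{verbatim}
import numpy as np

def sm_ap_update(w, X, d_vec, gamma_bar, delta=1e-8):
    e = d_vec - X.T @ w
    if abs(e[0]) <= gamma_bar:
        return w                          # no innovation: no update
    gamma = e.copy()                      # keep past a posteriori errors
    gamma[0] = gamma_bar * np.sign(e[0])  # push current error to the bound
    A = np.linalg.solve(X.T @ X + delta * np.eye(X.shape[1]),
                        e - gamma)
    return w + X @ A
\end{verbatim}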
\section{Estimating $\gammabar$ in the Set-Membership \\Algorithm for Big Data Application} \label{sec:estimate_gamma_chap2}
In big data applications, initially, it could be practical to prescribe a percentage of the amount of data we intend to utilize to achieve the desired performance. This percentage will be defined in accordance with our ability to analyze the data, taking into consideration the constraints on energy, computational time, and memory. After adopting a percentage of the update, our goal is to select the most informative data to be part of the corresponding selected percentage. Here, by taking the probability of updating into consideration, we will estimate the threshold in the SM-NLMS \abbrev{SM-NLMS}{Set-Membership Normalized LMS} and the SM-AP\abbrev{SM-AP}{Set-Membership Affine Projection} algorithms, which is responsible for censoring the data in accordance with the adopted percentage of the update. The content of this section is published in~\cite{Hamed_gamma_estimate_GlobalSIP2017}.
We want to obtain $\gammabar$ such that the algorithm considers the desired percentage of data to update its recursion rule. In fact, if the magnitude of the output estimation error is greater than $\gammabar$, the set-membership (SM)\abbrev{SM}{Set-Membership} algorithm will update since the current input and the desired signals carry enough innovation.
In general, for the desired update rate, $p$, we require computing
$\gammabar$ such that
\begin{align}
\mathbb{P}[|e(k)|>\gammabar]=p, \label{eq:single_gamma-chap2}
\end{align}
where $\mathbb{P}[\cdot]$ denotes the probability operator. Note that $p$ represents the update rate of the algorithm, i.e., the percentage of the data which we consider the most informative.
Given the probability density function of the error signal, then it is possible to compute $\gammabar$. Note that the error signal is the difference between the desired and the output signals, i.e.,
\begin{align}
e(k)&\triangleq d(k)-y(k)\triangleq \wbf_o^T\xbf(k)+n(k)-\wbf^T(k)\xbf(k)\nonumber\\
&=[\wbf_o-\wbf(k)]^T\xbf(k)+n(k)=\etilde(k)+n(k), \label{eq:error_signal-chap2}
\end{align}
where $\etilde(k)$ is the noiseless error signal, and $n(k)$ is the noise signal. \symbl{$\etilde(k)$}{Noiseless error signal} \symbl{$n(k)$}{Noise signal}
In the steady-state environment $\|\mathbb{E}[\wbf_o-\wbf(k)]\|_2^2<\infty$~\cite{Hamed_robustnessSM_EURASIP2017}, where $\mathbb{E}[\cdot]$ is the expected value operator and, in general, $\mathbb{E}[\wbf_o-\wbf(k)]\approx{\bf 0}$. Therefore, if the adaptive filter has sufficient order, then in the steady-state environment the distributions of the error signal and of the additive noise signal are the same. Thus, we can use the distribution of the additive noise signal in Equation~(\ref{eq:single_gamma-chap2}) to calculate the desired value of $\gammabar$.
Assuming the distribution of the noise signal is Gaussian with zero mean and variance $\sigma_n^2$,\symbl{$\sigma_n^2$}{Variance of the noise signal} an important case in practice, we can provide a solution for the threshold in this special case. If the noiseless error signal is uncorrelated with the additional noise signal, by Equation~(\ref{eq:error_signal-chap2}), we have $\mathbb{E}[e(k)]=\mathbb{E}[\etilde(k)]+\mathbb{E}[n(k)]=0$
and ${\rm Var}[e(k)]=\mathbb{E}[\etilde^2(k)]+\sigma_n^2$, where ${\rm Var}[\cdot]$ is the variance operator.\symbl{${\rm Var}$}{Variance operator} $\mathbb{E}[\etilde^2(k)]$ is the excess of the steady-state mean-square error (EMSE)\abbrev{EMSE}{Excess of the Steady-State Mean-Square Error} that in the steady-state environment is given by~\cite{Markus_mseSMAP_icassp2010,Markus_mseSMAP_cssp2013}
\begin{align}
\mathbb{E}[\etilde^2(k)]=\frac{(L+1)[\sigma_n^2+\gammabar^2-2\gammabar\sigma_n^2\rho_0(k)]p}{[(2-p)-2(1-p)\gammabar\rho_0(k)]}\Big(\frac{1-a}{1-a^{L+1}}\Big), \label{eq:E(etilde)-chap2}
\end{align}
where
\begin{align}
\rho_0(k)=&\sqrt{\frac{2}{\pi(2\sigma_n^2+\frac{1}{L+1}\gammabar^2)}}, \label{eq:rho_0-chap2}\\
a=&[1-p+2p\gammabar\rho_0(k)](1-p). \label{eq:a-chap2}
\end{align}
To calculate $\mathbb{E}[\etilde^2(k)]$ in Equation~(\ref{eq:E(etilde)-chap2}), we require the value of $\gammabar$, while estimating $\gammabar$ is our purpose. To address this
problem, the natural approach is to estimate it using numerical
integration or Monte-Carlo methods. However, aiming at gaining some
insight, at the first moment we can assume that in the steady-state
environment $\mathbb{E}[\etilde^2(k)]=0$, and the distribution of $e(k)$ is the same as $n(k)$, in order to calculate the estimation of $\gammabar$ using Equation~(\ref{eq:single_gamma-chap2}). Then, we substitute the obtained value of $\gammabar$ in Equation~(\ref{eq:E(etilde)-chap2}) to compute $\mathbb{E}[\etilde^2(k)]$. Finally, by obtaining $\mathbb{E}[\etilde^2(k)]$, we can have a better estimation for the distribution of $e(k)$.
Therefore, since the distribution of $e(k)$ is the same as the distribution of $n(k)$, for the first estimation of $\gammabar$ we have
\begin{align}
\mathbb{P}[|e(k)|>\gammabar]=\mathbb{P}[|n(k)|>\gammabar]
=\mathbb{P}[n(k)<-\gammabar]+\mathbb{P}[n(k)>\gammabar]=p.
\end{align}
Then, because of the symmetry of the Gaussian distribution, we have $\mathbb{P}[n(k)>\gammabar]=\frac{p}{2}$. Since $n(k)$ has a Gaussian distribution, we need to obtain $\gammabar$ from
\begin{align}
\int_{\gammabar}^{\infty}\frac{1}{\sqrt{2\pi\sigma_n^2}}\exp(-\frac{r^2}{2\sigma_n^2})dr=\frac{p}{2}.
\end{align}
Hence, given an update rate $0\leq p\leq 1$, we may use the standard normal distribution table and find the desired $\gammabar$. In the second step, to obtain a better estimate of $\gammabar$, we substitute $\gammabar$ in Equations~(\ref{eq:E(etilde)-chap2})-(\ref{eq:a-chap2}) to obtain $\mathbb{E}[\etilde^2(k)]$. We can now use the zero mean Gaussian distribution with variance $\sigma_e^2=\mathbb{E}[\etilde^2(k)]+\sigma_n^2$ as the distribution of the error signal.\symbl{$\sigma_e^2$}{Variance of the error signal} Applying this distribution to Equation~(\ref{eq:single_gamma-chap2}), we can obtain a better estimate of $\gammabar$ through the equation
\begin{align}
\int_{\gammabar}^{\infty}\frac{1}{\sqrt{2\pi\sigma_e^2}}\exp(-\frac{r^2}{2\sigma_e^2})dr=\frac{p}{2}. \label{eq:second_step}
\end{align}
By using the standard normal distribution table, we can then find the new estimate of $\gammabar$. It is worth mentioning that the chosen desired update rate determines a loose measure of the relative importance of the innovation brought about by the new incoming data set.
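The two-step procedure can be summarized in the following sketch (our illustration; SciPy's inverse survival function of the standard normal plays the role of the distribution table):
\begin{verbatim}
import numpy as np
from scipy.stats import norm

def estimate_gamma_bar(p, sigma_n2, L):
    # Step 1: assume e(k) ~ N(0, sigma_n^2)
    g = np.sqrt(sigma_n2) * norm.isf(p / 2.0)
    # Step 2: plug g into the steady-state EMSE expression
    rho0 = np.sqrt(2.0 / (np.pi * (2.0 * sigma_n2
                                   + g**2 / (L + 1))))
    a = (1.0 - p + 2.0 * p * g * rho0) * (1.0 - p)
    emse = ((L + 1) * (sigma_n2 + g**2
            - 2.0 * g * sigma_n2 * rho0) * p
            / ((2.0 - p) - 2.0 * (1.0 - p) * g * rho0)
            * (1.0 - a) / (1.0 - a**(L + 1)))
    sigma_e2 = emse + sigma_n2
    # refined threshold from e(k) ~ N(0, sigma_e^2)
    return np.sqrt(sigma_e2) * norm.isf(p / 2.0)
\end{verbatim}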
\section{Conclusions} \label{sec:conclusion-chap2}
In this chapter, we have reviewed some adaptive filtering algorithms which play an essential role in the following chapters. First, we have introduced the LMS\abbrev{LMS}{Least-Mean-Square}, the NLMS\abbrev{NLMS}{Normalized LMS}, the AP\abbrev{AP}{Affine Projection}, and the RLS\abbrev{RLS}{Recursive Least-Squares} algorithms. Then, we have described the SMF\abbrev{SMF}{Set-Membership Filtering} approach. By incorporating this strategy into the conventional algorithms, we implement an update when the magnitude of the output estimation error is greater than the predetermined positive constant. For this purpose, we have defined some of the involved sets such as the feasibility set, the constraint set, and the exact membership set. Then, we have described the SM-NLMS\abbrev{SM-NLMS}{Set-Membership Normalized LMS} and the SM-AP\abbrev{SM-AP}{Set-Membership Affine Projection} algorithms. Finally, for the SM-NLMS\abbrev{SM-NLMS}{Set-Membership Normalized LMS} and the SM-AP\abbrev{SM-AP}{Set-Membership Affine Projection} algorithms, we have discussed how to estimate the threshold parameter in big data applications to obtain the desired update rate.
\chapter{On the Robustness of the Set-Membership Algorithms}
Online learning algorithms are a substantial part of Adaptive Signal Processing; thus, the efficiency of these algorithms has to be assessed. The classical adaptive filtering algorithms are iterative estimation methods based on the {\it point estimation theory}~\cite{Lehmann_pointEstimation_book2003}.
This theory focuses on searching for a unique solution that minimizes (or maximizes) some objective function.
Two widely used classical algorithms are the normalized least-mean-square (NLMS)\abbrev{NLMS}{Normalized LMS} and the affine projection (AP)\abbrev{AP}{Affine Projection} algorithms.
These algorithms present a trade-off between convergence rate and steady-state misadjustment, and their properties
have been extensively studied~\cite{Diniz_adaptiveFiltering_book2013,Sayed_adaptiveFilters_book2008}.
Two important set-membership (SM)\abbrev{SM}{Set-Membership} algorithms are the set-membership NLMS (SM-NLMS) and the set-membership AP (SM-AP)\abbrev{SM-AP}{Set-Membership Affine Projection} algorithms, proposed
in~\cite{Gollamudi_smf_letter1998,Werner_sm_ap_letter2001}, respectively.
These algorithms keep the advantages of their classical counterparts, but they are more accurate, more robust against noise, and
also reduce the computational complexity due to the data selection strategy previously
explained~\cite{Markus_mseSMAP_cssp2013,Diniz_adaptiveFiltering_book2013,Arablouei_tracking_performance_SMNLMS_APSIPA2012,Carini_Filtered_x_SMAP_icassp2006}.
Various applications of SM\abbrev{SM}{Set-Membership} algorithms and their advantages over the classical algorithms have been discussed in the
literature~\cite{Gollamudi_smUpdatorShared_tsp1998,Nagaraj_beacon_tsp1999,Guo_fsmf_tsp2007,Diniz_sm_pap_jasmp2007,Markus_semiblindQAM_spawc2008,Bhotto_2012_TSP,Zhang_robustSMnlms_tcas2014,Mao_smfGPS_sensors2017}.
Despite the recognized advantages of the SM\abbrev{SM}{Set-Membership} algorithms, they are not broadly used, probably
due to the limited analysis of the properties of these algorithms.
The steady-state mean-squared error (MSE)\abbrev{MSE}{Mean-Squared Error} analysis of the SM-NLMS\abbrev{SM-NLMS}{Set-Membership Normalized LMS} algorithm has been discussed
in~\cite{Markus_mseSMNLMS_iswcs2010,Yamada_sm-nlmsAnalysis_tsp2009}.
Also, the steady-state MSE\abbrev{MSE}{Mean-Squared Error} performance of the SM-AP\abbrev{SM-AP}{Set-Membership Affine Projection} algorithm has been analyzed
in~\cite{Diniz_CSSP_2011,Markus_mseSMAP_cssp2013,Markus_mseSMAP_icassp2010}.
The content of this chapter was published in~\cite{Hamed_robustnessSMNLMS_sam2016,Hamed_robustnessSM_EURASIP2017}. In this chapter, the robustness of the SM-NLMS \abbrev{SM-NLMS}{Set-Membership Normalized LMS}and the SM-AP\abbrev{SM-AP}{Set-Membership Affine Projection} algorithms are discussed in the sense of
$l_2$ stability~\cite{Sayed_adaptiveFilters_book2008,Rupp_PAProbustness_tsp2011}. For the SM-NLMS\abbrev{SM-NLMS}{Set-Membership Normalized LMS} algorithm, we demonstrate that it is robust regardless of the choice of its parameters and that
the SM-NLMS\abbrev{SM-NLMS}{Set-Membership Normalized LMS} enhances the parameter estimation in most of the iterations in which an update occurs, two advantages
over the classical NLMS \abbrev{NLMS}{Normalized LMS}algorithm.
Moreover, we also prove that if the noise bound is known, then we can set the SM-NLMS\abbrev{SM-NLMS}{Set-Membership Normalized LMS} so that it
never degrades the estimate.
As for the SM-AP\abbrev{SM-AP}{Set-Membership Affine Projection} algorithm, we demonstrate that its robustness depends on a judicious choice of one of its parameters:
the constraint vector (CV)\abbrev{CV}{Constraint Vector}.
We prove the existence of CVs\abbrev{CV}{Constraint Vector} satisfying the robustness condition, but practical choices remain unknown.
We also demonstrate that both the SM-AP\abbrev{SM-AP}{Set-Membership Affine Projection} and the SM-NLMS\abbrev{SM-NLMS}{Set-Membership Normalized LMS} algorithms do not diverge, even when their parameters are selected naively, provided that the additional noise is bounded. Section~\ref{sec:robustness-criterion} describes the robustness criterion. Section \ref{sec:discussed-algrithms-robustness} presents the algorithms discussed in this chapter. The robustness of the SM-NLMS\abbrev{SM-NLMS}{Set-Membership Normalized LMS} algorithm is studied in Section~\ref{sec:robustness-sm-nlms}, where we also discuss the cases in which the
noise bound is assumed known and unknown. Section~\ref{sec:robustness-sm-ap} presents the local and the global robustness properties of the SM-AP\abbrev{SM-AP}{Set-Membership Affine Projection} algorithm. Section~\ref{sec:simulation-robustness} contains the simulations and numerical results. Finally, concluding remarks are drawn in Section~\ref{sec:conclusion-robustness}.
\section{Robustness Criterion}\label{sec:robustness-criterion}
At every iteration $k$, assume that the desired signal $d(k)$ is related to the unknown system $\wbf_o$ by
\begin{align}
d(k) \triangleq \underbrace{\wbf_o^T \xbf(k)}_{\triangleq y_o(k)} + n(k),
\end{align}
where $n(k)$ denotes the unknown noise and accounts for both measurement noise and modeling uncertainties or errors.
Also, we assume that the unknown noise sequence $\{ n(k) \}$ has finite energy~\cite{Sayed_adaptiveFilters_book2008}, i.e.,
\begin{align}
\sum_{k=0}^j |n(k)|^2<\infty,\qquad {\rm for~all~} j. \label{eq:noise-condtion}
\end{align}
Suppose that we have a sequence of desired signals $\{d(k)\}$ and we intend to estimate $y_o(k)=\wbf_o^T\xbf(k)$.
For this purpose, assume that $\hat{y}_{k|k}$ is an estimate of $y_o(k)$ and it is only dependent on $d(j)$ for $j=0,\cdots,k$.
For a given positive number $\eta$, we aim at calculating the estimates
$\hat{y}_{0|0},\hat{y}_{1|1},\cdots,\hat{y}_{M|M}$,
such that for any $n(k)$ satisfying~\eqref{eq:noise-condtion} and any $\wbf_o$, the following criterion is satisfied:
\begin{align}
\frac{\sum\limits_{k=0}^j \|\hat{y}_{k|k}-y_o(k)\|^2}{\wbftilde^T(0)\wbftilde(0)+\sum_{k=0}^j|n(k)|^2}<\eta^2, \qquad {\rm for~all~} j=0,\cdots,M \label{eq:criterion}
\end{align}
where $\wbftilde(0) \triangleq \wbf_o-\wbf(0)$ and $\wbf(0)$ is our initial guess about $\wbf_o$.
Note that the numerator is a measure of estimation-error energy up to iteration $j$ and the denominator includes
the energy of disturbance up to iteration $j$ and the energy of the error $\wbftilde(0)$ that is due to the initial guess.
So, the criterion given in~\eqref{eq:criterion} requires that we adjust estimates $\{\hat{y}_{k|k}\}$ such that
the ratio of the estimation-error energy (numerator) to the energy of the uncertainties (denominator) does not exceed $\eta^2$.
When this criterion is satisfied, we say that bounded disturbance energies induce bounded estimation-error energies and,
therefore, the obtained estimates are robust. A smaller value of $\eta$ results in a more robust solution, but the value of $\eta$ cannot be decreased freely.
The interested reader can refer to~\cite{Sayed_adaptiveFilters_book2008}, pages 719 and 720, for more details about this robustness criterion.
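For concreteness, the ratio in~\eqref{eq:criterion} is easy to evaluate from logged quantities. The following Python sketch is purely illustrative (the function name and interface are our own choices); it assumes that arrays with the estimates $\hat{y}_{k|k}$, the true outputs $y_o(k)$, the noise samples $n(k)$, and the initial mismatch vector $\wbftilde(0)$ are available, as in a simulation:
\begin{verbatim}
import numpy as np

def robustness_ratio(y_hat, y_o, n, w_tilde_0):
    """Ratio in the criterion for j = 0, ..., M; y_hat, y_o, and n are
    1-D arrays and w_tilde_0 is the initial coefficient mismatch."""
    num = np.cumsum((y_hat - y_o) ** 2)              # estimation-error energy
    den = w_tilde_0 @ w_tilde_0 + np.cumsum(n ** 2)  # energy of uncertainties
    return num / den  # robustness requires this to stay below eta**2 for all j
\end{verbatim}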
\section{The Set-Membership Algorithms} \label{sec:discussed-algrithms-robustness}
In this section, we review the SM-NLMS\abbrev{SM-NLMS}{Set-Membership Normalized LMS} and the SM-AP\abbrev{SM-AP}{Set-Membership Affine Projection} algorithms, whose robustness is addressed in the following sections.
\subsection{The SM-NLMS Algorithm} \label{subsec:sm_nlms}
The SM-NLMS\abbrev{SM-NLMS}{Set-Membership Normalized LMS} algorithm is characterized by the updating rule~\cite{Diniz_adaptiveFiltering_book2013}
\begin{align}
\wbf(k+1) = \wbf(k) + \frac{\mu(k)}{ \| \xbf(k) \|^2 + \delta } e(k) \xbf(k) , \label{eq:sm-nlms-robustness}
\end{align}
where
\begin{align} \label{eq:def_mu}
\mu(k) \triangleq \left\{ \begin{matrix} 1 - \frac{\gammabar}{|e(k)|} & \text{if } |e(k)| > \gammabar , \\
0 & \text{otherwise} , \end{matrix} \right.
\end{align}
and $\gammabar \in \mathbb{R}_+$ is the acceptable upper bound for the magnitude of the error signal,
usually chosen as a multiple of the noise standard deviation $\sigma_n$~\cite{Markus_mseSMAP_cssp2013,Diniz_adaptiveFiltering_book2013}. The parameter $\delta \in \mathbb{R}_+$ is a regularization factor, usually chosen as
a small constant, used to avoid singularity (divisions by $0$).
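To make the recursion concrete, a minimal Python sketch of one SM-NLMS\abbrev{SM-NLMS}{Set-Membership Normalized LMS} iteration implementing~\eqref{eq:sm-nlms-robustness} and~\eqref{eq:def_mu} is given below; the function interface is an illustrative choice, not part of the algorithm's specification:
\begin{verbatim}
import numpy as np

def sm_nlms_update(w, x, d, gamma_bar, delta=1e-12):
    """One SM-NLMS iteration: returns the updated coefficient vector,
    the a priori error e(k), and a flag indicating whether w changed."""
    e = d - w @ x                      # a priori output estimation error
    if abs(e) > gamma_bar:             # data-selective (set-membership) test
        mu = 1.0 - gamma_bar / abs(e)  # step size mu(k)
        w = w + (mu / (x @ x + delta)) * e * x
        return w, e, True
    return w, e, False                 # no update: w(k+1) = w(k)
\end{verbatim}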
\subsection{The SM-AP Algorithm} \label{subsec:sm_ap}
The SM-AP\abbrev{SM-AP}{Set-Membership Affine Projection} algorithm is described by the recursion~\cite{Werner_sm_ap_letter2001}
\begin{align}
\wbf(k+1)=\left\{\begin{array}{ll}\wbf(k)+\Xbf(k)\Abf(k)(\ebf(k)-\gammabf(k))&\text{if}~|e(k)|>\gammabar ,
\\ \wbf(k)&\text{ otherwise,}\end{array}\right. \ \label{eq:sm-ap}
\end{align}
where we assume that $\Abf(k)\triangleq(\Xbf^T(k)\Xbf(k))^{-1} \in\mathbb{R}^{(L+1)\times (L+1)}$ exists, i.e., $\Xbf^T(k)\Xbf(k)$ is a full-rank matrix.
Otherwise, we could add a regularization parameter as explained in~\cite{Diniz_adaptiveFiltering_book2013}.
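Analogously, a minimal Python sketch of one SM-AP\abbrev{SM-AP}{Set-Membership Affine Projection} iteration implementing~\eqref{eq:sm-ap} follows; it assumes real-valued data and, as stated above, that $\Xbf^T(k)\Xbf(k)$ is invertible:
\begin{verbatim}
import numpy as np

def sm_ap_update(w, X, d_vec, gamma_vec, gamma_bar):
    """One SM-AP iteration. X is (N+1) x (L+1); d_vec and gamma_vec hold
    the L+1 most recent desired samples and the constraint vector."""
    e_vec = d_vec - X.T @ w            # error vector; e(k) is its first entry
    if abs(e_vec[0]) > gamma_bar:      # data-selective update test
        A = np.linalg.inv(X.T @ X)     # A(k), assumed to exist
        w = w + X @ A @ (e_vec - gamma_vec)
        return w, e_vec, True
    return w, e_vec, False             # no update: w(k+1) = w(k)
\end{verbatim}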
\section{Robustness of the SM-NLMS Algorithm}\label{sec:robustness-sm-nlms}
In this section, we discuss the robustness of the set-membership NLMS (SM-NLMS) algorithm.
In Subsection~\ref{sub:robustness-sm-nlms}, we present some
robustness properties.
We address the robustness of the SM-NLMS\abbrev{SM-NLMS}{Set-Membership Normalized LMS} algorithm for the cases of unknown noise bound and known noise bound in
Subsections~\ref{sub:sm-nlms-unbounded-noise} and~\ref{sub:sm-nlms-bounded-noise}, respectively.
Then, in Subsection~\ref{sub:sm-nlms-time-varying-gammabar}, we introduce a time-varying error bound aiming at simultaneously achieving
fast convergence, low computational burden, and efficient use of the input data.
\subsection{Robustness of {the} SM-NLMS algorithm}\label{sub:robustness-sm-nlms}
Let us consider a system identification scenario in which the unknown system is denoted by
$\wbf_o \in \mathbb{R}^{N+1}$ and the desired (reference) signal $d(k)$ is defined as
\begin{align}
d(k) \triangleq \wbf_o^T \xbf(k) + n(k) , \label{eq:desiredSignalModel}
\end{align}
where $n(k) \in \mathbb{R}$ represents the additive measurement noise.
One of the main difficulties of analyzing the SM-NLMS\abbrev{SM-NLMS}{Set-Membership Normalized LMS} algorithm is its conditional statement in~\eqref{eq:def_mu}.
We can overcome such difficulty by defining \symbl{$\mubar(k)$}{Auxiliary step size $\mubar(k)\triangleq1-\frac{\gammabar}{|e(k)|}$} \symbl{$f(e(k),\gammabar)$}{The indicator function: returns 1 if $|e(k)|>\gammabar$, otherwise returns 0} \symbl{$\alpha(k)$}{Auxiliary value $\alpha(k)\triangleq\|\xbf(k)\|^2+\delta$}
\begin{align}
\mubar(k) \triangleq 1 - \frac{\gammabar}{|e(k)|}, \label{eq:def_mubar}
\end{align}
and the indicator function $f:\mathbb{R}\times\mathbb{R}_+ \rightarrow \{ 0,1 \}$ as
\begin{align}
f(e(k),\gammabar) \triangleq \left\{ \begin{matrix}
1 & \text{if } |e(k)| > \gammabar , \\
0 & \text{otherwise} .
\end{matrix} \right. \label{eq:def_indicatorFunc}
\end{align}
In this way, the SM-NLMS\abbrev{SM-NLMS}{Set-Membership Normalized LMS} updating rule can be rewritten as
\begin{align}
\wbf(k+1) = \wbf(k) + \frac{\mubar(k)}{\alpha(k)} e(k) \xbf(k) f(e(k),\gammabar) , \label{eq:sm-nlms_indicator}
\end{align}
where
\begin{align}
\alpha(k) \triangleq \| \xbf(k) \|^2 + \delta . \label{eq:def_alpha}
\end{align}
Since we are interested in robustness properties, it is useful to define $\wbftilde(k) \in \mathbb{R}^{N+1}$ as \symbl{$\wbftilde(k)$}{Auxiliary vector $\wbftilde(k)\triangleq\wbf_o-\wbf(k)$}
\begin{align}
\wbftilde(k) \triangleq \wbf_o - \wbf(k) , \label{eq:def_wbftilde}
\end{align}
i.e., $\wbftilde(k)$ is a vector representing the discrepancy between the quantity we aim to estimate $\wbf_o$ and
our current estimate $\wbf(k)$. Thus, the error signal can be rewritten as
\begin{align}
e(k) = d(k) - \wbf^T(k) \xbf(k) &= \wbf_o^T \xbf(k) + n(k) - \wbf^T(k) \xbf(k) \nonumber\\
&= \underbrace{\wbftilde^T(k) \xbf(k)}_{\triangleq \etilde(k)} + n(k) , \label{eq:def_noiselessError-sm-nlms}
\end{align}
where $\etilde(k)$ denotes the noiseless error, i.e., the error due to a mismatch between $\wbf(k)$ and $\wbf_o$.
By using~\eqref{eq:def_wbftilde} in~\eqref{eq:sm-nlms_indicator} we obtain
\begin{align}
\wbftilde(k+1) = \wbftilde(k) - \frac{\mubar(k)}{\alpha(k)} e(k) \xbf(k) f(e(k),\gammabar) ,
\end{align}
which can be further expanded by decomposing $e(k)$ as in Equation~\eqref{eq:def_noiselessError-sm-nlms} yielding
\begin{align}
\wbftilde(k+1) = \wbftilde(k) - \frac{\mubar(k)}{\alpha(k)} \etilde(k) \xbf(k) f(e(k),\gammabar)
- \frac{\mubar(k)}{\alpha(k)} n(k) \xbf(k) f(e(k),\gammabar) . \label{eq:robust_aux01}
\end{align}
By computing the energy of~\eqref{eq:robust_aux01}, the robustness property given in Theorem~\ref{thm:local_robustness-sm-nlms} can be derived after some mathematical manipulations.
\begin{thm}[Local Robustness of SM-NLMS]\label{thm:local_robustness-sm-nlms}
For the SM-NLMS\abbrev{SM-NLMS}{Set-Membership Normalized LMS} algorithm, it always holds that
\begin{align}
\| \wbftilde (k+1) \|^2 = \| \wbftilde (k) \|^2 , \text{ if } f(e(k),\gammabar) = 0 \label{eq:local_robustness_f0}
\end{align}
or
\begin{align}
\| \wbftilde(k+1) \|^2 + \frac{\mubar(k)}{\alpha(k)} \etilde^2(k)
< \| \wbftilde(k) \|^2 + \frac{\mubar(k)}{\alpha(k)} n^2(k) \ , \label{eq:local_robustness_f1}
\end{align}
if $f(e(k),\gammabar) = 1$.
\end{thm}
\begin{proof}
We start by repeating Equation~\eqref{eq:robust_aux01}, but to simplify the notation we will omit both the index $k$ and the arguments of function $f$ that appear on the right-hand side of that equation to obtain
\begin{align}
\wbftilde(k+1) = \wbftilde - \frac{\mubar}{\alpha} \etilde \xbf f
- \frac{\mubar}{\alpha} n \xbf f .
\end{align}
By computing the Euclidean norm of the above equation we get
\begin{align}
\| \wbftilde(k+1) \|^2
=& \wbftilde^T \wbftilde - \frac{\mubar}{\alpha} \etilde \wbftilde^T \xbf f - \frac{\mubar}{\alpha} n \wbftilde^T \xbf f -\frac{\mubar}{\alpha} \etilde \xbf^T \wbftilde f + \frac{\mubar^2}{\alpha^2} \etilde^2 \xbf^T \xbf f^2 \nonumber \\
&+ \frac{\mubar^2}{\alpha^2} \etilde n \xbf^T \xbf f^2 -\frac{\mubar}{\alpha} n \xbf^T \wbftilde f + \frac{\mubar^2}{\alpha^2} n \etilde \xbf^T \xbf f^2 + \frac{\mubar^2}{\alpha^2} n^2 \xbf^T \xbf f^2 \nonumber \\
=& \| \wbftilde \|^2 - \frac{\mubar}{\alpha} \etilde^2 f - \frac{\mubar}{\alpha} n \etilde f -\frac{\mubar}{\alpha} \etilde^2 f + \frac{\mubar^2}{\alpha^2} \etilde^2 \| \xbf \|^2 f^2 + \frac{\mubar^2}{\alpha^2} \etilde n \| \xbf \|^2 f^2 \nonumber \\
&-\frac{\mubar}{\alpha} n \etilde f + \frac{\mubar^2}{\alpha^2} n \etilde \| \xbf \|^2 f^2 + \frac{\mubar^2}{\alpha^2} n^2 \| \xbf \|^2 f^2 \nonumber \\
=& \| \wbftilde \|^2 - 2 \frac{\mubar}{\alpha} \etilde^2 f - 2 \frac{\mubar}{\alpha} n \etilde f +(\etilde + n)^2 \frac{\mubar^2}{\alpha^2} \| \xbf \|^2 f^2 \nonumber \\
=& \| \wbftilde \|^2 + (\etilde + n)^2 \frac{\mubar^2}{\alpha^2} \| \xbf \|^2 f^2 - 2 \frac{\mubar}{\alpha} \etilde^2 f - 2 \frac{\mubar}{\alpha} n \etilde f - \frac{\mubar}{\alpha} n^2 f + \frac{\mubar}{\alpha} n^2 f \nonumber \\
=& \| \wbftilde \|^2 + (\etilde + n)^2 \frac{\mubar^2}{\alpha^2} \| \xbf \|^2 f^2 + \frac{\mubar}{\alpha} n^2 f - (\etilde + n)^2 \frac{\mubar}{\alpha} f - \frac{\mubar}{\alpha} \etilde^2 f , \label{eq:robustness_derivation_1}
\end{align}
where the second equality is due to the relation $\etilde = \wbftilde^T \xbf = \xbf^T \wbftilde$, as given
in Equation~\eqref{eq:def_noiselessError-sm-nlms}. Rearranging the terms in~\eqref{eq:robustness_derivation_1} yields
\begin{align}
\| \wbftilde(k+1) \|^2 + \frac{\mubar f}{\alpha} \etilde^2
= \| \wbftilde \|^2 + \frac{\mubar f}{\alpha} n^2 + c_1 c_2 , \label{eq:energy_relation}
\end{align}
where
\begin{align}
c_1 \triangleq \frac{\mubar f}{\alpha} (\etilde + n)^2 ,\qquad
c_2 \triangleq \frac{\mubar f}{\alpha} \| \xbf \|^2 - 1 .
\end{align}
From~\eqref{eq:energy_relation}, we observe that when $f = 0$ we have
\begin{align}
\| \wbftilde(k+1) \|^2 = \| \wbftilde(k) \|^2
\end{align}
as expected, since $f=0$ means that no update was performed.
However, when $f=1$ we have $0 < \mubar < 1$ and $(\etilde + n)^2 = e^2 > \gammabar^2 > 0$.
In addition, observe that $0 \leq \| \xbf \|^2/\alpha < 1$ due to Equation~\eqref{eq:def_alpha} and the fact that $\delta>0$.
Combining these inequalities leads to $c_2 < 0$ and $c_1 > 0$.
Thus, when $f=1$ the product $c_1 c_2 < 0$, which leads to the inequality
\begin{align}
\| \wbftilde(k+1) \|^2 + \frac{\mubar}{\alpha} \etilde^2
< \| \wbftilde \|^2 + \frac{\mubar}{\alpha} n^2 .
\end{align}
Returning with the omitted index $k$, for $f(e(k),\gammabar)=1$ we have
\begin{align}
\| \wbftilde(k+1) \|^2 + \frac{\mubar(k)}{\alpha(k)} \etilde^2(k)
< \| \wbftilde(k) \|^2 + \frac{\mubar(k)}{\alpha(k)} n^2(k) .
\end{align}
\end{proof}
Theorem~\ref{thm:local_robustness-sm-nlms} presents local bounds for the energy of the coefficient deviation when running from
an iteration to the next one.
Indeed, \eqref{eq:local_robustness_f0} states that the coefficient deviation does not change when no coefficient update is actually implemented, whereas~\eqref{eq:local_robustness_f1} {provides} a bound for $\| \wbftilde(k+1) \|^2$
based on $\| \wbftilde(k) \|^2$, $\etilde^2(k)$, and $n^2(k)$, when an update occurs.
In addition, the global robustness result in Corollary~\ref{thm:global_robustness-sm-nlms} can readily be derived
from Theorem~\ref{thm:local_robustness-sm-nlms}.\symbl{${\cal K}_{\rm up}$}{Set containing the iteration indexes in which $\wbf(k)$ is updated}
\begin{cor}[Global Robustness of SM-NLMS]\label{thm:global_robustness-sm-nlms}
Consider the SM-NLMS\abbrev{SM-NLMS}{Set-Membership Normalized LMS} algorithm running from iteration $0$ (initialization) to a given iteration $K$.
The relation
\begin{align}
\dfrac{\| \wbftilde(K) \|^2 + \sum\limits_{k \in {\cal K}_{\rm up}}\frac{\mubar(k)}{\alpha(k)}\etilde^2(k)}{\| \wbftilde(0) \|^2 +
\sum\limits_{k \in {\cal K}_{\rm up}}\frac{\mubar(k)}{\alpha(k)}n^2(k)} < 1 \label{eq:global_robustness-sm-nlms}
\end{align}
holds, where ${\cal K}_{\rm up} \neq \emptyset$ is the set containing the iteration indexes $k$ in which $\wbf(k)$ is indeed updated.
If ${\cal K}_{\rm up} = \emptyset$,\symbl{$\emptyset$}{Empty set} then $\| \wbftilde(K) \|^2 = \| \wbftilde(0) \|^2$ due to~\eqref{eq:local_robustness_f0},
but this case is not of practical interest since ${\cal K}_{\rm up} = \emptyset$ means that no update is performed at all.
\end{cor}
\begin{proof}
Define the set of all iterations under analysis ${\cal K} \triangleq \{ 0, 1, 2, \ldots,$ $K-1 \}$.
Denote as ${\cal K}_{\rm up}$ the subset of ${\cal K}$ comprised only of the iterations in which an update
occurs, whereas ${\cal K}_{\rm up}^c \triangleq {\cal K} \setminus {\cal K}_{\rm up}$ contains the
iteration indexes in which the filter coefficients are not updated.
From Theorem~\ref{thm:local_robustness-sm-nlms}, \eqref{eq:local_robustness_f1} holds when $\wbf(k)$ is updated.
By summing such inequality for all $k \in {\cal K}_{\rm up}$ we obtain
\begin{align}
\sum_{k \in {\cal K}_{\rm up}} \Big(\| \wbftilde(k+1) \|^2 + \frac{\mubar(k)}{\alpha(k)} \etilde^2(k) \Big)
< \sum_{k \in {\cal K}_{\rm up}} \Big(\| \wbftilde(k) \|^2 + \frac{\mubar(k)}{\alpha(k)} n^2(k)\Big). \label{eq:robustness_accumulation_f1}
\end{align}
Similarly, we can use~\eqref{eq:local_robustness_f0} to write, for all $k \in {\cal K}_{\rm up}^c$, the equality
\begin{align}
\sum_{k \in {\cal K}_{\rm up}^c} \| \wbftilde(k+1) \|^2 = \sum_{k \in {\cal K}_{\rm up}^c} \| \wbftilde(k) \|^2. \label{eq:robustness_accumulation_f0}
\end{align}
Combining~\eqref{eq:robustness_accumulation_f1} and~\eqref{eq:robustness_accumulation_f0} leads to
\begin{align}
\sum_{k \in {\cal K}} \| \wbftilde(k+1) \|^2
+ \sum_{k \in {\cal K}_{\rm up}} \frac{\mubar(k)}{\alpha(k)} \etilde^2(k)
< \sum_{k \in {\cal K}} \| \wbftilde(k) \|^2
+ \sum_{k \in {\cal K}_{\rm up}} \frac{\mubar(k)}{\alpha(k)} n^2(k). \label{eq:robustness_accumulation-sm-nlms}
\end{align}
But since several of the terms $\| \wbftilde(k) \|^2$ get canceled from both sides of the inequality \eqref{eq:robustness_accumulation-sm-nlms}, we find that it simplifies to
\begin{align}
\| \wbftilde(K) \|^2 + \sum_{k \in {\cal K}_{\rm up}}\frac{\mubar(k)}{\alpha(k)}\etilde^2(k)
< \| \wbftilde(0) \|^2 + \sum_{k \in {\cal K}_{\rm up}}\frac{\mubar(k)}{\alpha(k)}n^2(k)
\end{align}
or, assuming a nonzero denominator,
\begin{align}
\dfrac{\| \wbftilde(K) \|^2 + \sum\limits_{k \in {\cal K}_{\rm up}}\frac{\mubar(k)}{\alpha(k)}\etilde^2(k)}{\| \wbftilde(0) \|^2 +
\sum\limits_{k \in {\cal K}_{\rm up}}\frac{\mubar(k)}{\alpha(k)}n^2(k)} < 1 .
\end{align}
This relation holds for all $K$. The only assumption used in the derivation is that ${\cal K}_{\rm up}$ is a nonempty set. Otherwise, we would have $\| \wbftilde(K) \|^2 = \| \wbftilde(0) \|^2$, which would happen only if $\wbf(k)$ is never updated, which has no practical interest.
\end{proof}
Corollary~\ref{thm:global_robustness-sm-nlms} shows that, for the SM-NLMS\abbrev{SM-NLMS}{Set-Membership Normalized LMS} algorithm, $l_2$-stability from its uncertainties
$\{ \wbftilde(0), \{ n(k) \}_{0\leq k\leq K} \}$ to its errors $\{ \wbftilde(K), \{ \etilde(k) \}_{0\leq k\leq K} \}$
is always guaranteed.
Unlike the NLMS\abbrev{NLMS}{Normalized LMS} algorithm, in which the step-size parameter must be chosen appropriately to guarantee such $l_2$-stability,
the SM-NLMS\abbrev{SM-NLMS}{Set-Membership Normalized LMS} algorithm is $l_2$-stable for any choice of its parameters (i.e., no restriction is imposed on $\gammabar$).
\subsection{Convergence of $\{\|\wbftilde(k)\|^2\}$ with unknown noise bound}\label{sub:sm-nlms-unbounded-noise}
The robustness results mentioned in Subsection~\ref{sub:robustness-sm-nlms} provide bounds for the evolution of
$\{\|\wbftilde(k)\|^2\}$ in terms of other variables.
However, we have experimentally observed that the SM-NLMS\abbrev{SM-NLMS}{Set-Membership Normalized LMS} algorithm presents a well-behaved convergence of the
sequence $\{\|\wbftilde(k)\|^2\}$, i.e.,
for most iterations we have $\|\wbftilde(k+1)\|^2 \leq \|\wbftilde(k)\|^2$.
Therefore, in this subsection, we investigate under which conditions the sequence $\{\|\wbftilde(k)\|^2\}$
is (and is not) decreasing.
\begin{cor}\label{cor:sm_nlms_decreasing}
When an update occurs (i.e., $f(e(k),\gammabar) = 1$), $\etilde^2(k) \geq n^2(k)$ implies $\| \wbftilde(k+1) \|^2 < \| \wbftilde(k) \|^2$.
\end{cor}
\begin{proof}
By rearranging the terms in~\eqref{eq:local_robustness_f1} we obtain
\begin{align}
\| \wbftilde(k+1) \|^2 + \frac{\mubar(k)}{\alpha(k)} \left( \etilde^2(k) - n^2(k) \right)
< \| \wbftilde(k) \|^2,
\end{align}
which is valid for $f(e(k),\gammabar) = 1$.
Observe that $\frac{\mubar(k)}{\alpha(k)} > 0$ since $\alpha(k) \in \mathbb{R}_+$ and $\mubar(k) \in (0,1)$ when $f(e(k),\gammabar) = 1$.
Thus $\frac{\mubar(k)}{\alpha(k)} \left( \etilde^2(k) - n^2(k) \right)\geq0$ when $f(e(k),\gammabar) = 1$ and $\etilde^2(k) \geq n^2(k)$.
Therefore, when an update occurs, $\etilde^2(k) \geq n^2(k) \Rightarrow \| \wbftilde(k+1) \|^2 < \| \wbftilde(k) \|^2$.
\end{proof}
In words, Corollary~\ref{cor:sm_nlms_decreasing} states that the SM-NLMS\abbrev{SM-NLMS}{Set-Membership Normalized LMS} algorithm improves its estimate $\wbf(k+1)$
every time an update is required and the energy of the error signal $e^2(k)$ is dominated by $\etilde^2(k)$, the component of the error
which is due to the mismatch between $\wbf(k)$ and $\wbf_o$.
Corollary~\ref{cor:sm_nlms_decreasing} also explains why the SM-NLMS\abbrev{SM-NLMS}{Set-Membership Normalized LMS} algorithm usually presents a {\it monotonic decreasing sequence}
$\{\|\wbftilde(k)\|^2\}$ during its transient period.
Indeed, in the early iterations, the absolute value of the error is generally large, thus $|e(k)|>\gammabar$ and $\etilde^2(k)>n^2(k)$,
implying that $\| \wbftilde(k+1) \|^2 < \| \wbftilde(k) \|^2$.
In addition, there are a few iterations during the transient period in which the input data do not bring enough innovation so that
no update is performed, which means that $\| \wbftilde(k+1) \|^2 = \| \wbftilde(k) \|^2$ for these few iterations.
As a conclusion, it is very likely to have $\| \wbftilde(k+1) \|^2 \leq \| \wbftilde(k) \|^2$ for all iterations $k$ belonging
to the transient period.
After the transient period, however, the SM-NLMS\abbrev{SM-NLMS}{Set-Membership Normalized LMS} algorithm may yield $\| \wbftilde(k+1) \|^2 > \| \wbftilde(k) \|^2$ in a few iterations.
Although it is hard to compute how often such an event occurs, we can provide an upper bound for {the probability of this event} as follows:
\begin{align}
\mathbb{P}[\|\wbftilde(k+1)\|^2 > \|\wbftilde(k)\|^2] &\leq \mathbb{P}[\{|e(k)|>\gammabar\}\cap\{\etilde^2(k)<n^2(k)\}] \nonumber\\
&<\mathbb{P}[|e(k)|>\gammabar]={\rm erfc}\left(\sqrt{\frac{\tau}{2}}\right) , \label{eq:sm_nlms_probability}
\end{align}
where $\mathbb{P}[\cdot]$ and ${\rm erfc}(\cdot)$ are the probability operator and the complementary error
function~\cite{Proakis_DigitalCommunications_book1995}, respectively. \symbl{${\rm erfc}(\cdot)$}{The complementary error function}
The first inequality follows from the fact that we do not know exactly what will happen with $\| \wbftilde(k+1) \|^2$ when an update
occurs and $\etilde^2(k)<n^2(k)$ at the same time\footnote{This is because Corollary~\ref{cor:sm_nlms_decreasing} provides a
sufficient, but not necessary, condition for $\|\wbftilde(k+1)\|^2 < \|\wbftilde(k)\|^2$.}
and, therefore, it corresponds to a {\it pessimistic bound}.
The second inequality is trivial and the subsequent equality follows from~\cite{Galdino_SMNLMS_gammabar_ISCAS2006} by parameterizing $\gammabar$
as $\gammabar=\sqrt{\tau\sigma_n^2}$, where $\tau \in \mathbb{R}_+$ (typically $\tau = 5$) and by modeling the error $e(k)$
as a zero-mean Gaussian random variable with variance $\sigma_n^2$.
From~\eqref{eq:sm_nlms_probability}, one can observe that the probability of obtaining
$\|\wbftilde(k+1)\|^2 > \|\wbftilde(k)\|^2$ is small.
For instance, for $2\leq\tau\leq9$ we have $0.0027\leq{\rm erfc}\Big(\sqrt{\frac{\tau}{2}}\Big)\leq0.1573$, and
for the usual choice $\tau=5$ we have ${\rm erfc}\Big(\sqrt{\frac{\tau}{2}}\Big)=0.0253$.
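These figures can be checked directly with a few lines of Python (using SciPy's complementary error function):
\begin{verbatim}
import numpy as np
from scipy.special import erfc

for tau in (2.0, 5.0, 9.0):
    print(tau, erfc(np.sqrt(tau / 2.0)))
# prints approximately 0.1573, 0.0253, and 0.0027, respectively
\end{verbatim}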
The results in this subsection show that $\| \wbftilde(k+1) \|^2 \leq \| \wbftilde(k) \|^2$ for most iterations of
the SM-NLMS\abbrev{SM-NLMS}{Set-Membership Normalized LMS} algorithm, meaning that the SM-NLMS\abbrev{SM-NLMS}{Set-Membership Normalized LMS} algorithm uses the input data efficiently.
Indeed, having $\| \wbftilde(k+1) \|^2 > \| \wbftilde(k) \|^2$ means that the input data was used to obtain an
estimate $\wbf(k+1)$ which is further away from the quantity we aim to estimate $\wbf_o$, which is a waste of computational resources
(it would be better not to update at all).
Here, we showed that this rarely happens for the SM-NLMS\abbrev{SM-NLMS}{Set-Membership Normalized LMS} algorithm, {a property not shared by} the classical algorithms,
as will be verified experimentally in Section~\ref{sec:simulation-robustness}.
\subsection{Convergence of $\{\|\wbftilde(k)\|^2\}$ with known noise bound}\label{sub:sm-nlms-bounded-noise}
In this subsection, we demonstrate that if the noise bound is known, then it is possible to set the threshold parameter $\gammabar$
of the SM-NLMS\abbrev{SM-NLMS}{Set-Membership Normalized LMS} algorithm so that $\{\|\wbftilde(k)\|^2\}$ is a monotonic decreasing sequence.
Theorem~\ref{thm:strong-local-robustness-sm-nlms} and Corollary~\ref{cor:strong-global-robustness-sm-nlms} address
this issue.
\begin{thm}[Strong Local Robustness of SM-NLMS]\label{thm:strong-local-robustness-sm-nlms}
Assume the noise is bounded by a known constant $B \in \mathbb{R}_+$, i.e., $|n(k)|\leq B, \forall k$.
If one chooses $\gammabar \geq 2B$, then $\{\|\wbftilde(k)\|^2\}$ is a monotonic decreasing sequence, i.e.,
$\|\wbftilde(k+1)\|^2\leq\|\wbftilde(k)\|^2,\forall k$.
\end{thm}
\begin{proof}
If $f(e(k),\gammabar)=1$, then $|e(k)| = |\etilde(k) + n(k)|>\gammabar$, which means that:
(i)~$\etilde(k) > \gammabar - n(k)$ for the positive values of $\etilde(k)$ or
(ii)~$\etilde(k) < -\gammabar - n(k)$ for the negative values of $\etilde(k)$.
Recalling that $n(k) \in [-B,B]$ and $\gammabar \in [2B,\infty)$, now we can find the bound for $\etilde(k)$ by finding the
minimum of (i) and the maximum of (ii) as follows: \\
(i) $\etilde(k) > \gammabar - n(k) \Rightarrow \etilde_{\rm min} > \gammabar - B \geq B$; \\
(ii) $\etilde(k) <-\gammabar - n(k) \Rightarrow \etilde_{\rm max} <-\gammabar + B \leq -B$. \\
Results (i) and (ii) above state that if $\gammabar \geq 2B$, then $| \etilde(k) | > B$,
which means that $| \etilde(k) | > | n(k) |, \forall k$.
Consequently, by using Corollary~\ref{cor:sm_nlms_decreasing} it follows that $\|\wbftilde(k+1)\|^2 < \|\wbftilde(k)\|^2,\forall k$ in
which $f(e(k),\gammabar)=1$.
In addition, if $f(e(k),\gammabar)=0$ we have $\|\wbftilde(k+1)\|^2 = \|\wbftilde(k)\|^2$.
Therefore, we can conclude that $\gammabar \geq 2B \Rightarrow \|\wbftilde(k+1)\|^2\leq\|\wbftilde(k)\|^2,\forall k$.
\end{proof}
\begin{cor}[Strong Global Robustness of SM-NLMS]\label{cor:strong-global-robustness-sm-nlms}
Consider the SM-NLMS\abbrev{SM-NLMS}{Set-Membership Normalized LMS} algorithm running from iteration $0$ (initialization) to a given iteration $K$.
If $\gammabar \geq 2B$, then $\|\wbftilde(K)\|^2 \leq \|\wbftilde(0)\|^2$, in which the equality
holds only when no update is performed along all the iterations.
\end{cor}
The proof of Corollary~\ref{cor:strong-global-robustness-sm-nlms} is omitted because it is a straightforward consequence
of Theorem~\ref{thm:strong-local-robustness-sm-nlms}.
\subsection{Time-varying $\gammabar(k)$} \label{sub:sm-nlms-time-varying-gammabar}
After reading Subsections~\ref{sub:sm-nlms-unbounded-noise} and~\ref{sub:sm-nlms-bounded-noise}, one might be
tempted to set $\gammabar$ as a high value since it reduces the number of updates, thus saving computational resources
and also leading to a well-behaved sequence $\{ \|\wbftilde(k)\|^2 \}$ that has a high probability of being monotonically decreasing.
However, a high value of $\gammabar$ leads to slow convergence, because the updates during the learning stage (transient period) are
less frequent and the step-size $\mu(k)$ is reduced as well.
Hence, $\gammabar$ represents a compromise between convergence speed and efficiency and, therefore,
should be chosen carefully according to the specific characteristics of the application.
An alternative approach is to allow a time-varying error bound $\gammabar(k)$ generally defined as
$\gammabar(k) \triangleq \sqrt{\tau(k) \sigma_n^2}$, where \symbl{$\gammabar(k)$}{Time-varying error bound}
\begin{align}\label{eq:gammabar-timevar}
\tau(k) \triangleq\begin{cases}
\text{Low value (e.g., $\tau(k) \in [1,5]$)}, \qquad\text{if $k \in$ transient period, } \\
\text{High value (e.g., $\tau(k) \in [5,9]$)}, \qquad\text{if $k \in $ steady-state.}
\end{cases}
\end{align}
By using such a $\gammabar(k)$, one obtains the best features of the high and low values of $\gammabar$ discussed in the
first paragraph of this subsection.
In addition, if the noise bound $B$ is known, then one should set $\gammabar(k)\geq 2B$ for all $k$ during the steady-state,
as explained in Subsection~\ref{sub:sm-nlms-bounded-noise}.
It is worth mentioning that~\eqref{eq:gammabar-timevar} provides a general expression for $\tau(k)$ that allows it to vary smoothly
along the iterations even within a single period (i.e., transient period or steady-state).
In order to apply the $\gammabar(k)$ defined above, the algorithm should be able to monitor the environment to determine
when there is a transition between transient and steady-state periods.
An intuitive way to do this is to monitor the values of $|e(k)|$.
In this case, one should form a window with the $E \in \mathbb{N}$ most recent values of the error, compute the average
of these $|e(k)|$ within the window, and compare it against a threshold parameter to make the decision.
An even more intuitive and efficient way to monitor the iterations relies on how often the algorithm is updating.
In this case, one should form a window of length $E$ containing Boolean variables (flags, i.e., 1-bit information) indicating the iterations
in which an update was performed considering the $E$ most recent iterations.
If many updates were performed within the window, then the algorithm must be in the transient period; otherwise, the algorithm is likely to be
in steady-state.
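A minimal Python sketch of this flag-based monitor is given below; the window length, the update-count threshold, and the two values of $\tau$ are illustrative choices consistent with~\eqref{eq:gammabar-timevar}:
\begin{verbatim}
import numpy as np
from collections import deque

E = 20
flags = deque([1] * E, maxlen=E)  # start "full" so the transient is assumed

def gamma_bar_k(sigma_n2, update_threshold=4):
    """Time-varying bound gammabar(k) driven by the recent update rate."""
    in_transient = sum(flags) >= update_threshold
    tau = 5.0 if in_transient else 9.0  # low tau in transient, high afterwards
    return np.sqrt(tau * sigma_n2)

# after each iteration, record: flags.append(1 if an update occurred else 0)
\end{verbatim}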
\section{Robustness of the SM-AP Algorithm} \label{sec:robustness-sm-ap}
In this section, we address the robustness of the set-membership affine projection (SM-AP) algorithm. We study its
robustness properties in Subsection~\ref{sub:robustness-sm-ap}, whereas in Subsection~\ref{sub:divergence_sm_ap}, we demonstrate that the SM-AP\abbrev{SM-AP}{Set-Membership Affine Projection} algorithm does not diverge.
\subsection{Robustness of the SM-AP algorithm}\label{sub:robustness-sm-ap}
Suppose that in a system identification problem the unknown system is denoted by $\wbf_o\in\mathbb{R}^{N+1}$
and the desired (reference) vector is given by
\begin{align}
\dbf(k) \triangleq \Xbf^T(k) \wbf_o + \nbf(k) . \label{eq:desiredSignalModel-SM-AP}
\end{align}
By defining the coefficient mismatch $\wbftilde(k)\triangleq\wbf_o-\wbf(k)$,
the error vector can be written as
\begin{align}
\ebf(k)=\Xbf^T(k)\wbf_o+\nbf(k)-\Xbf^T(k)\wbf(k)=\underbrace{\Xbf^T(k)\wbftilde(k)}_{\triangleq \ebftilde(k)}+\nbf(k) \ , \label{eq:def_noiselessError-SM-AP}
\end{align}
where $\ebftilde(k)$ denotes the noiseless error vector (i.e., the error due to a nonzero $\wbftilde(k)$). \symbl{$\ebftilde(k)$}{Noiseless error signal vector}
By defining the indicator function $f:\mathbb{R}\times\mathbb{R}_+ \rightarrow \{ 0,1 \}$ as in~\eqref{eq:def_indicatorFunc}
and using it in (\ref{eq:sm-ap}), the update rule of the SM-AP\abbrev{SM-AP}{Set-Membership Affine Projection} algorithm can be written as follows:
\begin{align}
\hspace{-0.1cm}\wbf(k+1)=\wbf(k)+\Xbf(k)\Abf(k)
(\ebf(k)-\gammabf(k))f(e(k),\gammabar), \label{eq:r_update}
\end{align}
where $\Abf(k)=[\Xbf^T(k)\Xbf(k)]^{-1}$. After subtracting $\wbf_o$ from both sides of (\ref{eq:r_update}), we obtain
\begin{align}
\wbftilde(k+1)=\wbftilde(k)-\Xbf(k)\Abf(k)(\ebf(k)-\gammabf(k))f(e(k),\gammabar).
\end{align}
Notice that $\Abf(k)$ is a symmetric positive definite matrix.
To simplify our notation, we will omit the index $k$ and the arguments of function $f$ that appear on the
right-hand side (RHS)\abbrev{RHS}{Right-Hand Side} of the previous equation, then by decomposing $\ebf(k)$ as in~\eqref{eq:def_noiselessError-SM-AP}
we obtain
\begin{align}
\wbftilde(k+1)=\wbftilde-\Xbf\Abf\ebftilde f-\Xbf\Abf\nbf f+\Xbf\Abf \gammabf f \label{eq:update-SM-AP} ,
\end{align}
from which Theorem~\ref{thm:local_robustness-SM-AP} can be derived.
\begin{thm}[Local Robustness of SM-AP]\label{thm:local_robustness-SM-AP}
For the SM-AP\abbrev{SM-AP}{Set-Membership Affine Projection} algorithm, at every iteration we have
\begin{align}
\| \wbftilde (k+1) \|^2 = \| \wbftilde (k) \|^2 , \text{ if } f(e(k),\gammabar) = 0 \label{eq:local_robustness_f0-SM-AP}
\end{align}
otherwise
\begin{align}
\left\{\begin{array}{ll}
\frac{\|\wbftilde(k+1)\|^2+\ebftilde^T\Abf\ebftilde}{\|\wbftilde(k)\|^2+\nbf^T\Abf\nbf}<1,&\text{if}~\gammabf^T\Abf\gammabf<2\gammabf^T\Abf\nbf,\\
\frac{\|\wbftilde(k+1)\|^2+\ebftilde^T\Abf\ebftilde}{\|\wbftilde(k)\|^2+\nbf^T\Abf\nbf}=1,&\text{if}~\gammabf^T\Abf\gammabf=2\gammabf^T\Abf\nbf,\\
\frac{\|\wbftilde(k+1)\|^2+\ebftilde^T\Abf\ebftilde}{\|\wbftilde(k)\|^2+\nbf^T\Abf\nbf}>1,&\text{if}~\gammabf^T\Abf\gammabf>2\gammabf^T\Abf\nbf,\end{array}\right. \label{eq:local_robustness_f1-SM-AP}
\end{align}
where the iteration index $k$ has been dropped for the sake of clarity, and we assume that
$\|\wbftilde(k)\|^2+\nbf^T\Abf\nbf\neq0$ just to allow us to write the theorem in a compact form.
\end{thm}
\begin{proof}
By computing the Euclidean norm of Equation~\eqref{eq:update-SM-AP} and rearranging the terms we get
\begin{align}
\|\wbftilde(k+1)\|^2=&\wbftilde^T\wbftilde-\wbftilde^T\Xbf\Abf\ebftilde f -\wbftilde^T\Xbf\Abf\nbf f +\wbftilde^T\Xbf\Abf\gammabf f -\ebftilde^T\Abf^T\Xbf^T\wbftilde f \nonumber\\
&+\ebftilde^T\Abf^T\Abf^{-1}\Abf\ebftilde f^2 +\ebftilde^T\Abf^T\Abf^{-1}\Abf\nbf f^2 -\ebftilde^T\Abf^T\Abf^{-1}\Abf\gammabf f^2 \nonumber\\ &-\nbf^T\Abf^T\Xbf^T\wbftilde f
+\nbf^T\Abf^T\Abf^{-1}\Abf\ebftilde f^2 +\nbf^T\Abf^T\Abf^{-1}\Abf\nbf f^2 \nonumber\\ &-\nbf^T\Abf^T\Abf^{-1}\Abf\gammabf f^2 +\gammabf^T\Abf^T\Xbf^T\wbftilde f
-\gammabf^T\Abf^T\Abf^{-1}\Abf\ebftilde f^2 \nonumber\\ &-\gammabf^T\Abf^T\Abf^{-1}\Abf\nbf f^2
+\gammabf^T\Abf^T\Abf^{-1}\Abf\gammabf f^2 \nonumber\\
=&\|\wbftilde\|^2-\ebftilde^T\Abf\ebftilde f -\ebftilde^T\Abf\nbf f +\ebftilde^T\Abf\gammabf f -\ebftilde^T\Abf\ebftilde f+\ebftilde^T\Abf\ebftilde f^2 \nonumber\\
&+\ebftilde^T\Abf\nbf f^2 -\ebftilde^T\Abf\gammabf f^2 -\nbf^T\Abf\ebftilde f +\nbf^T\Abf\ebftilde f^2 +\nbf^T\Abf\nbf f^2 \nonumber\\
&-\nbf^T\Abf\gammabf f^2 +\gammabf^T\Abf\ebftilde f -\gammabf^T\Abf\ebftilde f^2 -\gammabf^T\Abf\nbf f^2 +\gammabf^T\Abf\gammabf f^2 \ , \label{eq:norm2-sm-ap-robustness}
\end{align}
where it was used that $\Abf^{-1} = \Xbf^T(k)\Xbf(k)$ and $\ebftilde(k) = \Xbf^T(k) \wbftilde(k)$.
From the above equation we observe that when $f=0$ we have
\begin{align}
\|\wbftilde(k+1)\|^2=\|\wbftilde(k)\|^2 \label{eq:equality}
\end{align}
as expected, since $f=0$ means that the algorithm does not update its coefficients.
However, when $f=1$ the following equality is achieved from \eqref{eq:norm2-sm-ap-robustness}:
\begin{align}
\|\wbftilde(k+1)\|^2=\|\wbftilde\|^2-\ebftilde^T\Abf\ebftilde +\nbf^T\Abf\nbf-2\gammabf^T\Abf\nbf +\gammabf^T\Abf\gammabf \ . \label{eq:main_equation}
\end{align}
After rearranging the terms of the previous equation we obtain
\begin{align}
\|\wbftilde(k+1)\|^2+\ebftilde^T\Abf\ebftilde=\|\wbftilde\|^2+\nbf^T\Abf\nbf-2\gammabf^T\Abf\nbf+\gammabf^T\Abf\gammabf \ . \label{eq:NiceIdentity}
\end{align}
Therefore,
$\|\wbftilde(k+1)\|^2+\ebftilde^T\Abf\ebftilde<\|\wbftilde\|^2+\nbf^T\Abf\nbf$ if $\gammabf^T\Abf\gammabf<2\gammabf^T\Abf\nbf$,
$\|\wbftilde(k+1)\|^2+\ebftilde^T\Abf\ebftilde=\|\wbftilde\|^2+\nbf^T\Abf\nbf$ if $\gammabf^T\Abf\gammabf=2\gammabf^T\Abf\nbf$, and
$\|\wbftilde(k+1)\|^2+\ebftilde^T\Abf\ebftilde>\|\wbftilde\|^2+\nbf^T\Abf\nbf$ if $\gammabf^T\Abf\gammabf>2\gammabf^T\Abf\nbf$.
Assuming $\|\wbftilde\|^2+\nbf^T\Abf\nbf\neq0$ we can summarize the discussion above in a compact form as follows:
\begin{align}
\left\{\begin{array}{ll}\frac{\|\wbftilde(k+1)\|^2+\ebftilde^T\Abf\ebftilde}{\|\wbftilde(k)\|^2+\nbf^T\Abf\nbf}<1,&\text{if}~\gammabf^T\Abf\gammabf<2\gammabf^T\Abf\nbf,\\
\frac{\|\wbftilde(k+1)\|^2+\ebftilde^T\Abf\ebftilde}{\|\wbftilde(k)\|^2+\nbf^T\Abf\nbf}=1,&\text{if}~\gammabf^T\Abf\gammabf=2\gammabf^T\Abf\nbf,\\
\frac{\|\wbftilde(k+1)\|^2+\ebftilde^T\Abf\ebftilde}{\|\wbftilde(k)\|^2+\nbf^T\Abf\nbf}>1,&\text{if}~\gammabf^T\Abf\gammabf>2\gammabf^T\Abf\nbf.\end{array}\right.
\end{align}
\end{proof}
The combination of the first two inequalities in~\eqref{eq:local_robustness_f1-SM-AP}, which corresponds to the
case $\gammabf^T\Abf\gammabf \leq 2\gammabf^T\Abf\nbf$, has an interesting interpretation.
It describes that for any constraint vector $\gammabf$ satisfying this condition we have
\begin{align}
\|\wbftilde(k+1)\|^2+\ebftilde^T\Abf\ebftilde \leq \|\wbftilde(k)\|^2+\nbf^T\Abf\nbf, \label{eq:first_eq_loccal_SM-AP}
\end{align}
no matter what the noise vector $\nbf(k)$ is.
In this way, we can derive the global robustness property of the SM-AP\abbrev{SM-AP}{Set-Membership Affine Projection} algorithm.
\begin{cor}[Global Robustness of SM-AP]\label{cor:global_robustness-SM-AP}
Suppose that the SM-AP\abbrev{SM-AP}{Set-Membership Affine Projection} algorithm, running from iteration $0$ (initialization) to a given iteration $K$,
employs a constraint vector $\gammabf$ satisfying $\gammabf^T\Abf\gammabf \leq 2\gammabf^T\Abf\nbf$
at every iteration in which an update occurs.
Then, it always holds that
\begin{align}
\frac{\|\wbftilde(K)\|^2+\sum\limits_{k\in{\cal K}_{\rm up}}\ebftilde^T\Abf\ebftilde}{\|\wbftilde(0)\|^2
+\sum\limits_{k\in{\cal K}_{\rm up}}\nbf^T\Abf\nbf} \leq 1,
\end{align}
where ${\cal K}_{\rm up} \neq \emptyset$ is the set comprised of the iteration indexes $k$ in which $\wbf(k)$ is indeed updated and the equality
holds when $\gammabf^T\Abf\gammabf=2\gammabf^T\Abf\nbf$ for every $k \in {\cal K}_{\rm up}$.
If ${\cal K}_{\rm up} = \emptyset$, then $\|\wbftilde(K)\|^2 = \|\wbftilde(0)\|^2$, a case that has no practical interest
since no update is performed.
\end{cor}
\begin{proof}
Denote by ${\cal K} \triangleq \{ 0, 1, 2, \ldots,$ $K-1 \}$ the set of all iterations.
Let ${\cal K}_{\rm up}\subseteq{\cal K}$ be the subset containing only the iterations in which an update
occurs, whereas ${\cal K}_{\rm up}^c \triangleq {\cal K} \setminus {\cal K}_{\rm up}$ is comprised of the
iterations in which the filter coefficients are not updated.
As a consequence of Theorem~\ref{thm:local_robustness-SM-AP}, when an update occurs the inequality given in~\eqref{eq:first_eq_loccal_SM-AP}
is valid provided $\gammabf$ is chosen such that $\gammabf^T\Abf\gammabf \leq 2\gammabf^T\Abf\nbf$ is respected.
In this way, by summing such inequality for all $k \in {\cal K}_{\rm up}$ we obtain
\begin{align}
\sum_{k \in {\cal K}_{\rm up}} \Big(\|\wbftilde(k+1)\|^2+\ebftilde^T\Abf\ebftilde \Big)
\leq \sum_{k \in {\cal K}_{\rm up}} \Big( \|\wbftilde(k)\|^2+\nbf^T\Abf\nbf\Big). \label{eq:robustness_accumulation_f1_SM-AP}
\end{align}
Observe that $\gammabf$, $\ebftilde$, $\nbf$, and $\Abf$ all depend on the iteration index $k$, which we have omitted for
the sake of simplicity.
In addition, for the iterations without coefficient update, we have~\eqref{eq:local_robustness_f0-SM-AP}, which
can be summed for all $k \in {\cal K}_{\rm up}^c$ leading to
\begin{align}
\sum_{k \in {\cal K}_{\rm up}^c} \| \wbftilde(k+1) \|^2 = \sum_{k \in {\cal K}_{\rm up}^c} \| \wbftilde(k) \|^2. \label{eq:robustness_accumulation_f0_SM-AP}
\end{align}
Summing~\eqref{eq:robustness_accumulation_f1_SM-AP} and~\eqref{eq:robustness_accumulation_f0_SM-AP} yields
\begin{align}
\sum_{k \in {\cal K}} \| \wbftilde(k+1) \|^2
+ \sum_{k \in {\cal K}_{\rm up}}\ebftilde^T\Abf\ebftilde
\leq \sum_{k \in {\cal K}} \| \wbftilde(k) \|^2
+ \sum_{k \in {\cal K}_{\rm up}}\nbf^T\Abf\nbf. \label{eq:robustness_accumulation_SM-AP}
\end{align}
Then, we can cancel several of the terms $\| \wbftilde(k) \|^2$ from both sides of the above inequality simplifying it as follows
\begin{align}
\| \wbftilde(K) \|^2
+\sum_{k \in {\cal K}_{\rm up}}\ebftilde^T\Abf\ebftilde
\leq \| \wbftilde(0) \|^2
+ \sum_{k \in {\cal K}_{\rm up}}\nbf^T\Abf\nbf.
\end{align}
Assuming a nonzero denominator, we can write the previous inequality in a compact form
\begin{align}
\frac{\| \wbftilde(K) \|^2
+ \sum\limits_{k \in {\cal K}_{\rm up}}\ebftilde^T\Abf\ebftilde}{\| \wbftilde(0) \|^2
+ \sum\limits_{k \in {\cal K}_{\rm up}}\nbf^T\Abf\nbf} \leq 1.
\end{align}
This relation holds for all $K$, provided $\gammabf^T\Abf\gammabf \leq 2\gammabf^T\Abf\nbf$ is satisfied for every iteration
in which an update occurs, i.e., for every $k \in {\cal K}_{\rm up}$.
The only assumption used in the derivation is that ${\cal K}_{\rm up}\neq\emptyset$.
Otherwise, we would have $\| \wbftilde(K) \|^2 = \| \wbftilde(0) \|^2$, which would occur only if $\wbf(k)$
is never updated, which is not of practical interest.
\end{proof}
Observe that, unlike the SM-NLMS\abbrev{SM-NLMS}{Set-Membership Normalized LMS} algorithm, the SM-AP\abbrev{SM-AP}{Set-Membership Affine Projection} algorithm requires the condition $\gammabf^T\Abf\gammabf \leq 2\gammabf^T\Abf\nbf$
to be satisfied in order to guarantee $l_2$-stability from its uncertainties
$\{ \wbftilde(0), \{ n(k) \}_{0\leq k\leq K} \}$ to its errors $\{ \wbftilde(K), \{ \etilde(k) \}_{0\leq k\leq K} \}$.
The next question is: are there constraint vectors $\gammabf$ satisfying such a condition?
This is a very interesting point because the left-hand side (LHS)\abbrev{LHS}{Left-Hand Side} of the condition is always nonnegative, whereas the RHS\abbrev{RHS}{Right-Hand Side} may be negative.
Corollary~\ref{cor:global_robustness-SM-AP-c*n(k)} answers this question and shows an example of such a constraint vector.
\begin{cor}\label{cor:global_robustness-SM-AP-c*n(k)}
Suppose the CV\abbrev{CV}{Constraint Vector} is chosen as $\gammabf(k) = c\nbf(k)$ in the SM-AP\abbrev{SM-AP}{Set-Membership Affine Projection} algorithm, where $\nbf(k)$ is the noise vector defined
in~\eqref{eq:desiredSignalModel-SM-AP}.
If $0 \leq c \leq 2$, then the condition $\gammabf^T\Abf\gammabf \leq 2\gammabf^T\Abf\nbf$ always holds, implying that
the SM-AP\abbrev{SM-AP}{Set-Membership Affine Projection} algorithm is globally robust by Corollary~\ref{cor:global_robustness-SM-AP}.
\end{cor}
\begin{proof}
Substituting $\gammabf(k) = c\nbf(k)$ in $\gammabf^T\Abf\gammabf \leq 2\gammabf^T\Abf\nbf$ leads to
the following condition $(c^2-2c)\nbf^T(k)\Abf(k)\nbf(k)\leq 0$, which is satisfied for $c^2-2c \leq 0 \Rightarrow 0\leq c \leq 2$ since
$\Abf(k)$ is positive definite.
Hence, due to Corollary~\ref{cor:global_robustness-SM-AP} the proposed $\gammabf(k)$ leads to a globally robust SM-AP\abbrev{SM-AP}{Set-Membership Affine Projection} algorithm.
\end{proof}
It is worth mentioning that the constraint vector $\gammabf(k)$ in Corollary~\ref{cor:global_robustness-SM-AP-c*n(k)}
is not practical because $\nbf(k)$ is not observable.
Therefore, Corollary~\ref{cor:global_robustness-SM-AP-c*n(k)} is actually related to the existence of $\gammabf(k)$
satisfying $\gammabf^T\Abf\gammabf<2\gammabf^T\Abf\nbf$.
Unlike the SM-NLMS\abbrev{SM-NLMS}{Set-Membership Normalized LMS} algorithm, the $l_2$-stability of the SM-AP\abbrev{SM-AP}{Set-Membership Affine Projection} algorithm is not guaranteed.
Indeed, as demonstrated in Theorem~\ref{thm:local_robustness-SM-AP} and Corollary~\ref{cor:global_robustness-SM-AP},
a judicious choice of the CV\abbrev{CV}{Constraint Vector} is required for the SM-AP\abbrev{SM-AP}{Set-Membership Affine Projection} algorithm to be $l_2$-stable.
{\it It is worth mentioning that practical choices of $\gammabf (k)$ satisfying the robustness condition
$\gammabf^T\Abf\gammabf \leq 2\gammabf^T\Abf\nbf$ for every iteration $k$ are not known yet!}
Even widely used CVs\abbrev{CV}{Constraint Vector}, like the simple choice CV (SC-CV)~\cite{Markus_optimalCV_sigpro2017}\abbrev{SC-CV}{Simple Choice CV}, sometimes violate this condition
as will be shown in Section~\ref{sec:simulation-robustness}.
However, this does not mean that the SM-AP\abbrev{SM-AP}{Set-Membership Affine Projection} algorithm diverges.
In fact, it does not diverge regardless of the choice of $\gammabf(k)$, as demonstrated in the next subsection.
\subsection{The SM-AP algorithm does not diverge}\label{sub:divergence_sm_ap}
When the SM-AP\abbrev{SM-AP}{Set-Membership Affine Projection} algorithm updates (i.e., when $|e(k)| > \gammabar$), it generates $\wbf (k+1)$ as the solution to the following
optimization problem~\cite{Werner_sm_ap_letter2001,Diniz_adaptiveFiltering_book2013}:
\begin{align}
& \text{minimize } \| \wbf(k+1) - \wbf(k) \|^2 \nonumber \\
& \text{subject to } \dbf(k) - \Xbf^T(k) \wbf(k+1) = \gammabf(k).
\end{align}
The constraint essentially states that the a posteriori errors $\epsilon(k-l) \triangleq d(k-l) - \xbf^T(k-l) \wbf(k+1)$ are equal to
their respective $\gamma_l(k)$, which in turn are bounded by $\gammabar$.
This leads to the following derivation:
\begin{align}
| \epsilon(k-l) | = | d(k-l) - \xbf^T(k-l) \wbf(k+1) | & \leq \gammabar , \nonumber \\
| \xbf^T(k-l) \wbftilde(k+1) + n(k-l) | & \leq \gammabar ,
\end{align}
which holds at every update iteration because $|\gamma_l(k)| \leq \gammabar$ by construction. Therefore, we have
\begin{align}
-\gammabar-n(k-l)&\leq \xbf^T(k-l)\wbftilde(k+1)\leq \gammabar-n(k-l).
\end{align}
Since the noise sequence is bounded and $\gammabar < \infty$, we have
\begin{align}
-\infty < \sum_{i=0}^N x_i(k-l){\tilde w}_i(k+1) < \infty,
\end{align}
where $x_i(k-l), {\tilde w}_i(k+1) \in \mathbb{R}$ denote the $i$th entry of vectors $\xbf(k-l), \wbftilde(k+1) \in \mathbb{R}^{N+1}$, respectively.
As a result, $|{\tilde w}_i(k+1)|$ is also bounded implying $\| \wbftilde(k+1) \|^2 < \infty$, which means that
the SM-AP\abbrev{SM-AP}{Set-Membership Affine Projection} algorithm does not diverge even when its CV\abbrev{CV}{Constraint Vector} is not properly chosen.
In Section~\ref{sec:simulation-robustness} we verify this fact experimentally by using a {\it general CV}\abbrev{CV}{Constraint Vector}, i.e.,
a CV\abbrev{CV}{Constraint Vector} whose entries are randomly chosen but satisfying $| \gamma_i (k) | \leq \gammabar$.
Such general CV\abbrev{CV}{Constraint Vector} leads to poor performance, in comparison to the SM-AP\abbrev{SM-AP}{Set-Membership Affine Projection} algorithm using adequate CVs\abbrev{CV}{Constraint Vector}, but the algorithm
does not diverge.
The same reasoning could be applied to demonstrate that the SM-NLMS\abbrev{SM-NLMS}{Set-Membership Normalized LMS} algorithm does not diverge as well.
However, from Corollary~\ref{thm:global_robustness-sm-nlms} it is straightforward to verify that $\| \wbftilde(K) \|^2 < \infty$ for every $K$,
as the denominator in~\eqref{eq:global_robustness-sm-nlms} is finite.
\section{Simulations} \label{sec:simulation-robustness}
In this section, we provide simulation results for the SM-NLMS\abbrev{SM-NLMS}{Set-Membership Normalized LMS} and SM-AP\abbrev{SM-AP}{Set-Membership Affine Projection} algorithms in order to verify their robustness properties addressed in the previous sections.
These results are obtained by applying the aforementioned algorithms to a system identification problem.
The unknown system $\wbf_o$ is comprised of $10$ coefficients drawn from a standard Gaussian distribution.
The noise $n(k)$ is a zero-mean white Gaussian noise with variance $\sigma_n^2=0.01$ yielding a signal-to-noise ratio (SNR)\abbrev{SNR}{Signal-to-Noise Ratio}
equal to $20$~dB.
The regularization factor and the initialization for the adaptive filtering coefficient vector are $\delta = 10^{-12}$ and
$\wbf(0)=[0~\cdots~0]^T \in \mathbb{R}^{10}$, respectively.
The error bound parameter is usually set as $\gammabar = \sqrt{5 \sigma_n^2}=0.2236$, unless otherwise stated.
\subsection{Confirming the results for the SM-NLMS algorithm} \label{subsec:simulation-robustness-sm-nlms}
Here, the input signal $x(k)$ is a zero-mean white Gaussian noise with variance equal to $1$.
Fig.~\ref{fig:sim1-sm-nlms-robustness} aims at verifying Theorem~\ref{thm:local_robustness-sm-nlms}.
Thus, for the iterations $k$ with coefficient update, let us denote the left-hand side (LHS)\abbrev{LHS}{Left-Hand Side} and the right-hand side (RHS)\abbrev{RHS}{Right-Hand Side}
of~\eqref{eq:local_robustness_f1} as $g_1(k)$ and $g_2(k)$, respectively.
In addition, to simultaneously account for~\eqref{eq:local_robustness_f0}, we define
$g_1(k) = \| \wbftilde(k+1) \|^2$ and $g_2(k) = \| \wbftilde(k) \|^2$ for the iterations
without coefficient update.
Fig.~\ref{fig:sim1-sm-nlms-robustness} depicts $g_1(k)$ and $g_2(k)$ considering the system identification scenario
described in the beginning of Section~\ref{sec:simulation-robustness}.
In this figure, we can observe that $g_1(k) \leq g_2(k)$ for all $k$.
Indeed, we verified that $g_1(k) = g_2(k)$ (i.e., curves are overlaid) only in the iterations without update, i.e.,
$\wbf(k+1) = \wbf(k)$.
In the remaining iterations we have $g_1(k) < g_2(k)$, corroborating Theorem~\ref{thm:local_robustness-sm-nlms}.
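This experiment can be reproduced with the short Python sketch below, which runs the scenario described above and asserts that $g_1(k) < g_2(k)$ at every update iteration; the random seed is an arbitrary choice:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
w_o = rng.standard_normal(10)          # unknown system with 10 coefficients
sigma_n2, delta = 0.01, 1e-12          # SNR = 20 dB for unit-variance input
gamma_bar = np.sqrt(5 * sigma_n2)
w = np.zeros(10)                       # w(0) = [0 ... 0]^T

for k in range(2500):
    x = rng.standard_normal(10)        # zero-mean white Gaussian input
    n = np.sqrt(sigma_n2) * rng.standard_normal()
    d = w_o @ x + n
    e = d - w @ x
    if abs(e) > gamma_bar:             # update iteration
        w_tilde = w_o - w
        mu, alpha = 1.0 - gamma_bar / abs(e), x @ x + delta
        e_tilde = w_tilde @ x
        w = w + (mu / alpha) * e * x
        g1 = np.sum((w_o - w) ** 2) + (mu / alpha) * e_tilde ** 2
        g2 = np.sum(w_tilde ** 2) + (mu / alpha) * n ** 2
        assert g1 < g2                 # local robustness inequality holds
\end{verbatim}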
\begin{figure}[t!]
\centering
\includegraphics[width=1\linewidth]{Figs/sim1-sm-nlms-robustness.pdf}
\caption{Values of $g_1(k)$ and $g_2(k)$ over the iterations for the SM-NLMS algorithm corroborating Theorem~\ref{thm:local_robustness-sm-nlms}. \label{fig:sim1-sm-nlms-robustness}}
\end{figure}
Fig.~\ref{fig:sim2-sm-nlms-robustness} depicts the sequence $\{\|\wbftilde(k)\|^2\}$ for the SM-NLMS\abbrev{SM-NLMS}{Set-Membership Normalized LMS} algorithm and its classical
counterpart, the NLMS\abbrev{NLMS}{Normalized LMS} algorithm.
For the SM-NLMS\abbrev{SM-NLMS}{Set-Membership Normalized LMS} algorithm, we consider three cases: fixed $\gammabar$ with unknown noise bound (blue solid line),
fixed $\gammabar$ with known noise bound $B=0.11$ (cyan solid line), and time-varying $\gammabar(k)$,
defined as $\sqrt{5\sigma_n^2}$ during the transient period and $\sqrt{9\sigma_n^2}$ during the steady-state,
with unknown noise bound (green solid line).
For the results using the time-varying $\gammabar(k)$, the window length is $E=20$, and when the number of updates in the window
is less than 4, we assume the algorithm is in the steady-state period.
For the NLMS\abbrev{NLMS}{Normalized LMS} algorithm, two different step-sizes are used: $\mu=0.9$, which leads to fast convergence but high misadjustment,
and $\mu=0.05$, which leads to slow convergence but low misadjustment.
In Fig.~\ref{fig:sim2-sm-nlms-robustness}, the blue curve confirms the discussion in Subsection~\ref{sub:sm-nlms-unbounded-noise}.
Indeed, we can observe that the sequence $\{\|\wbftilde(k)\|^2\}$ represented by this blue curve increases only
$30$ times along the $2500$ iterations, meaning that the SM-NLMS\abbrev{SM-NLMS}{Set-Membership Normalized LMS} algorithm did not improve its estimate $\wbf(k+1)$ only
in $30$ iterations.
Thus, in this experiment we have $\mathbb{P}[\|\wbftilde(k+1)\|^2>\|\wbftilde(k)\|^2] = 0.012$, whose value is lower than
its corresponding upper bound given by ${\rm erfc}(\sqrt{2.5})=0.0253$, as explained in Subsection~\ref{sub:sm-nlms-unbounded-noise}.
Also, we can observe that the event $\|\wbftilde(k+1)\|^2>\|\wbftilde(k)\|^2$ did not occur in the early iterations because
in these iterations $\etilde^2(k)$ is usually large due to a significant mismatch between $\wbf(k)$ and $\wbf_o$, i.e.,
the condition specified in Corollary~\ref{cor:sm_nlms_decreasing} is frequently satisfied.
Also in Fig.~\ref{fig:sim2-sm-nlms-robustness}, the cyan curve shows that when the noise bound is known we can obtain a
monotonic decreasing sequence $\{\|\wbftilde(k)\|^2\}$ by selecting $\gammabar \geq 2B$, corroborating
Theorem~\ref{thm:strong-local-robustness-sm-nlms} and Corollary~\ref{cor:strong-global-robustness-sm-nlms}.
The sequence $\{\|\wbftilde(k)\|^2\}$ represented by the green curve in Fig.~\ref{fig:sim2-sm-nlms-robustness} increases only $3$ times,
thus confirming the advantage
of using a time-varying $\gammabar(k)$ when the noise bound is unknown, as explained in
Subsection~\ref{sub:sm-nlms-time-varying-gammabar}.
As compared to the SM-NLMS\abbrev{SM-NLMS}{Set-Membership Normalized LMS} algorithm, the behavior of the sequence $\{\|\wbftilde(k)\|^2\}$ for the NLMS\abbrev{NLMS}{Normalized LMS} algorithm
is very irregular.
Indeed, for the NLMS\abbrev{NLMS}{Normalized LMS} algorithm there are many iterations in which $\|\wbftilde(k+1)\|^2>\|\wbftilde(k)\|^2$, even
when using a small step-size $\mu$.
Hence, the NLMS\abbrev{NLMS}{Normalized LMS} algorithm does not use the input data as efficiently as the SM-NLMS\abbrev{SM-NLMS}{Set-Membership Normalized LMS} algorithm does,
given that the NLMS\abbrev{NLMS}{Normalized LMS} performs many ``useless updates''.
In conclusion, an interesting advantage of the SM-NLMS\abbrev{SM-NLMS}{Set-Membership Normalized LMS} algorithm over the NLMS\abbrev{NLMS}{Normalized LMS} algorithm is that the former can achieve
fast convergence and has a well-behaved sequence $\{\|\wbftilde(k)\|^2\}$ (which rarely increases) at the same time.
In addition, the SM-NLMS\abbrev{SM-NLMS}{Set-Membership Normalized LMS} algorithm also saves computational resources by not updating the filter coefficients at
every iteration.
In Fig.~\ref{fig:sim2-sm-nlms-robustness}, the update rates of the blue, cyan, and green curves are 4.6$\%$, 1.5$\%$, and 1.9$\%$, respectively.
They confirm that the computational cost of the SM-NLMS\abbrev{SM-NLMS}{Set-Membership Normalized LMS} algorithm is significantly lower than that of the NLMS\abbrev{NLMS}{Normalized LMS}
algorithm.\footnote{In comparison to the NLMS\abbrev{NLMS}{Normalized LMS} algorithm, whenever the SM-NLMS\abbrev{SM-NLMS}{Set-Membership Normalized LMS} algorithm updates it performs two additional operations:
One division and one subtraction due to the computation of $\mu(k)$. However, for most of the iterations the SM-NLMS\abbrev{SM-NLMS}{Set-Membership Normalized LMS} algorithm requires fewer
operations because it does not update often.}
\begin{figure}[t!]
\centering
\includegraphics[width=1\linewidth]{Figs/sim2-sm-nlms-robustness.pdf}
\caption{$\|\wbftilde(k)\|^2 \triangleq \| \wbf_o - \wbf(k) \|^2$ for the NLMS and the SM-NLMS algorithms. \label{fig:sim2-sm-nlms-robustness}}
\end{figure}
\subsection{Confirming the results for the SM-AP algorithm} \label{subsec:simulation-robustness-sm-ap}
For the case of the SM-AP\abbrev{SM-AP}{Set-Membership Affine Projection} algorithm, the input is a first-order autoregressive signal generated as $x(k)=0.95x(k-1)+n(k-1)$.
We test the SM-AP\abbrev{SM-AP}{Set-Membership Affine Projection} algorithm employing $L=2$ (i.e., reuse of two previous input data) and
three different constraint vectors (CVs)\abbrev{CV}{Constraint Vector} $\gammabf(k)$: a general CV\abbrev{CV}{Constraint Vector}, the SC-CV,\abbrev{SC-CV}{Simple Choice CV}
and the noise vector CV.\abbrev{CV}{Constraint Vector}
The general CV\abbrev{CV}{Constraint Vector} $\gammabf(k)$, in which the entries are set as $\gamma_l(k) = \gammabar$ for $0\leq l \leq L$, illustrates a case
where the CV\abbrev{CV}{Constraint Vector} is not properly chosen~\cite{Markus_edcv_eusipco2013,Markus_optimalCV_sigpro2017}.
The SC-CV~\cite{Markus_edcv_eusipco2013,Markus_optimalCV_sigpro2017}\abbrev{SC-CV}{Simple Choice CV} is defined as
$\gamma_0(k) = \gammabar\frac{e(k)}{|e(k)|}$ and $\gamma_l(k) = \epsilon(k-l)$ for $1 \leq l \leq L$.
The noise vector CV\abbrev{CV}{Constraint Vector} is given by $\gammabf(k) = \nbf(k)$.
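For reference, the three CVs\abbrev{CV}{Constraint Vector} can be formed as in the Python sketch below; the helper names are our own, and the noise vector CV\abbrev{CV}{Constraint Vector} is realizable only in simulation since $\nbf(k)$ is not observable in practice:
\begin{verbatim}
import numpy as np

def general_cv(L, gamma_bar):
    """General CV: every entry equals gammabar."""
    return np.full(L + 1, gamma_bar)

def simple_choice_cv(e_k, eps_past, gamma_bar):
    """SC-CV: gamma_0 = gammabar*sign(e(k)); gamma_l = eps(k-l), 1<=l<=L."""
    return np.concatenate(([gamma_bar * np.sign(e_k)], eps_past))

def noise_cv(n_vec):
    """Noise vector CV: gamma(k) = n(k) (simulation only)."""
    return np.asarray(n_vec)
\end{verbatim}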
The results depicted in Figs.~\ref{fig:sim-robustness-sm-ap}, \ref{fig:sim-robustness-sm-ap-simp}, \ref{fig:sm-ap-noise-robustness}, and \ref{fig:sm-ap-simp-boounded-robustness} aim at verifying Theorem~\ref{thm:local_robustness-SM-AP} and
Corollary~\ref{cor:global_robustness-SM-AP-c*n(k)}.
We define $g_1(k)$ and $g_2(k)$ as the numerator and the denominator of~\eqref{eq:local_robustness_f1-SM-AP}
in Theorem~\ref{thm:local_robustness-SM-AP}, respectively, when an update occurs; otherwise, we define
$g_1(k) = \| \wbftilde(k+1) \|^2$ and $g_2(k) = \| \wbftilde(k) \|^2$.
\begin{figure}[t!]
\centering
\includegraphics[width=1\linewidth]{Figs/sm-ap-robustness.pdf}
\caption{Values of $g_1(k)$ and $g_2(k)$ over the iterations for the SM-AP algorithm with $\gammabf(k)$ as the general CV,
where $g_1(k)$ and $g_2(k)$ are the numerator and denominator of~\eqref{eq:local_robustness_f1-SM-AP} in
Theorem~\ref{thm:local_robustness-SM-AP}, when an update occurs; otherwise, $g_1(k)=\|\wbftilde(k+1)\|^2$ and $g_2(k)=\|\wbftilde(k)\|^2$. \label{fig:sim-robustness-sm-ap}}
\end{figure}
\begin{figure}[t!]
\centering
\includegraphics[width=1\linewidth]{Figs/sm-ap-simp-robustness.pdf}
\caption{Values of $g_1(k)$ and $g_2(k)$ over the iterations for the SM-AP algorithm with $\gammabf(k)$ as the SC-CV,
where $g_1(k)$ and $g_2(k)$ are the numerator and denominator of~\eqref{eq:local_robustness_f1-SM-AP} in
Theorem~\ref{thm:local_robustness-SM-AP}, when an update occurs; otherwise, $g_1(k)=\|\wbftilde(k+1)\|^2$ and $g_2(k)=\|\wbftilde(k)\|^2$. \label{fig:sim-robustness-sm-ap-simp}}
\end{figure}
\begin{figure}[t!]
\centering
\includegraphics[width=1\linewidth]{Figs/sm-ap-noise-robustness.pdf}
\caption{Values of $g_1(k)$ and $g_2(k)$ over the iterations for the SM-AP algorithm with $\gammabf(k) = \nbf(k)$,
where $g_1(k)$ and $g_2(k)$ are the numerator and denominator of~\eqref{eq:local_robustness_f1-SM-AP} in
Theorem~\ref{thm:local_robustness-SM-AP}, when an update occurs; otherwise, $g_1(k)=\|\wbftilde(k+1)\|^2$ and $g_2(k)=\|\wbftilde(k)\|^2$. \label{fig:sm-ap-noise-robustness}}
\end{figure}
\begin{figure}[t!]
\centering
\includegraphics[width=1\linewidth]{Figs/sm-ap-simp-bounded-robustness.pdf}
\caption{Values of $g_1(k)$ and $g_2(k)$ over the iterations for the SM-AP algorithm with $\gammabf(k)$ as the SC-CV when the noise bound is known, where $g_1(k)$ and $g_2(k)$ are the numerator and denominator of~\eqref{eq:local_robustness_f1-SM-AP} in
Theorem~\ref{thm:local_robustness-SM-AP}, when an update occurs; otherwise, $g_1(k)=\|\wbftilde(k+1)\|^2$ and $g_2(k)=\|\wbftilde(k)\|^2$. \label{fig:sm-ap-simp-boounded-robustness}}
\end{figure}
The results depicted in Fig.~\ref{fig:sim-robustness-sm-ap} illustrate that, for the general CV\abbrev{CV}{Constraint Vector}, there are many iterations
in which $g_1(k)>g_2(k)$ (about $293$ out of $1000$ iterations).
This is an expected behavior since the general CV\abbrev{CV}{Constraint Vector} does not take into account (directly or indirectly) the value of $n(k)$ and,
therefore, it does not consider the robustness condition $\gammabf^T(k) \Abf(k) \gammabf(k) \leq 2 \gammabf^T(k) \Abf(k) \nbf(k)$.
For the SM-AP\abbrev{SM-AP}{Set-Membership Affine Projection} algorithm employing the SC-CV\abbrev{SC-CV}{Simple Choice CV}, however, there are very few iterations in which
$g_1(k)> g_2(k)$ (only $19$ out of $1000$ iterations), as shown in Fig.~\ref{fig:sim-robustness-sm-ap-simp}.
This means that even the widely used SC-CV\abbrev{SC-CV}{Simple Choice CV} does not guarantee global robustness.
Fig.~\ref{fig:sm-ap-noise-robustness} depicts the results for the SM-AP\abbrev{SM-AP}{Set-Membership Affine Projection} algorithm with $\gammabf(k)=\nbf(k)$.
In this case, we can observe that $g_1(k)\leq g_2(k)$ for all $k$, corroborating Corollary~\ref{cor:global_robustness-SM-AP-c*n(k)}.
In other words, this CV\abbrev{CV}{Constraint Vector} guarantees the global robustness of the SM-AP\abbrev{SM-AP}{Set-Membership Affine Projection} algorithm.
Fig.~\ref{fig:sm-ap-simp-boounded-robustness} illustrates $g_1(k)$ and $g_2(k)$ for the SM-AP\abbrev{SM-AP}{Set-Membership Affine Projection} algorithm with SC-CV\abbrev{SC-CV}{Simple Choice CV} when
the noise bound is known and 10 times smaller than $\gammabar$.
In contrast with the SM-NLMS\abbrev{SM-NLMS}{Set-Membership Normalized LMS} algorithm, for the SM-AP\abbrev{SM-AP}{Set-Membership Affine Projection} algorithm even when the noise bound is known and much smaller than $\gammabar$,
we cannot guarantee that $g_1(k)\leq g_2(k)$ for all $k$.
In Fig.~\ref{fig:sm-ap-simp-boounded-robustness}, for example, we observe $g_1(k)>g_2(k)$ in 15 iterations.
Fig.~\ref{fig:sm-ap-all-wtilde-robustness} depicts the sequence $\{\|\wbftilde(k)\|^2\}$ for the AP\abbrev{AP}{Affine Projection} and the SM-AP\abbrev{SM-AP}{Set-Membership Affine Projection} algorithms.
For the AP\abbrev{AP}{Affine Projection} algorithm, the step-size $\mu$ is set as 0.9 and 0.05, whereas for the SM-AP\abbrev{SM-AP}{Set-Membership Affine Projection} algorithm the three previously
defined CVs\abbrev{CV}{Constraint Vector} are tested.
For the AP\abbrev{AP}{Affine Projection} algorithm, we can observe an irregular behavior of $\{\|\wbftilde(k)\|^2\}$, i.e., this sequence increases and decreases
very often.
Even when a low value of $\mu$ is applied we still observe many iterations in which $\|\wbftilde(k+1)\|^2 > \|\wbftilde(k)\|^2$ (425 out of 1000 iterations).
The SM-AP\abbrev{SM-AP}{Set-Membership Affine Projection} algorithm using the general CV\abbrev{CV}{Constraint Vector} performs similarly to the AP\abbrev{AP}{Affine Projection} algorithm with high $\mu$.
When the CV\abbrev{CV}{Constraint Vector} is properly chosen, however, as with the SC-CV\abbrev{SC-CV}{Simple Choice CV}, the number of iterations
in which $\|\wbftilde(k+1)\|^2 > \|\wbftilde(k)\|^2$ is dramatically reduced (26 out of 1000 iterations), which means that the SM-AP\abbrev{SM-AP}{Set-Membership Affine Projection} algorithm with an adequate CV\abbrev{CV}{Constraint Vector}
performs fewer ``useless updates'' than the AP\abbrev{AP}{Affine Projection} algorithm.
Another interesting, although not practical, choice of CV\abbrev{CV}{Constraint Vector} is $\gammabf(k) = \nbf(k)$, which leads to a monotonic decreasing
sequence $\{\|\wbftilde(k)\|^2\}$.
\begin{figure}[t!]
\centering
\includegraphics[width=1\linewidth]{Figs/sm-ap-all-wtilde-robustness.pdf}
\caption{$\|\wbftilde(k)\|^2 \triangleq \| \wbf(k) - \wbf_o \|^2$ for the AP and the SM-AP algorithms. \label{fig:sm-ap-all-wtilde-robustness}}
\end{figure}
The MSE\abbrev{MSE}{Mean-Squared Error} learning curves for the AP\abbrev{AP}{Affine Projection} and the SM-AP\abbrev{SM-AP}{Set-Membership Affine Projection} algorithms are depicted in Fig.~\ref{fig:Robustness_learning_curves}.
These results were computed by averaging the squared error over 1000 trials for each curve.
The results of the AP\abbrev{AP}{Affine Projection} algorithm make the trade-off between convergence rate and steady-state MSE\abbrev{MSE}{Mean-Squared Error} evident.
Indeed, excluding the SM-AP\abbrev{SM-AP}{Set-Membership Affine Projection} with general CV\abbrev{CV}{Constraint Vector} (which is not an adequate choice for the CV)\abbrev{CV}{Constraint Vector}, the AP\abbrev{AP}{Affine Projection} algorithm could not achieve fast convergence
and low MSE\abbrev{MSE}{Mean-Squared Error} simultaneously, as the SM-AP\abbrev{SM-AP}{Set-Membership Affine Projection} algorithm did. In addition, observe that $\gammabf(k)=\nbf(k)$ leads to the best results in terms of convergence rate and steady-state MSE\abbrev{MSE}{Mean-Squared Error}, but the
performance of the SM-AP\abbrev{SM-AP}{Set-Membership Affine Projection} with SC-CV\abbrev{SC-CV}{Simple Choice CV} is quite close.
The average update rates of the SM-AP\abbrev{SM-AP}{Set-Membership Affine Projection} algorithm using the general CV\abbrev{CV}{Constraint Vector}, the SC-CV\abbrev{SC-CV}{Simple Choice CV}, and the noise CV\abbrev{CV}{Constraint Vector} are
35$\%$, 9.7$\%$, and 3.6$\%$ of the iterations, respectively, implying that the last two CVs\abbrev{CV}{Constraint Vector} also entail lower computational cost.
It is worth noticing that even when using the general CV\abbrev{CV}{Constraint Vector}, the SM-AP\abbrev{SM-AP}{Set-Membership Affine Projection} algorithm still converges although it presents poor performance,
as explained in Subsection~\ref{sub:divergence_sm_ap}.
\begin{figure}[t!]
\centering
\includegraphics[width=1\linewidth]{Figs/learning-robustness.pdf}
\caption{Learning curves for the AP and SM-AP algorithm using different constraint vectors. \label{fig:Robustness_learning_curves}}
\end{figure}
\section{Conclusion} \label{sec:conclusion-robustness}
In this chapter, we addressed the robustness (in the sense of $l_2$-stability) of the SM-NLMS\abbrev{SM-NLMS}{Set-Membership Normalized LMS} and the SM-AP\abbrev{SM-AP}{Set-Membership Affine Projection} algorithms.
In addition to the already known advantages of the SM-NLMS\abbrev{SM-NLMS}{Set-Membership Normalized LMS} algorithm over the NLMS\abbrev{NLMS}{Normalized LMS} algorithm, regarding accuracy and
computational cost, in this chapter we demonstrated that:
(i) the SM-NLMS\abbrev{SM-NLMS}{Set-Membership Normalized LMS} algorithm is robust regardless of the choice of its parameters and
(ii) the SM-NLMS\abbrev{SM-NLMS}{Set-Membership Normalized LMS} algorithm uses the input data very efficiently, i.e., it rarely produces a worse estimate $\wbf(k+1)$
during its update process.
For the case where the noise bound is known, we explained how to appropriately set the parameter $\gammabar$ so that
the SM-NLMS\abbrev{SM-NLMS}{Set-Membership Normalized LMS} algorithm {\it never generates a worse estimate}, i.e., the sequence $\{ \| \wbftilde(k) \|^2 \}$ (the squared Euclidean norm of the parameter deviation) becomes
monotonically decreasing.
For the case where the noise bound is unknown, we designed a time-varying parameter $\gammabar(k)$ that achieves simultaneously
fast convergence and efficient use of the input data.
Unlike the SM-NLMS\abbrev{SM-NLMS}{Set-Membership Normalized LMS} algorithm, we demonstrated that there exists a condition to guarantee the $l_2$-stability
of the SM-AP\abbrev{SM-AP}{Set-Membership Affine Projection} algorithm.
This robustness condition depends on a parameter known as the constraint vector (CV)\abbrev{CV}{Constraint Vector} $\gammabf(k)$.
We proved the existence of vectors $\gammabf(k)$ satisfying such a condition, but practical choices remain unknown.
In addition, it was shown that the SM-AP\abbrev{SM-AP}{Set-Membership Affine Projection} with an adequate CV\abbrev{CV}{Constraint Vector} uses the input data more efficiently than the AP\abbrev{AP}{Affine Projection} algorithm.
We also demonstrated that both the SM-AP\abbrev{SM-AP}{Set-Membership Affine Projection} and SM-NLMS\abbrev{SM-NLMS}{Set-Membership Normalized LMS} algorithms do not diverge, even when their parameters are not properly
selected, provided the noise is bounded.
Finally, numerical results that corroborate our study were presented.
\chapter{Trinion and Quaternion Set-Membership Affine Projection Algorithms}
The quaternions are a number system that extends the complex numbers; they were first introduced by William Rowan Hamilton in 1843~\cite{Hamilton_quaternion_PM1844}. Quaternions have several applications in multivariate signal processing problems, such as color image processing~\cite{Pei_crb_tsp2004,Guo_rbct_sp2011}, wind profile prediction~\cite{Took_qvstjf_RE2011,Barth_aqd_letter2014,Jiang_gqvgo_DSP2014}, and adaptive beamforming~\cite{Zhang_qvrab_SP2014}. A wide family of quaternion-based algorithms has been introduced in the adaptive filtering literature~\cite{Ujang_qvnaf_TNN2011,Took_qlaafhp_TSP2009,Took_sqlfcla_icassp2009,Neto_nrcwlqa_SSP2011}.
As a generalization of the complex domain, the quaternion domain provides a useful way to process 3- and 4-dimensional signals. Recently, several quaternion-based adaptive filtering algorithms have appeared; they benefit from the fact that the quaternion domain is a division algebra with a suitable data representation~\cite{Pei_quaternion_TIP1999,Bihan_quaternion_ICIP2003,Campa_quaternion_CDC2006}, and therefore allow a coupling between the components of 3- and 4-dimensional processes. Quaternion-valued algorithms also outperform their real-valued counterparts in applications such as wind prediction, since they account for the coupling of the wind measurements and can be developed to exploit the augmented quaternion statistics~\cite{Took_qvstjf_RE2011}. As a by-product, in comparison with real-valued algorithms in $\mathbb{R}^3$ and $\mathbb{R}^4$, they show enhanced stability and more degrees of freedom in the control of the adaptation mechanism.
However, when the signals involved in the adaptation process have only three dimensions, i.e., one real and two imaginary components, we can apply trinion-based algorithms. The trinion-valued least mean square (TLMS) algorithm was proposed in~\cite{Guo_tdwpp_DSP2015}, where its learning speed on a wind profile prediction data set was compared with that of the quaternion least mean square (QLMS) algorithm~\cite{Barth_aqd_letter2014}. The computational complexity of the TLMS\abbrev{TLMS}{Trinion-Valued LMS} algorithm is lower than that of the QLMS\abbrev{QLMS}{Quaternion-Valued LMS} algorithm, since a full quaternion-valued multiplication requires 16 real-valued multiplications and 12 real-valued additions, whereas multiplying two trinions requires only 9 and 6, respectively. The quaternion affine projection (QAP)\abbrev{QAP}{Quaternion-Valued Affine Projection} algorithm~\cite{Jahanchahil_cqvapa_SP2013} has been applied to predict noncircular real-world 4-D wind, but it can also be used for 3-D wind profile prediction.
Here we consider a powerful approach to decreasing the computational complexity of an adaptive filter: the set-membership filtering (SMF)\abbrev{SMF}{Set-Membership Filtering} approach~\cite{Diniz_adaptiveFiltering_book2013,Gollamudi_smf_letter1998}. For real numbers, the set-membership NLMS~\cite{Gollamudi_smf_letter1998,Diniz_adaptiveFiltering_book2013} \abbrev{NLMS}{Normalized LMS}and AP~\cite{Werner_sm_ap_letter2001,Diniz_adaptiveFiltering_book2013,Diniz_sm_bnlms_tsp2003} algorithms were reviewed in Chapter 2. This chapter generalizes these algorithms to operate with trinion and quaternion numbers. The trinion number system is not a mathematical field since some of its elements are not invertible; to address this drawback, we replace any non-invertible element with an invertible one. In the quaternion number system, each nonzero element has an inverse, but the product operation is not commutative. The proposed algorithms get around both drawbacks.
Finally, we apply the trinion-based algorithms to wind profile prediction and show that their performance is competitive with that of the quaternion-based algorithms, which require remarkably higher computational complexity than their trinion counterparts. We also study quaternion adaptive beamforming as an application of the quaternion-valued algorithms; there, the quaternion formulation reduces the number of sensors involved in the adaptation mechanism and, as a result, the computational complexity and the energy consumption of the system.
Part of the content of this chapter was published in~\cite{Hamed_smtrinion-tcssII2016}. This chapter introduces new data selective adaptive filtering algorithms for the trinion and quaternion number systems $\mathbb{T}$ and $\mathbb{H}$. The work advances the set-membership trinion and quaternion-valued normalized least mean square (SMTNLMS\abbrev{SMTNLMS}{Set-Membership Trinion-Valued NLMS} and SMQNLMS)\abbrev{SMQNLMS}{Set-Membership Quaternion-Valued NLMS} and the set-membership trinion and quaternion-valued affine projection (SMTAP\abbrev{SMTAP}{Set-Membership Trinion-Valued AP} and SMQAP)\abbrev{SMQAP}{Set-Membership Quaternion-Valued AP} algorithms. Also, as special cases, we obtain trinion and quaternion algorithms not employing the set-membership strategy.
This chapter is organized as follows. Short introductions to quaternions and trinions are provided in Sections~\ref{sec:quaternions} and~\ref{sec:Trinions}, respectively. Section~\ref{sec:set-membership-trinion} briefly reviews the concept of SMF\abbrev{SMF}{Set-Membership Filtering} but instead of real numbers we use trinions and quaternions. The new trinion based SMTAP\abbrev{SMTAP}{Set-Membership Trinion-Valued AP} algorithm is derived in Section~\ref{sec:smtap_smnlms}. Section~\ref{sec:smqap_smqnlms} introduces the quaternion based SMQAP\abbrev{SMQAP}{Set-Membership Quaternion-Valued AP} algorithm. Section~\ref{sec:adaptive-beamforming-tr} reviews the application of quaternion-valued adaptive algorithms to adaptive beamforming. Simulations are presented in Section~\ref{sec:simulations-trinion} and Section~\ref{sec:conclusion-trinion} contains the conclusions.
\section{Quaternions} \label{sec:quaternions}
The quaternion number system is a non-commutative extension of complex numbers, denoted by $\mathbb{H}$. A quaternion $q\in\mathbb{H}$ is defined as~\cite{Hamilton_quaternion_PM1844} \symbl{$q_a$}{The real component of a quaternion $q$} \symbl{$q_b$}{The first imaginary component of a quaternion $q$} \symbl{$q_c$}{The second imaginary component of a quaternion $q$} \symbl{$q_d$}{The third imaginary component of a quaternion $q$}
\symbl{$\imath$}{The first orthogonal unit imaginary axis vector in quaternion numbers} \symbl{$\jmath$}{The second orthogonal unit imaginary axis vector in quaternion numbers} \symbl{$\kappa$}{The third orthogonal unit imaginary axis vector in quaternion numbers}
\begin{align}
q=q_a+q_b\imath+q_c\jmath+q_d\kappa,
\end{align}
where $q_a$, $q_b$, $q_c$, and $q_d$ are in $\mathbb{R}$. $q_a$ is the real component, while $q_b$, $q_c$, and $q_d$ are the three imaginary components. The orthogonal unit imaginary axis vectors $\imath$, $\jmath$, and $\kappa$ obey the following rules
\begin{align}
\imath\jmath=\kappa\qquad \jmath\kappa=\imath\qquad \kappa\imath=\jmath,\nonumber\\
\imath^2=\jmath^2=\kappa^2=\imath\jmath\kappa=-1.
\end{align}
Note that due to non-commutativity of the quaternion multiplication, we have $\jmath\imath=-\kappa\neq \imath\jmath$ for example. The element 1 is the identity element of $\mathbb{H}$, i.e., multiplication by 1 does nothing. The conjugate of a quaternion, denoted by $q^*$, is defined as \symbl{$(\cdot)^*$}{Conjugation operator}
\begin{align}
q^*=q_a-q_b\imath-q_c\jmath-q_d\kappa,
\end{align}
and the norm $|q|$ is given by
\begin{align}
|q|=\sqrt{qq^*}=\sqrt{q_a^2+q_b^2+q_c^2+q_d^2}.
\end{align}
The inverse of $q$ is introduced as
\begin{align}
q^{-1}=\frac{q^*}{|q|^2}.
\end{align}
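As a sanity check of these rules, the following minimal Python sketch (an illustrative implementation, not the one used in our experiments) encodes the Hamilton product, the conjugate, the norm, and the inverse:
\begin{verbatim}
import numpy as np

class Quaternion:
    # q = a + b*i + c*j + d*k with real coefficients
    def __init__(self, a, b=0.0, c=0.0, d=0.0):
        self.a, self.b, self.c, self.d = float(a), float(b), float(c), float(d)

    def __mul__(self, q):
        # Hamilton product, following i*j = k, j*k = i, k*i = j, i^2 = -1
        return Quaternion(
            self.a*q.a - self.b*q.b - self.c*q.c - self.d*q.d,
            self.a*q.b + self.b*q.a + self.c*q.d - self.d*q.c,
            self.a*q.c - self.b*q.d + self.c*q.a + self.d*q.b,
            self.a*q.d + self.b*q.c - self.c*q.b + self.d*q.a)

    def conj(self):
        return Quaternion(self.a, -self.b, -self.c, -self.d)

    def norm(self):
        return np.sqrt(self.a**2 + self.b**2 + self.c**2 + self.d**2)

    def inv(self):
        # q^{-1} = q* / |q|^2
        n2 = self.norm()**2
        return Quaternion(self.a/n2, -self.b/n2, -self.c/n2, -self.d/n2)

i, j = Quaternion(0, 1, 0, 0), Quaternion(0, 0, 1, 0)
print((i*j).d, (j*i).d)   # 1.0 and -1.0: i*j = k but j*i = -k
\end{verbatim}
The last two lines exhibit the non-commutativity noted above.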
Observe that $q$ can be reformulated into the Cayley-Dickson~\cite{Zhang_qvrab_SP2014} form as
\begin{align}
q=\underbrace{(q_a+q_c\jmath)}_{z_1}+\imath\underbrace{(q_b+q_d\jmath)}_{z_2}, \label{eq:Cayley_Dickson}
\end{align}
where $z_1$ and $z_2$ are complex numbers.
The quaternion involutions are defined as follows~\cite{Ell_quaternion_involution_CMA2011,Mandic_quaternion_gradient_SPL2011}
\begin{align}
q^\imath=&-\imath q\imath=q_a+q_b\imath-q_c\jmath-q_d\kappa,\nonumber\\
q^\jmath=&-\jmath q\jmath=q_a-q_b\imath+q_c\jmath-q_d\kappa,\nonumber\\
q^\kappa=&-\kappa q\kappa=q_a-q_b\imath-q_c\jmath+q_d\kappa.
\end{align}
Therefore, we can express the four real components of a quaternion $q$ in terms of the involutions of $q$
\begin{align}
q_a=&\frac{1}{4}(q+q^\imath+q^\jmath+q^\kappa),\nonumber\\
q_b=&\frac{1}{4\imath}(q+q^\imath-q^\jmath-q^\kappa),\nonumber\\
q_c=&\frac{1}{4\jmath}(q-q^\imath+q^\jmath-q^\kappa),\nonumber\\
q_d=&\frac{1}{4\kappa}(q-q^\imath-q^\jmath+q^\kappa). \label{eq:convolution-relations-tr}
\end{align}
These expressions allow us to write any quadrivariate or quaternion-valued function $f(q)$ as~\cite{Ell_quaternion_involution_CMA2011}
\begin{align}
f(q)=f(q_a,q_b,q_c,q_d)=f(q,q^\imath,q^\jmath,q^\kappa). \label{eq:convolution_expression_function_tr}
\end{align}
We know that the quaternion ring and $\mathbb{R}^4$ are isomorphic. Hence, by the same argument as in the $\mathbb{C}\mathbb{R}$ calculus~\cite{Brandwood_gradient_FCRSP1983}, to establish the duality between the derivatives of $f(q)\in\mathbb{H}$ and the derivatives of the corresponding quadrivariate real function $g(q_a,q_b,q_c,q_d)\in\mathbb{R}^4$, we begin with~\cite{Mandic_quaternion_gradient_SPL2011}
\begin{align}
f(q)=f_a(q_a,q_b,q_c,q_d)+&f_b(q_a,q_b,q_c,q_d)\imath+f_c(q_a,q_b,q_c,q_d)\jmath\nonumber\\+&f_d(q_a,q_b,q_c,q_d)\kappa=g(q_a,q_b,q_c,q_d).
\end{align}
The real variable function $g(q_a,q_b,q_c,q_d)$ has the following differential
\begin{align}
dg=&\frac{\partial g}{\partial q_a}dq_a+\frac{\partial g}{\partial q_b}dq_b+\frac{\partial g}{\partial q_c}dq_c+\frac{\partial g}{\partial q_d}dq_d\nonumber\\
=&\frac{\partial f(q)}{\partial q_a}dq_a+\frac{\partial f(q)}{\partial q_b}dq_b\imath+\frac{\partial f(q)}{\partial q_c}dq_c\jmath+\frac{\partial f(q)}{\partial q_d}dq_d\kappa. \label{eq:dg_tr}
\end{align}
By using the relations in~\eqref{eq:convolution-relations-tr}, the derivatives of the components of a quaternion $q$ are given by
\begin{align}
dq_a&=\frac{1}{4}(dq+dq^\imath+dq^\jmath+dq^\kappa),\nonumber\\
dq_b&=\frac{-\imath}{4}(dq+dq^\imath-dq^\jmath-dq^\kappa),\nonumber\\
dq_c&=\frac{-\jmath}{4}(dq-dq^\imath+dq^\jmath-dq^\kappa),\nonumber\\
dq_d&=\frac{-\kappa}{4}(dq-dq^\imath-dq^\jmath+dq^\kappa). \label{eq:dq_components-tr}
\end{align}
Also, using~\eqref{eq:convolution_expression_function_tr} we obtain
\begin{align}
df(q)=&\frac{\partial f(q,q^\imath,q^\jmath,q^\kappa)}{\partial q}dq+\frac{\partial f(q,q^\imath,q^\jmath,q^\kappa)}{\partial q^\imath}dq^\imath\nonumber\\
&+\frac{\partial f(q,q^\imath,q^\jmath,q^\kappa)}{\partial q^\jmath}dq^\jmath+\frac{\partial f(q,q^\imath,q^\jmath,q^\kappa)}{\partial q^\kappa}dq^\kappa. \label{eq:df-tr}
\end{align}
Therefore, by replacing the components of $dq$ from~\eqref{eq:dq_components-tr} in Equation~\eqref{eq:dg_tr}, and solving for the coefficients of $dq$, $dq^\imath$, $dq^\jmath$, $dq^\kappa$ from~\eqref{eq:dg_tr} and~\eqref{eq:df-tr}, we will obtain the $\mathbb{H}\mathbb{R}$-derivatives identities as follows
\begin{align}
\left[\begin{array}{c}\frac{\partial f(q,q^\imath,q^\jmath,q^\kappa)}{\partial q}\\\frac{\partial f(q,q^\imath,q^\jmath,q^\kappa)}{\partial q^\imath}\\\frac{\partial f(q,q^\imath,q^\jmath,q^\kappa)}{\partial q^\jmath}\\\frac{\partial f(q,q^\imath,q^\jmath,q^\kappa)}{\partial q^\kappa}\end{array}\right]=\frac{1}{4}\left[\begin{array}{cccc}1&-\imath&-\jmath&-\kappa\\1&-\imath&\jmath&\kappa\\1&\imath&-\jmath&\kappa\\1&\imath&\jmath&-\kappa\end{array}\right]\left[\begin{array}{c}\frac{\partial f}{\partial q_a}\\\frac{\partial f}{\partial q_b}\\\frac{\partial f}{\partial q_c}\\\frac{\partial f}{\partial q_d}\end{array}\right].
\end{align}
Our interest is in the derivative $\frac{\partial f(q,q^\imath,q^\jmath,q^\kappa)}{\partial q}$, thus the gradient of $f(q)$ with respect to $q$ is given by~\cite{Mandic_quaternion_gradient_SPL2011}
\begin{align}
\nabla_qf=\frac{1}{4}(\frac{\partial f}{\partial q_a}-\frac{\partial f}{\partial q_b}\imath-\frac{\partial f}{\partial q_c}\jmath-\frac{\partial f}{\partial q_d}\kappa)=\frac{1}{4}(\nabla_{q_a}f-\nabla_{q_b}f\imath-\nabla_{q_c}f\jmath-\nabla_{q_d}f\kappa).
\end{align}
The real-valued components $q_a$, $q_b$, $q_c$, $q_d$ of a quaternion $q$ can be expressed in terms of $q^*$, $q^{\imath^*}$, $q^{\jmath^*}$, $q^{\kappa^*}$ as follows~\cite{Mandic_quaternion_gradient_SPL2011}
\begin{align}
q_a&=\frac{1}{4}(q^*+q^{\imath^*}+q^{\jmath^*}+q^{\kappa^*}),\nonumber\\
q_b&=\frac{1}{4\imath}(-q^*-q^{\imath^*}+q^{\jmath^*}+q^{\kappa^*}),\nonumber\\
q_c&=\frac{1}{4\jmath}(-q^*+q^{\imath^*}-q^{\jmath^*}+q^{\kappa^*}),\nonumber\\
q_d&=\frac{1}{4\kappa}(-q^*+q^{\imath^*}+q^{\jmath^*}-q^{\kappa^*}).
\end{align}
Then the derivative of the function $f(q)=f(q^*,q^{\imath^*},q^{\jmath^*},q^{\kappa^*})$ can be expressed as
\begin{align}
df(q)=&\frac{\partial f(q^*,q^{\imath^*},q^{\jmath^*},q^{\kappa^*})}{\partial q^*}dq^*+\frac{\partial f(q^*,q^{\imath^*},q^{\jmath^*},q^{\kappa^*})}{\partial q^{\imath^*}}dq^{\imath^*}\nonumber\\
&+\frac{\partial f(q^*,q^{\imath^*},q^{\jmath^*},q^{\kappa^*})}{\partial q^{\jmath^*}}dq^{\jmath^*}+\frac{\partial f(q^*,q^{\imath^*},q^{\jmath^*},q^{\kappa^*})}{\partial q^{\kappa^*}}dq^{\kappa^*}.
\end{align}
Also, the derivative of the quadrivariate $g(q_a,q_b,q_c,q_d)$ can be written, for some quaternion-valued coefficients $A$, $B$, $C$, and $D$, as
\begin{align}
dg(q_a,q_b,q_c,q_d)=Adq^*+Bdq^{\imath^*}+Cdq^{\jmath^*}+Ddq^{\kappa^*}.
\end{align}
By the same argument above, if we solve for the coefficients of $dq^*$, $dq^{\imath^*}$, $dq^{\jmath^*}$, $dq^{\kappa^*}$ then we will obtain the $\mathbb{H}\mathbb{R}^*$-derivatives identities,
\begin{align}
\left[\begin{array}{c}\frac{\partial f(q^*,q^{\imath^*},q^{\jmath^*},q^{\kappa^*})}{\partial q^*}\\\frac{\partial f(q^*,q^{\imath^*},q^{\jmath^*},q^{\kappa^*})}{\partial q^{\imath^*}}\\\frac{\partial f(q^*,q^{\imath^*},q^{\jmath^*},q^{\kappa^*})}{\partial q^{\jmath^*}}\\\frac{\partial f(q^*,q^{\imath^*},q^{\jmath^*},q^{\kappa^*})}{\partial q^{\kappa^*}}\end{array}\right]=\frac{1}{4}\left[\begin{array}{cccc}1&\imath&\jmath&\kappa\\1&\imath&-\jmath&-\kappa\\1&-\imath&\jmath&-\kappa\\1&-\imath&-\jmath&\kappa\end{array}\right]\left[\begin{array}{c}\frac{\partial f}{\partial q_a}\\\frac{\partial f}{\partial q_b}\\\frac{\partial f}{\partial q_c}\\\frac{\partial f}{\partial q_d}\end{array}\right].
\end{align}
The derivative $\frac{\partial f(q^*,q^{\imath^*},q^{\jmath^*},q^{\kappa^*})}{\partial q^*}$ is of particular interest, thus the gradient of $f(q)$ with respect to $q^*$ is given by~\cite{Mandic_quaternion_gradient_SPL2011}
\begin{align}
\nabla_{q^*}f&=\frac{1}{4}(\frac{\partial f}{\partial q_a}+\frac{\partial f}{\partial q_b}\imath+\frac{\partial f}{\partial q_c}\jmath+\frac{\partial f}{\partial q_d}\kappa)=\frac{1}{4}(\nabla_{q_a}f+\nabla_{q_b}f\imath+\nabla_{q_c}f\jmath+\nabla_{q_d}f\kappa).
\end{align}
\section{Trinions} \label{sec:Trinions}
As a group, the trinion number system $\mathbb{T}$ is isomorphic to $\mathbb{R}^3$. A number $v$ in $\mathbb{T}$ is composed of one real part, $v_a$, and two imaginary parts, $v_b$ and $v_c$, \symbl{$v_a$}{The real component of a trinion $v$} \symbl{$v_b$}{The first imaginary component of a trinion $v$} \symbl{$v_c$}{The second imaginary component of a trinion $v$} \symbl{$\bar{\imath}$}{The first orthogonal unit imaginary axis vector in trinion numbers} \symbl{$\bar{\jmath}$}{The second orthogonal unit imaginary axis vector in trinion numbers}
\begin{align}
v=v_a+v_b\bar\imath+v_c\bar\jmath.
\end{align}
The number system $\mathbb{T}$ has three operations: addition, scalar multiplication, and trinion multiplication. The sum of two elements of $\mathbb{T}$ is defined to be their sum as elements of $\mathbb{R}^3$. Similarly, the product of an element of $\mathbb{T}$ by a real number is defined to be the same as the product by a scalar in $\mathbb{R}^3$. To make the basis elements 1, $\bar\imath$, and $\bar\jmath$ generate a commutative algebra, the following rules apply \cite{Assefa_tftci_SP2011}
\begin{align}
\bar\imath^2=\bar\jmath,~\bar\imath\bar\jmath=\bar\jmath\bar\imath=-1,~\bar\jmath^2=-\bar\imath.
\end{align}
With these rules, the trinions form a commutative ring, i.e., $vw=wv$ for $v,w\in\mathbb{T}$. The basis element 1 is the identity element of $\mathbb{T}$, meaning that multiplication by 1 leaves every trinion unchanged. The conjugate of $v$ is given by~\cite{Guo_tdwpp_DSP2015}
\begin{align}
v^*=v_a-v_b\bar\jmath-v_c\bar\imath,
\end{align}
and the norm by~\cite{Guo_tdwpp_DSP2015} \symbl{$\Re(\cdot)$}{The real part of $(\cdot)$}
\begin{align}
|v|=\sqrt{\Re(vv^*)}=\sqrt{v_a^2+v_b^2+v_c^2}.
\end{align}
The inverse of $v$, if it exists, is the trinion $w=(w_a+w_b\bar\imath+w_c\bar\jmath)\in\mathbb{T}$ such that $vw=wv=1$. To solve this equation, we stack the components as $v=[v_a~v_b~v_c]^T$ and $w=[w_a~w_b~w_c]^T$ and obtain
\begin{align}
\left\{\begin{array}{l}v_aw_a-v_cw_b-v_bw_c=1,\\
v_bw_a+v_aw_b-v_cw_c=0,\\
v_cw_a+v_bw_b+v_aw_c=0,\end{array}\right.
\end{align}
or in the matrix form $\Abf w=[1~0~0]^T$ where $\Abf$ is given by
\begin{align}
\Abf=\left[\begin{array}{ccc}v_a&-v_c&-v_b\\ v_b&v_a&-v_c\\ v_c&v_b&v_a\end{array}\right],
\end{align}
thus $w=\Abf^{-1}[1~0~0]^T$. When the determinant of $\Abf$ is zero, the inverse of $v$ does not exist. To get around this problem, whenever the determinant of $\Abf$ is zero we replace $\Abf$ by $\delta \Ibf$, where $\delta$ is a small positive constant and $\Ibf$ is the $3\times3$ identity matrix; this regularization avoids numerical problems in the matrix inversion and, in particular, division by zero in the trinion-valued algorithms. We thus define $v^{-1}=\Abf^{-1}[1~0~0]^T$.
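A minimal NumPy sketch of this construction is given below; \texttt{tri\_mul} encodes the multiplication rules and \texttt{tri\_inv} solves $\Abf w=[1~0~0]^T$ with the $\delta\Ibf$ fallback (the tolerance used to detect a zero determinant is an implementation assumption).
\begin{verbatim}
import numpy as np

def tri_mul(v, w):
    # Trinion product under i^2 = j, i*j = j*i = -1, j^2 = -i
    va, vb, vc = v
    wa, wb, wc = w
    return np.array([va*wa - vb*wc - vc*wb,
                     va*wb + vb*wa - vc*wc,
                     va*wc + vb*wb + vc*wa])

def tri_inv(v, delta=1e-12):
    # Solve A w = [1 0 0]^T; fall back to A = delta*I when det(A) = 0
    va, vb, vc = v
    A = np.array([[va, -vc, -vb],
                  [vb,  va, -vc],
                  [vc,  vb,  va]])
    if np.isclose(np.linalg.det(A), 0.0):
        A = delta * np.eye(3)
    return np.linalg.solve(A, np.array([1.0, 0.0, 0.0]))
\end{verbatim}
For an invertible $v$, one can check that \texttt{tri\_mul(v, tri\_inv(v))} returns approximately $[1~0~0]^T$.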
In the field of complex numbers, a variable $z$ and its conjugate $z^*$ can be considered as two independent variables, so that the complex-valued gradient can be defined \cite{Bos_cgh_PVISP1994}. As far as we know, the trinion involutions, $v^{\bar{\imath}}$ and $v^{\bar{\jmath}}$, are not available in general. In this chapter, we use the following formulas for the gradients of a function $f(v)$ with respect to the trinion-valued variable $v$ and its conjugate \cite{Guo_tdwpp_DSP2015}
\begin{equation}
\begin{aligned}
\nabla_vf&=\frac{1}{3}(\nabla_{v_a}f-\nabla_{v_b}f\bar\jmath-\nabla_{v_c}f\bar\imath),\\
\nabla_{v^*}f&=\frac{1}{3}(\nabla_{v_a}f+\nabla_{v_b}f\bar\imath+\nabla_{v_c}f\bar\jmath),
\end{aligned}
\end{equation}
where $v=v_a+v_b\bar\imath+v_c\bar\jmath$.
\section{Set-Membership Filtering (SMF) in $\mathbb{T}$ and $\mathbb{H}$} \label{sec:set-membership-trinion}
The goal of SMF\abbrev{SMF}{Set-Membership Filtering} is to design $\wbf$ such that the magnitude of the estimation error is upper bounded by a predetermined parameter $\gammabar$, whose value can change with the specific application. If the value of $\gammabar$ is suitably selected, there are many valid estimates for $\wbf$. Suppose that ${\cal S}$ denotes the set of all possible input-desired data pairs $(\xbf,d)$ of interest and define $\Theta$ as the set of all vectors $\wbf$ whose estimation errors are upper bounded in magnitude by $\gammabar$ whenever $(\xbf,d)\in{\cal S}$. The set $\Theta$, called the feasibility set, is given by
\begin{align}
\Theta\triangleq\bigcap_{(\xbf,d)\in{\cal S}}\{\wbf\in\mathbb{F}^{N+1}:|d-\wbf^H\xbf|\leq\gammabar\},
\end{align}
where $\mathbb{F}$ is $\mathbb{T}$ or $\mathbb{H}$.
Let us define the constraint set ${\cal H}(k)$ consisting of all vectors $\wbf$ whose estimation errors at time instant $k$ are upper bounded in magnitude by $\gammabar$,
\begin{align}
{\cal H}(k)\triangleq\{\wbf\in\mathbb{F}^{N+1}:|d(k)-\wbf^H\xbf(k)|\leq\gammabar\}.
\end{align}
The membership set $\psi(k)$ defined as
\begin{align}
\psi(k)\triangleq\bigcap_{i=0}^k{\cal H}(i)\label{eq:set-membership_set-trinion}
\end{align}
will include $\Theta$ and will coincide with $\Theta$ if all data pairs in ${\cal S}$ are traversed up to time instant $k$. Owing to the difficulty of computing $\psi(k)$ exactly, adaptive approaches are required~\cite{Gollamudi_smf_letter1998}. The easiest route is to compute a point estimate using, for example, the information provided by the current constraint set ${\cal H}(k)$, as in the set-membership NLMS\abbrev{NLMS}{Normalized LMS} algorithm~\cite{Gollamudi_smf_letter1998}, or by several previous constraint sets, as in the set-membership affine projection algorithm~\cite{Werner_sm_ap_letter2001}.
\section{SMTAP Algorithm} \label{sec:smtap_smnlms}
In this section, we propose the SMTAP\abbrev{SMTAP}{Set-Membership Trinion-Valued AP} algorithm, the trinion-valued counterpart of the real-valued SM-AP\abbrev{SM-AP}{Set-Membership Affine Projection} algorithm. We then derive the update equations of the simpler algorithms related to the normalized LMS \abbrev{LMS}{Least-Mean-Square}algorithm.
The membership set $\psi(k)$ defined in (\ref{eq:set-membership_set-trinion}) encourages the use of more constraint sets in the update. Therefore, we elaborate an algorithm whose updates belong to a set composed of $L+1$ constraint sets.
For this purpose, we express $\psi(k)$ as
\begin{align}
\psi(k)=\bigcap_{i=0}^{k-L-1}{\cal H}(i)\bigcap_{j=k-L}^k{\cal H}(j)=\psi^{k-L-1}(k)\bigcap\psi^{L+1}(k), \label{eq:divide_intersection-trinion}
\end{align}
where $\psi^{L+1}(k)$ indicates the intersection of the $L+1$ last constraint sets, and $\psi^{k-L-1}(k)$ represents the intersection of the first $k-L$ constraint sets. Our goal is to formulate an algorithm whose coefficient update belongs to the last $L+1$ constraint sets, i.e., $\wbf(k+1)\in\psi^{L+1}(k)$. \symbl{$\psi^{L+1}(k)$}{The intersection of the $L+1$ last constraint sets}
Assume that ${\cal S}(k-i)$ denotes the set of all vectors $\wbf$ such that $d(k-i)-\wbf^H\xbf(k-i)=\gamma_i(k)$, for $i=0,\cdots,L$. Any choice of $\gamma_i(k)$ satisfying the bound constraint is valid; that is, if all $\gamma_i(k)$ are selected such that $|\gamma_i(k)|\leq\gammabar$, then ${\cal S}(k-i)\subset{\cal H}(k-i)$, for $i=0,\cdots,L$.
The objective function which we ought to minimize can now be stated. A coefficient update is implemented whenever $\wbf(k)\not\in\psi^{L+1}(k)$ as follows
\begin{align}
&\min\frac{1}{2}\|\wbf(k+1)-\wbf(k)\|^2\nonumber\\
&\text{subject to:}\nonumber\\
&\dbf(k)-(\wbf^H(k+1)\Xbf(k))^T=\gammabf(k),\label{eq:constraint-trinion}
\end{align}
where
\begin{tabular}{ll}
$\dbf(k)\in\mathbb{T}^{(L+1)\times1}$& contains the desired output from the $L+1$ last \\&time instants;\\
$\gammabf(k)\in\mathbb{T}^{(L+1)\times1}$&specifies the point in $\psi^{L+1}(k)$;\\
$\Xbf(k)\in\mathbb{T}^{(N+1)\times(L+1)}$&contains the corresponding input vectors, i.e.,
\end{tabular}
\begin{equation}
\begin{aligned}
\dbf(k)&=[d(k)~d(k-1)~\cdots~d(k-L)]^T,\\
\gammabf(k)&=[\gamma_0(k)~\gamma_1(k)~\cdots~\gamma_L(k)]^T,\\
\Xbf(k)&=[\xbf(k)~\xbf(k-1)~\cdots~\xbf(k-L)], \label{eq:pack-trinion}
\end{aligned}
\end{equation}
with $\xbf(k)$ being the input-signal vector
\begin{align}
\xbf(k)=[x(k)~x(k-1)~\cdots~x(k-N)]^T. \label{eq:x(k)-trinion}
\end{align}
If we use the method of Lagrange multipliers to transform a constrained minimization into an unconstrained one, then we have to minimize
\begin{align}
F[\wbf(k+1)]=&\frac{1}{2}\|\wbf(k+1)-\wbf(k)\|^2\nonumber\\
&+\Re\{\lambdabf^T(k)[\dbf(k)-(\wbf^H(k+1)\Xbf(k))^T-\gammabf(k)]\}, \label{eq:objective-trinion}
\end{align}
where $\lambdabf(k)\in\mathbb{T}^{(L+1)\times1}$ is a vector of Lagrange multipliers. To find the minimum solution, we must calculate the following gradient
\begin{align}
\nabla_{\wbf^*(k+1)}F[\wbf(k+1)]=&\frac{1}{3}\Big[\nabla_{\wbf_a(k+1)}F[\wbf(k+1)]+\nabla_{\wbf_b(k+1)}F[\wbf(k+1)]\bar\imath\nonumber\\
&+\nabla_{\wbf_c(k+1)}F[\wbf(k+1)]\bar\jmath\Big].\label{eq:gradient-trinion}
\end{align}
In order to find the above gradient, we express the cost function $F[\wbf(k+1)]$ in terms of real-valued variables. For the first term we have
\begin{align}
\|\wbf(k+1)-\wbf(k)\|^2=&\|\wbf_a(k+1)-\wbf_a(k)\|^2+\|\wbf_b(k+1)-\wbf_b(k)\|^2\nonumber\\
&+\|\wbf_c(k+1)-\wbf_c(k)\|^2.\label{eq:first_part_cost-trinion}
\end{align}
We drop the time index $k$ from $\lambdabf$, $\dbf$, $\Xbf$, and $\gammabf$ for the sake of compact notation. Writing the second term in (\ref{eq:objective-trinion}) as a real-valued expression, we perform the following calculations,
\begin{align}
\Re&\{\lambdabf^T[\dbf-\Xbf^T\wbf^*(k+1)-\gammabf]\}
=\Re\{(\lambdabf_a^T+\lambdabf_b^T\bar\imath+\lambdabf_c^T\bar\jmath)[(\dbf_a+\dbf_b\bar\imath+\dbf_c\bar\jmath)\nonumber\\
&-(\Xbf_a^T+\Xbf_b^T\bar\imath+\Xbf_c^T\bar\jmath)(\wbf_a(k+1)-\wbf_b(k+1)\bar\jmath-\wbf_c(k+1)\bar\imath)-(\gammabf_a+\gammabf_b\bar\imath+\gammabf_c\bar\jmath)]\}\nonumber\\
=&\Re\{(\lambdabf_a^T+\lambdabf_b^T\bar\imath+\lambdabf_c^T\bar\jmath)[(\dbf_a-\Xbf_a^T\wbf_a(k+1)-\Xbf_b^T\wbf_b(k+1)-\Xbf_c^T\wbf_c(k+1)-\gammabf_a)\nonumber\\
&+(\dbf_b-\Xbf_b^T\wbf_a(k+1)+\Xbf_a^T\wbf_c(k+1)-\Xbf_c^T\wbf_b(k+1)-\gammabf_b)\bar\imath\nonumber\\
&+(\dbf_c-\Xbf_c^T\wbf_a(k+1)+\Xbf_a^T\wbf_b(k+1)+\Xbf_b^T\wbf_c(k+1)-\gammabf_c)\bar\jmath]\}\nonumber\\
=&\lambdabf_a^T(\dbf_a-\Xbf_a^T\wbf_a(k+1)-\Xbf_b^T\wbf_b(k+1)-\Xbf_c^T\wbf_c(k+1)-\gammabf_a)\nonumber\\
&-\lambdabf_b^T(\dbf_c-\Xbf_c^T\wbf_a(k+1)+\Xbf_a^T\wbf_b(k+1)+\Xbf_b^T\wbf_c(k+1)-\gammabf_c)\nonumber\\
&-\lambdabf_c^T(\dbf_b-\Xbf_b^T\wbf_a(k+1)+\Xbf_a^T\wbf_c(k+1)-\Xbf_c^T\wbf_b(k+1)-\gammabf_b).\label{eq:second_part_cost-trinion}
\end{align}
Therefore, by (\ref{eq:objective-trinion}), (\ref{eq:first_part_cost-trinion}), and (\ref{eq:second_part_cost-trinion}) we obtain
\begin{align}
F[\wbf(k+1)]=\frac{1}{2}\text{Eq.}\eqref{eq:first_part_cost-trinion}+\text{Eq.}\eqref{eq:second_part_cost-trinion}.
\end{align}
Thus, the three component-wise gradients can be attained as
\begin{align}
\nabla_{\wbf_a(k+1)}F[\wbf(k+1)]=&(\wbf_a(k+1)-\wbf_a(k))-\lambdabf_a^T\Xbf_a^T+\lambdabf_b^T\Xbf_c^T+\lambdabf_c^T\Xbf_b^T,\label{eq:component_gradient1-trinion}\\
\nabla_{\wbf_b(k+1)}F[\wbf(k+1)]=&(\wbf_b(k+1)-\wbf_b(k))-\lambdabf_a^T\Xbf_b^T-\lambdabf_b^T\Xbf_a^T+\lambdabf_c^T\Xbf_c^T,\label{eq:component_gradient2-trinion}\\
\nabla_{\wbf_c(k+1)}F[\wbf(k+1)]=&(\wbf_c(k+1)-\wbf_c(k))-\lambdabf_a^T\Xbf_c^T-\lambdabf_b^T\Xbf_b^T-\lambdabf_c^T\Xbf_a^T.\label{eq:component_gradient3-trinion}
\end{align}
On the other hand, we have
\begin{align}
\Xbf\lambdabf=&(\Xbf_a+\Xbf_b\bar\imath+\Xbf_c\bar\jmath)(\lambdabf_a+\lambdabf_b\bar\imath+\lambdabf_c\bar\jmath)\nonumber\\
=&(\Xbf_a\lambdabf_a-\Xbf_b\lambdabf_c-\Xbf_c\lambdabf_b)+(\Xbf_a\lambdabf_b+\Xbf_b\lambdabf_a-\Xbf_c\lambdabf_c)\bar\imath\nonumber\\
&+(\Xbf_a\lambdabf_c+\Xbf_c\lambdabf_a+\Xbf_b\lambdabf_b)\bar\jmath.\label{eq:xlambda-trinion}
\end{align}
Overall, by employing Equations (\ref{eq:gradient-trinion}) and (\ref{eq:component_gradient1-trinion})-(\ref{eq:xlambda-trinion}), we get,
\begin{align}
\nabla_{\wbf^*(k+1)}F[\wbf(k+1)]=&\frac{1}{3}\{[(\wbf_a(k+1)-\wbf_a(k))-(\Xbf(k)\lambdabf(k))_a]\nonumber\\
&+[(\wbf_b(k+1)-\wbf_b(k))-(\Xbf(k)\lambdabf(k))_b]\bar\imath\nonumber\\
&+[(\wbf_c(k+1)-\wbf_c(k))-(\Xbf(k)\lambdabf(k))_c]\bar\jmath\}\nonumber\\
=&\frac{1}{3}[\wbf(k+1)-\wbf(k)-\Xbf(k)\lambdabf(k)].
\end{align}
After setting the above equation equal to zero, we obtain
\begin{align}
\wbf(k+1)=\wbf(k)+\Xbf(k)\lambdabf(k).\label{(eq:update_with_lambda-trinion)}
\end{align}
If we substitute (\ref{(eq:update_with_lambda-trinion)}) in the constraint relation (\ref{eq:constraint-trinion}) the following expression results,
\begin{align}
\Xbf^T(k)\Xbf^*(k)\lambdabf^*(k)=\dbf(k)-\Xbf^T(k)\wbf^*(k)-\gammabf(k)=(\ebf(k)-\gammabf(k)).
\end{align}
From the above equation we get $\lambdabf(k)$ as
\begin{align}
\lambdabf(k)=(\Xbf^H(k)\Xbf(k))^{-1}(\ebf(k)-\gammabf(k))^*,\label{eq:lambda-trinion}
\end{align}
where
\begin{align}
\ebf(k)&=[e(k)~\epsilon(k-1)~\cdots~\epsilon(k-L)]^T, \label{eq:e_ap-trinion}
\end{align}
with $e(k)=d(k)-\wbf^H(k)\xbf(k)$, and $\epsilon(k-i)=d(k-i)-\wbf^H(k)\xbf(k-i)$ for $i=1,\cdots,L$. We can now conclude the SMTAP\abbrev{SMTAP}{Set-Membership Trinion-Valued AP} algorithm by starting from (\ref{(eq:update_with_lambda-trinion)}) with $\lambdabf(k)$ being given by (\ref{eq:lambda-trinion}), i.e.,
\begin{align}
\wbf(k+1)=\left\{\begin{array}{ll}\wbf(k)+\pbf_{\rm ap}(k)&\text{if}~|e(k)|>\gammabar,\\\wbf(k)&\text{otherwise},\end{array}\right.\label{eq:update_SMTAP}
\end{align}
where
\begin{align}
\pbf_{\rm ap}(k)&=\Xbf(k)(\Xbf^H(k)\Xbf(k))^{-1}(\ebf(k)-\gammabf(k))^*\label{eq:P(k)-trinion}.
\end{align}
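For illustration, the recursion (\ref{eq:update_SMTAP})--(\ref{eq:P(k)-trinion}) can be sketched as follows over complex-valued data, which obey the same algebraic template; the trinion version replaces products and conjugations by the ones defined above, and the small regularization \texttt{reg} is an implementation safeguard, not part of (\ref{eq:P(k)-trinion}).
\begin{verbatim}
import numpy as np

def sm_ap_update(w, X, d, gamma_vec, gamma_bar, reg=1e-8):
    # w: (N+1,) coefficients, X: (N+1, L+1) inputs, d: (L+1,) desired outputs
    e = d - w.conj() @ X          # e(k), eps(k-1), ..., eps(k-L)
    if abs(e[0]) <= gamma_bar:    # w(k) already in H(k): no update
        return w
    G = X.conj().T @ X + reg * np.eye(X.shape[1])
    return w + X @ np.linalg.solve(G, (e - gamma_vec).conj())
\end{verbatim}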
{\it Remark 1:} In order to check if an update $\wbf(k+1)$ is required, we only have to test if $\wbf(k)\not\in{\cal H}(k)$ since in the previous updates $\wbf(k)\in{\cal H}(k-i+1)$ is guaranteed for $i=2,\cdots,L+1$.
{\it Remark 2:} For the initial time instants $k<L+1$, i.e., during initialization, only the knowledge of ${\cal H}(i)$ for $i=0,1,\cdots,k$ is available. As a consequence, if an update is required for $k<L+1$, the algorithm is implemented with the available $k+1$ accessible constraint sets.
{\it Remark 3:} By adopting the bound $\gammabar=0$, the algorithm reduces to the trinion affine projection (TAP)\abbrev{TAP}{Trinion-Valued Affine Projection} algorithm with unity step size, which is the generalization of the conventional real-valued AP\abbrev{AP}{Affine Projection} algorithm to $\mathbb{T}$. Therefore, the TAP\abbrev{TAP}{Trinion-Valued Affine Projection} algorithm can be described as
\begin{align}
\wbf(k+1)=\wbf(k)+\mu \pbf'_{\rm ap}(k),\label{eq:TAP}
\end{align}
where $\mu$ is the convergence factor and
\begin{align}
\pbf'_{\rm ap}(k)=\Xbf(k)(\Xbf^H(k)\Xbf(k))^{-1}\ebf^*(k).
\end{align}
Note that we can utilize (\ref{eq:update_SMTAP}) to derive the update equation of the SMTNLMS\abbrev{SMTNLMS}{Set-Membership Trinion-Valued NLMS} algorithm. In this case we avoid data reuse in (\ref{eq:update_SMTAP}) by setting $L=0$, so that the update equation becomes
\begin{align}
\wbf(k+1)=\left\{\begin{array}{ll}\wbf(k)+\pbf(k)&\text{if}~|e(k)|>\gammabar,\\\wbf(k)&\text{otherwise},\end{array}\right.\label{eq:smtnlms_without_norm}
\end{align}
where
\begin{align}
\pbf(k)&=\xbf(k)(\xbf^H(k)\xbf(k))^{-1}(e(k)-\gamma(k))^*,\\
e(k)&=d(k)-\wbf^H(k)\xbf(k). \label{eq:e-trinion}
\end{align}
Choosing $\gamma(k)=\frac{\gammabar e(k)}{|e(k)|}$, from (\ref{eq:smtnlms_without_norm}) we obtain the SMTNLMS\abbrev{SMTNLMS}{Set-Membership Trinion-Valued NLMS} update equation as
\begin{align}
\wbf(k+1)=\wbf(k)+\mu(k)\xbf(k)(\xbf^H(k)\xbf(k))^{-1}e^*(k),\label{eq:smtnlms}
\end{align}
where
\begin{align}
\mu(k)&=\left\{\begin{array}{ll}1-\frac{\gammabar}{|e(k)|}&\text{if}~|e(k)|>\gammabar,\\0&\text{otherwise}.\end{array}\right. \label{eq:mu-trinion}
\end{align}
Recall that the normalized LMS \abbrev{LMS}{Least-Mean-Square}algorithm can be derived as a particular case of the AP\abbrev{AP}{Affine Projection} algorithm for $L=0$.
{\it Remark 4:} By choosing the bound $\gammabar=0$ in (\ref{eq:smtnlms}), the algorithm reduces to the TNLMS\abbrev{TNLMS}{Trinion-Valued Normalized LMS} algorithm with unity step size, which is the generalization of the popular real-valued NLMS\abbrev{NLMS}{Normalized LMS} algorithm to $\mathbb{T}$. As a result, the TNLMS\abbrev{TNLMS}{Trinion-Valued Normalized LMS} algorithm can be described as
\begin{align}
\wbf(k+1)=\wbf(k)+\mu\xbf(k)(\xbf^H(k)\xbf(k))^{-1}e^*(k),
\end{align}
where $\mu$ is the convergence factor.
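A compact sketch of the SMTNLMS recursion (\ref{eq:smtnlms})--(\ref{eq:mu-trinion}), again written over complex numbers as a stand-in for $\mathbb{T}$:
\begin{verbatim}
import numpy as np

def sm_nlms_update(w, x, d, gamma_bar):
    # One iteration of the set-membership NLMS template
    e = d - np.vdot(w, x)                      # e(k) = d(k) - w^H(k) x(k)
    mu = 1.0 - gamma_bar / abs(e) if abs(e) > gamma_bar else 0.0
    return w + mu * x * np.conj(e) / np.real(np.vdot(x, x))
\end{verbatim}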
\section{SMQAP Algorithm} \label{sec:smqap_smqnlms}
This section outlines the derivation of the SMQAP\abbrev{SMQAP}{Set-Membership Quaternion-Valued AP} algorithm. Then we obtain an update equation for the SMQNLMS\abbrev{SMQNLMS}{Set-Membership Quaternion-Valued NLMS} algorithm that follows the same steps as the derivation of the SMTNLMS\abbrev{SMTNLMS}{Set-Membership Trinion-Valued NLMS} algorithm. The SMQAP\abbrev{SMQAP}{Set-Membership Quaternion-Valued AP} and the SMQNLMS\abbrev{SMQNLMS}{Set-Membership Quaternion-Valued NLMS} algorithms are the quaternion versions of the real-valued SM-AP\abbrev{SM-AP}{Set-Membership Affine Projection} and SM-NLMS\abbrev{SM-NLMS}{Set-Membership Normalized LMS} algorithms, respectively.
The membership set $\psi(k)$ introduced in (\ref{eq:set-membership_set-trinion}) suggests the use of more constraint sets in the update. Expressing $\psi(k)$ as in (\ref{eq:divide_intersection-trinion}), our purpose is to derive an algorithm whose coefficient update belongs to the last $L+1$ constraint sets, i.e., $\wbf(k+1)\in\psi^{L+1}(k)$. Suppose that ${\cal S}(k-i)$ describes the set which contains all vectors $\wbf$ such that $d(k-i)-\wbf^H\xbf(k-i)=\gamma_i(k)$, for $i=0,\cdots,L$. Any choice of $\gamma_i(k)$ satisfying the bound constraint is valid; that is, if all $\gamma_i(k)$ are chosen such that $|\gamma_i(k)|\leq\gammabar$, then ${\cal S}(k-i)\subset{\cal H}(k-i)$, for $i=0,\cdots,L$.
The objective function to be minimized in the case of the SMQAP\abbrev{SMQAP}{Set-Membership Quaternion-Valued AP} algorithm can be stated as follows: a coefficient update is performed whenever $\wbf(k)\not\in\psi^{L+1}(k)$, as in Equation (\ref{eq:constraint-trinion}). Note that $\dbf(k),\gammabf(k)\in\mathbb{H}^{(L+1)\times1}$, $\Xbf(k)\in\mathbb{H}^{(N+1)\times(L+1)}$, and $\xbf(k)$ are defined as in (\ref{eq:pack-trinion}) and (\ref{eq:x(k)-trinion}).
By employing the method of Lagrange multipliers, the unconstrained function to be minimized becomes as in Equation (\ref{eq:objective-trinion}), where $\lambdabf(k)\in\mathbb{H}^{(L+1)\times1}$ is a vector of Lagrange multipliers. After setting the gradient of $F[\wbf(k+1)]$ with respect to $\wbf^*(k+1)$ equal to zero, we will get the equation
\begin{align}
\wbf(k+1)=\wbf(k)+\Xbf(k)\lambdabf(k). \label{eq:update_smqap_with_lambda}
\end{align}
Then, by invoking the constraints in (\ref{eq:constraint-trinion}), $\lambdabf(k)$ is obtained as
\begin{align}
\lambdabf(k)=(\Xbf^H(k)\Xbf(k))^{-1}(\ebf(k)-\gammabf(k))^*, \label{eq:lambda_q-trinion}
\end{align}
where $\ebf(k)$ is defined as in (\ref{eq:e_ap-trinion}). Finally, the update equation for the SMQAP\abbrev{SMQAP}{Set-Membership Quaternion-Valued AP} algorithm is given by
\begin{align}
\wbf(k+1)=\left\{\begin{array}{ll}\wbf(k)+\qbf_{\rm ap}(k)&\text{if}~|e(k)|>\gammabar,\\\wbf(k)&\text{otherwise},\end{array}\right.\label{eq:update_SMQAP}
\end{align}
where
\begin{align}
\qbf_{\rm ap}(k)&=\Xbf(k)(\Xbf^H(k)\Xbf(k))^{-1}(\ebf(k)-\gammabf(k))^*\label{eq:Q(k)-trinion}.
\end{align}
Note that {\it Remarks 1} and {\it 2} of Section \ref{sec:smtap_smnlms} also apply to the SMQAP\abbrev{SMQAP}{Set-Membership Quaternion-Valued AP} algorithm.
{\it Remark 5:} We can quickly verify that, by adopting the bound $\gammabar=0$, the algorithm reduces to the QAP\abbrev{QAP}{Quaternion-Valued Affine Projection} algorithm \cite{Jahanchahil_cqvapa_SP2013} with unity step size. Therefore, the QAP\abbrev{QAP}{Quaternion-Valued Affine Projection} algorithm can be expressed as
\begin{align}
\wbf(k+1)=\wbf(k)+\mu \Xbf(k)(\Xbf^H(k)\Xbf(k))^{-1}\ebf^*(k),\label{eq:QAP}
\end{align}
where $\mu$ is the convergence factor.
Note that we can use the SMQAP\abbrev{SMQAP}{Set-Membership Quaternion-Valued AP} algorithm to derive the update equation of the SMQNLMS\abbrev{SMQNLMS}{Set-Membership Quaternion-Valued NLMS} algorithm. In fact, the SMQNLMS\abbrev{SMQNLMS}{Set-Membership Quaternion-Valued NLMS} algorithm does not require data reuse as the SMQAP\abbrev{SMQAP}{Set-Membership Quaternion-Valued AP} algorithm does \cite{Gollamudi_smf_letter1998}; thus, by taking $L=0$ and $\gamma(k)=\frac{\gammabar e(k)}{|e(k)|}$ we obtain the update equation of the SMQNLMS\abbrev{SMQNLMS}{Set-Membership Quaternion-Valued NLMS} algorithm as
\begin{align}
\wbf(k+1)=\wbf(k)+\mu(k)\|\xbf(k)\|^{-2}\xbf(k)e^*(k),\label{eq:smqnlms_without_norm}
\end{align}
where $e(k)$ and $\mu(k)$ are defined as in (\ref{eq:e-trinion}) and (\ref{eq:mu-trinion}), respectively.
{\it Remark 6:} By adopting the bound $\gammabar=0$ in (\ref{eq:smqnlms_without_norm}), the algorithm will reduce to the QNLMS\abbrev{QNLMS}{Quaternion-Valued Normalized LMS} algorithm with unity step size. Therefore, the QNLMS\abbrev{QNLMS}{Quaternion-Valued Normalized LMS} algorithm can be described as
\begin{align}
\wbf(k+1)=\wbf(k)+\mu\|\xbf(k)\|^{-2}\xbf(k)e^*(k),\label{eq:qnlms_without_regularization}
\end{align}
where $\mu$ is the convergence factor.
The computational complexities per update of the weight vector of the trinion- and quaternion-based adaptive filtering algorithms are listed in Table \ref{tab:complexity}, where the filter length and the memory length are $N$ and $L$, respectively. Also, Figures \ref{fig:Complexity_L_trinion} and \ref{fig:Complexity_N_trinion} compare the total number of real multiplications and additions required by the TAP\abbrev{TAP}{Trinion-Valued Affine Projection} and the QAP\abbrev{QAP}{Quaternion-Valued Affine Projection} algorithms for two cases: $N=15$ with variable $L$, and $L=3$ with variable $N$. As can be seen, the trinion model efficiently decreases the computational complexity in comparison with the quaternion model whenever the problem at hand suits both the quaternion and trinion solutions.
\begin{table*} [!t]
\caption{{Computational complexity per update of the weight vector}}\label{tab:complexity}
\begin{center}
\begin{tabular}{|c|c|c|}
\hline
Algorithm & Real multiplications & Real additions \\
\hline
QNLMS & $20N+4$ & $20N-1$ \\\hline
QAP & $32L^3+16NL^2+16L^2$&$32L^3+16NL^2+4L^2$\\&$+19NL+26L$ & $+16NL+8L$ \\\hline
TNLMS & $12N+3$ & $12N-1$ \\\hline
TAP & $18L^3+9NL^2+9L^2$&$18L^3+9NL^2$\\&$+11NL+50L$ & $+9NL+39L$ \\
\hline
\end{tabular}
\end{center}
\end{table*}
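For instance, the real-multiplication entries of Table~\ref{tab:complexity} can be evaluated directly; the short script below reproduces the trend of Figure~\ref{fig:Complexity_L_trinion} for $N=15$.
\begin{verbatim}
# Real multiplications per update, from the Table entries above
def qap_mul(N, L):
    return 32*L**3 + 16*N*L**2 + 16*L**2 + 19*N*L + 26*L

def tap_mul(N, L):
    return 18*L**3 + 9*N*L**2 + 9*L**2 + 11*N*L + 50*L

N = 15
for L in (1, 2, 3, 4):
    print(L, tap_mul(N, L), qap_mul(N, L),
          round(qap_mul(N, L) / tap_mul(N, L), 2))
\end{verbatim}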
\begin{figure}[t!]
\centering
\subfigure[b][]{\includegraphics[width=.48\linewidth,height=7cm]{Figs/Complexity_L_trinion.pdf}
\label{fig:Complexity_L_trinion}}
\subfigure[b][]{\includegraphics[width=.48\linewidth,height=7cm]{Figs/Complexity_N_trinion.pdf}
\label{fig:Complexity_N_trinion}}
\caption{The numerical complexity of the TAP and the QAP algorithms for two cases: (a) $N=15$, variable $L$; (b) $L=3$, variable $N$. \label{fig:Complexity-trinion}}
\end{figure}
\section{Application of quaternion-valued adaptive \\algorithms to adaptive beamforming} \label{sec:adaptive-beamforming-tr}
As an illustration of the use of quaternions, we can study their application to adaptive beamforming. By utilizing a crossed-dipole array and quaternions, we can decrease the number of sensors engaged in the adaptive beamforming process. Therefore, the computational complexity and the energy consumption of the system are reduced without degrading the performance~\cite{Jiang-phdthesis,Gou_beamformer_MAPE2011,Tao_beamformer_TAES2013,Tao_beamformer_MPE2014}.
A uniform linear array (ULA)\abbrev{ULA}{Uniform Linear Array} is illustrated in Figure~\ref{fig:beamformer-tr}~\cite{Jiang-phdthesis,Jiang_gqvgo_DSP2014}. It contains $M$ crossed-dipole pairs placed along the $y$-axis, with distance $d$ between neighboring antennas. At each position, the two crossed components are parallel to the $x$-axis and the $y$-axis, respectively. The direction of arrival (DOA)\abbrev{DOA}{Direction of Arrival} of a far-field incident signal is defined by the angles $\theta$ and $\phi$. Assume that this signal impinges upon the array from the $y$-$z$ plane. Thus, $\phi=\frac{\pi}{2}$ or $-\frac{\pi}{2}$, and $0\leq\theta\leq\frac{\pi}{2}$. As a consequence, the spatial steering vector for this far-field incident signal is given by \symbl{$\sbf_c(\theta,\phi)$}{The spatial steering vector for a far-field incident signal in adaptive beamforming}
\begin{align}
\sbf_c(\theta,\phi)=[1,e^{-\jmath2\pi d\sin\theta\sin\phi/\lambda},\cdots,e^{-\jmath2\pi(M-1) d\sin\theta\sin\phi/\lambda}]^T,
\end{align}
where $\lambda$ stands for the wavelength of the incident signal. For a crossed-dipole the spatial-polarization coherent vector can be expressed by~\cite{Compton_beamformer_TAP1981,Li_beamformer_TAP1991}
\begin{align}
\sbf_p(\theta,\phi,\gamma,\eta)=\left\{\begin{array}{ll}[-\cos\gamma,\cos\theta\sin\gamma e^{\jmath\eta}]&{\rm for~}\phi=\frac{\pi}{2},\\ $[$\cos\gamma,-\cos\theta\sin\gamma e^{\jmath\eta}]&{\rm for~}\phi=-\frac{\pi}{2},\end{array}\right.
\end{align}
where $\gamma\in[0,\frac{\pi}{2}]$ and $\eta\in[-\pi,\pi]$ are the auxiliary polarization angle and the polarization phase difference, respectively.
\begin{figure}[t!]
\begin{center}
\includegraphics[width=0.7\linewidth]{Figs/beamformer.pdf}
\caption{A ULA with crossed-dipole~\cite{Jiang-phdthesis}.}
\label{fig:beamformer-tr}
\end{center}
\end{figure}
We can divide the array structure into two sub-arrays, one parallel to the $x$-axis and the other parallel to the $y$-axis. The complex-valued steering vector parallel to the $x$-axis is then given by
\begin{align}
\sbf_x(\theta,\phi,\gamma,\eta)=\left\{\begin{array}{ll}-\cos\gamma\sbf_c(\theta,\phi)&{\rm for~}\phi=\frac{\pi}{2},\\ \cos\gamma\sbf_c(\theta,\phi)&{\rm for~}\phi=-\frac{\pi}{2},\end{array}\right.
\end{align}
and the one parallel to the $y$-axis is given by
\begin{align}
\sbf_y(\theta,\phi,\gamma,\eta)=\left\{\begin{array}{ll}\cos\theta\sin\gamma e^{\jmath\eta}\sbf_c(\theta,\phi)&{\rm for~}\phi=\frac{\pi}{2},\\-\cos\theta\sin\gamma e^{\jmath\eta}\sbf_c(\theta,\phi)&{\rm for~}\phi=-\frac{\pi}{2}.\end{array}\right.
\end{align}
Using the Cayley-Dickson formula~\eqref{eq:Cayley_Dickson} to combine $\sbf_x(\theta,\phi,\gamma,\eta)$ and $\sbf_y(\theta,\phi,\gamma,\eta)$, we obtain a quaternion-valued steering vector as follows
\begin{align}
\sbf_q(\theta,\phi,\gamma,\eta)=\sbf_x(\theta,\phi,\gamma,\eta)+\imath \sbf_y(\theta,\phi,\gamma,\eta).
\end{align}
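A NumPy sketch of this construction is shown below, storing the quaternion-valued steering vector through its Cayley-Dickson pair $(\sbf_x,\sbf_y)$ and assuming half-wavelength spacing ($d/\lambda=0.5$), as in the simulations of this chapter; names and defaults are illustrative.
\begin{verbatim}
import numpy as np

def spatial_steering(theta, phi, M, d_over_lambda=0.5):
    # s_c(theta, phi) for an M-element ULA along the y-axis
    m = np.arange(M)
    return np.exp(-1j*2*np.pi*m*d_over_lambda*np.sin(theta)*np.sin(phi))

def quaternion_steering(theta, phi, gamma, eta, M):
    # Returns the Cayley-Dickson pair (s_x, s_y) of s_q = s_x + i*s_y
    s_c = spatial_steering(theta, phi, M)
    sign = -1.0 if np.isclose(phi, np.pi/2) else 1.0   # phi = +pi/2 or -pi/2
    s_x = sign * np.cos(gamma) * s_c
    s_y = -sign * np.cos(theta) * np.sin(gamma) * np.exp(1j*eta) * s_c
    return s_x, s_y
\end{verbatim}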
The response of the array for the quaternion-valued weight vector $\wbf$ is given by
\begin{align}
r(\theta,\phi,\gamma,\eta)=\wbf^H\sbf_q(\theta,\phi,\gamma,\eta).
\end{align}
In the case of reference signal based quaternion-valued adaptive beamforming, the reference signal $d(k)$ is available. Therefore, the response of the array is the quaternion-valued beamformer output, defined as $y(k)=\wbf^H(k)\xbf(k)$, where $\xbf(k)$ is the vector of received quaternion-valued sensor signals and $\wbf(k)$ is the quaternion-valued weight vector. Also, the quaternion-valued error signal can be defined as $e(k)=d(k)-y(k)$.
\section{Simulations} \label{sec:simulations-trinion}
In this section, we apply the proposed algorithms to two scenarios. Scenario 1 verifies the performance of the trinion- and quaternion-based algorithms when they are used for wind profile prediction. In Scenario 2, we implement quaternionic adaptive beamforming using quaternion-valued algorithms.
\subsection{Scenario 1} \label{sub:wind-trinion}
In this scenario, all the algorithms proposed in this chapter are applied to anemometer readings provided by Google's RE$<$C Initiative \cite{Google_wind}. The wind speed recorded on May 25, 2011, is utilized for the algorithm comparisons. The step size, $\mu$, is selected as $10^{-8}$ for the TLMS\abbrev{TLMS}{Trinion-Valued LMS} and the QLMS\abbrev{QLMS}{Quaternion-Valued LMS} algorithms and 0.9 for the TNLMS\abbrev{TNLMS}{Trinion-Valued Normalized LMS}, the TAP\abbrev{TAP}{Trinion-Valued Affine Projection}, the QNLMS\abbrev{QNLMS}{Quaternion-Valued Normalized LMS}, and the QAP\abbrev{QAP}{Quaternion-Valued Affine Projection} algorithms, and $\gammabar$ is set to 5. Also, the threshold bound vector $\gammabf(k)$ is selected as the {\it simple choice constraint vector}~\cite{Markus_sparseSMAP_tsp2014}, defined as $\gamma_0(k)=\frac{\gammabar e(k)}{|e(k)|}$ and $\gamma_i(k)=d(k-i)-\wbf^H(k)\xbf(k-i)$, for $i=1,\cdots,L$. The filter length is 8, and the memory length, $L$, and the prediction step are both chosen equal to 1. All algorithms are initialized with zeros.
The predicted results provided by trinion and quaternion based algorithms are shown in Figures~\ref{fig:Trinion_wind} and~\ref{fig:Quaternion_wind}, respectively. The learning curves using the TNLMS\abbrev{TNLMS}{Trinion-Valued Normalized LMS}, the SMTNLMS\abbrev{SMTNLMS}{Set-Membership Trinion-Valued NLMS}, the TAP\abbrev{TAP}{Trinion-Valued Affine Projection}, and the SMTAP\abbrev{SMTAP}{Set-Membership Trinion-Valued AP} algorithms are shown in Figures~\ref{fig:error_nlms-trinion} and~\ref{fig:error_ap-trinion}. Also, for comparison between the trinion and the quaternion based algorithms, the learning curves related to the TNLMS\abbrev{TNLMS}{Trinion-Valued Normalized LMS}, the QNLMS\abbrev{QNLMS}{Quaternion-Valued Normalized LMS}, the TAP\abbrev{TAP}{Trinion-Valued Affine Projection}, and the QAP\abbrev{QAP}{Quaternion-Valued Affine Projection} algorithms are depicted in Figures~\ref{fig:q_vs_t_nlms-trinion} and~\ref{fig:q_vs_t_ap-trinion}.
The average implementation times and the update rates of the trinion- and quaternion-based algorithms are presented in Table~\ref{tab:update_rate}. From the results, we can observe that all algorithms track the wind data efficiently; however, the trinion-based algorithms require a shorter execution time than their corresponding quaternion-based counterparts. Also, we can observe that the set-membership versions of the TNLMS\abbrev{TNLMS}{Trinion-Valued Normalized LMS}, the QNLMS\abbrev{QNLMS}{Quaternion-Valued Normalized LMS}, the TAP\abbrev{TAP}{Trinion-Valued Affine Projection}, and the QAP\abbrev{QAP}{Quaternion-Valued Affine Projection} algorithms perform a low number of updates. Therefore, the set-membership algorithms can save energy effectively.
\begin{table} [!t]
\caption{Average implementation times and update rates for the trinion- and quaternion-based algorithms (MATLAB implementation)}\label{tab:update_rate}
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
Algorithm & Time & Update & Algorithm & Time & Update \\ & (second) & rate & & (second) & rate \\
\hline
TLMS & 2.45 & 100$\%$ & QLMS & 7.2 & 100$\%$ \\\hline
TNLMS & 8 & 100$\%$ & QNLMS & 9.4 & 100$\%$ \\\hline
TAP & 67 & 100$\%$ & QAP & 142 & 100$\%$ \\\hline
\hspace{-0.1cm}SMTNLMS\hspace{-0.1cm} & 3.8 & 17.92$\%$ & \hspace{-0.1cm}SMQNLMS\hspace{-0.1cm} & 9.2 & 17.87$\%$ \\\hline
SMTAP & 13 & 6.52$\%$ & SMQAP & 20.1 & 6.34$\%$ \\\hline
\end{tabular}
\end{center}
\end{table}
\begin{figure}[t!]
\begin{center}
\includegraphics[width=1\linewidth] {Figs/Trinion_wind.pdf}
\caption{Predicted results from the trinion based algorithms.}
\label{fig:Trinion_wind}
\end{center}
\end{figure}
\begin{figure}[t!]
\begin{center}
\includegraphics[width=1\linewidth] {Figs/Quaternion_wind.pdf}
\caption{Predicted results from the quaternion based algorithms.}
\label{fig:Quaternion_wind}
\end{center}
\end{figure}
\begin{figure}[t!]
\centering
\subfigure[b][]{\includegraphics[width=.48\linewidth,height=7cm]{Figs/error_nlms-trinion.pdf}
\label{fig:error_nlms-trinion}}
\subfigure[b][]{\includegraphics[width=.48\linewidth,height=7cm]{Figs/error_ap-trinion.pdf}
\label{fig:error_ap-trinion}}
\caption{Learning curves of (a) the TNLMS and the SMTNLMS algorithms; (b) the TAP and the SMTAP algorithms. \label{fig:sim-error-trinion}}
\end{figure}
\begin{figure}[t!]
\centering
\subfigure[b][]{\includegraphics[width=.48\linewidth,height=7cm]{Figs/q_vs_t_nlms-trinion.pdf}
\label{fig:q_vs_t_nlms-trinion}}
\subfigure[b][]{\includegraphics[width=.48\linewidth,height=7cm]{Figs/q_vs_t_ap-trinion.pdf}
\label{fig:q_vs_t_ap-trinion}}
\caption{Learning curves of (a) the TNLMS and the QNLMS algorithms; (b) the TAP and the QAP algorithms. \label{fig:sim-error-q_vs_t-trinion}}
\end{figure}
Moreover, we implemented the same scenario using a real-valued algorithm. Indeed, we used three affine projection (AP) algorithms, with parameters chosen similarly to those of the TAP algorithm, to compare the tracking results of the AP and the TAP algorithms. We did not notice a significant difference between the tracking results of the AP and the TAP algorithms; thus, we avoid presenting an additional figure, since the results were similar to those in Figure~\ref{fig:Trinion_wind}(b). However, in wind profile prediction, it would be preferable to employ trinion-valued algorithms since there is some structure among the three components of the data.
\subsection{Scenario 2} \label{sub:beamforming}
In this scenario, we simulate quaternionic adaptive beamforming~\cite{Jiang_gqvgo_DSP2014} using the QLMS\abbrev{QLMS}{Quaternion-Valued LMS}, the QNLMS\abbrev{QNLMS}{Quaternion-Valued Normalized LMS}, the SMQNLMS\abbrev{SMQNLMS}{Set-Membership Quaternion-Valued NLMS}, the QAP\abbrev{QAP}{Quaternion-Valued Affine Projection}, and the SMQAP\abbrev{SMQAP}{Set-Membership Quaternion-Valued AP} algorithms. We assume a sensor array with 10 crossed-dipoles and half-wavelength spacing. The step sizes, $\mu$, for the QLMS\abbrev{QLMS}{Quaternion-Valued LMS}, the QNLMS\abbrev{QNLMS}{Quaternion-Valued Normalized LMS}, and the QAP\abbrev{QAP}{Quaternion-Valued Affine Projection} algorithms are $4\times10^{-5}$, 0.009, and 0.005, respectively. For the QAP\abbrev{QAP}{Quaternion-Valued Affine Projection} and the SMQAP\abbrev{SMQAP}{Set-Membership Quaternion-Valued AP} algorithms, the memory length, $L$, is set to 1. A desired signal with 20 dB SNR\abbrev{SNR}{Signal-to-Noise Ratio} ($\sigma_n^2=0.01$) impinges from broadside, $\theta=0$ and $\phi=\frac{\pi}{2}$, and two interfering signals with a signal-to-interference ratio (SIR)\abbrev{SIR}{Signal-to-Interference Ratio} of -10 dB arrive from $(\theta,\phi)=(\frac{\pi}{9},\frac{\pi}{2})$ and $(\theta,\phi)=(\frac{\pi}{6},-\frac{\pi}{2})$, respectively. All the signals have the same polarization of $(\gamma,\eta)=(0,0)$. $\gammabar$ is set to $\sqrt{2\sigma_n^2}$, and the vector $\gammabf(k)$ is selected as the simple choice constraint vector defined in Scenario 1.
The learning curves of the quaternion algorithms over 100 trials are shown in Figure~\ref{fig:ball}. The average numbers of updates performed by the SMQNLMS\abbrev{SMQNLMS}{Set-Membership Quaternion-Valued NLMS} and the SMQAP\abbrev{SMQAP}{Set-Membership Quaternion-Valued AP} algorithms are 1408 and 1815 out of a total of 10000 iterations (about 14.08$\%$ and 18.15$\%$), respectively. As can be seen, the set-membership quaternion algorithms converge faster while requiring a lower number of updates. Also, the convergence rate of the QAP\abbrev{QAP}{Quaternion-Valued Affine Projection} algorithm is higher than that of the SMQNLMS\abbrev{SMQNLMS}{Set-Membership Quaternion-Valued NLMS} algorithm.
The response of a beamformer to the impinging signals as a function of $\theta$ is called the beam pattern\symbl{$B(\theta)$}{The beam pattern of a beamformer} and is defined as $B(\theta)=\wbf^H\sbf(\theta)$, where $\sbf(\theta)$ is the steering vector. The magnitude of the beam pattern describes how the beamformer responds to signals arriving from different DOA\abbrev{DOA}{Direction of Arrival} angles. Figure~\ref{fig:beam_pattern} illustrates the magnitude of the beam patterns of the quaternion algorithms with $\theta=0$. In this figure, the positive values of $\theta$ represent the range $\theta\in[0,\frac{\pi}{2}]$ for $\phi=\frac{\pi}{2}$, while the negative values, $\theta\in[-\frac{\pi}{2},0]$, indicate the same range of $\theta\in[0,\frac{\pi}{2}]$ but with $\phi=-\frac{\pi}{2}$. We can observe that all the quaternion algorithms attained acceptable beamforming results since the two nulls at the directions of the interfering signals are clearly visible.
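To make the definition concrete, the following sketch evaluates $|B(\theta)|$ on a grid of angles for an ordinary complex-valued uniform linear array with half-wavelength spacing; the quaternion case follows the same recipe with quaternion arithmetic, and all names below are our own illustrative choices.
\begin{verbatim}
import numpy as np

def beam_pattern_db(w, thetas):
    # |B(theta)| = |w^H s(theta)| in dB (complex-valued analog).
    # w      : beamformer weights, shape (M,), complex
    # thetas : DOA grid in radians
    M = len(w)
    m = np.arange(M)
    # Steering vectors for a half-wavelength-spaced linear array.
    S = np.exp(-1j * np.pi * np.outer(np.sin(thetas), m))
    B = S @ np.conj(w)                      # B(theta) = w^H s(theta)
    mag = np.abs(B)
    return 20 * np.log10(mag / mag.max())   # normalized magnitude in dB
\end{verbatim}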
The output signal to desired plus noise ratio (OSDR)\abbrev{OSDR}{Output Signal to Desired Plus Noise Ratio} and the output signal to interference plus noise ratio (OSIR)\abbrev{OSIR}{Output Signal to Interference Plus Noise Ratio} for the quaternion algorithms are presented in Table~\ref{tab:output_to_interference}. The OSDR\abbrev{OSDR}{Output Signal to Desired Plus Noise Ratio} is obtained by computing the power of the output signal and the total power of the desired signal plus one-third of the noise signal, and then taking the ratio between these two values. Similarly, the OSIR\abbrev{OSIR}{Output Signal to Interference Plus Noise Ratio} is obtained by computing the power of the output signal and the total power of the interference plus one-third of the noise signal, and then taking the ratio between the two. As can be seen, the best results are obtained by the SMQNLMS\abbrev{SMQNLMS}{Set-Membership Quaternion-Valued NLMS} and the SMQAP\abbrev{SMQAP}{Set-Membership Quaternion-Valued AP} algorithms.
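A short sketch of this bookkeeping, under our reading of the text, is given below; the signal names are illustrative only.
\begin{verbatim}
import numpy as np

def _power(sig):
    return np.mean(np.abs(sig) ** 2)

def osdr_osir_db(y, desired, interference, noise):
    # OSDR: output power over desired-plus-one-third-noise power;
    # OSIR: output power over interference-plus-one-third-noise power.
    osdr = 10 * np.log10(_power(y) / (_power(desired) + _power(noise) / 3))
    osir = 10 * np.log10(_power(y) / (_power(interference) + _power(noise) / 3))
    return osdr, osir
\end{verbatim}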
\begin{figure}[t!]
\begin{center}
\includegraphics[width=1\linewidth]{Figs/beam.pdf}
\caption{Learning curves of the QLMS, the QNLMS, the QAP, the SMQNLMS, and the SMQAP algorithms.}
\label{fig:ball}
\end{center}
\end{figure}
\begin{figure}[t!]
\begin{center}
\includegraphics[width=1\linewidth]{Figs/beam_pattern.pdf}
\caption{Beam patterns of the QLMS, the QNLMS, the QAP, the SMQNLMS, and the SMQAP algorithms when DOA of desired signal is $(\theta,\phi)=(0,\frac{\pi}{2})$.}
\label{fig:beam_pattern}
\end{center}
\end{figure}
\begin{table} [!t]
\caption{The OSDR and the OSIR for the quaternion algorithms}\label{tab:output_to_interference}
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
Algorithms & QLMS & QNLMS & QAP & SMQNLMS & SMQAP \\\hline
OSDR (dB) & -1.645 & -1.502 & -0.647 & -0.024 & 0.004 \\\hline
OSIR (dB) & -11.699 & -11.557 & -10.701 & -10.079 & -10.050 \\\hline
\end{tabular}
\end{center}
\end{table}
\section{Conclusions} \label{sec:conclusion-trinion}
In this chapter, we have generalized the set-membership model to the trinion and the quaternion number systems. First, we have reviewed some properties of the quaternion and the trinion systems. Then we have derived the set-membership trinion based algorithms and, by the same argument, introduced the quaternion based adaptive filtering algorithms. Also, we have presented the counterparts of the proposed algorithms without the set-membership approach. Moreover, we have reviewed the application of quaternion algorithms to adaptive beamforming. Numerical simulations with recorded wind data and adaptive beamforming have shown that the set-membership based algorithms have significantly lower update rates, while the accompanying performance penalty is negligible. Also, we have observed that the trinion based algorithms have performance comparable to the quaternion based ones, however with strikingly lower computational complexity.
\chapter{Improved Set-Membership Partial-Update Affine Projection Algorithm}
Adaptive filters have applications in a wide range of areas such as noise cancellation, signal prediction, echo cancellation, communications, radar, and speech processing. In several applications, the large number of coefficients to be updated leads to high computational complexity, making the adaptation of the filter coefficients prohibitive in terms of hardware requirements. In some cases, like acoustic echo cancellation, the adaptive filter might use a few thousand coefficients in order to model the underlying physical system with sufficient accuracy. In these applications, the convergence entails a large number of iterations, calling for a more sophisticated updating rule, which is inherently more computationally intensive. For a given adaptive filter, the computational complexity can be reduced by updating only part of the filter coefficients at each iteration, forming a family of algorithms called partial-update (PU)\abbrev{PU}{Partial-Update} algorithms. In the literature, several variants of adaptive filtering algorithms with partial update have been proposed \cite{PUbook,Diniz_adaptiveFiltering_book2013,Douglas-PU-1997,Aboulnasr-PU-1999,Dogancay-PU-2001,Werner-PU-2003,Werner-PU-2004,Godavarti,Grira,Arablouei,Pinto,Tandon,Bhotto,Deng}.
Another powerful approach to decrease the computational complexity of an adaptive filter is to employ the set-membership filtering (SMF)\abbrev{SMF}{Set-Membership Filtering} approach \cite{Diniz_adaptiveFiltering_book2013,Gollamudi_smf_letter1998}. Algorithms developed from the SMF\abbrev{SMF}{Set-Membership Filtering} framework employ a deterministic objective function related to a bounded error constraint on the filter output, such that the updates belong to a set of feasible solutions. Implementation of SMF\abbrev{SMF}{Set-Membership Filtering} algorithms involves two main steps: 1) information evaluation and 2) parameter update. As compared with the standard normalized least mean square (NLMS)\abbrev{NLMS}{Normalized LMS} and affine projection (AP)\abbrev{AP}{Affine Projection} algorithms, the set-membership normalized least mean square (SM-NLMS\abbrev{SM-NLMS}{Set-Membership Normalized LMS}) and the set-membership affine projection (SM-AP\abbrev{SM-AP}{Set-Membership Affine Projection}) algorithms lead to reduced computational complexity, chiefly due to data-selective updates \cite{Gollamudi_smf_letter1998,Diniz_sm_bnlms_tsp2003,Werner_sm_ap_letter2001,Arablouei_2012_ICASSP,Bhotto_2012_ISCCSP,Yamada_sm-nlmsAnalysis_tsp2009,Bhotto_2012_TSP,Abadi_2008_ISCCSP}.
The use of the PU\abbrev{PU}{Partial-Update} strategy decreases the computational complexity at the cost of reduced convergence speed. We employ the SMF\abbrev{SMF}{Set-Membership Filtering} technique to further reduce the computational load through a lower number of updates. However, applying the SMF\abbrev{SMF}{Set-Membership Filtering} and PU\abbrev{PU}{Partial-Update} strategies together might result in slow convergence. One approach to accelerate convergence is to choose a smaller error estimation bound, but this might increase the number of updates. Conversely, if we adopt a higher error estimation threshold to reduce the number of updates, the convergence rate decreases. Therefore, convergence speed and computational complexity are conflicting requirements.
In this chapter, we introduce an algorithm that accelerates the convergence and simultaneously reduces the number of updates (and, as a result, the computational complexity) of the set-membership partial-update affine projection (SM-PUAP)\abbrev{SM-PUAP}{Set-Membership Partial-Update AP} algorithm. In the SM-PUAP\abbrev{SM-PUAP}{Set-Membership Partial-Update AP} algorithm, some updates move too far from their SM-AP\abbrev{SM-AP}{Set-Membership Affine Projection} counterpart, especially when the angle between the updating direction and the threshold hyperplane is small. In this case, we might introduce a significant disturbance in the coefficient update while attempting to reach the feasibility set. Therefore, to limit the distance between two consecutive updates, we first construct a hypersphere centered at the current weight vector whose radius equals the distance between the current weight vector and the weight vector that would be obtained with the SM-AP\abbrev{SM-AP}{Set-Membership Affine Projection} algorithm. This radius is an upper bound on the Euclidean norm of the coefficient disturbance allowed in the proposed improved set-membership partial-update affine projection (I-SM-PUAP)\abbrev{I-SM-PUAP}{Improved SM-PUAP} algorithm.
The content of this chapter was published in~\cite{Hamed_I_SM-PUAP_ICASSP2016}. In this chapter, first of all, we review the SM-PUAP\abbrev{SM-PUAP}{Set-Membership Partial-Update AP} algorithm in Section \ref{sec:SM-PUAP-icassp}. Then, in Section \ref{sec:M-SM-PUAP-icassp}, we derive the I-SM-PUAP\abbrev{I-SM-PUAP}{Improved SM-PUAP} algorithm. Section \ref{sec:simulation-icassp} presents simulations of the algorithms. Finally, Section \ref{sec:conclusion-icassp} contains the conclusions.
\section{Set-Membership Partial-Update Affine Projection Algorithm} \label{sec:SM-PUAP-icassp}
In this section, we present the SM-PUAP\abbrev{SM-PUAP}{Set-Membership Partial-Update AP} algorithm \cite{Diniz_adaptiveFiltering_book2013}. The main objective of the partial-update adaptation is to perform updates in $M$ out of $N+1$ adaptive filter coefficients, where $N$ is the order of the adaptive filter. The $M$ coefficients to be updated at time instant $k$ are specified by an index set ${\cal I}_M(k)=\{i_0(k),\cdots,i_{M-1}(k)\}$ with $\{i_j(k)\}_{j=0}^{M-1}$ chosen from the set $\{0,\cdots,N\}$.\symbl{${\cal I}_M(k)$}{The set of $M$ coefficients to be updated at time instant $k$} The subset of coefficients with indices in ${\cal I}_M(k)$ plays an essential role in the performance and the effectiveness of the partial-update strategy. Note that ${\cal I}_M(k)$ varies with the time instant $k$. As a result, the $M$ coefficients to be updated can change according to the time instant. The choice of which $M$ coefficients should be updated is related to the optimization criterion chosen for algorithm derivation. The SM-PUAP\abbrev{SM-PUAP}{Set-Membership Partial-Update AP} algorithm \cite{Diniz_adaptiveFiltering_book2013} takes the update vector $\wbf(k+1)$ as the vector minimizing the Euclidean distance $\|\wbf(k+1)-\wbf(k)\|^2$ subject to the constraint $\wbf(k+1)\in{\cal H}(k)$ in such a way that only $M$ coefficients are updated.
The optimization criterion in the SM-PUAP\abbrev{SM-PUAP}{Set-Membership Partial-Update AP} algorithm is described as follows. Let $\psi^{L+1}(k)$ indicate the intersection of the last $L+1$ constraint sets. A coefficient update is implemented whenever $\wbf(k)\not\in\psi^{L+1}(k)$ as follows
\begin{equation}
\begin{aligned}
&\min \|\wbf(k+1)-\wbf(k)\|^2\\
&{\rm subject~to:}\\
&\dbf(k)-\Xbf^T(k)\wbf(k+1)=\gammabf(k)\\
&\tilde{\Cbf}_{{\cal I}_M(k)}[\wbf(k+1)-\wbf(k)]=0
\end{aligned}
\end{equation}
where
\begin{tabular}{ll}
$\dbf(k)\in\mathbb{R}^{(L+1)\times1}$& contains the desired output from the \\&$L+1$ last time instants;\\
$\gammabf(k)\in\mathbb{R}^{(L+1)\times1}$&specifies the point in $\psi^{L+1}(k)$;\\
$\Xbf(k)\in\mathbb{R}^{(N+1)\times (L+1)}$&contains the corresponding input vectors, i.e.,
\end{tabular}
\begin{equation}
\begin{aligned}
\dbf(k)&=[d(k)~d(k-1)~\cdots~d(k-L)]^T,\\
\gammabf(k)&=[\gamma_0(k)~\gamma_1(k)~\cdots~\gamma_L(k)]^T,\\
\Xbf(k)&=[\xbf(k)~\xbf(k-1)~\cdots~\xbf(k-L)], \label{eq:pack-icassp}
\end{aligned}
\end{equation}
with $\xbf(k)$ being the input-signal vector
\begin{align}
\xbf(k)=[x(k)~x(k-1)~\cdots~x(k-N)]^T. \label{eq:x(k)-icassp}
\end{align}
Moreover, the matrix $\tilde{\Cbf}_{{\cal I}_M(k)}=\Ibf-\Cbf_{{\cal I}_M(k)}$ is a complementary matrix that gives $\tilde{\Cbf}_{{\cal I}_M(k)}\wbf(k+1)=\tilde{\Cbf}_{{\cal I}_M(k)}\wbf(k)$, which means that only $M$ coefficients are updated. The threshold vector elements are such that $|\gamma_i(k)|\leq\gammabar$, for $i=0,\cdots,L$. The matrix $\Cbf_{{\cal I}_M(k)}$ is a diagonal matrix that identifies the coefficients to be updated at instant $k$, if an update is required.\symbl{$\Cbf_{{\cal I}_M(k)}$}{The diagonal matrix that identifies the coefficients to be updated at instant time $k$, if an update is required} This matrix has $M$ nonzero elements equal to one located at positions declared by ${\cal I}_M(k)$.
Using the method of Lagrange multipliers, we obtain the following updating rule
\begin{align}
\wbf(k+1)=\wbf(k)+\Cbf_{{\cal I}_M(k)}\Xbf(k)[\Xbf^T(k)\Cbf_{{\cal I}_M(k)}\Xbf(k)]^{-1}[\ebf(k)-\gammabf(k)]
\end{align}
The updating equation of the SM-PUAP\abbrev{SM-PUAP}{Set-Membership Partial-Update AP} algorithm is given by
\begin{align}
&\wbf(k+1)=\left\{\begin{array}{ll}\wbf(k)+\Cbf_{{\cal I}_M(k)}\Xbf(k)\Pbf(k)(\ebf(k)-\gammabf(k))&\text{if}~|e(k)|>\gammabar\\\wbf(k)&\text{otherwise}\end{array}\right.,\label{eq:update_SM-PUAP-icassp}
\end{align}
where \symbl{$\Pbf(k)$}{The auxiliary matrix $\Pbf(k)\triangleq(\Xbf^T(k)\Cbf_{{\cal I}_M(k)}\Xbf(k)+\delta\Ibf)^{-1}$}
\begin{align}
\Pbf(k)&=(\Xbf^T(k)\Cbf_{{\cal I}_M(k)}\Xbf(k)+\delta\Ibf)^{-1}, \label{eq:P'(k)-icassp}\\
\ebf(k)&=[e(k)~\epsilon(k-1)~\cdots~\epsilon(k-L)]^T,
\end{align}
with $e(k)=d(k)-\wbf^T(k)\xbf(k)$ and $\epsilon(k-i)=d(k-i)-\wbf^T(k)\xbf(k-i)$ for $i=1,\cdots,L$. In Equation (\ref{eq:P'(k)-icassp}), $\delta$ and $\Ibf$ are a small positive constant and an $(L+1)\times (L+1)$ identity matrix, respectively. The diagonal matrix $\delta\Ibf$ is added to the matrix to be inverted in order to avoid numerical problems when $\Xbf^T(k)\Cbf_{{\cal I}_M(k)}\Xbf(k)$ is ill-conditioned.
A natural choice for the $M$ nonzero diagonal elements of $\Cbf_{{\cal I}_M(k)}$ is those corresponding to the coefficients of $\wbf(k)$ with the largest magnitudes. With this selection, the $M$ coefficients with the largest magnitudes are updated, and the remaining parameters stay unchanged.
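A minimal NumPy sketch of one SM-PUAP iteration with this largest-magnitude selection follows; the variable names and the data layout are our own assumptions.
\begin{verbatim}
import numpy as np

def sm_puap_update(w, X, d, gamma, gamma_bar, M, delta=1e-12):
    # X : input matrix [x(k) ... x(k-L)], shape (N+1, L+1)
    # d : desired vector [d(k) ... d(k-L)], shape (L+1,)
    e = d - X.T @ w                          # error vector e(k)
    if abs(e[0]) <= gamma_bar:
        return w                             # w(k) already feasible
    idx = np.argsort(np.abs(w))[-M:]         # I_M(k): M largest |w_i|
    C = np.zeros((len(w), len(w)))
    C[idx, idx] = 1.0                        # selection matrix C_{I_M(k)}
    P = np.linalg.inv(X.T @ C @ X + delta * np.eye(X.shape[1]))
    return w + C @ X @ P @ (e - gamma)
\end{verbatim}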
Figure \ref{fig:SM-PUAP-icassp} illustrates a possible update of the SM-PUAP\abbrev{SM-PUAP}{Set-Membership Partial-Update AP} algorithm in $\mathbb{R}^3$ for $L=0$. As can be seen, $\wbf(k+1)$ is far from $\wbf_{{\rm SM-AP}}(k)$, which reduces the convergence rate of the SM-PUAP\abbrev{SM-PUAP}{Set-Membership Partial-Update AP} algorithm. In the next section, we address this issue by presenting the I-SM-PUAP\abbrev{I-SM-PUAP}{Improved SM-PUAP} algorithm.
\begin{figure}[t!]
\begin{center}
\includegraphics[width=0.8\linewidth] {Figs/partial.pdf}
\caption{Update in SM-PUAP algorithm in $\mathbb{R}^3$ for $L=0$.}
\label{fig:SM-PUAP-icassp}
\end{center}
\end{figure}
\section{Improved Set-membership Partial-Update Affine Projection Algorithm} \label{sec:M-SM-PUAP-icassp}
In this section, we propose the I-SM-PUAP\abbrev{I-SM-PUAP}{Improved SM-PUAP} algorithm, aiming at accelerating the convergence of the SM-PUAP\abbrev{SM-PUAP}{Set-Membership Partial-Update AP} algorithm while decreasing its number of updates.
Since the partial-update strategy deviates the updating direction from the one determined by the input signal vector $\xbf(k)$ utilized by the SM-PUAP\abbrev{SM-PUAP}{Set-Membership Partial-Update AP} algorithm, it is natural that the step size of a partial-update algorithm should be smaller than that of the corresponding algorithm updating all coefficients. A solution to this problem is to constrain the Euclidean norm of the coefficient disturbance of the partial-update algorithm by the disturbance implemented by the originating non-partial updating algorithm, in our case the SM-AP\abbrev{SM-AP}{Set-Membership Affine Projection} algorithm. For that, we build a hypersphere, $S(k)$, whose radius is the distance between $\wbf(k)$ and the SM-AP\abbrev{SM-AP}{Set-Membership Affine Projection} update. The SM-AP\abbrev{SM-AP}{Set-Membership Affine Projection} update takes a step towards the hyperplanes $d(k)-\wbf^T\xbf(k)=\pm\gammabar$ with minimum disturbance, i.e., the step in the direction $\xbf(k)$ touches the hyperplane perpendicularly. Therefore, the radius of the hypersphere $S(k)$ is given by
\begin{align}
\mu(k)=\min\Big(\frac{|\wbf^T(k)\xbf(k)-d(k)\pm\gammabar|}{\|\xbf(k)\|_2}\Big),
\end{align}
where $\|\cdot\|_2$ is the Euclidean norm in $\mathbb{R}^{N+1}$. The equation describing the hypersphere $S(k)$ with the radius $\mu(k)$ and centered at $\wbf(k)$ is as follows \symbl{$S(k)$}{The hypersphere in $\mathbb{R}^{N+1}$ centered at $\wbf(k)$ with the radius $\mu(k)$}
\begin{align}
(w_0-w_0(k))^2+\cdots+(w_N-w_N(k))^2=\mu^2(k). \label{eq:sphere-icassp}
\end{align}
As can be observed in Figure \ref{fig:SM-PUAP-icassp}, $\wbf(k+1)$ is the point where the update vector, starting from $\wbf(k)$, touches the hyperplane
$d(k)-\wbf^T\xbf(k)=\gammabar$. Unlike the SM-PUAP\abbrev{SM-PUAP}{Set-Membership Partial-Update AP} algorithm, in the I-SM-PUAP\abbrev{I-SM-PUAP}{Improved SM-PUAP} algorithm $\wbf(k+1)$ is the point where the update vector, starting from $\wbf(k)$ and pointing along a sparse version of $\xbf(k)$, touches the $N$-dimensional hypersphere $S(k)$ defined above. A visual interpretation of the I-SM-PUAP\abbrev{I-SM-PUAP}{Improved SM-PUAP} algorithm is given in Figure \ref{fig:M-SM-PUAP-icassp}.
\begin{figure}[t!]
\begin{center}
\includegraphics[width=0.8\linewidth] {Figs/I-SM-PUAP.pdf}
\caption{Update in I-SM-PUAP algorithm in $\mathbb{R}^3$ for $L=0$.}
\label{fig:M-SM-PUAP-icassp}
\end{center}
\end{figure}
Define $\hat{\wbf}(k)$ as the update result of Equation (\ref{eq:update_SM-PUAP-icassp}) with $\gammabf(k)=[0~\cdots~0]^T$.
In order to find the update of $\wbf(k)$ to the boundary of the hypersphere $S(k)$ such that
$\tilde{\Cbf}_{{\cal I}_{{M}}(k)}\wbf(k+1)=\tilde{\Cbf}_{{\cal I}_{{M}}(k)}\wbf(k)$,
we have to find the intersection of the hypersphere $S(k)$ with the line $l(k)$ passing through $\wbf(k)$ and $\hat{\wbf}(k)$.
This line is parallel to the vector $\ubf(k)=\frac{\abf(k)}{\|\abf(k)\|_2}$, where $\abf(k)=[\hat{w}_0(k)-w_0(k)~\cdots~\hat{w}_N(k)-w_N(k)]^T$. Hence, the equation of the line $l(k)$ is given as follows
\begin{align}
\left\{\begin{array}{ll}\frac{w_0-w_0(k)}{u_0(k)}=\cdots=\frac{w_i-w_i(k)}{u_i(k)}=\cdots=\frac{w_N-w_N(k)}{u_N(k)},&\text{for}~i\in{\cal I}_{{M}}(k),\\
w_i=w_i(k),&\text{for}~i\not\in{\cal I}_{{M}}(k).\end{array}\right. \label{eq:line-icassp}
\end{align}
In order to find the intersection of the line $l(k)$ with the hypersphere $S(k)$, we substitute Equation (\ref{eq:line-icassp}) into Equation (\ref{eq:sphere-icassp}).
Thus, we obtain $w_i=w_i(k)$ for $i\not\in{\cal I}_{{M}}(k)$, and for $i\in{\cal I}_{{M}}(k)$ we have
\begin{align}
&\frac{u_0^2(k)}{u_i^2(k)}(w_i-w_i(k))^2+\cdots+(w_i-w_i(k))^2+\cdots+\frac{u_N^2(k)}{u_i^2(k)}(w_i-w_i(k))^2=\mu^2(k).
\end{align}
Then,
\begin{align}
(w_i-w_i(k))^2=u_i^2(k)\mu^2(k),
\end{align}
where we obtained the last equality owing to $\|\ubf(k)\|_2=1$. Therefore, the intersections of the line $l(k)$ and the hypersphere $S(k)$ are given by
\begin{align}
w_i=w_i(k)\pm u_i(k)\mu(k). \label{eq:intersection-icassp}
\end{align}
We choose the positive sign in Equation (\ref{eq:intersection-icassp}) since the direction of the vector $\abf(k)$ is from $\wbf(k)$ to $\hat{\wbf}(k)$. As a result, the vector $\wbf(k+1)$ becomes
\begin{align}
\wbf(k+1)=\wbf(k)+\mu(k)\ubf(k).
\end{align}
Alternatively, we can obtain $\wbf(k+1)$ through an elegant geometrical argument. Denote $\wbf(k+1)$ in Equation (\ref{eq:update_SM-PUAP-icassp}) as $\hat{\wbf}(k)$ while taking $\gammabf(k)=[0~\cdots~0]^T$. Define $\abf(k)$ as
\begin{align}
\abf(k)=\hat{\wbf}(k)-\wbf(k)=\Cbf_{{\cal I}_{{M}}(k)}\Xbf(k)\Pbf(k)\ebf(k). \label{eq:geometric-icassp}
\end{align}
If we take the step size equal to $\|\abf(k)\|_2$ and do the update in the direction of $\frac{\abf(k)}{\|\abf(k)\|_2}$, then the parameters will reach $\hat{\wbf}(k)$. However, our objective is to reach the boundary of hypersphere $S(k)$ centered at $\wbf(k)$ with radius $\mu(k)$ in the direction of $\frac{\abf(k)}{\|\abf(k)\|_2}$, thus the step size must be equal to the radius of $S(k)$ so that the update equation becomes
\begin{align}
\wbf(k+1)&=\wbf(k)+\mu(k)\frac{\abf(k)}{\|\abf(k)\|_2}=\wbf(k)+\mu(k)\ubf(k).
\end{align}
Table \ref{tb:m_sm_puap-icassp} summarizes the I-SM-PUAP algorithm.
\begin{table}[t!]
\caption{Improved Set-Membership Partial-Update Affine Projection (I-SM-PUAP) Algorithm}
\begin{center}
\begin{footnotesize}
\begin {tabular}{|l|} \hline\\ \hspace{2.2cm}{\bf I-SM-PUAP Algorithm}\\ \\
\hline\\
Initialization
\\
$\xbf(-1)=\wbf(0)=[0~\cdots~0]^T$\\
$\delta=$ small positive constant\\
choose $\gammabar$\\
Do for $k\geq0$\\
\hspace*{0.15cm} $\ebf(k)=\dbf(k)-\Xbf^T(k)\wbf(k)$\\
\hspace*{0.15cm} ${\rm if}~|e(k)|>\gammabar$\\
\hspace*{0.3cm} $\mu(k)=\min\Big(\frac{|-e(k)\pm\gammabar|}{\|\xbf(k)\|_2}\Big)$\\
\hspace*{0.3cm} $\abf(k)=\Cbf_{{\cal I}_{{M}}(k)}\Xbf(k)[\Xbf^T(k)\Cbf_{{\cal I}_{{M}}(k)}\Xbf(k)+\delta\Ibf]^{-1}\ebf(k)$ \\
\hspace*{0.3cm} $\wbf(k+1)=\wbf(k)+\frac{\mu(k)}{\|\abf(k)\|_2}\abf(k)$
\\
\hspace*{0.15cm} else\\
\hspace*{0.3cm} $\wbf(k+1)=\wbf(k)$\\
\hspace*{0.15cm} end\\
end \\
\\
\hline
\end {tabular}
\end{footnotesize}
\end{center}
\label{tb:m_sm_puap-icassp}
\end{table}
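The pseudocode above translates almost line by line into the following NumPy sketch; all names are ours, not part of the published implementation.
\begin{verbatim}
import numpy as np

def i_sm_puap_update(w, X, d, gamma_bar, M, delta=1e-12):
    # One I-SM-PUAP iteration (a sketch of the table above).
    e = d - X.T @ w                          # error vector e(k)
    if abs(e[0]) <= gamma_bar:
        return w
    x = X[:, 0]                              # current input x(k)
    mu = min(abs(-e[0] + gamma_bar),
             abs(-e[0] - gamma_bar)) / np.linalg.norm(x)
    idx = np.argsort(np.abs(w))[-M:]         # M largest-magnitude taps
    C = np.zeros((len(w), len(w)))
    C[idx, idx] = 1.0
    a = C @ X @ np.linalg.inv(X.T @ C @ X
                              + delta * np.eye(X.shape[1])) @ e
    return w + (mu / np.linalg.norm(a)) * a  # step onto the sphere S(k)
\end{verbatim}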
\section{Simulations} \label{sec:simulation-icassp}
\subsection{Scenario 1}
In this section, the SM-PUAP\abbrev{SM-PUAP}{Set-Membership Partial-Update AP} algorithm \cite{Diniz_adaptiveFiltering_book2013} and the proposed I-SM-PUAP\abbrev{I-SM-PUAP}{Improved SM-PUAP} algorithm are applied to a system identification problem. The unknown system has order $N=79$ and its coefficients are random scalars drawn from the standard normal distribution. The input signal is a binary phase-shift keying (BPSK)\abbrev{BPSK}{Binary Phase-Shift Keying} signal with $\sigma_x^2=1$. The signal-to-noise ratio (SNR)\abbrev{SNR}{Signal-to-Noise Ratio} is set to 20 dB, i.e., $\sigma_n^2=0.01$. The bound on the output estimation error is chosen as $\gammabar=\sqrt{25\sigma_n^2}$. Also, we adopt the threshold bound vector $\gammabf(k)$ with $\gamma_0(k)=\frac{\gammabar e(k)}{|e(k)|}$ and $\gamma_i(k)=d(k-i)-\wbf^T(k)\xbf(k-i)$, for $i=1,\cdots,L$ \cite{Diniz_adaptiveFiltering_book2013,Markus_edcv_eusipco2013}. The regularization constant, $\delta$, is $10^{-12}$, and $\wbf(0)=[1~\cdots~1]^T$, which is not close to the unknown system. All learning curves are averaged over 200 trials. To illustrate the partial updating, we update 50 percent of the filter coefficients, chosen randomly at each time instant $k$, i.e., half of the diagonal elements of $\Cbf_{{\cal I}_M(k)}$ are nonzero.
Figure \ref{fig:sim1-icassp} shows the learning curves for the I-SM-PUAP\abbrev{I-SM-PUAP}{Improved SM-PUAP} algorithm with $L=1$ and $4$, and for the SM-PUAP\abbrev{SM-PUAP}{Set-Membership Partial-Update AP} algorithm with $L=64$ and $69$. Figure \ref{fig:sim1-icassp} also depicts a blue curve obtained with correlated inputs and $L=1$; for this curve, all system specifications are as described above, the only difference being the input signal. The correlated input signal is chosen as $x(k)=0.95x(k-1)+0.19x(k-2)+0.09x(k-3)-0.5x(k-1)+m(k-4)$, where $m(k)$ is a zero-mean Gaussian noise with unit variance.
\begin{figure}[t!]
\begin{center}
\includegraphics[width=1\linewidth] {Figs/sim-icassp.pdf}
\caption{Learning curves of the I-SM-PUAP and the SM-PUAP algorithms applied on system identification problem.}
\label{fig:sim1-icassp}
\end{center}
\end{figure}
The average number of updates performed by the I-SM-PUAP\abbrev{I-SM-PUAP}{Improved SM-PUAP} algorithm is 8.3$\%$ and 6.5$\%$ for $L=1$ and 4, respectively, and 20$\%$ in the case of the correlated input signal. The average number of updates implemented by the SM-PUAP\abbrev{SM-PUAP}{Set-Membership Partial-Update AP} algorithm is 14$\%$ and 25$\%$ for $L=69$ and 64, respectively. Note that in both algorithms we have to invert an $(L+1)\times (L+1)$ matrix, thus a large $L$ implies high computational complexity. Therefore, the I-SM-PUAP\abbrev{I-SM-PUAP}{Improved SM-PUAP} algorithm requires a shorter implementation time since it converges fast even for a small value of $L$. Also, it is worth mentioning that for $L<64$ the SM-PUAP\abbrev{SM-PUAP}{Set-Membership Partial-Update AP} algorithm does not reach its steady state in 10000 iterations. From the results, we can observe that the proposed I-SM-PUAP\abbrev{I-SM-PUAP}{Improved SM-PUAP} algorithm has a faster convergence speed and a lower number of updates as compared to the SM-PUAP\abbrev{SM-PUAP}{Set-Membership Partial-Update AP} algorithm.
\subsection{Scenario 2}
In this section, we perform the equalization of a channel with the following impulse response
\begin{align}
\hbf=[1~2~3~4~4~3~2~1]^T.
\end{align}
We use a known training signal consisting of independent binary samples $(-1,1)$; additional Gaussian white noise with variance 0.01 is present at the channel output. The I-SM-PUAP\abbrev{I-SM-PUAP}{Improved SM-PUAP} and the SM-PUAP\abbrev{SM-PUAP}{Set-Membership Partial-Update AP} algorithms are applied to find the impulse response of an equalizer of order 80. The delay in the reference signal is selected as 45. The parameter $\gammabar$ is chosen as $\sqrt{25\sigma_n^2}$, and $\gammabf(k)$ is the simple choice constraint vector used in Scenario 1. The regularization constant, $\delta$, is $10^{-12}$ and $\wbf(0)=[1~\cdots~1]^T$. All learning curves are averaged over 100 trials. At each iteration, half of the filter coefficients are randomly selected for updating through ${\cal I}_M(k)$. The memory length, $L$, is 3.
Figure~\ref{fig:Equalization-icassp} shows the learning curves for the I-SM-PUAP\abbrev{I-SM-PUAP}{Improved SM-PUAP} and the SM-PUAP\abbrev{SM-PUAP}{Set-Membership Partial-Update AP} algorithms. The convolution of the equalizer impulse response at a given iteration after convergence with the channel impulse response is shown in Figure~\ref{fig:Convolution-icassp}. The average number of updates implemented by the I-SM-PUAP\abbrev{I-SM-PUAP}{Improved SM-PUAP} and the SM-PUAP\abbrev{SM-PUAP}{Set-Membership Partial-Update AP} algorithms are 61$\%$ and 82$\%$, respectively. As can be seen, the I-SM-PUAP\abbrev{I-SM-PUAP}{Improved SM-PUAP} algorithm has lower MSE\abbrev{MSE}{Mean-Squared Error} and lower number of updates compared to the SM-PUAP\abbrev{SM-PUAP}{Set-Membership Partial-Update AP} algorithm.
\begin{figure}[t!]
\centering
\subfigure[b][]{\includegraphics[width=.48\linewidth,height=7cm]{Figs/Equalization-icassp.pdf}
\label{fig:Equalization-icassp}}
\subfigure[b][]{\includegraphics[width=.48\linewidth,height=7cm]{Figs/Convolution-icassp.pdf}
\label{fig:Convolution-icassp}}
\caption{(a) Learning curves of the I-SM-PUAP and the SM-PUAP algorithms performing the equalization of a channel; (b) convolution results. \label{fig:Equalization}}
\end{figure}
\section{Conclusions} \label{sec:conclusion-icassp}
In this chapter, we have introduced the improved set-membership partial-update affine projection (I-SM-PUAP) algorithm, aiming at accelerating the convergence rate of the set-membership partial-update affine projection (SM-PUAP) algorithm while achieving lower computational complexity and a reduced number of updates. To achieve this goal, we use the distance between the present weight vector and the one obtained with the SM-AP\abbrev{SM-AP}{Set-Membership Affine Projection} update to define a hypersphere that upper bounds the coefficient disturbance.
Numerical simulations for the system identification and channel equalization problems have confirmed that the I-SM-PUAP\abbrev{I-SM-PUAP}{Improved SM-PUAP} algorithm not only has a faster convergence rate but also requires a lower number of updates as compared to the SM-PUAP\abbrev{SM-PUAP}{Set-Membership Partial-Update AP} algorithm.
\chapter{Adaptive Filtering Algorithms for Sparse System Modeling}
Adaptive filtering applied to signals originating from time-varying systems finds applications in a wide diversity of areas such as communications, control, radar, acoustics, and speech processing.
Nowadays, it is well known that many types of signal or system parameters admit sparse representation in a certain domain. However, classical adaptive algorithms such as the least-mean-square (LMS)\abbrev{LMS}{Least-Mean-Square}, the normalized LMS (NLMS)\abbrev{NLMS}{Normalized LMS}, the affine projection (AP)\abbrev{AP}{Affine Projection}, and the recursive least-squares (RLS)\abbrev{RLS}{Recursive Least-Squares} do not take into consideration the sparsity in the signal or system models.
Recently, it has been understood that by exploiting appropriately signal sparsity, significant improvement in convergence rate and steady-state performance can be achieved. As a consequence, many extensions of the classical algorithms were proposed aiming at exploiting sparsity. One of the most widely used approaches consists in updating each filter coefficient using a step-size proportional to its magnitude in order to speed up the convergence rate of the coefficients with large magnitudes. This approach led to the development of a family of algorithms known as {\it proportionate}~\cite{Duttweiler_PNLMS_tsap2000,Benesty_IPNLMS_icassp2002,Gay_pnlmsPlusPlus_acssc1998,Diniz_sm_pap_jasmp2007,Paleologu_papaEcho_spl2010}. Another interesting approach to exploit sparsity is to include a {\it sparsity-promoting penalty} (sometimes called regularization) function into the original optimization problem of classical algorithms~\cite{Markus-phdthesis}. Within this approach, most algorithms employ the $l_1$ norm as the sparsity-promoting
penalty~\cite{Vitor_SparsityAwareAPA_sspd2011,Theodoridis_l1ball_tsp2011,Chen_sparseLMS_icassp2009,Babadi_Sparse_RLS_tsp2010}, but recently an approximation to the $l_0$ norm has shown some
advantages~\cite{Markus_sparseSMAP_tsp2014,Markus_apssiAnalysis_icassp2014,Markus_apssi_icassp2013,Gu_l0_LMS_SPletter2009}. In addition, these two approaches were combined and tested
in~\cite{Pelekanakis2012,Markus_proportionatePlusPenalty_iscas2016} yielding interesting results. Observe that in all of the aforementioned approaches something is being included/added to the classical algorithms, thus entailing an increase in their computational complexity.
In this chapter, we use a different strategy to exploit sparsity.
Instead of including additional features in the algorithm, as the techniques described in the previous paragraph, we actually discard some coefficients, thus reducing the computational burden.
This idea is motivated by the existence of some uncertainty in the coefficients in practical applications. Indeed, a measured sparse impulse response of a system presents a few coefficients concentrating most of the energy, whereas the other coefficients are close to zero, but not precisely equal to zero~\cite{Markus_sparseSMAP_tsp2014}~\footnote{A system whose impulse response presents this characteristic is formally known as a {\it compressible system}~\cite{Markus-phdthesis}.}. Thus, if we have some prior information about the uncertainty in those parameters, then we can replace the parameters which are ``lower than'' this uncertainty with zero (i.e., discard the coefficients) in order to save computational resources.
In addition to this new way of exploiting sparsity, we also employ the set-membership filtering (SMF) approach \cite{Gollamudi_smf_letter1998,Diniz_adaptiveFiltering_book2013} in order to generate the {Simple Set-Membership Affine Projection} (S-SM-AP)\abbrev{S-SM-AP}{Simple SM-AP} algorithm, which is mostly the combination of the
set-membership affine projection algorithm~\cite{Werner_sm_ap_letter2001} with our strategy to exploit sparsity. The SMF\abbrev{SMF}{Set-Membership Filtering} approach is used just to reduce the computational burden even further since the filter coefficients are updated only when the estimation error is greater than a predetermined threshold.
Moreover, we derive the improved S-SM-AP\abbrev{S-SM-AP}{Simple SM-AP} (IS-SM-AP)\abbrev{IS-SM-AP}{Improved S-SM-AP} algorithm to reduce the overall number of computations required by the S-SM-AP\abbrev{S-SM-AP}{Simple SM-AP} algorithm even further by replacing small coefficients with zero. Also, we obtain the simple affine projection (S-AP)\abbrev{S-AP}{Simple AP} and the improved S-AP (IS-AP)\abbrev{IS-AP}{Improved S-AP} algorithms as special cases of the S-SM-AP\abbrev{S-SM-AP}{Simple SM-AP} and the IS-SM-AP\abbrev{IS-SM-AP}{Improved S-SM-AP} algorithms, respectively. The S-AP\abbrev{S-AP}{Simple AP} and the IS-AP\abbrev{IS-AP}{Improved S-AP} algorithms do not resort to the SMF concept and can be regarded as affine projection algorithms for sparse systems.
Finally, we introduce some sparsity-aware RLS\abbrev{RLS}{Recursive Least-Squares} algorithms employing the discard function and the $l_0$ norm approximation. The first proposed algorithm, the RLS for sparse systems (S-RLS)\abbrev{S-RLS}{RLS Algorithm for Sparse System}, sets low weights to the coefficients close to zero and exploits system sparsity with low computational complexity. On the other hand, the second algorithm, the $l_0$ norm RLS ($l_0$-RLS)\abbrev{$l_0$-RLS}{$l_0$ Norm RLS}, has higher computational complexity in comparison with the S-RLS\abbrev{S-RLS}{RLS Algorithm for Sparse System} algorithm. For both algorithms, in order to reduce the computational load further, we apply a data-selective strategy~\cite{Gollamudi_smf_letter1998} leading to the data-selective S-RLS (DS-S-RLS)\abbrev{DS-S-RLS}{Data-Selective S-RLS} and the data-selective $l_0$-RLS (DS-$l_0$-RLS)\abbrev{DS-$l_0$-RLS}{Data-Selective $l_0$-RLS} algorithms. That is, the proposed algorithms update the weight vector if the output estimation error is larger than a prescribed value. By applying the data-selective strategy, both algorithms attain lower computational complexity compared to the RLS\abbrev{RLS}{Recursive Least-Squares} algorithm.
The content of this chapter was published in~\cite{Hamed_eusipco2016,Hamed_S_RLS_ICASSP2017}. In Sections~\ref{sec:SSM-AP-eusipco} and \ref{sec:SM-PAPA-eusipco}, we review the sparsity-aware SM-AP (SSM-AP)\abbrev{SSM-AP}{Sparsity-Aware SM-AP} algorithm and the set-membership proportionate AP algorithm (SM-PAPA)\abbrev{SM-PAPA}{Set-Membership Proportionate AP Algorithm}, respectively. The proposed {S-SM-AP}\abbrev{S-SM-AP}{Simple SM-AP} algorithm is derived in Section~\ref{sec:ss-sm-ap-eusipco}. Sections~\ref{sec:s-rls-sparse} and~\ref{sec:l0-rls-sparse} propose the S-RLS\abbrev{S-RLS}{RLS Algorithm for Sparse System} and the $l_0$-RLS\abbrev{$l_0$-RLS}{$l_0$ Norm RLS} algorithms, respectively. Simulations are presented in Section~\ref{sec:simulations-eusipco} and
Section~\ref{sec:conclusions-eusipco} contains the conclusions.
\section{Sparsity-Aware SM-AP Algorithm}\label{sec:SSM-AP-eusipco}
In the literature, a common method to deal with sparsity is to add a penalty function to the original objective function \cite{Vitor_SparsityAwareAPA_sspd2011,Markus-phdthesis,Markus_sparseSMAP_tsp2014,Markus_apssiAnalysis_icassp2014,Markus_apssi_icassp2013}. This penalty function is generally related to the $l_0$ or $l_1$ norms. Utilizing the $l_0$ norm directly is problematic since it leads to an NP-hard problem. Therefore, we approximate the $l_0$ norm by {\it almost everywhere} differentiable functions, so that stochastic gradient methods can be applied to solve the optimization problem. In other words, the $l_0$ norm can be approximated by a continuous function $G_\beta:\mathbb{R}^{N+1}\rightarrow\mathbb{R}_+$, where $\beta\in\mathbb{R}_+$ is a parameter controlling the trade-off between the quality of the approximation and the smoothness of $G_\beta$. This function must satisfy the following condition~\cite{Markus-phdthesis,Markus_sparseSMAP_tsp2014} \symbl{$G_\beta$}{Continuous and almost everywhere differentiable function that approximates the $l_0$ norm; $\beta$ controls the quality of the approximation}
\begin{align}
\lim_{\beta\rightarrow\infty}G_\beta(\wbf)=\|\wbf\|_0,
\end{align}
where $\|\cdot\|_0$ denotes the $l_0$ norm which, for $\wbf\in\mathbb{R}^{N+1}$, is defined as $\|\wbf\|_0\triangleq\#\{i\in\mathbb{N}:~w_i\neq0\}$, in which $\#$ stands for the cardinality of a finite set.
Here we present four examples of function $G_\beta$~\cite{Markus-phdthesis,Markus_sparseSMAP_tsp2014}
\begin{subequations}
\begin{align}
{\rm LF:~}G_\beta(\wbf)&=\sum_{i=0}^N(1-e^{-\beta|w_i|}), \label{eq:mult_Laplace}\\
{\rm MLF:~}G_\beta(\wbf)&=\sum_{i=0}^N(1-e^{-0.5\beta^2w^2_i}), \label{eq:modf_mult_Laplace}\\
{\rm GMF:~}G_\beta(\wbf)&=\sum_{i=0}^N(1-\frac{1}{1+\beta|w_i|}), \label{eq:mult_Geman}\\
{\rm MGMF:~}G_\beta(\wbf)&=\sum_{i=0}^N(1-\frac{1}{1+\beta^2w^2_i}). \label{eq:modf_mult_Geman}
\end{align}
\end{subequations}
The functions expressed in Equations~\eqref{eq:mult_Laplace} and~\eqref{eq:mult_Geman} are called the multivariate Laplace function (LF)\abbrev{LF}{Laplace Function} and the multivariate Geman-McClure function (GMF)\abbrev{GMF}{Geman-McClure Function}, respectively. Equations~\eqref{eq:modf_mult_Laplace} and~\eqref{eq:modf_mult_Geman} are modifications of the LF\abbrev{LF}{Laplace Function} and the GMF\abbrev{GMF}{Geman-McClure Function}, respectively, so that they have continuous derivatives too. Figure~\ref{fig:sparsity_Functions} shows the univariate Laplace and Geman-McClure functions for $\beta=5$.
\begin{figure}[t!]
\centering
\subfigure[b][]{\includegraphics[width=.48\linewidth,height=7cm]{Figs/Laplace_sparse.pdf}
\label{fig:Laplace_sparse}}
\subfigure[b][]{\includegraphics[width=.48\linewidth,height=7cm]{Figs/Geman_sparse.pdf}
\label{fig:Geman_sparse}}
\caption{Univariate functions $G_\beta(w)$, with $w\in[-1,1]$ and $\beta=5$: (a) LF; (b) GMF. \label{fig:sparsity_Functions}}
\end{figure}
The gradient of $G_\beta$ is defined as follows \symbl{$\gbf_\beta(\wbf)$}{Gradient of $G_\beta(\wbf)$ with respect to $\wbf$}
\begin{align}
\nabla G_\beta(\wbf)\triangleq\gbf_\beta(\wbf)\triangleq[g_\beta(w_0)~\cdots~g_\beta(w_N)]^T,
\end{align}
where $g_\beta(w_i)=\frac{\partial G_\beta(\wbf)}{\partial w_i}$. Note that~\eqref{eq:mult_Laplace} and~\eqref{eq:mult_Geman} are not differentiable at the origin, thus we define their derivatives at the origin equal to zero. The derivatives corresponding to~\eqref{eq:mult_Laplace}-\eqref{eq:modf_mult_Geman} are, respectively,
\begin{subequations}
\begin{align}
g_\beta(w_i)&=\beta {\rm sgn}(w_i){\rm e}^{-\beta|w_i|},\\ g_\beta(w_i)&=\beta^2w_i{\rm e}^{-0.5\beta^2w_i^2},\\
g_\beta(w_i)&=\frac{\beta{\rm sgn}(w_i)}{(1+\beta|w_i|)^2},\\ g_\beta(w_i)&=\frac{2\beta^2w_i}{(1+\beta^2w_i^2)^2},
\end{align}
\end{subequations}
where ${\rm sgn}(\cdot)$ denotes the sign function. \symbl{${\rm sgn}(\cdot)$}{The sign function} The interested reader can find the details of approximating the $l_0$ norm in~\cite{Markus_sparseSMAP_tsp2014}.
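As an illustration, a NumPy sketch of the multivariate Laplace pair in Equation~\eqref{eq:mult_Laplace} and its gradient follows; note that np.sign already returns zero at the origin, matching the convention adopted above.
\begin{verbatim}
import numpy as np

def G_beta_laplace(w, beta):
    # Multivariate Laplace approximation of the l0 norm.
    return np.sum(1.0 - np.exp(-beta * np.abs(w)))

def g_beta_laplace(w, beta):
    # Elementwise derivative g_beta(w_i); zero at w_i = 0.
    return beta * np.sign(w) * np.exp(-beta * np.abs(w))
\end{verbatim}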
The SSM-AP\abbrev{SSM-AP}{Sparsity-Aware SM-AP} algorithm performs an update whenever $|e(k)|=|d(k)-\wbf^T(k)\xbf(k)|>\gammabar$, following an update recursion that is an approximation of the solution to the optimization problem \cite{Markus_sparseSMAP_tsp2014}
\begin{align}
&\min \|\wbf(k+1)-\wbf(k)\|_2^2+\alpha\|\wbf(k+1)\|_0\nonumber\\
&{\rm subject~to}\nonumber\\
&\dbf(k)-\Xbf^T(k)\wbf(k+1)=\gammabf(k),\label{eq:ssm_ap_optimization-eusipco}
\end{align}
where $\alpha\in\mathbb{R}_+$ denotes the weight given to the $l_0$ norm.
After replacing the $l_0$ norm with its approximation $G_\beta$ and using the method of Lagrange multipliers, the updating equation of the SSM-AP\abbrev{SSM-AP}{Sparsity-Aware SM-AP} algorithm is obtained as follows \cite{Markus_sparseSMAP_tsp2014}
\begin{align}
\wbf(k+1)=
\left\{\begin{array}{ll}\wbf(k)+\Xbf(k)\Abf(k)[\ebf(k)-\gammabf(k)]&\\+\frac{\alpha}{2}[\Xbf(k)\Abf(k)\Xbf^T(k)-\Ibf]\gbf_\beta(\wbf(k))&\text{if }|e(k)|>\gammabar,\\\wbf(k)&\text{otherwise},\end{array}\right. \label{eq:update_rule_ssm-ap}
\end{align}
where $\Abf(k)=(\Xbf^T(k)\Xbf(k))^{-1}$.
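A compact sketch of this recursion, reusing the Laplace-function gradient from the sketch above, is shown below; we add a small regularization $\delta\Ibf$ inside the inverse as a numerical safeguard, which is our choice and not part of the original recursion.
\begin{verbatim}
import numpy as np

def ssm_ap_update(w, X, d, gamma, gamma_bar, alpha, beta, delta=1e-12):
    e = d - X.T @ w
    if abs(e[0]) <= gamma_bar:
        return w
    A = np.linalg.inv(X.T @ X + delta * np.eye(X.shape[1]))  # A(k)
    g = beta * np.sign(w) * np.exp(-beta * np.abs(w))        # g_beta(w(k))
    return (w + X @ A @ (e - gamma)
              + 0.5 * alpha * (X @ A @ X.T - np.eye(len(w))) @ g)
\end{verbatim}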
\section{Set-Membership Proportionate AP Algorithm}\label{sec:SM-PAPA-eusipco}
The sparsity of the signals in some applications motivates us to update each coefficient of the model independently of the others. Therefore, in adaptive filtering, one of the most widely used methods to exploit sparsity is to implement coefficient updates that are proportional to the magnitude of the related coefficients. Thus, the coefficients with large magnitudes take larger update steps and, as a result, the overall convergence speed increases~\cite{Benesty_IPNLMS_icassp2002}. This approach leads to a well-known family of algorithms called proportionate. A considerable number of algorithms utilizing the proportionate approach have already been introduced in the literature. Some of them are the proportionate NLMS\abbrev{NLMS}{Normalized LMS} (PNLMS)~\cite{Duttweiler_PNLMS_tsap2000}\abbrev{PNLMS}{Proportionate Normalized LMS}, the proportionate AP algorithm (PAPA)~\cite{Paleologu_papaEcho_spl2010},\abbrev{PAPA}{Proportionate Affine Projection Algorithm} and their set-membership counterparts~\cite{Diniz_sm_pap_jasmp2007}. In this section, we review the set-membership PAPA (SM-PAPA)\abbrev{SM-PAPA}{Set-Membership Proportionate AP Algorithm}. The optimization criterion of the SM-PAPA\abbrev{SM-PAPA}{Set-Membership Proportionate AP Algorithm} when it implements an update (i.e., when $|e(k)|>\gammabar$) is given by
\begin{align}
&\min \|\wbf(k+1)-\wbf(k)\|^2_{\Mbf^{-1}(k)}\nonumber\\
&{\rm subject~to}\nonumber\\
&\dbf(k)-\Xbf^T(k)\wbf(k+1)=\gammabf(k).\label{eq:sm-papa_optimization-eusipco}
\end{align}
The norm in this optimization criterion is defined as $\|\wbf\|^2_{\Mbf}\triangleq\wbf^T\Mbf\wbf$ and matrix $\Mbf(k)$ is a diagonal weighting matrix of the form
\begin{align}
\Mbf(k)\triangleq{\rm diag}[m_0(k)~\cdots~m_N(k)],
\end{align}
where
\begin{align}
m_i(k)\triangleq\frac{1-r\mu(k)}{N}+\frac{r\mu(k)|w_i(k)|}{\|\wbf(k)\|_1},
\end{align}
with
\begin{align}
\mu(k)=\left\{\begin{array}{ll}1-\frac{\gammabar}{|e(k)|}&\text{if }|e(k)|>\gammabar,\\0&\text{otherwise},\end{array}\right.
\end{align}
and $r\in[0,1]$. Also, $\|\cdot\|_1$ stands for the $l_1$ norm, which for $\wbf\in\mathbb{R}^{N+1}$ is defined as $\|\wbf\|_1=\sum_{i=0}^N|w_i|$. Utilizing the method of Lagrange multipliers to solve (\ref{eq:sm-papa_optimization-eusipco}), the update equation of the SM-PAPA\abbrev{SM-PAPA}{Set-Membership Proportionate AP Algorithm} is obtained as follows \cite{Diniz_sm_pap_jasmp2007}
\begin{align}
&\wbf(k+1)=\nonumber\\
&\left\{\begin{array}{ll}\wbf(k)+\Mbf(k)\Xbf(k)[\Xbf^T(k)\Mbf(k)\Xbf(k)]^{-1}[\ebf(k)-\gammabf(k)]&\text{if }|e(k)|>\gammabar,\\\wbf(k)&\text{otherwise}.\end{array}\right.
\end{align}
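For reference, the proportionate weighting matrix can be sketched as follows; the function name is ours.
\begin{verbatim}
import numpy as np

def sm_papa_weights(w, e_k, gamma_bar, r):
    # Diagonal entries m_i(k) of the weighting matrix M(k).
    # (Assumes w is not the zero vector, so ||w||_1 > 0.)
    N = len(w) - 1                           # filter order
    mu = 1.0 - gamma_bar / abs(e_k) if abs(e_k) > gamma_bar else 0.0
    m = (1.0 - r * mu) / N + r * mu * np.abs(w) / np.sum(np.abs(w))
    return np.diag(m)                        # M(k)
\end{verbatim}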
\section{{A Simple Set-Membership Affine Projection Algorithm}} \label{sec:ss-sm-ap-eusipco}
In the previous sections, we have observed that exploiting sparsity requires a higher number of arithmetic operations compared to the SM-AP algorithm, which does not exploit sparsity. Here we introduce a new algorithm that exploits sparsity with low computational complexity. In this algorithm, instead of including/adding something to the classical algorithms, we discard the coefficients close to zero.
In Subsection~\ref{sub:derivation-eusipco}, we propose {a Simple Set-Membership Affine Projection (S-SM-AP)}\abbrev{S-SM-AP}{Simple SM-AP} algorithm that exploits the sparsity of the involved system with low computational complexity. For this purpose, the strategy consists in not updating the coefficients of the sparse filter which are close to zero. Then, in Subsection~\ref{sub:discussion-eusipco}, we include a discussion of some characteristics of the proposed algorithm. In Subsection~\ref{sub:Modified-eusipco}, we introduce an improved version of the proposed algorithm aiming at reducing the computational burden even further. Finally, in Subsection~\ref{sub:non-SM-eusipco}, we derive the S-AP\abbrev{S-AP}{Simple AP} and IS-AP\abbrev{IS-AP}{Improved S-AP} algorithms by not employing the SMF\abbrev{SMF}{Set-Membership Filtering} technique.
\subsection{Derivation of the {S-SM-AP} algorithm \label{sub:derivation-eusipco}}
Let us define the {\it discard function} $f_\epsilon:\mathbb{R}\rightarrow\mathbb{R}$ for the
positive constant $\epsilon$ as follows \symbl{$f_\epsilon(\cdot)$}{Discard function; $\epsilon$ defines what is considered as close to zero}
\begin{align}
f_\epsilon(w)=\left\{\begin{array}{ll}w&{\rm if~} |w|> \epsilon, \\0&{\rm if~} |w|\leq \epsilon. \end{array}\right.\label{eq:f_epsilon-eusipco}
\end{align}
That is, function $f_\epsilon$ discards the values of $w$ which are close to zero.
The parameter $\epsilon$ defines what is considered as close to zero and, therefore, should be chosen
based on some {\it a priori} information about the relative importance of a coefficient to the sparse system.
Figure \ref{fig:f-a-eusipco} depicts the function $f_\epsilon(w)$ for $\epsilon=10^{-4}$. Note that the function $f_\epsilon(w)$ is not differentiable at $\pm \epsilon$; however, we
need to differentiate this function in order to derive the {S-SM-AP}\abbrev{S-SM-AP}{Simple SM-AP} algorithm.
To address this issue, we define the derivatives of $f_\epsilon(w)$ at $+\epsilon$ and $-\epsilon$
as equal to the left and the right derivatives, respectively.
Thus, the derivative of $f_\epsilon(w)$ at $\pm \epsilon$ is zero.
Define the {\it discard vector function}
$\fbf_\epsilon:\mathbb{R}^{N+1}\rightarrow\mathbb{R}^{N+1}$ as \symbl{$\fbf_\epsilon(\cdot)$}{Discard vector function}
\begin{align}
\fbf_\epsilon(\wbf)=[f_\epsilon(w_0)~\cdots~f_\epsilon(w_N)]^T. \label{eq:discard vector function}
\end{align}
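Both the discard vector function and its Jacobian, used in the derivation below, admit a short NumPy realization (a sketch; the names are ours):
\begin{verbatim}
import numpy as np

def f_eps(w, eps):
    # Discard vector function: zeroes the entries with |w_i| <= eps.
    return np.where(np.abs(w) > eps, w, 0.0)

def F_eps(w, eps):
    # Jacobian of f_eps: a diagonal 0/1 matrix, hence equal to its
    # own Moore-Penrose pseudoinverse.
    return np.diag((np.abs(w) > eps).astype(float))
\end{verbatim}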
\begin{figure}[t!]
\centering
\includegraphics[width=0.5\linewidth]{Figs/f_a.pdf}
\caption{Discard function $f_\epsilon(w)$ for $\epsilon=10^{-4}$.\label{fig:f-a-eusipco}}
\end{figure}
The {S-SM-AP}\abbrev{S-SM-AP}{Simple SM-AP} algorithm updates the coefficients whose absolute values are larger than
$\epsilon$ whenever the error is such that $| e(k) | = |d(k) - \wbf^T(k) \xbf(k)|>\gammabar$.
Let $\psi^{L+1}(k)$ denote the intersection of the last $L+1$ constraint sets and state the following optimization criterion for the vector update whenever $\wbf(k)\not\in\psi^{L+1}(k)$
\begin{align}
&\min \frac{1}{2}\|\fbf_\epsilon(\wbf(k+1))-\wbf(k)\|^2 \nonumber\\
&{\rm subject~to}\nonumber\\
&\dbf(k)-\Xbf^T(k)\wbf(k+1)=\gammabf(k).\label{eq:ssm_optimization-eusipco}
\end{align}
In order to solve this optimization problem, we construct the Lagrangian $\mathbb{L}$ as
\begin{align}
\mathbb{L}=\frac{1}{2}\|\fbf_\epsilon(\wbf(k+1))-\wbf(k)\|^2
+\lambdabf^T(k)[\dbf(k)-\Xbf^T(k)\wbf(k+1)-\gammabf(k)],
\end{align}
where $\lambdabf(k)\in\mathbb{R}^{L+1}$ is a vector of Lagrange multipliers. After differentiating the above equation with respect to $\wbf(k+1)$ and setting the result
equal to zero, we obtain
\begin{align}
\fbf_\epsilon(\wbf(k+1))=\wbf(k)+\Fbf_\epsilon^{-1}(\wbf(k+1))\Xbf(k)\lambdabf(k),\label{eq:F_a(w(k+1))-eusipco}
\end{align}
where $\Fbf_\epsilon(\wbf(k+1))$ is the Jacobian matrix of $\fbf_\epsilon(\wbf(k+1))$. \symbl{$\Fbf_\epsilon(\wbf)$}{The Jacobian matrix of $\fbf_\epsilon(\wbf)$} In Equation (\ref{eq:F_a(w(k+1))-eusipco}), by employing a strategy similar to that of the PASTd\abbrev{PASTd}{Projection Approximation Subspace Tracking with Deflation} (projection approximation subspace tracking with deflation)~\cite{Wang_WirelessCommunicationSystems_book2004}, we replace $\fbf_\epsilon(\wbf(k+1))$ and $\Fbf_\epsilon^{-1}(\wbf(k+1))$ with $\wbf(k+1)$ and $\Fbf_\epsilon^{-1}(\wbf(k))$, respectively, in order to form the recursion, and we obtain
\begin{align}
\wbf(k+1)=\wbf(k)+\Fbf_\epsilon^{-1}(\wbf(k))\Xbf(k)\lambdabf(k).\label{eq:w(k+1)-ssm-eusipco}
\end{align}
If we substitute the above equation in the constraint relation (\ref{eq:ssm_optimization-eusipco}), then we will find $\lambdabf(k)$ as follows
\begin{align}
\lambdabf(k)=(\Xbf^T(k)\Fbf_\epsilon^{-1}(\wbf(k))\Xbf(k))^{-1}(\ebf(k)-\gammabf(k)).\label{eq:lambda-ssm-eusipco}
\end{align}
Replacing (\ref{eq:lambda-ssm-eusipco}) into (\ref{eq:w(k+1)-ssm-eusipco}) leads to the following updating equation
\begin{align}
\wbf(k+1)&=\wbf(k)\nonumber\\
&+\Fbf_\epsilon^{-1}(\wbf(k))\Xbf(k)(\Xbf^T(k)\Fbf_\epsilon^{-1}(\wbf(k))\Xbf(k))^{-1}(\ebf(k)-\gammabf(k)).
\end{align}
Note that $\Fbf_\epsilon(\wbf(k))$ is not invertible in general and, therefore,
we apply the Moore-Penrose pseudoinverse (a generalization of the matrix inverse)
instead of the standard inverse. However, $\Fbf_\epsilon(\wbf(k))$ is a diagonal matrix with diagonal entries
equal to zero or one.
Indeed, for the components of $\wbf(k)$ whose absolute values are larger
than $\epsilon$, the corresponding entries of the diagonal matrix $\Fbf_\epsilon(\wbf(k))$
are equal to one, whereas the remaining entries are zero.
Hence, the pseudoinverse of $\Fbf_\epsilon(\wbf(k))$ is $\Fbf_\epsilon(\wbf(k))$ itself.
As a result, the update equation of the {S-SM-AP}\abbrev{S-SM-AP}{Simple SM-AP} algorithm is as follows
\begin{align}
\wbf(k+1)=\left\{\begin{array}{ll}\wbf(k)+\qbf(k)&\text{if }|e(k)|>\gammabar,\\\wbf(k)&\text{otherwise},\end{array}\right. \label{eq:update_equation-eusipco}
\end{align}
where
\begin{align}
\qbf(k)=\Fbf_\epsilon(\wbf(k))\Xbf(k)[\Xbf^T(k)\Fbf_\epsilon(\wbf(k))\Xbf(k)+\delta\Ibf]^{-1}(\ebf(k)-\gammabf(k)). \label{eq:q(k)-eusipco}
\end{align}
Note that we applied a regularization factor $\delta\Ibf$ in (\ref{eq:q(k)-eusipco}) in order to avoid numerical problems in the matrix inversion. The S-SM-AP algorithm is described in Table~\ref{tb:S-SM-AP-chap6}.
\begin{table}[t!]
\caption{Simple set-membership affine projection algorithm (S-SM-AP)}
\begin{center}
\begin{footnotesize}
\begin {tabular}{|l|} \hline\\ \hspace{3.4cm}{\bf S-SM-AP Algorithm}\\
\\
\hline\\
Initialization
\\
$\wbf(0)=[1~1~\cdots~1]^T$\\
choose $\gammabar$ around $\sqrt{5\sigma_n^2}$ and small constant $\delta>0$\\
Do for $k>0$\\
\hspace*{0.3cm} $\ebf(k)=\dbf(k)-\Xbf^T(k)\wbf(k)$\\
\hspace*{0.3cm} if $|e(k)|>\gammabar$\\
\hspace*{0.45cm} $\qbf(k)=\Fbf_\epsilon(\wbf(k))\Xbf(k)[\Xbf^T(k)\Fbf_\epsilon(\wbf(k))\Xbf(k)+\delta\Ibf]^{-1}(\ebf(k)-\gammabf(k))$\\
\hspace*{0.45cm} $\wbf(k+1)= \wbf(k)+\qbf(k)$\\
\hspace*{0.3cm} else\\
\hspace*{0.45cm} $\wbf(k+1)= \wbf(k)$\\
\hspace*{0.3cm} end\\
end\\
\\
\hline
\end {tabular}
\end{footnotesize}
\end{center}
\label{tb:S-SM-AP-chap6}
\end{table}
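The table above maps directly onto the following NumPy sketch; all names are our own illustrative choices.
\begin{verbatim}
import numpy as np

def s_sm_ap_update(w, X, d, gamma, gamma_bar, eps, delta=1e-12):
    # One S-SM-AP iteration (a sketch of the table above).
    e = d - X.T @ w
    if abs(e[0]) <= gamma_bar:
        return w
    F = np.diag((np.abs(w) > eps).astype(float))   # F_eps(w(k))
    q = (F @ X @ np.linalg.inv(X.T @ F @ X + delta * np.eye(X.shape[1]))
           @ (e - gamma))
    return w + q
\end{verbatim}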
\subsection{Discussion of the {S-SM-AP} algorithm \label{sub:discussion-eusipco}}
\subsubsection{Computational Complexity}
The update equation of the {S-SM-AP}\abbrev{S-SM-AP}{Simple SM-AP} algorithm is similar to that of
the SM-AP\abbrev{SM-AP}{Set-Membership Affine Projection} algorithm, but the former updates only the subset of coefficients of $\wbf(k)$
whose absolute values are larger than $\epsilon$. As a result, the role of the matrix $\Fbf_\epsilon(\wbf(k))$ is to discard some coefficients
of $\wbf(k)$, thus reducing the computational complexity when compared to the SM-AP\abbrev{SM-AP}{Set-Membership Affine Projection} algorithm.
The computational complexity for each update of the weight vector of the
SM-PAPA~\cite{Diniz_sm_pap_jasmp2007}\abbrev{SM-PAPA}{Set-Membership Proportionate AP Algorithm},
the SSM-AP~\cite{Markus_sparseSMAP_tsp2014}\abbrev{SSM-AP}{Sparsity-Aware SM-AP},
and the proposed {S-SM-AP}\abbrev{S-SM-AP}{Simple SM-AP} algorithms is listed in Table~\ref{tab1-eusipco}, where $N$ and $L$ denote the filter order and the memory length, respectively.
It should be noted that the number of operations in Table~\ref{tab1-eusipco} is presented for
the full update of all coefficients.
In other words, for the {S-SM-AP}\abbrev{S-SM-AP}{Simple SM-AP} algorithm we have presented the worst case scenario
which is equivalent to setting $\epsilon=0$,\footnote{In this case, the complexity of the
{S-SM-AP}\abbrev{S-SM-AP}{Simple SM-AP} and SM-AP\abbrev{SM-AP}{Set-Membership Affine Projection} algorithms are the same.}
while in practice we are updating only the coefficients with absolute values larger than
a predetermined positive constant. It is also notable that the number of divisions in
the {S-SM-AP}\abbrev{S-SM-AP}{Simple SM-AP} algorithm is smaller than in the SM-PAPA\abbrev{SM-PAPA}{Set-Membership Proportionate AP Algorithm} and SSM-AP\abbrev{SSM-AP}{Sparsity-Aware SM-AP} algorithms. This is quite significant, as divisions are more complex than
other operations. Figures \ref{fig:Complexity_L_eusipco} and \ref{fig:Complexity_N_eusipco} show a comparison of the total number of arithmetic operations required by the SM-PAPA\abbrev{SM-PAPA}{Set-Membership Proportionate AP Algorithm}, the SSM-AP\abbrev{SSM-AP}{Sparsity-Aware SM-AP}, and the S-SM-AP\abbrev{S-SM-AP}{Simple SM-AP} algorithms for two cases: $N=15$ with variable $L$, and $L=3$ with variable $N$. As can be seen, the S-SM-AP\abbrev{S-SM-AP}{Simple SM-AP} algorithm is much less complex than the other two algorithms, especially for large values of $N$ and $L$.
\begin{figure}[t!]
\centering
\subfigure[b][]{\includegraphics[width=.48\linewidth,height=7cm]{Figs/Complexity_L_eusipco.pdf}
\label{fig:Complexity_L_eusipco}}
\subfigure[b][]{\includegraphics[width=.48\linewidth,height=7cm]{Figs/Complexity_N_eusipco.pdf}
\label{fig:Complexity_N_eusipco}}
\caption{The numerical complexity of the SM-PAPA, the SSM-AP, and the IS-SM-AP algorithms for two cases: (a) $N=15$, variable $L$; (b) $L=3$, variable $N$. \label{fig:Complexity-eusipco}}
\end{figure}
\begin{table*}[t]
\caption{Number of operations for SM-PAPA, SSM-AP, and {S-SM-AP} algorithms \label{tab1-eusipco}}
\begin{center}
\begin{tabular}{|l|c|c|c|} \hline
Algorithm & Addition $\&$ Subtraction & Multiplication & Division\\\hline
\parbox[t]{0mm}{\multirow{2}{*}{SM-PAPA}} & $N^2+(L^2+4L+5)N+$ & $(L^2+5L+7)N+$ & $2N+$ \\ & $(2L^3+5L^2+7L+5)$ & $(2L^3+6L^2+9L+8)$ & $(2L^2+4L+4)$ \\\hline
\parbox[t]{0mm}{\multirow{2}{*}{SSM-AP}} & $(L^2+6L+7)N+$ & $(L^2+6L+9)N+$ & $N+$ \\
& $(2L^3+6L^2+9L+7)$ & $(2L^3+7L^2+12L+11)$ & $(2L^2+4L+3)$ \\\hline
\parbox[t]{0mm}{\multirow{2}{*}{{S-SM-AP}}} & $\frac{1}{2}(L^2+5L+6)N+$ & $\frac{1}{2}(L^2+5L+6)N+$ & \parbox[t]{4mm}{\multirow{2}{*}{$L^2$}} \\ & $\frac{1}{2}(L^3+4L^2+11L+8)$ & $\frac{1}{2}(L^3+6L^2+11L+8)$ & \\\hline
\end{tabular}
\end{center}
\end{table*}
\subsubsection{Initialization}
Unlike classical algorithms in which the initialization of the weight vector is often chosen as
$\wbf(0) = {\bf 0}$, this same procedure cannot be applied to the proposed algorithm. If the initial coefficients have absolute values lower than $\epsilon$, then the matrix $\Fbf_\epsilon$ is equal to the zero matrix, and it does not allow any update.
Indeed, for the {S-SM-AP}\abbrev{S-SM-AP}{Simple SM-AP} algorithm, each coefficient should be initialized such that
$|w_i(0)| > \epsilon$ for $i = 0, 1, \cdots, N$.
\subsubsection{Relation with other algorithms}
The similarities and differences between the proposed algorithm and the SM-AP\abbrev{SM-AP}{Set-Membership Affine Projection} algorithm
were already addressed when we discussed the complexity of these algorithms.
Now, one should observe that the update equation of the {S-SM-AP}\abbrev{S-SM-AP}{Simple SM-AP} algorithm is similar to
the one of the set-membership partial update affine projection (SM-PUAP)
algorithm \cite{Diniz_adaptiveFiltering_book2013}, in which our matrix $\Fbf_\epsilon(\wbf(k))$ is replaced by
a diagonal matrix $\Cbf$ also with entries equal to 1 or 0, but there is no specific
form to set/select $\Cbf$. Therefore, the proposed algorithm can be considered as a particular case of the SM-PUAP in
which there is a mathematically defined way (based on the sparsity of the unknown system)
to select the coefficients that are relevant and the ones that will be discarded. {Regarding the memory
requirements of the proposed algorithm, they are the same as in the AP\abbrev{AP}{Affine Projection} algorithm, i.e., determined by the data-reuse factor $L$.}
\subsection{The Improved {S-SM-AP} (IS-SM-AP) algorithm} \label{sub:Modified-eusipco}
As we can observe in the update equation of the S-SM-AP\abbrev{S-SM-AP}{Simple SM-AP} algorithm,
if a coefficient of the weight vector falls inside the interval
$[-\epsilon,+\epsilon]$, then this coefficient
is no longer updated, since it is eliminated by the discard function. On the other hand, the coefficients $w_i(k)$ inside the interval $[-\epsilon,+\epsilon]$
are close to zero, and the most intuitive approximation for them is zero (the center of the interval).
Besides, setting these coefficients $w_i(k)$ to zero implies a reduction in
computational complexity, because it reduces the number of operations required to
compute the output of the adaptive filter $y(k) = \xbf^T(k) \wbf(k)$.\footnote{This
additional reduction in the number of operations becomes more important as the filter
order increases. For instance, in acoustic echo cancellation systems, in which the
adaptive filter has a few thousand coefficients~\cite{Hansler_echo_book2004,Benesty_echo_book2010},
this simple strategy yields significant computational savings.} For this purpose, we multiply $\wbf(k)$ by $\Fbf_\epsilon(\wbf(k))$, and
obtain the Improved S-SM-AP (IS-SM-AP)\abbrev{IS-SM-AP}{Improved S-SM-AP} algorithm as follows
\begin{align}
\wbf(k+1)=\left\{\begin{array}{ll}\Fbf_\epsilon(\wbf(k))\wbf(k)+\qbf(k)&\text{if }|e(k)|>\gammabar,\\\wbf(k)&\text{otherwise}.\end{array}\right. \label{eq:is-sm-ap-update_equation-eusipco}
\end{align}
Table~\ref{tb:IS-SM-AP-chap6} illustrates the IS-SM-AP algorithm.
\begin{table}[t!]
\caption{Improved simple set-membership affine projection algorithm (IS-SM-AP)}
\begin{center}
\begin{footnotesize}
\begin {tabular}{|l|} \hline\\ \hspace{3.4cm}{\bf IS-SM-AP Algorithm}\\
\\
\hline\\
Initialization
\\
$\wbf(0)=[1~1~\cdots~1]^T$\\
choose $\gammabar$ around $\sqrt{5\sigma_n^2}$ and small constant $\delta>0$\\
Do for $k>0$\\
\hspace*{0.3cm} $\ebf(k)=\dbf(k)-\Xbf^T(k)\wbf(k)$\\
\hspace*{0.3cm} if $|e(k)|>\gammabar$\\
\hspace*{0.45cm} $\qbf(k)=\Fbf_\epsilon(\wbf(k))\Xbf(k)[\Xbf^T(k)\Fbf_\epsilon(\wbf(k))\Xbf(k)+\delta\Ibf]^{-1}(\ebf(k)-\gammabf(k))$\\
\hspace*{0.45cm} $\wbf(k+1)= \Fbf_\epsilon(\wbf(k))\wbf(k)+\qbf(k)$\\
\hspace*{0.3cm} else\\
\hspace*{0.45cm} $\wbf(k+1)= \wbf(k)$\\
\hspace*{0.3cm} end\\
end\\
\\
\hline
\end {tabular}
\end{footnotesize}
\end{center}
\label{tb:IS-SM-AP-chap6}
\end{table}
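A corresponding sketch of one IS-SM-AP iteration follows; as before, names and the boolean-diagonal realization of $\Fbf_\epsilon(\wbf(k))$ are illustrative. The only change with respect to the S-SM-AP sketch is the last line, in which $\wbf(k)$ is also premultiplied by the discard matrix so that the coefficients inside $[-\epsilon,+\epsilon]$ are zeroed.
\begin{verbatim}
import numpy as np

def is_sm_ap_update(w, X, d, eps, gamma_bar, delta=1e-12):
    """One IS-SM-AP iteration, cf. eq. (is-sm-ap-update_equation)."""
    e = d - X.T @ w
    if abs(e[0]) <= gamma_bar:
        return w
    F = np.diag((np.abs(w) > eps).astype(float))
    gamma = e.copy()
    gamma[0] = gamma_bar * np.sign(e[0])   # simple-choice constraint vector
    FX = F @ X
    q = FX @ np.linalg.solve(X.T @ FX + delta * np.eye(X.shape[1]), e - gamma)
    return F @ w + q    # coefficients inside [-eps, +eps] are set to zero
\end{verbatim}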
\subsection{The S-AP and the IS-AP algorithms} \label{sub:non-SM-eusipco}
By adopting the bound $\gammabar=0$, the S-SM-AP\abbrev{S-SM-AP}{Simple SM-AP} algorithm reduces to the S-AP\abbrev{S-AP}{Simple AP} algorithm with unity step size. Therefore, the S-AP\abbrev{S-AP}{Simple AP} algorithm can be described as follows
\begin{align}
\wbf(k+1)=\wbf(k)+\mu\Fbf_\epsilon(\wbf(k))\Xbf(k)[\Xbf^T(k)\Fbf_\epsilon(\wbf(k))\Xbf(k)+\delta\Ibf]^{-1}\ebf(k) \label{eq:update-s-ap-eusipco}
\end{align}
where $\mu$ is the convergence factor.
By the same argument, we can obtain the update equation of the IS-AP\abbrev{IS-AP}{Improved S-AP} algorithm as below
\begin{align}
\wbf(k+1)=&\Fbf_\epsilon(\wbf(k))\wbf(k)\nonumber\\
&+\mu\Fbf_\epsilon(\wbf(k))\Xbf(k)[\Xbf^T(k)\Fbf_\epsilon(\wbf(k))\Xbf(k)+\delta\Ibf]^{-1}\ebf(k) \label{eq:update-is-ap-eusipco}
\end{align}
where $\mu$ is the convergence factor. These algorithms are counterparts of the AP\abbrev{AP}{Affine Projection} algorithm; however, they can exploit sparsity in the underlying systems.
{\it Remark}: In the previous sections, we have focused on the AP\abbrev{AP}{Affine Projection} algorithms. However, the NLMS and the binormalized data-reusing LMS\abbrev{LMS}{Least-Mean-Square} algorithms can be derived as special cases of the AP\abbrev{AP}{Affine Projection} algorithms. Indeed, by choosing $L=0$ and $1$, the AP\abbrev{AP}{Affine Projection} algorithms will be reduced to the NLMS and the binormalized data-reusing LMS\abbrev{LMS}{Least-Mean-Square} algorithms, respectively.
\section{Some issues of the S-SM-AP and the \\IS-SM-AP Algorithms} \label{sec:difficulties-is-sm-ap-chap6}
As we discussed in Subsection~\ref{sub:discussion-eusipco}, the proposed S-SM-AP\abbrev{S-SM-AP}{Simple SM-AP} and the IS-SM-AP\abbrev{IS-SM-AP}{Improved S-SM-AP} algorithms are sensitive to the initialization. In fact, the absolute values of the entries of $\wbf(0)$ have to be greater than $\epsilon$ and $w_i(0)w_{oi}>0$ for $i=0,\cdots,N$, i.e., $w_i(0)$ and $w_{oi}$ must have the same sign, where $w_{oi}$ is the $i$-th component of the unknown system. Moreover, once a coefficient falls inside $[-\epsilon,\epsilon]$, it cannot leave this interval; hence, these algorithms are unable to track time-varying systems.
To address this issue, we can use an auxiliary weight vector $\mbf(k)$ as in~\cite{Hu_shrink_sparse_icassp2014}. Through this technique, the discard function applies only to the auxiliary weight vector, and we can propose the discard SM-AP (D-SM-AP)\abbrev{D-SM-AP}{Discard SM-AP} algorithm. The D-SM-AP\abbrev{D-SM-AP}{Discard SM-AP} algorithm is presented in Table~\ref{tb:D-SM-AP-chap6}. Note that the computational burden of the D-SM-AP\abbrev{D-SM-AP}{Discard SM-AP} algorithm is higher than that of the IS-SM-AP\abbrev{IS-SM-AP}{Improved S-SM-AP} and the S-SM-AP\abbrev{S-SM-AP}{Simple SM-AP} algorithms. However, it can be utilized in time-varying scenarios, and we can adopt any initialization $\wbf(0)$.
\begin{table}[t!]
\caption{Discard set-membership affine projection algorithm (D-SM-AP)}
\begin{center}
\begin{footnotesize}
\begin {tabular}{|l|} \hline\\ \hspace{4.4cm}{\bf D-SM-AP Algorithm}\\
\\
\hline\\
Initialization
\\
$\wbf(0)={\bf 0}$ and $\mbf(0)={\bf 0}$\\
choose $\gammabar$ around $\sqrt{5\sigma_n^2}$ and small constant $\delta>0$\\
Do for $k>0$\\
\hspace*{0.3cm} $\ebf(k)=\dbf(k)-\Xbf^T(k)\wbf(k)$\\
\hspace*{0.3cm} $\mbf(k+1)= \left\{\begin{array}{ll}\mbf(k)+\Xbf(k)[\Xbf^T(k)\Xbf(k)+\delta\Ibf]^{-1}(\ebf(k)-\gammabf(k))&{\rm if~} |e(k)|>\gammabar\\\mbf(k)&{\rm otherwise}\end{array}\right.$\\
\hspace*{0.3cm} $\wbf(k+1)=\Fbf_{\epsilon}(\mbf(k+1))\mbf(k+1)$\\
end\\
\\
\hline
\end {tabular}
\end{footnotesize}
\end{center}
\label{tb:D-SM-AP-chap6}
\end{table}
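A minimal sketch of one D-SM-AP iteration is shown below (illustrative names; the simple-choice constraint vector is assumed). Note that the discard function acts only on the auxiliary vector $\mbf(k)$, which keeps accumulating full updates; this is precisely what restores the tracking capability.
\begin{verbatim}
import numpy as np

def d_sm_ap_update(w, m, X, d, eps, gamma_bar, delta=1e-12):
    """One D-SM-AP iteration, cf. Table tb:D-SM-AP-chap6."""
    e = d - X.T @ w
    if abs(e[0]) > gamma_bar:
        gamma = e.copy()
        gamma[0] = gamma_bar * np.sign(e[0])
        m = m + X @ np.linalg.solve(X.T @ X + delta * np.eye(X.shape[1]),
                                    e - gamma)
    w = np.where(np.abs(m) > eps, m, 0.0)  # w(k+1) = F_eps(m(k+1)) m(k+1)
    return w, m
\end{verbatim}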
\section{Recursive Least-Squares Algorithm Exploiting Sparsity} \label{sec:s-rls-sparse}
In this section, we utilize the discard function to introduce an RLS\abbrev{RLS}{Recursive Least-Squares} algorithm for sparse systems. In Subsection~\ref{sub:derivation-s-rls-sparse}, we derive the S-RLS\abbrev{S-RLS}{RLS Algorithm for Sparse System} algorithm that exploits the sparsity of the estimated parameters by giving low weight to the small coefficients.
For this purpose, the strategy consists of multiplying the coefficients of the sparse filter that
are close to zero by a small constant.
Then, in Subsection~\ref{sub:discussion-s-rls-sparse}, we include a discussion of some
characteristics of the proposed algorithm. Subsection~\ref{sub:ds-s-rls-sparse} briefly describes the DS-S-RLS\abbrev{DS-S-RLS}{Data-Selective S-RLS} algorithm, the data-selective version of the S-RLS algorithm.
\subsection{Derivation of the S-RLS algorithm} \label{sub:derivation-s-rls-sparse}
We utilize the discard vector function defined in Equation~\eqref{eq:discard vector function} in order to introduce the objective function of the S-RLS\abbrev{S-RLS}{RLS Algorithm for Sparse System} algorithm as follows
\begin{align}
&\min \xi^d(k)=\sum_{i=0}^k\lambda^{k-i}[d(i)-\xbf^T(i)\fbf_\epsilon(\wbf(k))]^2,\label{eq:ssm_optimization-sparse}
\end{align}
where the parameter $\lambda$ is an exponential weighting factor that should be selected in the range $0\ll\lambda\leq1$.
By differentiating $\xi^d(k)$ with respect to $\wbf(k)$, we obtain
\begin{align}
\frac{\partial\xi^d(k)}{\partial\wbf(k)}=-2\sum_{i=0}^k\lambda^{k-i}\Fbf_\epsilon(\wbf(k))\xbf(i)[d(i)-\xbf^T(i)\fbf_\epsilon(\wbf(k))],
\end{align}
where $\Fbf_\epsilon(\wbf(k))$ is the Jacobian matrix of $\fbf_\epsilon(\wbf(k))$ (see~\eqref{eq:discard vector function}). By equating the above equation to zero, we find the optimal vector $\wbf(k)$ that solves the least-square problem, as follows
\begin{align}
-\sum_{i=0}^k\lambda^{k-i}\Fbf_\epsilon(\wbf(k))\xbf(i)\xbf^T(i)\fbf_\epsilon(\wbf(k))+\sum_{i=0}^k\lambda^{k-i}\Fbf_\epsilon(\wbf(k))\xbf(i)d(i)=\left[\begin{array}{c}0\\\vdots\\0\end{array}\right].
\end{align}
Therefore,
\begin{align}
\fbf_\epsilon(\wbf(k))=&\Big[\sum_{i=0}^k\lambda^{k-i}\Fbf_\epsilon(\wbf(k))\xbf(i)\xbf^T(i)\Big]^{-1}\times\sum_{i=0}^k\lambda^{k-i}\Fbf_\epsilon(\wbf(k))\xbf(i)d(i). \label{eq:before_double_F-sparse}
\end{align}
Note that $\Fbf_\epsilon(\wbf(k))$ is a diagonal matrix with diagonal entries equal to zero or one.
Indeed, for the components of $\wbf(k)$ whose absolute values are larger than $\epsilon$, their corresponding entries on the diagonal matrix $\Fbf_\epsilon(\wbf(k))$ are one, whereas the remaining entries are zero. Hence,
\begin{align}
\Fbf_\epsilon(\wbf(k))\xbf(i)\xbf^T(i)&=\Fbf_\epsilon^2(\wbf(k))\xbf(i)\xbf^T(i)=\Fbf_\epsilon(\wbf(k))(\xbf^T(i)\Fbf_\epsilon(\wbf(k)))^T\xbf^T(i)\nonumber\\
&=\Fbf_\epsilon(\wbf(k))\xbf(i)\xbf^T(i)\Fbf_\epsilon(\wbf(k)). \label{eq:F^2=F_indentity-sparse}
\end{align}
By utilizing (\ref{eq:F^2=F_indentity-sparse}) in (\ref{eq:before_double_F-sparse}) and replacing $\fbf_\epsilon(\wbf(k))$ by $\wbf(k+1)$, we get
\begin{align}
\wbf(k+1)&=\Big[\sum_{i=0}^k\lambda^{k-i}\Fbf_\epsilon(\wbf(k))\xbf(i)\xbf^T(i)\Fbf_\epsilon(\wbf(k))\Big]^{-1}\times\sum_{i=0}^k\lambda^{k-i}\Fbf_\epsilon(\wbf(k))\xbf(i)d(i)\nonumber\\
&=\Rbf_{D,\epsilon}^{-1}(k)\pbf_{D,\epsilon}(k), \label{eq:original_version-s-rls}
\end{align}
where $\Rbf_{D,\epsilon}(k)$ and $\pbf_{D,\epsilon}(k)$ are called the deterministic correlation matrix of the input signal and the
deterministic cross-correlation vector between the input and the desired signals, respectively. \symbl{$\Rbf_{D,\epsilon}(k)$}{The deterministic correlation matrix of the input signal involving $\Fbf_\epsilon(\wbf(k))$} \symbl{$\pbf_{D,\epsilon}(k)$}{The deterministic cross-correlation vector between the input and the desired signals involving $\Fbf_\epsilon(\wbf(k))$} Whenever the $i$-th diagonal entry of matrix $\Fbf_\epsilon(\wbf(k))$ is zero, it is replaced by a small power-of-two (e.g., $2^{-5}$) multiplied by the sign of the component $w_i(k)$ in order to prevent matrix $\Rbf_{D,\epsilon}(k)$ from becoming ill-conditioned.
If we apply the direct method to calculate the inverse of $\Rbf_{D,\epsilon}(k)$, then the resulting algorithm has computational complexity of $O[N^3]$. Generally, in the traditional RLS\abbrev{RLS}{Recursive Least-Squares} algorithm, the inverse matrix is computed through the matrix inversion lemma~\cite{Goodwin_Dynamic_system_id_book1977}, which states that
\begin{align}
[\Abf+\Bbf\Cbf\Dbf]^{-1}=\Abf^{-1}-\Abf^{-1}\Bbf[\Dbf\Abf^{-1}\Bbf+\Cbf^{-1}]^{-1}\Dbf\Abf^{-1},
\end{align}
where $\Abf$, $\Bbf$, $\Cbf$, and $\Dbf$ are matrices of appropriate dimensions, and $\Abf$ and $\Cbf$ are invertible. If we choose $\Abf=\lambda\Rbf_{D,\epsilon}(k-1)$, $\Bbf=\Dbf^T=\Fbf_\epsilon(\wbf(k))\xbf(k)$, and $\Cbf=1$ then by
using the matrix inversion lemma, the inverse of the deterministic correlation matrix can be calculated in the form \symbl{$\Sbf_{D,\epsilon}(k)$}{The inverse of $\Rbf_{D,\epsilon}(k)$}
\begin{align}
\Sbf_{D,\epsilon}(k)=&\Rbf_{D,\epsilon}^{-1}(k)\nonumber\\=&\frac{1}{\lambda}\Big[\Sbf_{D,\epsilon}(k-1)
-\frac{\Sbf_{D,\epsilon}(k-1)\Fbf_\epsilon(\wbf(k))\xbf(k)\xbf^T(k)\Fbf_\epsilon(\wbf(k))\Sbf_{D,\epsilon}(k-1)}{\lambda+\xbf^T(k)\Fbf_\epsilon(\wbf(k))\Sbf_{D,\epsilon}(k-1)\Fbf_\epsilon(\wbf(k))\xbf(k)}\Big].\label{eq:S_D-sparse}
\end{align}
The resulting equation to compute $\Rbf_{D,\epsilon}^{-1}(k)$ has computational complexity of $O[N^2]$, whereas the computational cost of the direct inversion is of order $N^3$. Finally,
\begin{align}
\wbf(k+1)=\Sbf_{D,\epsilon}(k)\pbf_{D,\epsilon}(k).
\end{align}
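The matrix inversion lemma itself is easy to check numerically; the short sketch below, with random matrices of compatible dimensions (purely illustrative), verifies that both sides agree.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, m = 5, 2
A = np.eye(n) + 0.1 * rng.standard_normal((n, n))   # invertible A
B = rng.standard_normal((n, m))
C = np.eye(m)                                       # invertible C
D = rng.standard_normal((m, n))

lhs = np.linalg.inv(A + B @ C @ D)
Ai = np.linalg.inv(A)
rhs = Ai - Ai @ B @ np.linalg.inv(D @ Ai @ B + np.linalg.inv(C)) @ D @ Ai
assert np.allclose(lhs, rhs)   # both sides of the lemma coincide
\end{verbatim}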
Table~\ref{tb:S-RLS-sparse} describes the S-RLS\abbrev{S-RLS}{RLS Algorithm for Sparse System} algorithm.
\begin{table}[t!]
\caption{Recursive least-squares algorithm for sparse systems (S-RLS)}
\begin{center}
\begin{footnotesize}
\begin {tabular}{|l|} \hline\\ \hspace{2.9cm}{\bf S-RLS Algorithm}\\
\\
\hline\\
Initialization
\\
$\Sbf_{D,\epsilon}(-1)=\delta\Ibf$\\
where $\delta$ can be the inverse of the input signal power estimate\\
$\pbf_{D,\epsilon}(-1)=[0~0~\cdots~0]^T$\\
$\wbf(0)=[1~1~\cdots~1]^T$\\
Do for $k\geq0$\\
\hspace*{0.3cm} compute $\Sbf_{D,\epsilon}(k)$ through Equation (\ref{eq:S_D-sparse})\\
\hspace*{0.3cm} $\pbf_{D,\epsilon}(k)=\lambda \pbf_{D,\epsilon}(k-1)+\Fbf_\epsilon(\wbf(k))\xbf(k)d(k)$\\
\hspace*{0.3cm} $\wbf(k+1)=\Sbf_{D,\epsilon}(k)\pbf_{D,\epsilon}(k)$\\
end\\
\\
\hline
\end {tabular}
\end{footnotesize}
\end{center}
\label{tb:S-RLS-sparse}
\end{table}
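A minimal NumPy sketch of one S-RLS iteration, mirroring Table~\ref{tb:S-RLS-sparse}, is given below (names are illustrative). Zero diagonal entries of $\Fbf_\epsilon(\wbf(k))$ are replaced by $2^{-5}$ times the sign of the corresponding coefficient, as described above; the all-ones initialization of $\wbf(0)$ avoids the degenerate case $w_i(k)=0$.
\begin{verbatim}
import numpy as np

def s_rls_update(w, S, p, x, d, eps, lam=0.97):
    """One S-RLS iteration; S is S_D,eps(k-1), p is p_D,eps(k-1)."""
    f = np.where(np.abs(w) > eps, 1.0, 2.0**-5 * np.sign(w))
    fx = f * x                     # F_eps(w(k)) x(k)
    Sfx = S @ fx                   # S stays symmetric under the recursion
    S = (S - np.outer(Sfx, Sfx) / (lam + fx @ Sfx)) / lam  # eq. (S_D-sparse)
    p = lam * p + fx * d           # p_D,eps(k)
    w = S @ p                      # w(k+1) = S_D,eps(k) p_D,eps(k)
    return w, S, p
\end{verbatim}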
We can introduce the alternative S-RLS (AS-RLS)\abbrev{AS-RLS}{Alternative S-RLS} algorithm in order to decrease the computational load of the S-RLS\abbrev{S-RLS}{RLS Algorithm for Sparse System} algorithm. Assuming $\Fbf_\epsilon(\wbf(k)) \approx\Fbf_\epsilon(\wbf(k-1))$, we can rewrite Equation~\eqref{eq:original_version-s-rls} as
\begin{align}
&\Big[\sum_{i=0}^k\lambda^{k-i}\Fbf_\epsilon(\wbf(k))\xbf(i)\xbf^T(i)\Fbf_\epsilon(\wbf(k))\Big]\wbf(k+1)=\sum_{i=0}^k\lambda^{k-i}\Fbf_\epsilon(\wbf(k))\xbf(i)d(i)\nonumber\\
&=\lambda\Big[\sum_{i=0}^{k-1}\lambda^{k-i-1}\Fbf_\epsilon(\wbf(k))\xbf(i)d(i)\Big]+\Fbf_\epsilon(\wbf(k))\xbf(k)d(k)\nonumber\\
&\approx\lambda\Big[\sum_{i=0}^{k-1}\lambda^{k-i-1}\Fbf_\epsilon(\wbf(k-1))\xbf(i)d(i)\Big]+\Fbf_\epsilon(\wbf(k))\xbf(k)d(k)\nonumber\\
&=\lambda\pbf_{D,\epsilon}(k-1)+\Fbf_\epsilon(\wbf(k))\xbf(k)d(k).
\end{align}
By considering that $\Rbf_{D,\epsilon}(k-1)\wbf(k)=\pbf_{D,\epsilon}(k-1)$, we obtain
\begin{align}
&\Big[\sum_{i=0}^k\lambda^{k-i}\Fbf_\epsilon(\wbf(k))\xbf(i)\xbf^T(i)\Fbf_\epsilon(\wbf(k))\Big]\wbf(k+1)\nonumber\\
&\approx\lambda\Rbf_{D,\epsilon}(k-1)\wbf(k)+\Fbf_\epsilon(\wbf(k))\xbf(k)d(k)\nonumber\\
&=\Big[\sum_{i=0}^{k-1}\lambda^{k-i}\Fbf_\epsilon(\wbf(k-1))\xbf(i)\xbf^T(i)\Fbf_\epsilon(\wbf(k-1))\Big]\wbf(k)+\Fbf_\epsilon(\wbf(k))\xbf(k)d(k)\nonumber\\
&\approx\Big[\sum_{i=0}^{k}\lambda^{k-i}\Fbf_\epsilon(\wbf(k))\xbf(i)\xbf^T(i)\Fbf_\epsilon(\wbf(k))-\Fbf_\epsilon(\wbf(k))\xbf(k)\xbf^T(k)\Fbf_\epsilon(\wbf(k))\Big]\wbf(k)\nonumber\\
&+\Fbf_\epsilon(\wbf(k))\xbf(k)d(k).
\end{align}
Then, by using Equation~\eqref{eq:F^2=F_indentity-sparse} and a few manipulations, we get
\begin{align}
\wbf(k+1)\approx\wbf(k)+e(k)\Sbf_{D,\epsilon}(k)\Fbf_\epsilon(\wbf(k))\xbf(k),
\end{align}
where $e(k)=d(k)-\xbf^T(k)\wbf(k)$. Table~\ref{tb:alt-S-RLS-sparse} illustrates the AS-RLS\abbrev{AS-RLS}{Alternative S-RLS} algorithm.
\begin{table}[t!]
\caption{Alternative recursive least-squares algorithm for sparse systems}
\begin{center}
\begin{footnotesize}
\begin {tabular}{|l|} \hline\\ \hspace{2.8cm}{\bf AS-RLS Algorithm}\\
\\
\hline\\
Initialization
\\
$\Sbf_{D,\epsilon}(-1)=\delta\Ibf$\\
where $\delta$ can be the inverse of the input signal power estimate\\
$\wbf(0)=[1~1~\cdots~1]^T$\\
Do for $k\geq0$\\
\hspace*{0.3cm} $e(k)=d(k)-\xbf^T(k)\wbf(k)$\\
\hspace*{0.3cm} $\psi(k)=\Sbf_{D,\epsilon}(k-1)\Fbf_\epsilon(\wbf(k))\xbf(k)$\\
\hspace*{0.3cm} $\Sbf_{D,\epsilon}(k)=\frac{1}{\lambda}\Big[\Sbf_{D,\epsilon}(k-1)-\frac{\psi(k)\psi^T(k)}{\lambda+\psi^T(k)\Fbf_\epsilon(\wbf(k))\xbf(k)}\Big]$\\
\hspace*{0.3cm} $\wbf(k+1)=\wbf(k)+e(k)\Sbf_{D,\epsilon}(k)\Fbf_\epsilon(\wbf(k))\xbf(k)$\\
end\\
\\
\hline
\end {tabular}
\end{footnotesize}
\end{center}
\label{tb:alt-S-RLS-sparse}
\end{table}
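For completeness, a sketch of one AS-RLS iteration is shown below (illustrative names). Observe that the deterministic cross-correlation vector $\pbf_{D,\epsilon}(k)$ no longer needs to be stored.
\begin{verbatim}
import numpy as np

def as_rls_update(w, S, x, d, eps, lam=0.97):
    """One AS-RLS iteration, cf. Table tb:alt-S-RLS-sparse."""
    e = d - x @ w                           # a priori error e(k)
    fx = np.where(np.abs(w) > eps, x, 0.0)  # F_eps(w(k)) x(k)
    psi = S @ fx
    S = (S - np.outer(psi, psi) / (lam + psi @ fx)) / lam
    w = w + e * (S @ fx)
    return w, S
\end{verbatim}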
\subsection{Discussion of the {S-RLS} algorithm} \label{sub:discussion-s-rls-sparse}
The update equation of the {S-RLS}\abbrev{S-RLS}{RLS Algorithm for Sparse System} algorithm is similar to the update equation of the RLS\abbrev{RLS}{Recursive Least-Squares} algorithm, but the former gives importance only to the subset of coefficients of $\wbf(k)$ whose absolute values are larger than $\epsilon$.
The matrix $\Fbf_\epsilon(\wbf(k))$ defines the important coefficients of $\wbf(k)$.
\subsection{DS-S-RLS algorithm} \label{sub:ds-s-rls-sparse}
In this subsection, our goal is to reduce the update rate of the S-RLS\abbrev{S-RLS}{RLS Algorithm for Sparse System} algorithm. In fact, when the current weight vector is acceptable, i.e., when the output estimation error is small, we can save computational resources by avoiding a new update. The data-selective S-RLS (DS-S-RLS)\abbrev{DS-S-RLS}{Data-Selective S-RLS} algorithm updates whenever the output estimation error is larger than a prescribed value $\gammabar$, i.e., when $|e(k)|=|d(k)-\wbf^T(k)\xbf(k)|>\gammabar$. Therefore, the DS-S-RLS\abbrev{DS-S-RLS}{Data-Selective S-RLS} algorithm reduces the computational complexity by avoiding updates whenever the estimate is acceptable. Table~\ref{tb:DS-S-RLS-sparse} describes the DS-S-RLS algorithm.
\begin{table}[t!]
\caption{Data-selective recursive least-squares algorithm for sparse systems (DS-S-RLS)}
\begin{center}
\begin{footnotesize}
\begin {tabular}{|l|} \hline\\ \hspace{2.7cm}{\bf DS-S-RLS Algorithm}\\
\\
\hline\\
Initialization
\\
$\Sbf_{D,\epsilon}(-1)=\delta\Ibf$\\
where $\delta$ can be the inverse of the input signal power estimate\\
choose $\gammabar$ around $\sqrt{5\sigma_n^2}$\\
$\pbf_{D,\epsilon}(-1)=[0~0~\cdots~0]^T$\\
$\wbf(0)=[1~1~\cdots~1]^T$\\
Do for $k\geq0$\\
\hspace*{0.3cm} $e(k)=d(k)-\wbf^T(k)\xbf(k)$\\
\hspace*{0.3cm} if $|e(k)|>\gammabar$\\
\hspace*{0.45cm} compute $\Sbf_{D,\epsilon}(k)$ through Equation (\ref{eq:S_D-sparse})\\
\hspace*{0.45cm} $\pbf_{D,\epsilon}(k)=\lambda \pbf_{D,\epsilon}(k-1)+\Fbf_\epsilon(\wbf(k))\xbf(k)d(k)$\\
\hspace*{0.45cm} $\wbf(k+1)=\Sbf_{D,\epsilon}(k)\pbf_{D,\epsilon}(k)$\\
\hspace*{0.3cm} else\\
\hspace*{0.45cm} $\wbf(k+1)=\wbf(k)$\\
\hspace*{0.3cm} end\\
end\\
\\
\hline
\end {tabular}
\end{footnotesize}
\end{center}
\label{tb:DS-S-RLS-sparse}
\end{table}
\section{{$l_0$} Norm Recursive Least-Squares Algorithm} \label{sec:l0-rls-sparse}
In the previous section, we have introduced the S-RLS\abbrev{S-RLS}{RLS Algorithm for Sparse System} algorithm for sparse systems utilizing the discard function. Another interesting approach to exploit the system sparsity can be derived by using the $l_0$ norm~\cite{Markus_sparseSMAP_tsp2014}, leading to the $l_0$-RLS\abbrev{$l_0$-RLS}{$l_0$ Norm RLS} algorithm. However, as mentioned earlier, the resulting optimization problem is hard to handle owing to the discontinuity of the $l_0$ norm. Thus, we use Equations~\eqref{eq:mult_Laplace}-\eqref{eq:modf_mult_Geman} to approximate the $l_0$ norm.
Therefore, the objective function of the $l_0$-RLS\abbrev{$l_0$-RLS}{$l_0$ Norm RLS} algorithm is given by
\begin{align}
\min \sum_{i=0}^k\lambda^{k-i}[d(i)-\xbf^T(i)\wbf(k)]^2+\alpha\|\wbf(k)\|_0,
\end{align}
where $\alpha\in\mathbb{R}_+$ is the weight given to the $l_0$ norm penalty. Replacing $\|\wbf(k)\|_0$ by its approximation, we obtain
\begin{align}
\min \sum_{i=0}^k\lambda^{k-i}[d(i)-\xbf^T(i)\wbf(k)]^2+\alpha G_\beta(\wbf(k)).
\end{align}
By differentiating the above equation with respect to $\wbf(k)$, and equating the result to zero, we get
\begin{align}
\wbf(k)=&\Big[\sum_{i=0}^k\lambda^{k-i}\xbf(i)\xbf^T(i)\Big]^{-1}
\times\Big((\sum_{i=0}^k\lambda^{k-i}\xbf(i)d(i))-\frac{\alpha}{2}\gbf_\beta(\wbf(k))\Big)\nonumber\\
=&\Rbf_D^{-1}(k)\Big(\pbf_D(k)-\frac{\alpha}{2}\gbf_\beta(\wbf(k))\Big). \label{eq:pre_DS-l0-RLS-sparse}
\end{align}
If we adopt $\Abf=\lambda\Rbf_D(k-1)$, $\Bbf=\Dbf^T=\xbf(k)$, and $\Cbf=1$, then, by using the matrix inversion lemma, the update equation of the $l_0$-RLS\abbrev{$l_0$-RLS}{$l_0$ Norm RLS} algorithm is given as follows
\begin{align}
\wbf(k)=\Sbf_D(k)\Big(\pbf_D(k)-\frac{\alpha}{2}\gbf_\beta(\wbf(k-1))\Big), \label{eq:DS-l0-RLS-sparse}
\end{align}
where the same strategy as the PASTd\abbrev{PASTd}{Projection Approximation Subspace Tracking with Deflation}
(projection approximation subspace tracking with deflation)~\cite{Wang_WirelessCommunicationSystems_book2004} is employed and $\gbf_\beta(\wbf(k))$ is replaced by $\gbf_\beta(\wbf(k-1))$ in order to form the recursion. Also, $\pbf_D(k)$ and $\Sbf_D(k)$ are given as follows
\begin{align}
\pbf_D(k)=&\lambda\pbf_D(k-1)+d(k)\xbf(k),\\
\Sbf_D(k)=&\frac{1}{\lambda}\Big[\Sbf_D(k-1)-\frac{\Sbf_D(k-1)\xbf(k)\xbf^T(k)\Sbf_D(k-1)}{\lambda+\xbf^T(k)\Sbf_D(k-1)\xbf(k)}\Big]. \label{eq:S_D-l0-sparse}
\end{align}
Table~\ref{tb:l0-RLS-sparse} presents the $l_0$-RLS\abbrev{$l_0$-RLS}{$l_0$ Norm RLS} algorithm.
\begin{table}[t!]
\caption{$l_0$ norm recursive least-squares algorithm for sparse systems ($l_0$-RLS)}
\begin{center}
\begin{footnotesize}
\begin {tabular}{|l|} \hline\\ \hspace{2.9cm}{\bf $l_0$-RLS Algorithm}\\
\\
\hline\\
Initialization
\\
$\Sbf_{D}(-1)=\delta\Ibf$\\
where $\delta$ can be the inverse of the input signal power estimate\\
$\pbf_{D}(-1)=[0~0~\cdots~0]^T$\\
$\wbf(-1)=[1~1~\cdots~1]^T$\\
Do for $k\geq0$\\
\hspace*{0.3cm} $\Sbf_{D}(k)$ as in Equation (\ref{eq:S_D-l0-sparse})\\
\hspace*{0.3cm} $\pbf_{D}(k)=\lambda \pbf_{D}(k-1)+d(k)\xbf(k)$\\
\hspace*{0.3cm} $\wbf(k)=\Sbf_D(k)\Big(\pbf_D(k)-\frac{\alpha}{2}\gbf_\beta(\wbf(k-1))\Big)$\\
end\\
\\
\hline
\end {tabular}
\end{footnotesize}
\end{center}
\label{tb:l0-RLS-sparse}
\end{table}
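A sketch of one $l_0$-RLS iteration is given below. Since the $l_0$-norm approximations and their gradients were defined in an earlier chapter and are not restated here, the sketch assumes, for illustration only, the Laplace-type approximation $G_\beta(\wbf)=\sum_i(1-{\rm e}^{-\beta|w_i|})$, whose gradient has entries $\beta\,{\rm sgn}(w_i)\,{\rm e}^{-\beta|w_i|}$; using the GMF instead would only change the function \texttt{g\_beta}.
\begin{verbatim}
import numpy as np

def g_beta(w, beta=5.0):
    """Gradient of the Laplace-type l0-norm approximation (assumed form)."""
    return beta * np.sign(w) * np.exp(-beta * np.abs(w))

def l0_rls_update(w, S, p, x, d, lam=0.97, alpha=5e-3, beta=5.0):
    """One l0-RLS iteration, cf. Table tb:l0-RLS-sparse; w is w(k-1)."""
    Sx = S @ x
    S = (S - np.outer(Sx, Sx) / (lam + x @ Sx)) / lam  # eq. (S_D-l0-sparse)
    p = lam * p + d * x                                # p_D(k)
    w = S @ (p - 0.5 * alpha * g_beta(w, beta))        # eq. (DS-l0-RLS-sparse)
    return w, S, p
\end{verbatim}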
Similarly to the AS-RLS\abbrev{AS-RLS}{Alternative S-RLS} algorithm, we can derive the alternative $l_0$-RLS (A-$l_0$-RLS)\abbrev{A-$l_0$-RLS}{Alternative $l_0$-RLS} algorithm. We can rewrite Equation~\eqref{eq:DS-l0-RLS-sparse} as
\begin{align}
\Rbf_D(k)\wbf(k)=\pbf_D(k)-\frac{\alpha}{2}\gbf_\beta(\wbf(k-1))=\lambda\pbf_D(k-1)+\xbf(k)d(k)-\frac{\alpha}{2}\gbf_\beta(\wbf(k-1)).
\end{align}
From Equation~\eqref{eq:pre_DS-l0-RLS-sparse}, we have $\Rbf_D(k-1)\wbf(k-1)=\pbf_D(k-1)-\frac{\alpha}{2}\gbf_\beta(\wbf(k-1))$; hence, we get
\begin{align}
\Rbf_D(k)\wbf(k)=&\lambda\Rbf_D(k-1)\wbf(k-1)+\frac{\lambda\alpha}{2}\gbf_\beta(\wbf(k-1))\nonumber\\
&-\frac{\alpha}{2}\gbf_\beta(\wbf(k-1))+\xbf(k)d(k)\nonumber\\
=&\Big[\sum_{i=0}^k\lambda^{k-i}\xbf(i)\xbf^T(i)-\xbf(k)\xbf^T(k)\Big]\wbf(k-1)\nonumber\\
&+\frac{(\lambda-1)\alpha}{2}\gbf_\beta(\wbf(k-1))+\xbf(k)d(k).
\end{align}
If we define the {\it a priori} error as
\begin{align}
e(k)=d(k)-\xbf^T(k)\wbf(k-1),
\end{align}
we obtain
\begin{align}
\Rbf_D(k)\wbf(k)=\Rbf_D(k)\wbf(k-1)+e(k)\xbf(k)+\frac{(\lambda-1)\alpha}{2}\gbf_\beta(\wbf(k-1)).
\end{align}
Therefore, the update equation of the A-$l_0$-RLS\abbrev{A-$l_0$-RLS}{Alternative $l_0$-RLS} algorithm is given by
\begin{align}
\wbf(k)=\wbf(k-1)+\Sbf_D(k)[e(k)\xbf(k)+\frac{(\lambda-1)\alpha}{2}\gbf_\beta(\wbf(k-1))].
\end{align}
Table~\ref{tb:alt-l0-RLS-sparse} presents the A-$l_0$-RLS\abbrev{A-$l_0$-RLS}{Alternative $l_0$-RLS} algorithm.
\begin{table}[t!]
\caption{Alternative $l_0$ norm recursive least-squares algorithm for sparse systems}
\begin{center}
\begin{footnotesize}
\begin {tabular}{|l|} \hline\\ \hspace{2.8cm}{\bf A-$l_0$-RLS Algorithm}\\
\\
\hline\\
Initialization
\\
$\Sbf_{D}(-1)=\delta\Ibf$\\
where $\delta$ can be the inverse of the input signal power estimate\\
$\wbf(-1)=[1~1~\cdots~1]^T$\\
Do for $k\geq0$\\
\hspace*{0.3cm} $e(k)=d(k)-\xbf^T(k)\wbf(k-1)$\\
\hspace*{0.3cm} $\psi(k)=\Sbf_{D}(k-1)\xbf(k)$\\
\hspace*{0.3cm} $\Sbf_{D}(k)=\frac{1}{\lambda}\Big[\Sbf_{D}(k-1)-\frac{\psi(k)\psi^T(k)}{\lambda+\psi^T(k)\xbf(k)}\Big]$\\
\hspace*{0.3cm} $\wbf(k)=\wbf(k-1)+\Sbf_D(k)[e(k)\xbf(k)+\frac{(\lambda-1)\alpha}{2}\gbf_\beta(\wbf(k-1))]$\\
end\\
\\
\hline
\end {tabular}
\end{footnotesize}
\end{center}
\label{tb:alt-l0-RLS-sparse}
\end{table}
\subsection{DS-{$l_0$}-RLS algorithm} \label{sub:ds-l0-rls-sparse}
In this subsection, we propose the DS-$l_0$-RLS\abbrev{DS-$l_0$-RLS}{Data-Selective $l_0$-RLS} algorithm to decrease the update rate of the $l_0$-RLS\abbrev{$l_0$-RLS}{$l_0$ Norm RLS} algorithm. Similarly to the discussion in Subsection~\ref{sub:ds-s-rls-sparse}, the DS-$l_0$-RLS\abbrev{DS-$l_0$-RLS}{Data-Selective $l_0$-RLS} algorithm for sparse systems can be derived by implementing an update in the $l_0$-RLS\abbrev{$l_0$-RLS}{$l_0$ Norm RLS} algorithm whenever the output estimation error is larger than a predetermined value $\gammabar$, i.e., when $|e(k)|=|d(k)-\wbf^T(k)\xbf(k)|>\gammabar$. Hence, the computational requirements of the DS-$l_0$-RLS\abbrev{DS-$l_0$-RLS}{Data-Selective $l_0$-RLS} algorithm are lower than those of the $l_0$-RLS\abbrev{$l_0$-RLS}{$l_0$ Norm RLS} algorithm since unnecessary updates are prevented. The DS-$l_0$-RLS algorithm is described in Table~\ref{tb:DS-l0-RLS-sparse}.
\begin{table}[t!]
\caption{Data-selective $l_0$ norm recursive least-squares algorithm for sparse systems (DS-$l_0$-RLS)}
\begin{center}
\begin{footnotesize}
\begin {tabular}{|l|} \hline\\ \hspace{2.7cm}{\bf DS-$l_0$-RLS Algorithm}\\
\\
\hline\\
Initialization
\\
$\Sbf_{D}(-1)=\delta\Ibf$\\
where $\delta$ can be the inverse of the input signal power estimate\\
choose $\gammabar$ around $\sqrt{5\sigma_n^2}$\\
$\pbf_{D}(-1)=[0~0~\cdots~0]^T$\\
$\wbf(-1)=[1~1~\cdots~1]^T$\\
Do for $k\geq0$\\
\hspace*{0.3cm} $e(k)=d(k)-\wbf^T(k-1)\xbf(k)$\\
\hspace*{0.3cm} if $|e(k)|>\gammabar$\\
\hspace*{0.45cm} $\Sbf_{D}(k)$ as in Equation (\ref{eq:S_D-l0-sparse})\\
\hspace*{0.45cm} $\pbf_{D}(k)=\lambda \pbf_{D}(k-1)+d(k)\xbf(k)$\\
\hspace*{0.45cm} $\wbf(k)=\Sbf_D(k)\Big(\pbf_D(k)-\frac{\alpha}{2}\gbf_\beta(\wbf(k-1))\Big)$\\
\hspace*{0.3cm} else\\
\hspace*{0.45cm} $\wbf(k)=\wbf(k-1)$\\
\hspace*{0.3cm} end\\
end\\
\\
\hline
\end {tabular}
\end{footnotesize}
\end{center}
\label{tb:DS-l0-RLS-sparse}
\end{table}
In Subsection~\ref{sub:simulation_rls_based_sparse}, we compare the simulation results of the RLS-based\abbrev{RLS}{Recursive Least-Squares} algorithms with the Adaptive Sparse Variational Bayes iterative scheme based on Laplace prior (ASVB-L)\abbrev{ASVB-L}{Adaptive Sparse Variational Bayes Iterative Scheme Based on Laplace Prior} algorithm~\cite{Themelis_BayesianAP_tsp2014,Giampouras_Bayesian_LR_Subspace_eusipco2015,Themelis_Bayesian_GIGMC_eusipco2015}. Therefore, it is worthwhile to compare the computational complexity of these algorithms. Table~\ref{tab-complexity-rls} shows the number of real multiplications, real additions, and real divisions that must be performed at each iteration by these algorithms.
\begin{table*}[t]
\caption{Number of operations for AS-RLS, $l_0$-RLS, and ASVB-L algorithms \label{tab-complexity-rls}}
\begin{center}
\begin{tabular}{|l|c|c|c|} \hline
Algorithm & Addition $\&$ Subtraction & Multiplication & Division\\\hline
AS-RLS & $N^2+3N$ & $N^2+5N+1$ & $1$ \\\hline
A-$l_0$-RLS & $N^2+5N$ & $N^2+9N+1$ & $N+1$ \\\hline
ASVB-L & $N^2+7N+6$ & $2N^2+10N+3$ & $6N+2$ \\\hline
\end{tabular}
\end{center}
\end{table*}
\section{Simulations} \label{sec:simulations-eusipco}
In this section, we present some numerical simulations for the proposed algorithms. In all scenarios, we deal with the system identification problem. In Subsection~\ref{sub:simulation_lms_based_sparse}, we apply the LMS-based algorithms. The numerical results of the RLS-based\abbrev{RLS}{Recursive Least-Squares} algorithms are illustrated in Subsection~\ref{sub:simulation_rls_based_sparse}.
\subsection{Simulation results of the LMS-based algorithms} \label{sub:simulation_lms_based_sparse}
Here, we have applied the algorithms described in this chapter, together with the NLMS\abbrev{NLMS}{Normalized LMS} and the AP\abbrev{AP}{Affine Projection} algorithms, to identify three unknown sparse systems of order 14.\footnote{The results
for the S-SM-AP\abbrev{S-SM-AP}{Simple SM-AP} algorithm are not shown here because they are almost identical to the results of the
IS-SM-AP\abbrev{IS-SM-AP}{Improved S-SM-AP} algorithm, but the latter has the advantage of requiring fewer computations.} The first one is an arbitrary sparse system $\wbf_o$, the second one is a block sparse
system $\wbf'_o$, and the third one is a symmetric-block sparse system $\wbf''_o$.
The coefficients of these three systems are presented in Table \ref{tab2-eusipco}. The input is a binary phase-shift keying (BPSK)\abbrev{BPSK}{Binary Phase-Shift Keying} signal with variance $\sigma_x^2=1$.
The signal-to-noise ratio (SNR)\abbrev{SNR}{Signal-to-Noise Ratio} is set to be 20 dB, i.e., the noise variance is $\sigma_n^2=0.01$. The data-reuse factor is $L=1$, the bound on the estimation error is set to be
$\gammabar=\sqrt{5\sigma_n^2}$, and the threshold bound vector $\gammabf(k)$ is selected
as the {\it simple-choice constraint vector}~\cite{Markus_sparseSMAP_tsp2014} which is defined as
$\gamma_0(k)=\frac{\gammabar e(k)}{|e(k)|}$ and $\gamma_i(k)=d(k-i)-\wbf^T(k)\xbf(k-i)$,
for $i=1,\cdots,L$. The initial vector $\wbf(0)$ and the regularization factor are $10^{-3}\times[1,\cdots,1]^T$ and
$10^{-12}$, respectively. The learning curves are the results of averaging the outcomes of 500 trials.
\begin{table}[t!]
\caption{The coefficients of unknown systems $\wbf_o$, $\wbf'_o$, and $\wbf''_o$. \label{tab2-eusipco}}
\begin{center}
\begin{tabular}{|c|c|c|}\hline
$\wbf_o$&$\wbf'_o$&$\wbf''_o$\\\hline
{\bf 24e-2}&2e-7&2e-8\\
2e-8&-21e-10&-1e-9\\
{\bf -23e-2}&17e-8&1e-7\\
-3e-7&21e-8&-3e-7\\
{\bf 5e-1}&-3e-7&{\bf -64e-3}\\
-1e-9&{\bf 24e-2}&{\bf 2e-1}\\
{\bf 2e-1}&{\bf 7e-1}&{\bf 5e-1}\\
1e-7&{\bf 2e-1}&{\bf 2e-1}\\
-5e-8&{\bf 33e-2}&{\bf -64e-3}\\
12e-6&{\bf -6e-1}&-5e-5\\
1e-8&-5e-7&12e-6\\
-5e-6&18e-9&1e-8\\
4e-6&-5e-7&-5e-6\\
-1e-7&21e-8&4e-6\\
{\bf -2e-1}&-11e-8&-1e-5\\\hline
\end{tabular}
\end{center}
\end{table}
\subsubsection{Scenario 1}
In this scenario, we have implemented the IS-SM-AP\abbrev{IS-SM-AP}{Improved S-SM-AP}, the SSM-AP\abbrev{SSM-AP}{Sparsity-Aware SM-AP}, the SM-PAPA\abbrev{SM-PAPA}{Set-Membership Proportionate AP Algorithm}, and the NLMS\abbrev{NLMS}{Normalized LMS} algorithms to identify the three unknown sparse systems in Table \ref*{tab2-eusipco}. The convergence factor of the NLMS\abbrev{NLMS}{Normalized LMS} algorithm is $\mu =0.9$. The constant $\epsilon$ in the IS-SM-AP\abbrev{IS-SM-AP}{Improved S-SM-AP} algorithm is chosen as $2\times10^{-4}$; that is, on average, 5 out of 15 coefficients (boldface coefficients in $\wbf_o$, $\wbf'_o$, and $\wbf''_o$) are updated at each iteration. We have selected $\alpha=5\times10^{-3}$, $\beta=5$, and $\varepsilon=100$ for the SM-PAPA\abbrev{SM-PAPA}{Set-Membership Proportionate AP Algorithm} and the SSM-AP\abbrev{SSM-AP}{Sparsity-Aware SM-AP} algorithms. In the SSM-AP\abbrev{SSM-AP}{Sparsity-Aware SM-AP} algorithm, we have used the GMF\abbrev{GMF}{Geman-McClure Function} as the approximation of the $l_0$ norm.
\begin{figure}[t!]
\centering
\subfigure[b][]{\includegraphics[width=.48\linewidth,height=7cm]{Figs/sim1-eusipco.pdf}
\label{fig:sim1-eusipco}}
\subfigure[b][]{\includegraphics[width=.48\linewidth,height=7cm]{Figs/sim2-eusipco.pdf}
\label{fig:sim2-eusipco}}
\subfigure[b][]{\includegraphics[width=.48\linewidth,height=7cm]{Figs/sim3-eusipco.pdf}
\label{fig:sim3-eusipco}}
\caption{The learning curves of the SM-PAPA, the SSM-AP, the IS-SM-AP, and the NLMS algorithms applied on: (a) $\wbf_o$; (b) $\wbf'_o$; (c) $\wbf''_o$. \label{fig:1-eusipco}}
\end{figure}
Figures \ref{fig:sim1-eusipco}, \ref{fig:sim2-eusipco}, and \ref{fig:sim3-eusipco} depict the learning curves for
the IS-SM-AP\abbrev{IS-SM-AP}{Improved S-SM-AP}, the SM-PAPA\abbrev{SM-PAPA}{Set-Membership Proportionate AP Algorithm}, the SSM-AP\abbrev{SSM-AP}{Sparsity-Aware SM-AP}, and the NLMS\abbrev{NLMS}{Normalized LMS} algorithms to identify the unknown systems $\wbf_o$, $\wbf'_o$, and $\wbf''_o$, respectively. The average number of updates implemented by the IS-SM-AP\abbrev{IS-SM-AP}{Improved S-SM-AP}, the SM-PAPA\abbrev{SM-PAPA}{Set-Membership Proportionate AP Algorithm}, and the SSM-AP\abbrev{SSM-AP}{Sparsity-Aware SM-AP} algorithms is given in columns 2 to 4 of Table \ref{tab:update-rate-eusipco}.
In addition, we have applied all the aforementioned algorithms in this scenario, using the parameters that
were already defined in the previous paragraph, but changing the input signal model to an
autoregressive (AR)\abbrev{AR}{Autoregressive} process in order to identify the unknown system $\wbf_o$. The new input signal is generated as a first-order AR\abbrev{AR}{Autoregressive} process defined as
$x(k)=0.95x(k-1)+n(k)$.
In this case, the learning curves of the algorithms are shown in Figure~\ref{fig:sim4-eusipco},
and the average number of updates performed by the IS-SM-AP\abbrev{IS-SM-AP}{Improved S-SM-AP}, the SM-PAPA\abbrev{SM-PAPA}{Set-Membership Proportionate AP Algorithm}, and the SSM-AP\abbrev{SSM-AP}{Sparsity-Aware SM-AP} algorithms
are presented in the fifth column of Table~\ref{tab:update-rate-eusipco}. Also, the total numbers of arithmetic operations required by the IS-SM-AP\abbrev{IS-SM-AP}{Improved S-SM-AP}, the SM-PAPA\abbrev{SM-PAPA}{Set-Membership Proportionate AP Algorithm}, and the SSM-AP\abbrev{SSM-AP}{Sparsity-Aware SM-AP} algorithms over all iterations are 41635, 110835, and 84396, respectively.
Observe that, in every scenario we tested, the IS-SM-AP\abbrev{IS-SM-AP}{Improved S-SM-AP} algorithm performed as well as
the other state-of-the-art sparsity-aware algorithms, but this algorithm has the advantage of
requiring fewer computations since at each iteration in which an update occurs only
a subset (on average, one third) of the coefficients is updated. Another interesting observation is that the SM-PAPA\abbrev{SM-PAPA}{Set-Membership Proportionate AP Algorithm} algorithm works better with BPSK\abbrev{BPSK}{Binary Phase-Shift Keying} input signal,
whereas the SSM-AP\abbrev{SSM-AP}{Sparsity-Aware SM-AP} algorithm is slightly better when a correlated input signal is used.
\begin{figure}[t!]
\centering
\includegraphics[width=1\linewidth]{Figs/sim4-eusipco.pdf}
\caption{The learning curves of the SM-PAPA, the SSM-AP, the IS-SM-AP, and the NLMS algorithms applied on $\wbf_o$ using AR input signal.\label{fig:sim4-eusipco}}
\end{figure}
\begin{table*}
\caption{The average number of updates implemented by
the IS-SM-AP, the SM-PAPA, and the SSM-AP algorithms \label{tab:update-rate-eusipco}}
\begin{center}
\begin{tabular}{|l|c|c|c|c|}\hline
Algorithm&$\wbf_o$ BPSK input&$\wbf'_o$ BPSK input&$\wbf''_o$ BPSK input& $\wbf_o$ AR input\\\hline
IS-SM-AP&6.3$\%$&6.3$\%$&7.6$\%$&8.4$\%$\\
SM-PAPA&5.3$\%$&5.3$\%$&5.9$\%$&7.7$\%$\\
SSM-AP&8.9$\%$&8.9$\%$&20.5$\%$&5.6$\%$\\\hline
\end{tabular}
\end{center}
\end{table*}
\begin{figure}[t!]
\centering
\subfigure[b][]{\includegraphics[width=.48\linewidth,height=7cm]{Figs/sim5-eusipco.pdf}
\label{fig:sim5-eusipco}}
\subfigure[b][]{\includegraphics[width=.48\linewidth,height=7cm]{Figs/sim6-eusipco.pdf}
\label{fig:sim6-eusipco}}
\subfigure[b][]{\includegraphics[width=.48\linewidth,height=7cm]{Figs/sim7-eusipco.pdf}
\label{fig:sim7-eusipco}}
\caption{The learning curves of the AP and the IS-AP algorithms applied on: (a) $\wbf_o$; (b) $\wbf'_o$; (c) $\wbf''_o$. \label{fig:2-eusipco}}
\end{figure}
\subsubsection{Scenario 2}
In this scenario, we have applied the AP\abbrev{AP}{Affine Projection} and the IS-AP\abbrev{IS-AP}{Improved S-AP} algorithms to identify the three unknown sparse systems in Table \ref*{tab2-eusipco}. To identify $\wbf_o$ and $\wbf'_o$ we choose the convergence factor $\mu=0.6$ and to identify $\wbf''_o$ we adopt $\mu=0.1$. Figures \ref{fig:sim5-eusipco}, \ref{fig:sim6-eusipco}, and \ref{fig:sim7-eusipco} show the learning curves for
the AP\abbrev{AP}{Affine Projection} and the IS-AP\abbrev{IS-AP}{Improved S-AP} algorithms to identify the
unknown systems $\wbf_o$, $\wbf'_o$, and $\wbf''_o$, respectively.
Moreover, we have applied the AP\abbrev{AP}{Affine Projection} and the IS-AP\abbrev{IS-AP}{Improved S-AP} algorithms in this scenario, with the same parameters, but changing the input signal model to the
autoregressive (AR)\abbrev{AR}{Autoregressive} process of Scenario 1 in order to identify the unknown system $\wbf_o$. The convergence factor $\mu$ is equal to 0.6. Their learning curves are shown in Figure~\ref{fig:sim8-eusipco}. By comparing Figures \ref{fig:sim4-eusipco} and \ref{fig:sim8-eusipco}, we can observe the value of set-membership filtering. In fact, by utilizing the SMF\abbrev{SMF}{Set-Membership Filtering} approach, not only do we require fewer arithmetic operations, but we also improve the steady-state performance. Note that we have obtained a better MSE\abbrev{MSE}{Mean-Squared Error} in all figures of Scenario 1 compared to their corresponding figures in Scenario 2.
\begin{figure}[t!]
\centering
\includegraphics[width=1\linewidth]{Figs/sim8-eusipco.pdf}
\caption{The learning curves of the AP and the IS-AP algorithms applied on $\wbf_o$ using AR input signal.\label{fig:sim8-eusipco}}
\end{figure}
\begin{figure}[t!]
\centering
\subfigure[b][]{\includegraphics[width=.48\linewidth,height=7cm]{Figs/RLS_sys1_sparse.pdf}
\label{fig:RLS_sys1_sparse}}
\subfigure[b][]{\includegraphics[width=.48\linewidth,height=7cm]{Figs/RLS_sys2_sparse.pdf}
\label{fig:RLS_sys2_sparse}}
\subfigure[b][]{\includegraphics[width=.48\linewidth,height=7cm]{Figs/RLS_sys3_sparse.pdf}
\label{fig:RLS_sys3_sparse}}
\caption{The learning curves of the RLS, the S-RLS, the $l_0$-RLS, and the ASVB-L algorithms applied to identify: (a) $\wbf_o$; (b) $\wbf'_o$; (c) $\wbf'''_o$. \label{fig:RLS_sparse}}
\end{figure}
\subsection{Simulation results of the RLS-based algorithms} \label{sub:simulation_rls_based_sparse}
Here, the RLS\abbrev{RLS}{Recursive Least-Squares}, the S-RLS\abbrev{S-RLS}{RLS Algorithm for Sparse System}, the AS-RLS\abbrev{AS-RLS}{Alternative S-RLS}, the $l_0$-RLS\abbrev{$l_0$-RLS}{$l_0$ Norm RLS}, the A-$l_0$-RLS\abbrev{A-$l_0$-RLS}{Alternative $l_0$-RLS}, the ASVB-L~\cite{Themelis_BayesianAP_tsp2014,Giampouras_Bayesian_LR_Subspace_eusipco2015,Themelis_Bayesian_GIGMC_eusipco2015}\abbrev{ASVB-L}{Adaptive Sparse Variational Bayes Iterative Scheme Based on Laplace Prior}, the DS-S-RLS\abbrev{DS-S-RLS}{Data-Selective S-RLS}, the DS-$l_0$-RLS\abbrev{DS-$l_0$-RLS}{Data-Selective $l_0$-RLS}, and the data-selective ASVB-L (DS-ASVB-L)\abbrev{DS-ASVB-L}{Data-Selective ASVB-L} algorithms are tested to identify three unknown sparse systems of order 14. The first model is an arbitrary sparse system $\wbf_o$, the second model is a block sparse
system $\wbf'_o$, and the third model, $\wbf'''_o$, is a sparse system whose coefficients change at the 500th and the 1000th iterations.
The coefficients of $\wbf_o$ and $\wbf'_o$ are listed in Table~\ref{tab2-eusipco}.
The input is an autoregressive signal generated by $x(k)=0.95x(k-1)+n(k-1)$.
The signal-to-noise ratio (SNR)\abbrev{SNR}{Signal-to-Noise Ratio} is set to be 20 dB, meaning that the noise variance is $\sigma_n^2=0.01$.
The bound on the estimation error is set to be $\gammabar=\sqrt{5\sigma_n^2}$. The initial vector $\wbf(0)$ and $\lambda$ are $[1,\cdots,1]^T$ and
$0.97$, respectively. The parameter $\delta$ is $0.2$, and the constant $\epsilon$ is chosen as $0.015$. For the DS-$l_0$-RLS\abbrev{DS-$l_0$-RLS}{Data-Selective $l_0$-RLS} and the $l_0$-RLS\abbrev{$l_0$-RLS}{$l_0$ Norm RLS} algorithms, the parameters $\alpha$ and $\beta$ are chosen as 0.005 and 5, respectively. We have chosen the GMF\abbrev{GMF}{Geman-McClure Function} as the approximation of the $l_0$ norm. The depicted learning curves represent the results of averaging the outcomes of 500 trials.
\begin{figure}[t!]
\centering
\subfigure[b][]{\includegraphics[width=.48\linewidth,height=7cm]{Figs/DS_RLS_sys1_sparse.pdf}
\label{fig:DS_RLS_sys1_sparse}}
\subfigure[b][]{\includegraphics[width=.48\linewidth,height=7cm]{Figs/DS_RLS_sys2_sparse.pdf}
\label{fig:DS_RLS_sys2_sparse}}
\subfigure[b][]{\includegraphics[width=.48\linewidth,height=7cm]{Figs/DS_RLS_sys4_sparse.pdf}
\label{fig:DS_RLS_sys4_sparse}}
\caption{The learning curves of the DS-S-RLS, the DS-$l_0$-RLS, and the DS-ASVB-L algorithms applied to identify: (a) $\wbf_o$; (b) $\wbf'_o$; (c) $\wbf'''_o$. \label{fig:DS_RLS_sparse}}
\end{figure}
Figures~\ref{fig:RLS_sys1_sparse}, \ref{fig:RLS_sys2_sparse}, and \ref{fig:RLS_sys3_sparse} show the learning curves for the RLS\abbrev{RLS}{Recursive Least-Squares}, the S-RLS\abbrev{S-RLS}{RLS Algorithm for Sparse System}, the $l_0$-RLS\abbrev{$l_0$-RLS}{$l_0$ Norm RLS}, and the ASVB-L\abbrev{ASVB-L}{Adaptive Sparse Variational Bayes Iterative Scheme Based on Laplace Prior} algorithms to identify the unknown systems $\wbf_o$, $\wbf'_o$, and $\wbf'''_o$, respectively. Figures~\ref{fig:DS_RLS_sys1_sparse}, \ref{fig:DS_RLS_sys2_sparse}, and \ref{fig:DS_RLS_sys4_sparse} illustrate the learning curves for the DS-S-RLS\abbrev{DS-S-RLS}{Data-Selective S-RLS}, the DS-$l_0$-RLS\abbrev{DS-$l_0$-RLS}{Data-Selective $l_0$-RLS}, and the DS-ASVB-L\abbrev{DS-ASVB-L}{Data-Selective ASVB-L} algorithms to identify the unknown systems $\wbf_o$, $\wbf'_o$, and $\wbf'''_o$, respectively. The average number of updates implemented by the DS-S-RLS\abbrev{DS-S-RLS}{Data-Selective S-RLS}, the DS-$l_0$-RLS\abbrev{DS-$l_0$-RLS}{Data-Selective $l_0$-RLS}, and the DS-ASVB-L\abbrev{DS-ASVB-L}{Data-Selective ASVB-L} algorithms are presented in columns 2 to 4 of Table~\ref{tab:update-rate-DS-RLS-sparse}.
Observe that, in every scenario we tested, the S-RLS\abbrev{S-RLS}{RLS Algorithm for Sparse System} and the $l_0$-RLS\abbrev{$l_0$-RLS}{$l_0$ Norm RLS} algorithms performed as well as the RLS\abbrev{RLS}{Recursive Least-Squares} algorithm. The S-RLS\abbrev{S-RLS}{RLS Algorithm for Sparse System} algorithm has lower computational complexity compared to the $l_0$-RLS\abbrev{$l_0$-RLS}{$l_0$ Norm RLS} algorithm. As can be seen, the performances of the S-RLS\abbrev{S-RLS}{RLS Algorithm for Sparse System} and the DS-S-RLS\abbrev{DS-S-RLS}{Data-Selective S-RLS} algorithms are close to those of the ASVB-L\abbrev{ASVB-L}{Adaptive Sparse Variational Bayes Iterative Scheme Based on Laplace Prior} and the DS-ASVB-L\abbrev{DS-ASVB-L}{Data-Selective ASVB-L} algorithms, respectively, while the former ones require fewer computational resources.
Finally, Figures~\ref{fig:A_RLS_sys1_sparse} and~\ref{fig:A_RLS_sys2_sparse} depict the learning curves of the S-RLS, the AS-RLS, the $l_0$-RLS, and the A-$l_0$-RLS algorithms, when they are applied to identify the unknown systems $\wbf_o$ and $\wbf_o'$, respectively. As can be seen, the performances of the AS-RLS and the A-$l_0$-RLS algorithms are similar to the S-RLS and the $l_0$-RLS algorithms, respectively.
\begin{table*}
\caption{The average number of updates implemented by
the DS-S-RLS, the DS-$l_0$-RLS, and the DS-ASVB-L algorithms \label{tab:update-rate-DS-RLS-sparse}}
\begin{center}
\begin{tabular}{|l|c|c|c|}\hline
Algorithm&$\wbf_o$ &$\wbf'_o$ &$\wbf'''_o$ \\\hline
DS-S-RLS&11.95$\%$&14.13$\%$&19.40$\%$\\
DS-$l_0$-RLS&8.72$\%$&10.90$\%$&17.74$\%$\\
DS-ASVB-L&9.18$\%$&10.53$\%$&19.69$\%$\\\hline
\end{tabular}
\end{center}
\end{table*}
\begin{figure}[t!]
\centering
\subfigure[b][]{\includegraphics[width=.48\linewidth,height=7cm]{Figs/Alternative_wo.pdf}
\label{fig:A_RLS_sys1_sparse}}
\subfigure[b][]{\includegraphics[width=.48\linewidth,height=7cm]{Figs/Alternative_wo_prime.pdf}
\label{fig:A_RLS_sys2_sparse}}
\caption{The learning curves of the S-RLS, the AS-RLS, the $l_0$-RLS, and the A-$l_0$-RLS algorithms applied to identify: (a) $\wbf_o$; (b) $\wbf'_o$. \label{fig:A_RLS_sparse}}
\end{figure}
\section{Conclusions} \label{sec:conclusions-eusipco}
In this chapter, we have proposed the S-SM-AP\abbrev{S-SM-AP}{Simple SM-AP} and the IS-SM-AP\abbrev{IS-SM-AP}{Improved S-SM-AP} algorithms
to take advantage of sparsity in the signal models while attaining low computational
complexity.
To reach this target, we have derived a simple update equation which only updates
the filter coefficients whose magnitudes are greater than a predetermined value.
Also, this method is jointly applied with the well-known set-membership approach
aiming at obtaining even lower computational complexity and better convergence rate.
The simulation results have shown the excellent performance of the algorithm and
lower computational complexity as compared to some other sparsity-aware data-selective
adaptive filters.
{Indeed, the IS-SM-AP\abbrev{IS-SM-AP}{Improved S-SM-AP} algorithm performed as well as the SM-PAPA\abbrev{SM-PAPA}{Set-Membership Proportionate AP Algorithm} algorithm while requiring fewer arithmetic operations
(for the scenarios in Section \ref{sec:simulations-eusipco}, it entailed about 38$\%$ of the operations spent by the SM-PAPA).}\abbrev{SM-PAPA}{Set-Membership Proportionate AP Algorithm} Also, the numerical results in Section~\ref{sec:simulations-eusipco} confirm the importance of SMF\abbrev{SMF}{Set-Membership Filtering} technique for the proposed algorithm.
Moreover, we have used the discard function and the $l_0$ norm in order to propose the S-RLS\abbrev{S-RLS}{RLS Algorithm for Sparse System} and the $l_0$-RLS\abbrev{$l_0$-RLS}{$l_0$ Norm RLS} algorithms, respectively, to exploit the sparsity in the involved signal models. Also, we have employed the data-selective strategy to implement an update when the output estimation error is greater than a pre-described positive value leading to
reduced update rate and lower computational complexity. The simulation results have shown the excellent performance of the proposed algorithms as compared to the standard RLS\abbrev{RLS}{Recursive Least-Squares} algorithm being
competitive with the recently proposed state-of-the-art ASVB-L\abbrev{ASVB-L}{Adaptive Sparse Variational Bayes Iterative Scheme Based on Laplace Prior} algorithm, which requires many more computations. It is worth mentioning that there are many RLS-based algorithms exploiting sparsity in signal and system models~\cite{Angelosante_rls-sparse_cd_tsp2010,Angelosante_rls_lasso_sparse_icassp2009,Valdman_rls_lar_eusipco2014}; however, their update equations are entirely different from those of the algorithms proposed in this chapter. Therefore, we refrain from comparing the RLS-based algorithms proposed here with other RLS-based algorithms in the literature.
\chapter{Feature LMS algorithms}
Among the adaptive filtering algorithms, the popular least-mean-square (LMS)\abbrev{LMS}{Least-Mean-Square} algorithm, first introduced in 1960~\cite{Widrow_lms_1960,Maloberti_history_book2016}, is widely considered the most used algorithm in the field. Elaborate studies of the LMS\abbrev{LMS}{Least-Mean-Square} algorithm were presented in~\cite{Widrow_adaptiveFiltering_book1985,Diniz_adaptiveFiltering_book2013}.
Also, the LMS\abbrev{LMS}{Least-Mean-Square} and its variants solve real problems including active noise control~\cite{Rupp_active_noise_control_eusipco2014}, digital equalization~\cite{Rebhi_digital_equalizer_ICTON2016}, continuous-time filter tuning~\cite{Westwick_continuous_time_filter_tuning_IEECDS2005}, system identification~\cite{Ciochina_LMS_system_identification_eusipco2016}, among others.
In the previous chapter, some adaptive filtering algorithms exploiting the sparsity in the system parameters were proposed. Also, a number of adaptive filtering algorithms exploiting the sparsity in the model coefficients have been introduced by imposing some constraints
on the cost function~\cite{Markus_sparseSMAP_tsp2014,Candes_reweightedl1_fourier2008,Gasso_nonconvex_penalties_tsp2009,Vitor_SparsityAwareAPA_sspd2011}. This strategy relies on the attraction of some coefficient values to zero, enabling the detection of nonrelevant parameters of the model.
In this chapter, we introduce the feature LMS (F-LMS)\abbrev{F-LMS}{Feature LMS} family of algorithms, which induces simple sparsity properties hidden in the parameters. The type of feature to seek determines the structure of the feature matrix $\Fbf (k)$\symbl{$\Fbf(k)$}{Feature matrix} to be applied in the constraints of the F-LMS\abbrev{F-LMS}{Feature LMS} algorithm. In fact, a plethora of feature-aware algorithms can be defined by applying smart combinations of feature matrices to the coefficient vector. In this work, some simple cases are discussed, whereas many more advanced solutions will be exploited in future publications. Moreover, by introducing the {\it feature function}, we propose the low-complexity F-LMS (LCF-LMS) algorithm to reduce the computational complexity of the F-LMS algorithms. The LCF-LMS algorithm requires fewer multiplications when computing the output signal.
The content of this chapter was partially published in~\cite{Hamed_Flms_ICASSP2018}. This chapter is organized as follows.
Section~\ref{sec:F-LMS-chap7} proposes the F-LMS\abbrev{F-LMS}{Feature LMS} family of algorithms.
Some examples of F-LMS\abbrev{F-LMS}{Feature LMS} algorithms for systems with lowpass and highpass spectrum are introduced in Section~\ref{sec:example_algorithms-chap7}. The LCF-LMS and the alternative LCF-LMS (ALCF-LMS) algorithms are derived in Sections~\ref{sec:low_comp_f_lms-chap7} and~\ref{sec:a-lcf-lms}, respectively. The matrix representation of the feature function is explained in Section~\ref{sec:matrix-trend-function}.
Simulation results are presented in Section~\ref{sec:simulations-chap7} and the conclusions are drawn in Section~\ref{sec:conclusions-chap7}.
\section{The Feature LMS algorithms} \label{sec:F-LMS-chap7}
Feature LMS (F-LMS)\abbrev{F-LMS}{Feature LMS} refers to a family of LMS-type\abbrev{LMS}{Least-Mean-Square} algorithms capable of exploiting the features inherent to the unknown systems to be identified. These algorithms minimize the general objective function \symbl{${\cal P}(\cdot)$}{Sparsity-promoting penalty function}
\begin{align}
\xi_{\text{F-LMS}} (k) = \underbrace{ \frac{1}{2}|e(k)|^2 }_{\text{standard LMS term}} + \underbrace{\alpha {\cal P} \left( \Fbf(k) \wbf(k) \right) }_{\text{feature-inducing term}} , \label{eq:objective_function_general-chap7}
\end{align}
where $\alpha\in\mathbb{R}_+$ stands for the weight given to the {\it sparsity-promoting penalty function} ${\cal P}$, which maps a vector to the nonnegative reals $\mathbb{R}_+$, and $\Fbf(k)$ is the so-called {\it feature matrix} responsible for revealing the hidden sparsity, i.e., the result of applying $\Fbf(k)$ to $\wbf(k)$ should be a sparse vector (in the sense that most entries of the vector $\Fbf(k)\wbf(k)$ should be close or equal to zero).
The penalty function ${\cal P}$ can be any sparsity-promoting penalty function that is almost everywhere differentiable in order to allow for
gradient-based methods.
Examples of suitable functions are:
(i) vector norms, especially the widely used $l_1$ norm~\cite{Candes_reweightedl1_fourier2008,Vitor_SparsityAwareAPA_sspd2011};
(ii) vector norms combined with shrinking strategies~\cite{Hamed_eusipco2016};
(iii) a function that approximates the $l_0$ norm \cite{Markus_sparseSMAP_tsp2014,Markus_apssi_icassp2013}.
The feature matrix $\Fbf(k)$ can vary at each iteration, and it represents any linear combination that, when applied to $\wbf(k)$, results in a sparse vector.
In practice, $\Fbf(k)$ should be chosen based on some previous knowledge about the unknown system $\wbf_o$.
For instance, $\wbf_o$ can represent a lowpass or a highpass filter, it can have linear phase, it can be an upsampled or downsampled signal, etc.
All these features can be exploited by the F-LMS\abbrev{F-LMS}{Feature LMS} algorithm in order to accelerate convergence and/or achieve lower mean-squared error (MSE).\abbrev{MSE}{Mean-Squared Error}
The resulting gradient-based algorithms using the objective function given in~\eqref{eq:objective_function_general-chap7} are known as F-LMS\abbrev{F-LMS}{Feature LMS} algorithms, and their
recursions have the general form
\begin{align}
\wbf(k+1)=\wbf(k)+\mu e(k)\xbf(k) -\mu\alpha\pbf(k), \label{eq:update_equation-chap7}
\end{align}
where $\mu \in \mathbb{R}_+$ is the step size, which should be small enough to ensure convergence~\cite{Diniz_adaptiveFiltering_book2013},
and $\pbf(k) \in \mathbb{R}^{N+1}$ is the gradient of function ${\cal P} \left( \Fbf(k) \wbf(k) \right)$. \symbl{$\pbf(k)$}{Gradient of ${\cal P} \left( \Fbf(k) \wbf(k) \right)$}
\section{Examples of F-LMS algorithms} \label{sec:example_algorithms-chap7}
From Section~\ref{sec:F-LMS-chap7}, it is clear that the F-LMS\abbrev{F-LMS}{Feature LMS} family contains infinitely many algorithms. So, in this section we introduce some of these algorithms in order to illustrate how some specific features of the unknown system can be exploited. For the sake of clarity, we focus on simple algorithms and, therefore, we choose function ${\cal P}$ to be the $l_1$ norm and the feature matrix to be time-invariant $\Fbf$ so that the cost function in~\eqref{eq:objective_function_general-chap7} simplifies to
\begin{align}
\xi_{\text{F-LMS}} (k) = \frac{1}{2}|e(k)|^2 + \alpha \|\Fbf\wbf(k)\|_1, \label{eq:objective_function-chap7}
\end{align}
where $\|\cdot\|_1$ denotes the $l_1$ norm, given for a vector $\wbf\in\mathbb{R}^{N+1}$ by $\|\wbf\|_1=\sum_{i=0}^N|w_i|$. As a consequence, the reader will notice that the computational complexity of the algorithms proposed in this section is only slightly higher than that of the LMS\abbrev{LMS}{Least-Mean-Square} algorithm, as the computation of $\pbf(k)$ required in~\eqref{eq:update_equation-chap7} is very simple (it involves no multiplication or division).
\subsection{The F-LMS algorithm for lowpass systems} \label{sub:F-LMS-lowpass-chap7}
Most systems found in practice have their energy concentrated mainly in the low frequencies. If the unknown system has lowpass narrowband spectrum, then its impulse response $\wbf_o$ is smooth, meaning that the difference between adjacent coefficients is small (probably close to zero).
The adaptive filtering algorithm can take advantage of this feature present in the unknown system by selecting the feature matrix properly.
Indeed, by selecting $\Fbf$ as $\Fbf_l$, where $\Fbf_l$ is an $N \times (N+1)$ matrix defined as \symbl{$\Fbf_l$}{Feature matrix for systems with lowpass narrowband spectrum}
\begin{align}
\Fbf_l=\left[\begin{array}{ccccc}1&-1&0&\cdots&0\\0&1&-1&\cdots&0\\\vdots&&\ddots&\ddots&\\0&0&\cdots&1&-1\end{array}\right], \label{eq:F_lowpass}
\end{align}
and $\|\Fbf_l\wbf(k)\|_1=\sum_{i=0}^{N-1}|w_i(k)-w_{i+1}(k)|$,
the optimization problem in~\eqref{eq:objective_function-chap7} can be interpreted as follows: we seek a $\wbf(k)$ that minimizes both the squared error (LMS\abbrev{LMS}{Least-Mean-Square} term) and the distances between adjacent coefficients of $\wbf(k)$. In other words, the F-LMS\abbrev{F-LMS}{Feature LMS} algorithm for lowpass systems acts like the LMS\abbrev{LMS}{Least-Mean-Square} algorithm, but enforcing $\wbf(k)$ to be a lowpass system. It is worth mentioning that if $\wbf_o$ is indeed a lowpass system, then matrix $\Fbf_l$ yields a sparse vector $\Fbf_l\wbf(k)$.\footnote{A matrix similar to $\Fbf_l$ in~\eqref{eq:F_lowpass} is already known by statisticians working on a field called {\it trend filtering}~\cite{Wang_Trend_Graphs_jmlr2016}.}
Thus, the F-LMS\abbrev{F-LMS}{Feature LMS} algorithm for lowpass systems is defined by the recursion given in~\eqref{eq:update_equation-chap7}, but replacing
vector $\pbf(k)$ with $\pbf_l(k)$ defined as
\begin{align}
\left\{\begin{array}{ll}p_{l,i}(k)={\rm sgn}(w_0(k)-w_1(k))&{\rm if~} i=0,\\
p_{l,i}(k)=-{\rm sgn}(w_{i-1}(k)-w_i(k))+{\rm sgn}(w_i(k)-w_{i+1}(k))&{\rm if~} i=1,\cdots,N-1,\\
p_{l,i}(k)=-{\rm sgn}(w_{N-1}(k)-w_{N}(k))&{\rm if~}i=N,\end{array}\right. \label{eq:p_lowpass-chap7}
\end{align}
where ${\rm sgn}(\cdot)$ denotes the sign function.
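For concreteness, the three cases in~\eqref{eq:p_lowpass-chap7} can be implemented with a single vector of sign terms. The following Python/NumPy sketch is our illustration, not part of the original derivation; the function name is ours:
\begin{verbatim}
import numpy as np

def grad_l1_lowpass(w):
    # Subgradient of ||F_l w||_1 = sum_i |w[i] - w[i+1]| w.r.t. w.
    d = np.sign(w[:-1] - w[1:])   # d[i] = sgn(w_i - w_{i+1})
    p = np.zeros_like(w)
    p[:-1] += d                   # +sgn(w_i - w_{i+1}) terms
    p[1:] -= d                    # -sgn(w_{i-1} - w_i) terms
    return p                      # matches the three cases above
\end{verbatim}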
As previously explained, the F-LMS\abbrev{F-LMS}{Feature LMS} algorithm above tries to reduce the distances between consecutive coefficients of $\wbf(k)$, i.e.,
matrix $\Fbf_l$ can be understood as the process of windowing $\wbf(k)$ with a window of length $2$ (i.e., two coefficients are considered at a time). We can increase the window length, so as to smooth over more coefficients simultaneously, by nesting linear combinations as follows
\begin{align}
\Fbf_l^{M{\rm -nested}} = \Fbf_l^{(M)} \Fbf_l^{(M-1)} \cdots \Fbf_l^{(1)} \Fbf_l ,
\end{align}
where $\Fbf_l^{(m)}$ has the same structure given in~\eqref{eq:F_lowpass}, but with $m$ fewer rows and $m$ fewer columns than $\Fbf_l$ (the factors are ordered so that the matrix products are conformable).
In addition to the previous examples, suppose that the unknown system is the result of upsampling a lowpass system by a factor of $L$. In this case, we should use matrix $\Fbf_l^*$, whose rows have $L-1$ zeros between the $\pm 1$ entries, in~\eqref{eq:objective_function-chap7}. For $L=2$, we have the following matrix
\begin{align}
\Fbf_l^*=\left[\begin{array}{cccccc}1&0&-1&0&\cdots&0\\0&1&0&-1&\cdots&0\\\vdots&&\ddots&\ddots&\ddots&\\0&0&\cdots&1&0&-1\end{array}\right], \label{eq:F*_lowpass-chap7}
\end{align}
and $\|\Fbf_l^*\wbf(k)\|_1=\sum_{i=0}^{N-2}|w_i(k)-w_{i+2}(k)|$.
Next the F-LMS\abbrev{F-LMS}{Feature LMS} algorithm using such $\Fbf_l^*$ has the update rule given in~\eqref{eq:update_equation-chap7}, but replacing $\pbf(k)$ with $\pbf_l^*(k)$ defined as
\begin{align}
\left\{\begin{array}{ll}p_{l,i}^*(k)={\rm sgn}(w_i(k)-w_{i+2}(k))&{\rm if~} i=0,1,\\
p_{l,i}^*(k)=-{\rm sgn}(w_{i-2}(k)-w_i(k))+{\rm sgn}(w_i(k)-w_{i+2}(k))&{\rm if~} i=2,\cdots,N-2,\\
p_{l,i}^*(k)=-{\rm sgn}(w_{i-2}(k)-w_{i}(k))&{\rm if~}i=N-1,N.\end{array}\right. \label{eq:p*_lowpass-chap7}
\end{align}
\subsection{The F-LMS algorithm for highpass systems} \label{sub:F-LMS-highpass-chap7}
If the unknown system $\wbf_o$ has a highpass narrowband spectrum, then adjacent coefficients tend to have similar absolute values, but with opposite signs.
Therefore, the sum of two consecutive coefficients is close to zero and we can exploit this feature in the learning process by minimizing the sum of
adjacent coefficients of $\wbf(k)$.
This can be accomplished by selecting $\Fbf$ as $\Fbf_h$, where $\Fbf_h$ is an $N \times (N+1)$ feature matrix defined as \symbl{$\Fbf_h$}{Feature matrix for systems with highpass narrowband spectrum}
\begin{align}
\Fbf_h=\left[\begin{array}{ccccc}1&1&0&\cdots&0\\0&1&1&\cdots&0\\\vdots&&\ddots&\ddots&\\0&0&\cdots&1&1\end{array}\right], \label{eq:D_highpass-chap7}
\end{align}
such that $\|\Fbf_h\wbf(k)\|_1=\sum_{i=0}^{N-1}|w_i(k)+w_{i+1}(k)|$.
The F-LMS\abbrev{F-LMS}{Feature LMS} algorithm for highpass systems is characterized by the recursion given in~\eqref{eq:update_equation-chap7}, but replacing $\pbf(k)$
with $\pbf_h(k)$, which is defined as
\begin{align}
\left\{\begin{array}{ll}p_{h,i}(k)={\rm sgn}(w_0(k)+w_1(k))&{\rm if~} i=0,\\
p_{h,i}(k)={\rm sgn}(w_{i-1}(k)+w_i(k))+{\rm sgn}(w_i(k)+w_{i+1}(k))&{\rm if~} i=1,\cdots,N-1,\\
p_{h,i}(k)={\rm sgn}(w_{N-1}(k)+w_{N}(k))&{\rm if~}i=N.\end{array}\right. \label{eq:p_highpass-chap7}
\end{align}
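The highpass gradient admits an analogous vectorized sketch (again our illustration; only the sign pattern changes with respect to the lowpass case):
\begin{verbatim}
def grad_l1_highpass(w):
    # Subgradient of ||F_h w||_1 = sum_i |w[i] + w[i+1]| w.r.t. w.
    s = np.sign(w[:-1] + w[1:])   # s[i] = sgn(w_i + w_{i+1})
    p = np.zeros_like(w)
    p[:-1] += s                   # sgn(w_i + w_{i+1}) terms
    p[1:] += s                    # sgn(w_{i-1} + w_i) terms
    return p
\end{verbatim}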
Similar to the lowpass case, let us consider that the unknown system is the result of interpolating a highpass system by a factor $L=2$.
The set of interpolated highpass systems leads to a notch filter with zeros at $z=\pm \jmath$.
In this case, we can utilize $\Fbf_h^*$ in the objective function~\eqref{eq:objective_function-chap7}, where $\Fbf_h^*$ is described by
\begin{align}
\Fbf_h^*=\left[\begin{array}{cccccc}1&0&1&0&\cdots&0\\0&1&0&1&\cdots&0\\\vdots&&\ddots&\ddots&\ddots&\\0&0&\cdots&1&0&1\end{array}\right], \label{eq:F*_highpass-chap7}
\end{align}
and $\|\Fbf_h^*\wbf(k)\|_1=\sum_{i=0}^{N-2}|w_i(k)+w_{i+2}(k)|$.
Using $\Fbf_h^*$, the F-LMS\abbrev{F-LMS}{Feature LMS} recursion in~\eqref{eq:update_equation-chap7} should substitute $\pbf(k)$ by $\pbf_h^*(k)$ defined as
\begin{align}
\left\{\begin{array}{ll}p_{h,i}^*(k)={\rm sgn}(w_i(k)+w_{i+2}(k))&{\rm if~} i=0,1,\\
p_{h,i}^*(k)={\rm sgn}(w_{i-2}(k)+w_i(k))+{\rm sgn}(w_i(k)+w_{i+2}(k))&{\rm if~} i=2,\cdots,N-2,\\
p_{h,i}^*(k)={\rm sgn}(w_{i-2}(k)+w_{i}(k))&{\rm if~} i=N-1,N.\end{array}\right. \label{eq:p*_highpass-chap7}
\end{align}
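Putting the pieces together, a minimal sketch of the F-LMS\abbrev{F-LMS}{Feature LMS} recursion~\eqref{eq:update_equation-chap7} is given below. It accepts any of the gradient sketches above through the argument \texttt{grad\_feature}; the function names and the delay-line handling are our assumptions, not prescribed by the derivation:
\begin{verbatim}
def f_lms(x, d, N, mu, alpha, grad_feature):
    # w(k+1) = w(k) + mu*e(k)*x(k) - mu*alpha*p(k)
    w = np.zeros(N + 1)
    x_buf = np.zeros(N + 1)        # regressor [x(k), ..., x(k-N)]
    e = np.zeros(len(x))
    for k in range(len(x)):
        x_buf[1:] = x_buf[:-1]     # shift the delay line
        x_buf[0] = x[k]
        e[k] = d[k] - w @ x_buf    # a priori error
        w += mu * e[k] * x_buf - mu * alpha * grad_feature(w)
    return w, e
\end{verbatim}
For instance, \texttt{f\_lms(x, d, N=39, mu=0.03, alpha=0.05, grad\_feature=grad\_l1\_lowpass)} roughly matches the setting used in the simulations of Section~\ref{sec:simulations-chap7}.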
\section{Low-complexity F-LMS Algorithms} \label{sec:low_comp_f_lms-chap7}
In this section, we derive the low-complexity feature LMS (LCF-LMS)\abbrev{LCF-LMS}{Low-Complexity Feature LMS} algorithm to exploit sparsity in the linear combination of the parameters, as the F-LMS algorithms do, while also reducing the computational cost of calculating the output signal.
Here, the idea is to reduce the number of multiplications required for computing the output signal when there is a strong relation between neighboring coefficients. In systems with lowpass frequency content, for example, neighboring coefficients vary smoothly. Therefore, when the input signal is highly correlated, we can fix the value of neighboring coefficients whenever the distance (the absolute value of the difference) between two consecutive coefficients is less than a small constant $\epsilon>0$. As a result, we reduce the number of multiplications in the calculation of $y(k)\triangleq\wbf^T(k)\xbf(k)$. For instance, if for nonnegative integers $m$ and $j$, where $m,j<N$, the discrepancies between the coefficients with indices $m$ to $m+j$ are less than $\epsilon$, then we can use the $m$th coefficient as a reference. Mathematically, if $|w_{m+i+1}(k)-w_{m+i}(k)|\leq\epsilon$ for $i=0,1,2,\cdots,j-1$, then in the calculation of the output signal, instead of computing
\begin{align}
y(k)=w_m(k)x_m(k)+\cdots+w_{m+j}(k)x_{m+j}(k),
\end{align}
we can approximate $y(k)$ as
\begin{align}
\hat{y}(k)\triangleq\underbrace{w_m(k)x_m(k)+\cdots+w_m(k)x_m(k)}_{(j+1)-{\rm times}}. \label{eq:output_approx}
\end{align}
As a result, we decrease the number of multiplications from $j+1$ to one. Hence, for a block of coefficients in which the distance between any two consecutive coefficients is less than $\epsilon$, we can use the first parameter of the block as the reference parameter. As soon as the distance between two consecutive coefficients becomes greater than $\epsilon$, we will use the new one as a reference for the new block of coefficients.
To this end, for each block of coefficients in which the distance of any two consecutive coefficients is less than $\epsilon$, we have to preserve the first coefficient of the block, and the rest of them will be replaced by zero. Furthermore, when the absolute value of a coefficient is less than $\epsilon$, we can replace it with zero to avoid additional multiplication~\cite{Hamed_eusipco2016,Hamed_S_RLS_ICASSP2017}. Therefore, two subsets of parameters will be replaced by zero: (I) the coefficients whose absolute values are less than $\epsilon$, and (II) the coefficients whose distances from their antecessor are less than $\epsilon$.
The above reasoning can be implemented by means of the {\it feature function}, $\mathbb{F}_\epsilon:\mathbb{R}^{N+1}\rightarrow\mathbb{R}^{N+1}$, \symbl{$\mathbb{F}_\epsilon$}{Feature function} applied to the weight vector of the adaptive filter. The $i$th element of the feature function, for $i=0,1,\cdots,N$, is defined as
\begin{align}
\mathbb{F}_{\epsilon,i}(\wbf(k))\triangleq\left\{\begin{array}{ll}f_\epsilon(w_0(k))&{\rm if~}i=0,\\
f_\epsilon(w_i(k))&{\rm if~}|w_i(k)-w_{i-1}(k)|>\epsilon~\&~i\neq0,\\
0&{\rm if~}|w_i(k)-w_{i-1}(k)|\leq\epsilon~\&~i\neq0, \end{array}\right. \label{eq:trend_function}
\end{align}
where $f_\epsilon$ is the discard function defined in~\eqref{eq:f_epsilon-eusipco}. As can be observed, the feature function replaces the subsets (I) and (II) of the coefficients of $\wbf(k)$ with zero. Let us define $\wbf_s(k)\triangleq\mathbb{F}_\epsilon(\wbf(k))$. Figure~\ref{fig:stem_explain} shows an example of the impulse responses of $\wbf(k)$ and $\wbf_s(k)$ when $\epsilon=0.02$. As can be observed, $\wbf(k)$ has fifteen nonzero coefficients, and after applying the feature function twelve of them are replaced by zero.
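A direct transcription of~\eqref{eq:trend_function} is given below. We assume that the discard function $f_\epsilon$ simply zeroes entries of magnitude at most $\epsilon$, in line with its use in this section; the function names are ours:
\begin{verbatim}
def discard(v, eps):
    # assumed form of the discard function f_eps
    return v if abs(v) > eps else 0.0

def feature_function(w, eps):
    # i-th element of F_eps(w), following eq. (trend_function)
    ws = np.zeros_like(w)
    ws[0] = discard(w[0], eps)
    for i in range(1, len(w)):
        if abs(w[i] - w[i - 1]) > eps:
            ws[i] = discard(w[i], eps)  # subset (I) handled by discard
        # else ws[i] stays zero, i.e., subset (II)
    return ws

def discard_vector(w, eps):
    # binary vector b(k) = f_eps(w(k)): 1 where |w_i| > eps, else 0
    return (np.abs(w) > eps).astype(float)
\end{verbatim}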
\begin{figure}[t!]
\centering
\subfigure[b][]{\includegraphics[width=.48\linewidth,height=7cm]{Figs/stem_explain_prio.pdf}
\label{fig:stem_explain_prio}}
\subfigure[b][]{\includegraphics[width=.48\linewidth,height=7cm]{Figs/stem_explain_pos.pdf}
\label{fig:stem_explain_pos}}
\caption{The impulse response of (a) $\wbf(k)$; (b) $\wbf_s(k)=\mathbb{F}_\epsilon(\wbf(k))$ for $\epsilon=0.02$. \label{fig:stem_explain}}
\end{figure}
Our goal is to utilize $\wbf_s(k)=\mathbb{F}_\epsilon(\wbf(k))$ in the calculation of the output signal. However, we must determine from which subset of coefficients of $\wbf(k)$ the zero elements of $\wbf_s(k)$ came, i.e., subset (I) or (II). In fact, for some $i$, $w_{s_i}(k)$ is zero if and only if $w_i(k)$ belongs to subset (I) or (II). If $w_i(k)$ belongs to subset (I), then we can directly apply $w_{s_i}(k)$ to calculate the output signal, i.e., we use $w_{s_i}(k)x_i(k)=0$. However, if $w_i(k)$ belongs to subset (II), then we must apply the last nonzero coefficient of $\wbf_s(k)$ before the $i$th index to compute the output signal. Assume that this nonzero coefficient has index $m$; then we use $w_{s_m}(k)$ instead of $w_i(k)$ since their values are close to each other. Hence, in the calculation of the output signal, we use $w_{s_m}(k)x_m(k)$ instead of $w_{s_i}(k)x_i(k)$.
In order to determine the origin of the zero coefficients in $\wbf_s(k)$, we define a binary vector $\bbf(k)\in\{0,1\}^{N+1}$ as $\bbf(k)=\fbf_\epsilon(\wbf(k))$, where $\fbf_\epsilon$ is the discard vector function. Then, for some $i$, if $w_{s_i}(k)$ and $b_i(k)$ are zero, we infer that $w_i(k)$ belongs to subset (I). However, if $w_{s_i}(k)=0$ and $b_i(k)=1$, then we conclude that $w_i(k)$ belongs to subset (II).
Finally, we present the LCF-LMS\abbrev{LCF-LMS}{Low-Complexity Feature LMS} algorithm in Table~\ref{tb:LCF-LMS}. This algorithm requires fewer multiplications than the LMS algorithm.
\begin{table}[t!]
\caption{Low-complexity feature LMS algorithm}
\begin{center}
\begin{footnotesize}
\begin {tabular}{|l|} \hline\\ \hspace{0.7cm}{\bf LCF-LMS Algorithm}\\ \\
\hline\\
Initialization
\\
$\wbf_s(0)=\bbf(0)=\wbf(0)=[0~\cdots~0]^T$\\
choose $\mu$ in the range $0<\mu\ll 1$\\
choose small constant $\epsilon>0$\\
Do for $k\geq0$\\
\hspace*{0.15cm} ${\rm temp}=0$, $y(k)=0$\\
\hspace*{0.15cm} for $i=0$ to $N$\\
\hspace*{0.3cm} if $w_{s_i}(k)\neq0$\\
\hspace*{0.45cm} ${\rm temp}=w_{s_i}(k)x_i(k)$\\
\hspace*{0.45cm} $y(k)=y(k)+{\rm temp}$\\
\hspace*{0.3cm} else\\
\hspace*{0.45cm} $y(k)=y(k)+({\rm temp}\times b_i(k))$\\
\hspace*{0.3cm} end\\
\hspace*{0.15cm} end\\
\hspace*{0.15cm} $e(k)=d(k)-y(k)$\\
\hspace*{0.15cm} $\wbf(k+1)=\wbf(k)+\mu e(k)\xbf(k)$\\
\hspace*{0.15cm} $\wbf_s(k+1)=\mathbb{F}_\epsilon(\wbf(k+1))$\\
\hspace*{0.15cm} $\bbf(k+1)=\fbf_\epsilon(\wbf(k+1))$\\
end \\
\\
\hline
\end {tabular}
\end{footnotesize}
\end{center}
\label{tb:LCF-LMS}
\end{table}
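The output-computation loop of Table~\ref{tb:LCF-LMS} translates directly into the following Python sketch (our transcription), which reuses the last product $w_{s_m}(k)x_m(k)$ for the zeros originating from subset (II):
\begin{verbatim}
def lcf_lms_output(w_s, b, x_buf):
    # Output computation of the LCF-LMS algorithm (Table tb:LCF-LMS)
    temp, y = 0.0, 0.0
    for i in range(len(w_s)):
        if w_s[i] != 0.0:
            temp = w_s[i] * x_buf[i]  # new reference product
            y += temp
        else:
            # b[i] = 1: subset (II), reuse temp; b[i] = 0: subset (I)
            y += temp * b[i]
    return y
\end{verbatim}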
As mentioned earlier, for proposing the LCF-LMS algorithm, we assumed that the input signal is highly correlated. This assumption restricts the use of the LCF-LMS algorithm. To avoid this assumption, instead of approximating $y(k)$ by~\eqref{eq:output_approx}, we can approximate $y(k)$ as
\begin{align}
\hat{y}(k)\triangleq w_m(k)(x_m(k)+x_{m+1}(k)+\cdots+x_{m+j}(k)). \label{eq:output_approx_modified}
\end{align}
In other words, when $w_m(k)$ represents a block of coefficients of length $j+1$, the LCF-LMS algorithm sums $j+1$ copies of $w_m(k)x_m(k)$; however, in Equation~\eqref{eq:output_approx_modified}, we multiply $w_m(k)$ by the sum of the input signal components corresponding to the coefficients represented by $w_m(k)$. Note that the numbers of required arithmetic operations in~\eqref{eq:output_approx_modified} and~\eqref{eq:output_approx} are identical; i.e., both equations implement one multiplication and $j$ additions. The algorithm using Equation~\eqref{eq:output_approx_modified} to calculate the output signal is called the improved LCF-LMS (I-LCF-LMS)\abbrev{I-LCF-LMS}{Improved LCF-LMS} algorithm, and its application is not limited to cases with correlated input signals. The I-LCF-LMS algorithm is presented in Table~\ref{tb:ILCF-LMS}.
\begin{table}[t!]
\caption{Improved low-complexity feature LMS algorithm}
\begin{center}
\begin{footnotesize}
\begin {tabular}{|l|} \hline\\ \hspace{0.8cm}{\bf I-LCF-LMS Algorithm}\\ \\
\hline\\
Initialization
\\
$\wbf_s(0)=\bbf(0)=\wbf(0)=[0~\cdots~0]^T$\\
choose $\mu$ in the range $0<\mu\ll 1$\\
choose small constant $\epsilon>0$\\
Do for $k\geq0$\\
\hspace*{0.15cm} ${\rm temp}_x=0$, ${\rm temp}_w=0$, $y(k)=0$\\
\hspace*{0.15cm} for $i=0$ to $N$\\
\hspace*{0.3cm} if $w_{s_i}(k)\neq0$\\
\hspace*{0.45cm} $y(k)=y(k)+({\rm temp}_w\times{\rm temp}_x)$\\
\hspace*{0.45cm} ${\rm temp}_w=w_{s_i}(k)$\\
\hspace*{0.45cm} ${\rm temp}_x=x_i(k)$\\
\hspace*{0.3cm} else\\
\hspace*{0.45cm} ${\rm temp}_x={\rm temp}_x+(x_i(k)\times b_i(k))$\\
\hspace*{0.3cm} end\\
\hspace*{0.15cm} end\\
\hspace*{0.15cm} $y(k)=y(k)+({\rm temp}_w\times{\rm temp}_x)$\\
\hspace*{0.15cm} $e(k)=d(k)-y(k)$\\
\hspace*{0.15cm} $\wbf(k+1)=\wbf(k)+\mu e(k)\xbf(k)$\\
\hspace*{0.15cm} $\wbf_s(k+1)=\mathbb{F}_\epsilon(\wbf(k+1))$\\
\hspace*{0.15cm} $\bbf(k+1)=\fbf_\epsilon(\wbf(k+1))$\\
end \\
\\
\hline
\end {tabular}
\end{footnotesize}
\end{center}
\label{tb:ILCF-LMS}
\end{table}
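Analogously, the output computation of Table~\ref{tb:ILCF-LMS} can be sketched as follows (our transcription); here the inputs of a block are accumulated and multiplied only once by the block's reference coefficient:
\begin{verbatim}
def ilcf_lms_output(w_s, b, x_buf):
    # Output computation of the I-LCF-LMS algorithm (Table tb:ILCF-LMS)
    temp_w, temp_x, y = 0.0, 0.0, 0.0
    for i in range(len(w_s)):
        if w_s[i] != 0.0:
            y += temp_w * temp_x       # close the previous block
            temp_w = w_s[i]            # new reference coefficient
            temp_x = x_buf[i]
        else:
            temp_x += x_buf[i] * b[i]  # accumulate subset (II) inputs
    return y + temp_w * temp_x         # close the last block
\end{verbatim}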
\section{Alternative LCF-LMS Algorithm} \label{sec:a-lcf-lms}
In the LCF-LMS\abbrev{LCF-LMS}{Low-Complexity Feature LMS} algorithm, when $\wbf(k)$ contains a long sequence of coefficients with almost similar absolute values, $\wbf_s(k)$ contains a long block of zeros. Therefore, when calculating the output signal, all parameters of this block are represented by the first element of the block. As a result, since we are using a fixed coefficient to represent many of them, the error may accumulate. In this section, we introduce the alternative LCF-LMS (ALCF-LMS)\abbrev{ALCF-LMS}{Alternative Low-Complexity Feature LMS} algorithm to address this problem.
To avoid the accumulated error caused by many adjacent zeros in $\wbf_s(k)$, for some natural number $p<N$, we can force the feature function to keep every $p$th coefficient of $\wbf(k)$ in $\wbf_s(k)$, provided the absolute value of the coefficient is greater than $\epsilon$. In other words, no parameter can represent a block of coefficients with more than $p$ elements. The only exception is the case when the parameters of the block have absolute values smaller than $\epsilon$ (i.e., they are really close to zero and, therefore, must be replaced by zero). Let us denote the new feature function by $\mathbb{F}^a_\epsilon:\mathbb{R}^{N+1}\rightarrow\mathbb{R}^{N+1}$ \symbl{$\mathbb{F}^a_\epsilon$}{Alternative feature function}; it is called the {\it alternative feature function}. The $i$th element of $\mathbb{F}^a_\epsilon$, for $i=0,1,\cdots,N$, is defined by
\begin{align}
\mathbb{F}^a_{\epsilon,i}(\wbf(k))\triangleq\left\{\begin{array}{ll}f_\epsilon(w_i(k))&{\rm if~mod}(i,p)=0,\\
f_\epsilon(w_i(k))&{\rm if~}|w_i(k)-w_{i-1}(k)|>\epsilon~\&~{\rm mod}(i,p)\neq0,\\
0&{\rm if~}|w_i(k)-w_{i-1}(k)|\leq\epsilon~\&~{\rm mod}(i,p)\neq0, \end{array}\right. \label{eq:a_trend_function}
\end{align}
where ${\rm mod}(i,p)$ stands for the remainder of $\frac{i}{p}$. Therefore, the ALCF-LMS\abbrev{ALCF-LMS}{Alternative Low-Complexity Feature LMS} algorithm is similar to the LCF-LMS\abbrev{LCF-LMS}{Low-Complexity Feature LMS} one in Table~\ref{tb:LCF-LMS}, but the feature function is replaced by the alternative feature function (i.e., $\wbf_s(k+1)=\mathbb{F}^a_\epsilon(\wbf(k+1))$).
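In code, the alternative feature function only adds the periodic condition ${\rm mod}(i,p)=0$ to the sketch of the feature function given earlier (again a hedged transcription of~\eqref{eq:a_trend_function}, reusing the \texttt{discard} function assumed above):
\begin{verbatim}
def alt_feature_function(w, eps, p):
    # i-th element of F^a_eps(w), following eq. (a_trend_function)
    ws = np.zeros_like(w)
    for i in range(len(w)):
        if i % p == 0 or abs(w[i] - w[i - 1]) > eps:
            ws[i] = discard(w[i], eps)  # kept (or zeroed if |w_i|<=eps)
        # else ws[i] stays zero: represented by an earlier coefficient
    return ws
\end{verbatim}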
By using the same argument, we can propose the alternative I-LCF-LMS (AI-LCF-LMS) \abbrev{AI-LCF-LMS}{Alternative I-LCF-LMS} algorithm. Indeed, if we replace the feature function in Table~\ref{tb:ILCF-LMS} with the alternative feature function, then we obtain the AI-LCF-LMS algorithm.
\section{Matrix Representation of the Feature Function} \label{sec:matrix-trend-function}
In this section, we show how to generate $\wbf_s(k)$ through matrix operations. Indeed, presenting $\wbf_s(k)$ through matrix operations is helpful for future mathematical analysis.
To generate $\wbf_s(k)$, we use quantization matrices $\Qbf_t(k)$, for $t=1,2,3$, and two feature matrices $\Fbf_1$ and $\Fbf_2(k)$, where all matrices belong to $\mathbb{R}^{(N+1)\times(N+1)}$. The matrices $\Fbf_1$ and $\Fbf_2(k)$ are responsible for exploiting the sparsity in the linear combination of the parameters and for reconstructing the weight vector after exploiting the sparsity, respectively. Therefore, to exploit the hidden sparsity in the parameters of $\wbf(k)$ and their linear combinations, we introduce $\wbf_s(k)$ as follows
\begin{align}
\wbf_s(k)\triangleq\Qbf_3(k)\Fbf_2(k)\Qbf_2(k)\Fbf_1\Qbf_1(k)\wbf(k). \label{eq:exploit_sparsity-chap7}
\end{align}
In the following, we describe the matrices and justify their actions. We define the quantization matrix $\Qbf_1(k)$ as the Jacobian matrix of $\fbf_\epsilon(\wbf(k))$. Therefore, $\Qbf_1(k)$ is a diagonal matrix whose entries are zero or one. For the coefficients of $\wbf(k)$ whose absolute values are less than $\epsilon$, the corresponding entries on the diagonal of $\Qbf_1(k)$ are zero; otherwise, they are one. Similarly, the matrices $\Qbf_2(k)$ and $\Qbf_3(k)$ are defined as the Jacobian matrices of $\fbf_\epsilon(\Fbf_1\Qbf_1(k)\wbf(k))$ and $\fbf_\epsilon(\Fbf_2(k)\Qbf_2(k)\Fbf_1\Qbf_1(k)\wbf(k))$, respectively. Thus, $\Qbf_2(k)$ is a diagonal matrix with zeros and ones on its diagonal: its entries are zero (one) for the corresponding elements of $\Fbf_1\Qbf_1(k)\wbf(k)$ with absolute value lower (greater) than $\epsilon$. Also, $\Qbf_3(k)$ is a diagonal matrix similar to $\Qbf_2(k)$; however, it is derived from the vector $\Fbf_2(k)\Qbf_2(k)\Fbf_1\Qbf_1(k)\wbf(k)$. The diagonal entries of $\Qbf_3(k)$ are one for the corresponding elements of $\Fbf_2(k)\Qbf_2(k)\Fbf_1\Qbf_1(k)\wbf(k)$ with absolute value greater than $\epsilon$, and zero for the others.
The feature matrix $\Fbf_1$ computes the differences between consecutive coefficients of the vector $\Qbf_1(k)\wbf(k)$. In fact, it keeps the first parameter unchanged and replaces each of the other coefficients with its difference from the previous one. Thus, it can be represented as
\begin{align}
\Fbf_1\triangleq\left[\begin{array}{cccccc}1&0&0&0&\cdots&0\\-1&1&0&0&\cdots&0\\0&-1&1&0&\cdots&0\\\vdots&0&\ddots&\ddots&0&\vdots\\0&\cdots&0&-1&1&0\\0&0&\cdots&0&-1&1\end{array}\right]. \label{eq:F_matrix}
\end{align}
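Since $\Fbf_1$ is simply a first-difference matrix that preserves the first coefficient, it can be built in one line (NumPy sketch, ours):
\begin{verbatim}
def build_F1(n):
    # F_1 of eq. (F_matrix): identity with -1 on the first subdiagonal
    return np.eye(n) - np.diag(np.ones(n - 1), k=-1)
\end{verbatim}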
The function of the feature matrix $\Fbf_2(k)$ is to reconstruct the weight vector from the vector $\rbf(k)\triangleq\Qbf_2(k)\Fbf_1\Qbf_1(k)\wbf(k)$. The structure of $\Fbf_2(k)$ is a little complicated. In the following steps, we explain how to construct $\Fbf_2(k)$:
\begin{enumerate}
\item Assume that the first nonzero element of $\rbf(k)$ is $r_{i_1}(k)$, thus all rows of $\Fbf_2(k)$ before the $i_1$th row are zero vectors.
\item For $i_1$th row, the element corresponding to the $r_{i_1}(k)$ is one, and other entries of this row are zero.
\item If the next element of $\rbf(k)$ is nonzero, then the next row of $\Fbf_2(k)$ contains one more nonzero entry equal to one, corresponding to these nonzero coefficients of $\rbf(k)$. We repeat this step until a zero element appears in $\rbf(k)$.
\item As soon as a zero element appears in $\rbf(k)$, we look for the next nonzero element, and assume that it is $r_{i_2}(k)$. Then the next row of $\Fbf_2(k)$ is similar to the previous row, but the element corresponding to $r_{i_2}(k)$ must be equal to one.
\item Suppose that the first nonzero element of $\rbf(k)$ after $r_{i_2}(k)$ is $r_{i_3}(k)$. Then the next rows of $\Fbf_2(k)$, up to the $(i_3-1)$th row, are identical to the last constructed row. Note that if no such nonzero element $r_{i_3}(k)$ exists, the remaining rows of $\Fbf_2(k)$ are identical to the last constructed row.
\item The $i_3$th row of $\Fbf_2(k)$ contains only one nonzero element equal to one, placed in column $i_3$. This row is similar to the $i_1$th row (step 2); however, the position of the one is different. Now, we go back to step 3 and repeat the same process to construct the next rows of $\Fbf_2(k)$.
\end{enumerate}
In Equation~\eqref{eq:exploit_sparsity-chap7}, the matrix $\Qbf_1(k)$ replaces the coefficients of $\wbf(k)$ whose absolute values are lower than $\epsilon$ with zero. Then matrix $\Fbf_1$ keeps the first coefficient unchanged and subtracts from each of the other components its previous component. Hence, for the resulting vector, the matrix $\Qbf_2(k)$ changes the elements with an absolute value lower than $\epsilon$ to zero. Afterwards, the matrix $\Fbf_2(k)$ reconstructs the weight vector and, in some sense, inverts the effect of $\Fbf_1$. Finally, for the resulting vector, the matrix $\Qbf_3(k)$ replaces the coefficients inside $[-\epsilon,\epsilon]$ with zero. The final result is identical to $\mathbb{F}_\epsilon(\wbf(k))$.
To clarify the process above, we describe the details for $\wbf(k)=[0~0.5~0.51~0.01~0.6~0.7~0.8~0.81~0~-0.01]^T$, as an example, when $\epsilon=0.02$. $\Qbf_1(k)$ is a diagonal matrix, where its diagonal is
$[0~1~1~0~1~1~1~1~0~0]^T$. Therefore, $\Qbf_1(k)\wbf(k)=[0~0.5~0.51~0~0.6~0.7~0.8~0.81~0~0]^T$. Then $\Fbf_1\Qbf_1(k)\wbf(k)=[0~0.5~0.01~-0.51~0.6~0.1~0.1~0.01~-0.81~0]^T$. The diagonal of $\Qbf_2(k)$ is $[0~1~0~1~1~1~1~0~1~0]^T$, and $\Qbf_2(k)\Fbf_1\Qbf_1(k)\wbf(k)=[0~0.5~0~-0.51~0.6~0.1~0.1~0~-0.81~0]^T$. Following the procedure explained to construct $\Fbf_2(k)$, we obtain the matrix $\Fbf_2(k)$ as follows
\begin{align}
\Fbf_2(k)=\left[\begin{array}{cccccccccc}0&0&0&0&0&0&0&0&0&0\\
0&1&0&0&0&0&0&0&0&0\\
0&1&0&1&0&0&0&0&0&0\\
0&1&0&1&0&0&0&0&0&0\\
0&0&0&0&1&0&0&0&0&0\\
0&0&0&0&1&1&0&0&0&0\\
0&0&0&0&1&1&1&0&0&0\\
0&0&0&0&1&1&1&0&1&0\\
0&0&0&0&1&1&1&0&1&0\\
0&0&0&0&1&1&1&0&1&0\end{array}\right].
\end{align}
Then $\Fbf_2(k)\Qbf_2(k)\Fbf_1\Qbf_1(k)\wbf(k)=[0~0.5~-0.01~-0.01~0.6~0.7~0.8~-0.01~-0.01~-0.01]^T$. The diagonal of $\Qbf_3(k)$ is $[0~1~0~0~1~1~1~0~0~0]^T$. Hence, $\wbf_s(k)=\Qbf_3(k)\Fbf_2(k)\Qbf_2(k)\Fbf_1\Qbf_1(k)\wbf(k)=[0~0.5~0~0~0.6~0.7~0.8~0~0~0]^T$. Also, if we use the feature function with $\epsilon=0.02$, then we obtain $\wbf_s(k)=\mathbb{F}_\epsilon(\wbf(k))=[0~0.5~0~0~0.6~0.7~0.8~0~0~0]^T$.
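The worked example above doubles as a quick sanity check for the sketches introduced earlier in this chapter (assuming the \texttt{discard\_vector}, \texttt{build\_F1}, and \texttt{feature\_function} definitions given in the previous sections):
\begin{verbatim}
w = np.array([0, .5, .51, .01, .6, .7, .8, .81, 0, -.01])
Q1 = np.diag(discard_vector(w, 0.02))
F1 = build_F1(len(w))
print(F1 @ Q1 @ w)
# [0, 0.5, 0.01, -0.51, 0.6, 0.1, 0.1, 0.01, -0.81, 0]
print(feature_function(w, 0.02))
# [0, 0.5, 0, 0, 0.6, 0.7, 0.8, 0, 0, 0] = w_s(k)
\end{verbatim}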
\section{Simulations} \label{sec:simulations-chap7}
In this section, we apply the LMS, the F-LMS, the LCF-LMS, and the ALCF-LMS algorithms to system identification problems. In scenario 1, we utilize the LMS and the F-LMS algorithms. Then, in scenario 2, we use the LMS, the LCF-LMS, and the ALCF-LMS algorithms.
In both scenarios, the order of all the unknown systems is 39, i.e., they have 40 coefficients. The signal-to-noise ratio (SNR)\abbrev{SNR}{Signal-to-Noise Ratio} is chosen as 20 dB. For all algorithms, the initial vector is $\wbf(0) = [0~\cdots~0]^T$, and the MSE\abbrev{MSE}{Mean-Squared Error} learning curves are computed by averaging the outcomes of 200 independent trials.
\subsection{Scenario 1}
In this scenario, we apply the LMS\abbrev{LMS}{Least-Mean-Square} and the F-LMS\abbrev{F-LMS}{Feature LMS} algorithms to identify some unknown lowpass and highpass systems. The first example considers predominantly lowpass and highpass systems defined as $\wbf_{o,l}= [0.4,\cdots,0.4]^T$ and $\wbf_{o,h}= [0.4,-0.4,0.4,\cdots,-0.4]^T$, respectively. The second example uses the interpolated models $\wbf_{o,l}'=[0.4,0,0.4,\cdots,0,0.4,0]^T$ and
$\wbf_{o,h}'=[0.4,0,-0.4,0,0.4,\cdots,0]^T$. The third example uses block-sparse lowpass and block-sparse highpass models, $\wbf_{o,l}''$ and $\wbf_{o,h}''$, whose entries are defined in~\eqref{eq:second_wo_lowpass-chap7}
and~\eqref{eq:second_wo_highpass-chap7}, respectively.
\begin{align}
w_{o,l_i}''&=\left\{\begin{array}{ll}0 & {\rm if~}0 \leq i \leq 9,\\
0.05(i-9) & {\rm if~} 10\leq i \leq 14,\\
0.3 & {\rm if~} 15 \leq i \leq 24,\\
0.3-0.05(i-24) & {\rm if~} 25 \leq i \leq 29,\\
0 & {\rm if~} 30 \leq i \leq 39,\end{array}\right. \label{eq:second_wo_lowpass-chap7}\\
w_{o,h_i}''&=(-1)^{i+1}w_{o,l_i}''. \label{eq:second_wo_highpass-chap7}
\end{align}
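For reproducibility, the block-sparse models in~\eqref{eq:second_wo_lowpass-chap7} and~\eqref{eq:second_wo_highpass-chap7} can be generated as follows (our sketch):
\begin{verbatim}
i = np.arange(40)
w_ol2 = np.select([i <= 9, i <= 14, i <= 24, i <= 29],
                  [0.0, 0.05 * (i - 9), 0.3,
                   0.3 - 0.05 * (i - 24)], 0.0)
w_oh2 = (-1.0) ** (i + 1) * w_ol2
\end{verbatim}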
The input signal is a zero-mean white Gaussian noise with unit variance. The value of $\alpha$ for the F-LMS algorithm is chosen as 0.05. The values of the step size $\mu$ are given later for each simulated scenario. The MSE\abbrev{MSE}{Mean-Squared Error} learning curves of the LMS\abbrev{LMS}{Least-Mean-Square} and the F-LMS\abbrev{F-LMS}{Feature LMS} algorithms are depicted in Figures~\ref{fig:LP-chap7} to~\ref{fig:Block-chap7}.
\begin{figure}[t!]
\centering
\subfigure[b][]{\includegraphics[width=.48\linewidth,height=7cm]{Figs/LP_same_mu.pdf}
\label{fig:LP_same_mu-chap7}}
\subfigure[b][]{\includegraphics[width=.48\linewidth,height=7cm]{Figs/LP_diff_mu.pdf}
\label{fig:LP_diff_mu-chap7}}
\caption{MSE learning curves of the LMS and F-LMS algorithms considering $\wbf_{o,l}$:
(a) both algorithms with the same step size: $\mu = 0.03$; (b) LMS and F-LMS with step sizes equal to 0.01 and 0.03, respectively. \label{fig:LP-chap7}}
\end{figure}
\begin{figure}[t!]
\centering
\subfigure[b][]{\includegraphics[width=.48\linewidth,height=7cm]{Figs/HP_same_mu.pdf}
\label{fig:HP_same_mu-chap7}}
\subfigure[b][]{\includegraphics[width=.48\linewidth,height=7cm]{Figs/HP_diff_mu.pdf}
\label{fig:HP_diff_mu-chap7}}
\caption{MSE learning curves of the LMS and F-LMS algorithms considering $\wbf_{o,h}$:
(a) both algorithms with the same step size: $\mu = 0.03$; (b) LMS and F-LMS with step sizes equal to 0.01 and 0.03, respectively. \label{fig:HP-chap7}}
\end{figure}
Figure~\ref{fig:LP-chap7} depicts the MSE\abbrev{MSE}{Mean-Squared Error} learning curves of the LMS\abbrev{LMS}{Least-Mean-Square} and the F-LMS\abbrev{F-LMS}{Feature LMS} algorithms considering the lowpass system $\wbf_{o,l}$. In Figure~\ref{fig:LP_same_mu-chap7}, both algorithms use the same step size $\mu = 0.03$ so that they exhibit similar convergence speeds. In this figure, we can observe that the F-LMS\abbrev{F-LMS}{Feature LMS} algorithm achieved a steady-state MSE\abbrev{MSE}{Mean-Squared Error} more than $3$~dB lower than that of the LMS\abbrev{LMS}{Least-Mean-Square} algorithm. In Figure~\ref{fig:LP_diff_mu-chap7}, the steady-state MSEs\abbrev{MSE}{Mean-Squared Error} of the algorithms are matched in order to compare their convergence speeds.
Thus, we set the step sizes of the LMS\abbrev{LMS}{Least-Mean-Square} and the F-LMS\abbrev{F-LMS}{Feature LMS} algorithms as 0.01 and 0.03, respectively. We can observe, in this figure, that the F-LMS\abbrev{F-LMS}{Feature LMS} algorithm converged much faster than the LMS\abbrev{LMS}{Least-Mean-Square} algorithm.
In Figure~\ref{fig:HP-chap7}, we present results equivalent to the ones presented in Figure~\ref{fig:LP-chap7}, but considering the highpass system $\wbf_{o,h}$. Once again, when the step sizes of both algorithms are the same ($\mu = 0.03$), see Figure~\ref{fig:HP_same_mu-chap7}, the F-LMS\abbrev{F-LMS}{Feature LMS} algorithm achieved a lower steady-state MSE,\abbrev{MSE}{Mean-Squared Error} whereas the F-LMS\abbrev{F-LMS}{Feature LMS} algorithm (with $\mu = 0.03$) converged much faster than the LMS\abbrev{LMS}{Least-Mean-Square} algorithm (with $\mu = 0.01$) when their steady-state MSEs\abbrev{MSE}{Mean-Squared Error} are matched, as illustrated in Figure~\ref{fig:HP_diff_mu-chap7}.
\begin{figure}[t!]
\centering
\subfigure[b][]{\includegraphics[width=.48\linewidth,height=7cm]{Figs/LP_int.pdf}
\label{fig:LP_int-chap7}}
\subfigure[b][]{\includegraphics[width=.48\linewidth,height=7cm]{Figs/HP_int.pdf}
\label{fig:HP_int-chap7}}
\caption{MSE learning curves of the LMS and F-LMS algorithms, both with step size $\mu = 0.03$, considering
the unknown systems: (a) $\wbf_{o,l}'$ and (b) $\wbf_{o,h}'$. \label{fig:int-chap7}}
\end{figure}
\begin{figure}[t!]
\centering
\subfigure[b][]{\includegraphics[width=.48\linewidth,height=7cm]{Figs/LP_Block.pdf}
\label{fig:LP_Block-chap7}}
\subfigure[b][]{\includegraphics[width=.48\linewidth,height=7cm]{Figs/HP_Block.pdf}
\label{fig:HP_Block-chap7}}
\caption{MSE learning curves of the LMS and F-LMS algorithms, both with step size $\mu = 0.03$, considering
the unknown systems: (a) $\wbf_{o,l}''$ and (b) $\wbf_{o,h}''$. \label{fig:Block-chap7}}
\end{figure}
Figures~\ref{fig:LP_int-chap7} and~\ref{fig:HP_int-chap7} depict the MSE\abbrev{MSE}{Mean-Squared Error} learning curves of the LMS\abbrev{LMS}{Least-Mean-Square} and the F-LMS\abbrev{F-LMS}{Feature LMS} algorithms, both using $\mu = 0.03$, considering the interpolated systems $\wbf_{o,l}'$ and $\wbf_{o,h}'$, respectively. Notice, in both figures, that the F-LMS\abbrev{F-LMS}{Feature LMS} algorithm achieved lower steady-state MSE,\abbrev{MSE}{Mean-Squared Error} thus outperforming the LMS\abbrev{LMS}{Least-Mean-Square} algorithm.
Figures~\ref{fig:LP_Block-chap7} and~\ref{fig:HP_Block-chap7} depict the MSE\abbrev{MSE}{Mean-Squared Error} learning curves of the LMS\abbrev{LMS}{Least-Mean-Square} and the F-LMS\abbrev{F-LMS}{Feature LMS} algorithms, both using $\mu = 0.03$, considering the block-sparse systems $\wbf_{o,l}''$ and $\wbf_{o,h}''$, respectively. In both cases, the F-LMS\abbrev{F-LMS}{Feature LMS} algorithm achieved lower steady-state MSE,\abbrev{MSE}{Mean-Squared Error} thus outperforming the LMS\abbrev{LMS}{Least-Mean-Square} algorithm.
\subsection{Scenario 2}
In this scenario, we apply the LMS, the LCF-LMS, the ALCF-LMS, the I-LCF-LMS, and the AI-LCF-LMS algorithms to identify two unknown systems. The first unknown system is the predominantly lowpass system $\wbf_{o,l}$. The second unknown model is a block-sparse model, $\wbf_{o,l}'''$, defined as follows
\begin{align}
w_{o,l_i}'''&=\left\{\begin{array}{ll}0 & {\rm if~}0 \leq i \leq 9,\\
0.04+0.01(i-9) & {\rm if~} 10\leq i \leq 17,\\
0.5 & {\rm if~} 18 \leq i \leq 21,\\
0.13-0.01(i-21) & {\rm if~} 22 \leq i \leq 29,\\
0 & {\rm if~} 30 \leq i \leq 39.\end{array}\right. \label{eq:second_lowpass_for_LCF-LMS-chap7}
\end{align}
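Likewise, the model in~\eqref{eq:second_lowpass_for_LCF-LMS-chap7} can be generated as (our sketch):
\begin{verbatim}
i = np.arange(40)
w_ol3 = np.select([i <= 9, i <= 17, i <= 21, i <= 29],
                  [0.0, 0.04 + 0.01 * (i - 9), 0.5,
                   0.13 - 0.01 * (i - 21)], 0.0)
\end{verbatim}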
In the case of the LCF-LMS and the ALCF-LMS algorithms, the input signal is an autoregressive signal generated by $x(k)=0.99x(k-1)+n(k-1)$. However, we do not have any restrictions on the input signal when utilizing the I-LCF-LMS and the AI-LCF-LMS algorithms. Thus, we use a zero-mean white Gaussian noise with unit variance as the input signal when implementing the I-LCF-LMS and the AI-LCF-LMS algorithms. The step size $\mu$ for all the algorithms is 0.003. Also, we adopt $\epsilon$ equal to 0.02.
Figures~\ref{fig:ALCF_LMS-chap7} and~\ref{fig:ALCF_LMS_Block-chap7} show the MSE learning curves of the LMS, the LCF-LMS, and the ALCF-LMS algorithms. Furthermore, the MSE learning curves of the LMS, the I-LCF-LMS, and the AI-LCF-LMS algorithms are illustrated in Figures~\ref{fig:Improved_Lowpass-chap7} and~\ref{fig:Improved_Block_Lowpass-chap7}.
\begin{figure}[t!]
\centering
\subfigure[b][]{\includegraphics[width=.48\linewidth,height=7cm]{Figs/ALCF_LMS.pdf}
\label{fig:ALCF_LMS-chap7}}
\subfigure[b][]{\includegraphics[width=.48\linewidth,height=7cm]{Figs/ALCF_LMS_Block.pdf}
\label{fig:ALCF_LMS_Block-chap7}}
\caption{MSE learning curves of the LMS, the LCF-LMS, and the ALCF-LMS algorithms considering
the unknown systems: (a) $\wbf_{o,l}$ and (b) $\wbf_{o,l}'''$. \label{fig:LCF_LMS-chap7}}
\end{figure}
Figure~\ref{fig:ALCF_LMS-chap7} shows the learning curves of the mentioned algorithms when they are applied to identify the predominantly lowpass unknown system $\wbf_{o,l}$. We can observe that the LCF-LMS algorithm (blue curve) has a high MSE, but it has the lowest computational complexity; in the steady-state environment, it implements only one multiplication to calculate the error signal. The LMS algorithm (black curve), on the other hand, requires forty multiplications to compute the error signal and has the highest computational burden. The ALCF-LMS algorithms have acceptable performance and, using $p=3$ and 7, they need thirteen and six multiplications to calculate the error signal, respectively.
Figure~\ref{fig:ALCF_LMS_Block-chap7} depicts the learning curves of the algorithms when they are applied to identify the block-sparse lowpass unknown model $\wbf_{o,l}'''$. As can be seen, the LCF-LMS algorithm (blue curve) has the highest MSE, but it executes only three multiplications to compute the error signal. The red curve illustrates the remarkable performance of the ALCF-LMS algorithm. Indeed, its learning curve is extremely close to that of the LMS algorithm; however, in the steady-state environment, it implements only six multiplications to calculate the error signal.
\begin{figure}[t!]
\centering
\subfigure[b][]{\includegraphics[width=.48\linewidth,height=7cm]{Figs/Improved_Lowpass.pdf}
\label{fig:Improved_Lowpass-chap7}}
\subfigure[b][]{\includegraphics[width=.48\linewidth,height=7cm]{Figs/Improved_Block_Lowpass.pdf}
\label{fig:Improved_Block_Lowpass-chap7}}
\caption{MSE learning curves of the LMS, the I-LCF-LMS, and the AI-LCF-LMS algorithms considering
the unknown systems: (a) $\wbf_{o,l}$ and (b) $\wbf_{o,l}'''$. \label{fig:Improved_LCF_LMS-chap7}}
\end{figure}
Figure~\ref{fig:Improved_Lowpass-chap7} illustrates the learning curves of the LMS, the I-LCF-LMS, and the AI-LCF-LMS algorithms when they are utilized in the identification of the predominantly lowpass unknown system $\wbf_{o,l}$. The three algorithms have the same convergence rate; however, the LMS algorithm has the best MSE, followed by the AI-LCF-LMS and the I-LCF-LMS algorithms. As can be seen, the advantage of the LMS algorithm over the other two in terms of MSE is not remarkable, but the LMS algorithm has a higher computational load. In the steady-state environment, for the calculation of the error signal, the LMS algorithm implements 40 multiplications, whereas the I-LCF-LMS and the AI-LCF-LMS algorithms execute one and eight multiplications, respectively.
The MSE learning curves of the LMS, the I-LCF-LMS, and the AI-LCF-LMS algorithms, when they are applied to identify the block-sparse unknown system $\wbf_{o,l}'''$, are presented in Figure~\ref{fig:Improved_Block_Lowpass-chap7}. The curves shown in this figure indicate that the LMS algorithm has the best misadjustment, followed by the AI-LCF-LMS and the I-LCF-LMS algorithms. Moreover, we can observe that the three algorithms have similar convergence speeds. We must note that the computational complexity of the LMS algorithm is higher than that of the I-LCF-LMS and of the AI-LCF-LMS algorithms. In other words, to compute the error signal in the steady-state environment, the LMS algorithm requires 40 multiplications, whereas the I-LCF-LMS and the AI-LCF-LMS algorithms need three and six multiplications, respectively.
As can be seen, in Scenario 1, the learning curves of the F-LMS algorithm are lower than those of the LMS algorithm. However, in Scenario 2, the learning curves of the LCF-LMS, the ALCF-LMS, the I-LCF-LMS, and the AI-LCF-LMS algorithms are higher than those of the LMS algorithm. It is worthwhile to mention that the computational complexity of the F-LMS algorithm is higher than that of the LMS algorithm, whereas the LCF-LMS, the ALCF-LMS, the I-LCF-LMS, and the AI-LCF-LMS algorithms require fewer computational resources than the LMS algorithm. Therefore, the higher MSE of the low-complexity F-LMS algorithms is compensated by their lower computational complexity.
\section{Conclusions} \label{sec:conclusions-chap7}
In this chapter, we have proposed a family of algorithms called Feature LMS (F-LMS)\abbrev{F-LMS}{Feature LMS}. The F-LMS\abbrev{F-LMS}{Feature LMS} algorithms are capable of exploiting specific features of the unknown system to be identified in order to accelerate convergence speed and/or reduce steady-state MSE,\abbrev{MSE}{Mean-Squared Error} obtaining a more accurate estimate. The main idea is to apply a sparsity-promoting function to a linear combination of the parameters, in which this linear combination should reveal the sparsity hidden in the parameters, i.e., the linear combination exploits the specific structure/feature in order to generate a sparse vector. Some examples of the F-LMS\abbrev{F-LMS}{Feature LMS} algorithms having low computational complexity and exploiting the lowpass and highpass characteristics of unknown systems were introduced. Simulation results confirmed the superior performance of the F-LMS\abbrev{F-LMS}{Feature LMS} algorithm in comparison with the LMS\abbrev{LMS}{Least-Mean-Square} algorithm.
Furthermore, we have introduced the low-complexity F-LMS (LCF-LMS) and the alternative LCF-LMS (ALCF-LMS) algorithms in order to exploit hidden sparsity in the parameters with low computational cost. For this purpose, we have defined the feature function. The proposed algorithms have a lower computational burden than the LMS algorithm, yet competitive performance. Also, we have introduced the improved versions of the LCF-LMS and the ALCF-LMS algorithms. Numerical results showed the competitive performance of the AI-LCF-LMS algorithm while requiring fewer multiplications to compute the error signal.
In future works, we intend to investigate other choices for the sparsity-promoting penalty function and the feature matrix. Also, we want to analyze the stability and MSE\abbrev{MSE}{Mean-Squared Error} of the F-LMS\abbrev{F-LMS}{Feature LMS} and the LCF-LMS algorithms.
\chapter{Conclusions and Future Works}
In this thesis, we have investigated a number of data-selective adaptive filtering algorithms. It is generally accepted that data selection is an effective strategy to reduce the computational resources of the adaptive algorithms. To benefit from data selection in adaptive filtering algorithms, we have utilized the set-membership filtering (SMF) approach.
In set-membership (SM) adaptive filtering algorithms, the inclusion of {\it a priori} information, such as the noise bound, into the objective function leads to some noticeable advantages. The SM adaptive algorithms evaluate, choose, and process data at each iteration of their learning process. These algorithms have the potential to outperform the conventional adaptive filtering algorithms. Indeed, they retain the advantages of their traditional counterparts; however, they are more accurate, more robust against noise, and have lower computational load.
Moreover, we have incorporated some sparsity-aware techniques into the SM adaptive algorithms. Thus, we introduced some sparsity-aware set-membership adaptive filtering algorithms. In order to exploit the sparsity in system models, we utilized the $l_0$ norm approximation, the discard function, and the feature matrices. The $l_0$ norm approximation and the discard function exploit the sparsity in coefficients close to zero, whereas the feature matrices exploit the sparsity in linear combinations of the parameters.
\section{Contributions} \label{sec:contributions}
The thesis started by reviewing the classical adaptive filtering algorithms. Also, we have introduced the SM normalized least-mean-square (SM-NLMS) and the SM affine projection (SM-AP) algorithms briefly. Then we have analyzed the robustness (in the sense of $l_2$ stability) of the SM-NLMS and the SM-AP algorithms. One of the major drawbacks of adopting the conventional algorithms is that one cannot guarantee the convergence of the algorithm independent of the choice of the parameters. However, when the additional noise is bounded, we have proved that the SM algorithms never diverge.
Moreover, the SMF approach has been generalized to trinion and quaternion numbers. Whenever the problem at hand suits both the quaternion and trinion solutions, the trinion algorithms clearly have an advantage over the quaternion ones in terms of computational burden. Furthermore, we have derived a new set-membership partial-update affine projection algorithm. This algorithm can improve the convergence rate significantly, particularly in a nonstationary environment.
In addition, some data-selective adaptive filtering algorithms have been proposed in order to exploit sparsity in systems with low computational cost. The key idea is to apply the discard function and the $l_0$ norm approximation. In particular, the use of discard function can effectively decrease the computational complexity. Finally, we have derived some feature least-mean-square (F-LMS) algorithms to exploit hidden sparsity in models when adjacent coefficients have a strong relation. To this end, the feature matrices and the feature function play fundamental roles.
\section{Future Works} \label{sec:future}
In this section, we list our future works. Indeed, research into studying and analyzing the F-LMS and the low-complexity F-LMS (LCF-LMS) algorithms is already in progress. We are investigating some mathematical properties, such as the stability and MSE, of the F-LMS and the LCF-LMS algorithms. Also, we are currently investigating other choices for the sparsity-promoting penalty function and the feature matrix.
A possible research topic is to employ distinct feature matrices on an online basis, aiming at verifying the best one for a given iteration. It is also possible to derive a multitude of feature matrices inspired by previous knowledge of the spectral content of the unknown system model.
Another future work will concentrate on proposing some set-membership quaternion-valued adaptive filtering algorithms to exploit sparsity in system models. Also, further works need to be performed in order to analyze the performance of the proposed trinion- and quaternion-valued and partial-update adaptive algorithms.
\chapter*{Acknowledgments}
I would like to express my sincere gratitude to my advisor, Professor Paulo S. R. Diniz, for the continuous support, guidance, patience, motivation, and immense knowledge. Specially, I would like to thank him for his generous support and patience during my illness that lasted for about one year. Also, his extreme competence and friendly comprehension inspire me to be a better professional and friend. In fact, he is a remarkable example of a Brazilian. I could not have imagined having a better advisor for my Ph.D. study.
Also, I would like to thank Professor Markus V. S. Lima, my other advisor. He helped me for all details of my thesis. In fact, I am grateful for having his guidance during my study. He was always keen to know what I was doing and how I was proceeding. He always inspired me to be a serious and diligent researcher. I thank him for being not only my advisor, but also a friend.
Besides my advisors, I would like to thank my thesis committee: Prof. Marcello L. R. de Campos, Prof. José A. Apolinário Jr., Prof. Mário S. Filho, and Prof. Cássio G. Lopes for their encouragement, insightful comments, and suggestions. My thesis benefited from their valuable comments. Moreover, I would like to express my sincere gratitude to Prof. José A. Apolinário Jr. for his invaluable comments on Chapter 7 of the text.
My sincere thanks also go to Prof. Sergio L. Netto and Prof. Eduardo A. B. da Silva for offering me a research project in their group. I have learned a lot from them during the project.
I would like to thank the professors of the Programa de Engenharia Elétrica (PEE) who have contributed to my education. In particular, I am grateful to Prof. Wallace A. Martins for the courses he taught.
Also, I would like to thank the staff of the SMT Lab. I am particularly grateful to Michelle Nogueira for her support and assistance during my Ph.D. study. Moreover, I thank the university staff, in particular, Daniele C. O. da Silva and Mauricio de Carvalho Machado for their help.
My sincere thanks also go to Camila Gussen and all friends of the SMT Lab. They make the SMT Lab a pleasant and collaborative workplace. Also, I would like to thank Prof. Tadeu Ferreira for his special attention and help.
A very special gratitude goes out to Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES), Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq), and Fundação de Amparo à Pesquisa do Estado do Rio de Janeiro (FAPERJ) for the financial support.
I am really grateful to my lovely girlfriend, Ana Clara, and her family for all their love, patience, and help. Her love motivates me to continue my studies in Brazil and to choose this beautiful country as my home. Her continuous encouragement, unfailing emotional support, and permanent attention played fundamental roles throughout my years of study.
I am deeply grateful to my parents for giving birth to me in the first place and supporting me spiritually throughout my life. I can never pay them back the sacrifices they made for me. My father, Mohammad Yazdanpanah, and my mother, Mina Alizadeh, have provided me with moral and emotional support during my education. Finally, I must express my very profound gratitude to my brother and my sister for providing me with support and continuous encouragement through the process of researching and writing this thesis. This accomplishment would not have been possible without my family. Thank you.
\section{Introduction}
\input{introduction}
\section{The saturated IM without signal injection}\label{sec:nohf}
\subsection{A general model of the saturated IM}\label{sec:model}
\input{model}
\subsection{Sensorless first-order observability of the saturated IM}\label{sec:obsnohf}
\input{obs_nohf}
\section{Analysis of HF signal injection}\label{sec:hf}
\subsection{Virtual measurements and signal injection}\label{sec:effects}
\input{effects}
\input{demod}
\subsection{The magnetic saliency matrix}\label{sec:xp}
\input{results}
\section{Observability with signal injection}\label{sec:obshf}
\input{obs_hf}
\section{Conclusion}
\input{conclusion}
\bibliographystyle{phmIEEEtran}
\section{Introduction}
\label{intro}
Proteins, polymers formed by different kinds of amino acids, fold to
form a specific tridimensional shape. This geometric pattern defines
the majority of functionality within an organism, \emph{i.e.}, the
macroscopic properties, function, and behavior of a given protein.
For instance, the hemoglobin is able to carry oxygen to the blood
stream due to its 3D geometric pattern. However, contrary to the
mapping from DNA to the amino acids sequence, the complex folding of
this last sequence still remains not well-understood. Moreover, the
determination of 3D protein structure from the amino acid linear
sequence, that is to say, the exact computational search for the
optimal conformation of a molecule, is completely unfeasible. It is
due to the astronomically large number of possible 3D protein
structures for a corresponding primary sequence of amino acids
\cite{Hoque09}: the computation capability required even for handling
a moderately-sized folding transition exceeds drastically the
computational capacity around the world. Additionally, the forces
involved in the stability of the protein conformation are currently
not modeled with \linebreak enough accuracy \cite{Hoque09}, and one
can even wonder if one day a fully accurate model can be found.
Then it is impossible to compute exactly the 3D structures of the
proteins. Indeed, the Protein Structure Prediction (PSP) problem is
NP-complete \cite{Crescenzi98}. This is why the 3D conformations of
proteins are \emph{predicted}: the most stable energy-free states are
looked for by using computational intelligence tools like genetic
algorithms \cite{DBLP:conf/cec/HiggsSHS10}, ant colonies
\cite{Shmygelska2005Feb}, particle swarm
\cite{DBLP:conf/cec/Perez-HernandezRG10}, memetic algorithms
\cite{Islam:2009:NMA:1695134.1695181}, or neural networks
\cite{Dubchak1995}. This search is justified by Anfinsen's
``Thermodynamic Hypothesis'', claiming that a protein's native
structure is at its lowest free energy minimum
\cite{Anfinsen20071973}. The use of computational intelligence tools
coupled with protein energy approximation models (like AMBER, DISCOVER, or ECEPP/3) comes from the fact that finding the exact
minimum energy of a 3D structure of a protein is a very time consuming
task. Furthermore, in order to tackle the complexity of the PSP
problem, authors that try to predict the protein folding process use
models of various resolutions. In low resolution models, atoms in the
same amino acid can for instance be considered as the same entity.
These low resolution models are used as the first stage of the 3D
structure prediction: the backbone of the 3D conformation is
determined. Then, high resolution models come next for further
exploration. Such a prediction strategy is commonly used in PSP
softwares like ROSETTA \cite{Bonneau01,Chivian2005} or TASSER
\cite{Zhang2005}.
In this paper, which is an extension of \cite{bgc11:ip}, we
mathematically demonstrate that a particular dynamical system, used in
low resolution models to predict the backbone of the protein, is
chaotic according to the Devaney's formulation. Chaos in protein
folding has been already investigated in the past years. For
instance, in \cite{Bohm1991375}, the Lyapunov exponent of a folding
process has been experimentally computed, to show that protein folding
is highly complex. More precisely, the author has established that
the folding process of crambin, a small plant seed protein from \emph{Crambe Abyssinica} constituted by 46~amino acids,
has a positive Lyapunov exponent. In \cite{Zhou96}, an analysis of
molecular dynamics simulation of a model $\alpha$-helix indicates that
the motion of the helix system is chaotic, \emph{i.e.}, has nonzero
Lyapunov exponents, broad-band power spectra, and strange attractors.
Finally, in \cite{Braxenthaler97}, the authors investigated the
response of a protein fragment in an explicit solvent environment to
very small perturbations of the atomic positions, showing that very
tiny changes in initial conditions are amplified exponentially and
lead to vastly different, inherently unpredictable behavior. These
papers have studied experimentally the dynamics of protein folding and
stated that this process exhibits some chaotic properties, where
``chaos'' refers to various physical understandings of the phenomenon.
They noted the complexity of the process in concrete cases, without
offering a study framework making it possible to understand the
origins of such a behavior.
The approach presented in this research work is different for the two
following reasons. First, we focus on mathematical aspects of chaos,
like the Devaney's formulation of a chaotic dynamical system. This
well-known topological notion for a chaotic behavior is one of the
most established mathematical definition of unpredictability for
dynamical systems. Second, we do not study the biological folding
process, but the protein folding process as it is described in the 2D
hydrophobic-hydrophilic (HP) lattice model \cite{Berger98}. In other
words, we mathematically study the folding dynamics used in this
model, and we wonder if this model is stable under small
perturbations. For instance, what are the effects in the 2D model of
changing a residue from hydrophobic to hydrophilic? Or what happens
if we do not apply exactly the right rotation to the right residue, at
one given stage of the 2D folding process, due to small errors in the
knowledge of the protein?
Let us recall that the 2D HP square lattice model is a popular model
with low resolution that focuses only on hydrophobicity by separating
the amino acids into two sets: hydrophobic (H) and hydrophilic (or
polar P) \cite{Dill1985}. This model has been used several times for
protein folding prediction
\cite{DBLP:conf/cec/HiggsSHS10,Braxenthaler97,DBLP:conf/cec/IslamC10,Unger93,DBLP:conf/cec/HorvathC10}.
In what follows, we show that \emph{the folding process is
unpredictable (chaotic) in the 2D HP square lattice model used for
prediction}, and we investigate the consequences of this fact.
Chaos here refers to our inability to make relevant prediction with
this model, which does not \emph{necessarily} imply that the
biological folding dynamics is chaotic, too. In particular, we do not
claim that these biological systems must try a large number of
conformations in order to find the best one. Indeed, the prediction
model is proven to be chaotic, but this fact is not clearly related to
the impact of environmental factors on true biological protein
folding.
\bigskip
After having established by two different proofs the chaos, as defined
in Devaney's formulation, of the dynamical system used in the 2D
model, we will deepen the evaluation of the disorder generated by this
system for backbone prediction. A qualitative topological study shows
that its folding dynamics is both indecomposable and unstable.
Moreover, the unpredictability of the system is evaluated
quantitatively too, by computing the constant of sensitivity to the initial conditions and the constant of expansivity.
All of these results show that the dynamical system used for backbone
prediction in the 2D model has a very intense chaotic behavior and is
highly unpredictable.
Consequences of these theoretical results are then outlined. More
precisely, we will focus on the following questions. First, some
artificial intelligence tools used for protein folding prediction are based, for the backbone evaluation, on a dynamical system that
presents several chaotic properties. It is reasonable to wonder
whether these properties impact the quality of the prediction. More
specifically, we will study if neural networks are able to learn a
topological chaotic behavior, and if predictions resulting from this
learning are close to the reality. Moreover, the initial
conformation, encompassing the sequence of amino acids, their
interactions, and the effects of the outside world, are never known
with infinite precision. Taking into account the fact that the model
used for prediction embeds a dynamical system being sensitive to its
initial condition, what can we conclude about the confidence put into
the final 3D conformation? Concerning the biological aspects of the
folding process, the following facts can be remarked. On the one
hand, a chaotic behavior seems to be incompatible with approximately
one thousand general categories of folds: this final kind of order
seems in contradiction with chaos. Additionally, sensitivity to
initial conditions seems to be contradictory with the fact that a
sequence of amino acids always folds in the same conformation,
whatever the environment dependency. So, as the 2D HP lattice model
for backbone prediction is chaotic whereas the whole folding process
seems not, one can wonder whether this backbone prediction is founded
or not. On the other hand, recent experimental researches recalled
previously tend to prove that the folding process presents, at least
to a certain extent, some characteristics of a chaotic behavior
\cite{Bohm1991375,Zhou96,Braxenthaler97}. If this theory is confirmed
and biological explanations are found (for instance, regulatory
processes could repair or delete misfolded proteins), then this
research work could appear as a first step in the theoretical study of
the chaos of protein folding.
In fact, the contradiction raised above is only apparent, as it is
wrong to claim that all of the sequences of amino acids always fold in
a constant and well-defined conformation. More precisely, a large
number of proteins, called ``intrinsically unstructured proteins'' or
``intrinsically disordered proteins'', lay at least in part outside
this rule. More than 600~proteins have been proven to be of this kind:
antibodies, p21 and p27 proteins, fibrinogen, casein in mammalian
milk, capsid of the Tobacco mosaic virus, proteins of the capsid of
bacteriophages, to name a few. Indeed, a large number of proteins have
at least a disordered region of greater or lesser size. This
flexibility allows them to exert various functions within an organism or
to bind to various macromolecules. For instance, the p27 protein can
be binded to various kind of enzymes. Furthermore, some studies have
shown that between 30\% and 50\% of the eukaryote proteins have at
least one large unstructured region
\cite{Dyson2005,doi:10.1146/annurev.biophys.37.032807.125924}. Hence,
regular and disordered proteins can be linked to the mathematical
notions of chaos as understood by Devaney, or Knudsen, which consist
in the interlocking of points having a regular behavior with points
whose desire is to visit the whole space.
The remainder of this paper is structured as follows. In the next
section we recall some notations and terminologies on the 2D model and
Devaney's definition of chaos. In Section \ref{sec:dynamical
system}, the folding process in the 2D model is written as a
dynamical system on a relevant metric space. Compared
to~\cite{bgc11:ip}, we have simplified the folding function and
refined the metric space to the set of all acceptable conformations.
This work, which is the first contribution of this paper, has been
realized by giving a complete understanding of the so-called
Self-Avoiding Walk (SAW) requirement. In Sections~\ref{sec:HP=chaos}
and \ref{sec:CI=chaos}, proofs of the chaotic behavior of the dynamical
system used for backbone prediction are taken from~\cite{bgc11:ip}
and adapted to this set of acceptable conformations. This adaptation
is the second contribution of this research work. The first proof is
directly achieved in Devaney's context whereas the second one uses a
previously proven result concerning chaotic iterations
\cite{guyeux09}. The following section is devoted to qualitative and
quantitative evaluations of the disorder exhibited by the folding
process. This is the third theoretical contribution of this extension
of \cite{bgc11:ip}. Consequences of this unpredictable behavior are
given in Section \ref{Sec:Consequences}. Among other things, we
examine whether chaotic behaviors are harder to predict than
``normal'' behaviors, and whether such behaviors are easy to learn.
This section greatly extends the premises outlined formerly
in~\cite{bgc11:ip}. Additionally, reasons explaining why a chaotic
behavior unexpectedly leads to approximately one thousand categories
of folds are proposed. This paper ends with a conclusion section, in
which our contribution is summarized and intended future work is
presented.
\section{Basic Concepts}
\label{Sec:basic recalls}
In the sequel $S^{n}$ denotes the $n^{th}$ term of a sequence $S$ and
$V_{i}$ the $i^{th}$ component of a vector $V$.
The $k^{th}$
composition of a single function $f$ is represented by $f^{k}=f
\circ \hdots \circ f$ ($k$ times).
The set of congruence classes modulo 4 is denoted by $\mathds{Z}/4\mathds{Z}$.
Finally, given two integers $a<b$, the following notation is used:
$\llbracket a;b\rrbracket =\{a,a+1,\hdots,b\}$.
\subsection{2D Hydrophilic-Hydrophobic (HP) Model}
\subsubsection*{HP Model}
In the HP model, hydrophobic interactions are supposed to dominate
protein folding. This model was formerly introduced by Dill, who
considers in \cite{Dill1985} that the protein core, which frees up
energy, is formed by hydrophobic amino acids, whereas hydrophilic
amino acids tend to move to the outer surface due to their affinity
with the solvent (see Fig.~\ref{fig:hpmodel}).
In this model, a protein conformation is a ``self-avoi\-ding walk
(SAW)'' on a 2D or 3D lattice such that its energy $E$, depending on
topological neighboring contacts between hydrophobic amino acids that
are not contiguous in the primary structure, is minimal. In other
words, for an amino-acid sequence $P$ of length $\mathsf{N}$ and for
the set $\mathcal{C}(P)$ of all SAW conformations of $P$, the chosen
conformation will be $C^* = argmin \left\{E(C) \big/ C \in
\mathcal{C}(P)\right\}$ \cite{Shmygelska05}. In that context and for
a conformation $C$, \linebreak $E(C)=-q$, where $q$ is the
number of topological hydrophobic neighbors. For example, $E(C)=-5$
in Fig.~\ref{fig:hpmodel}.
\begin{figure}[t]
\centering
\includegraphics[width=2.35in]{HPmodel.pdf}
\caption{Hydrophilic-hydrophobic model (black squares are
hydrophobic residues)}
\label{fig:hpmodel}
\end{figure}
\subsubsection*{Protein Encoding}
In addition to the direct coordinate presentation, at least two other
isomorphic encoding strategies for HP models are possible: relative
encoding and absolute encoding. In relative encoding \cite{Hoque09},
the move direction is defined relative to the direction of the
previous move. Alternatively, in absolute encoding
\cite{Backofen99algorithmicapproach}, which is the encoding chosen in
this paper, the direct coordinate presentation is replaced by letters
or numbers representing directions with respect to the lattice
structure.
For absolute encoding in the 2D square lattice, the permitted moves
are: forward $\rightarrow$ (denoted by 0), down $\downarrow$ (1),
backward $\leftarrow$ (2), and up $\uparrow$ (3). A 2D conformation
$C$ of $\mathsf{N}+1$ residues for a protein $P$ is then an element
$C$ of $\mathds{Z}/4\mathds{Z}^{\mathsf{N}}$, with a first component
equal to 0 (forward) \cite{Hoque09}. For instance, in
Fig.~\ref{fig:hpmodel}, the 2D absolute encoding is 00011123322101
(starting from the upper left corner). In that situation, at most
$4^{\mathsf{N}}$ conformations are possible when considering
$\mathsf{N}+1$ residues, even if some of them are invalid due to the
SAW requirement.
\subsection{Devaney's Chaotic Dynamical Systems}
\label{subsection:Devaney}
From a mathematical point of view, deterministic chaos has been
thoroughly studied these last decades, with different research works
that have provided various definitions of chaos. Among these
definitions, the one given by Devaney~\cite{Devaney} is perhaps the
best established.
Consider a topological space $(\mathcal{X},\tau)$ and a continuous
function $f$ on $\mathcal{X}$. Topological transitivity occurs when,
for any point, any neighborhood of its future evolution eventually
overlaps with any other given region. More precisely,
\begin{definition}
$f$ is said to be \emph{topologically transitive} if, for any pair
of open sets $U,V \subset \mathcal{X}$, there exists $k>0$ such that
$f^k(U) \cap V \neq \emptyset$.
\end{definition}
This property implies that a dynamical system cannot be broken into
simpler subsystems. It is intrinsically complicated and cannot be
simplified. Besides, a dense set of periodic points is an element of
regularity that a chaotic dynamical system has to exhibit.
\begin{definition}
An element (a point) $x$ is a \emph{periodic element} (point) for
$f$ of period $n\in \mathds{N}^*,$ if $f^{n}(x)=x$.
\end{definition}
\begin{definition}
$f$ is said to be \emph{regular} on $(\mathcal{X}, \tau)$ if the set
of periodic points for $f$ is dense in $\mathcal{X}$: for any point
$x$ in $\mathcal{X}$, any neighborhood of $x$ contains at least one
periodic point.
\end{definition}
This regularity ``counteracts'' the effects of transitivity. Thus,
due to these two properties, two points close to each other can behave
in a completely different manner, leading to unpredictability for the
whole system. Then,
\begin{definition}[Devaney's chaos]
$f$ is said to be \emph{chao\-tic} on $(\mathcal{X},\tau)$ if $f$ is
regular and topologically transitive.
\end{definition}
The chaos property is related to the notion of ``sensitivity'',
defined on a metric space $(\mathcal{X},d)$ by:
\begin{definition} \label{sensitivity}
$f$ has \emph{sensitive dependence on initial conditions} if there
exists $\delta >0$ such that, for any $x\in \mathcal{X}$ and any
neighborhood $V$ of $x$, there exist $y\in V$ and $n \geq 0$ such
that $d\left(f^{n}(x), f^{n}(y)\right) >\delta $.
$\delta$ is called the \emph{constant of sensitivity} of $f$.
\end{definition}
Indeed, Banks \emph{et al.} have proven in~\cite{Banks92} that when
$f$ is chaotic and $(\mathcal{X}, d)$ is a metric space, then $f$ has
the property of sensitive dependence on initial conditions (this
property was formerly an element of the definition of chaos). To sum
up, quoting Devaney in~\cite{Devaney}, a chaotic dynamical system ``is
unpredictable because of the sensitive dependence on initial
conditions. It cannot be broken down or simplified into two
subsystems which do not interact because of topological transitivity.
And in the midst of this random behavior, we nevertheless have an
element of regularity''. Fundamentally different behaviors are
consequently possible and occur in an unpredictable way.
\section{A Dynamical System for the 2D HP Square Lattice Model}
\label{sec:dynamical system}
The objective of this research work is to establish that the protein
folding process, as it is described in the 2D model, has a chaotic
behavior. To do so, this process must be first described as a
dynamical system.
\subsection{Initial Premises}
Let us start with preliminaries introducing some concepts that will be
useful in our approach.
The primary structure of a given protein $P$ with $\mathsf{N}+1$
residues is coded by $0 0 \hdots 0$ ($\mathsf{N}$ times) in absolute
encoding. Its final 2D conformation has an absolute encoding equal to
$0 C_1^* \hdots C_{\mathsf{N}-1}^*$, where $\forall i, C_i^* \in
\mathds{Z}/4\mathds{Z}$, is such that $E(C^*) = \min \left\{E(C)
\big/ C \in \mathcal{C}(P)\right\}$. This final conformation depends
on the repartition of hydrophilic and hydrophobic amino acids in the
initial sequence.
Moreover, we suppose that, if residue number $n+1$ is forward of
residue number $n$ in absolute encoding ($\rightarrow$) and if a fold
occurs after $n$, then the forward move can only be changed into up
($\uparrow$) or down ($\downarrow$). That means that, in our simplistic
model, only rotations of $+\frac{\pi}{2}$ or $-\frac{\pi}{2}$ are
possible.
Consequently, for a given residue that is supposed to be updated, only
one of the two possibilities below can appear for its absolute move
during a fold:
\begin{itemize}
\item $0 \longmapsto 1, 1 \longmapsto 2, 2 \longmapsto 3,$ or $ 3 \longmapsto 0$
for a fold in the clockwise direction, or
\item $1 \longmapsto 0, 2 \longmapsto 1, 3 \longmapsto 2,$ or $0 \longmapsto 3$
for a fold in the anticlockwise direction.
\end{itemize}
This fact leads to the following definition:
\begin{definition}
The \emph{clockwise fold function} is the function $f:
\mathds{Z}/4\mathds{Z} \longrightarrow \mathds{Z}/4\mathds{Z}$ defined
by $f(x)=x+1 (\textrm{mod}~ 4)$.
\end{definition}
Obviously the dual anticlockwise fold function is \linebreak
$f^{-1}(x)=x-1 (\textrm{mod}~ 4)$.
Thus at the $n^{th}$ folding time, a residue $k$ is chosen and its
absolute move is changed by using either $f$ or $f^{-1}$. As a
consequence, all of the absolute moves must be updated from the
coordinate $k$ until the last one $\mathsf{N}$ by using the same
folding function.
\begin{example}
\label{ex1}
If the current conformation is $C=000111$, i.e.,
\begin{figure}[h]
\centering
\includegraphics[width=1.25in]{conformation1.pdf}
\end{figure}
\noindent and if the third residue is chosen to fold by a rotation of
$-\frac{\pi}{2}$ (mapping $f$), the new conformation will be:
$$(C_1,C_2,f(C_3),f(C_4),f(C_5),f(C_6)) = (0,0,1,2,2,2).$$
\noindent That is,
\begin{figure}[h]
\centering
\includegraphics[width=1.25in]{conformation2.pdf}
\end{figure}
\end{example}
These considerations lead to the formalization described hereafter.
\subsection{Formalization and Notations}
Let $\mathsf{N}+1$ be a fixed number of amino acids, where $\mathsf{N}\in\mathds{N}^*$.
We define
$$\check{\mathcal{X}}=\mathds{Z}/4\mathds{Z}^\mathsf{N}\times \llbracket -\mathsf{N};\mathsf{N}
\rrbracket^\mathds{N}$$
as the phase space of all possible folding processes.
An element $X=(C,F)$ of this dynamical folding space consists of:
\begin{itemize}
\item A conformation of the $\mathsf{N}+1$ residues in absolute encoding: $C=(C_1,\hdots, C_\mathsf{N}) \in \mathds{Z}/4\mathds{Z}^\mathsf{N}$. Note that we do not require self-avoiding walks here.
\item A sequence $F \in \llbracket -\mathsf{N} ; \mathsf{N} \rrbracket^\mathds{N}$ of future folds such that, when $F^i \in \llbracket -\mathsf{N}; \mathsf{N} \rrbracket$ is equal to $k$:
\begin{itemize}
\item a fold occurs after the $k$-th residue by a rotation of $-\frac{\pi}{2}$ (mapping $f$) at the $i$-th step, if $k = F^i > 0$,
\item no fold occurs at time $i$, if $k=0$,
\item a fold occurs after the $|k|$-th residue by a rotation of $\frac{\pi}{2}$ (\emph{i.e.}, $f^{-1}$) at the $i$-th time, if $k<0$.
\end{itemize}
\end{itemize}
On this phase space, the protein folding dynamic in the 2D model can be formalized as follows.
\medskip
Denote by $i$ the map that transforms a folding sequence into its first term (\emph{i.e.}, into the first folding operation):
$$
\begin{array}{lccl}
i:& \llbracket -\mathsf{N};\mathsf{N} \rrbracket^\mathds{N} & \longrightarrow & \llbracket -\mathsf{N};\mathsf{N} \rrbracket \\
& F & \longmapsto & F^0,
\end{array}$$
by $\sigma$ the shift function over $\llbracket -\mathsf{N};\mathsf{N} \rrbracket^\mathds{N}$, that is to say,
$$
\begin{array}{lccl}
\sigma :& \llbracket -\mathsf{N};\mathsf{N} \rrbracket^\mathds{N}
& \longrightarrow & \llbracket -\mathsf{N};\mathsf{N} \rrbracket^\mathds{N} \\
& \left(F^k\right)_{k \in \mathds{N}} & \longmapsto
& \left(F^{k+1}\right)_{k \in \mathds{N}},
\end{array}$$
\noindent and by $sign$ the function:
$$
sign(x) = \left\{
\begin{array}{ll}
1 & \textrm{if } x>0,\\
0 & \textrm{if } x=0,\\
-1 & \textrm{else.}
\end{array}
\right.
$$
Remark that the shift function removes the first folding operation from the folding sequence $F$ once it has been achieved.
Consider now the map $G:\check{\mathcal{X}} \to \check{\mathcal{X}}$ defined by:
$$G\left((C,F)\right) = \left( f_{i(F)}(C),\sigma(F)\right),$$
\noindent where $\forall k \in \llbracket -\mathsf{N};\mathsf{N} \rrbracket$,
$f_k: \mathds{Z}/4\mathds{Z}^\mathsf{N} \to \mathds{Z}/4\mathds{Z}^\mathsf{N}$
is defined by:
\begin{flushleft}
$f_k(C_1,\hdots,C_\mathsf{N}) =$
\end{flushleft}
\begin{flushright}
$ (C_1,\hdots,C_{|k|-1},f^{sign(k)}(C_{|k|}),\hdots,f^{sign(k)}(C_\mathsf{N})).$
\end{flushright}
Thus the folding process of a protein $P$ in the 2D HP square lattice model, with initial conformation equal to $(0,0, \hdots, 0)$ in absolute encoding and a folding sequence equal to $(F^i)_{i \in \mathds{N}}$, is defined by the following dynamical system over $\check{\mathcal{X}}$:
$$
\left\{
\begin{array}{l}
X^0=((0,0,\hdots,0),F)\\
X^{n+1}=G(X^n), \forall n \in \mathds{N}.
\end{array}
\right.
$$
In other words, at each step $n$, if $X^n=(C,F)$, we take the first
folding operation to realize, that is $i(F) = F^0 \in \llbracket
-\mathsf{N};\mathsf{N} \rrbracket$, we update the current conformation
$C$ by rotating all of the residues coming after the $|i(F)|-$th one,
which means that we replace the conformation $C$ with $f_{i(F)}(C)$.
Lastly, we remove this rotation (the first term $F^0$) from the
folding sequence $F$: $F$ becomes $\sigma(F)$.
\begin{example}
Let us reconsider Example \ref{ex1}. The unique iteration of this
folding process transforms a point of $\check{\mathcal{X}}$ having the form
$\left((0,0,0,1,1,1),(+3, F^1, F^2, \hdots)\right)$ in
$G\left(((0,0,0,1,1,1),(+3,F^1,F^2, \hdots))\right),$ which is equal to
$\left((0,0,1,2,2,2),(F^1,F^2, \hdots)\right)$.
\end{example}
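The following Python sketch (given for illustration only: it is not part of the formalization, and finite lists stand for the infinite folding sequences) implements the maps $f_k$ and $G$ defined above, and reproduces this example.
\begin{verbatim}
def fold(C, k):
    # f_k: rotate the moves from position |k| to N by +1
    # (clockwise, k > 0) or by -1 (anticlockwise, k < 0), mod 4.
    s = 1 if k > 0 else -1
    j = abs(k) - 1              # 0-based index of residue |k|
    return C[:j] + [(c + s) % 4 for c in C[j:]]

def G(X):
    # One iteration: apply the first fold, then shift the strategy.
    C, F = X
    if F[0] == 0:               # 0 encodes "no fold at this time"
        return C, F[1:]
    return fold(C, F[0]), F[1:]

X = ([0, 0, 0, 1, 1, 1], [3, -4, 2])
print(G(X))    # ([0, 0, 1, 2, 2, 2], [-4, 2])
\end{verbatim}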
\begin{remark}
Such a formalization allows the study of proteins that never stop
folding, for instance due to never-ending interactions with the
environment.
\end{remark}
\begin{remark}
A protein $P$ that has finished folding, if such a protein exists, has
the form $(C,(0,0,0,\hdots))$, where $C$ is the final 2D structure of
$P$. In this case, we can identify a folding sequence that is
eventually zero, \emph{i.e.}, of the form $(F^0, \hdots, F^n,0,
\hdots)$, with the finite sequence $(F^0, \hdots, F^n)$.
\end{remark}
We will now introduce the SAW requirement in our formulation of the folding process in the 2D model.
\subsection{The SAW Requirement}
\subsubsection{Towards a Basic SAW Requirement Definition}
Let $\mathcal{P}$ denote the 2D plane and
$$
\begin{array}{cccc}
p: & \mathds{Z}/4\mathds{Z}^\mathsf{N} & \to & \mathcal{P}^{\mathsf{N}+1} \\
& (C_1, \hdots, C_\mathsf{N}) & \mapsto & (X_0, \hdots, X_\mathsf{N})
\end{array}
$$
where $X_0 = (0,0)$ and
$$
X_{i+1} = \left\{
\begin{array}{ll}
X_i + (1,0) & ~\textrm{if } C_{i+1} = 0,\\
X_i + (0,-1) & ~\textrm{if } C_{i+1} = 1,\\
X_i + (-1,0) & ~\textrm{if } C_{i+1} = 2,\\
X_i + (0,1) & ~\textrm{if } C_{i+1} = 3.
\end{array}
\right.
$$
The map $p$ transforms an absolute encoding into its 2D representation.
For instance, $p((0,0,0,1,1,1))$ is \linebreak
$((0,0);(1,0);(2,0);(3,0);(3,-1);(3,-2);(3,-3))$, that is, the first
figure of Example \ref{ex1}.
Now, for each $(P_0, \hdots, P_\mathsf{N})$ of $\mathcal{P}^{\mathsf{N}+1}$, we denote by $$support((P_0, \hdots, P_\mathsf{N}))$$ the set (with no repetition): $\left\{P_0, \hdots, P_\mathsf{N}\right\}$. For instance,
$$support\left(((0,0);(0,1);(0,0);(0,1))\right) = \left\{(0,0);(0,1)\right\}.$$
Then,
\begin{definition}
\label{def:SAW}
A conformation $(C_1, \hdots, C_\mathsf{N}) \in \mathds{Z}/4\mathds{Z}^{\mathsf{N}}$ satisfies the \emph{self-avoiding walk (SAW) requirement} iff the cardinality of $support(p((C_1, \hdots, C_\mathsf{N})))$ is $\mathsf{N}+1$.
\end{definition}
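A minimal Python sketch of the map $p$ and of this SAW test, given only to illustrate Definition \ref{def:SAW}, is the following.
\begin{verbatim}
MOVES = {0: (1, 0), 1: (0, -1), 2: (-1, 0), 3: (0, 1)}

def p(C):
    # 2D representation: the N+1 lattice points visited by C.
    x, y = 0, 0
    points = [(x, y)]
    for c in C:
        x, y = x + MOVES[c][0], y + MOVES[c][1]
        points.append((x, y))
    return points

def is_saw(C):
    # SAW requirement: the support of p(C) has cardinality N+1.
    pts = p(C)
    return len(set(pts)) == len(pts)

print(p([0, 0, 0, 1, 1, 1]))       # the walk of Example 1
print(is_saw([0, 0, 0, 1, 1, 1]))  # True
print(is_saw([0, 3, 2, 1]))        # False: the walk returns to (0,0)
\end{verbatim}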
We can remark that Definition \ref{def:SAW} concerns only one conformation, and not a \emph{sequence} of conformations that occurs in a folding process.
\subsubsection{Understanding the so-called SAW Requirement for a Folding Process}
The next stage in the formalization of the protein folding process in
the 2D model as a dynamical system is to take into account the
self-avoiding walk (SAW) requirement, by restricting the set
$\mathds{Z}/4\mathds{Z}^\mathsf{N}$ of all possible conformations to
one of its subsets. That is, to define precisely the set
$\mathcal{C}(P)$ of acceptable conformations of a protein $P$ having
$\mathsf{N}+1$ residues. This stage needs a clear definition of the
SAW requirement. However, as stated above, Definition \ref{def:SAW}
only focuses on the SAW requirement of a given conformation, not on
a complete folding process. In our opinion, this requirement, applied
to the whole folding process, can be understood in at least four ways.
\medskip
In the first and least restrictive approach, which we call ``$SAW_1$'',
we only require that the studied conformation satisfies the SAW
requirement of Definition \ref{def:SAW}. It is not regarded whether
this conformation is the result of a folding process that has started
from $(0,0,\hdots,0)$. Such a SAW requirement has been chosen by the
authors of \cite{Crescenzi98} when they proved the
NP-completeness of the PSP problem.
The second approach called $SAW_2$ requires that, starting from the
initial condition $(0,0,\hdots, 0)$, we obtain by a succession of
folds a final conformation that is a self-avoiding walk. In other
words, we require that the path corresponding to the final 2D
conformation has 2 vertices of degree 1 and $\mathsf{N}-1$ vertices
of degree 2. For instance, the folding process of Figure \ref{saw2}
is acceptable in $SAW_2$, even if it presents residues that overlap in
an intermediate conformation. Such an approach corresponds to programs
that start from the initial conformation $(0,0, \hdots, 0)$, fold it
several times according to their embedding functions, and then obtain
a final conformation on which the SAW property is checked: only the
last conformation has to satisfy Definition~\ref{def:SAW}.
\begin{figure}
\centering
\caption{Folding process acceptable in $SAW_2$ but not in $SAW_3$}
\label{saw2}
\includegraphics[width=1.5in]{saw2.pdf}
\end{figure}
In the next approach, namely the $SAW_3$ requirement, it is demanded
that each intermediate conformation, between the initial one and the
returned (final) one, satisfy Definition \ref{def:SAW}. It
restricts the set of all conformations
$\mathds{Z}/4\mathds{Z}^\mathsf{N}$, for a given $\mathsf{N}$, to the
subset $\mathfrak{C}_\mathsf{N}$ of conformations $(C_1,\hdots,
C_\mathsf{N})$ such that $\exists n \in \mathds{N}^*,$ $\exists k_1,
\hdots, k_n \in \llbracket -\mathsf{N}; \mathsf{N}
\rrbracket$, $$(C_1, \hdots,
C_\mathsf{N}) = G^n\left(((0,0, \hdots, 0),(k_1, \hdots,
k_n))\right)_0$$ and, $\forall i \leqslant n$, the conformation
$G^i\left(((0, \hdots, 0), (k_1, \hdots, k_n))\right)_0$ satisfies
Definition \ref{def:SAW}. This $SAW_3$ folding
process requirement, which is perhaps the most usual meaning of ``SAW
requirement'' in the literature (it is used, for instance, in
\cite{DBLP:conf/cec/HiggsSHS10,Braxenthaler97,DBLP:conf/cec/IslamC10,Unger93,DBLP:conf/cec/HorvathC10}),
has been chosen in this research work. In this approach, the
acceptable conformations are obtained starting from the initial
conformation $(0,0, \hdots, 0)$ and are such that all the intermediate
conformations satisfy Definition \ref{def:SAW}.
Finally, the $SAW_4$ approach is a $SAW_3$ requirement in which no
vertex or edge intersection occurs during the transformation of one
conformation into another. For instance, the transformation of Figure
\ref{saw4} is authorized in the $SAW_3$ approach but refused in the
$SAW_4$ one: during the rotation around the residue identified by a
cross, the structure after this residue will intersect the remainder
of the ``protein''. In this last approach it is impossible, for a
protein folding from one plane conformation to another plane one, to
use the whole space to achieve this folding.
\begin{figure}
\centering
\caption{Folding process acceptable in $SAW_3$ but not in $SAW_4$}
\label{saw4}
\includegraphics[width=3.25in]{saw4.pdf}
\end{figure}
Obviously, $SAW_4 \subsetneq SAW_3 \subseteq SAW_2 \subseteq SAW_1$.
Indeed, it is easy to prove that $SAW_3 \subsetneq SAW_2$ too, but we
do not know whether $SAW_2 \subsetneq SAW_1$ or not. The study of
these four sets, their cardinalities and characterizations, and the
consequences of the fact that the NP-completeness of the PSP problem
has been established in $SAW_1$, will be investigated in a future
work.
In the present document we cannot decide which of the $SAW_i$,
\linebreak $i \in \left\{1,\hdots,4\right\}$, is the most reasonable
approach, that is, the closest to the true natural
protein folding. However, due to its complexity, the $SAW_4$
requirement is never used by tools that embed a 2D HP square lattice
model for protein structure prediction. That is why we will consider,
in this research work, that the so-called ``SAW requirement'' for a 2D
folding process corresponds to the $SAW_3$ approach detailed
previously. Indeed, it is the most used one, and we only want to
study the ability of PSP software to find the most probable 2D
conformation. Thus, in what follows, the set of acceptable
conformations with $\mathsf{N}+1$ residues will be the set
$\mathfrak{C}_\mathsf{N}$ (\emph{i.e.}, $\mathcal{C}(P) =
\mathfrak{C}_\mathsf{N}$).
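For small values of $\mathsf{N}$, the set $\mathfrak{C}_\mathsf{N}$ can be computed exhaustively. The following Python sketch (an illustration, not an optimized tool) explores, in a breadth-first manner, all the conformations reachable from $(0,\hdots,0)$ through folds whose intermediate conformations all satisfy Definition~\ref{def:SAW}.
\begin{verbatim}
MOVES = {0: (1, 0), 1: (0, -1), 2: (-1, 0), 3: (0, 1)}

def is_saw(C):
    x, y, pts = 0, 0, {(0, 0)}
    for c in C:
        x, y = x + MOVES[c][0], y + MOVES[c][1]
        if (x, y) in pts:
            return False
        pts.add((x, y))
    return True

def fold(C, k):
    s = 1 if k > 0 else -1
    j = abs(k) - 1
    return C[:j] + tuple((c + s) % 4 for c in C[j:])

def acceptable_conformations(N):
    # Vertices: self-avoiding conformations; edges: single folds.
    start = (0,) * N
    seen, frontier = {start}, [start]
    while frontier:
        new = []
        for C in frontier:
            for k in range(1, N + 1):
                for D in (fold(C, k), fold(C, -k)):
                    if D not in seen and is_saw(D):
                        seen.add(D)
                        new.append(D)
        frontier = new
    return seen

for N in range(1, 5):
    print(N, len(acceptable_conformations(N)))
\end{verbatim}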
\subsection{A Metric for the Folding Process}
We define a metric $d$ over $\mathcal{X} = \mathfrak{C}_\mathsf{N} \times \llbracket -\mathsf{N};\mathsf{N} \rrbracket^\mathds{N}$ by:
$$\displaystyle{d(X, \check{X}) = d_C(C, \check{C}) + d_F (F, \check{F}),}$$
where
$$ \left\{
\begin{array}{l}
\delta(a,b)=0 \textrm{ if } a=b, \textrm{ otherwise }\delta(a,b)=1, \\
d_C(C, \check{C}) = \displaystyle{\sum_{k=1}^\mathsf{N} \delta(C_k,\check{C}_k)
2^{\mathsf{N}-k}}, \\
d_F (F, \check{F}) = \displaystyle{\dfrac{9}{2 \mathsf{N}} \sum_{k=0}^\infty
\dfrac{|F^k-\check{F}^k|}{10^{k+1}}.}
\end{array}
\right.$$
This new distance for the dynamical description of the protein folding
process in the 2D HP square lattice model can be justified as follows.
The integral part of the distance between two points $X=(C,F)$ and
$\check{X}=(\check{C},\check{F})$ of $\mathcal{X}$ measures the
difference between the current 2D conformations of $X$ and
$\check{X}$. More precisely, if $d_C(C,\check{C})$ is in $\llbracket
2^{\mathsf{N}-(k+1)};2^{\mathsf{N}-k} \llbracket$, then the first $k$
terms of the acceptable conformations $C$ and $\check{C}$ (their
absolute encodings) are equal, whereas their $(k+1)$-th terms differ:
the two 2D conformations will differ after the \linebreak $(k+1)$-th
residue. If the decimal part of $d(X, \check{X})$ is between
$10^{-(k+1)}$ and $10^{-k}$, then the next $k$ foldings of $C$ and
$\check{C}$ will occur at the same place (residue), in the same order,
and with the same angle. The decimal part of $d(X,\check{X})$ thus
decreases as the duration during which the two folding processes
remain similar increases.
More precisely, $F^k =
\check{F}^k$ (same residue and same angle of rotation at the $k$-th
stage of the 2D folding process) if and only if the $(k+1)$-th digit
of this decimal part is 0. Lastly, $\frac{9}{2\mathsf{N}}$ is just a
normalization factor.
For instance, if we know where the $\mathsf{N}+1$ residues of
our protein $P$ currently are in the lattice (knowledge of the correct
conformation $C$), and if we have discovered what its $k$ next
foldings will be, then we know that the point $X=(C,F)$ describing the
folding process of the considered protein in the 2D model lies
``somewhere'' in the ball $\mathcal{B}\left((C,\check{F}),
10^{-k}\right)$, where $\check{F}$ is any folding sequence beginning
with these $k$ known foldings; that is, $X$ is located very precisely
when $k$ is large.
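As an illustration, the distances $d_C$ and $d_F$ can be computed with the following Python sketch, in which the series defining $d_F$ is truncated to the given finite lists (this is exact for folding sequences that are eventually zero); up to floating-point rounding, it reproduces the values of the two examples below.
\begin{verbatim}
def d_C(C, D):
    N = len(C)
    return sum(2 ** (N - k - 1) for k in range(N) if C[k] != D[k])

def d_F(F, G, N):
    m = max(len(F), len(G))
    F = list(F) + [0] * (m - len(F))   # pad with 0 (no fold)
    G = list(G) + [0] * (m - len(G))
    return 9.0 / (2 * N) * sum(
        abs(F[k] - G[k]) / 10 ** (k + 1) for k in range(m))

print(d_F([3, -4, 2], [3, -4, -6], 6))               # 0.006
print(d_C([0, 0, 0, 1, 1, 1], [0, 0, 1, 2, 2, 2]))   # 15
print(d_F([3, -4, 2], [-4, -5], 6))                  # 0.534
\end{verbatim}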
\begin{example}
Let us consider two points
\begin{itemize}
\item $X = \left((0,0,0,1,1,1),(3,-4,2)\right)$,
\item and $X' = \left((0,0,0,1,1,1),(3,-4,-6)\right)$
\end{itemize}
of $\mathcal{X}$. We note $X=(C,F)$ and $X'=(C',F')$. As $d_C(C,C')=0$,
these two points have the same current (first) conformation. As
$d_F(F,F') = \frac{9}{2\times 6}\frac{|2-(-6)|}{10^3} = 0.006$ is in
$\left[10^{-3};10^{-2}\right[$, we can deduce that the two next
foldings of $X$ and of $X'$ will lead to identical conformations,
whereas the third folding operation will lead to different
conformations. A possible way to represent these two points of
the phase space is to draw the successive conformations induced by
these points, as illustrated in Figure \ref{fig:representation du
phase space}.
\end{example}
\begin{figure}
\centering
\caption{Representation of $X = \left((0,0,0,1,1,1),(3,-4,2)\right)$ and $X' = \left((0,0,0,1,1,1),(3,-4,-6)\right)$ of the phase space $\mathcal{X}$ ($X$ is depicted in the left part of the figure, $X'$ in the right part).}
\label{fig:representation du phase space}
\includegraphics[width=2.75in]{example3.pdf}
\end{figure}
\begin{example}
Figure \ref{fig:representation du phase space2} contains the
representation of the two ``points'' $X =
\left((0,0,0,1,1,1),(3,-4,2)\right)$ and \linebreak $X' =
\left((0,0,1,2,2,2),(-4,-5)\right)$. Let $X=(C,F)$ and
$X'=(C',F')$. We have
$$d_C(C,C') = 2^{6-3}+2^{6-4}+2^{6-5}+2^{6-6} = 15$$
and
$d_F =
\frac{9}{12}\left(\frac{|3-(-4)|}{10}+\frac{|-4-(-5)|}{100}+\frac{|2-0|}{1000}\right)
= 0.534,$
then $d(X,X') = 15.534$. As 15 is in $\left[2^3;2^4\right[$, we can
conclude that the absolute encodings of the two initial
conformations coincide in their first $k=\mathsf{N}-4=2$ terms.
\end{example}
\section{Folding Process in 2D Model is Chaotic}
\label{sec:HP=chaos}
\subsection{Motivations}
In our topological description of the protein folding process in the
2D model, all the information is embedded into the folding sequence
$F$. Indeed, roughly speaking, it is as if nature has a function
$\mathcal{N}$ that translates a protein $P$ having the linear
conformation $(0,\hdots,0)$, in an environment $E$, into a folding
sequence $F$, \emph{i.e.}, \linebreak $F=\mathcal{N}(P,E)$. Having
this ``natural'' folding sequence~$F$, we are able to obtain its true
conformation in the 2D model, by computing
$G^n\left(((0,\hdots,0),F)\right)$, where $n$ is the size of $F$. On
our side, we have only a partial knowledge of the environment $E$ and
of the protein $P$ (exact interactions between atoms). We thus
consider $\check{P}$ and $\check{E}$, as close as we can to $P$ and
$E$ respectively. Moreover, we have only a model
$\check{\mathcal{N}}$ of $\mathcal{N}$ as, for instance, we use
various approximations: models for free energy, approximations of
hydrophobic/hydrophilic areas and electro-polarity, etc. This is why
we can only deduce an approximation
$\check{F}=\check{\mathcal{N}}(\check{P},\check{E})$ of the natural
folding sequence $F=\mathcal{N}(P,E)$. One important motivation of
this work is to determine whether, having an approximation $\check{F}$
of $F$, we obtain a final conformation $\check{C} =
G^{\check{n}}\left(((0,\hdots,0),\check{F})\right)_0$ close to the
natural conformation $C = G^{n}\left(((0,\hdots,0),F)\right)_0$ or
not. In this last sentence, $n$ and $\check{n}$ are the sizes of $F$
and $\check{F}$ respectively, and the terms ``approximation'' and
``close'' can be understood by using $d_F$ and $d_C$, respectively. To
sum up, even if we cannot have access with an infinite precision to
all of the forces that participate to the folding process,
\emph{i.e.}, even if we only know an approximation ${X'}^0 =
\left((0,\hdots,0),\check{F}\right)$ of $X^0=
\left((0,\hdots,0),F\right)$, can we claim that the predicted
conformation ${X'}^{n_1} =
G^{n_1}\left(((0,\hdots,0),\check{F})\right)$ still remains close to
the true conformation ${X}^{n_2} =
G^{n_2}\left(((0,\hdots,0),F)\right)$? Or, on the contrary, do we
have a chaotic behavior, a kind of butterfly effect that magnifies any
error on the evaluation of the forces in presence?
Raising such a question leads to the study of the dynamical behavior
of the folding process.
\begin{figure}
\centering
\caption{Representation of $X = \left((0,0,0,1,1,1),(3,-4,2)\right)$
and $X' = \left((0,0,1,2,2,2),(-4,-5)\right)$ of the phase space
$\mathcal{X}$ ($X$ is depicted in the left part of the figure, $X'$
in the right part).}
\label{fig:representation du phase space2}
\includegraphics[width=3in]{example4.pdf}
\end{figure}
\subsection{Continuity of the Folding Process}
We will now give a first proof of the chaotic behavior of the protein
folding dynamics in the 2D model. To do so, we must establish first
that $G$ is a continuous map on $(\mathcal{X},d)$. Indeed, the
mathematical theory of chaos only studies dynamical systems defined by
a recurrence relation of the form $X^{n+1}=G(X^n)$, with $G$
continuous.
\begin{proposition}
$G$ is a continuous map on $(\mathcal{X},d)$.
\end{proposition}
\begin{proof}
We will use the sequential characterization of the continuity. Let
$(X^n)_{n \in \mathds{N}} = \left((C^n,F^n)\right)_{n \in \mathds{N}}
\in \mathcal{X}^\mathds{N},$ such that $X^n \rightarrow X =
(\check{C},\check{F})$. We will then show that $G\left(X^n\right)
\rightarrow G(X)$. Let us remark that, $\forall n \in \mathds{N}$, $F^n$
is itself a sequence: $\left(F^n\right)_{n\in\mathds{N}}$ is thus a
sequence of sequences.
On the one hand, as $X^n=(C^n,F^n) \rightarrow (\check{C},\check{F})$,
we have $d_C\left(C^n,\check{C}\right) \rightarrow 0$, thus $\exists
n_0 \in \mathds{N},$ $n \geqslant n_0$ $\Rightarrow
d_C(C^n,\check{C})=0$. That is, $\forall n \geqslant n_0$ and $\forall
k \in \llbracket 1;\mathsf{N} \rrbracket$, $\delta(C_k^n,\check{C}_k)
= 0$, and so $C^n = \check{C}, \forall n \geqslant n_0.$ Additionally,
since $d_F(F^n,\check{F}) \rightarrow 0$, $\exists n_1 \in \mathds{N}$
such that, $\forall n \geqslant n_1$, $d_F(F^n, \check{F}) <
\frac{9}{20\mathsf{N}}$. As a consequence, $\forall n \geqslant n_1$,
the first term of the sequence $F^n$ is $\check{F}^0$ (indeed, if these
first terms differed, we would have $d_F(F^n,\check{F}) \geqslant
\frac{9}{2\mathsf{N}} \cdot \frac{1}{10}$): $i(F^n) =
i(\check{F})$. So, $\forall n \geqslant max(n_0,n_1),$
$f_{i(F^n)}\left(C^n\right)=
f_{i\left(\check{F}\right)}\left(\check{C}\right)$, and then
$f_{i(F^n)}\left(C^n\right)$ $\rightarrow$
$f_{i\left(\check{F}\right)}\left(\check{C}\right)$.
On the other hand, $\sigma(F^n) \rightarrow \sigma(\check{F})$.
Indeed, $F^n \rightarrow \check{F}$ implies $\sum_{k=0}^{\infty}
\frac{| \left(F^n\right)^k-\check{F}^k |}{10^{k+1}} \rightarrow 0$,
from which we obtain $\frac{1}{10} \sum_{k=0}^{\infty} \frac{|
\left(F^n\right)^{k+1}-\check{F}^{k+1} |}{10^{k+1}} \rightarrow 0$,
so $\sum_{k=0}^{\infty} \frac{| \sigma(F^n)^k-\sigma(\check{F})^k
|}{10^{k+1}}$ converges towards 0. Finally, $\sigma(F^n) \rightarrow
\sigma(\check{F})$.
Since we have shown that $f_{i(F^n)}\left(C^n\right)$ $\rightarrow$
$f_{i\left(\check{F}\right)}\left(\check{C}\right)$ and $\sigma(F^n)
\rightarrow \sigma(\check{F})$, we conclude that
$G\left(X^n\right) \rightarrow G(X)$.
\end{proof}
It is now possible to study the chaotic behavior of the folding
process.
\subsection{A First Fundamental Lemma}
Let us start by introducing the following fundamental lemma, meaning
that we can transform any acceptable conformation into any other one in
$SAW_3$, by finding a relevant folding sequence.
\begin{lemma}
\label{lem}
$\forall C,C'$ in $\mathfrak{C}_\mathsf{N},$ $\exists n \in
\mathds{N}^*$ and $k_1, \hdots, k_n$ in $\llbracket -\mathsf{N};
\mathsf{N} \rrbracket$ s.t.
$$G^n\left((C,(k_1, \hdots,
k_n,0,\hdots))\right) = \left(C',(0,\hdots,0)\right).$$
\end{lemma}
\begin{proof}
As we consider conformations of $\mathfrak{C}_\mathsf{N}$, we place ourselves within the $SAW_3$ requirement, and thus there exist \linebreak $n_1, n_2 \in \mathds{N}^*$ and $l_1, \hdots, l_{n_1}, m_1, \hdots, m_{n_2}$ in $\llbracket -\mathsf{N}; \mathsf{N} \rrbracket$ such that $C = G^{n_1}\left(((0,...,0),(l_1, \hdots, l_{n_1}))\right)_0$ and \linebreak $C' = G^{n_2}\left(((0,...,0),(m_1, \hdots, m_{n_2}))\right)_0$.
The result of the lemma is then obtained with $$(k_1, \hdots, k_n) = (-l_{n_1},-l_{n_1-1},\hdots,-l_1,m_1,\hdots, m_{n_2}).$$
\end{proof}
\subsection{Regularity and Transitivity}
Let us recall that the first component $X_0$ of $X=(C,F)$ is the
current conformation $C$ of the protein and the second component $X_1$
is its future folding process $F$. We will now prove that,
\begin{proposition}
The folding process in the 2D model is regular.
\end{proposition}
\begin{proof}
Let $X=(C,F) \in \mathcal{X}$ and $\varepsilon > 0$.
Then we define $k_0=-\lfloor \log_{10} (\varepsilon) \rfloor$ and $\tilde{X}$ such that:
\begin{enumerate}
\item $\tilde{X}_0 = C$,
\item $\forall k \leqslant k_0, G^k(\tilde{X})_1 = G^k(X)_1$,
\item $\forall i \in \llbracket 1; n \rrbracket,
G^{k_0+i}(\tilde{X})_1 = k_i$,
\item $\forall i \in \mathds{N}, G^{k_0+n+i+1}(\tilde{X})_1 =
G^i(\tilde{X})_1$,
\end{enumerate}
where $k_1, \hdots, k_n$ are integers given by Lemma \ref{lem} with $C=G^{k_0}(X)_0$ and $C'= X_0$.
Such an $\tilde{X}$ is a periodic point for $G$ in the ball $\mathcal{B}(X,\varepsilon)$.
Conditions (1) and (2) make $\tilde{X}$ $\varepsilon-$close to $X$,
condition (3) maps the conformation $G^{k_0}(\tilde{X})_0$ into $C$ in at most $n$ foldings,
and condition (4) ensures the periodicity of the folding process.
\end{proof}
Let us now consider the second property required in Devaney's
definition. Instead of proving the transitivity of $G$, we will
establish its strong transitivity:
\begin{definition}
A dynamical system $\left( \mathcal{X}, f\right)$ is strongly
transitive if $\forall x,y \in \mathcal{X},$ $\forall r > 0,$ $\exists
z \in \mathcal{X},$ $d(z,x) \leqslant r$ and $\exists n \in
\mathds{N}^*,$ $f^n(z)=y$.
\end{definition}
In other words, for all $x,y \in \mathcal{X}$, it is possible to find
a point $z$ arbitrarily close to $x$ such that some iterate $f^n(z)$
equals $y$.
Obviously, strong transitivity implies transitivity. Let us now prove
that,
\begin{proposition}
The folding process in the 2D model is strongly transitive.
\end{proposition}
\begin{proof}
Let $X_A=(C_A,F_A)$, $X_B=(C_B, F_B)$, and $\varepsilon > 0$. We will
show that $X \in \mathcal{B}\left(X_A, \varepsilon\right)$ and $n \in
\mathds{N}$ can be found such that $G^n(X)=X_B$. Let $k_0 = - \lfloor
\log_{10} (\varepsilon ) \rfloor$ and $\check{X}=G^{k_0}(C_A,F_A)$,
denoted by $\check{X}=(\check{C},\check{F})$. According to Lemma
\ref{lem} applied to $\check{C}$ and $C_B$, $\exists k_1, \hdots, k_n$
in $\llbracket -\mathsf{N}, \mathsf{N} \rrbracket$ such
that $$G^n\left((\check{C}, (k_1, \hdots, k_n,0,\hdots))\right) =
\left(C_B, (0, \hdots )\right).$$
Let us define $X=(C,F)$ in the following way:
\begin{enumerate}
\item $C=C_A$,
\item $\forall k \leqslant k_0, F^k=F_A^k$,
\item $\forall i \in \llbracket 1; n \rrbracket, F^{k_0+i} =
k_i$,
\item $\forall i \in \mathds{N}, F^{k_0+n+i+1}=F_B^i$.
\end{enumerate}
This point $X$ is thus an element of $\mathcal{B}(X_A,\varepsilon)$
(due to (1) and (2)) such that $G^{k_0+n+1}(X) = X_B$ (by (3) and
(4)). As a consequence, $G$ is strongly transitive.
\end{proof}
Strong transitivity states that being as close as possible to the true
folding process (2D model) is not a guarantee of success. Indeed, let
$P$ be a protein under interest and $F$ its natural folding process in
the 2D model. Then, for any possible conformation $C$ of the square
lattice, there exists a folding sequence $\check{F}$ very close to $F$
leading to $C$. More precisely, for any $\varepsilon > 0$ (as small
as desired), an infinite number of folding sequences are in
$\mathcal{B}_{d_F}(F,\varepsilon)$ and lead to $C$. The strong
transitivity property implies that without the knowledge of the
\emph{exact} initial condition (the natural folding process, and thus
the exact free energy), all conformations are possible.
Additionally, no conformation of the square lattice can be discarded
when studying a protein folding in the 2D HP square lattice model: the
dynamical system obtained by such a formalization is intrinsically
complicated and cannot be decomposed or simplified. Furthermore, this
trend to visit the whole space of acceptable conformations is
counteracted by the elements of regularity stated before: it is even
impossible to draw a qualitative description of the dynamics
in the 2D square lattice model, as two points close to each other can
have fundamentally different behaviors.
\subsection{Chaotic behavior of the folding process}
As $G$ is regular and (strongly) transitive, we have:
\begin{theorem}
\label{leth}
The folding process $G$ in the 2D model is chaotic according to Devaney.
\end{theorem}
Consequently this process is highly sensitive to its initial
conditions. If the 2D model can accurately describe the natural
process, then Theorem~\ref{leth} implies that even a minute difference
in an intermediate conformation of the protein, in the forces that act
in the folding process, or in the position of an atom, can lead to
enormous differences in its final conformation, even over fairly small
timescales. This is the so-called butterfly effect. In particular,
it seems very difficult to predict, in this 2D model, the structure of
a given protein by using the knowledge of the structure of similar
proteins. Let us remark that the whole 3D folding process with real
torsion angles is obviously more complex than this 2D HP model. And
finally, recall that chaos refers to our incapacity to make good
predictions; it does not mean that the biological process is a random one.
Before studying some practical aspects of this unpredictability in
Section \ref{Sec:Consequences}, we will initiate a second proof of the
chaotic behavior of this process and deepen its chaotic properties.
\section{Outline of a Second Proof}
\label{sec:CI=chaos}
In this section a second proof of the chaotic behavior of the protein
folding process is given. It is proven that the folding dynamics can
be modeled as chaotic iterations (CIs). CIs are a tool used in
distributed computing and in the computer science security field
\cite{guyeuxTaiwan10}, and they have been established to be chaotic
according to Devaney \cite{guyeux10}.
This second proof is the occasion to introduce these CIs, which will
be used at the end of this paper to study whether a chaotic behavior
is really more difficult to learn with a neural network than a
``normal'' behavior.
\subsection{Chaotic Iterations: Basic Recalls}
Let us consider a \emph{system} with a finite number $\mathsf{N} \in
\mathds{N}^*$ of elements (or \emph{cells}), so that each cell has a
Boolean \emph{state}. A sequence of length $\mathsf{N}$ of Boolean
states of the cells corresponds to a particular \emph{state of the
system}. A sequence whose elements are subsets of $\llbracket
1;\mathsf{N} \rrbracket$ is called a \emph{strategy}. The set of all
strategies is denoted by $\mathbb{S}$, and $\mathds{B}$ stands for
the set of Booleans $\{0,1\}$.
\begin{definition}
\label{Def:chaotic iterations}
Let $f:\mathds{B}^{\mathsf{N}}\longrightarrow \mathds{B}^{\mathsf{N}}$
be a function and $S\in \mathbb{S}$ be a strategy. The so-called
\emph{chaotic iterations} (CIs) are defined by $x^0\in
\mathds{B}^{\mathsf{N}}$ and $\forall n\in \mathds{N}^{\ast }, \forall i\in
\llbracket1;\mathsf{N}\rrbracket ,$
$$
x_i^n=\left\{
\begin{array}{ll}
x_i^{n-1} & \text{ if } i \notin S^n \\
\left(f(x^{n-1})\right)_i & \text{ if } i \in S^n.
\end{array}\right.
$$
\end{definition}
In other words, at the $n^{th}$ iteration, only the cells whose
indices belong to $S^{n}$ are ``iterated''. Let us notice that the
term ``chao\-tic'', in the name of these iterations, has \emph{a
priori} no link with the mathematical theory of chaos recalled
previously. We will now recall that CIs can be written as a dynamical
system, and characterize the functions $f$ whose CIs are chaotic
according to Devaney \cite{guyeux09}.
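For the sake of illustration, the following Python sketch computes the successive states of such chaotic iterations, here with the vectorial negation as update function and an arbitrary strategy of subsets.
\begin{verbatim}
def chaotic_iterations(f, x0, S):
    # At step n, only the cells whose indices are in S[n] are
    # updated with f; the other cells keep their value.
    x = list(x0)
    for subset in S:
        fx = f(x)
        for i in subset:
            x[i - 1] = fx[i - 1]   # cells are numbered from 1 to N
        yield tuple(x)

neg = lambda x: [1 - b for b in x]   # vectorial negation on B^3
for state in chaotic_iterations(neg, (0, 0, 0), [{1}, {2, 3}, {1, 3}]):
    print(state)   # (1, 0, 0) then (1, 1, 1) then (0, 1, 0)
\end{verbatim}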
\subsection{CIs and Devaney's chaos}
Let
$f: \mathds{B}^{\mathsf{N}}\longrightarrow \mathds{B}^{\mathsf{N}}$.
We define $F_{f}:$
$\llbracket1;\mathsf{N}\rrbracket\times \mathds{B}^{\mathsf{N}}\longrightarrow
\mathds{B}^{\mathsf{N}}$
by:
$$
F_{f}(k,E)=\left( E_{j} \cdot\delta (k,j)+f(E)_{k} \cdot
\overline{\delta (k,j)}\right)_{j\in \llbracket1;\mathsf{N}\rrbracket},
$$
where $+$ and $\cdot$ are the Boolean addition and product operations,
and $\overline{x}$ denotes the negation of $x$.
We have proven in \cite{guyeux09} that chaotic iterations can be
described by the following dynamical system:
$$
\left\{
\begin{array}{l}
X^{0}\in \tilde{\mathcal{X}} \\
X^{k+1}=\tilde{G}_{f}(X^{k}),
\end{array}
\right.
$$
where $\tilde{G}_{f}\left((S,E)\right) =\left( \sigma
(S),F_{f}(i(S),E)\right)$, and $\tilde{\mathcal{X}}$ is a metric space
endowed with an ad hoc distance for which $\tilde{G}_f$ is continuous on
$\tilde{\mathcal{X}}$ \cite{guyeux09}.
Let us now consider the following oriented graph, called the \emph{graph
of iterations}. Its vertices are the elements of
$\mathds{B}^\mathsf{N}$, and there is an arc from $x = (x_1, \hdots,
x_i, \hdots, x_\mathsf{N}) \in \mathds{B}^\mathsf{N}$ to $(x_1,
\hdots, \overline{x_i}, \hdots, x_\mathsf{N})$ if and only if
$F_f(i,x) = (x_1, \hdots, \overline{x_i}, \hdots, x_\mathsf{N})$. If
so, the label of the arc is $i$. In what follows, this graph of
iterations will be denoted by $\Gamma(f)$.
We have proven in \cite{bcgr11:ip} that:
\begin{theorem}
\label{Th:Caracterisation des IC chaotiques}
The functions $f : \mathds{B}^{\mathsf{N}} \to \mathds{B}^{\mathsf{N}}$
such that $\tilde{G}_f$ is chaotic according to Devaney are exactly
the functions whose graph of iterations $\Gamma(f)$ is strongly connected.
\end{theorem}
We will now show that the protein folding process can be modeled as
chaotic iterations, and conclude the proof by using the theorem
recalled above.
\subsection{Protein Folding as Chaotic Iterations}
The attempt to use chaotic iterations in order to model protein
folding can be justified as follows. At each iteration, the same
process is applied to the system (\emph{i.e.}, to the conformation),
that is the folding operation. Additionally, it is not a necessity
that all of the residues fold at each iteration: indeed it is possible
that, at a given iteration, only some of these residues folds. Such
iterations, where not all the cells of the considered system are to be
updated, are exactly the iterations modeled by CIs.
Indeed, the protein folding process with folding sequence $(F^n)_{n
\in \mathds{N}}$ consists in the following chaotic iterations: $C^0 =
(0,0, \hdots, 0)$ and,
$$
C_{|i|}^{n+1} = \left\{
\begin{array}{ll}
C_{|i|}^n & \textrm{if } i \notin S^n,\\
f^{sign(i)}\left(C_{|i|}^n\right) & \textrm{else},
\end{array}
\right.
$$
where the chaotic strategy is defined by: $\forall n \in \mathds{N}$,
$S^n = \emptyset$ if $F^n = 0$, and \linebreak
$S^n=\left\{ sign(F^n)\, k ~:~ k \in \llbracket |F^n|; \mathsf{N} \rrbracket \right\}$
otherwise. In other words, at time $n$ the residues from $|F^n|$ to
$\mathsf{N}$ are updated by $f^{sign(F^n)}$, while the other ones are
left unchanged.
Thus, proving that the protein folding process is chaotic as defined
by Devaney is equivalent to proving that the graph of iterations of the
CIs defined above is strongly connected. This last fact is obvious, as
it is always possible to find a folding process that maps any
conformation $(C_1, \hdots, C_\mathsf{N}) \in \mathfrak{C}_\mathsf{N}$
to any other one \linebreak $(C_1', \hdots, C_\mathsf{N}') \in
\mathfrak{C}_\mathsf{N}$ (this is Lemma \ref{lem}).
Let us finally remark that it is easy to study, using CIs, processes
such that more than one fold occurs per time unit. This point will be
deepened in a future work. We will now investigate some consequences
resulting from the chaotic behavior of the folding process.
\section{Qualitative and Quantitative Evaluations}
Behaviors qualified as ``chaotic'' are too complicated to be encompassed
by a single rigorous definition, however well crafted it may be. Indeed,
the mathematical theory of chaos brings several nonequivalent
definitions of a complex, unpredictable dynamical system, each of
them highlighting this complexity in a well-defined but restricted
understanding. This is why, in this section, we continue the
evaluation of the chaotic behavior of the 2D folding dynamical system
initiated by the proof of Devaney's chaos.
\subsection{Qualitative study}
\label{QUALITATIVE MEASURE}
First of all, the transitivity property implies the indecomposability
of the system:
\begin{definition}
A dynamical system $\left( \mathcal{X}, f\right)$ is indecomposable if
it is not the union of two closed sets $A, B \subset \mathcal{X}$ such
that $f(A) \subset A, f(B) \subset B$.
\end{definition}
Thus it is impossible to reduce, in the 2D model, the set of protein
foldings in order to simplify its complexity. Furthermore, the
folding process has the instability property:
\begin{definition}
A dynamical system $\left( \mathcal{X}, f\right)$ is unstable if for
all $x \in \mathcal{X}$, the orbit $\gamma_x:n \in \mathds{N}
\longmapsto f^n(x)$ is unstable, that is: $\exists \varepsilon > 0,$
$\forall \delta > 0,$ $\exists y \in \mathcal{X},$ $\exists n \in
\mathds{N},$ $d(x,y) < \delta$ and $d\left(\gamma_x(n),
\gamma_y(n)\right) \geqslant \varepsilon.$
\end{definition}
This property, which is implied by sensitive dependence on initial
conditions, leads to the fact that in any neighborhood
of any point $x$, there are points that separate from $x$ under
iterations of $f$. We thus can claim that the behavior of the folding
process is unstable.
\subsection{Quantitative measures}
\label{QUANTITATIVE MEASURE}
\label{par:Sensitivity}
One of the most famous measures in the theory of chaos is the constant
of sensitivity given in Definition \ref{sensitivity}. Intuitively, a
function $f$ having a constant of sensitivity equal to $\delta$
implies that there exist points arbitrarily close to any point $x$
that \emph{eventually} separate from $x$ by at least $\delta$ under
some iterations of $f$. This induces that an arbitrarily small error
on an initial condition \emph{may} be magnified upon iterations of
$f$. The sensitive dependence on initial conditions is a
consequence of regularity and transitivity in a metric
space~\cite{Banks92}. However, the constant of sensitivity $\delta$
can be obtained by proving the property directly, without using Banks' theorem.
\begin{proposition}
The folding process in the 2D model has sensitive dependence on initial
conditions on $(\mathcal{X},d)$, and its constant of sensitivity is at
least equal to $2^{\mathsf{N}-1}$.
\end{proposition}
\begin{proof}
Let $X = (C,F) \in \mathcal{X}$, $r>0$, $B =
\mathcal{B}\left(X,r\right)$ an open ball centered in $X$, and $k_0
\in \mathds{Z}$ such that $10^{-k_0-1} \leqslant r < 10^{-k_0}$. We
define $\tilde{X}$ by:
\begin{itemize}
\item $\tilde{C} = C$,
\item $\tilde{F}^k = F^k$, $\forall k \in \mathds{N}$ such that $k
\leqslant k_0$,
\item $\tilde{F}^{k_0+1} = 1$ if $\left|F^{k_0+1}\right| \neq 1$, and
$\tilde{F}^{k_0+1} = - F^{k_0+1}$ else.
\item $\forall k \geqslant k_0+2, \tilde{F}^{k} = - F^{k}$.
\end{itemize}
These two points have the same conformation and their folding
sequences agree on all the terms of index at most $k_0$, so that
$d(X,\tilde{X}) \leqslant 10^{-(k_0+1)} \leqslant r$: $\tilde{X}$
belongs to $B$. Moreover, after $k_0+2$ iterations the folds encoded
by $F^{k_0+1}$ and $\tilde{F}^{k_0+1}$ have been applied, and only two
cases can occur:
\begin{enumerate}
\item If $\left|F^{k_0+1}\right| \neq 1$, then exactly one of the two
conformations $G^{k_0+2}(X)_0$ and $G^{k_0+2}(\tilde{X})_0$ has had
its first component rotated, so these conformations differ at least
in their first component.
\item Else $\left|F^{k_0+1}\right| = 1$ and $\tilde{F}^{k_0+1} =
-F^{k_0+1}$: the first components have been rotated in opposite
directions, and thus differ too.
\end{enumerate}
In both cases,
$$d\left( G^{k_0+2}\left(X\right), G^{k_0+2}\left(\tilde{X}\right)\right)
\geqslant d_C\left( G^{k_0+2}(X)_0, G^{k_0+2}(\tilde{X})_0\right)
\geqslant 2^{\mathsf{N}-1},$$
so the sensitivity to the initial condition is at least $2^{\mathsf{N}-1}$.
\end{proof}
Let us now recall another common quantitative measure of disorder of a dynamical system.
\begin{definition}
A function $f$ is said to have the property of \emph{expansivity} if
$$
\exists \varepsilon >0,~\forall x\neq y,~\exists n\in \mathds{N},~ d(f^{n}(x),f^{n}(y))\geqslant \varepsilon .
$$
\end{definition}
Then $\varepsilon$ is the \emph{constant of expansivity} of $f$: an
arbitrarily small error on any initial condition is \emph{always}
amplified up to $\varepsilon$.
\begin{proposition}
The folding process in the 2D model is an expansive chaotic system on
$(\mathcal{X},d)$. Its constant of expansivity is at least equal to 1.
\end{proposition}
\begin{proof}
Let $X=(C,F)$ and $X'=(C',F')$ such that $X \neq X'$.
\begin{itemize}
\item If $C \neq C'$, then $\exists k_0 \in \llbracket 1;\mathsf{N}
\rrbracket, C_{k_0} \neq C_{k_0}'$. So, $$d\left( G^0(X) , G^0(X')
\right) \geqslant 2^{\mathsf{N}-k_0} \geqslant 1.$$
\item Else $F' \neq F$. Let $k_0 = \min \left\{ k \in \mathds{N} ~:~
F^k \neq F'^k \right\}.$ The first $k_0$ folds of the two processes
coincide, so the conformations of $G^{k_0}(X)$ and $G^{k_0}(X')$ are
equal; let $\check{C} = (\check{C}_1, \hdots, \check{C}_\mathsf{N})$
denote this common conformation.
\begin{flushleft}
Then $d\left( G^{k_0+1}\left(X\right), G^{k_0+1}\left(X'\right)\right)$
\end{flushleft}
\vspace{-0.5cm}
\begin{flushright}
$\begin{array}{l} \geqslant d_C\left( f_{F^{k_0}}\left(\check{C}_1,
\hdots, \check{C}_\mathsf{N}\right), f_{F'^{k_0}}\left(\check{C}_1,
\hdots, \check{C}_\mathsf{N}\right)\right)\\ \geqslant d_C\left(
\left(\check{C}_1, \hdots,\check{C}_{\left|F^{k_0} \right|-1},
f^{sign(F^{k_0})}\left(\check{C}_{\left|F^{k_0}\right|}\right),
\hdots,\right.\right. \\
\left.f^{sign(F^{k_0})}\left(\check{C}_{\mathsf{N}}\right)\right),
\left(\check{C}_1, \hdots,\check{C}_{\left|F'^{k_0}
\right|-1},\right.\\
\left.\left.f^{sign(F'^{k_0})}\left(\check{C}_{\left|F'^{k_0}\right|}\right),
\hdots,
f^{sign(F'^{k_0})}\left(\check{C}_{\mathsf{N}}\right)\right)\right)\\
\geqslant 2^{\mathsf{N}-max\left(\left|F^{k_0}\right|,
\left|F'^{k_0}\right|\right)} \\ \geqslant 1.
\end{array}$
\end{flushright}
\end{itemize}
\end{proof}
\section{Consequences}
\label{Sec:Consequences}
\subsection{Is a Chaotic Behavior Incompatible with Approximately One Thousand Folds?}
Results established previously only concern the folding process in the
2D HP square lattice model. At this point, it is natural to wonder
whether such a model, being a reasonable approximation of the true
natural process, is chaotic because the natural process is itself
chaotic. Indeed, claiming that the natural protein folding process is
chaotic seems to contradict the fact that only approximately one
thousand folds have been discovered over the last decade
\cite{Andreeva01012004}. The number of proteins whose 3D structure is
understood increases markedly year after year. However, the number of
new categories of folds seems to be bounded by a fixed value
approximately equal to one thousand. In fact, there is no
contradiction, as a chaotic behavior does not forbid a certain form of
order. As stated before, chaos only refers to limitations in
prediction. For example, seasons are not forbidden even though the
weather has a chaotic behavior that makes forecasting difficult. A
similar regularity appears in brains: even if randomness and chaos
play an important role on a microscopic scale, a statistical order
appears in the neural network.
That is, a certain order can emerge from a chaotic behavior, even if
this is not a general rule. More precisely, in our opinion these
thousand folds can be related to basins of attraction or strange
attractors of the dynamical system, objects that are well described by
the mathematical theory of chaos. Thus, it should be possible to
determine all of the folds that can occur, by refining our model and
looking for its basins of attraction with topological tools.
However, this assumption still remains to be investigated.
\subsection{Is Artificial Intelligence Able to Predict Chaotic Dynamics?}
We will now focus on the impact of using a chaotic model for
prediction. We give some results on two kinds of experiments, both
using neural networks. Firstly, we will study whether a
(mathematical) chaotic behavior can be learned by a neural network or
not. Therefore, we design a global recurrent network that models the
function $F_f$ introduced in the previous section and we show that it
is more difficult to train the network when $f$ is chaotic. These
considerations have been formerly proposed in \cite{bgs11:ip} and are
extended here. Secondly, we will try to learn the future conformation
of proteins that consist of a small number of residues. Our objective
is to assess if a neural network can learn the future conformation
given the current one and a sequence of a few folds.
In this work, we choose to train a classical neural network
architecture: the MultiLayer Perceptron, a model of network widely
used and well-known for its universal approximation property
\cite{DBLP:journals/nn/HornikSW89}. Let us notice that for the first
kind of experiments global feedback connections are added, in order to
have a proper modeling of chaotic iterations, while for the latter
kind of experiments the MLPs used are feed-forward ones. In both cases
we consider networks having sigmoidal hidden neurons and output
neurons with a linear activation function. They are trained using the
Limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) quasi-Newton
algorithm with Wolfe line search. The training process can either be
controlled by the number of network parameters (weights and biases)
updates, also called epochs, or by a mean square error criterion.
\subsubsection{Can a Neural Network Learn Chaotic Functions?}
\smallskip
{\it Experimental Protocol}
\smallskip
We consider $f:\mathds{B}^\mathsf{N} \longrightarrow
\mathds{B}^\mathsf{N}$, strategies made of singletons ($\forall n \in
\mathds{N}, S^n \in \llbracket 1; \mathsf{N} \rrbracket$), and an MLP
that recognizes $F_{f}$. That means that, for all $(k,x) \in \llbracket 1 ;
\mathsf{N} \rrbracket \times \mathds{B}^\mathsf{N}$, the response of
the output layer to the input $(k,x)$ is $F_{f}(k,x)$. We thus
connect the output layer to the input one as depicted in
Fig.~\ref{perceptron}, leading to a global recurrent artificial neural
network working as follows \cite{bgs11:ip}.
\begin{figure}[b]
\centering
\includegraphics[width=3.25in]{perceptron.pdf}
\caption{Recurrent neural network modeling $F_{f}$}
\label{perceptron}
\end{figure}
At the initialization stage, the network receives a Boolean vector
$x^0\in\mathds{B}^\mathsf{N}$ as input state, and $S^0 \in \llbracket
1;\mathsf{N}\rrbracket$ in its input integer channel $i()$. Thus, $x^1
= F_{f}(S^0, x^0)\in\mathds{B}^\mathsf{N}$ is computed by the neural
network. This state $x^1$ is published as an output. Additionally,
$x^1$ is sent back to the input layer, to act as Boolean state in the
next iteration. Finally, at iteration number $n$, the recurrent neural
network receives the state $x^n\in\mathds{B}^\mathsf{N}$ from its
output layer and $i\left(S^n\right) \in \llbracket
1;\mathsf{N}\rrbracket$ from its input integer channel $i()$. It can
thus calculate $x^{n+1} = F_{f}(i\left(S^n\right),
x^n)\in\mathds{B}^\mathsf{N}$, which will be the new output of the
network. Obviously, this particular MLP produces exactly the same
values as CIs with update function $f$. That is, such MLPs are
equivalent, when working with $i(S)$, to CIs having $f$ as update
function and $S$ as strategy \cite{bgs11:ip}.
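For illustration purposes, the following Python sketch builds the exhaustive training set of such a network for the function $F_f$, here with the vectorial negation as update function, and trains a one hidden layer MLP on it with the scikit-learn library. It is only a simplified analogue of the setting described above: in particular, the global feedback connections are not implemented, the network being simply fed with explicit $(k,x)$ pairs.
\begin{verbatim}
from itertools import product
from sklearn.neural_network import MLPRegressor

N = 3
neg = lambda x: [1 - b for b in x]    # vectorial negation

def F(k, x):
    # F_f: only cell k is updated, the other cells are kept.
    return [neg(x)[i] if i == k - 1 else x[i] for i in range(N)]

# Exhaustive training set: all pairs (k, x) and their images.
inputs  = [(k,) + x for k in range(1, N + 1)
                    for x in product((0, 1), repeat=N)]
targets = [F(s[0], list(s[1:])) for s in inputs]

net = MLPRegressor(hidden_layer_sizes=(8,), activation='logistic',
                   solver='lbfgs', max_iter=1000)
net.fit(inputs, targets)
print(net.predict([inputs[0]]).round())  # compare with targets[0]
\end{verbatim}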
Let us now introduce the two following functions:
\begin{itemize}
\item $f_1(x_{1},x_2,x_3)=(\overline{x_{1}},\overline{x_{2}},\overline{x_{3}})$,
\item $f_2(x_{1},x_2,x_3)=(\overline{x_{1}},x_{1},x_{2})$.
\end{itemize}
It can easily be checked that these functions satisfy the hypothesis
of Theorem \ref{Th:Caracterisation des IC chaotiques}, thus their CIs
are chaotic according to Devaney. Then, when the MLP defined
previously learns to recognize $F_{f_1}$ or $F_{f_2}$, it tries to
learn these CIs, that is, a chaotic behavior as defined by Devaney
\cite{bgs11:ip}. On the contrary, the function
$$
g(x_{1},x_2,x_3)=(\overline{x_{1}},x_{2},x_{3})
$$ is such that $\Gamma(g)$ is not strongly connected. In this case,
due to Theorem \ref{Th:Caracterisation des IC chaotiques}, the MLP
does not learn a chaotic process. We will now recall the study of the
training process of functions $F_{f_1}$, $F_{f_2}$, and $F_{g}$
\cite{bgs11:ip}, that is to say, the ability to learn one iteration of
CIs.
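As an illustration, one iteration of such CIs can be simulated directly
(a sketch assuming the standard definition of $F_{f}$, in which only
the component selected by the strategy is updated):
\begin{verbatim}
# Assumption: F_f(k, x)_i = x_i for i != k, and f(x)_k for i = k.
f1 = lambda x: [1 - x[0], 1 - x[1], 1 - x[2]]
f2 = lambda x: [1 - x[0], x[0], x[1]]
g  = lambda x: [1 - x[0], x[1], x[2]]

def F(f, k, x):
    y = list(x)
    y[k - 1] = f(x)[k - 1]   # k is 1-based, as in the text
    return y

x = [0, 1, 0]
for s in [1, 3, 2, 1]:       # a strategy of singletons
    x = F(f1, s, x)          # the orbit a trained recurrent MLP reproduces
print(x)
\end{verbatim}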
\medskip
\noindent {\it Experimental Results}
\smallskip
For each function we have considered MLP architectures with one and
two hidden layers and, in the one-layer case, different numbers of
hidden neurons. Thus we have different versions of a neural network
modeling the same iteration function \cite{bgs11:ip}. Only the
size and number of hidden layers may change, since the numbers of
inputs and output neurons are fully specified by the function. The
training is performed until the learning error is lower than a chosen
threshold value ($10^{-2}$).
\begin{table}[!t]
\renewcommand{\arraystretch}{1.3}
\caption{Results of some iteration functions learning, using different
recurrent MLP architectures}
\label{tab1}
\centering
\begin{scriptsize}
\begin{tabular}{|c||c|c|c|c|}
\hline
& \multicolumn{4}{c|}{One hidden layer} \\
\cline{2-5}
& \multicolumn{2}{c|}{8 neurons} & \multicolumn{2}{c|}{10 neurons} \\
\hline
Function & Mean & Success & Mean & Success \\
& epoch & rate & epoch & rate \\
\hline
$f_1$ & 82.21 & 100\% & 73.44 & 100\% \\
$f_2$ & 76.88 & 100\% & 59.84 & 100\% \\
$g$ & 36.24 & 100\% & 37.04 & 100\% \\
\hline
\hline
& \multicolumn{4}{c|}{Two hidden layers: 8 and 4 neurons} \\
\cline{2-5}
& \multicolumn{2}{c|}{Mean epoch number} & \multicolumn{2}{|c|}{Success rate} \\
\hline
$f_1$ & \multicolumn{2}{c|}{203.68} & \multicolumn{2}{c|}{76\%} \\
$f_2$ & \multicolumn{2}{c|}{135.54} & \multicolumn{2}{c|}{96\%} \\
$g$ & \multicolumn{2}{c|}{72.56} & \multicolumn{2}{c|}{100\%} \\
\hline
\end{tabular}
\end{scriptsize}
\end{table}
Table~\ref{tab1} gives, for each considered neural network, the mean
number of epochs needed to learn one iteration of the CIs, and a
success rate that reflects a successful training in less than 1000
epochs. Both values are computed over 25~trainings with random weight
and bias initialization. These results highlight several
points \cite{bgs11:ip}. First, the two hidden layer structure seems to
be quite inadequate for learning chaotic behaviors. Second, training
networks so that they behave chaotically seems to be difficult even
for these simple functions iterated only once, since they need on
average more epochs to be correctly trained. In the case of the two
hidden layer network topology, a comparison of the mean number of
epochs needed for a successful learning of 10~chaotic functions with
that obtained for 10~non-chaotic functions reinforces the previous
observation. Indeed, learning chaotic functions requires on average
284.57~epochs, whereas non-chaotic functions require 232.87~epochs.
In the future, we also plan to consider larger values
for~$\mathsf{N}$.
\subsubsection{Can a Neural Network Predict a Future Protein Conformation?}
\smallskip
{\it Experimental Protocol}
\smallskip
In this second set of experiments, multilayer perceptrons are used to
learn the conformation of very simple proteins (peptides, indeed). In
fact, we consider proteins composed of five residues, of which only 4
can change since the first one is always $0$, and folding dynamics of
two or three folds. For example, if the current protein conformation
is $(0)1222$, and folds $4$ and $-1$ are successively applied, then
the new conformation will be $(0)0112$. Obviously, these choices,
which lead respectively to 20736 and 186624 potential conformations,
do not correspond to realistic folding processes. However, they allow
us to evaluate the ability of neural networks to learn very simple
conformations.
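The fold operation can be made concrete with a short sketch; the
absolute encoding below, in which a fold $\pm k$ shifts every
changeable residue from position $k$ onward by $\pm1$ modulo~$4$, is
our reading of the example above:
\begin{verbatim}
def fold(conf, k):
    """Apply fold +/-k to the changeable residues (positions 1..4)."""
    s = 1 if k > 0 else -1
    return [(c + s) % 4 if i >= abs(k) - 1 else c
            for i, c in enumerate(conf)]

conf = [1, 2, 2, 2]          # (0)1222 without the fixed leading residue
for k in (4, -1):            # folds 4 and -1, successively
    conf = fold(conf, k)
print(conf)                  # [0, 1, 1, 2], i.e. (0)0112 as above
\end{verbatim}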
The networks are MLPs with 3 or 4~inputs: the current conformation
without the first residue, and a sequence of 2 or 3~successive folds.
They produce a single output: the resulting
conformation. Additionally, we slightly change the classical MLP
structure in order to improve the capacity of such neural networks to
model nonlinear relationships and to be trained faster. Therefore, we
retain the HPU (Higher-order Processing Unit) structure
\cite{DBLP:journals/ijns/GhoshS92}. This latter artificially increases
the number of inputs by adding polynomial combinations of the initial
inputs up to a given degree, called the order of the network. To
prevent overfitting and to assess the generalization performance we
use holdout validation, which means that the data set is split into
learning, validation, and test subsets. These subsets are obtained
through a random sampling strategy.
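Incidentally, the HPU expansion of order~3 coincides with taking all
monomials of the initial inputs up to degree~3, which reproduces the
input counts used below (sketch based on \texttt{scikit-learn}, an
illustrative library choice):
\begin{verbatim}
from sklearn.preprocessing import PolynomialFeatures

hpu = PolynomialFeatures(degree=3, include_bias=False)
print(hpu.fit_transform([[1., 2., 3.]]).shape[1])      # 19 inputs from 3
print(hpu.fit_transform([[1., 2., 3., 4.]]).shape[1])  # 34 inputs from 4
\end{verbatim}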
To estimate the prediction accuracy we use the coefficient of
variation of the root mean square error (CVRMSE), usually presented
as a percentage, the average relative variance (ARV), and the
coefficient of efficiency, denoted E. These measures give a good
estimation of the capacity of a neural network to explain the total
variance of the data. The CVRMSE of the prediction is defined as:
$$
\mbox{CVRMSE}=\frac{100}{\overline{e_k}} \cdot
\sqrt{\frac{\sum_{k=1}^N \left(e_k-p_k\right)^2}{N}},
$$
where $e_k$ is the expected output for the $k$-th input-output pair,
$p_k$ is the predicted output, $N$ is the number of pairs in the test
set, and $\overline{e_k}$ is the mean value of the expected output.
The average relative variance and coefficient of efficiency are
respectively expressed by:
$$
\mbox{ARV}=\frac{\sum_{k=1}^N \left(e_k-p_k\right)^2}
{\sum_{k=1}^N \left(e_k-\overline{e_k}\right)^2} \mbox{ and }
\mbox{E}=1-\mbox{ARV}.
$$
These values reflect the accuracy of the prediction as follows: the
closer CVRMSE and ARV are to $0$, and consequently the closer $E$ is
to $1$, the better the prediction.
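These measures are straightforward to compute, as in the following
sketch (the numerical values are illustrative):
\begin{verbatim}
import numpy as np

def cvrmse(e, p):            # in percent
    return 100.0 / np.mean(e) * np.sqrt(np.mean((e - p) ** 2))

def arv(e, p):               # coefficient of efficiency: E = 1 - ARV
    return np.sum((e - p) ** 2) / np.sum((e - np.mean(e)) ** 2)

e = np.array([2.0, 1.0, 3.0])   # expected outputs
p = np.array([2.1, 0.9, 2.8])   # predicted outputs
print(cvrmse(e, p), arv(e, p), 1.0 - arv(e, p))
\end{verbatim}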
\medskip
\noindent {\it Experimental Results}
\smallskip
The considered neural networks differ in the size of a single hidden
layer and are trained until a maximum number of epochs is reached. As
we set the order of the HPU structure to~$3$, instead of 3 and
4~initial inputs we have 19 and 34~inputs. We train neural networks of
15 and 25~hidden neurons, using as maximum number of epochs a value in
$\{500,1000,2500\}$, whatever the number of initial inputs (see
Table~\ref{tab2}). The learning, validation, and test subsets are
built in such a way that they respectively represent 65\%, 10\%, and
25\% of the whole data set. In the case of 3~initial inputs, data sets
of 5184, 10368, and 15552~samples are used; they represent 25\%, 50\%,
and 75\% of the 20736~potential conformations. For the 4~initial
inputs case we restrict our experiments to a data set of
46656~samples, which corresponds to 25\% of the 186624~potential
conformations.
In Table~\ref{tab2} we give, for the different learning setups and
data set sizes, the mean values of CVRMSE, ARV, and E for the
output. To compute these values, 10~trainings with random subset
construction and network parameter initialization were performed. It
can be seen that in all cases the best performances are obtained for
the larger networks (25~hidden neurons) that are trained the longest
(2500~epochs). Furthermore, a larger data set allows only a slight
increase of the prediction quality: tripling the data set, from 5184
to 15552~samples, yields a very small improvement of 1.66\% in the
coefficient~$E$.
At first glance, the prediction accuracy seems acceptable for the
3~initial inputs topology, with coefficients of efficiency above
0.9. However, remember that we try to predict very simple
conformations far removed from realistic ones. Furthermore, a look at
the results obtained for the second topology, the one with 4~initial
inputs, shows that predicting a conformation that undergoes only one
more folding transition is intractable with the considered learning
setups: the efficiency coefficient is always below 0.5. Clearly, the
different neural networks have failed at predicting the protein
folding dynamics. A larger neural network with a longer training
process might improve these results, but finding a learning setup
well suited for the prediction of relevant protein structures, which
are far more complex, seems very hypothetical.
\begin{table}[!b]
\renewcommand{\arraystretch}{1.3}
\caption{Results of the validation of networks with an HPU structure of
order~3 for several numbers of hidden neurons}
\label{tab2}
\centering
\begin{scriptsize}
\begin{tabular}{|c||c|c|c|c|}
\hline
Topology & \multicolumn{4}{c|}{3 initial~/~19 HPU inputs and 1 output} \\
\cline{2-5}
Hidden neurons & Epochs & \%~CVRMSE & ARV & E \\
\hline
& \multicolumn{4}{|c|}{Data set of 5184 samples} \\
\cline{2-5}
\multirow{3}{*}{15 neurons} & 500 & 24.97 & 0.3824 & 0.6176 \\
& 1000 & 20.67 & 0.2628 & 0.7372 \\
& 2500 & 16.69 & 0.1731 & 0.8269 \\
\cline{2-5}
\multirow{3}{*}{25 neurons} & 500 & 23.33 & 0.3373 & 0.6627 \\
& 1000 & 15.94 & 0.1565 & 0.8435 \\
& 2500 & 10.75 & 0.0715 & 0.9285 \\
\hline
& \multicolumn{4}{|c|}{Data set of 10368 samples} \\
\cline{2-5}
\multirow{3}{*}{15 neurons} & 500 & 26.27 & 0.4223 & 0.5777 \\
& 1000 & 22.08 & 0.3000 & 0.7000 \\
& 2500 & 18.81 & 0.2225 & 0.7775 \\
\cline{2-5}
\multirow{3}{*}{25 neurons} & 500 & 24.54 & 0.3685 & 0.6315 \\
& 1000 & 16.11 & 0.1591 & 0.8409 \\
& 2500 & 9.43 & 0.0560 & 0.9440 \\
\hline
& \multicolumn{4}{|c|}{Data set of 15552 samples} \\
\cline{2-5}
\multirow{3}{*}{15 neurons} & 500 & 24.74 & 0.3751 & 0.6249 \\
& 1000 & 19.92 & 0.2444 & 0.7556 \\
& 2500 & 16.35 & 0.1659 & 0.8341 \\
\cline{2-5}
\multirow{3}{*}{25 neurons} & 500 & 22.90 & 0.3247 & 0.6753 \\
& 1000 & 15.42 & 0.1467 & 0.8533 \\
& 2500 & 8.89 & 0.0501 & 0.9499 \\
\hline
\hline
Topology & \multicolumn{4}{c|}{4 initial~/~34 HPU inputs and 1 output} \\
\hline
& \multicolumn{4}{|c|}{Data set of 46656 samples} \\
\cline{2-5}
\multirow{3}{*}{15 neurons} & 500 & 35.27 & 0.7606 & 0.2394 \\
& 1000 & 33.50 & 0.6864 & 0.3136 \\
& 2500 & 31.94 & 0.6259 & 0.3741 \\
\cline{2-5}
\multirow{3}{*}{25 neurons} & 500 & 35.05 & 0.7535 & 0.2465 \\
& 1000 & 32.25 & 0.6385 & 0.3615 \\
& 2500 & 28.61 & 0.5044 & 0.4956 \\
\hline
\end{tabular}
\end{scriptsize}
\end{table}
Finally, let us notice that the HPU structure has a major impact on
the learning quality. Indeed, let us consider the coefficient of
efficiency obtained for the data set of 5184~samples and a network
composed of 25~hidden neurons trained for 2500~epochs. As shown in
Table~\ref{tab2}, the coefficient $E$ is about 0.9285 if the neural
network has an HPU structure of order~3, whereas experiments made
without increasing the number of initial inputs give a mean value of
0.6930 for $E$. Similar experiments in the case of the second topology
result in $E=0.2154$ for the classical structure without HPU. These
respective decreases of more than 25 and 50\% show that MLP networks
with a classical structure would have given worse predictions.
At this point we can only claim that it is not at all evident that
computational intelligence tools like neural networks are able to
predict protein folding with good accuracy. To settle this question,
tools optimized for chaotic behaviors must be found, if such tools
exist. Similarly, there may be a link between the training difficulty
and the ``quality'' of the disorder induced by a chaotic iteration
function (its constants of sensitivity, expansivity, etc.), and this
second relation remains to be established.
\section{Conclusion}
\label{Conclusion}
In this paper the topological dynamics of protein folding has been
evaluated. More precisely, we have studied whether this folding
process is predictable in the 2D model or not. The goal was to
determine whether it is reasonable to think that computational
intelligence tools like neural networks are able to predict the 3D
shape of an amino acid sequence. It is mathematically proven, in two
different ways, that protein folding in the 2D
hydrophobic-hydrophilic (HP) square lattice model is chaotic according
to Devaney. Consequences for both structure prediction and biology are
then outlined. In particular, a first comparison of the learning by
neural networks of a chaotic behavior on the one hand, and of a more
natural dynamics on the other hand, is presented. The results tend to
show that such chaotic behaviors are more difficult to learn than
non-chaotic ones. It is not our intention to claim that it is
impossible to predict chaotic behaviors such as protein folding with
computational intelligence. Our opinion is simply that this important
point must now be regarded with attention.
In future work the dynamical behavior of the protein folding process
will be studied more deeply, using topological tools such as
topological mixing, the Knudsen and Li-Yorke notions of chaos, and
topological entropy. The quality and intensity of this chaotic
behavior will then be
evaluated. Consequences both on folding prediction and on biology will
then be regarded in detail. This study may also allow us to determine,
at least to a certain extent, what kind of errors on the initial
condition lead to acceptable results, depending on the intended number
of iterations (i.e., the number of folds). Such a dependence may
permit to define strategies depending on the type and the size of the
proteins, their proportion of hydrophobic residues, and so on.
Other molecular or genetic dynamics will be investigated using
mathematical topology, and other chaotic behaviors will be looked for
(such as neurons in the brain). More specifically, various tools taken
from the field of computational intelligence will be studied to
determine whether some of them are capable of predicting chaotic
behaviors with good accuracy. It is highly possible that
prediction depends both on the tool and on the chaos quality.
Moreover, the study presented in this paper will be extended to
high-resolution 3D models. The impact of the chaotic behavior of the
protein folding process on biology will be considered. Finally, the
links between this established chaotic behavior and stochastic models
in gene expression, mutation, or in evolution will be investigated.
\bibliographystyle{spphys}
\section{Introduction}
\label{sec1}
In 1999-2002, we developed a three-dimensional, gas-dynamical model and used
it to study the flow patterns in binary
systems~\cite{cit1,cit2,cit3,cit4,cit5,cit6,cit7,cit8,cit9,cit10,cit11,cit12,cit13,cit14}.
These studies
indicate that the flow structure is substantially affected by rarefied gas
of the intercomponent envelope. In particular, a self-consistent solution
does not include a shock interaction between the stream from the inner
Lagrange point $\mathop{\rm L_1}\nolimits$ and the forming accretion disk (a ``hot spot'').
The region
of enhanced energy release (the ``hot line'') is located beyond the disk and
is due to the interaction between the envelope and the stream. However,
these solutions were obtained for temperatures of the outer parts of the
accretion disk of 200~000 -- 500~000~K. To check if this behavior is
universal, the morphology of the flow must be considered for various disk
temperatures.
First, we will study here the interval of plausible temperatures of
accretion disks in close binaries. In Section~\ref{sec2}, based on an analysis of
heating and cooling in accretion disks, we will show that, for realistic
parameters of the disks in close binaries
(${\raisebox{1pt}{\hbox{$\stackrel{\bullet}{M}$}}}\ \!\!\simeq10^{-12}\div10^{-7}M_\odot/year$ and
$\alpha\simeq10^{-1}\div10^{-2}$), the gas temperature in the outer parts of
the disk is between 13~600~K
and $\sim 10^6$~K. This implies that cool accretion disks can form in some
close binaries.
Second, we will consider the morphology of the interaction
between
streams of matter and cool accretion disks in semi-detached binary systems
(Sections~\ref{sec3} and~\ref{sec4}). The basic problem here is whether the
interaction between the stream and the disk remains shockless, as was shown
for relatively hot disks \cite{cit1,cit3,cit4,cit8,cit14}.
Section~\ref{sec5} presents our main
conclusions and a physical basis for the universal nature of the shockless
interaction between the stream and disk.
\section{Heating and Cooling in Accretion Disks}
\label{sec2}
In this Section, we consider the temperature of an accretion disk for
various accretion rates, i.e., the dependence $T({\raisebox{1pt}{\hbox{$\stackrel{\bullet}{M}$}}}\ \!\!)$.
\subsection{Basic Equations}
\label{sec21}
The vertical structure of an accretion disk is specified by the balance
between the vertical component of the gravitational force and the (vertical)
pressure gradient, which, in turn, is specified by the balance between
heating and cooling of the gas. The heating is associated with viscous
dissipation of kinetic energy, and also with bulk radiative heating, which,
in turn, is specified by the radiation of the central object. Cooling is
brought about by several mechanisms: bulk radiative cooling, radiative heat
conduction, and convection. Assuming that advective terms and terms
associated with adiabatic heating or cooling are small, the steady-state
energy equation
$$
Q^+-Q^-=0
$$
can be written as follows.
(1) For the optically thin case, when $Q^+$ is specified by bulk radiative
heating and viscous heating and $Q^-$ is determined by bulk radiative cooling,
\begin{equation}
Q^+_{visc}(\rho,T)+n^2\cdot(\Gamma(T,T_{wd})-\Lambda(T))=0\,.
\label{eq1a}
\end{equation}
Here, $\Gamma(T,T_{wd})$ is the radiative-heating function, which depends on
the
gas temperature $T$ and the temperature of the central object $T_{wd}$,
$\Lambda(T)$ is the
radiative-cooling function, and $Q^+_{visc}(\rho,T)$ is the viscous heating.
(2) For the optically thick case, $Q^+$ is specified by viscous heating,
while $Q^-$
is specified by radiative heat conduction\footnote{We neglect molecular
heat conduction since it is very small compared with the radiative heat
conduction.} and convection in the vertical direction:
\begin{equation}
Q^+_{visc}(\rho,T) -\dfrac{\partial F_{rad}}{\partial z}
-\dfrac{\partial F_{conv}}{\partial z} =0\,.
\label{eq1b}
\end{equation}
Here, $F_{rad}$ and $F_{conv}$ are the radiative and convective energy fluxes.
To determine the functions in~\eqref{eq1a} and~\eqref{eq1b}, we will need
\begin{itemize}
\item[--] the equation of continuity
\setcounter{equation}{1}
\begin{equation}
-{\raisebox{1pt}{\hbox{$\stackrel{\bullet}{M}$}}}\ \!\!=2\pi\int r\cdot\rho\cdot v_r {~\rm d}z={\rm const}\,,
\label{cont}
\end{equation}
\item[--] the equation of angular-momentum balance $\lambda\equiv
r^2\Omega_K$ in the radial direction:
\begin{equation}
\dfrac{\partial}{\partial r} \left(r\cdot\rho\cdot v_r\cdot\lambda
\right)= \dfrac{\partial}{\partial r} \left(\nu\cdot\rho\cdot
r^2\cdot \dfrac{\partial\Omega_K}{\partial r}\right)\,,
\label{angmom}
\end{equation}
from which it follows that
\begin{equation}
|v_r|=-\nu\cdot\Omega_K'\cdot\Omega^{-1}_K\cdot r^{-1}
\simeq\nu\cdot r^{-1}\,,
\label{angmom1}
\end{equation}
\item[--] and the viscous heating
\begin{equation}
Q^+_{visc}=\rho\cdot\nu\cdot\left(r\cdot\dfrac{\partial\Omega_K}{\partial
r}\right)^2\,.
\label{vischeat}
\end{equation}
\end{itemize}
Here, ${\raisebox{1pt}{\hbox{$\stackrel{\bullet}{M}$}}}\ \!\!$ is the accretion rate, $\Omega_K=\sqrt{GM/r^3}$
the angular velocity of the
Keplerian rotation of the disk, $M$ the mass of the central object, $G$ the
gravitational constant, $\rho$ the density, $v_r$ the radial velocity,
and $\nu$ the kinematic viscosity coefficient.
Note that the molecular viscosity cannot
provide the necessary dissipation, and dissipation processes are usually
considered to be associated with turbulent or magnetic viscosity.
To determine the vertical pressure gradient, we will use the equation of
hydrostatic balance in the vertical direction
\begin{equation}
\dfrac1\rho\cdot \dfrac{\partial P}{\partial z}=
\dfrac{\partial}{\partial z} \left(\dfrac{GM}{\sqrt{r^2+z^2}}\right)
\simeq -\Omega_K^2 z\,, \label{vert}
\end{equation}
as well as the equation of state of an ideal gas with radiation
$$
P=\rho{\cal R}T+\slantfrac13aT^4\,.
$$
Here, $P$ is the pressure, $T$ the temperature, ${\cal R}$ the gas constant,
and $a$ the radiation constant. All equations are given in cylindrical
coordinates, $(r,z)$.
\subsection{The Solution Method}
\label{sec22}
To determine the dependence $T({\raisebox{1pt}{\hbox{$\stackrel{\bullet}{M}$}}}\ \!\!)$, we will use \eqref{cont} and
\eqref{angmom1} together with
the expression for the viscosity coefficient $\nu$. We will use the
formula suggested by Shakura~\cite{cit15}, $\nu=\alpha c_s H$, where
$H$ is the height of the
disk and $c_s\simeq\sqrt{{\cal R}T+\slantfrac13aT^4/\rho}$
is the sound speed. If we neglect the $z$ dependence of the
density and use $\bar\rho$ averaged over the height (further, we will denote
this
quantity simply as $\rho$), the integration of \eqref{vert}
yields the height of the
disk $H$:
$$
H=c_s\cdot\Omega_K^{-1}\,.
$$
We will determine $c_s$ from the temperature in the equatorial plane of the
disk, $z=0$. This approach is sufficiently correct for our purposes due to the
uncertainty in the parameter $\alpha$. As a result, we obtain an equation
relating ${\raisebox{1pt}{\hbox{$\stackrel{\bullet}{M}$}}}\ \!\!$,
$\left.T\vphantom{x^x_x}\right|_{z=0}$, and $\rho$ for the specified
$r$ and $\alpha$,
\begin{equation}
{\raisebox{1pt}{\hbox{$\stackrel{\bullet}{M}$}}}\ \!\!=2\pi\cdot\alpha\cdot\Omega_K^{-2}\cdot\rho\cdot
c_s^3=2\pi\cdot\alpha\cdot\Omega_K^{-2}\cdot\left({\cal
R}T\rho^{2/3}+\slantfrac13aT^4\rho^{-1/3}\right)^{3/2}\,.
\label{appeq1}
\end{equation}
This equation reduces to a cubic equation in the variable $\rho^{1/3}$, and
its solution has two branches: one with a negative real root and two complex
ones, and one with three real roots, one of which is negative. Only positive
real roots for the density are physically meaningful. For such roots to
exist, the following condition must be satisfied:
\begin{equation}
{\raisebox{1pt}{\hbox{$\stackrel{\bullet}{M}$}}}\ \!\!>\dfrac{\sqrt{3}\pi\cdot\sqrt{\cal R}\cdot a\cdot\alpha \cdot
T^{9/2}}{\Omega_K^2}\,,
\label{estim1}
\end{equation}
which yields the minimum accretion rate for the given $T$, $r$, and $\alpha$.
When
deriving this condition, we used the equation of state taking into account
the radiation pressure.
This estimate can also be written in the form
$$
T<7\cdot10^5\left(\dfrac r{R_{wd}}\right)^{-2/3}
\left(\dfrac{{\raisebox{1pt}{\hbox{$\stackrel{\bullet}{M}$}}}\ \!\!}{10^{-9}M_\odot/year}\right)^{2/9}
\left(\dfrac\alpha{0.1}\right)^{-2/9}\quad {\rm K}\,,
$$
where $R_{wd}=10^9$~cm is the radius of the accretor (white dwarf).
Let us consider the condition \eqref{estim1} for the outer parts of the accretion disk.
Let us take $r=A/5$, where $A$ is the distance between the components
of the binary ($A=1.42R_\odot$), which corresponds to the situation for
the dwarf nova IP~Peg; as a result, we obtain
\begin{equation}
{\raisebox{1pt}{\hbox{$\stackrel{\bullet}{M}$}}}\ \!\!>10^{-9}\left(\dfrac{T}{10^5~{\rm K}}\right)^{9/2}
\left(\dfrac\alpha{0.1}\right)\quad M_\odot/year\,.
\label{estim2}
\end{equation}
If \eqref{estim1} is satisfied, the roots of Eq.~\eqref{appeq1} relating $\rho$, $T$, and ${\raisebox{1pt}{\hbox{$\stackrel{\bullet}{M}$}}}\ \!\!$
for a given $r$ and $\alpha$ can be written
$$
\rho=({\cal R}T)^{-3/4}
\left(\dfrac{{\raisebox{1pt}{\hbox{$\stackrel{\bullet}{M}$}}}\ \!\!\Omega_K^2}{2\pi\alpha}\right)
\sin^3\left(\slantfrac13 \arcsin\left(\sqrt{\cal R}aT^{9/2}
\dfrac{2\pi\alpha}{{\raisebox{1pt}{\hbox{$\stackrel{\bullet}{M}$}}}\ \!\!\Omega_K^2}\right) \right)\,,
$$
$$
\rho=({\cal R}T)^{-3/4}
\left(\dfrac{{\raisebox{1pt}{\hbox{$\stackrel{\bullet}{M}$}}}\ \!\!\Omega_K^2}{2\pi\alpha}\right)
\cos^3\left(\slantfrac13 \arcsin\left(\sqrt{\cal R}aT^{9/2}
\dfrac{2\pi\alpha}{{\raisebox{1pt}{\hbox{$\stackrel{\bullet}{M}$}}}\ \!\!\Omega_K^2}\right) +\dfrac\pi6\right)
$$
(for simplicity, we have omitted numerical factors
$\slantfrac{\sqrt{3}}2\simeq1$). The first of
these corresponds to disks with dominant radiation pressure
($\beta=\slantfrac13aT^4/\rho{\cal R}T>1$)
and the second to disks with dominant gas pressure ($\beta<1$).
These formulas describe the two branches of the two-parameter dependence
$\rho({\raisebox{1pt}{\hbox{$\stackrel{\bullet}{M}$}}}\ \!\!,T)$. To calculate the dependence $T({\raisebox{1pt}{\hbox{$\stackrel{\bullet}{M}$}}}\ \!\!)$, we must use the
additional heat balance equations~\eqref{eq1a}--\eqref{eq1b}.
As follows from Section~\ref{sec21}, the form
of~\eqref{eq1a}--\eqref{eq1b} depends on the optical depth of the disk, which, accordingly, must
be calculated.
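Numerically, both branches of $\rho({\raisebox{1pt}{\hbox{$\stackrel{\bullet}{M}$}}}\ \!\!,T)$ are conveniently
obtained from the cubic ${\cal R}Tu^3-Cu+\slantfrac13aT^4=0$ in
$u=\rho^{1/3}$, where
$C=\left({\raisebox{1pt}{\hbox{$\stackrel{\bullet}{M}$}}}\ \!\!\Omega_K^2/2\pi\alpha\right)^{2/3}$.
A short sketch (CGS units; the gas constant assumes a mean molecular
weight of unity, and the sample parameters are illustrative):
\begin{verbatim}
import numpy as np

R, a, G = 8.314e7, 7.566e-15, 6.674e-8     # CGS; R assumes mu = 1
Msun, yr, Rsun = 1.989e33, 3.156e7, 6.96e10

def densities(Mdot, T, r, alpha, M=1.02 * Msun):
    """Positive real roots rho of the cubic in u = rho**(1/3)."""
    Omega2 = G * M / r**3                  # Keplerian Omega_K squared
    C = (Mdot * Omega2 / (2.0 * np.pi * alpha)) ** (2.0 / 3.0)
    u = np.roots([R * T, 0.0, -C, a * T**4 / 3.0])
    u = u[np.isreal(u)].real
    return sorted(u[u > 0] ** 3)           # the two physical branches

A = 1.42 * Rsun                            # separation, as for IP Peg
print(densities(1e-9 * Msun / yr, 5e4, A / 5.0, 0.1))
\end{verbatim}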
\subsection{Optical Depth}
\label{sec23}
The optical depth $\tau$ is specified by the product of the absorption
coefficient $\kappa$, the density, and the geometrical depth of the layer
$l$: $\tau=\kappa\cdot\rho\cdot l$. For disk
accretion, the basic parameter is the ratio of the geometrical depth of the
layer where $\tau=1$ and the height of the disk: $l^{\tau=1}/H$.
After simple manipulation, we obtain
\begin{equation}
\dfrac{l^{\tau=1}}{H}
=\dfrac{2\pi\cdot\alpha}{\kappa\cdot{\raisebox{1pt}{\hbox{$\stackrel{\bullet}{M}$}}}\ \!\!}\cdot
c_s^2\cdot\Omega_K^{-1}\,.
\label{eq4}
\end{equation}
\begin{figure}[t]
\centerline{\hbox{\epsfig{figure=ris1a.eps,width=10cm}}}
\caption{\small The $\kappa(T)$ dependence for $n=10^{18}cm^{-3}$,
$n=10^{17}cm^{-3}$, $n=10^{16}cm^{-3}$, $n=10^{15}cm^{-3}$, and
$n=10^{14}cm^{-3}$ (top to bottom)~\protect\cite{cit18}.}
\label{fig1}
\end{figure}
The absorption coefficient $\kappa$ displays a complicated dependence on $T$
and $\rho$ (and
also on the degree of ionization, chemical composition, etc.). Here, we
adopted the simple approximation for $\kappa(T,\rho)$~\cite{cit16,cit17,cit18}
$$
\kappa(T,\rho)=\left\{
\begin{array}{ccc}
\kappa_1\cdot\rho^{2/3}\cdot T^{3}\,,
&\qquad \kappa_1=10^{-8}\,,
&\qquad (\kappa1)\\
~\\
\kappa_2\cdot\rho^{1/3}\cdot T^{10}\,,
&\qquad \kappa_2=10^{-36}\,,
&\qquad (\kappa2)\\
~\\
\kappa_3\cdot\rho\cdot T^{-5/2}\,,
&\qquad \kappa_3=1.5\cdot10^{20}\,,
&\qquad (\kappa3)\\
~\\
\kappa_4\,,
&\qquad \kappa_4=0.348\,.
&\qquad (\kappa4)
\end{array}\right.
$$
According to~\cite{cit18}, these four subregions correspond to scattering on
molecular hydrogen, scattering on atomic hydrogen, free--free and free--bound
transitions, and Thomson scattering. The boundaries of the sub-regions,
i.e. the transitions from one expression to another, are specified by the
equality of the $\kappa$ values calculated from these expressions.
Figure~\ref{fig1}
presents the dependences of $\kappa$ on $T$ and $\rho$. We can see regions
with $d\kappa/dT>0$, where
thermal instability can develop when the dependence between the surface
density and the disk temperature forms an S curve in the $(\Sigma,T_{eff})$
plane. Thermal instability is often invoked to explain the phenomenon of
dwarf novae (see, for example,~\cite{cit19,cit20});
however, it is clear that this can
occur only for sufficiently cool disks.
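For the numerical estimates below, this piecewise approximation can be
evaluated by locating the regime through the equality conditions just
mentioned (a sketch in CGS units; the branch selection is our reading
of the boundaries):
\begin{verbatim}
def kappa(T, rho):
    """Piecewise opacity; regime boundaries where adjacent branches meet."""
    k1 = 1e-8 * rho ** (2.0 / 3.0) * T ** 3      # molecular hydrogen
    k2 = 1e-36 * rho ** (1.0 / 3.0) * T ** 10    # atomic hydrogen
    k3 = 1.5e20 * rho * T ** (-2.5)              # free-free, free-bound
    k4 = 0.348                                   # Thomson scattering
    if k1 >= k2:                                 # coolest regime
        return k1
    if k2 <= k3:
        return k2
    return k3 if k3 >= k4 else k4                # hottest regime: k4

# kappa(3e3, 1e-8), kappa(1e4, 1e-8), kappa(1e5, 1e-8), kappa(1e7, 1e-8)
# sample the four regimes at a density typical of the outer disk.
\end{verbatim}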
\begin{figure}[t]
\centerline{\hbox{\epsfig{figure=ltoh1.eps,width=10cm}}}
\caption{\small Solution of~\eqref{appeq1} for disks with dominant gas pressure in
the $(T,\protect{\raisebox{1pt}{\hbox{$\stackrel{\bullet}{M}$}}}\ \!\!)$
plane for $\alpha=0.1$ and $r=A/5$. The horizontal thick shading indicates
optically thick disks, and vertical shading optically thin disks; the solid
line marks the border between these regions. The dashed line corresponds to
condition~\eqref{estim2} for the existence of the solution of~\eqref{appeq1};
there is no solution
below this line.}
\label{fig2a}
\end{figure}
\begin{figure}[t]
\centerline{\hbox{\epsfig{figure=ltoh2.eps,width=10cm}}}
\caption{\small The same as Fig.~\ref{fig2a} for disks with dominant radiation
pressure.}
\label{fig2b}
\end{figure}
Let us return to~\eqref{appeq1}, taking $\alpha=0.1$ and $r=A/5$ and considering disks
with dominant gas pressure, for which $\beta=\slantfrac13aT^4/\rho{\cal R}T<1$.
The shaded region in the $(T,{\raisebox{1pt}{\hbox{$\stackrel{\bullet}{M}$}}}\ \!\!)$ plane in Fig.~\ref{fig2a} corresponds to all
possible solutions for these
disks. The dashed line corresponds to condition~\eqref{estim2} for the existence of a
solution for~\eqref{appeq1} -- there is no solution below this line. The solid line
indicates the boundary between the optically thick and optically thin
solutions: the horizontal shading marks the region of optically thick disks,
while the vertical shading marks optically thin disks. Figure~\ref{fig2b} presents a
similar pattern for disks with dominant radiation pressure ($\beta>1$).
We can see from Fig.~\ref{fig2a} that, for realistic values
${\raisebox{1pt}{\hbox{$\stackrel{\bullet}{M}$}}}\ \!\!\in[10^{-12},10^{-7}] M_\odot/year$,
disks with dominant gas pressure are mainly optically thick, though
solutions corresponding to optically thin cool disks are possible for small
${\raisebox{1pt}{\hbox{$\stackrel{\bullet}{M}$}}}\ \!\!$. It follows from Fig.~\ref{fig2b} that disks with dominant radiation pressure
are mainly optically thin; optically thick hot disks can exist only for high
${\raisebox{1pt}{\hbox{$\stackrel{\bullet}{M}$}}}\ \!\!$.
\subsection{Optically Thick Disks}
\label{sec24}
In Section~\ref{sec22}, we derived Eq.~\eqref{appeq1}, which relates ${\raisebox{1pt}{\hbox{$\stackrel{\bullet}{M}$}}}\ \!\!$, $T$, and $\rho$
for given $r$ and $\alpha$. Using the supplementary heat-balance
equations~\eqref{eq1a}--\eqref{eq1b}, we can reduce the number of unknowns and obtain the desired
relation between ${\raisebox{1pt}{\hbox{$\stackrel{\bullet}{M}$}}}\ \!\!$ and $T$.
The vertical temperature distributions in optically thick disks are
described by the equation of radiative heat conduction with a source due to
viscous heating~\eqref{eq1b}, which can be written in the form
\begin{equation}
\dfrac{\partial e}{\partial t}= \dfrac{\partial}{\partial
z}\left(\dfrac{1}{\kappa\rho} \dfrac{\partial}{\partial
z}\left(\slantfrac{1}{3}acT^4\right)\right) +\rho\alpha
c_s^2\Omega_K\,, \label{difur}
\end{equation}
where $e$ is the specific internal energy and $c$ is the velocity of light.
To solve~\eqref{difur}, we must specify boundary conditions. Due to the symmetry of
the problem, the temperature derivative in the equatorial plane must be zero;
i.e., $\left.T'\vphantom{x^x_x}\right|_{z=0}=0$. The temperature at the upper
boundary of the disk is specified by the condition
$\Gamma(T_*,T_{wd})=\Lambda(T_*)$. Though the functions $\Gamma(T,T_{wd})$ and
$\Lambda(T)$ are complex, they are known and can be found in the literature
(see, for example,~\cite{cit21,cit22,cit23}).
The temperature derived by equating these
functions (for a temperature of the central object (white dwarf) of
$T_{wd}=70~000$~K) is $T(H)=T_*=13~600$~K.
The solution of~\eqref{difur} enters a stationary regime when the characteristic
heat-conduction time
$$
t_{diff}\simeq\dfrac{{\cal R}\kappa\rho^2H^2}{acT^3}
$$
is comparable to the time for viscous heating
$$
t_{heat}\simeq\dfrac{{\cal R}T}{\alpha c_s^2\Omega_K}\simeq
\alpha^{-1}\Omega_K^{-1}\,.
$$
Note that~\eqref{difur} can be integrated analytically in the steady-state case. Let
us denote $U=T^4$, $U_*=T_*^4$,
$U_0=\left.U\vphantom{x^x_x}\right|_{z=0}$, and again assume that
$\rho$ does not depend on $z$. Then,
$$
\dfrac{d}{d z}\left(\dfrac{1}{\kappa\rho} \dfrac{d}{d
z}\left(\dfrac{ac}{3}U\right)\right) =-\rho\alpha c_s^2\Omega_K\,.
$$
After integrating over $z$, we obtain
$$
\dfrac{1}{\kappa\rho} \dfrac{d}{d z}\left(\dfrac{ac}{3}U\right)
=-\rho\alpha c_s^2\Omega_K z\,.
$$
The integration constant is equal to zero, since
$\left.U'\vphantom{x^x_x}\right|_{z=0}=0$. For
convenience, we will transform this last equation to the form
$$
\dfrac{1}{\kappa} \dfrac{dU}{d z}\equiv\dfrac{dB}{d z}
=-\dfrac{3}{ac}\rho^2\alpha c_s^2\Omega_K z\,,
$$
where the function $B(U)$ is determined from the differential equation
$\displaystyle\dfrac{dB}{dU}=\dfrac{1}{\kappa(U,\rho)}$ and can be written
in an analytical form if $\rho$ is fixed. Integrating this last equation
over $z$, we obtain
$$
B(U)=B(U_*)+\dfrac{3}{2ac}\rho^2\alpha c_s^2\Omega_K(H^2-z^2)\,,
$$
or, for $z=0$
$$
B(U_0)=B(U_*)+\dfrac{3}{2ac}\rho^2\alpha c_s^2\Omega_KH^2\,.
$$
Using the expressions
$$
c_s^2= \left({\cal
R}U_0^{1/4}+\slantfrac13\dfrac{aU_0}{\rho}\right)\,,
$$
$$
H^2= \left({\cal R}U_0^{1/4}+\slantfrac13\dfrac{aU_0}{\rho}\right)
\Omega_K^{-2}
$$
we obtain the algebraic equation for $U_0$
$$
B(U_0)=B(U_*)+\dfrac{3}{2ac}\rho^2\alpha \Omega_K^{-1} \left({\cal
R}U_0^{1/4}+\slantfrac13\dfrac{aU_0}{\rho}\right)^2\,.
$$
This equation implicitly specifies the dependence $U_0(\rho)$; i.e., $T(\rho)$.
Expressing ${\raisebox{1pt}{\hbox{$\stackrel{\bullet}{M}$}}}\ \!\!$ in terms of $\rho$ and $T$, we can derive the dependence
${\raisebox{1pt}{\hbox{$\stackrel{\bullet}{M}$}}}\ \!\!(\rho)={\raisebox{1pt}{\hbox{$\stackrel{\bullet}{M}$}}}\ \!\!(T(\rho),\rho)$, which
yields the dependence $T({\raisebox{1pt}{\hbox{$\stackrel{\bullet}{M}$}}}\ \!\!)$ in parametric form. Formally, the resulting
solution can also exist in optically thin regions; however, given the
adopted assumptions, these points can be rejected.
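For a fixed $\rho$, this implicit equation can be solved with a
bracketing root finder, evaluating $B$ by numerical quadrature of
$dB/dU=1/\kappa$ (a sketch reusing the opacity function above; the
bracket, units, and mean molecular weight are illustrative
assumptions):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

a_r, c, Rg = 7.566e-15, 3e10, 8.314e7     # CGS; Rg assumes mu = 1

def B_diff(U, rho, U_star):
    """B(U) - B(U_star) = integral of dU'/kappa(U'**0.25, rho)."""
    return quad(lambda u: 1.0 / kappa(u ** 0.25, rho), U_star, U)[0]

def midplane_T(rho, alpha, Omega, T_star=13600.0):
    U_star = T_star ** 4
    def resid(U0):
        cs2 = Rg * U0 ** 0.25 + a_r * U0 / (3.0 * rho)
        rhs = 1.5 * rho**2 * alpha * cs2**2 / (a_r * c * Omega)
        return B_diff(U0, rho, U_star) - rhs
    # bracket between T_* and 10^6 K; widen it if no sign change occurs
    return brentq(resid, U_star * 1.001, (1e6) ** 4) ** 0.25
\end{verbatim}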
\begin{figure}[t]
\centerline{\hbox{\epsfig{figure=new1.eps,width=10cm}}}
\caption{\small Solution of~\eqref{difur} for an optically thick disk for
$\alpha=1$ and $r=A/5$ (asterisks). Solutions of~\eqref{difur1} taking into account
convection are labeled by squares. The dashed line is a lower bound for the
region in which there exist solutions of~\eqref{appeq1}, and the solid line separates
the regions of optically thin and optically thick disks.}
\label{fig3a}
\end{figure}
\begin{figure}
\centerline{\hbox{\epsfig{figure=new2.eps,width=10cm}}}
\caption{\small The same as Fig.~\ref{fig3a} for $\alpha=0.1$.}
\label{fig3b}
\end{figure}
\begin{figure}[t]
\centerline{\hbox{\epsfig{figure=new3.eps,width=10cm}}}
\caption{\small The same as Fig.~\ref{fig3a} for $\alpha=0.01$.}
\label{fig3c}
\end{figure}
\begin{figure}[t]
\centerline{\hbox{\epsfig{figure=ris6d.eps,width=10cm}}}
\caption{\small A disk with dominant gas pressure. The solid
lines indicate possible states of the disk in the $H/r-\protect{\raisebox{1pt}{\hbox{$\stackrel{\bullet}{M}$}}}\ \!\!$
plane for $r=A/5$.
Solutions of~\eqref{difur} taking into account radiative heat conduction and viscous
heating are shown by the lines with asterisks for $\alpha=1$,
$\alpha=0.1$, $\alpha=10^{-2}$, $\alpha=10^{-3}$ (top to the bottom).
Solutions of~\eqref{difur1} taking into account radiative heat
conduction, convection, and viscous heating are shown by the lines with
squares for $\alpha=1$. The dashed lines bound from below regions in which the
solution of~\eqref{appeq1} can exist for $\alpha=1$,
$\alpha=0.1$, $\alpha=10^{-2}$, $\alpha=10^{-3}$ (top to bottom).}
\label{fig3d}
\end{figure}
Let us consider a graphical representation of the solution derived.
Figure~\ref{fig3a} presents the dependence $T({\raisebox{1pt}{\hbox{$\stackrel{\bullet}{M}$}}}\ \!\!)$ for $\alpha=1$ and $r=A/5$,
marked by asterisks.
As in Fig.~\ref{fig2a}--~\ref{fig2b},
the dashed lines bound from below the domain in which there
exist solutions of~\eqref{appeq1}, while the solid line separates the domains of
optically thin and optically thick disks. Figures~\ref{fig3b}--\ref{fig3c}
display the solutions
for $\alpha=0.1$ and $\alpha=0.01$, respectively.
Figure~\ref{fig3d} presents the accretion rate
as a function of the disk thickness. We can see that all the obtained disks
are geometrically thin; i.e., $H\ll r$.
Radiative heat conduction is not the only mechanism for heat transfer into
optically thin regions. Under certain conditions, convection can also play a
substantial role. Neglecting the radiation pressure, the convective flux can
be written in the form~\cite{cit24,cit25}
$$
F_{conv}=c_P\cdot\rho\cdot\left(\dfrac{|g|}{T}\right)^{1/2}
\cdot\dfrac{l^2}{4} \cdot(\Delta\nabla T)^{3/2}\,,
$$
$$
\Delta\nabla T=-\dfrac{T}{c_P}\cdot \dfrac{\partial S}{\partial
z}\,.
$$
Here, $c_P$ is the heat capacity at constant pressure, $S={\cal
R}\cdot\ln(T^{3/2}/\rho)$ the specific
entropy, $g=-\Omega_K^2 z$ the gravitational acceleration, and $l$
the mixing
length, taken to be $l=\alpha H$. To determine the vertical temperature
distribution taking convection into account,
we must solve the equation
\begin{equation}
\dfrac{\partial e}{\partial t}= \dfrac{\partial}{\partial
z}\left(\dfrac{1}{\kappa\rho} \dfrac{\partial}{\partial
z}\left(\slantfrac{1}{3}acT^4\right)\right) -\dfrac{\partial
F_{conv}}{\partial z} +\rho\alpha c_s^2\Omega_K \label{difur1}
\end{equation}
with the same boundary conditions as for~\eqref{difur}.
Equation~\eqref{difur1} does not admit
a simple analytical solution, and we solved this equation numerically using
the method of establishment. The solution is denoted by the squares in
Figs.~\ref{fig3a}--\ref{fig3c}.
We can see that convection plays a significant role only when
$\alpha\simeq1$.
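For reference, the method of establishment amounts to pseudo-time
stepping of the energy equation until the balance between conduction
and viscous heating is reached. A much simplified sketch (convection
omitted, $\rho$ constant in $z$, gas pressure only in the heating
term; grid and iteration count are illustrative):
\begin{verbatim}
import numpy as np

def relax_T(rho, alpha, Omega, H, T_star=13600.0, nz=64, nsteps=100000):
    """March radiative conduction + viscous heating to a steady state."""
    a_r, c, Rg = 7.566e-15, 3e10, 8.314e7
    cv = 1.5 * Rg
    dz = H / (nz - 1.0)
    T = np.full(nz, T_star)
    kap = np.vectorize(kappa)              # piecewise opacity from above
    for _ in range(nsteps):
        k = kap(T, rho)
        # radiative flux at cell faces: F = -(1/(kappa rho)) d(acT^4/3)/dz
        F = -np.diff(a_r * c * T**4 / 3.0) / dz / (0.5 * (k[1:] + k[:-1]) * rho)
        heat = rho * alpha * Rg * T * Omega          # ~ rho alpha cs^2 Omega
        dTdt = np.zeros(nz)
        dTdt[1:-1] = (-np.diff(F) / dz + heat[1:-1]) / (rho * cv)
        dTdt[0] = dTdt[1]                            # symmetry: T'(0) = 0
        D = 4.0 * a_r * c * T**3 / (3.0 * k * rho**2 * cv)
        T = T + 0.2 * dz**2 / D.max() * dTdt         # diffusion-limited step
        T[-1] = T_star                               # boundary: T(H) = T_*
    return T
\end{verbatim}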
Summarizing, we can assert that, in the optically thick case with small
${\raisebox{1pt}{\hbox{$\stackrel{\bullet}{M}$}}}\ \!\!$, the disk displays the constant temperature $T=T_*=13~600$~K,
while the temperature increases as $T\propto{\raisebox{1pt}{\hbox{$\stackrel{\bullet}{M}$}}}\ \!\!^{1/3}$ at larger values of
${\raisebox{1pt}{\hbox{$\stackrel{\bullet}{M}$}}}\ \!\!$. Thus, for
realistic parameters of the accretion disks in close binaries,
${\raisebox{1pt}{\hbox{$\stackrel{\bullet}{M}$}}}\ \!\!\simeq10^{-12}\div10^{-7}M_\odot/year$ and
$\alpha\simeq10^{-1}\div10^{-2}$, the gas temperature in the outer parts of
the disk ($r\simeq A/5\div A/10$) is $10^4$~K to $\sim10^6$~K.
Solving~\eqref{difur} for various $r$, we can also calculate the dependences $T(r)$
and $\rho(r)$. The calculations indicate that $T\propto r^{-0.8}$ and
$\rho\propto r^{-1.8}$, which is
consistent with the dependence $T\propto r^{-3/4}$ obtained by Shakura and
Sunyaev~\cite{cit26}.
\subsection{Optically Thin Disks}
\label{sec25}
In this case, the temperature of the disk is specified by the balance between
radiative heating $\Gamma(T,T_{wd})$ and viscous heating~\eqref{vischeat}, on the one hand,
and radiative cooling $\Lambda(T)$, on the other. The heat-balance
equation~\eqref{eq1a} can be written
$$
\rho\alpha c_s^2\Omega_K+ \rho^2\cdot
m_p^{-2}\cdot(\Gamma(T,T_{wd})-\Lambda(T))=0\,,
$$
which can be reduced to a quadratic equation in $\rho$
$$
\alpha\cdot(\rho{\cal R}T+\slantfrac13aT^4)\cdot\Omega_K+
\rho^2\cdot m_p^{-2}\cdot(\Gamma(T,T_{wd})-\Lambda(T))=0\,.
$$
The solution of this equation for specified $r$ and $\alpha$ yields the
dependence $\rho(T)$, and thus $T({\raisebox{1pt}{\hbox{$\stackrel{\bullet}{M}$}}}\ \!\!)$. Formally,
this solution can also exist in optically thick regions; however,
these points were rejected in view of the adopted assumptions.
It is shown in Section~\ref{sec23} that disks in which gas pressure dominates are
primarily optically thick, and solutions that correspond to optically thin
disks are possible only for small ${\raisebox{1pt}{\hbox{$\stackrel{\bullet}{M}$}}}\ \!\!$. Disks in which radiation pressure
dominates are primarily optically thin. The domination of radiation pressure
is possible only in the inner parts of the disk; therefore, we will adopt
$r=A/20$ for the further analysis. For the typical dwarf nova IP~Peg, this
corresponds to five radii of the accretor (white dwarf).
\begin{figure}
\centerline{\hbox{\epsfig{figure=ris7a.eps,width=10cm}}}
\caption{\small Solutions for an optically thin disk for $\alpha=1$,
$\alpha=0.1$, $\alpha=10^{-2}$,
$\alpha=10^{-3}$ (top to bottom) and $r=A/20$ (asterisks).
The dashed lines bound
from below the domain in which there exists a solution of~\eqref{appeq1}; the solid lines
separate the regions for optically thin and optically thick disks.
}
\label{fig4}
\end{figure}
Figure~\ref{fig4} presents the results of our calculations; the asterisks denote the
$T({\raisebox{1pt}{\hbox{$\stackrel{\bullet}{M}$}}}\ \!\!)$ dependences for $\alpha=1$,
$\alpha=0.1$, $\alpha=10^{-2}$,
$\alpha=10^{-3}$ (top to bottom), and $r=A/20$.
The disks obtained in these solutions are geometrically thick,
$H\simeq r$. Note that the initial assumptions of the model restrict its
applicability: it is suitable only for geometrically thin disks, and the
solutions for geometrically thick disks are purely formal.
\section{The Model}
\label{sec3}
We described the flow structure in a binary system using a system of
gravitational gas-dynamical equations taking into account radiative heating
and cooling of the gas for the optically thin case:
\begin{equation}
\left\{
\begin{array}{l}
\dfrac{\partial\rho}{\partial t}+\mathop{\rm div}\nolimits\rho{\mvec{v}}=0\,,\\[5mm]
\dfrac{\partial\rho\mvec{v}}{\partial t}
+\mathop{\rm div}\nolimits(\rho\mvec{v}\otimes\mvec{v})+\mathop{\rm grad}\nolimits P=-\rho\mathop{\rm grad}\nolimits\Phi\,,\\[5mm]
\dfrac{\partial\rho(\varepsilon+|\mvec{v}|^2/2)}{\partial t}
+\mathop{\rm div}\nolimits\rho\mvec{v}(\varepsilon+P/\rho+|\mvec{v}|^2/2)=\\[5mm]
\qquad\qquad
=-\rho\mvec{v}\mathop{\rm grad}\nolimits\Phi+\rho^2m_p^{-2}\left(\Gamma(T,T_{wd})-\Lambda(T)\right).
\end{array}
\right. \label{HDC}
\end{equation}
Here, as usual, $\rho$ is the density, $\mvec{v}=(u,v,w)$ the velocity vector,
$P$ the pressure, $\varepsilon$ the internal energy, $\Phi$ the Roche
gravitational potential, $m_p$ the
proton mass, and $\Gamma(T,T_{wd})$ and $\Lambda(T)$ the radiative heating
and cooling functions, respectively. The system of gas-dynamical equations was
closed with the Clapeyron equation $P=(\gamma-1)\rho\varepsilon$, where
$\gamma$ is the adiabatic index. We took the parameter $\gamma$ to be 5/3.
Our main goal here is to study the morphology of the interaction between the
stream and the cool accretion disk. It follows from Section~\ref{sec2} that the outer
parts of the accretion disk can be cool for small ${\raisebox{1pt}{\hbox{$\stackrel{\bullet}{M}$}}}\ \!\!$ and, in particular,
in
the case of an optically thin disk. The system of equations~\eqref{HDC} enables us
to carry out three-dimensional modeling of the flow structure in a binary
within our formulation of the problem. In the model, the temperature of the
disk is 13~600~K.
We solved this system of equations using the Roe-Osher
method~\cite{cit14,cit27,cit28},
adapted for multiprocessor computations via spatial decomposition of
the computational grid (i.e., partitioning into subregions, with synchronization
of the boundary conditions)~\cite{cit29}. We considered a semi-detached binary
system containing a donor with mass $M_2$ filling its Roche lobe and an accretor
with mass $M_1$. The system parameters were specified to be those of the dwarf
nova IP~Peg: $M_1=1.02M_\odot, M_2=0.5M_\odot, A=1.42R_\odot$.
The modeling was carried out in a non-inertial reference frame rotating with
the binary, in Cartesian coordinates in a rectangular three-dimensional
grid. Since the problem is symmetrical about the equatorial plane, only half
the space occupied by the disk was modeled. To join the solutions, we
specified a corresponding boundary condition at the lower boundary of the
computation domain. The accretor had the form of a sphere with radius
$10^{-2}A$.
All matter that ended up within any of the cells forming the accretor was
taken to fall onto the star. A free boundary condition was specified at the
outer boundaries of the disk -- the density was constant
($\rho_b=10^{-8}\rho_{\mathop{\rm L_1}\nolimits}$), where $\rho_{\mathop{\rm L_1}\nolimits}$ is the density at the
point $\mathop{\rm L_1}\nolimits$, the temperature was 13~600~K, and the
velocity was equal to zero. The stream was specified in the form of a
boundary condition: matter with temperature 5800~K, density
$\rho_{\mathop{\rm L_1}\nolimits}=1.6\times 10^{-8}$~g/cm$^3$ and velocity along the $x$ axis
$v_x=6.3$~km/s was injected into a zone
around $\mathop{\rm L_1}\nolimits$ with radius $0.014A$.
For this rate of matter input into the system, the model accretion rate was
$\sim 10^{-12} M_{\odot}/year$.
The size of the computation domain, $1.12A\times 1.14A\times 0.17A$, was
selected so that it entirely contains both the disk and stream,
including the point $\mathop{\rm L_1}\nolimits$.
The computation grid with $121\times 121\times 62$ cells was distributed
between 81 processors, which constituted a two-dimensional $9\times 9$ matrix.
To increase the accuracy of the solution, the grid was made denser in the
zone of interaction between the stream and disk, making it possible to
resolve well the formed shock wave. The grid was also denser towards the
equatorial plane, so that the vertical structure was resolved, even for such
a cool disk.
We used the solution obtained for a model without cooling as the initial
conditions~\cite{cit12}. The model with cooling was computed for
approximately five revolutions of the system, until the solution
became established. The
total time of the computations was $\approx$ 1000~h on the MBC1000A
computer of the Joint Supercomputer Center (JSC).
\section{Computation Results}
\label{sec4}
\begin{figure}
\centerline{\hbox{\epsfig{figure=xy.eps,width=13.1cm}}}
\caption{\small Contours of constant density and velocity vectors in
the equatorial plane $XY$ of the system. The shaded rectangle indicates the
zone of interaction between the stream and disk, shown in Figs~\ref{fig7}
and~\ref{fig8}. The
point $\mathop{\rm L_1}\nolimits$ and the direction towards $\mathop{\rm L_3}$ are marked.}
\label{fig5}
\end{figure}
\begin{figure}
\centerline{\hbox{\epsfig{figure=xz.eps,width=13.1cm}}}
\caption{\small Density contours in the frontal plane $XZ$ of the
system.}
\label{fig6a}
\end{figure}
\begin{figure}
\centerline{\hbox{\epsfig{figure=yz.eps,width=13.1cm}}}
\caption{\small Density contours in the plane $YZ$ containing the accretor
and perpendicular to the line connecting the binary components.}
\label{fig6b}
\end{figure}
\begin{figure}
\centerline{\hbox{\epsfig{figure=xy_zoom.eps,width=13cm}}}
\caption{\small Contours of constant density and velocity vectors in
the zone of interaction between the stream and disk (the shaded rectangle in
Fig.~\ref{fig5}).}
\label{fig7}
\end{figure}
\begin{figure}
\centerline{\hbox{\epsfig{figure=lica.eps,width=13cm}}}
\caption{\small Visualization of the velocity field in the zone of
interaction between the stream and disk (the shaded rectangle in
Fig.~\ref{fig5}).}
\label{fig8}
\end{figure}
Figures~\ref{fig5} to~\ref{fig8} present the morphology of gas flows in
the binary. Figure~\ref{fig5} shows the density and velocity vector
distributions in
the equatorial plane of the system (the $XY$ plane), while Figs.~\ref{fig6a}
and~\ref{fig6b} present density contours in the frontal ($XZ$) plane and in
the $YZ$ plane
containing the accretor and perpendicular to the line connecting the binary
components. In spite of the small height of the forming accretion disk, use
of the JSC parallel-processing computers made it possible to resolve its
vertical structure (the outer parts of the disk were covered by 15 grid
cells, and the inner parts by no fewer than 3 cells). Figure~\ref{fig7}
gives an
enlarged view of the density and velocity vector distributions in the zone of
interaction between the stream and the outer edge of the disk (the area in
the shaded rectangle in Fig.~\ref{fig5}). Figure~\ref{fig8}
presents the so-called texture, a visualization of the velocity field
in the zone of interaction between the stream and disk, constructed
using the Line Integral Convolution (LIC) procedure~\cite{cit30}.
According to our considerations in~\cite{cit8,cit14}, the gasdynamical flow pattern in
a semi-detached binary is formed by the stream from~$\mathop{\rm L_1}\nolimits$, the disk, a
circumdisk halo, and the intercomponent envelope. This subdivision is based
on physical differences between these elements of the flow structure:
(1) if the motion of the gas is not determined by the gravitational field of
the accretor, it forms the intercomponent envelope; (2) if the gas makes one
revolution around the accretor, but later mixes with the initial stream,
this gas does not become part of the disk, instead forming the circum-disk
halo; (3) the disk is formed by that part of the stream that loses its
momentum and moves towards the center of gravity after entering the
gravitational field of the accretor, rather than interacting with the stream.
In this framework, let us consider the morphology of the flow when the
temperature decreases to 13~600~K over the entire computation domain due to
cooling. Figure~\ref{fig5} indicates that, in this case, the intercomponent envelope
is formed primarily in the vicinity of $\mathop{\rm L_3}$, and does not
affect the solution substantially. We can see from Figs.~\ref{fig5}
and~\ref{fig6a}--\ref{fig6b} that the
circum-disk halo is pressed against the disk, and its density increases
sharply towards the disk edge.
Figures~\ref{fig7} and~\ref{fig8} show that, in the cool-disk case, the
interaction between
the circum-disk halo and the stream displays all features typical of an
oblique collision of two streams. We can clearly see two shock waves and a
tangential discontinuity between them. The gases forming the halo and stream
pass through the shocks corresponding to their flows, mix, and move along
the tangential discontinuity between the two shocks. Further, this material
forms the disk itself, the halo, and the envelope.
The solution for the cool case displays the same qualitative characteristics
as the solution for the case when the outer parts of the disk are hot: the
interaction between the stream and disk is shockless, a region of enhanced
energy release is formed due to the interaction between the circum-disk halo
and the stream and is located beyond the disk, and the resulting shock is
fairly extended, which is particularly important for explaining the
observations. However, unlike the solution with a high temperature in the
outer regions of the disk~\cite{cit1,cit12,cit14}, in the cool case, the shape of the
zone of shock interaction between the stream and halo is more complex than a
simple ``hot line''. This is due to the sharp increase of the halo density as
the disk is approached. Those parts of the halo that are far from the disk
have low density, and the shock due to their interaction with the stream
is situated along the edge of the stream. As the halo density increases, the
shock bends, and eventually stretches along the edge of the disk.
\section{Conclusions}
\label{sec5}
Our analysis of the basic processes of heating and cooling in accretion
disks in binaries has shown that, for realistic parameters of the accretion
disks in close binary systems (${\raisebox{1pt}{\hbox{$\stackrel{\bullet}{M}$}}}\ \!\!\simeq10^{-12}\div10^{-7}M_\odot/year$
and
$\alpha\simeq10^{-1}\div10^{-2}$), the
gas temperature in the outer parts of the disk is~$10^4$~K to
$\sim10^6$~K.
Previously, we carried out three-dimensional simulations of the flow
structure in close binaries for the case when the temperature of the outer
parts of the accretion disk was 200--500 thousand~K. Those solutions showed
that the interaction between the stream from the inner Lagrange point and
the disk was shockless. To determine the generality of the solution, the
morphology of the flow for different disk temperatures must be considered. We
have presented here the results of simulations for the case when cooling
decreases the temperature to 13~600~K over the entire computation domain.
Our analysis of the flow pattern for the cool outer parts of the disk confirms
that the interaction between the stream and disk is again shockless. The
computations indicate that the solution for the cool disk case displays the
same qualitative features as in the case when the outer parts of the disk
are hot: the interaction between the stream and disk is shockless, a region
of enhanced energy release formed by the interaction between the circum-disk
halo and the stream is located beyond the disk, and the shock wave that is
formed is fairly extended, and can be considered a ``hot line''. The cool
solution demonstrates the universal character of our previous conclusions
that the interaction between the stream and disk in semidetached binaries is
shockless.
\section{Acknowledgments}
This work was supported by the Russian Foundation for Basic Research
(project codes 02-02-16088, 02-02-17642), the State Science and Technology
Program in Astronomy, a Presidential Grant of the Russian Federation
(00-15-96722), the Programs of the Presidium of the Russian Academy of
Sciences ``Mathematical Modeling'' and ``Non-steady State Processes
in Astronomy'', and the INTAS Foundation (grant 01-491).
\section{Introduction}
Cloud infrastructures are widely deployed to support various emerging applications such as Google App Engine, Microsoft Windows Live Service, IBM Blue Cloud, and Apple MobileMe \cite{Sadiku2014}. Large-scale data centers (\emph{DC}s), which are the fundamental engines for data processing, are the essential elements in cloud computing \cite{Zhangyan_13DC, zhangyan_dc_survey}. Information and Communication Technology (\emph{ICT}) is estimated to be responsible for about $14\%$ of the worldwide energy consumption by 2020 \cite{Pickavet2008}. The energy consumption of DCs accounts for nearly 25\% of the total ICT energy consumption \cite{Pickavet2008}. Hence, the energy consumption of DCs becomes a critical problem.
Renewable energy, which includes solar energy and wind power, produced $12.7\%$ of the domestic electricity of the United States in 2011 \cite{Green-cloud2013}. Renewable energy will be widely adopted to reduce the brown energy consumption of ICT \cite{tao2014_magazine}. For example, Parasol is a solar-powered DC \cite{Goiri_GreenDc}. In Parasol, GreenSwitch, a management system, is designed to manage the workloads and the power supplies \cite{Goiri_GreenDc}. The availability of renewable energy varies in different areas and changes over time. The workloads of DCs also vary in different areas and at different times. As a result, the renewable energy availability and the energy demands of DCs usually mismatch with each other. This mismatch leads to inefficient renewable energy usage in DCs. To solve this problem, it is desirable to balance the workloads among DCs according to their green energy availability. Although current cloud computing solutions such as cloud bursting \cite{Wood_cloudNet_2014}, VMware and F5 \cite{VMware_F5} support inter-datacenter (\emph{inter-DC}) virtual machine (\emph{VM}) migration, it is not clear how to migrate VMs among renewable energy powered DCs to minimize their brown energy consumption.
Elastic Optical Networks (\emph{EONs}), by employing orthogonal frequency division multiplexing (\emph{OFDM}) techniques, not only provide a high network capacity but also enhance the spectrum efficiency because of the fine spectrum granularity \cite{Shieh2007}. The granularity in EONs can be 12.5 GHz or even much smaller \cite{Armstrong2009}. Therefore, EONs are one of the promising networking technologies for inter-DC networks \cite{Develder2012}.
Powering DCs with renewable energy can effectively reduce the brown energy consumption, and thus alleviate greenhouse gas emissions. DCs are usually co-located with renewable energy generation facilities such as solar and wind farms \cite{Figuerola_2009}. Since transmitting renewable energy via the power grid may introduce a significant power loss, it is desirable to maximize the utilization of renewable energy in the DC rather than transmitting the energy back to the power grid. In this paper, we investigate the \emph{r}enewable \emph{e}nergy-\emph{a}ware \emph{i}nter-DC VM \emph{m}igration (\emph{RE-AIM}) problem that optimizes the renewable energy utilization by migrating VMs among DCs. Fig. \ref{fig:six-cloud} shows the architecture of an inter-DC network. The vertices in the graph stand for the optical switches in EONs. DCs are connected to the optical switches via IP routers \footnote{In this paper, we focus on the EONs. The design and optimization of the IP networks are beyond the scope of this paper.}. These DCs are powered by hybrid energy including brown energy, solar energy, and wind energy. In migrating VMs among DCs, the background traffic from other applications is also considered in the network. For example, assume that DC $1$ lacks renewable energy while DC $2$ and DC $3$ have surplus renewable energy. Some VMs can be migrated out of DC $1$ in order to save brown energy. Because of the background traffic and the limited network resources, migrating VMs along different paths (Path $1$ or Path $2$) has different impacts on the network in terms of the probability of congesting the network. It is desirable to select a migration path with minimal impact on the network, as sketched below.
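As a toy illustration of this path selection (not the RE-AIM algorithm
itself, which is formulated in Section~\ref{sec:problem}), one may pick,
among candidate paths, the one whose bottleneck link retains the most
free spectrum after carrying the migration traffic:
\begin{verbatim}
def pick_path(paths, load, capacity, demand):
    """Choose the path whose bottleneck link keeps the most free slots."""
    def residual(p):
        return min(capacity[l] - load[l] - demand for l in p)
    best = max(paths, key=residual)
    return best if residual(best) >= 0 else None   # None: would block

paths = [['a-b', 'b-c'], ['a-d', 'd-c']]           # e.g. Path 1, Path 2
load = {'a-b': 70, 'b-c': 40, 'a-d': 30, 'd-c': 35}
cap = dict.fromkeys(load, 100)                     # spectrum slots per link
print(pick_path(paths, load, cap, demand=20))      # -> ['a-d', 'd-c']
\end{verbatim}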
\begin{figure}[!htb]
\centering
\includegraphics[width=0.95\columnwidth]{six-cloud.eps}
\caption{\small Inter-DC architecture.}
\label{fig:six-cloud}
\end{figure}
The rest of this paper is organized as follows. Section \ref{sec:related_work} describes the related work. Section \ref{sec:problem} formulates the RE-AIM problem. Section \ref{sec:analysis-algorithms} briefly analyzes the properties of the RE-AIM problem and proposes two heuristic algorithms to solve it. Section \ref{sec:evaluations} demonstrates the viability of the proposed algorithms via extensive simulation results. Section \ref{conclusion} concludes the paper.
\section{Related Work}\label{sec:related_work}
Owing to the enormous energy demands of DCs, many techniques and algorithms have been proposed to minimize their energy consumption \cite{Ghamkhari_2013}.
Fang \emph{et al.} \cite{Yonggang_Wen_Globemcom_2012} presented a novel power management strategy for DCs whose target is to minimize the energy consumption of switches within a DC. Cavdar and Alagoz \cite{Survey_Green_DC_2012} surveyed the energy consumption of server and network devices in intra-DC networks, and argued that both computing resources and network elements should be designed with energy proportionality; in other words, it is better if the computing and networking devices are designed with multiple sleep states. The survey also summarizes several green metrics, such as Power Usage Effectiveness (PUE) and Carbon Usage Effectiveness (CUE).
Deng \emph{et al.} \cite{Fangming_Liu_IEEE_Network2014} presented five aspects of applying renewable energy in DCs: the renewable energy generation model, the renewable energy prediction model, the planning of green DCs (i.e., various renewable options, availability of energy sources, and different energy storage devices), intra-DC workload scheduling, and inter-DC load balancing. They also discussed the research challenges of powering DCs with renewable energy. Ghamkhari and Mohsenian-Rad \cite{Ghamkhari_2013} developed a mathematical model to capture the trade-off between the energy consumption of a data center and its revenue from offering Internet services. They proposed an algorithm to maximize the revenue of a DC by adapting the number of active servers according to the traffic profile. Gattulli \emph{et al.} \cite{IP_over_WDM_icc_2012} proposed algorithms to reduce $CO_{2}$ emissions in DCs by balancing the loads according to the renewable energy generation. These algorithms optimize renewable energy utilization while maintaining a relatively low blocking probability.
Mandal \emph{et al.} \cite{Green-cloud2013} studied green energy aware VM migration techniques to reduce the energy consumption of DCs. They proposed an algorithm that enhances the green energy utilization by migrating VMs according to the available green energy in DCs. However, they did not consider the network constraints while migrating VMs among DCs. In optical networks, the available spectrum is limited; the large amount of traffic generated by VM migration may congest the optical network and increase its blocking rate. Therefore, it is important to consider the network constraints in migrating VMs. In this paper, we propose algorithms to solve the green energy aware inter-DC VM migration problem under network constraints.
\section{Problem Formulation}
\label{sec:problem}
In this section, we present the network model, the energy model, and the formulation of the RE-AIM problem. The key notations are summarized in Table \ref{tab:notations}.
\begin{table}[!htb]
\begin{center}
\caption{The Important Notations}\label{tab:notations}
\begin{tabular}{{|l|p{185pt}|}}
\hline
Symbol & Definition \\
\hline
$c_{e}$ & The capacity of a link $e \in E$ in terms of spectrum slots.\\
$c_{s}$ & The capacity of a spectrum slot. \\
$c_{m}$ & The maximum number of servers in the $m$th DC. \\
$\varsigma$ & The maximum number of VMs that can be supported by a server. \\
$\varPhi_{m}$ & The amount of renewable energy in the $m$th DC. \\
$\varTheta_{m}$ & The number of VMs in the $m$th DC.\\
$\alpha_{m}$ & Per unit energy cost for the $m$th DC. \\
$\zeta_{m,k}$ & The required bandwidth for migrating the $k$th VM in the $m$th DC. \\
$\mathcal {R}$ & The set of the migration requests.\\
$\mathcal {Q}_{r}$& The set of VMs migrated in the $r$th migration.\\
$\kappa$ & The migration granularity.\\
$w_{p}^{r,m}$ & The used spectrum slot ratio of the $p$th path in the $r$th migration from the $m$th DC.\\
$w_{B}$ & The maximum network congestion ratio.\\
$p_{s}$ & The maximum energy consumption of a server. \\
$\eta$ & The power usage effectiveness (PUE). \\
\hline
\end{tabular}
\end{center}
\end{table}
\subsection{Network Model}\label{sec:network-model}
We model the inter-DC network by a graph, $\mathcal{G}(V, E, B)$. Here, $V$, $E$ and $B$ are the node set, the link set and the spectrum slot set, respectively. The set of DC nodes is denoted as $\mathcal{D}$. We assume that all DCs are powered by hybrid energy. We denote by $\mathcal {D}_{s}$ the set of DCs that do not have sufficient renewable energy to support their workloads and by $\mathcal {D}_{d}$ the set of DCs that have surplus renewable energy. During the migration, $\mathcal {D}_{s}$ and $\mathcal {D}_{d}$ correspond to the two sets of DCs acting as the sources and destinations, respectively. We define $\kappa$ as the migration granularity, which determines the maximum routing resource that can be used in one migration to each DC.
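To make the model concrete, the following Python sketch (our own illustration; the attribute names and helper functions are assumptions, not part of the formal model) represents $\mathcal{G}(V, E, B)$ as a \texttt{networkx} graph with a per-link occupancy bitmap over the spectrum slots, and computes the congestion ratio $w_{p}$ of a path as the fraction of slots occupied on at least one of its links.
\begin{verbatim}
# A minimal sketch of the network model G(V, E, B); names are illustrative.
import networkx as nx
import numpy as np

NUM_SLOTS = 300  # |B|; matches c_e in the evaluation section

def build_inter_dc_network(edges, dc_nodes):
    """Build G(V, E, B) with a free/occupied bitmap on every link."""
    g = nx.Graph()
    g.add_edges_from(edges)
    for u, v in g.edges():
        # False = slot free, True = slot occupied
        g[u][v]["slots"] = np.zeros(NUM_SLOTS, dtype=bool)
    g.graph["dc_nodes"] = set(dc_nodes)
    return g

def congestion_ratio(g, path):
    """w_p: fraction of slots occupied on at least one link of the path
    (a slot is usable only if it is free on every link of the path)."""
    used = np.zeros(NUM_SLOTS, dtype=bool)
    for u, v in zip(path, path[1:]):
        used |= g[u][v]["slots"]
    return float(used.mean())
\end{verbatim}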
\subsection{Energy Model}\label{sec:energy-model}
We assume that there are $c_{m}$ servers in the $m$th DC and each server can support up to $\varsigma$ VMs.
The energy consumption of a server is $p_{s}$ when it is active. A server is active as long as it hosts
at least one active VM; otherwise, the server is in the idle state. Here, we assume that an idling server is turned off and its energy consumption is zero. Then, $\left\lceil \varTheta_{m}/\varsigma \right\rceil$ is the number of active servers required in the $m$th DC \cite{Green-cloud2013}. We denote by $\eta$ the power usage effectiveness, which is defined as the ratio of a DC's total energy consumption (including the facility energy consumption for cooling, lighting, etc. \cite{abbas2015_green_dc}) to that of the servers in the DC. Given $\eta$, a DC's total energy consumption is $\eta \cdot p_{s} \cdot \left\lceil \varTheta_{m}/\varsigma \right\rceil$. We denote by $\varUpsilon_{m}$ the brown energy consumption in the $m$th DC. Then,
\begin{equation}\label{eq:v1}
\varUpsilon_{m}=\max(0, \eta \cdot p_{s} \cdot \left\lceil \varTheta_{m}/\varsigma\right\rceil-\varPhi_{m})
\end{equation}
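For illustration, this energy model translates directly into code. The sketch below (our own illustration with assumed default values; the paper does not fix $\eta$, so the value used here is only a placeholder) computes $\varUpsilon_{m}$ and partitions the DCs into the source set $\mathcal{D}_{s}$ (positive brown energy consumption) and the destination set $\mathcal{D}_{d}$ (renewable surplus).
\begin{verbatim}
import math

def active_servers(num_vms, vms_per_server=10):
    """ceil(Theta_m / varsigma): servers needed to host the VMs."""
    return math.ceil(num_vms / vms_per_server)

def brown_energy(num_vms, renewable, p_s=10.0, eta=1.2):
    """Upsilon_m = max(0, eta * p_s * ceil(Theta_m/varsigma) - Phi_m).
    p_s = 10 units as in Section V; eta = 1.2 is only a placeholder."""
    demand = eta * p_s * active_servers(num_vms)
    return max(0.0, demand - renewable)

def partition_dcs(theta, phi):
    """Split DC ids into D_s (brown energy consumers) and D_d (surplus),
    where theta[m] and phi[m] hold Theta_m and Phi_m."""
    d_s = [m for m in theta if brown_energy(theta[m], phi[m]) > 0]
    d_d = [m for m in theta if brown_energy(theta[m], phi[m]) == 0]
    return d_s, d_d
\end{verbatim}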
\subsection{Problem Formulation}\label{sec:formulation}
In the problem formulation, $\chi_{p,f}^{m,k}$ is a binary variable. $\chi_{p,f}^{m,k}=1$ indicates that the $k$th VM in the $m$th DC is migrated using the $p$th path with the $f$th spectrum slot as the starting spectrum slot. The objective of the RE-AIM problem is to minimize the total brown energy cost in all DCs subject to the VM service constraints and the network resource constraints. The problem is formulated as:
\begin{align}
\label{eq:objective}
\min_{\chi_{p,f}^{m,k}}\quad & \sum_{m} {\alpha_{m} \cdot \varUpsilon_{m}}\\
s.t.:\nonumber & \\
& VM\;service\;constraints:\nonumber\\
&\sum_{m}\sum_{k}\sum_{p}\sum_{f}\chi_{p,f}^{m,k}=\sum_{m}\varTheta_{m}\label{eq:c1}\\
&\sum_{k}\sum_{p}\sum_{f}\chi_{p,f}^{m,k} \leq c_{m}, \forall m\in \mathcal {D}_{s}\label{eq:c2}\\
&\begin{aligned}\label{eq:c3}
&\sum_{m'\in \mathcal {D}_{s}}\sum_{k}\sum_{p}\sum_{f}\chi_{p,f}^{m',k} + \\
&\sum_{k}\sum_{p}\sum_{f}\chi_{p,f}^{m,k} \leq c_{m},\forall m\in \mathcal {D}_{d}
\end{aligned}\\
& Network\;resource\;constraints:\nonumber\\
& w_{p}^{r,m}+ \frac{\varGamma_{p,f}^{r,m}}{c_{e}}\leq w_{B},\quad\forall m\in \mathcal {D}_{s}, r \in\mathcal{R} \label{eq:c4}\\
& f(\chi_{p,f}^{m,k})+b(\chi_{p,f}^{m,k}) \leq c_{e}\label{eq:c5} \\
& f(\chi_{p,f}^{m,k})+b(\chi_{p,f}^{m,k})-f(\chi_{p,f}^{m,k+1})\leq 0\label{eq:c6}\\
& \begin{aligned}\label{eq:c9}
& f(i)+b(i)-f(j)\leq [2-\delta_{i,j}-y(i, j)] \cdot\\
& F_{max}, \quad\forall i \neq j
\end{aligned}\\
& \begin{aligned}\label{eq:c10}
& f(j)+b(j)-f(i)\leq [1+\delta_{i,j}-y(i, j)] \cdot\\
&F_{max}, \quad\forall i \neq j
\end{aligned}
\end{align}
Here, Eqs. \eqref{eq:c1}-\eqref{eq:c3} are the VM service constraints. Eq. \eqref{eq:c1} requires that all the VMs be hosted in the DCs, while Eqs. \eqref{eq:c2}-\eqref{eq:c3} require that the total number of VMs in a DC not exceed the DC's capacity. The network resource constraints are shown in Eqs. \eqref{eq:c4}-\eqref{eq:c10}.
Eq. \eqref{eq:c4} constrains the network congestion ratio to be at most $w_{B}$, the maximum network congestion ratio allowed for routing in the network. In Eq. \eqref{eq:c4}, $w_{p}^{r,m}$ is the used spectrum slot ratio of the $p$th path in the $r$th migration from the $m$th DC, defined as the ratio of the number of occupied spectrum slots in the $p$th path to the total number of spectrum slots of this path. $\varGamma_{p,f}^{r, m}$ is defined as the number of spectrum slots used in the $p$th path for the $r$th migration from the $m$th DC.
Eq. \eqref{eq:c5} is a link capacity constraint: the bandwidth used in migrating VMs must not exceed the capacity of the network links. Here, $b(\cdot)$ is the bandwidth requirement in terms of spectrum slots, and $f(\cdot)$ is the index of the starting spectrum slot of a path. For example, $f(\chi_{p,f}^{m,k})$ represents the starting spectrum slot index of the path used by $\chi_{p,f}^{m,k}$. Eq. \eqref{eq:c6} is the spectrum non-overlapping constraint for a path used by two different VMs in one migration. This constraint must be met for each VM in every migration: if two VMs use the same spectrum slot in one migration, the total bandwidth allocated to the two VMs must not exceed the capacity of a spectrum slot; otherwise, each VM must use a unique spectrum slot. In the migration, the VMs are sorted in ascending order of their bandwidth requirements, and we assume the VMs are migrated in this order; for example, the $(k+1)$th VM is moved after the $k$th VM is migrated.
Eqs. \eqref{eq:c9}-\eqref{eq:c10} are the spectrum non-overlapping and the continuity constraints \cite{Christodoulopoulos2011}. This spectrum non-overlapping constraint is used for different paths. In these constraints, $i$ and $j$ represent two different paths used in the migration. Here, $F_{max}$ is the upper bound of the total bandwidth requirement in terms of spectrum slots. $\delta_{i,j}$ $(\forall i \neq j)$ is a Boolean variable defined in Eq. \eqref{eq:c15}, which equals $1$ if the starting spectrum slot index of the $i$th path is smaller than that of the $j$th path; otherwise, it is $0$. We define $y(i,j)$ $(\forall i \neq j)$ as a Boolean indicator, which equals $1$ if the $i$th path and the $j$th path in the migration have at least one common link; otherwise, it is $0$. We give an example to illustrate these equations. If $y(i, j)=1$ and $\delta_{i,j}=1$, Eq. \eqref{eq:c9} becomes Eq. \eqref{eq:c16}, which ensures the bandwidth non-overlapping constraint. Eq. \eqref{eq:c10} is automatically satisfied in this case.
\begin{equation}\label{eq:c15}
\delta_{i,j}=
\begin{cases}
1, & f(i) < f(j)\\
0, & f(i) \geq f(j)
\end{cases}
\end{equation}
\begin{equation}\label{eq:c16}
f(i)+b(i)\leq f(j)
\end{equation}
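The interplay of $\delta_{i,j}$ and $y(i,j)$ can be checked mechanically. The following Python sketch (our own illustration; the helper names are assumptions) verifies the non-overlapping constraints of Eqs. \eqref{eq:c9}-\eqref{eq:c10} for two candidate allocations, where each allocation is given by its path, its starting slot $f$ and its width $b$.
\begin{verbatim}
def paths_share_link(path_i, path_j):
    """y(i, j): 1 if the two paths have at least one common link."""
    links_i = {frozenset(l) for l in zip(path_i, path_i[1:])}
    links_j = {frozenset(l) for l in zip(path_j, path_j[1:])}
    return int(bool(links_i & links_j))

def non_overlapping(f_i, b_i, f_j, b_j, path_i, path_j):
    """Check the spectrum non-overlapping constraints for paths i, j:
    allocations on link-sharing paths must occupy disjoint slot ranges."""
    if not paths_share_link(path_i, path_j):
        return True  # y(i, j) = 0: the constraints are vacuous
    delta_ij = 1 if f_i < f_j else 0
    if delta_ij:
        return f_i + b_i <= f_j  # allocation i sits strictly below j
    return f_j + b_j <= f_i      # allocation j sits strictly below i
\end{verbatim}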
When we provision spectrum slots for requests in EONs, the path continuity constraint, the spectrum continuity constraint and the non-overlapping constraint must be considered. For the path continuity constraint, a lightpath must use the same subcarriers along the whole path of a request. For the spectrum continuity constraint, the used subcarriers must be contiguous if a request needs more than one subcarrier. For the non-overlapping constraint, two different lightpaths must be assigned different subcarriers if they share one or more links. Since we use a path-based method to formulate the RE-AIM problem, the path continuity constraint of the network is already taken into account.
The main contribution of this paper is to take the influence of the migration on the network into account while minimizing the brown energy consumption of the DCs. In other words, we want the migration to have a controllable effect on the network, leading to less network congestion.
\section{Problem Analysis and Heuristic Algorithms }\label{sec:analysis-algorithms}
\subsection{Problem Analysis}\label{sec:analysis}
To solve the RE-AIM problem, both the energy costs in DCs and the network resources required for the migration should be considered. For example, when a DC consumes brown energy, it is desirable to migrate some VMs to other DCs. The VM migration introduces additional traffic into the network. To avoid congesting the network, we have to optimize the number of VMs to be migrated and select the routing paths for the migration. Solving RE-AIM is therefore challenging; indeed, the problem is NP-hard.
\begin{lemma}
The RE-AIM problem is NP-hard.
\end{lemma}
\begin{proof}
We prove that the RE-AIM problem is NP-hard by reducing any instance of the multi-processor scheduling problem (\emph{MPS}) to the RE-AIM problem. In brief, an MPS instance can be encoded by identifying the processors with DCs and the tasks with VMs, and by choosing the renewable supplies and the network capacities so that the network constraints are not binding; an optimal solution of the resulting RE-AIM instance then yields an optimal schedule for the MPS instance.\end{proof}
In the RE-AIM problem, without the network constraints, the optimal number of VMs hosted in each DC can be derived according to the availability of the renewable energy. With the network constraints and the background traffic taken into account, however, it is difficult, if not impossible, to solve the RE-AIM problem exactly online. In the RE-AIM problem, many VMs are migrated from one set of DCs (the source DCs) to another set of DCs (the destination DCs); we can therefore model the VM migration problem as a manycast problem. Since the RE-AIM problem is NP-hard, we propose heuristic algorithms to solve it. These algorithms determine which VM should be migrated to which DC and select a proper routing path so as to avoid congesting the network. We consider two network scenarios. The first one is a network with a light traffic load, for which we design the Manycast with Shortest Path Routing (\emph{Manycast-SPR}) algorithm. The second one is a network with a heavy traffic load, for which we propose the Manycast with Least-Weight Path Routing (\emph{Manycast-LPR}) algorithm.
\subsection{Heuristic Algorithms For Light Work Loads}\label{sec:algorithm_Manycast-SPR}
When the network load is light, more spectrum slots are available and it is easy to find a path with available spectrum slots for the migration requests. In this case, an algorithm with a low computational complexity is preferred. Manycast-SPR only uses the shortest path and is thus a very simple algorithm. Hence, Manycast-SPR is expected to provision the inter-DC VM migration requests well in a network with light workloads.
The Manycast-SPR algorithm, as shown in Alg. \ref{Manycast-SPR}, finds the shortest routing path that satisfies the VM migration requirement and the network resource constraints. In the beginning, we input $\mathcal{G}(V, E, B)$, $\varTheta_{m}$ and $\varPhi_{m}$, and calculate the optimal workload distribution, from which we obtain $\mathcal {D}$, $\mathcal {D}_{s}$ and $\mathcal {D}_{d}$. Then, we collect the migration requests $R$; the algorithm splits the manycast requests into many anycast requests $r \in R$. Next, we find a source DC $s$ and a destination DC $d$ for the request $r$. The migration tries to use the shortest path $p$ from $s$ to $d$; the request $r$ is carried out if the network congestion constraint is satisfied, and denied otherwise. Then, we update $\mathcal {D}_{s}$ and $\mathcal {D}_{d}$ for the next request. After several rounds of migration, if $\mathcal {D}_{s}$ or $\mathcal {D}_{d}$ is empty, or Eq. (\ref{eq:c4}) is not satisfied, the migration is completed.
Details of the Manycast-SPR algorithm are described in \emph{Algorithm} \ref{Manycast-SPR}. Here, $p(\cdot)$ is a function that returns the path used for the migration. The complexity of Manycast-SPR is $O(|B|\,|E|^{2}\,|\mathcal{R}|\,|\mathcal{Q}_{r}|\,|\mathcal{D}|^{2}c_{m}\varsigma)$. Here, $O(|\mathcal{D}|^{2}c_{m}\varsigma)$ is the complexity of determining the optimal workloads, $O(|B|)$ is the complexity of determining $f$, and $O(|\mathcal{R}|\,|\mathcal{Q}_{r}|)$ is the complexity of building the VM set for the migration. $O(|E|^{2})$ is the complexity of determining the path $p$ for Manycast-SPR.
\subsection{Heuristic Algorithms For Heavy Work Loads}\label{sec:algorithm_Manycast-LPR}
When the workload of the network is heavy, the number of available spectrum slots in the network is limited. Since Manycast-SPR only uses the shortest path (a single path) for routing, it may fail to find an available path and spectrum slots in this scenario. Manycast-SPR may then block the migration request, leading to a high brown energy consumption of the DCs. Hence, we propose another algorithm, Manycast-LPR, to achieve better routing performance and thus a lower brown energy consumption. Manycast-LPR checks the $K$ shortest paths from the source node to the destination node and selects the least-loaded one to serve the requests. The requests are thus provisioned with a higher probability by Manycast-LPR than by Manycast-SPR. In summary, Manycast-LPR is expected to provision the inter-DC VM migration requests under a heavy workload: it targets a path with more available spectrum slots at the expense of a higher complexity.
Manycast-LPR, as shown in Alg. \ref{Manycast-LPR}, finds the least-weight routing path that satisfies the VM migration requirement and the network resource constraints. The main difference between Manycast-LPR and Manycast-SPR is the way a path is found. Manycast-SPR first determines the source node and the destination node; Manycast-LPR, in contrast, finds the path first and then uses the path to determine the source node and the destination node. The other steps are almost the same. Since Manycast-LPR has to calculate the weights of all node pairs to find a path, its complexity is higher.
Details of the Manycast-LPR algorithm are described in \emph{Algorithm} \ref{Manycast-LPR}. The complexity of Manycast-LPR is $O(K |B|\,|E|^{2}\,|\mathcal{R}|\,|\mathcal{Q}_{r}|\,|\mathcal{D}|^{3} c_{m}\varsigma)$. Here, $p(\cdot)$ is a function that returns the path used for the migration, $O(|\mathcal{D}|^{2}c_{m}\varsigma)$ is the complexity of determining the optimal workloads, $O(|B|)$ is the complexity of determining $f$, and $O(|\mathcal{R}|\,|\mathcal{Q}_{r}|)$ is the complexity of building the VM set for the migration. $O(K|E|^{2}|\mathcal{D}|)$ is the complexity of determining the path $p$ for Manycast-LPR. The most complex part is determining the set of VMs for the migration.
\begin{algorithm}
\SetKwData{Left}{left}\SetKwData{This}{this}\SetKwData{Up}{up}
\SetKwFunction{Union}{Union}\SetKwFunction{FindCompress}{FindCompress}
\SetKwInOut{Input}{Input}\SetKwInOut{Output}{Output}
\Input{$\mathcal{G}(V, E, B)$, $\varTheta_{m}$ and $\varPhi_{m}$\;}
\Output{$\mathcal {D}$, $\mathcal {Q}_{r}$, $p(\mathcal {Q}_{r})$, $f(\mathcal {Q}_{r})$ and $\varGamma_{p,f}^{r}$, $r\in R$\;}
\nl Build $\mathcal {D}$, $\mathcal {D}_{s}$ and $\mathcal {D}_{d}$ by the optimal workload allocation\;
\nl Collect manycast requests $R$\;
\nl \While{$\mathcal {D}_{s}$ and $\mathcal {D}_{d}$ are not empty}{
\nl calculate the network congestion ratio for all $p$, and get $w_{B}$\;
\nl \For{all nodes $s \in \mathcal {D}_{s}$}{
\nl find the $s$ with the maximum number of migratory VMs as the source node\;}
\nl \For {all nodes $d \in \mathcal {D}_{d}$}{
\nl find $d$ with the max available renewable energy as the destination node\;}
\nl get the shortest path $p(\mathcal {Q}_{r})$ from $s$ to $d$ for the $r$th migration\;
\nl build $\mathcal {Q}_{r}$ for the $r$th migration according to $p(\mathcal {Q}_{r})$ and Eqs. \eqref{eq:c9}-\eqref{eq:c10}\;
\nl \If {Eq. (\ref{eq:c4}) is satisfied}{
\nl path $p(\mathcal {Q}_{r})$ is used to migrate\;
\nl find the start spectrum slot index $f(\mathcal {Q}_{r})$ in $B$ \;
\nl get the allocated bandwidth $\varGamma_{p,f}^{r}$ \;
\nl update $\mathcal {D}_{s}$ and $\mathcal {D}_{d}$\;}
\nl \Else{
\nl return\; } }
\caption{Manycast with Shortest Path Routing\label{Manycast-SPR}}
\end{algorithm}
\begin{algorithm}
\SetKwData{Left}{left}\SetKwData{This}{this}\SetKwData{Up}{up}
\SetKwFunction{Union}{Union}\SetKwFunction{FindCompress}{FindCompress}
\SetKwInOut{Input}{Input}\SetKwInOut{Output}{Output}
\Input{$\mathcal{G}(V, E, B)$, $\varTheta_{m}$ and $\varPhi_{m}$\;}
\Output{$\mathcal {D}$, $\mathcal {Q}_{r}$, $p(\mathcal {Q}_{r})$, $f(\mathcal {Q}_{r})$ and $\varGamma_{p,f}^{r}$, $r\in R$\;}
\nl Build $\mathcal {D}$, $\mathcal {D}_{s}$ and $\mathcal {D}_{d}$ by the optimal workload allocation\;
\nl Collect manycast requests $R$\;
\nl \While{$\mathcal {D}_{s}$ and $\mathcal {D}_{d}$ are not empty}{
\nl calculate the network congestion ratio for all $p$, and get $w_{B}$\;
\nl \For{all nodes $s \in \mathcal {D}_{s}$}{
\nl \For {all nodes $d \in \mathcal {D}_{d}$}{
\nl build K-shortest path set $\mathcal {P}$\;
\nl \For {path $p \in \mathcal {P}$}{
\nl get path $p(\mathcal {Q}_{r})$ with the lowest congestion ratio for the $r$th migration\; }}}
\nl build $\mathcal {Q}_{r}$ for the $r$th migration according to $p(\mathcal {Q}_{r})$ and Eqs. \eqref{eq:c9}-\eqref{eq:c10}\;
\nl \If {Eq. (\ref{eq:c4}) is satisfied}{
\nl path $p(\mathcal {Q}_{r})$ is used to migrate\;
\nl find the start spectrum slot index $f(\mathcal {Q}_{r})$ in $B$\;
\nl get the allocated bandwidth $\varGamma_{p,f}^{r}$\;
\nl update $\mathcal {D}_{s}$ and $\mathcal {D}_{d}$\;}
\nl \Else{
\nl return\; } }
\caption{Manycast with Least-weight Path Routing\label{Manycast-LPR}}
\end{algorithm}
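To complement the pseudocode, the following Python sketch (our own illustration, reusing the hypothetical helpers from the earlier listings; it simplifies Alg. \ref{Manycast-LPR} by fixing the source--destination pair before scanning the $K$ shortest paths, and abbreviates the congestion test of Eq. \eqref{eq:c4}) shows the common skeleton of both algorithms: with $k=1$ it behaves like Manycast-SPR, and with $k>1$ it picks the least congested of the $K$ shortest paths as in Manycast-LPR.
\begin{verbatim}
from itertools import islice
import networkx as nx

def candidate_path(g, s, d, k=1):
    """k = 1: the shortest path (Manycast-SPR);
    k > 1: least congested of the k shortest paths (Manycast-LPR)."""
    paths = islice(nx.shortest_simple_paths(g, s, d), k)
    return min(paths, key=lambda p: congestion_ratio(g, p))

def migrate(g, theta, phi, w_b, k=1):
    d_s, d_d = partition_dcs(theta, phi)
    while d_s and d_d:
        s = max(d_s, key=lambda m: theta[m])  # most VMs to migrate
        d = max(d_d, key=lambda m: phi[m])    # most surplus energy
        path = candidate_path(g, s, d, k)
        if congestion_ratio(g, path) >= w_b:
            return  # the migration would congest the network: stop
        # ... allocate slots along `path`, migrate a batch of kappa
        # VMs, and update theta and phi accordingly (omitted) ...
        d_s, d_d = partition_dcs(theta, phi)
\end{verbatim}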
\section{Performance Evaluations}\label{sec:evaluations}
We evaluate the proposed algorithms for the RE-AIM problem in this section. To keep the RE-AIM problem simple, we assume that each VM migration can be completed within one time slot. The NSFNET topology, shown in Fig. \ref{fig:NSF-green}, is used for the simulation. There are 14 nodes, and the DCs are located at $\mathcal {D}= \{3, 5, 8, 10, 12\}$ \cite{Zhang2014, zhang-osa}. The DCs are assumed to be equipped with wind turbines and solar panels, which provide them with renewable energy, as shown in Fig. \ref{fig:NSF-green}. The constant $\alpha$ is randomly generated from $[1.6, 3.2]$ and represents the varying price of the electric grid. The capacity of a spectrum slot $c_{s}$ is set to 12.5 Gb/s. The maximum number of slots $c_{e}$ is set to 300, i.e., 300 spectrum slots are available when the network is empty. We assume $\varsigma$ equals 10, i.e., 10 VMs can run on one server. $K$ is set to 3, i.e., the maximum number of shortest paths that can be used by Manycast-LPR is 3. Without loss of generality, the average energy consumption of a VM is assumed to be 1 unit, implying that $p_{s}$ equals 10 units. The VM bandwidth requirement $\zeta_{m,k}$ is randomly selected from $[1, 14]$ Gb/s. The migration requests are generated by the optimal workload distribution, which is calculated based on $\varTheta_{m}$ and $\varPhi_{m}$. The background traffic is randomly generated between node pairs in the network. The background traffic load is measured by the average $\frac{\lambda}{\mu}$, where $\lambda$ is the average arrival rate of the requests and $\frac{1}{\mu}$ is the holding period of each request \cite{Zhang2014}. Here, the background traffic arrival process is a Poisson process, and the holding times are exponentially distributed. The parameters used for the evaluation are summarized in Table \ref{tab:simulation-parameters}.
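As an illustration of the traffic model (the function and parameter names below are our own, not from the simulator), background requests with offered load $\lambda/\mu$ can be generated as follows; the node pairs and bandwidths of the requests are omitted for brevity.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(7)

def background_requests(load, mu=1.0, horizon=1000.0):
    """Poisson arrivals with rate lambda = load * mu and exponential
    holding times with mean 1/mu, so the offered load is lambda/mu."""
    lam = load * mu
    t, requests = 0.0, []
    while True:
        t += rng.exponential(1.0 / lam)  # exponential inter-arrival
        if t > horizon:
            return requests              # (arrival, departure) pairs
        requests.append((t, t + rng.exponential(1.0 / mu)))
\end{verbatim}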
\begin{figure}[t]
\centering
\includegraphics[width=0.95\columnwidth]{NSF-green.eps}
\caption{\small NSFNET topology with renewable energy DCs.}
\label{fig:NSF-green}
\end{figure}
\begin{table}[!htb]
\begin{center}
\caption{\small Simulation Parameters}\label{tab:simulation-parameters}
\begin{tabular}{{|l|p{140pt}|}}
\hline
Network topology & NSFNET\\
\hline
$\mathcal {D}$ & \{3, 5, 8, 10, 12\} \\
\hline
$\varsigma$ & $10$ VMs \\
\hline
$c_{m}$ & $1000$ servers \\
\hline
$\alpha=\{\alpha_{1},\alpha_{2},...,\alpha_{m}\}$& $\{2.1, 2.5, 1.9, 2.8, 2\}$ \\
\hline
$\varTheta_{m}$ & $[0, 8000]$\\
\hline
$\varPhi_{m}$ & $[1000, 9000]$ \\
\hline
$p_{s}$ & $10$ units, $1$ unit for $1$ VM in average \\
\hline
$\zeta_{m,k}$ & $[1, 14]$ Gb/s \\
\hline
$c_{e}$ & $300$ spectrum slots \\
\hline
$c_{s}$ & $12.5$ Gbps \\
\hline
$\kappa$ & $\{2, 4, 8, 16\} $ spectrum slots \\
\hline
$\frac{\lambda}{\mu}$ & $\{40, 80, 120, 160, 200, 240, 280, 320\}$ \\
\hline
\end{tabular}
\end{center}
\end{table}
We run the simulation 150 times and exclude the scenarios without migration requests, i.e., we require $\mathcal{D}_{s}\neq\varnothing$ and $\mathcal{D}_{d}\neq\varnothing$. Fig. \ref{fig:compare_energy} shows the total brown energy cost of the strategy without migration, of Manycast-SPR $(\kappa =2)$ and of Manycast-LPR $(\kappa = 2)$. Clearly, Manycast-SPR and Manycast-LPR save brown energy substantially: Manycast-SPR saves about $15\%$ of the brown energy cost as compared with the strategy without migration, and Manycast-LPR reduces the brown energy cost by up to $31\%$. Manycast-LPR performs better because it employs the least-weight path $p$ over all node pairs for routing, while Manycast-SPR uses only the shortest path $p$ of one node pair.
\begin{figure}[!htb]
\centering
\includegraphics[width=1.0\columnwidth]{compare_energy.eps}
\caption{\small Total brown energy cost comparison.}
\label{fig:compare_energy}
\end{figure}
\begin{figure}[!htb]
\centering
\includegraphics[width=1.0\columnwidth]{compare_time.eps}
\caption{\small Running time comparison.}
\label{fig:compare_time}
\end{figure}
For a more complete comparison, the running times of Manycast-SPR $(\kappa = 2)$ and Manycast-LPR $(\kappa = 2)$ are shown in Fig. \ref{fig:compare_time}. Manycast-SPR spends less time than Manycast-LPR, reflecting its lower computational complexity. This also illustrates the trade-off between the running time and the final cost: Manycast-LPR is more complex and, in return, incurs a lower brown energy cost.
\begin{figure}[!htb]
\centering
\includegraphics[width=1.0\columnwidth]{cm1000ks1.eps}
\caption{\small Total brown energy cost of Manycast-SPR.}
\label{fig:cm1000ks1}
\end{figure}
\begin{figure}[!htb]
\centering
\includegraphics[width=1.0\columnwidth]{cm1000ks2.eps}
\caption{\small Total brown energy cost of Manycast-LPR.}
\label{fig:cm1000ks2}
\end{figure}
The results of Manycast-SPR for various $\kappa$ are shown in Fig. \ref{fig:cm1000ks1}. The brown energy cost keeps increasing as the background traffic increases, because a high background traffic tends to congest the network links and leads to more migration failures. Apparently, a small $\kappa$ brings more benefit than a large $\kappa$ in reducing the energy cost.
Fig. \ref{fig:cm1000ks2} shows the results of Manycast-LPR for various $\kappa$. The trends are almost the same as in Fig. \ref{fig:cm1000ks1}, but the brown energy cost is much lower than in Fig. \ref{fig:cm1000ks1}, because Manycast-LPR can more easily find a path with available bandwidth for the migration. Manycast-LPR with $\kappa =2$ achieves the best result, i.e., the lowest brown energy cost. All these results illustrate that a small $\kappa$ leads to a lower brown energy cost while a large $\kappa$ induces a higher one; the reason is that it is difficult to find a path with enough bandwidth for a large $\kappa$ when the network carries background traffic. A smaller $\kappa$ thus achieves a lower energy cost at the expense of a higher complexity.
Figs. \ref{fig:SPR_time} and \ref{fig:LPR_time} show the running times of Manycast-SPR and Manycast-LPR for different $\kappa$, respectively. We observe that the computing time decreases as the traffic load increases. For the same $\kappa$ under a given background traffic load, Manycast-SPR consumes less time than Manycast-LPR does. For either of the two algorithms under a specific background traffic load, the running time is nearly halved when $\kappa$ is doubled. Hence, a smaller $\kappa$ yields better performance but takes a longer time, whereas a larger $\kappa$ yields worse performance with a shorter running time.
\begin{figure}[!htb]
\centering
\includegraphics[width=1.0\columnwidth]{SPR_time.eps}
\caption{\small Running time of Manycast-SPR.}
\label{fig:SPR_time}
\end{figure}
\begin{figure}[!htb]
\centering
\includegraphics[width=1.0\columnwidth]{LPR_time.eps}
\caption{\small Running time of Manycast-LPR.}
\label{fig:LPR_time}
\end{figure}
\section{Conclusion}\label{conclusion}
Datacenters are widely deployed to meet the increasing demands of data processing and cloud computing. The energy consumption of DCs is estimated to account for nearly $25\%$ of the total ICT energy consumption by 2020. Powering DCs with renewable energy can help save brown energy. However, the availability of renewable energy varies by location and changes over time, and the workload demands of DCs also vary by location and time, leading to a mismatch between the renewable energy supplies and the workload demands in DCs. Inter-DC VM migration brings additional traffic into the network, and the VM migration is constrained by the network capacity, rendering inter-DC VM migration a great challenge.
This paper addresses the emerging renewable energy-aware inter-DC VM migration problem. The main contribution of this paper is to limit the influence of the migration on the network while minimizing the brown energy consumption of the DCs. The RE-AIM problem is formulated and proven to be NP-hard. Two heuristic algorithms, Manycast-SPR and Manycast-LPR, have been proposed to solve the RE-AIM problem. Our results show that Manycast-SPR saves about $15\%$ of the brown energy cost as compared with the strategy without migration, while Manycast-LPR saves about $31\%$. The computing time of Manycast-LPR is longer than that of Manycast-SPR because the complexity of Manycast-LPR is higher than that of Manycast-SPR. In conclusion, we have demonstrated the viability of the proposed algorithms in minimizing the brown energy consumption of inter-DC migration without congesting the network.
\bibliographystyle{IEEEtran}
\section{Introduction}\label{sec_intro}
We consider one-dimensional $\alpha$-stable L\'evy processes with scaling index $\alpha\in (0,2)$ killed on entering the interval $[-1,1]$. In Döring et al. \cite{Doer_Kyp_Wei_01} the authors found a positive invariant function (sometimes called positive harmonic function) for such killed processes, i.e. a function $h:\mathbb{R} \setminus [-1,1] \rightarrow (0,\infty)$ such that
\begin{align}\label{invariant}
\mathbb{E}^x \big[ \mathds{1}_{\{ t < T_{[-1,1]} \}} h(\xi_t) \big] = h(x), \quad x \notin [-1,1], t\geq 0,
\end{align}
where $T_{B} := \inf\left\lbrace t \geq 0: \xi_t \in B \right\rbrace$ denotes the first hitting time of an open or closed set $B$.
The invariant function was used to condition the stable processes to avoid the interval and to relate the conditioned processes to their $h$-transformed path measure:
$$\mathbb{E}^x \Big[ \mathds{1}_\Lambda \mathds{1}_{\{ t < T_{[-1,1]} \}} \frac{h(\xi_t)}{h(x)} \Big] = \lim_{s \rightarrow \infty} \mathbb{P}^x(\Lambda \, | \, s+t < T_{[-1,1]}), \quad x \notin [-1,1], t\geq 0,
$$
for $\Lambda \in \mathcal{F}_t$, where $(\mathcal{F}_t)_{t \geq 0}$ is the natural enlargement of the filtration induced by $\xi$. As for other processes conditioned to avoid sets, the conditioned stable processes are transient. As a counterpart, the present article studies the question whether stable processes can also be conditioned to hit the interval continuously in finite time. \smallskip
In recent years the problem of conditioning processes to hit a given set $B$ continuously has attracted some attention. As an example take a stable process of index smaller than $1$ and a singleton $B = \{0\}$. Moreover, denote $\rho:= \mathbb{P}(\xi_1>0)$ and $\hat\rho := 1-\rho$. It was proved in Kyprianou et al. \cite{Kyp_Riv_Sat_01} that
$$e:\mathbb{R} \setminus \left\lbrace 0 \right\rbrace \rightarrow (0,\infty), x \mapsto \begin{cases}
\sin(\pi\alpha\hat\rho) x^{\alpha-1} \quad &\text{if } x>0 \\
\sin(\pi\alpha\rho) |x|^{\alpha-1} \quad &\text{if } x<0
\end{cases},
$$
is excessive for the killed process, i.e.
$$\mathbb{E}^x \big[ \mathds{1}_{\{ t < T_{\{ 0 \}} \}} e(\xi_t) \big] \leq e(x), \quad x \neq 0, t \geq 0,$$
and the $h$-transform with $e$ coincides with the stable process conditioned to hit $0$ continuously. Indeed, the authors showed that the killing time is finite almost surely and the left-limit at the killing time is $0$. Applications of the conditioned processes have been found for instance in the study of entrance and exit at infinity of stochastic differential equations driven by stable processes, see Döring and Kyprianou \cite{Doer_Kyp_01}.\smallskip
In this article we will derive (strictly) excessive functions for the stable process killed on entering an interval, without loss of generality the interval $[-1,1]$, i.e. functions $v: \mathbb{R} \setminus [-1,1] \rightarrow (0,\infty)$ such that
\begin{align}\label{def_exc}
\mathbb{E}^x \big[ \mathds{1}_{\{ t < T_{[-1,1]} \}} v(\xi_{t}) \big] \leq v(x), \quad x \notin [-1,1],t\geq 0.
\end{align}
Unfortunately, the corresponding $h$-transformed process is not self-similar and hence, we can not follow the strategy of \cite{Kyp_Riv_Sat_01} to show that this process hits the interval continuously. A second example for a process conditioned to be absorbed by a set due to Chaumont \cite{Chau_01} uses another way of showing continuous absorption and it will turn out that this way is the right one in our setting, too. Under some assumptions the author conditioned a Lévy process to be continuously absorbed by $0$ from above, i.e. to hit $(-\infty,0]$ continuously from the outside. The tool which was used is again a Doob $h$-transform with an excessive function $u:(0,\infty) \rightarrow (0,\infty)$ which has the additional condition that, for any compact $K \subseteq (0,\infty)$,
\begin{align}
\mathbb{E}^x \big[ \mathds{1}_{\{ T_{K^\mathrm{C}} < T_{(-\infty,0]} \}} u(\xi_{T_{K^\mathrm{C}}}) \big] = u(x), \quad x > 0.
\end{align}
Such an excessive function is called harmonic. In Silverstein \cite{Sil_01} it was shown that in Chaumont's setting the role of $u$ is played by the potential density of the dual ladder height process. Denoting the $h$-transformed process by $(\xi,\mathbb{P}^x_{u})$ and its killing time by $\zeta$, one sees that
\begin{align}\label{h-trafo_Chau}
\mathbb{P}^x_{u}(T_{K^\mathrm{C}}<\zeta) = \mathbb{E}^x \Big[ \mathds{1}_{\{ T_{K^\mathrm{C}} < T_{(-\infty,0]} \}} \frac{u(\xi_{T_{K^\mathrm{C}}})}{u(x)} \Big] = 1.
\end{align}
This shows that the $h$-transformed process leaves all compact sets before it is killed. Chaumont even went further and extended (\ref{h-trafo_Chau}) to sets of the form $K=[a,\infty)$, which shows that the $h$-transformed process hits any set of the form $(0,a)$, $a >0$, before killing; thus, absorption at $0$ is continuous.\smallskip
Before presenting our results, we introduce the most important definitions. More details can be found, for example, in Chung and Walsh \cite{Chu_Wal_01}, Bertoin \cite{Bert_01}, Kyprianou \cite{Kyp_03}
or Sato \cite{Sat_01}.\smallskip
\textbf{Stable processes:} We consider the canonical process ${\xi}$ on the space of càdlàg paths equipped with the $\sigma$-algebra $\mathcal{F}$ induced by the Skorohod topology. We denote by $\mathbb{P}^x$ the probability measure on the path space that makes ${\xi}$ a stable process started from $x \in \mathbb{R}$. Stable processes are L\'evy processes that fulfill the scaling property
\begin{align}\label{scaling}
\left((c {\xi}_{c^{-\alpha}t})_{t \geq 0},\mathbb{P}^x \right) \overset{(d)}{=} \left(({\xi}_{t})_{t \geq 0},\mathbb{P}^{cx}\right)
\end{align}
for all $x \in \mathbb{R}$ and $c>0$, where $\alpha$ is the index of self-similarity. It turns out to be necessary that $\alpha\in (0,2]$ with $\alpha=2$ corresponding to the Brownian motion. The continuity of sample paths excludes the Brownian motion from our study so we restrict to $\alpha\in (0,2)$. As a Lévy process stable processes are characterised entirely by the L\'evy triplet. For $\alpha<2$, the linear and Brownian part vanish and the L\'evy measure is
\begin{align*}
\Pi(\dd x) = \frac{\Gamma(\alpha+1)}{\pi} \left\{
\frac{ \sin(\pi \alpha \rho) }{ x^{\alpha+1}} \mathds{1}_{\{x > 0\}} + \frac{\sin(\pi \alpha \hat\rho)}{ {|x|}^{\alpha+1} }\mathds{1}_{\{x < 0\}}
\right\}\dd x,\quad x \in \mathbb{R},
\end{align*}
where
$\rho:=\mathbb{P}^0({\xi}_1 \geq 0)$ is the positivity parameter. For $\alpha \in (0,1)$ we exclude the case $\rho \in \left\lbrace 0,1 \right\rbrace$, in which $\xi$ is (the negative of) a subordinator. For $\alpha \in (1,2)$ it is known that $\rho\in[1- {1}/{\alpha}, {1}/{\alpha} ]$ and we exclude the boundary cases $\rho \in \left\lbrace 1-{1}/{\alpha}, {1}/{\alpha} \right\rbrace$, in which $\xi$ has one-sided jumps. For $\alpha=1$ we consider the symmetric Cauchy process excluding drift. The normalisation was chosen so that the characteristic exponent satisfies
\[
\mathbb{E}^x[{ e}^{{ i}\theta( {\xi}_1-x)}] = { e}^{-|\theta|^\alpha}, \quad \theta\in\mathbb{R}.
\]
An important fact we will use for the parameter regimes is that the stable process exhibits (set) transience and (set) recurrence according to whether $\alpha\in (0,1)$ or $\alpha\in[1,2)$. When $\alpha\in(1,2)$ the notion of recurrence is even stronger in the sense that fixed points are hit with probability one. \smallskip
\textbf{Killed L\'evy processes and $h$-transforms:}
The killed transition measures are defined as
$$p^{[-1,1]}_t(x,\dd y)=\mathbb{P}^x(\xi_t \in \dd y, t< T_{[-1,1]}), \quad t\geq 0.$$
The corresponding sub-Markov process is called the L\'evy process killed in $[-1,1]$.
An excessive function for the killed process is a measurable function $v : \mathbb{R} \backslash [-1,1] \rightarrow [0,\infty)$ such that
\begin{align}\label{eq_def_harm}
\mathbb{E}^x \big[ \mathds{1}_{\{ t < T_{[-1,1]} \}} v(\xi_t) \big] \leq v(x),
\quad x \in \mathbb{R} \backslash [-1,1], t \geq 0.
\end{align}
An excessive function taking only strictly positive values is called a positive excessive function. When $v$ is a positive excessive function, the associated Doob $h$-transform is defined via the change of measure
\begin{align}\label{def_htrafo}
\mathbb{P}_v^x(\Lambda, t < \zeta) := \mathbb{E}^x \Big[ \mathds{1}_\Lambda \mathds{1}_{\{ t < T_{[-1,1]} \}} \frac{v(\xi_t)}{v(x)} \Big],\quad x\in\mathbb{R} \backslash [-1,1],
\end{align}
for $\Lambda \in \mathcal{F}_t$, where $\zeta$ is the (possibly infinite) killing time of the process.
From Chapter 11 of Chung and Walsh \cite{Chu_Wal_01}, we know that under $\mathbb{P}_v^{x}$ the canonical process is a strong Markov process and that (\ref{def_htrafo}) extends from deterministic times to $(\mathcal{F}_t)_{t \geq 0}$-stopping times $T$;
that is,
\begin{align}\label{eq_htrafo_stoppingtime}
\mathbb{P}_{v}^x(\Lambda, T <\zeta) = \mathbb{E}^x \Big[\mathds{1}_\Lambda \mathds{1}_{\{ T<T_{[-1,1]} \}} \frac{v(\xi_T)}{v(x)} \Big], \quad x \notin [-1,1],
\end{align}
for $\Lambda\in \mathcal F_T$. An excessive function $v: \mathbb{R} \setminus [-1,1] \rightarrow (0,\infty)$ which fulfills
\begin{align}\label{def_harmonic}
\mathbb{E}^x \Big[ \mathds{1}_{\{ T_{K^\mathrm{C}} < T_{[-1,1]} \}} v(\xi_{T_{K^\mathrm{C}}}) \Big] = v(x), \quad x \notin [-1,1],
\end{align}
for all compact $K \subseteq \mathbb{R}\setminus [-1,1]$ is called a positive harmonic function.
\begin{remark}
The terminology of a harmonic function is not used consistently in the literature. In many articles (also including \cite{Doer_Kyp_Wei_01}) the notion of a harmonic function coincides with the notion of an invariant function in the sense of (\ref{invariant}). Here, we will always use the notion of a harmonic function for an excessive function which fulfills the additional condition \eqref{def_harmonic}.
The crucial point about positive harmonic functions in the sense of this article is that the corresponding $h$-transformed process leaves all compact sets before being killed, see \eqref{h-trafo_Chau}.
\end{remark}
\section{Main results}
The main results of this article are two-fold. We first identify new harmonic functions (in the sense of \eqref{def_harmonic}) for the stable processes killed in the unit interval. From these harmonic functions we define $h$-transformed measures which we then identify as the limiting measures of suitable conditionings that force the process to be absorbed at the boundary of the interval. The different possible cases of absorption at the top or the bottom of the interval will be reflected in the existence of different harmonic functions and their linear combinations.
\subsection{Harmonic functions} \label{sec_harm}
In this first section we identify two (minimal) harmonic functions. Let us define two functions $v_1, v_2: \mathbb{R} \setminus [-1,1] \rightarrow (0,\infty)$ by
$$v_1(x):=
\begin{cases}
\sin(\pi\alpha\hat{\rho}) \Big[(x+1)\psi_{\alpha\rho}(x) - (\alpha-1)_+ \int\limits_1^x \psi_{\alpha\rho}(u) \, \dd u \Big] \quad &\text{if } x>1 \\
\sin(\pi\alpha\rho) \Big[(|x|-1) \psi_{\alpha\hat{\rho}}(|x|) - (\alpha-1)_+ \int\limits_1^{|x|} \psi_{\alpha\hat{\rho}}(u) \, \dd u\Big] \quad &\text{if } x<-1
\end{cases},
$$
and
$$v_{-1}(x):=
\begin{cases}
\sin(\pi\alpha\hat{\rho}) \Big[(x-1) \psi_{\alpha\rho}(x) - (\alpha-1)_+ \int\limits_1^x \psi_{\alpha\rho}(u) \, \dd u \Big] \quad &\text{if } x>1 \\
\sin(\pi\alpha\rho) \Big[(|x|+1) \psi_{\alpha\hat{\rho}}(|x|) - (\alpha-1)_+ \int\limits_1^{|x|} \psi_{\alpha\hat{\rho}}(u) \, \dd u \Big] \quad &\text{if } x<-1
\end{cases}.
$$
The appearing auxiliary functions
\begin{align*}
\psi_{\alpha\rho}(x)= (x-1)^{\alpha\hat\rho-1}(x+1)^{\alpha\rho-1}, \quad x >1,
\end{align*}
already played a crucial role in conditioning the stable process to avoid an interval in \cite{Doer_Kyp_Wei_01}. For the function $\psi_{\alpha\hat\rho}$ the positivity parameter $\rho$ is replaced by $\hat\rho$, and vice versa.\smallskip
Here is the main result of this section:
\begin{theorem} \label{thm_harm}
Let $\xi$ be a stable process with index $\alpha \in (0,2)$ which has jumps in both directions. Then $v_1$ and $v_{-1}$ are harmonic functions for $\xi$ killed on first hitting the interval $[-1,1]$.
\end{theorem}
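To illustrate the shape of these functions, consider the symmetric Cauchy case $\alpha=1$, $\rho=\hat\rho=\tfrac{1}{2}$, an example we add for orientation. Then $(\alpha-1)_+=0$, $\sin(\pi\alpha\rho)=\sin(\pi\alpha\hat\rho)=1$ and $\psi_{\alpha\rho}(u)=\psi_{\alpha\hat\rho}(u)=(u^2-1)^{-1/2}$, so the definition of $v_1$ reduces to
$$v_1(x)= \begin{cases}
\sqrt{\frac{x+1}{x-1}} \quad &\text{if } x>1 \\
\sqrt{\frac{|x|-1}{|x|+1}} \quad &\text{if } x<-1
\end{cases}.$$
In particular, $v_1$ explodes as $x \downarrow 1$ and vanishes as $x \uparrow -1$, which matches the interpretation of $\mathbb{P}^x_{v_1}$ below as conditioning on absorption at the upper boundary point $1$.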
As described in the introduction, a harmonic function is in particular excessive; hence, a new measure can be defined as the $h$-transform with the harmonic function. In what follows we will denote the $h$-transforms with $v_1$, $v_{-1}$ and $v:=v_1+v_{-1}$ by $\mathbb{P}^x_{v_1}$, $\mathbb{P}^x_{v_{-1}}$ and $\mathbb{P}^x_{v}$.
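For numerical orientation, $v_1$ can also be evaluated directly from its definition. The following Python sketch is our own illustration (the function names are ours, and the quadrature is assumed to tolerate the integrable endpoint singularity of $\psi$ at $1$).
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def v1(x, alpha, rho):
    """Evaluate v_1(x), x outside [-1, 1], from its defining formula."""
    rho_hat = 1.0 - rho
    if x > 1:      # branch with sin(pi*alpha*rho_hat), psi_{alpha rho}
        a, b, sgn, y, bd = alpha * rho, alpha * rho_hat, rho_hat, x, x + 1
    elif x < -1:   # branch with sin(pi*alpha*rho), psi_{alpha rho_hat}
        a, b, sgn, y, bd = alpha * rho_hat, alpha * rho, rho, -x, -x - 1
    else:
        raise ValueError("x must lie outside [-1, 1]")
    psi = lambda u: (u - 1.0) ** (b - 1.0) * (u + 1.0) ** (a - 1.0)
    correction = quad(psi, 1.0, y)[0] if alpha > 1 else 0.0
    return np.sin(np.pi * alpha * sgn) * (
        bd * psi(y) - max(alpha - 1.0, 0.0) * correction)

# v1 explodes near the upper boundary and vanishes near the lower one:
# v1(1.01, 1.5, 0.5) is large, v1(-1.01, 1.5, 0.5) is close to zero.
\end{verbatim}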
\subsection{Stable processes absorbed from above (or below)} \label{sec_abs_above}
The purpose of this section is to analyse the $h$-transformed process $(\xi,\mathbb{P}^x_{v_{1}})$. Since all results for $(\xi,\mathbb{P}^x_{v_{-1}})$ are analogous (interchanging $\rho$ and $\hat \rho$), without loss of generality we only discuss $(\xi,\mathbb{P}^x_{v_{1}})$. Two questions will be our main concern:
\begin{itemize}
\item Is the process killed in finite time and, if so, what is the limiting behavior at the killing time?
\item How to characterize $\mathbb{P}^x_{v_{1}}$ through a limiting conditioning of $\mathbb{P}^x$?
\end{itemize}
The first question can be answered for all $\alpha$ simultaneously using properties of the explicit form of $v_1$:
\begin{proposition}\label{thm_abs_above}
Let $\xi$ be an $\alpha$-stable process with $\alpha \in (0,2)$ and both sided jumps, then
$$\mathbb{P}_{v_1}^x(\zeta < \infty, \xi_{\zeta-} =1)=1, \quad x \notin [-1,1].$$
\end{proposition}
To answer the second question we need to distinguish the recurrent and the transient cases:\\
\textbf{The case $\alpha<1$:} The probability that $\xi$ never hits the interval $[-1,1]$ is positive because the stable process is transient. To condition $\xi$ to be absorbed by $[-1,1]$ from above without hitting the interval we first condition on $\{T_{[-1,1]}=\infty\}$ and then on some event which describes the absorption from above. The most plausible event is $T_{(1,1+\varepsilon)}$ being finite for small $\varepsilon>0$. Another possibility refers to the so-called point of closest reach. Let therefore $\underline{m}$ be the time such that $|\xi_{\underline{m}}| \leq |\xi_t|$ for all $t \geq 0$. Then $\xi_{\underline{m}}$ is called the point of closest reach of $0$. The polarity of points for $\alpha<1$ implies $\xi_{\underline{m}} \neq 0$ almost surely under $\mathbb{P}^x$ for all starting points $x \neq 0$. With these definitions one could also think of conditioning on the event $\{ \xi_{\underline{m}} \in (1,1+\varepsilon) \}$ which is contained in $\{ T_{[-1,1]}=\infty, T_{(1,1+\varepsilon)}<\infty \}$ and, indeed, this is the right choice. \smallskip
\textbf{The case $\alpha \geq 1$:} The first hitting time $T_{[-1,1]}$ is finite almost surely, hence, a different conditioning is needed. Since $T_{(-1-\varepsilon,1+\varepsilon)}$ is finite as well the good conditioning is to condition $\xi_{T_{(-1-\varepsilon,1+\varepsilon)}}$ to be in $(1,1+\varepsilon)$ and then let $\varepsilon$ tend to $0$.\smallskip
The techniques we use for the conditioning centre around the recent results on the so-called deep factorisation of stable processes, see e.g. Kyprianou \cite{Kyp_02} and Kyprianou et al. \cite{Kyp_Riv_Sen_01}, and on hitting distributions of stable processes, see Kyprianou et al. \cite{Kyp_Par_Wat_01}. In particular, results on the distribution of the point of closest reach in the case $\alpha<1$ and on the distribution of the first hitting time of the interval $(-1,1)$ in the case $\alpha \geq 1$ are the keys to proving our results.\smallskip
We come to the first characterisation of the $h$-transform $\mathbb{P}_{v_1}^x$ as the process conditioned to be absorbed by $[-1,1]$ from above in a meaningful way.
\begin{theorem}\label{thm_cond_v1_<1}
Let $\xi$ be an $\alpha$-stable process with $\alpha\in (0,1)$ and both sided jumps. Then it holds, for all $x \notin [-1,1]$ and $\Lambda \in \mathcal{F}_t$, that
$$\mathbb{P}^{x}_{v_1}(\Lambda, t < \zeta) = \lim_{\delta \searrow 0}\lim_{\varepsilon \searrow 0} \mathbb{P}^x(\Lambda, t< T_{(-(1+\delta),1+\delta)} \,|\, \xi_{\underline{m}} \in (1,1+\varepsilon)).$$
\end{theorem}
In fact, we prove a slightly more general statement which has precisely the form of a self-similar Markov process conditioned to be absorbed at the origin in Kyprianou et al. \cite{Kyp_Riv_Sat_01} and a L\'evy process conditioned to be absorbed at the origin from above in Chaumont \cite{Chau_01}:
\begin{align}\label{aa}
\mathbb{P}^{x}_{v_1}(\Lambda, t < T_{(-(1+\delta),1+\delta)}) = \lim_{\varepsilon \searrow 0} \mathbb{P}^x(\Lambda, t< T_{(-(1+\delta),1+\delta)} \,|\, \xi_{\underline{m}} \in (1,1+\varepsilon))
\end{align}
for all $\delta>0$.\smallskip
In the case $\alpha \geq 1$ the $h$-transform belongs to a different conditioned process.
\begin{theorem}\label{thm_cond_v1_geq1}
Let $\xi$ be an $\alpha$-stable process with $\alpha \in [1,2)$ and both sided jumps. Then it holds, for all $x \notin [-1,1]$ and $\Lambda \in \mathcal{F}_t$, that
$$\mathbb{P}^{x}_{v_1}(\Lambda, t < \zeta) = \lim_{\delta \searrow 0} \lim_{\varepsilon \searrow 0} \mathbb{P}^x(\Lambda, t< T_{(-(1+\delta),1+\delta)} \,|\xi_{T_{(-(1+\varepsilon),1+\varepsilon)}} \in (1,1+\varepsilon)).$$
\end{theorem}
With this result we can interpret the $h$-transformed process as the original process conditioned to approach the interval $[-1,1]$ continuously from above.\smallskip
For $\alpha >1$ we can even find a second characterisation of $\mathbb{P}^x_{v_1}$ as a conditioned process. We need to introduce the stable process conditioned to avoid $0$ (see e.g. Pantí \cite{Pan_01} or Yano \cite{Yan_01} for general Lévy processes). To this end, define $e: \mathbb{R} \setminus \left\lbrace 0 \right\rbrace \rightarrow (0,\infty)$ via
$$
e(x) = \begin{cases}
\sin(\pi\alpha\hat\rho) x^{\alpha-1} \quad &\text{if } x>0 \\
\sin(\pi\alpha\rho) |x|^{\alpha-1} \quad &\text{if } x<0
\end{cases},
$$
which is known to be a positive invariant function for the process killed on hitting $0$ when $\alpha >1$. Denote the underlying $h$-transform by $\mathbb{P}^x_\circ$, i.e.
$$\mathbb{P}^x_\circ(\Lambda) = \mathbb{E}^x \Big[\mathds{1}_{\{ t < T_{\{0\}} \}} \frac{e(\xi_t)}{e(x)} \Big], \quad x \neq 0, \Lambda \in \mathcal{F}_t,$$
which can be shown to correspond to conditioning the stable process to avoid the origin. We can use $\mathbb{P}^x_\circ$ to formulate a conditioning analogous to the case $\alpha<1$ also in the case $\alpha >1$. But here the conditioning does not refer to the original process but to the process conditioned to avoid $0$.
\begin{theorem}\label{thm_cond_v1_altern_>1}
Let $\xi$ be an $\alpha$-stable process with $\alpha \in (1,2)$ and both sided jumps. Then it holds, for all $x \notin [-1,1]$ and $\Lambda \in \mathcal{F}_t$, that
$$\mathbb{P}^{x}_{v_1}(\Lambda, t < \zeta) = \lim_{\delta \searrow 0} \lim_{\varepsilon \searrow 0} \mathbb{P}_\circ^x(\Lambda, t< T_{(-(1+\delta),1+\delta)} \,|\,\xi_{\underline{m}} \in (1,1+\varepsilon)).$$
\end{theorem}
It is quite remarkable to compare Theorem \ref{thm_cond_v1_altern_>1} and Theorem \ref{thm_cond_v1_<1}. Since conditioning to avoid a point has no effect for $\alpha< 1$, both theorems coincide: first condition to avoid the origin (trivial for $\alpha< 1$), then condition to approach $1$ from above, and $\mathbb{P}^{x}_{v_1}$ is obtained. The case $\alpha=1$ differs from $\alpha \neq 1$ in this respect: although $0$ is polar, $\xi_{\underline m}=0$ almost surely by recurrence, so the conditioning to approach the interval from above is not well-defined.
\subsection{Stable processes absorbed without restrictions} \label{sec_abs_both}
In this section we want to analyse the h-transforms $(\xi,\mathbb{P}^x_{v})$ with $v=v_1+v_{-1}$. The two main aspects are the same as in Section \ref{sec_abs_above}. First we want to analyse the behaviour of the paths of $(\xi,\mathbb{P}^x_v)$ at the killing time if it is finite. Second we give characterisations of the $h$-transformed process as the original process conditioned on similar events as in Section \ref{sec_abs_above}.\smallskip
In the case $\alpha<1$ this works as one would expect, namely the $h$-transform using $v$ corresponds to the process conditioned on $\{ |\xi_{\underline{m}}| \in (1,1+\varepsilon) \}$ for $\varepsilon$ tending to $0$. For $\alpha \geq 1$ we will not find a representation of $(\xi,\mathbb{P}^x_{v})$ as a conditioned process. Nonetheless we can show that the process conditioned to be absorbed by $[-1,1]$, without any restriction on the side of the interval from which it is absorbed, equals $(\xi,\mathbb{P}^x_{v_1})$ or $(\xi,\mathbb{P}^x_{v_{-1}})$ depending on the value of $\rho$. This means that the process conditioned to be absorbed without any restrictions coincides with one of the processes conditioned to be absorbed from one side.\smallskip
Here is the result on the behaviour at the killing time:
\begin{proposition}\label{thm_abs_both}
Let $\xi$ be an $\alpha$-stable process with $\alpha \in (0,2)$ and both sided jumps, then
$$\mathbb{P}_{v}^x(\zeta < \infty, |\xi_{\zeta-}| =1)=1, \quad x \notin [-1,1].$$
\end{proposition}
As before we want to connect the $h$-transformed process to a conditioned process. Again we have to separate the cases $\alpha<1$ and $\alpha \geq 1$, and for $\alpha>1$ we give an alternative conditioned process. In all cases the event we condition on is larger than in Section \ref{sec_abs_above}.
We start with the case $\alpha<1$ and the characterisation of $(\xi,\mathbb{P}^x_v)$ as a conditioned process, which takes the form one would expect in view of Theorem \ref{thm_cond_v1_<1}.
\begin{theorem}\label{thm_cond_v_<1}
Let $\xi$ be an $\alpha$-stable process with $\alpha \in (0,1)$ and both sided jumps. Then it holds, for all $x \notin [-1,1]$ and $\Lambda \in \mathcal{F}_t$, that
$$\mathbb{P}^{x}_{v}(\Lambda, t < \zeta) = \lim_{\delta \searrow 0} \lim_{\varepsilon \searrow 0} \mathbb{P}^x(\Lambda, t< T_{(-(1+\delta),1+\delta)} \,|\, |\xi_{\underline{m}}| \in (1,1+\varepsilon)).$$
\end{theorem}
As we already mentioned, in the case $\alpha \geq 1$ the process conditioned to be absorbed by the interval without restriction on the side of absorption is the same as the process conditioned to be absorbed from one side, the side depending on $\rho$.
\begin{theorem}\label{thm_cond_v_geq1}
Let $\xi$ be an $\alpha$-stable process with $\alpha \in [1,2)$ and both sided jumps. Then it holds, for all $x \notin [-1,1]$ and $\Lambda \in \mathcal{F}_t$, that
\begin{align*}
& \lim_{\delta \searrow 0} \lim_{\varepsilon \searrow 0} \mathbb{P}^x(\Lambda, t< T_{(-(1+\delta),1+\delta)} \,| \, \xi_{T_{(-(1+\varepsilon),1+\varepsilon)}} \notin [-1,1])\\
=&\, \begin{cases}
\mathbb{P}^{x}_{v_1}(\Lambda, t < \zeta) \quad &\text{if } \rho \leq \frac{1}{2} \\
\mathbb{P}^{x}_{v_{-1}}(\Lambda, t < \zeta) \quad &\text{if } \rho > \frac{1}{2}
\end{cases}.
\end{align*}
\end{theorem}
We conclude with the alternative characterisation for the $h$-transform for $\alpha>1$. Again the conditioning refers to the stable process conditioned to avoid $0$ and the event we condition on is the same as in the case $\alpha<1$.
\begin{theorem}\label{thm_cond_v_altern_>1}
Let $\xi$ be an $\alpha$-stable process with $\alpha \in (1,2)$ and both sided jumps. Then it holds, for all $x \notin [-1,1]$ and $\Lambda \in \mathcal{F}_t$, that
$$\mathbb{P}^{x}_{v}(\Lambda, t < \zeta) = \lim_{\delta \searrow 0}\lim_{\varepsilon \searrow 0} \mathbb{P}_\circ^x(\Lambda, t< T_{(-(1+\delta),1+\delta)} \,|\,|\xi_{\underline{m}}| \in (1,1+\varepsilon)).$$
\end{theorem}
\section{Proofs}\label{sec_proofs}
\subsection{Harmonic functions} \label{sec_proof_harmonic}
In this section we prove Theorem \ref{thm_harm}. First we give an idea of how to extract the right harmonic functions. The potential measure of $\xi$ killed when it enters $[-1,1]$ is defined as
$$U_{[-1,1]}(x,\dd y) := \mathbb{E}^x \Bigg[ \int\limits_0^{T_{[-1,1]}} \mathds{1}_{\{ \xi_t \in \dd y \}} \, \dd t \Bigg], \quad x,y \notin [-1,1].$$
It is known that the potential measure has a density with respect to the Lebesgue measure (also known as Green's function), i.e.
$$U_{[-1,1]}(x,\dd y) = u_{[-1,1]}(x,y) \, \dd y,$$
where $u_{[-1,1]}: (\mathbb{R}\setminus [-1,1])^2 \rightarrow [0,\infty)$ is explicitly known from Profeta and Simon \cite{Pro_Sim_01}. Moreover, Kunita and Watanabe \cite{Kun_Wat_01} showed that $x \mapsto u_{[-1,1]}(x,y)$ is harmonic for all $y \notin [-1,1]$ and, heuristically speaking, the corresponding $h$-transform should be the process conditioned to be absorbed at $y$. Since our aim is to condition the process to be absorbed at the boundary point $1$, we will consider the limit as $y$ tends to $1$. But from the formulas of \cite{Pro_Sim_01} we see immediately that $u_{[-1,1]}(x,y)$ converges to $0$ as $y$ tends to $1$. So there are two difficulties: first, we need to renormalise $u_{[-1,1]}(x,y)$ such that it converges pointwise, as $y \searrow 1$, to some function of $x$; second, we need to argue why in this case the limit of the (scaled) harmonic function is harmonic again.\smallskip
To abbreviate we denote
$$c_{\alpha\rho} := 2^{\alpha\rho}\frac{\pi \alpha\rho\Gamma(\alpha\rho)}{\Gamma(1-\alpha\hat{\rho})} \quad \text{and} \quad c_{\alpha\hat\rho}:= 2^{\alpha\hat{\rho}}\frac{\pi \alpha\hat{\rho}\Gamma(\alpha\hat{\rho})}{\Gamma(1-\alpha\rho)}.$$
The first auxiliary result establishes a pointwise connection between $v_1$ and the potential density $u_{[-1,1]}$ which will be very important for the proof of harmonicity of $v_1$. From Profeta and Simon \cite{Pro_Sim_01} we know that $y \mapsto u_{[-1,1]}(x,y)$ has a pole at $y=x$ (for $\alpha<1$) but is also integrable there. Hence, defining $u_{[-1,1]}(x,x) := 0$ does not change anything for the potential of the process killed on entering $[-1,1]$.
\begin{lemma} \label{lemma_boring}
Whenever $x>y>1$ or $x<-1, y>1$, it holds that
\begin{align*}
v_1(x) &= 2^{\alpha\hat\rho-1} c_{\alpha\rho} \frac{u_{[-1,1]}(x,y)}{g(y)}\\
&\quad -\Big(\sin(\pi\alpha\hat\rho) \mathds{1}_{\left\lbrace x>1 \right\rbrace} + \sin(\pi\alpha\rho) \mathds{1}_{\left\lbrace x<-1 \right\rbrace}\Big)\\
& \quad \quad \times \frac{(1-\alpha\hat\rho)|x-y|^{\alpha-1}}{g(y)} \int\limits_1^{z(x,y)} (u-1)^{\alpha\rho}(u+1)^{\alpha\hat\rho-2}\, \dd u
\\
&\quad +(\alpha-1)_+ \Big(\sin(\pi\alpha\hat\rho)\mathds{1}_{\{ x>1 \}}\int\limits_1^x \psi_{\alpha\rho}(u)\, \dd u + \sin(\pi\alpha\rho) \mathds{1}_{\{ x<-1 \}}\int\limits_1^{|x|} \psi_{\alpha\hat\rho}(u)\, \dd u\Big)\\
&\quad \quad \times \Big(\frac{\alpha\rho}{g(y)} \int\limits_1^y \psi_{\alpha\hat\rho}(u)\,\dd u -1 \Big),
\end{align*}
where $g(y)=(y-1)^{\alpha\rho}(y+1)^{\alpha\hat\rho-1} = (y-1) \psi_{\alpha\hat\rho}(y)$.
\end{lemma}
\begin{proof}
We use the explicit expression for $u_{[-1,1]}(x,y)$ from Profeta and Simon \cite{Pro_Sim_01}, where the expression
$$z(x,y) := \frac{|xy-1|}{|x-y|}, \quad x,y \notin [-1,1],\, x \neq y,$$
appears frequently. Before we start we note that
$$z(x,y)-1 = \begin{cases}
\frac{(x+1)(y-1)}{x-y} \quad &\text{if } x>y>1 \\
\frac{(|x|-1)(y-1)}{y-x} \quad &\text{if } x<-1,y>1
\end{cases}$$
and
$$z(x,y)+1 = \begin{cases}
\frac{(x-1)(y+1)}{x-y} \quad &\text{if } x>y>1 \\
\frac{(|x|+1)(y+1)}{y-x} \quad &\text{if } x<-1,y>1
\end{cases}.$$
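Both identities follow from a direct factorisation of the numerator of $z(x,y) \mp 1$; for instance, for $x>y>1$,
$$z(x,y)-1 = \frac{xy-1-(x-y)}{x-y} = \frac{(x+1)(y-1)}{x-y} \quad \text{and} \quad z(x,y)+1 = \frac{xy-1+(x-y)}{x-y} = \frac{(x-1)(y+1)}{x-y}.$$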
Furthermore, with integration by parts we get
\begin{align*}
&\quad\int\limits_1^{z(x,y)} \psi_{\alpha\rho}(u) \, \dd u\\
&= \int\limits_1^{z(x,y)} (u-1)^{\alpha\hat{\rho}-1} (u+1)^{\alpha\rho-1} \, \dd u \\
&= \frac{1}{\alpha\hat{\rho}}\Big[(u-1)^{\alpha\hat{\rho}} (u+1)^{\alpha\rho-1}\Big]_1^{z(x,y)} - \frac{\alpha\rho-1}{\alpha\hat{\rho}}\int\limits_1^{z(x,y)} (u-1)^{\alpha\hat{\rho}} (u+1)^{\alpha\rho-2} \, \dd u \\
&= \frac{1}{\alpha\hat{\rho}}\Big((z(x,y)-1)^{\alpha\hat{\rho}} (z(x,y)+1)^{\alpha\rho-1}\Big) + \frac{1-\alpha\rho}{\alpha\hat{\rho}}\int\limits_1^{z(x,y)} (u-1)^{\alpha\hat{\rho}} (u+1)^{\alpha\rho-2} \, \dd u
\end{align*}
and analogously
\begin{align*}
&\quad\int\limits_1^{z(x,y)} \psi_{\alpha\hat\rho}(u) \, \dd u\\
&= \frac{1}{\alpha\rho}\Big((z(x,y)-1)^{\alpha\rho} (z(x,y)+1)^{\alpha\hat\rho-1}\Big) + \frac{1-\alpha\hat\rho}{\alpha\rho}\int\limits_1^{z(x,y)} (u-1)^{\alpha\rho} (u+1)^{\alpha\hat\rho-2} \, \dd u.
\end{align*}
We use the explicit form for $u_{[-1,1]}(x,y)$ given in \cite{Pro_Sim_01} and plug in to see, for $x>y>1$,
\begin{align*}
&\quad\frac{\Gamma(\alpha\rho)\Gamma(\alpha\hat{\rho})}{2^{1-\alpha}} u_{[-1,1]}(x,y)\\
&= (x-y)^{\alpha-1} \int\limits_1^{z(x,y)} \psi_{\alpha\hat{\rho}}(u) \, \dd u - (\alpha-1)_+ \int\limits_1^y \psi_{\alpha\hat\rho}(u) \, \dd u \int\limits_1^x \psi_{\alpha\rho}(u) \, \dd u \\
&= \frac{(x-y)^{\alpha-1}}{\alpha\rho}(z(x,y)-1)^{\alpha\rho} (z(x,y)+1)^{\alpha\hat\rho-1}\\
&\quad + \frac{(1-\alpha\hat\rho)(x-y)^{\alpha-1}}{\alpha\rho}\int\limits_1^{z(x,y)} (u-1)^{\alpha\rho} (u+1)^{\alpha\hat\rho-2} \, \dd u \\
&\quad - (\alpha-1)_+ \int\limits_1^y \psi_{\alpha\hat\rho}(u) \, \dd u \int\limits_1^x \psi_{\alpha\rho}(u) \, \dd u \\
&= \frac{1}{\alpha\rho}((x+1)(y-1))^{\alpha\rho} ((x-1)(y+1))^{\alpha\hat\rho-1}\\
&\quad + \frac{(1-\alpha\hat\rho)(x-y)^{\alpha-1}}{\alpha\rho}\int\limits_1^{z(x,y)} (u-1)^{\alpha\rho} (u+1)^{\alpha\hat\rho-2} \, \dd u \\
&\quad - (\alpha-1)_+ \int\limits_1^y \psi_{\alpha\hat\rho}(u) \, \dd u \int\limits_1^x \psi_{\alpha\rho}(u) \, \dd u \\
&= \frac{1}{\alpha\rho}(y-1)^{\alpha\rho} (y+1)^{\alpha\hat\rho-1} \Big(\frac{1}{\sin(\pi\alpha\hat\rho)} v_1(x)+(\alpha-1)_+ \int\limits_1^x \psi_{\alpha\rho}(u)\, \dd u \Big)\\
&\quad + \frac{(1-\alpha\hat\rho)(x-y)^{\alpha-1}}{\alpha\rho}\int\limits_1^{z(x,y)} (u-1)^{\alpha\rho} (u+1)^{\alpha\hat\rho-2} \, \dd u \\
&\quad - (\alpha-1)_+ \int\limits_1^y \psi_{\alpha\hat\rho}(u) \, \dd u \int\limits_1^x \psi_{\alpha\rho}(u) \, \dd u.
\end{align*}
Solving the equation with respect to $v_1$ and using $\sin(\pi\alpha\hat\rho)= \frac{\pi}{\Gamma(\alpha\hat\rho)\Gamma(1-\alpha\hat\rho)}$ yields the claim for $x>y>1$. For $x<-1,y>1$ we get similarly:
\begin{align*}
&\quad\frac{\sin(\pi\alpha\hat\rho)}{\sin(\pi\alpha\rho)}\frac{\Gamma(\alpha\rho)\Gamma(\alpha\hat{\rho})}{2^{1-\alpha}} u_{[-1,1]}(x,y)\\
&= (y-x)^{\alpha-1} \int\limits_1^{z(x,y)} \psi_{\alpha\hat{\rho}}(u) \, \dd u - (\alpha-1)_+ \int\limits_1^y \psi_{\alpha\hat\rho}(u) \, \dd u \int\limits_1^{|x|} \psi_{\alpha\hat\rho}(u) \, \dd u \\
&= \frac{(y-x)^{\alpha-1}}{\alpha\rho}(z(x,y)-1)^{\alpha\rho} (z(x,y)+1)^{\alpha\hat\rho-1}\\
&\quad + \frac{(1-\alpha\hat\rho)(y-x)^{\alpha-1}}{\alpha\rho}\int\limits_1^{z(x,y)} (u-1)^{\alpha\rho} (u+1)^{\alpha\hat\rho-2} \, \dd u \\
&\quad - (\alpha-1)_+ \int\limits_1^y \psi_{\alpha\hat\rho}(u) \, \dd u \int\limits_1^{|x|} \psi_{\alpha\hat\rho}(u) \, \dd u \\
&= \frac{1}{\alpha\rho}((|x|-1)(y-1))^{\alpha\rho} ((|x|+1)(y+1))^{\alpha\hat\rho-1}\\
&\quad + \frac{(1-\alpha\hat\rho)(y-x)^{\alpha-1}}{\alpha\rho}\int\limits_1^{z(x,y)} (u-1)^{\alpha\rho} (u+1)^{\alpha\hat\rho-2} \, \dd u \\
&\quad - (\alpha-1)_+ \int\limits_1^y \psi_{\alpha\hat\rho}(u) \, \dd u \int\limits_1^{|x|} \psi_{\alpha\hat\rho}(u) \, \dd u \\
&= \frac{1}{\alpha\rho}(y-1)^{\alpha\rho} (y+1)^{\alpha\hat\rho-1} \Big(\frac{1}{\sin(\pi\alpha\rho)} v_1(x)+(\alpha-1)_+ \int\limits_1^{|x|} \psi_{\alpha\hat\rho}(u)\, \dd u \Big)\\
&\quad + \frac{(1-\alpha\hat\rho)(x-y)^{\alpha-1}}{\alpha\rho}\int\limits_1^{z(x,y)} (u-1)^{\alpha\rho} (u+1)^{\alpha\hat\rho-2} \, \dd u \\
&\quad - (\alpha-1)_+ \int\limits_1^y \psi_{\alpha\hat\rho}(u) \, \dd u \int\limits_1^{|x|} \psi_{\alpha\hat\rho}(u) \, \dd u.
\end{align*}
Again, solving with respect to $v_1(x)$ leads to the claim.
\end{proof}
\begin{corollary}\label{cor_limit}
It holds that
$$v_1(x) = c_{\alpha\rho}\lim_{y \searrow 1} \frac{u_{[-1,1]}(x,y)}{(y-1)^{\alpha\rho}},\quad x \in \mathbb{R} \setminus [-1,1].$$
\end{corollary}
\begin{proof}
We consider the expression from Lemma \ref{lemma_boring} and let $y$ tend to $1$ from above. Since $g(y) = (y-1)^{\alpha\rho}(y+1)^{\alpha\hat\rho-1} \sim 2^{\alpha\hat\rho-1}(y-1)^{\alpha\rho}$ as $y \searrow 1$, it is sufficient to show that
\begin{align*}
&-\Big(\sin(\pi\alpha\hat\rho) \mathds{1}_{\left\lbrace x>1 \right\rbrace} + \sin(\pi\alpha\rho) \mathds{1}_{\left\lbrace x<-1 \right\rbrace}\Big)\\
&\quad \times \frac{(1-\alpha\hat\rho)|x-y|^{\alpha-1}}{g(y)} \int\limits_1^{z(x,y)} (u-1)^{\alpha\rho}(u+1)^{\alpha\hat\rho-2}\, \dd u
\\
&+(\alpha-1)_+ \Big(\sin(\pi\alpha\hat\rho)\mathds{1}_{\left\lbrace x>1 \right\rbrace}\int\limits_1^x \psi_{\alpha\rho}(u)\, \dd u + \sin(\pi\alpha\rho) \mathds{1}_{\left\lbrace x<-1 \right\rbrace}\int\limits_1^{|x|} \psi_{\alpha\hat\rho}(u)\, \dd u\Big)\\
&\quad \times \Big(\frac{\alpha\rho}{g(y)} \int\limits_1^y \psi_{\alpha\hat\rho}(u)\,\dd u -1 \Big)
\end{align*}
converges to $0$ for $y \searrow 1$. For that it is of course sufficient to show that
$$\frac{1}{g(y)} \int\limits_1^{z(x,y)} (u-1)^{\alpha\rho}(u+1)^{\alpha\hat\rho-2}\, \dd u \quad
\text{ and } \quad \frac{\alpha\rho}{g(y)} \int\limits_1^y \psi_{\alpha\hat\rho}(u)\,\dd u -1 $$
converge to $0$ as $y \searrow 1$. Both claims follow readily from l'H\^{o}pital's rule.
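Indeed, for the second expression l'H\^{o}pital's rule gives
$$\lim_{y \searrow 1} \frac{\alpha\rho \int_1^y \psi_{\alpha\hat\rho}(u)\,\dd u}{g(y)} = \lim_{y \searrow 1} \frac{\alpha\rho\,(y-1)^{\alpha\rho-1}(y+1)^{\alpha\hat\rho-1}}{\alpha\rho\,(y-1)^{\alpha\rho-1}(y+1)^{\alpha\hat\rho-1} + (\alpha\hat\rho-1)(y-1)^{\alpha\rho}(y+1)^{\alpha\hat\rho-2}} = 1,$$
since the second summand in the denominator carries an additional factor $(y-1)$. The first expression is handled in the same way, using that $z(x,y)-1$ is of order $(y-1)$ as $y \searrow 1$.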
\end{proof}
Now we prove harmonicity of $v_1$.
\begin{proof}[Proof of Theorem \ref{thm_harm}]
To show excessiveness we define the measure
$$\eta(\dd x) \coloneqq v_1(x) \, \dd x\quad \text{ on } \mathbb{R} \setminus [-1,1].$$
We will show that $\eta$ is an excessive measure for the dual process killed on entering the interval, i.e. $\eta$ is $\sigma$-finite and it holds that
$$\int\limits_{\mathbb{R}\setminus [-1,1]} \hat{\mathbb{P}}^x(\xi_t \in A, t < T_{[-1,1]}) \, \eta(\dd x) \leq \eta(A),$$
for all $A \in \mathcal{B}(\mathbb{R} \setminus [-1,1])$ and $t \geq 0$. From Theorem XII.71 of Dellacherie and Meyer \cite{Del_Mey_01} it is known that if an excessive measure has a density with respect to the duality measure (which is the Lebesgue measure also for killed Lévy processes, see Bertoin \cite{Bert_01}, Theorem II.5), then this density is an excessive function for the corresponding dual process. Hence, by showing that $\eta$ is an excessive measure for the dual process killed on hitting $[-1,1]$, it follows that $v_1$ is an excessive function for the original process killed on entering the interval.\smallskip
To show that $\eta$ is excessive for the dual process, first note that $\eta$ is $\sigma$-finite because $v_1$ is continuous on $\mathbb{R} \setminus [-1,1]$. Next, for the dual process, we note that
$$\hat{U}_{[-1,1]}(y, \dd x) = u_{[-1,1]}(x,y) \, \dd x,\quad x,y \in \mathbb{R} \setminus [-1,1],$$
where $\hat{U}_{[-1,1]}$ is the potential of the dual process killed on entering $[-1,1]$ (see Theorem XII.72 of Dellacherie and Meyer \cite{Del_Mey_01} for a general Markov process). Let $A \in \mathcal{B}(\mathbb{R} \setminus [-1,1])$ be compact, use Corollary \ref{cor_limit} in the first equation and Fatou's Lemma in the second one:
\begin{align*}
&\quad \frac{1}{c_{\alpha\rho}} \int\limits_{\mathbb{R}\setminus [-1,1]} \hat{\mathbb{P}}^x(\xi_t \in A, t < T_{[-1,1]}) \, \eta(\dd x)\\
&= \int\limits_{\mathbb{R}\setminus [-1,1]} \hat{\mathbb{P}}^x(\xi_t \in A, t < T_{[-1,1]}) \lim\limits_{y \searrow 1} \frac{u_{[-1,1]}(x,y)}{(y-1)^{\alpha\rho}} \,\dd x\\
&\leq \liminf\limits_{y \searrow 1} \frac{1}{(y-1)^{\alpha\rho}} \int\limits_{\mathbb{R}\setminus [-1,1]} \hat{\mathbb{P}}^x(\xi_t \in A, t < T_{[-1,1]}) u_{[-1,1]}(x,y) \, \dd x \\
&\leq \liminf\limits_{y \searrow 1} \frac{1}{(y-1)^{\alpha\rho}} \int\limits_{\mathbb{R}\setminus [-1,1]} \hat{\mathbb{P}}^x(\xi_t \in A, t < T_{[-1,1]}) \, \hat{U}_{[-1,1]}(y,\dd x)\\
&= \liminf_{y \searrow 1}\frac{1}{(y-1)^{\alpha\rho}}\int\limits_0^\infty \Big( \int\limits_{\mathbb{R}\setminus [-1,1]} \hat{\mathbb{P}}^x(\xi_t \in A, t < T_{[-1,1]}) \, \hat{\mathbb{P}}^y(\xi_s \in \dd x, s < T_{[-1,1]}) \Big) \, \dd s\\
&= \liminf_{y \searrow 1}\frac{1}{(y-1)^{\alpha\rho}}\int\limits_0^\infty \hat{\mathbb{P}}^y(\xi_{t+s} \in A, t+s < T_{[-1,1]}) \, \dd s \\
&= \liminf_{y \searrow 1}\frac{1}{(y-1)^{\alpha\rho}}\int\limits_t^\infty \hat{\mathbb{P}}^y(\xi_{s} \in A, s < T_{[-1,1]}) \, \dd s \\
&\leq \liminf_{y \searrow 1}\frac{1}{(y-1)^{\alpha\rho}}\int\limits_0^\infty \hat{\mathbb{P}}^y(\xi_{s} \in A, s < T_{[-1,1]}) \, \dd s \\
&\leq \liminf_{y \searrow 1}\frac{1}{(y-1)^{\alpha\rho}}\int\limits_A \hat{u}_{[-1,1]}(y,x) \, \dd x \\
&\leq \liminf_{y \searrow 1}\int\limits_A \frac{u_{[-1,1]}(x,y)}{(y-1)^{\alpha\rho}} \, \dd x.
\end{align*}
From Corollary \ref{cor_limit} we know that $u_{[-1,1]}(x,y)/(y-1)^{\alpha\rho}$ converges as $y \searrow 1$ for all $x \in \mathbb{R} \setminus [-1,1]$; in particular, for every $x \in A$ the function $y \mapsto u_{[-1,1]}(x,y)/(y-1)^{\alpha\rho}$ is bounded on $(1,\varepsilon)$, where $1 < \varepsilon < \inf(A \cap (1,\infty))$. But since $A$ is compact, $u_{[-1,1]}(x,y)/(y-1)^{\alpha\rho}$ is even uniformly bounded for $x \in A$. Hence, we can apply dominated convergence to deduce:
\begin{align*}
\frac{1}{c_{\alpha\rho}} \int\limits_{\mathbb{R}\setminus [-1,1]} \hat{\mathbb{P}}^x(\xi_t \in A, t < T_{[-1,1]}) \, \eta(\dd x) &\leq \int\limits_A \lim_{y \searrow 1} \frac{u_{[-1,1]}(x,y)}{(y-1)^{\alpha\rho}} \, \dd x\\
&= \frac{1}{c_{\alpha\rho}} \int_A v_1(x) \, \dd x\\
&= \frac{1}{c_{\alpha\rho}} \eta(A).
\end{align*}
Hence, we proved that $\eta$ is an excessive measure and as mentioned above it follows with Theorem XII.71 of \cite{Del_Mey_01} that $v_1$ is an excessive function.\smallskip
Now we show the characterising property of harmonicity, i.e.
$$\mathbb{E}^x\big[ \mathds{1}_{\{ T_{K^\mathrm{C}} < T_{[-1,1]} \}} v_1(\xi_{T_{K^\mathrm{C}}}) \big] = v_1(x), \quad x \in \mathbb{R}\setminus [-1,1],$$
for all $K \subseteq \mathbb{R}\setminus [-1,1]$ which are compact in $\mathbb{R}\setminus [-1,1]$. If $x \in K^\mathrm{C}=(\mathbb{R}\setminus[-1,1]) \setminus K$ the claim is clear. So we assume $x \in K$. The idea is to use the connection between $v_1$ and $u_{[-1,1]}$ from Lemma \ref{lemma_boring} and Proposition 6.2 (ii) of Kunita and Watanabe \cite{Kun_Wat_01}. The latter tells us that $x \mapsto u_{[-1,1]}(x,y)$ is harmonic on $(\mathbb{R}\setminus [-1,1])\setminus \left\lbrace y \right\rbrace$ for all $y \in \mathbb{R}\setminus [-1,1]$, i.e.
$$\mathbb{E}^x\big[ \mathds{1}_{\{ T_{K^\mathrm{C}} < T_{[-1,1]} \}} u_{[-1,1]}(\xi_{T_{K^\mathrm{C}}},y) \big] = u_{[-1,1]}(x,y), \quad x,y \in \mathbb{R}\setminus [-1,1], x \neq y,$$
for all $K \subseteq \mathbb{R}\setminus [-1,1]$ which are compact in $ \mathbb{R}\setminus [-1,1] \setminus \{ y \}$.\smallskip
Let us fix $x \notin [-1,1]$; since $y$ tends to $1$, we may assume $x \neq y$ and $y \notin K$. We use monotone convergence twice and plug in the result of Lemma \ref{lemma_boring}:
\begin{align}\label{v_1_sep}
&\quad \mathbb{E}^x\big[ \mathds{1}_{\{ T_{K^\mathrm{C}}<T_{[-1,1]}\}} v_1(\xi_{T_{K^\mathrm{C}}}) \big]\nonumber \\
&= \lim_{\varepsilon \searrow 0} \mathbb{E}^x\big[ \mathds{1}_{\{ \xi_{T_{K^\mathrm{C}}}>1+\varepsilon \text{ or } \xi_{T_{K^\mathrm{C}}}<-1 \}} v_1(\xi_{T_{K^\mathrm{C}}}) \big]\nonumber \\
&= \lim_{\varepsilon \searrow 0} \lim_{y \searrow 1} \mathbb{E}^x\big[ \mathds{1}_{\{ \xi_{T_{K^\mathrm{C}}}>y+\varepsilon \text{ or } \xi_{T_{K^\mathrm{C}}}<-1 \}} v_1(\xi_{T_{K^\mathrm{C}}})\big]\nonumber \\
&= \lim_{\varepsilon \searrow 0}\lim_{y \searrow 1} \frac{2^{\alpha\hat\rho-1}c_{\alpha\rho}}{g(y)} \mathbb{E}^x\big[ \mathds{1}_{\{ \xi_{T_{K^\mathrm{C}}}>y+\varepsilon \text{ or } \xi_{T_{K^\mathrm{C}}}<-1 \}} u_{[-1,1]}(\xi_{T_{K^\mathrm{C}}},y) \big] \nonumber \\
&\quad - \lim_{\varepsilon \searrow 0}\lim_{y \searrow 1}\mathbb{E}^x \Big[\big(\sin(\pi\alpha\hat\rho)\mathds{1}_{\{ \xi_{T_{K^\mathrm{C}}}>y+\varepsilon \}}+ \sin(\pi\alpha\rho)\mathds{1}_{\{\xi_{T_{K^\mathrm{C}}}<-1 \}}\big) \\
&\quad \quad \times \frac{(1-\alpha\hat\rho)|\xi_{T_{K^\mathrm{C}}}-y|^{\alpha-1}}{g(y)}\int\limits_1^{z(\xi_{T_{K^\mathrm{C}}},y)} (u-1)^{\alpha\rho} (u+1)^{\alpha\hat\rho-2} \, \dd u \Big] \nonumber\\
&\quad + \lim_{\varepsilon \searrow 0}\lim_{y \searrow 1}(\alpha-1)_+ \Big(\frac{\alpha\rho}{g(y)} \int\limits_1^y \psi_{\alpha\hat\rho}(u)\,\dd u -1 \Big) \nonumber \\
&\quad \quad \times \mathbb{E}^x \Big[\sin(\pi\alpha\hat\rho)\mathds{1}_{\{ \xi_{T_{K^\mathrm{C}}}>y+\varepsilon \}} \int\limits_1^{\xi_{T_{K^\mathrm{C}}}} \psi_{\alpha\rho}(u)\, \dd u + \sin(\pi\alpha\rho)\mathds{1}_{\{\xi_{T_{K^\mathrm{C}}}<-1 \}}\int\limits_1^{|\xi_{T_{K^\mathrm{C}}}|} \psi_{\alpha\hat\rho}(u)\, \dd u\Big] \nonumber
\end{align}
We treat these three summands separately, starting with the last one, which only appears if $\alpha>1$. From the proof of Corollary \ref{cor_limit} we already know that
$$\frac{\alpha\rho}{g(y)} \int\limits_1^y \psi_{\alpha\hat\rho}(u)\,\dd u -1$$
converges to $0$ as $y \searrow 1$. Furthermore, we get with monotone convergence:
\begin{align*}
&\quad\lim_{\varepsilon \searrow 0}\lim_{y \searrow 1} \mathbb{E}^x \Big[\sin(\pi\alpha\hat\rho)\mathds{1}_{\{ \xi_{T_{K^\mathrm{C}}}>y+\varepsilon \}} \int\limits_1^{\xi_{T_{K^\mathrm{C}}}} \psi_{\alpha\rho}(u)\, \dd u + \sin(\pi\alpha\rho)\mathds{1}_{\{\xi_{T_{K^\mathrm{C}}}<-1 \}}\int\limits_1^{|\xi_{T_{K^\mathrm{C}}}|} \psi_{\alpha\hat\rho}(u)\, \dd u\Big] \\
&= \mathbb{E}^x \Big[\sin(\pi\alpha\hat\rho)\mathds{1}_{\{ \xi_{T_{K^\mathrm{C}}}>1 \}} \int\limits_1^{\xi_{T_{K^\mathrm{C}}}} \psi_{\alpha\rho}(u)\, \dd u + \sin(\pi\alpha\rho)\mathds{1}_{\{\xi_{T_{K^\mathrm{C}}}<-1 \}}\int\limits_1^{|\xi_{T_{K^\mathrm{C}}}|} \psi_{\alpha\hat\rho}(u)\, \dd u\Big]\\
&= \frac{\pi}{\Gamma(1-\alpha\rho)\Gamma(1-\alpha\hat\rho)} \mathbb{E}^x \Big[\mathds{1}_{\{ T_{K^\mathrm{C}} < T_{[-1,1]} \}} h(\xi_{T_{K^\mathrm{C}}}) \Big],
\end{align*}
where $h$ is the invariant function which appears in Döring et al. \cite{Doer_Kyp_Wei_01}. But since the $h$-transformed process with this invariant function is transient with infinite lifetime (see Theorem 1.3 in that article) it leaves all compact sets almost surely. Hence, we have
\begin{align*}
\mathbb{E}^x \Big[\mathds{1}_{\{ T_{K^\mathrm{C}} < T_{[-1,1]} \}} \frac{h(\xi_{T_{K^\mathrm{C}}})}{h(x)} \Big] &= \mathbb{P}^x_h(T_{K^\mathrm{C}} < \zeta) \\
&= \mathbb{P}^x_h(T_{K^\mathrm{C}} < \infty) \\
&= 1,
\end{align*}
thus, $\mathbb{E}^x \big[\mathds{1}_{\{ T_{K^\mathrm{C}} < T_{[-1,1]} \}} h(\xi_{T_{K^\mathrm{C}}}) \big] = h(x) <\infty$. It follows that the third term of (\ref{v_1_sep}) is $0$. So it remains to consider the first and the second summand of (\ref{v_1_sep}). With Proposition 6.2 (ii) of Kunita and Watanabe \cite{Kun_Wat_01} and Corollary \ref{cor_limit} we see for the first term:
\begin{align*}
&\quad\lim_{\varepsilon \searrow 0}\lim_{y \searrow 1} \frac{2^{\alpha\hat\rho-1}c_{\alpha\rho}}{g(y)} \mathbb{E}^x\big[ \mathds{1}_{\{ T_{K^\mathrm{C}}<T_{[-1,1]}\}}\mathds{1}_{\{ \xi_{T_{K^\mathrm{C}}}>y+\varepsilon \text{ or } \xi_{T_{K^\mathrm{C}}}<-1 \}} u_{[-1,1]}(\xi_{T_{K^\mathrm{C}}},y) \big] \\
&= \lim_{y \searrow 1} \frac{2^{\alpha\hat\rho-1}c_{\alpha\rho}}{g(y)} \mathbb{E}^x\big[ \mathds{1}_{\{ T_{K^\mathrm{C}}<T_{[-1,1]}\}} u_{[-1,1]}(\xi_{T_{K^\mathrm{C}}},y) \big] \\
& \quad- \lim_{\varepsilon \searrow 0}\lim_{y \searrow 1} \frac{2^{\alpha\hat\rho-1}c_{\alpha\rho}}{g(y)} \mathbb{E}^x\big[ \mathds{1}_{\{ T_{K^\mathrm{C}}<T_{[-1,1]}\}}\mathds{1}_{\{ \xi_{T_{K^\mathrm{C}}} \in (1,y+\varepsilon) \}} u_{[-1,1]}(\xi_{T_{K^\mathrm{C}}},y) \big] \\
&= \lim_{y \searrow 1} \frac{2^{\alpha\hat\rho-1}c_{\alpha\rho}}{g(y)} u_{[-1,1]}(x,y)\\
& \quad- \lim_{\varepsilon \searrow 0}\lim_{y \searrow 1} \frac{2^{\alpha\hat\rho-1}c_{\alpha\rho}}{g(y)} \mathbb{E}^x\big[ \mathds{1}_{\{ T_{K^\mathrm{C}}<T_{[-1,1]}\}}\mathds{1}_{\{ \xi_{T_{K^\mathrm{C}}} \in (1,y+\varepsilon) \}} u_{[-1,1]}(\xi_{T_{K^\mathrm{C}}},y) \big] \\
&= v_1(x) - \lim_{\varepsilon \searrow 0}\lim_{y \searrow 1} \frac{2^{\alpha\hat\rho-1}c_{\alpha\rho}}{g(y)} \mathbb{E}^x \big[ \mathds{1}_{\{ T_{K^\mathrm{C}}<T_{[-1,1]}\}}\mathds{1}_{\{ \xi_{T_{K^\mathrm{C}}} \in (1,y+\varepsilon) \}} u_{[-1,1]}(\xi_{T_{K^\mathrm{C}}},y) \big].
\end{align*}
Hence, to prove harmonicity of $v_1$ it suffices to show
\begin{align}\label{beh_1}
\lim_{y \searrow 1}\mathbb{E}^x&\Big[ \mathds{1}_{\{ T_{K^\mathrm{C}}<T_{[-1,1]},|\xi_{T_{K^\mathrm{C}}}-y| >\varepsilon \}}\frac{|\xi_{T_{K^\mathrm{C}}}-y|^{\alpha-1}}{g(y)}\int\limits_1^{z(\xi_{T_{K^\mathrm{C}}},y)} (u-1)^{\alpha\rho} (u+1)^{\alpha\hat\rho-2} \, \dd u \Big] =0
\end{align}
for all $\varepsilon >0$ and
\begin{align}\label{beh_2}
\lim_{\varepsilon \searrow 0} \lim_{y \searrow 1} \frac{1}{g(y)} \mathbb{E}^x\big[ \mathds{1}_{\{ T_{K^\mathrm{C}}<T_{[-1,1]}\}}\mathds{1}_{\{ \xi_{T_{K^\mathrm{C}}} \in (1,y+\varepsilon) \}} u_{[-1,1]}(\xi_{T_{K^\mathrm{C}}},y) \big]=0.
\end{align}
We start with (\ref{beh_1}). First we note that
\begin{align}\label{est_z}
\begin{split}
\int\limits_1^{z(\xi_{T_{K^\mathrm{C}}},y)} (u-1)^{\alpha\rho} (u+1)^{\alpha\hat\rho-2} \, \dd u &\leq
(z(\xi_{T_{K^\mathrm{C}}},y)-1)^{\alpha\rho} \int\limits_1^{z(\xi_{T_{K^\mathrm{C}}},y)} (u+1)^{\alpha\hat\rho-2} \, \dd u \\
&\leq C_1 (z(\xi_{T_{K^\mathrm{C}}},y)-1)^{\alpha\rho}\\
&=\begin{cases}
C_1 \frac{(\xi_{T_{K^\mathrm{C}}}+1)^{\alpha\rho}(y-1)^{\alpha\rho}}{|\xi_{T_{K^\mathrm{C}}}-y|^{\alpha\rho}} \quad &\text{if } \xi_{T_{K^\mathrm{C}}}>y+\varepsilon \\
C_1 \frac{(|\xi_{T_{K^\mathrm{C}}}|-1)^{\alpha\rho}(y-1)^{\alpha\rho}}{|\xi_{T_{K^\mathrm{C}}}-y|^{\alpha\rho}} \quad &\text{if } \xi_{T_{K^\mathrm{C}}}<-1
\end{cases}\\
&\leq C_1 \frac{(|\xi_{T_{K^\mathrm{C}}}|+1)^{\alpha\rho}(y-1)^{\alpha\rho}}{|\xi_{T_{K^\mathrm{C}}}-y|^{\alpha\rho}}
\end{split}
\end{align}
where $C_1 = \int\limits_1^{\infty} (u+1)^{\alpha\hat\rho-2} \, \dd u<\infty$. With that we get on $\{ |\xi_{T_{K^\mathrm{C}}}-y| >\varepsilon \}$ (without loss of generality we assume $y<2$):
\begin{align*}
&\quad \frac{|\xi_{T_{K^\mathrm{C}}}-y|^{\alpha-1}}{g(y)} \int\limits_1^{z(\xi_{T_{K^\mathrm{C}}},y)} (u-1)^{\alpha\rho} (u+1)^{\alpha\hat\rho-2} \, \dd u \\
&\leq C_1 \frac{|\xi_{T_{K^\mathrm{C}}}-y|^{\alpha-1}}{g(y)} \frac{(|\xi_{T_{K^\mathrm{C}}}|+1)^{\alpha\rho}(y-1)^{\alpha\rho}}{|\xi_{T_{K^\mathrm{C}}}-y|^{\alpha\rho}}\\
&= C_1 |\xi_{T_{K^\mathrm{C}}}-y|^{\alpha\hat{\rho}-1} (|\xi_{T_{K^\mathrm{C}}}|+1)^{\alpha\rho}(y+1)^{1-\alpha\hat\rho}\\
&\leq C_1 |\xi_{T_{K^\mathrm{C}}}-y|^{\alpha\hat{\rho}-1} (|\xi_{T_{K^\mathrm{C}}}-y|^{\alpha\rho}+ (y+1)^{\alpha\rho}) (y+1)^{1-\alpha\hat\rho}\\
&= C_1 (y+1)^{1-\alpha\hat\rho}(|\xi_{T_{K^\mathrm{C}}}-y|^{\alpha-1}+ |\xi_{T_{K^\mathrm{C}}}-y|^{\alpha\hat{\rho}-1}(y+1)^{\alpha\rho})\\
&\leq C_1 3^{1-\alpha\hat\rho}(\varepsilon^{\alpha-1}+ 3^{\alpha\rho}\varepsilon^{\alpha\hat\rho-1})\\
&\leq C_1 3^{1+\alpha\rho-\alpha\hat\rho}(\varepsilon^{\alpha-1}+ \varepsilon^{\alpha\hat\rho-1}) =: C_\varepsilon.
\end{align*}
Hence, we can use dominated convergence to switch the $y$-limit and the expectation in \eqref{beh_1}. The following calculation on $\{ |\xi_{T_{K^\mathrm{C}}}-y| >\varepsilon \}$ shows that the integrand converges pointwise to $0$, which then yields \eqref{beh_1}:
\begin{align*}
&\quad\frac{|\xi_{T_{K^\mathrm{C}}}-y|^{\alpha-1}}{g(y)}\int\limits_1^{z(\xi_{T_{K^\mathrm{C}}},y)} (u-1)^{\alpha\rho} (u+1)^{\alpha\hat\rho-2} \, \dd u \\
&\leq 2^{\alpha\hat\rho-2} \frac{|\xi_{T_{K^\mathrm{C}}}-y|^{\alpha-1}}{g(y)}\int\limits_1^{z(\xi_{T_{K^\mathrm{C}}},y)} (u-1)^{\alpha\rho} \, \dd u \\
&= \frac{2^{\alpha\hat\rho-2}}{\alpha\rho+1} \frac{|\xi_{T_{K^\mathrm{C}}}-y|^{\alpha-1}}{g(y)}(z(\xi_{T_{K^\mathrm{C}}},y)-1)^{\alpha\rho+1}\\
&\leq \frac{2^{\alpha\hat\rho-2}}{\alpha\rho+1} \frac{|\xi_{T_{K^\mathrm{C}}}-y|^{\alpha-1}}{{(y-1)^{\alpha\rho}(y+1)^{\alpha\hat\rho-1}}} \frac{(|\xi_{T_{K^\mathrm{C}}}|+1)^{\alpha\rho+1}(y-1)^{\alpha\rho+1}}{|\xi_{T_{K^\mathrm{C}}}-y|^{\alpha\rho+1}}\\
&= \frac{2^{\alpha\hat\rho-2}}{\alpha\rho+1} |\xi_{T_{K^\mathrm{C}}}-y|^{\alpha\hat\rho-2}(|\xi_{T_{K^\mathrm{C}}}|+1)^{\alpha\rho+1}(y-1)(y+1)^{1-\alpha\hat\rho}\\
&\overset{y \searrow 1}{\longrightarrow} 0
\end{align*}
where we used the same estimate for $z(\xi_{T_{K^\mathrm{C}}},y)-1$ as in \eqref{est_z}.\smallskip
Now we show (\ref{beh_2}). We define $a := \min\big(\inf(K \cap (1,\infty)),\,-\sup(K \cap (-\infty,-1))\big)$. Since $y$ tends to $1$ and $\varepsilon$ to $0$, we may assume $a>y+\varepsilon$. It follows that $\xi_{T_{K^\mathrm{C}}} \in (1,y+\varepsilon)$ is only possible if $T_{K^\mathrm{C}} = T_{(-a,a)}$. So we have
\begin{align*}
&\quad \mathbb{E}^x\big[ \mathds{1}_{\{ T_{K^\mathrm{C}}<T_{[-1,1]}\}}\mathds{1}_{\{ \xi_{T_{K^\mathrm{C}}} \in (1,y+\varepsilon) \}} u_{[-1,1]}(\xi_{T_{K^\mathrm{C}}},y) \big]\\
&\leq \mathbb{E}^x\big[ \mathds{1}_{\{ \xi_{T_{(-a,a)}}\in (1,y+\varepsilon),T_{(-a,a)}<\infty \}} u_{[-1,1]}(\xi_{T_{(-a,a)}},y) \big].
\end{align*}
Furthermore, $\xi_{T_{(-a,a)}} = y$ happens with zero probability, and hence
\begin{align*}
&\quad\mathbb{E}^x\big[ \mathds{1}_{\{ \xi_{T_{(-a,a)}} \in (1,y+\varepsilon),T_{(-a,a)}<\infty \}} u_{[-1,1]}( \xi_{T_{(-a,a)}},y) \big]\\
&=\mathbb{E}^x\big[ \mathds{1}_{\{ \xi_{T_{(-a,a)}} \in (1,y),T_{(-a,a)}<\infty \}} u_{[-1,1]}(\xi_{T_{(-a,a)}},y) \big]\\
&\quad+\mathbb{E}^x\big[ \mathds{1}_{\{ \xi_{T_{(-a,a)}} \in (y,y+\varepsilon),T_{(-a,a)}<\infty \}} u_{[-1,1]}( \xi_{T_{(-a,a)}},y) \big].
\end{align*}
With the formulas for $u_{[-1,1]}$ of Profeta and Simon \cite{Pro_Sim_01} we get for $\xi_{T_{(-a,a)}} \in (1,y)$:
\begin{align*}
u_{[-1,1]}(\xi_{T_{(-a,a)}},y) &\leq \frac{2^{1-\alpha}}{\Gamma(\alpha\rho)\Gamma(\alpha\hat\rho)} (y-\xi_{T_{(-a,a)}})^{\alpha-1} \int\limits_1^{z(\xi_{T_{(-a,a)}},y)} (u-1)^{\alpha\hat\rho-1}(u+1)^{\alpha\rho-1}\, \dd u\\
&\leq \frac{2^{-\alpha\hat\rho}}{\alpha\hat\rho\Gamma(\alpha\rho)\Gamma(\alpha\hat\rho)} (y-\xi_{T_{(-a,a)}})^{\alpha-1} (z(\xi_{T_{(-a,a)}},y)-1)^{\alpha\hat\rho}\\
&\leq \frac{2^{-\alpha\hat\rho}}{\alpha\hat\rho\Gamma(\alpha\rho)\Gamma(\alpha\hat\rho)} (y-\xi_{T_{(-a,a)}})^{\alpha-1} \Big(\frac{(\xi_{T_{(-a,a)}}-1)(y+1)}{y-\xi_{T_{(-a,a)}}}\Big)^{\alpha\hat\rho}\\
&= \frac{1}{\alpha\hat\rho\Gamma(\alpha\rho)\Gamma(\alpha\hat\rho)} (y-\xi_{T_{(-a,a)}})^{\alpha\rho-1} (\xi_{T_{(-a,a)}}-1)^{\alpha\hat\rho}\\
&\leq \frac{1}{\alpha\hat\rho\Gamma(\alpha\rho)\Gamma(\alpha\hat\rho)} (y-\xi_{T_{(-a,a)}})^{\alpha\rho-1} (y-1)^{\alpha\hat\rho}.
\end{align*}
It follows for $x>a$ with Theorem 1.1 of Kyprianou et al. \cite{Kyp_Par_Wat_01} and the scaling property:
\begin{align*}
&\quad\mathbb{E}^x \left\lbrack \mathds{1}_{\left\lbrace \xi_{T_{(-a,a)}} \in (1,y),T_{(-a,a)}<\infty \right\rbrace} u_{[-1,1]}(\xi_{T_{(-a,a)}},y) \right\rbrack \\
&\leq \frac{1}{\Gamma(\alpha\rho)\Gamma(\alpha\hat\rho)} (y-1)^{\alpha\hat\rho}\mathbb{E}^x \left\lbrack \mathds{1}_{\left\lbrace \xi_{T_{(-a,a)}} \in (1,y), T_{(-a,a)}<\infty\right\rbrace}(y-\xi_{T_{(-a,a)}})^{\alpha\rho-1} \right\rbrack \\
&\leq \frac{\sin(\pi\alpha\hat\rho)}{\pi \Gamma(\alpha\rho)\Gamma(\alpha\hat\rho)} (x+a)^{\alpha\rho}(x-a)^{\alpha\hat\rho} (y-1)^{\alpha\hat\rho} \int\limits_{(1,y)} \frac{(y-u)^{\alpha\rho-1}}{(a+u)^{\alpha\rho}(a-u)^{\alpha\hat\rho} (x-u)}\, \dd u\\
&\leq \frac{a\sin(\pi\alpha\hat\rho)}{\pi \Gamma(\alpha\rho)\Gamma(\alpha\hat\rho)} \frac{(x+a)^{\alpha\rho}(x-a)^{\alpha\hat\rho}}{(a+1)^{\alpha\rho}(a-y)^{\alpha\hat\rho} (x-y)} (y-1)^{\alpha\hat\rho} \int\limits_{(1,y)} (y-u)^{\alpha\rho-1} \dd u\\
&= \frac{a\sin(\pi\alpha\hat\rho)}{\pi \alpha\rho \Gamma(\alpha\rho)\Gamma(\alpha\hat\rho)} \frac{(x+a)^{\alpha\rho}(x-a)^{\alpha\hat\rho}}{(a+1)^{\alpha\rho}(a-y)^{\alpha\hat\rho} (x-y)} (y-1)^{\alpha\hat\rho} (y-1)^{\alpha\rho}.
\end{align*}
With this estimate, whose right-hand side is of order $(y-1)^{\alpha\rho+\alpha\hat\rho}=(y-1)^{\alpha}$, we see immediately
$$\lim_{y \searrow 1} \frac{1}{(y-1)^{\alpha\rho}} \mathbb{E}^x \big[ \mathds{1}_{\{ \xi_{T_{(-a,a)}} \in (1,y) \}} u_{[-1,1]}(\xi_{T_{(-a,a)}},y) \big] = 0$$
for $x>1$. For $x<-1$ we use Theorem 1.1 of Kyprianou et al. \cite{Kyp_Par_Wat_01} in a similar way to deduce the analogous claim. Similarly we get for $\xi_{T_{(-a,a)}} \in (y,y+\varepsilon)$ (without loss of generality $y+\varepsilon<2$):
\begin{align*}
u_{[-1,1]}(\xi_{T_{(-a,a)}},y) &\leq \frac{2^{1-\alpha}}{\Gamma(\alpha\rho)\Gamma(\alpha\hat\rho)}(\xi_{T_{(-a,a)}}-y)^{\alpha-1} \int\limits_1^{z(\xi_{T_{(-a,a)}},y)} (u-1)^{\alpha\rho-1} (u+1)^{\alpha\hat\rho-1} \, \dd u\\
&\leq \frac{2^{-\alpha\rho}}{\alpha\rho\Gamma(\alpha\rho)\Gamma(\alpha\hat\rho)} (\xi_{T_{(-a,a)}}-y)^{\alpha-1} (z(\xi_{T_{(-a,a)}},y)-1)^{\alpha\rho} \\
&= \frac{2^{-\alpha\rho}}{\alpha\rho\Gamma(\alpha\rho)\Gamma(\alpha\hat\rho)} (\xi_{T_{(-a,a)}}-y)^{\alpha\hat\rho-1} (\xi_{T_{(-a,a)}}+1)^{\alpha\rho}(y-1)^{\alpha\rho}\\
&\leq \frac{2^{-\alpha\rho} 3^{\alpha\rho}}{\alpha\rho\Gamma(\alpha\rho)\Gamma(\alpha\hat\rho)}(y-1)^{\alpha\rho} (\xi_{T_{(-a,a)}}-y)^{\alpha\hat\rho-1}.
\end{align*}
Define $C_2 := \frac{2^{-\alpha\rho} 3^{\alpha\rho}}{\alpha\rho\Gamma(\alpha\rho)\Gamma(\alpha\hat\rho)}$; then we get, again with Theorem 1.1 of \cite{Kyp_Par_Wat_01}, for $x>1$:
\begin{align*}
&\quad\mathbb{E}^x \big[ \mathds{1}_{\{ \xi_{T_{(-a,a)}} \in (y,y+\varepsilon),T_{(-a,a)}<\infty \}} u_{[-1,1]}(\xi_{T_{(-a,a)}},y) \big] \\
&\leq C_2(y-1)^{\alpha\rho} \mathbb{E}^x \big[ \mathds{1}_{\{ \xi_{T_{(-a,a)}} \in (y,y+\varepsilon),T_{(-a,a)}<\infty \}} (\xi_{T_{(-a,a)}}-y)^{\alpha\hat\rho-1} \big] \\
&\leq \frac{C_2a\sin(\pi\alpha\hat\rho)}{\pi} (x+a)^{\alpha\rho}(x-a)^{\alpha\hat\rho} (y-1)^{\alpha\rho} \int\limits_{(y,y+\varepsilon)} \frac{(u-y)^{\alpha\hat\rho-1}}{(a+u)^{\alpha\rho}(a-u)^{\alpha\hat\rho} (x-u)}\, \dd u \\
&\leq \frac{C_2a\sin(\pi\alpha\hat\rho)}{\pi} \frac{(x+a)^{\alpha\rho}(x-a)^{\alpha\hat\rho}(y-1)^{\alpha\rho}}{(a+y)^{\alpha\rho}(a-(y+\varepsilon))^{\alpha\hat\rho} (x-(y+\varepsilon))} \int\limits_{(y,y+\varepsilon)} (u-y)^{\alpha\hat\rho-1} \dd u\\
&= \frac{C_2a\sin(\pi\alpha\hat\rho)}{\pi} \frac{(x+a)^{\alpha\rho}(x-a)^{\alpha\hat\rho}}{(a+y)^{\alpha\rho}(a-(y+\varepsilon))^{\alpha\hat\rho} (x-(y+\varepsilon))} \frac{(y-1)^{\alpha\rho} \varepsilon^{\alpha\hat\rho}}{\alpha\hat\rho}.
\end{align*}
So we have:
\begin{align*}
&\quad\lim_{\varepsilon \searrow 0}\lim_{y \searrow 1} \frac{1}{(y-1)^{\alpha\rho}} \mathbb{E}^x\big[ \mathds{1}_{\left\lbrace T_{K^\mathrm{C}}<T_{[-1,1]}\right\rbrace}\mathds{1}_{\left\lbrace \xi_{T_{K^\mathrm{C}}} \in (1,y+\varepsilon) \right\rbrace} u_{[-1,1]}(\xi_{T_{K^\mathrm{C}}},y) \big]\\
&\leq \frac{C_2\sin(\pi\alpha\hat\rho)}{\pi\alpha\hat\rho} \lim_{\varepsilon \searrow 0}\lim_{y \searrow 1} \Big[\frac{(x+a)^{\alpha\rho}(x-a)^{\alpha\hat\rho}}{(a+y)^{\alpha\rho}(a-(y+\varepsilon))^{\alpha\hat\rho} (x-(y+\varepsilon))} \varepsilon^{\alpha\hat\rho} \Big]\\
&= 0.
\end{align*}
The claim for $x<-1$ follows again similarly. This shows (\ref{beh_2}) and hence the harmonicity of $v_1$.
\end{proof}
\begin{remark}
If $\alpha \leq 1$, another (perhaps more elegant) way of proving harmonicity of $v_1$ is to prove that the renewal densities of the MAP which corresponds to the stable process via the Lamperti-Kiu transform (for explicit expressions see Corollary 1.6 of \cite{Kyp_Riv_Sen_01}) are harmonic functions for the MAP killed on entering the negative half-line. This claim should be true since Silverstein \cite{Sil_01} proved the analogous claim for a Lévy process which does not drift to $-\infty$. One can show that $v_1$ and $v_{-1}$ are exactly these renewal densities (with the argument replaced by its logarithm). Via the Lamperti-Kiu transform one could then obtain harmonicity of $v_1$ and $v_{-1}$ for the stable process killed in $[-1,1]$.
\end{remark}
\subsection{Behaviour at the killing time}\label{sec_proof_paths}
Before we start with the proofs we should discuss some elementary properties of $v_1$ and $v_{-1}$. First, it can be seen immediately that $v_1$ has a pole at $1$ and $v_{-1}$ has a pole at $-1$; hence $v:= v_1 + v_{-1}$ has poles at $1$ and $-1$. Furthermore, $v_1$ is bounded on $(-\infty,-1) \cup (K,\infty)$ for all $K>1$. For $\alpha \leq 1$ this is obvious, and for $\alpha>1$ it can be seen by showing that $v_1(x)$ converges as $x \rightarrow \pm \infty$ (a similar convergence was shown in the proof of Lemma 3.3 of \cite{Doer_Kyp_Wei_01}). Similarly, $v_{-1}$ is bounded on $(-\infty,-K) \cup (1,\infty)$ for all $K>1$. It follows that $v$ is bounded on $(-\infty,-K_1) \cup (K_2,\infty)$ for all $K_1,K_2>1$.\smallskip
For the first results we need to define the potential of the $h$-transformed process via
$$U_{v_1}(x,\dd y) = \mathbb{E}^x_{v_1} \Big[ \int\limits_0^\zeta \mathds{1}_{\left\lbrace \xi_t \in \dd y \right\rbrace} \, \dd t \Big],\quad x,y \notin [-1,1],$$
which is the expected time the process $(\xi,\mathbb{P}^x_{v_1})$ spends in $\dd y$ before it is killed. With a Fubini flip we obtain
$$U_{v_1}(x,\dd y) = \frac{v_1(y)}{v_1(x)}\,U_{[-1,1]}(x,\dd y) = \frac{v_1(y)}{v_1(x)}u_{[-1,1]}(x,y)\, \dd y.$$
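In more detail, Tonelli's theorem and the definition of the $h$-transform give
$$U_{v_1}(x,\dd y) = \int\limits_0^\infty \mathbb{P}^x_{v_1}(\xi_t \in \dd y,\, t < \zeta) \, \dd t = \frac{v_1(y)}{v_1(x)} \int\limits_0^\infty \mathbb{P}^x(\xi_t \in \dd y,\, t < T_{[-1,1]}) \, \dd t,$$
and the last integral is precisely $U_{[-1,1]}(x,\dd y)$.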
The following result shows on the one hand that the $h$-transformed process is almost surely bounded, and on the other hand that the expected time the process spends in a set of the form $[-b,-1)\cup(1,b]$ is finite.
\begin{lemma}\label{lemma_help_v1}
Let $\xi$ be an $\alpha$-stable process with $\alpha \in (0,2)$ and both sided jumps. Then it holds for $x\notin [-1,1]$:
\begin{enumerate}[(i)]
\item $\mathbb{P}_{v_1}^x(T_{(-\infty,-d]\cup[d,\infty)} < \zeta \, \forall d>1)=0$.
\item $U_{v_1}(x,[-b,-1)\cup(1,b]) <\infty$ for all $b>1$.
\end{enumerate}
\end{lemma}
\begin{proof}
(i) We already noticed that $v_1$ is bounded on $(-\infty,-K)\cup(K,\infty)$ for all $K>1$. So we obtain, applying dominated convergence in the last equality,
\begin{align*}
\mathbb{P}_{v_1}^x(T_{(-\infty,-d]\cup[d,\infty)} <\zeta \, \forall d>1) &= \lim_{d \rightarrow \infty} \mathbb{P}_{v_1}^x(T_{(-\infty,-d]\cup[d,\infty)} <\zeta)\\
&= \lim_{d \rightarrow \infty} \mathbb{E}^x \Big[ \mathds{1}_{\{ T_{(-\infty,-d]\cup[d,\infty)}<T_{[-1,1]} \}} \frac{v_1(\xi_{T_{(-\infty,-d]\cup[d,\infty)}})}{v_1(x)} \Big] \\
&= \mathbb{E}^x \Big[ \lim_{d \rightarrow \infty} \mathds{1}_{\{ T_{(-\infty,-d]\cup[d,\infty)}<T_{[-1,1]}\}} \frac{v_1(\xi_{T_{(-\infty,-d]\cup[d,\infty)}})}{v_1(x)} \Big].
\end{align*}
In the case $\alpha<1$ we use that $v_1(y)$ converges to $0$ for $y \rightarrow \pm \infty$. If $\alpha \geq 1$ we see that $\mathds{1}_{\{ T_{(-\infty,-d]\cup[d,\infty)}<T_{[-1,1]}\}}$ converges to $0$ almost surely since $(\xi,\mathbb{P}^x)$ is recurrent. This shows (i).\smallskip
(ii) It holds
$$U_{v_1}(x,[-b,-1)\cup(1,b]) = \frac{1}{v_1(x)}\int\limits_{[-b,-1)\cup(1,b]} v_1(y) u_{[-1,1]}(x,y) \, \dd y.$$
Since $v_1$ is bounded away from the boundary points and $u_{[-1,1]}(x,\cdot)$ is integrable on all compact intervals, the only points where this integral could be infinite are the boundary points $1$ and $-1$. From the explicit formulas of \cite{Pro_Sim_01} we see that $u_{[-1,1]}(x,y)$ converges to $0$ as $y \rightarrow \pm 1$. Furthermore, $v_1(y)$ behaves as $(y-1)^{\alpha\hat{\rho}-1}$ for $y \searrow 1$ and as $(|y|-1)^{\alpha\rho}$ for $y \nearrow -1$. Since $\alpha\rho,\alpha\hat\rho \in (0,1)$, these arguments show $U_{v_1}(x,[-b,-1)\cup(1,b])<\infty$.
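Indeed, up to factors which stay bounded near the boundary points, the integrand behaves like $(y-1)^{\alpha\hat\rho-1}$ near $1$ and like $(|y|-1)^{\alpha\rho}$ near $-1$, and
$$\int\limits_1^{2} (y-1)^{\alpha\hat\rho-1}\, \dd y = \frac{1}{\alpha\hat\rho} < \infty \quad \text{as well as} \quad \int\limits_{-2}^{-1} (|y|-1)^{\alpha\rho}\, \dd y = \frac{1}{\alpha\rho+1} < \infty.$$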
\end{proof}
Combining the two statements of Lemma \ref{lemma_help_v1} we can show Proposition \ref{thm_abs_above}.
\begin{proof}[Proof of Proposition \ref{thm_abs_above}]
We show that $\mathbb{P}_{v_1}^x(\zeta < \infty)=1$ and $\mathbb{P}_{v_1}^x(\xi_{\zeta-}=1)=1$ and start with the first equality. From Lemma \ref{lemma_help_v1} (ii) we know
$$\mathbb{P}_{v_1}^x\Big(\int\limits_0^\zeta \mathds{1}_{\left\lbrace \xi_t \in [-b,-1)\cup(1,b] \right\rbrace} \, \dd t <\infty \Big)=1$$
for all $b>1$. By the continuity of probability measures we see
\begin{align*}
&\quad\mathbb{P}_{v_1}^x\Big(\int\limits_0^\zeta \mathds{1}_{\{ \xi_t \in [-b,-1)\cup(1,b] \}} \, \dd t <\infty \, \forall b>1\Big)\\
&= \lim\limits_{b \rightarrow \infty} \mathbb{P}_{v_1}^x \Big(\int\limits_0^\zeta \mathds{1}_{\{ \xi_t \in [-b,-1)\cup(1,b] \}} \, \dd t <\infty \Big) = 1.
\end{align*}
On the other hand Lemma \ref{lemma_help_v1} (i) yields
$$\mathbb{P}_{v_1}^x(\exists d>1: \, T_{(-\infty,-d]\cup [d,\infty)} \geq \zeta)=1.$$
Since the intersection of two events with probability $1$ has again probability $1$ it follows:
\begin{align*}
\mathbb{P}_{v_1}^x(\zeta < \infty) &= \mathbb{P}_{v_1}^x \Big(\int\limits_0^\zeta \mathds{1}_{\{ \xi_t \in \mathbb{R} \setminus [-1,1] \}} \, \dd t <\infty \Big)\\
&\geq \mathbb{P}_{v_1}^x \Big(\int\limits_0^\zeta \mathds{1}_{\{ \xi_t \in [-b,-1)\cup(1,b] \}} \, \dd t <\infty \, \forall b>1 \, , \, \exists d>1: T_{(-\infty,-d]\cup[d,\infty)} \geq \zeta \Big)\\
&= 1.
\end{align*}
To prove $\mathbb{P}_{v_1}^x(\xi_{\zeta-}=1)=1$ we use a procedure which is inspired by Chaumont \cite{Chau_01}. Using that $v_1$ is harmonic (Theorem \ref{thm_harm}) we see for $x \notin [-1,1]$ and
$$M_{a,b} = (-\infty,-b) \cup (-a,-1) \cup (1,1+\varepsilon) \cup (b,\infty)$$
with $1<a<b$ and $\varepsilon>0$ (obviously the complement of $M_{a,b}$ is compact in $\mathbb{R}\setminus [-1,1]$):
$$\mathbb{E}^x \big[ \mathds{1}_{\{ T_{M_{a,b}} < T_{[-1,1]} \}} v_1(\xi_{T_{M_{a,b}}}) \big] = v_1(x).$$
It follows that
$$\mathbb{P}_{v_1}^x(T_{M_{a,b}} < \zeta) = \frac{1}{v_1(x)}\mathbb{E}^x \big[ \mathds{1}_{\{ T_{M_{a,b}} < T_{[-1,1]} \}} v_1(\xi_{T_{M_{a,b}}}) \big] = 1.$$
From Lemma \ref{lemma_help_v1} we know on the one hand
\begin{align}\label{help_11}
\mathbb{P}_{v_1}^x(T_{(-\infty,-b) \cup (b,\infty)} < \zeta \, \forall \, b>1)&= 0.
\end{align}
On the other hand we see, applying dominated convergence using that $v_1$ is bounded on $(-\infty,-1)$,
\begin{align}\label{help_12}
\begin{split}
\mathbb{P}_{v_1}^x(T_{(-a,-1)} < \zeta \, \forall \, a>1)&= \lim_{a \searrow 1} \mathbb{P}_{v_1}^x(T_{(-a,-1)} < \zeta) \\
&= \lim_{a \searrow 1} \frac{1}{v_1(x)}\mathbb{E}^x \big[ \mathds{1}_{\{ T_{(-a,-1)} < T_{[-1,1]} \}} v_1(\xi_{T_{(-a,-1)}}) \big] \\
&= \frac{1}{v_1(x)} \mathbb{E}^x \big[ \lim_{a \searrow 1} \mathds{1}_{\{ T_{(-a,-1)} < T_{[-1,1]} \}} v_1(\xi_{T_{(-a,-1)}}) \big] \\
&= 0.
\end{split}
\end{align}
In the last step we used that $v_1(y)$ converges to $0$ as $y \nearrow -1$. Note that this argument does not work if $(-a,-1)$ is replaced by $(1,a)$ because $v_1$ has a pole at $1$. Now we plug in \eqref{help_11} and \eqref{help_12} to obtain for all $\varepsilon>0$:
\begin{align*}
&\quad \mathbb{P}_{v_1}^x(T_{(1,1+\varepsilon)} < \zeta)\\
&= \mathbb{P}_{v_1}^x(\{ T_{(1,1+\varepsilon)} < \zeta \} \cup \{ T_{(-a,-1)} < \zeta \, \forall \, a>1 \} \cup \{ T_{(-\infty,-b) \cup (b,\infty)} < \zeta \, \forall \, b>1 \}) \\
&= \lim_{b \rightarrow \infty} \lim_{a \searrow 1} \mathbb{P}_{v_1}^x(\left\lbrace T_{(1,1+\varepsilon)} < \zeta \right\rbrace \cup \left\lbrace T_{(-a,-1)} < \zeta \right\rbrace \cup \left\lbrace T_{(-\infty,-b ) \cup (b,\infty)} < \zeta \right\rbrace) \\
&= \lim_{b \rightarrow \infty} \lim_{a \searrow 1} \mathbb{P}_{v_1}^x( T_{M_{a,b}} < \zeta) \\
&= 1.
\end{align*}
With this in hand we show the final claim, namely that $\xi_{\zeta-}=1$ almost surely under $\mathbb{P}^x_{v_1}$. By $(1,1+\delta)^{\mathrm{C}}$ we mean as usual $(\mathbb{R}\setminus [-1,1]) \setminus (1,1+\delta)$.
\begin{align}\label{leftlimit}
\begin{split}
\mathbb{P}^x_{v_1}(\xi_{\zeta-}=1) &= \mathbb{P}^x_{v_1}(\forall \,\delta>0 \, \exists\, \varepsilon \in (0,\delta] \,:\, \xi_t \in (1,1+\delta) \,\forall \, t \in [T_{(1,1+\varepsilon)},\zeta) )\\
&= \lim_{\delta \searrow 0} \lim_{\varepsilon \searrow 0} \mathbb{P}^x_{v_1}(\xi_t \in (1,1+\delta) \,\forall \, t \in [T_{(1,1+\varepsilon)},\zeta) ) \\
&= \lim_{\delta \searrow 0} \lim_{\varepsilon \searrow 0} \mathbb{E}^x_{v_1}\big[ \mathbb{P}_{v_1}^{\xi_{T_{(1,1+\varepsilon)}}}(\xi_t \in (1,1+\delta) \,\forall \, t \in [0,\zeta)) \big] \\
&= \lim_{\delta \searrow 0} \lim_{\varepsilon \searrow 0} \mathbb{E}^x_{v_1}\big[ \mathbb{P}_{v_1}^{\xi_{T_{(1,1+\varepsilon)}}}(T_{(1,1+\delta)^{\mathrm{C}}} \geq \zeta) \big] \\
&= 1- \lim_{\delta \searrow 0} \lim_{\varepsilon \searrow 0} \mathbb{E}^x_{v_1}\big[ \mathbb{P}_{v_1}^{\xi_{T_{(1,1+\varepsilon)}}}(T_{(1,1+\delta)^{\mathrm{C}}} < \zeta) \big] \\
&= 1- \lim_{\delta \searrow 0} \mathbb{E}^x_{v_1}\big[ \lim_{\varepsilon \searrow 0}\mathbb{P}_{v_1}^{\xi_{T_{(1,1+\varepsilon)}}}(T_{(1,1+\delta)^{\mathrm{C}}} < \zeta) \big] \\
&= 1- \lim_{\delta \searrow 0} \mathbb{E}^x_{v_1}\big[ \lim_{\varepsilon \searrow 0}\mathbb{P}_{v_1}^{1+\varepsilon}(T_{(1,1+\delta)^{\mathrm{C}}} < \zeta) \big].
\end{split}
\end{align}
In the second equality we used that $T_{(1,1+\varepsilon)}<\zeta$ almost surely and in the third equality we used the strong Markov property of $(\xi,\mathbb{P}^x_{v_1})$. Let us consider the $\varepsilon$-limit inside the expectation. Using the definition of $\mathbb{P}^x_{v_1}$ we see:
\begin{align*}
\mathbb{P}_{v_1}^{1+\varepsilon}(T_{(1,1+\delta)^{\mathrm{C}}} < \zeta) &= \frac{1}{v_1(1+\varepsilon)} \mathbb{E}^{1+\varepsilon} \big[ \mathds{1}_{\{T_{(1,1+\delta)^{\mathrm{C}}} < T_{[-1,1]} \}} v_1(\xi_{T_{(1,1+\delta)^{\mathrm{C}}}}) \big].
\end{align*}
Since for fixed $\delta>0$ the function $v_1$ is bounded on $(-\infty,-1)\cup (1+\delta,\infty)$ and $\lim_{\varepsilon \searrow 0}v_1(1+\varepsilon) = \infty$, it follows that
$$\lim_{\varepsilon \searrow 0}\mathbb{P}_{v_1}^{1+\varepsilon}(T_{(1,1+\delta)^{\mathrm{C}}} < \zeta) = 0$$
and with \eqref{leftlimit} we conclude $\mathbb{P}^x_{v_1}(\xi_{\zeta-}=1)=1$.
\end{proof}
Proposition \ref{thm_abs_both} can be proved similarly to Proposition \ref{thm_abs_above} using the following lemma:
\begin{lemma}\label{lemma_help_v}
Let $\xi$ be an $\alpha$-stable process with $\alpha \in (0,2)$ and both sided jumps. Then it holds:
\begin{enumerate}[(i)]
\item $\mathbb{P}_{v}^x(T_{(-\infty,-d]\cup[d,\infty)} < \zeta \, \forall d>1)=0$.
\item $U_{v}(x,[-b,-1)\cup(1,b]) <\infty$ for all $b>1$.
\end{enumerate}
\end{lemma}
\begin{proof}
The proof is analogous to the one of Lemma \ref{lemma_help_v1}.
\end{proof}
The proof of Proposition \ref{thm_abs_both} consists of combining these two statements as in the proof of Proposition \ref{thm_abs_above}.
\subsection{Conditioning and $h$-transform} \label{sec_proof_conditioning}
To connect the $h$-transform with the conditioned process we need a connection between the harmonic function and the asymptotic probability of the event we condition on. We have to separate the cases $\alpha<1$ and $\alpha \geq 1$.
\subsubsection{The case $\alpha<1$}
\begin{proposition} \label{prop_as_<1}
Let $\xi$ be an $\alpha$-stable process with $\alpha \in (0,1)$ and both sided jumps. Then it holds:
\begin{align}\label{eq_as1_<1}
\frac{\pi \Gamma(1-\alpha\rho)\Gamma(1-\alpha\hat\rho)}{2^{\alpha}\Gamma(1-\alpha)} v_1(x) &= \lim_{\varepsilon \searrow 0} \frac{1}{\varepsilon}\mathbb{P}^x(\xi_{\underline{m}} \in (1,1+\varepsilon)), \quad x \in \mathbb{R} \setminus [-1,1],
\end{align}
and
\begin{align}\label{eq_as_<1}
\frac{\pi \Gamma(1-\alpha\rho)\Gamma(1-\alpha\hat\rho)}{2^{\alpha}\Gamma(1-\alpha)} v(x) &= \lim_{\varepsilon \searrow 0} \frac{1}{\varepsilon}\mathbb{P}^x(|\xi_{\underline{m}}| \in (1,1+\varepsilon)), \quad x \in \mathbb{R} \setminus [-1,1].
\end{align}
\end{proposition}
\begin{proof}
The proof is based on Proposition 1.1 of \cite{Kyp_Riv_Sen_01} where we find an explicit expression for the distribution of $\xi_{\underline{m}}$. For $x>1$ this gives
\begin{align*}
\mathbb{P}^{x}(\xi_{\underline{m}} \in (1,1+\varepsilon)) &= \frac{2^{-\alpha}\Gamma(1-\alpha\rho)}{\Gamma(1-\alpha)\Gamma(\alpha\hat\rho)} \int\limits_1^{x \land (1+\varepsilon)} z^{-\alpha} (x-z)^{\alpha\hat{\rho}-1}(x+z)^{\alpha\rho} \, \dd z \\
&= \frac{2^{-\alpha}\Gamma(1-\alpha\rho)}{\Gamma(1-\alpha)\Gamma(\alpha\hat\rho)} \int\limits_1^{x \land (1+\varepsilon)} z^{-1} \big(\frac{x}{z}-1 \big)^{\alpha\hat{\rho}-1} \big(\frac{x}{z}+1\big)^{\alpha\rho} \, \dd z \\
&= \frac{2^{-\alpha}\Gamma(1-\alpha\rho)}{\Gamma(1-\alpha)\Gamma(\alpha\hat\rho)} \int\limits_{\frac{x}{x \land (1+\varepsilon)}}^{x} \frac{x}{z^2} \frac{z}{x} (z-1)^{\alpha\hat{\rho}-1}(z+1)^{\alpha\rho} \, \dd z \\
&= \frac{2^{-\alpha}\Gamma(1-\alpha\rho)}{\Gamma(1-\alpha)\Gamma(\alpha\hat\rho)} \int\limits_{1 \lor \frac{x}{1+\varepsilon}}^{x}(1+\frac{1}{z}) \psi_{\alpha\rho}(z) \, \dd z.
\end{align*}
Applying l'H\^{o}pital's rule to the first calculation we obtain:
\begin{align*}
\frac{\Gamma(1-\alpha)\Gamma(\alpha\hat\rho)}{2^{-\alpha}\Gamma(1-\alpha\rho)} \lim_{\varepsilon \searrow 0}\frac{1}{\varepsilon} \mathbb{P}^x(\xi_{\underline{m}} \in (1,1+\varepsilon)) &= \lim_{\varepsilon \searrow 0}\frac{1}{\varepsilon} \int\limits_{\frac{x}{1+\varepsilon}}^{x}\big(1+\frac{1}{z}\big) \psi_{\alpha\rho}(z) \, \dd z \\
&= \lim_{\varepsilon \searrow 0} \frac{x}{(1+\varepsilon)^2} \big(1+\frac{1+\varepsilon}{x}\big) \psi_{\alpha\rho}(\frac{x}{1+\varepsilon})\\
&=(x+1) \psi_{\alpha\rho}(x) \\
&= \frac{1}{\sin(\pi\alpha\hat{\rho})} v_1(x).
\end{align*}
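In the last step we used the explicit form of $v_1$ for $\alpha<1$, namely $v_1(x) = \sin(\pi\alpha\hat\rho)(x-1)^{\alpha\hat\rho-1}(x+1)^{\alpha\rho}$ for $x>1$, so that indeed $(x+1)\psi_{\alpha\rho}(x) = (x-1)^{\alpha\hat\rho-1}(x+1)^{\alpha\rho} = v_1(x)/\sin(\pi\alpha\hat\rho)$.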
Since $\sin(\pi\alpha\hat{\rho}) = \pi /(\Gamma(\alpha\hat{\rho})\Gamma(1-\alpha\hat{\rho}))$ this shows \eqref{eq_as1_<1} for $x>1$. For $x<-1$ we first use duality to deduce:
\begin{align*}
\mathbb{P}^{x}(\xi_{\underline{m}} \in (1,1+\varepsilon)) &= \hat{\mathbb{P}}^{-x}(\xi_{\underline{m}} \in (-1-\varepsilon,-1))\\
&= \frac{2^{-\alpha}\Gamma(1-\alpha\hat\rho)}{\Gamma(1-\alpha)\Gamma(\alpha\rho)} \int\limits_{1 \lor \frac{-x}{1+\varepsilon}}^{-x}\big(1-\frac{1}{z}\big) \psi_{\alpha\hat\rho}(z) \, \dd z,
\end{align*}
where the second equality is verified using a similar calculation as above. Hence, it follows, for $x<-1$, that
\begin{align*}
\frac{\Gamma(1-\alpha)\Gamma(\alpha\rho)}{2^{-\alpha}\Gamma(1-\alpha\hat\rho)} \lim_{\varepsilon \searrow 0} \frac{1}{\varepsilon} \mathbb{P}^x(\xi_{\underline{m}} \in (1,1+\varepsilon)) &= \lim_{\varepsilon \searrow 0} \frac{1}{\varepsilon} \int\limits_{\frac{-x}{1+\varepsilon}}^{-x}(1-\frac{1}{z}) \psi_{\alpha\hat\rho}(z) \, \dd z\\
&=\lim_{\varepsilon \searrow 0} \frac{-x}{(1+\varepsilon)^2} \big(1-\frac{1+\varepsilon}{-x}\big) \psi_{\alpha\hat\rho}\big(\frac{-x}{1+\varepsilon}\big) \\
&= (-x-1)\psi_{\alpha\hat\rho}(-x)\\
&= \frac{1}{\sin(\pi\alpha\rho)} v_1(x).
\end{align*}
Again we use $\sin(\pi\alpha\rho) = \frac{\pi}{\Gamma(\alpha\rho)\Gamma(1-\alpha\rho)}$ to obtain \eqref{eq_as1_<1} for $x<-1$.\smallskip
Similarly, \eqref{eq_as_<1} can be deduced as follows. Analogously to the proof of the first equation we can show
\begin{align*}
\frac{\pi \Gamma(1-\alpha\rho)\Gamma(1-\alpha\hat\rho)}{2^{\alpha}\Gamma(1-\alpha)} v_{-1}(x) &= \lim_{\varepsilon \searrow 0} \frac{1}{\varepsilon}\mathbb{P}^x(\xi_{\underline{m}} \in (-(1+\varepsilon),-1)), \quad x \in \mathbb{R} \setminus [-1,1].
\end{align*}
Since the events $\{\xi_{\underline m} \in (1,1+\varepsilon)\}$ and $\{\xi_{\underline m} \in (-(1+\varepsilon),-1)\}$ are disjoint with union $\{|\xi_{\underline m}| \in (1,1+\varepsilon)\}$, adding the two limits and using $v = v_1 + v_{-1}$ shows \eqref{eq_as_<1}.
\end{proof}
Now we are ready to prove the connection between the $h$-transform and the conditioned process.
\begin{proof}[Proof of Theorems \ref{thm_cond_v1_<1} and \ref{thm_cond_v_<1}]
We start with $x>1$. First note for $\delta>\varepsilon>0$:
\begin{align*}
&\quad\mathbb{P}^x(\Lambda, t< T_{(-(1+\delta),1+\delta)} ,\xi_{\underline m} \in (1,1+\varepsilon)) \\
&=\mathbb{P}^x(\Lambda, t< T_{(-(1+\delta),1+\delta)}, t < \underline m ,\xi_{\underline m} \in (1,1+\varepsilon)).
\end{align*}
Now we denote the shift operator on path space by $\theta_t: D \rightarrow D$, i.e. $(\xi \circ \theta_t)_s = \xi_{s+t}$. With the tower property of conditional expectation and the Markov property in its shift-operator form (see e.g. \cite{Chu_Wal_01}, p. 8) it holds:
\begin{align*}
&\quad \mathbb{P}^x(\Lambda, t< T_{(-(1+\delta),1+\delta)}, t < \underline m ,\xi_{\underline m} \in (1,1+\varepsilon)) \\
&= \mathbb{E}^x \Big[ \mathds{1}_\Lambda \mathbb{E}^x \big[\mathds{1}_{\{ t< T_{(-(1+\delta),1+\delta)} \}} \mathds{1}_{\{ \xi_{\underline m} \in (1,1+\varepsilon) \}} \,|\, \mathcal{F}_t \big] \Big] \\
&= \mathbb{E}^x \Big[ \mathds{1}_\Lambda \mathbb{E}^x \big[\mathds{1}_{\{ t< T_{(-(1+\delta),1+\delta)} \}}(\mathds{1}_{\{ \xi_{\underline m} \in (1,1+\varepsilon) \}} \circ \theta_t )\,|\, \mathcal{F}_t \big] \Big] \\
&= \mathbb{E}^x \Big[ \mathds{1}_\Lambda \mathds{1}_{\{ t< T_{(-(1+\delta),1+\delta)} \}} \mathbb{E}^x \big[\mathds{1}_{\{ \xi_{\underline m} \in (1,1+\varepsilon) \}} \circ \theta_t \,|\, \mathcal{F}_t \big] \Big] \\
&= \mathbb{E}^x \Big[ \mathds{1}_\Lambda \mathds{1}_{\{ t< T_{(-(1+\delta),1+\delta)} \}} \mathbb{P}^{\xi_t}(\xi_{\underline m} \in (1,1+\varepsilon)) \Big].
\end{align*}
Hence, we have
\begin{align*}
&\quad\mathbb{P}^x(\Lambda, t< T_{(-(1+\delta),1+\delta)} ,\xi_{\underline m} \in (1,1+\varepsilon))\\
&= \mathbb{E}^x \Big[ \mathds{1}_\Lambda \mathds{1}_{\{ t< T_{(-(1+\delta),1+\delta)} \}} \mathbb{P}^{\xi_t}(\xi_{\underline m} \in (1,1+\varepsilon)) \Big], \quad x \notin [-1,1],\ \delta>\varepsilon>0.
\end{align*}
With the help of this application of the Markov property we obtain
\begin{align*}
&\quad \mathbb{P}^x(\Lambda, t< T_{(-(1+\delta),1+\delta)} \, |\,\xi_{\underline m} \in (1,1+\varepsilon))\\
&= \frac{\mathbb{P}^x(\Lambda, t< T_{(-(1+\delta),1+\delta)},\xi_{\underline m} \in (1,1+\varepsilon))}{\mathbb{P}^x(\xi_{\underline m} \in (1,1+\varepsilon))} \\
&= \mathbb{E}^x \Big[ \mathds{1}_\Lambda \mathds{1}_{\{ t< T_{(-(1+\delta),1+\delta)}\}} \frac{\mathbb{P}^{\xi_t}(\xi_{\underline m} \in (1,1+\varepsilon))}{\mathbb{P}^x(\xi_{\underline m} \in (1,1+\varepsilon))} \Big].
\end{align*}
Now we would like to replace the ratio inside the expectation by $v_1(\xi_t)/v_1(x)$, using Proposition \ref{prop_as_<1}, as $\varepsilon$ tends to $0$. For that we need to argue why we can move the $\varepsilon$-limit inside the expectation. Without loss of generality we assume $|x|>1+\delta>1+\varepsilon$. Note that for $y>1+\delta$ we have, again with Proposition 1.1 of \cite{Kyp_Riv_Sen_01}:
\begin{align*}
\mathbb{P}^{y}(|\xi_{\underline{m}}| \in (1,1+\varepsilon)) &= 2 \frac{2^{-\alpha}\Gamma(1-\alpha\rho)}{\Gamma(1-\alpha)\Gamma(\alpha\hat\rho)} \int\limits_{\frac{y}{1+\varepsilon}}^{y} \psi_{\alpha\rho}(z) \, \dd z \\
&\leq \frac{2^{1-\alpha}\Gamma(1-\alpha\rho)}{\Gamma(1-\alpha)\Gamma(\alpha\hat\rho)} \Big(y-\frac{y}{1+\varepsilon}\Big) \psi_{\alpha\rho}\Big(\frac{y}{1+\varepsilon}\Big)\\
&= \frac{2^{1-\alpha}\Gamma(1-\alpha\rho)}{\Gamma(1-\alpha)\Gamma(\alpha\hat\rho)} \frac{y\varepsilon}{1+\varepsilon} \psi_{\alpha\rho}\Big(\frac{y}{1+\varepsilon}\Big)\\
&= \frac{2^{1-\alpha}\Gamma(1-\alpha\rho)}{\Gamma(1-\alpha)\Gamma(\alpha\hat\rho)} \frac{\varepsilon}{2\sin(\pi\alpha\hat\rho)} v\Big(\frac{y}{1+\varepsilon}\Big).
\end{align*}
Now let $\varepsilon$ be so small that $\frac{1+\delta}{1+\varepsilon}>1+\frac{\delta}{2}$ and define
$$C_\delta := \sup_{|u| > 1+\frac{\delta}{2}} v(u),$$
which is finite because of the properties of $v$. So we can estimate on the event $\left\lbrace t< T_{(-(1+\delta),1+\delta)}, \xi_t \geq 1+\delta \right\rbrace$:
\begin{align*}
\frac{1}{\varepsilon} \mathbb{P}^{\xi_t}(\xi_{\underline m} \in (1,1+\varepsilon)) &\leq \frac{2^{-\alpha}\Gamma(1-\alpha\rho)}{\Gamma(1-\alpha)\Gamma(\alpha\hat\rho)\sin(\pi\alpha\hat\rho)} v\big(\frac{\xi_t}{1+\varepsilon}\big)\\
&\leq \frac{2^{-\alpha}\Gamma(1-\alpha\rho)}{\Gamma(1-\alpha)\Gamma(\alpha\hat\rho)\sin(\pi\alpha\hat\rho)} C_\delta.
\end{align*}
On $\left\lbrace t< T_{(-(1+\delta),1+\delta)}, \xi_t \leq -(1+\delta) \right\rbrace$ an analogous argumentation shows
\begin{align*}
\frac{1}{\varepsilon}\mathbb{P}^{\xi_t}(\xi_{\underline m} \in (1,1+\varepsilon)) &\leq \frac{2^{-\alpha}\Gamma(1-\alpha\hat\rho)}{\Gamma(1-\alpha)\Gamma(\alpha\rho)\sin(\pi\alpha\rho)} C_\delta.
\end{align*}
So we can use dominated convergence as follows:
\begin{align*}
&\quad\lim_{\varepsilon \searrow 0} \mathbb{P}^x(\Lambda, t< T_{(-(1+\delta),1+\delta)} \, |\,\xi_{\underline m} \in (1,1+\varepsilon)) \\
&= \lim_{\varepsilon \searrow 0}\frac{\varepsilon}{{\mathbb{P}^x(\xi_{\underline m} \in (1,1+\varepsilon))}} \lim_{\varepsilon \searrow 0} \mathbb{E}^x \Big[ \mathds{1}_\Lambda \mathds{1}_{\{ t< T_{(-(1+\delta),1+\delta)}\}} \frac{\mathbb{P}^{\xi_t}(\xi_{\underline m} \in (1,1+\varepsilon))}{\varepsilon} \Big] \\
&= \lim_{\varepsilon \searrow 0}\frac{\varepsilon}{{\mathbb{P}^x(\xi_{\underline m} \in (1,1+\varepsilon))}} \mathbb{E}^x \Big[ \mathds{1}_\Lambda \mathds{1}_{\{ t< T_{(-(1+\delta),1+\delta)}\}} \lim_{\varepsilon \searrow 0} \frac{\mathbb{P}^{\xi_t}(\xi_{\underline m} \in (1,1+\varepsilon))}{\varepsilon} \Big] \\
&= \mathbb{E}^x \Big[ \mathds{1}_\Lambda \mathds{1}_{\{ t< T_{(-(1+\delta),1+\delta)}\}} \frac{v_1(\xi_t)}{v_1(x)} \Big] \\
&= \mathbb{P}_{v_1}^x(\Lambda, t< T_{(-(1+\delta),1+\delta)}).
\end{align*}
In the last step we used Proposition \ref{prop_as_<1}. This proves Theorem \ref{thm_cond_v1_<1}.\smallskip
The proof of Theorem \ref{thm_cond_v_<1} is similar. Applying the Markov property in the shift-operator-version we get
\begin{align*}
&\quad\mathbb{P}^x(\Lambda, t< T_{(-(1+\delta),1+\delta)} \, |\,|\xi_{\underline m}| \in (1,1+\varepsilon))\\
&= \frac{\mathbb{P}^x(\Lambda, t< T_{(-(1+\delta),1+\delta)},|\xi_{\underline m}| \in (1,1+\varepsilon))}{\mathbb{P}^x(|\xi_{\underline m}| \in (1,1+\varepsilon))} \\
&= \mathbb{E}^x \Big[ \mathds{1}_\Lambda \mathds{1}_{\{ t< T_{(-(1+\delta),1+\delta)}\}} \frac{\mathbb{P}^{\xi_t}(|\xi_{\underline m}| \in (1,1+\varepsilon))}{\mathbb{P}^x(|\xi_{\underline m}| \in (1,1+\varepsilon))} \Big].
\end{align*}
In the proof of Theorem \ref{thm_cond_v1_<1} we already found an integrable dominating function for $\mathbb{P}^{\xi_t}(|\xi_{\underline m}| \in (1,1+\varepsilon)) /\varepsilon$. So we can use dominated convergence as follows:
\begin{align*}
&\quad \lim_{\varepsilon \searrow 0} \mathbb{P}^x(\Lambda, t< T_{(-(1+\delta),1+\delta)} \, |\,|\xi_{\underline m}| \in (1,1+\varepsilon)) \\
&= \lim_{\varepsilon \searrow 0}\frac{\varepsilon}{{\mathbb{P}^x(|\xi_{\underline m}| \in (1,1+\varepsilon))}} \lim_{\varepsilon \searrow 0} \mathbb{E}^x \Big[ \mathds{1}_\Lambda \mathds{1}_{\{ t< T_{(-(1+\delta),1+\delta)}\}} \frac{\mathbb{P}^{\xi_t}(|\xi_{\underline m}| \in (1,1+\varepsilon))}{\varepsilon} \Big] \\
&= \lim_{\varepsilon \searrow 0}\frac{\varepsilon}{{\mathbb{P}^x(|\xi_{\underline m}| \in (1,1+\varepsilon))}} \mathbb{E}^x \Big[ \mathds{1}_\Lambda \mathds{1}_{\{ t< T_{(-(1+\delta),1+\delta)}\}} \lim_{\varepsilon \searrow 0} \frac{\mathbb{P}^{\xi_t}(|\xi_{\underline m}| \in (1,1+\varepsilon))}{\varepsilon} \Big] \\
&= \mathbb{E}^x \Big[ \mathds{1}_\Lambda \mathds{1}_{\{ t< T_{(-(1+\delta),1+\delta)}\}} \frac{v(\xi_t)}{v(x)} \Big] \\
&= \mathbb{P}_{v}^x(\Lambda, t< T_{(-(1+\delta),1+\delta)}),
\end{align*}
where we used Proposition \ref{prop_as_<1} in the last equality.
\end{proof}
\subsubsection{The case $\alpha \geq 1$}
The strategy for $\alpha \geq 1$ is the same as in the case $\alpha <1$. First we need a relation between $v_1$ and the asymptotic probability of the event we want to condition on; this event looks slightly different from the one in the case $\alpha<1$.
\begin{proposition} \label{prop_as_geq1}
Let $\xi$ be an $\alpha$-stable process with $\alpha \in [1,2)$ and both sided jumps. Then
\begin{align*}
\frac{1-\alpha\hat\rho}{2^{\alpha\rho} \pi} v_1(x) &= \lim_{\varepsilon \searrow 0} \frac{1}{\varepsilon^{1-\alpha\hat\rho}}\mathbb{P}^x(\xi_{T_{(-(1+\varepsilon),1+\varepsilon)}} \in (1,1+\varepsilon)), \quad x \in \mathbb{R} \setminus [-1,1].
\end{align*}
\end{proposition}
\begin{proof}
Using the scaling property and Theorem 1.1 of \cite{Kyp_Par_Wat_01} we get for $x>1+\varepsilon$:
\begin{align}\label{help_111}
\begin{split}
&\quad\frac{\pi}{\sin(\pi\alpha\hat\rho)} \mathbb{P}^x( \xi_{T_{(-(1+\varepsilon),1+\varepsilon)}} \in (1,1+\varepsilon))\\
&= \frac{\pi}{\sin(\pi\alpha\hat\rho)} \mathbb{P}^{\frac{x}{1+\varepsilon}}\Big( \xi_{T_{(-1,1)}} \in \big(\frac{1}{1+\varepsilon},1\big)\Big)\\
&= \big(\frac{x}{1+\varepsilon}+1\big)^{\alpha\rho} \big(\frac{x}{1+\varepsilon}-1\big)^{\alpha\hat\rho} \int\limits_{\frac{1}{1+\varepsilon}}^1 (1+y)^{-\alpha\rho} (1-y)^{-\alpha\hat\rho} \big( \frac{x}{1+\varepsilon}-y \big)^{-1} \, \dd y \\
& \quad -(\alpha-1) \int\limits_1^{\frac{x}{1+\varepsilon}} \psi_{\alpha\rho}(u)\, \dd u \int\limits_{\frac{1}{1+\varepsilon}}^1 (1+y)^{-\alpha\rho} (1-y)^{-\alpha\hat\rho} \, \dd y.
\end{split}
\end{align}
With l'H\^{o}pital's rule and the Leibniz integral rule we see:
\begin{align}\label{help_112}
\begin{split}
&\quad\lim_{\varepsilon \searrow 0} \frac{1}{\varepsilon^{1-\alpha\hat\rho}} \int\limits_{\frac{1}{1+\varepsilon}}^1 (1+y)^{-\alpha\rho} (1-y)^{-\alpha\hat\rho} \big( \frac{x}{1+\varepsilon}-y \big)^{-1} \, \dd y \\
&=\lim_{\varepsilon \searrow 0} \frac{\varepsilon^{\alpha\hat\rho}}{1-\alpha\hat\rho} \frac{1}{(1+\varepsilon)^2} \big(1+\frac{1}{1+\varepsilon}\big)^{-\alpha\rho} \big(1-\frac{1}{1+\varepsilon}\big)^{-\alpha\hat\rho} \big(\frac{x-1}{1+\varepsilon}\big)^{-1} \\
&\quad \quad + \lim_{\varepsilon \searrow 0} \frac{\varepsilon^{\alpha\hat\rho}}{1-\alpha\hat\rho} \int\limits_{\frac{1}{1+\varepsilon}}^1 (1+y)^{-\alpha\rho} (1-y)^{-\alpha\hat\rho}\frac{x}{(x-y(1+\varepsilon))^2} \, \dd y \\
&= \frac{2^{-\alpha\rho}}{1-\alpha\hat\rho} (x-1)^{-1}
\end{split}
\end{align}
and further,
\begin{align}\label{help_113}
\begin{split}
&\quad\lim_{\varepsilon \searrow 0} \frac{1}{\varepsilon^{1-\alpha\hat\rho}} \int\limits_{\frac{1}{1+\varepsilon}}^1 (1+y)^{-\alpha\rho} (1-y)^{-\alpha\hat\rho} \, \dd y\\
&= \lim_{\varepsilon \searrow 0} \frac{\varepsilon^{\alpha\hat\rho}}{1-\alpha\hat\rho} \frac{1}{(1+\varepsilon)^2} \big(1+\frac{1}{1+\varepsilon}\big)^{-\alpha\rho} \big(1-\frac{1}{1+\varepsilon}\big)^{-\alpha\hat\rho}\\
&= \frac{2^{-\alpha\rho}}{1-\alpha\hat\rho}.
\end{split}
\end{align}
Now we plug in \eqref{help_112} and \eqref{help_113} in \eqref{help_111} and get
\begin{align*}
&\quad\lim_{\varepsilon \searrow 0} \frac{\pi}{\varepsilon^{1-\alpha\hat\rho}} \mathbb{P}^x( \xi_{T_{(-(1+\varepsilon),1+\varepsilon)}} \in (1,1+\varepsilon))\\
&=\frac{2^{-\alpha\rho}}{1-\alpha\hat\rho} \sin(\pi\alpha\hat\rho)\Big[(x+1)^{\alpha\rho} (x-1)^{\alpha\hat\rho} (x-1)^{-1} - (\alpha-1) \int\limits_1^{x} \psi_{\alpha\rho}(u)\, \dd u \Big] \\
&= \frac{2^{-\alpha\rho}}{1-\alpha\hat\rho} v_1(x).
\end{align*}
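In the last step we used the explicit form of $v_1$ for $x>1$, since $(x+1)^{\alpha\rho}(x-1)^{\alpha\hat\rho}(x-1)^{-1} = (x+1)^{\alpha\rho}(x-1)^{\alpha\hat\rho-1}$ shows that the expression in brackets equals $v_1(x)/\sin(\pi\alpha\hat\rho)$.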
For $x<-1$ we note that
$$\mathbb{P}^x( \xi_{T_{(-(1+\varepsilon),1+\varepsilon)}} \in (1,1+\varepsilon)) = \hat{\mathbb{P}}^{|x|}( \xi_{T_{(-(1+\varepsilon),1+\varepsilon)}} \in (-(1+\varepsilon),-1)),$$
use again Theorem 1.1 of \cite{Kyp_Par_Wat_01} and do a similar calculation as above to deduce
\begin{align*}
&\quad \frac{\pi}{\sin(\pi\alpha\rho)} \mathbb{P}^x( \xi_{T_{(-(1+\varepsilon),1+\varepsilon)}} \in (1,1+\varepsilon))\\
&= \big(\frac{|x|}{1+\varepsilon}+1\big)^{\alpha\hat\rho} \big(\frac{|x|}{1+\varepsilon}-1\big)^{\alpha\rho} \int\limits_{-1}^{-\frac{1}{1+\varepsilon}} (1+y)^{-\alpha\hat\rho} (1-y)^{-\alpha\rho} \big( \frac{|x|}{1+\varepsilon}-y \big)^{-1} \, \dd y \\
& \quad -(\alpha-1) \int\limits_1^{\frac{|x|}{1+\varepsilon}} \psi_{\alpha\hat\rho}(u)\, \dd u \int\limits_{-1}^{-\frac{1}{1+\varepsilon}} (1+y)^{-\alpha\hat\rho} (1-y)^{-\alpha\rho} \, \dd y.
\end{align*}
A substitution on the integrals and the same limiting arguments as in the case $x>1$ show
\begin{align*}
&\quad \lim_{\varepsilon \searrow 0} \frac{\pi}{\varepsilon^{1-\alpha\hat\rho}} \mathbb{P}^x( \xi_{T_{(-(1+\varepsilon),1+\varepsilon)}} \in (1,1+\varepsilon))\\
&=\frac{2^{-\alpha\rho}}{1-\alpha\hat\rho} \sin(\pi\alpha\rho)\Big[(|x|+1)^{\alpha\hat\rho} (|x|-1)^{\alpha\rho} (|x|+1)^{-1} - (\alpha-1) \int\limits_1^{|x|} \psi_{\alpha\hat\rho}(u)\, \dd u \Big] \\
&= \frac{2^{-\alpha\rho}}{1-\alpha\hat\rho} v_1(x).
\end{align*}
\end{proof}
\begin{proof}[Proof of Theorems \ref{thm_cond_v1_geq1} and \ref{thm_cond_v_geq1}]
First we note, by an application of the Markov property similar to the one in the proof of Theorem \ref{thm_cond_v1_<1}, that:
\begin{align*}
&\quad \mathbb{P}^x(\Lambda, t< T_{(-(1+\delta),1+\delta)} ,\xi_{T_{(-(1+\varepsilon),1+\varepsilon)}} \in (1,1+\varepsilon)) \\
&= \mathbb{E}^x \Big[ \mathds{1}_\Lambda \mathds{1}_{\{ t< T_{(-(1+\delta),1+\delta)} \}} \mathbb{P}^{\xi_t}(\xi_{T_{(-(1+\varepsilon),1+\varepsilon)}} \in (1,1+\varepsilon)) \Big]
\end{align*}
and hence,
\begin{align*}
&\quad \mathbb{P}^x(\Lambda, t< T_{(-(1+\delta),1+\delta)} \,|\, \xi_{T_{(-(1+\varepsilon),1+\varepsilon)}} \in (1,1+\varepsilon)) \\
&= \mathbb{E}^x \Big[ \mathds{1}_\Lambda \mathds{1}_{\{ t< T_{(-(1+\delta),1+\delta)} \}} \frac{\mathbb{P}^{\xi_t}(\xi_{T_{(-(1+\varepsilon),1+\varepsilon)}} \in (1,1+\varepsilon))}{\mathbb{P}^{x}(\xi_{T_{(-(1+\varepsilon),1+\varepsilon)}} \in (1,1+\varepsilon))} \Big].
\end{align*}
Again we want to move the $\varepsilon$-limit inside the integral and use Proposition \ref{prop_as_geq1}. First we use \eqref{help_111}:
\begin{align*}
&\quad\frac{\pi}{\sin(\pi\alpha\hat\rho)} \mathbb{P}^y( \xi_{T_{(-(1+\varepsilon),1+\varepsilon)}} \in (1,1+\varepsilon))\\
&= \big(\frac{y}{1+\varepsilon}+1\big)^{\alpha\rho} \big(\frac{y}{1+\varepsilon}-1\big)^{\alpha\hat\rho} \int\limits_{\frac{1}{1+\varepsilon}}^1 (1+u)^{-\alpha\rho} (1-u)^{-\alpha\hat\rho} \big( \frac{y}{1+\varepsilon}-u \big)^{-1} \, \dd u \\
& \quad -(\alpha-1) \int\limits_1^{\frac{y}{1+\varepsilon}} \psi_{\alpha\rho}(w)\, \dd w \int\limits_{\frac{1}{1+\varepsilon}}^1 (1+u)^{-\alpha\rho} (1-u)^{-\alpha\hat\rho} \, \dd u \\
&\leq \Big[ \big(\frac{y}{1+\varepsilon}+1\big)^{\alpha\rho} \big(\frac{y}{1+\varepsilon}-1\big)^{\alpha\hat\rho-1} - (\alpha-1) \int\limits_1^{\frac{y}{1+\varepsilon}} \psi_{\alpha\rho}(w)\, \dd w \Big] \int\limits_{\frac{1}{1+\varepsilon}}^1 (1+u)^{-\alpha\rho} (1-u)^{-\alpha\hat\rho} \, \dd u\\
&= \frac{1}{\sin(\pi\alpha\hat\rho)}v_1\big(\frac{y}{1+\varepsilon}\big) \int\limits_{\frac{1}{1+\varepsilon}}^1 (1+u)^{-\alpha\rho} (1-u)^{-\alpha\hat\rho} \, \dd u.
\end{align*}
Further,
\begin{align*}
\varepsilon^{\alpha\hat\rho-1} \int\limits_{\frac{1}{1+\varepsilon}}^1 (1+u)^{-\alpha\rho} (1-u)^{-\alpha\hat\rho} \, \dd u &\leq \varepsilon^{\alpha\hat\rho-1} \int\limits_{\frac{1}{1+\varepsilon}}^1(1-u)^{-\alpha\hat\rho} \, \dd u \\
&= \frac{\varepsilon^{\alpha\hat\rho-1}}{1-\alpha\hat\rho} \Big(\frac{\varepsilon}{1+\varepsilon}\Big)^{1-\alpha\hat\rho} \\
&\leq \frac{1}{1-\alpha\hat\rho}.
\end{align*}
Let $\varepsilon$ be so small that $\frac{1+\delta}{1+\varepsilon} > 1+\frac{\delta}{2}$ and define
$$C_\delta = \sup_{|u| \geq 1+\frac{\delta}{2}} v_1(u).$$
Then it follows that
\begin{align*}
\frac{\pi}{\varepsilon^{1-\alpha\hat\rho}} \mathbb{P}^y( \xi_{T_{(-(1+\varepsilon),1+\varepsilon)}} \in (1,1+\varepsilon)) &\leq \frac{1}{1-\alpha\hat\rho} v_1\Big(\frac{y}{1+\varepsilon}\Big)
\leq \frac{C_\delta}{1-\alpha\hat\rho}.
\end{align*}
Similarly, we get for $y<-(1+\varepsilon)$:
\begin{align*}
\frac{\pi}{\varepsilon^{1-\alpha\hat\rho}} \mathbb{P}^y( \xi_{T_{(-(1+\varepsilon),1+\varepsilon)}} \in (1,1+\varepsilon)) &\leq \frac{1}{1-\alpha\hat\rho} v_1\Big(\frac{y}{1+\varepsilon}\Big)\\
&\leq \frac{C_\delta}{1-\alpha\hat\rho}.
\end{align*}
So we can apply dominated convergence to deduce
\begin{align*}
&\quad\lim_{\varepsilon \searrow 0} \mathbb{P}^x(\Lambda, t< T_{(-(1+\delta),1+\delta)} \, |\xi_{T_{(-(1+\varepsilon),1+\varepsilon)}} \in (1,1+\varepsilon)) \\
&= \lim_{\varepsilon \searrow 0}\frac{\varepsilon^{1-\alpha\hat\rho}}{{\mathbb{P}^x(\xi_{T_{(-(1+\varepsilon),1+\varepsilon)}} \in (1,1+\varepsilon))}} \\
&\quad \times \lim_{\varepsilon \searrow 0} \mathbb{E}^x \Big[ \mathds{1}_\Lambda \mathds{1}_{\{ t< T_{(-(1+\delta),1+\delta)}\}} \frac{\mathbb{P}^{\xi_t}(\xi_{T_{(-(1+\varepsilon),1+\varepsilon)}} \in (1,1+\varepsilon))}{\varepsilon^{1-\alpha\hat\rho}} \Big] \\
&= \lim_{\varepsilon \searrow 0}\frac{\varepsilon^{1-\alpha\hat\rho}}{{\mathbb{P}^x(\xi_{T_{(-(1+\varepsilon),1+\varepsilon)}} \in (1,1+\varepsilon))}} \\
&\quad \times \mathbb{E}^x \Big[ \mathds{1}_\Lambda \mathds{1}_{\{ t< T_{(-(1+\delta),1+\delta)}\}} \lim_{\varepsilon \searrow 0} \frac{\mathbb{P}^{\xi_t}(\xi_{T_{(-(1+\varepsilon),1+\varepsilon)}} \in (1,1+\varepsilon))}{\varepsilon^{1-\alpha\hat\rho}} \Big] \\
&= \mathbb{E}^x \Big[ \mathds{1}_\Lambda \mathds{1}_{\{ t< T_{(-(1+\delta),1+\delta)}\}} \frac{v_1(\xi_t)}{v_1(x)} \Big] \\
&= \mathbb{P}_{v_1}^x(\Lambda, t< T_{(-(1+\delta),1+\delta)}),
\end{align*}
where we used Proposition \ref{prop_as_geq1} in the second last equality. This finishes the proof of Theorem \ref{thm_cond_v1_geq1}.\smallskip
To prove Theorem \ref{thm_cond_v_geq1} we first note that one can show, analogously to the proof of Proposition \ref{prop_as_geq1}:
$$\lim_{\varepsilon \searrow 0} \varepsilon^{\alpha\rho-1}\mathbb{P}^{x}(\xi_{T_{(-(1+\varepsilon),1+\varepsilon)}} \in (-(1+\varepsilon),-1)) = \frac{1-\alpha\rho}{2^{\alpha\hat\rho} \pi}v_{-1}(x), \quad x \notin [-1,1].$$
We assume without loss of generality $\rho \leq \hat\rho$ (i.e. $\rho \leq 1/2$) and in particular it holds that
\begin{align*}
&\quad \lim_{\varepsilon \searrow 0} \varepsilon^{\alpha\hat\rho-1}\mathbb{P}^{x}(\xi_{T_{(-(1+\varepsilon),1+\varepsilon)}} \in (-(1+\varepsilon),-1))\\
&= \lim_{\varepsilon \searrow 0} \varepsilon^{\alpha(\hat\rho-\rho)} \varepsilon^{\alpha\rho-1}\mathbb{P}^{x}(\xi_{T_{(-(1+\varepsilon),1+\varepsilon)}} \in (-(1+\varepsilon),-1))\\
&= 0.
\end{align*}
It follows that
\begin{align*}
&\quad\lim_{\varepsilon \searrow 0} \frac{\mathbb{P}^{\xi_t}(|\xi_{T_{(-(1+\varepsilon),1+\varepsilon)}}| \in (1,1+\varepsilon))} {\mathbb{P}^{x}(|\xi_{T_{(-(1+\varepsilon),1+\varepsilon)}}| \in (1,1+\varepsilon))} \\
&=\lim\limits_{\varepsilon \searrow 0} \frac{ \mathbb{P}^{\xi_t}(\xi_{T_{(-(1+\varepsilon),1+\varepsilon)}} \in (1,1+\varepsilon))+ \mathbb{P}^{\xi_t}(\xi_{T_{(-(1+\varepsilon),1+\varepsilon)}} \in (-(1+\varepsilon),-1))}{\mathbb{P}^{x}(\xi_{T_{(-(1+\varepsilon),1+\varepsilon)}} \in (1,1+\varepsilon))+\mathbb{P}^{x}(\xi_{T_{(-(1+\varepsilon),1+\varepsilon)}} \in (-(1+\varepsilon),-1))} \\
&=\lim\limits_{\varepsilon \searrow 0} \frac{\varepsilon^{\alpha\hat\rho-1}\mathbb{P}^{\xi_t}(\xi_{T_{(-(1+\varepsilon),1+\varepsilon)}} \in (1,1+\varepsilon))+ \varepsilon^{\alpha\hat\rho-1} \mathbb{P}^{\xi_t}(\xi_{T_{(-(1+\varepsilon),1+\varepsilon)}} \in (-(1+\varepsilon),-1))}{\varepsilon^{\alpha\hat\rho-1}\mathbb{P}^{x}(\xi_{T_{(-(1+\varepsilon),1+\varepsilon)}} \in (1,1+\varepsilon))+\varepsilon^{\alpha\hat\rho-1}\mathbb{P}^{x}(\xi_{T_{(-(1+\varepsilon),1+\varepsilon)}} \in (-(1+\varepsilon),-1))}\\
&=\lim\limits_{\varepsilon \searrow 0} \frac{\varepsilon^{\alpha\hat\rho-1}\mathbb{P}^{\xi_t}(\xi_{T_{(-(1+\varepsilon),1+\varepsilon)}} \in (1,1+\varepsilon))}{\varepsilon^{\alpha\hat\rho-1}\mathbb{P}^{x}(\xi_{T_{(-(1+\varepsilon),1+\varepsilon)}} \in (1,1+\varepsilon))}\\
&= \frac{v_1(\xi_t)}{v_1(x)}.
\end{align*}
For $\rho>1/2$ the same argument applies with the roles of the summands interchanged: the first summands vanish instead of the second. To finish the proof of Theorem \ref{thm_cond_v_geq1}, the dominated convergence argument can be transferred from the proof of Theorem \ref{thm_cond_v1_geq1}.
\end{proof}
\subsubsection{The alternative characterisation for $\alpha>1$}
As before, we start with the asymptotics of the probability of the event we condition on; as already mentioned, it is the same event as in the case $\alpha<1$, but now under the law of the process conditioned to avoid $0$.
\begin{proposition}\label{prop_as_altern_>1}
Let $\xi$ be an $\alpha$-stable process with $\alpha \in (1,2)$ and two-sided jumps; then
\begin{align}\label{eq_as1_>1}
\frac{\alpha-1}{2} v_1(x) &= \lim\limits_{\varepsilon \searrow 0} \frac{e(x)}{\varepsilon}\mathbb{P}_\circ^{x}(\xi_{\underline{m}} \in (1,1+\varepsilon)), \quad x \notin [-1,1],
\end{align}
and
\begin{align}\label{eq_as_>1}
\frac{\alpha-1}{2} v(x) &= \lim\limits_{\varepsilon \searrow 0} \frac{e(x)}{\varepsilon}\mathbb{P}_\circ^{x}(|\xi_{\underline{m}}| \in (1,1+\varepsilon)), \quad x \notin [-1,1].
\end{align}
\end{proposition}
\begin{proof}
We use the so-called point of furthest reach before hitting $0$. Let $\overline{m}$ be the time such that $|\xi_t| \leq |\xi_{\overline{m}}|$ for all $t \leq T_0$. The Riesz-Bogdan-Żak transformation (see Bogdan and Żak \cite{Bog_Zak_01} for symmetric stable processes, Kyprianou \cite{Kyp_02} for general stable processes and Alili et al. \cite{Ali_Cha_Gra_Zak_01} for self-similar Markov processes) tells us that the process conditioned to avoid $0$ is the spatial inverse of the original (i.e. not $h$-transformed) dual process, up to a certain time change. Since the time change plays no role for the value $\xi_{\overline{m}}$, we can extract the distribution of the point of closest reach of the process conditioned to avoid $0$ from the distribution of the point of furthest reach of the original dual process, i.e.
$$\mathbb{P}_\circ^{x}(\xi_{\underline{m}} \in (1,1+\varepsilon)) = \hat{\mathbb{P}}^{\frac{1}{x}}\Big(\xi_{\overline{m}} \in \Big(\frac{1}{1+\varepsilon},1 \Big)\Big).$$
Combining this with Proposition 1.2 of Kyprianou et al. \cite{Kyp_Riv_Sen_01}, where one can find an explicit expression for the distribution of the point of furthest reach before hitting $0$, we get for $x>1$:
\begin{align*}
&\,\frac{2}{\alpha-1}\mathbb{P}_\circ^{x}(\xi_{\underline{m}} \in (1,1+\varepsilon))\\
=&\,\int\limits_{\frac{1}{x} \lor \frac{1}{1+\varepsilon}}^1 u^{-\alpha} \Big[\big(u+\frac{1}{x}\big)^{\alpha\rho}\big(u-\frac{1}{x}\big)^{\alpha\hat{\rho}-1} -(\alpha-1) x^{1-\alpha} \int\limits_1^{ux} \psi_{\alpha\rho}(w) \, \dd w \Big] \, \dd u \\
=&\, \int\limits_{1 \lor \frac{x}{1+\varepsilon}}^x \frac{1}{x} \left(\frac{x}{u}\right)^{\alpha} \Big[\big(\frac{u}{x}+\frac{1}{x}\big)^{\alpha\rho}\big(\frac{u}{x}-\frac{1}{x}\big)^{\alpha\hat{\rho}-1} -(\alpha-1)x^{1-\alpha} \int\limits_1^{u} \psi_{\alpha\rho}(w) \, \dd w \Big] \, \dd u \\
=&\, \int\limits_{1 \lor \frac{x}{1+\varepsilon}}^x u^{-\alpha} \Big[(u+1)^{\alpha\rho}(u-1)^{\alpha\hat{\rho}-1} -(\alpha-1) \int\limits_1^{u} \psi_{\alpha\rho}(w) \, \dd w \Big] \, \dd u \\
=&\, \int\limits_{1 \lor \frac{x}{1+\varepsilon}}^x u^{-\alpha} \Big[(u+1)\psi_{\alpha\rho}(u) -(\alpha-1) \int\limits_1^{u} \psi_{\alpha\rho}(w) \, \dd w \Big] \, \dd u \\
=& \frac{1}{\sin(\pi\alpha\hat\rho)} \int\limits_{1 \lor \frac{x}{1+\varepsilon}}^x u^{-\alpha} v_1(u) \, \dd u.
\end{align*}
With l'H\^opital's rule we get
\begin{align*}
\lim_{\varepsilon \searrow 0} \frac{2}{\alpha-1} \frac{1}{\varepsilon} \mathbb{P}_\circ^{x}(\xi_{\underline{m}} \in (1,1+\varepsilon)) &= \frac{1}{\sin(\pi\alpha\hat\rho)} \lim_{\varepsilon \searrow 0} \Big[\frac{x}{(1+\varepsilon)^2} \Big(\frac{x}{1+\varepsilon}\Big)^{-\alpha} v_1\Big(\frac{x}{1+\varepsilon}\Big) \Big] \\
&= \frac{1}{\sin(\pi\alpha\hat\rho)} x^{1-\alpha} v_1(x)\\
&= \frac{v_1(x)}{e(x)}.
\end{align*}
This shows \eqref{eq_as1_>1} for $x>1$. For $x<-1$ the equality \eqref{eq_as1_>1} follows similarly. \smallskip
To show the second claim we use
\begin{align*}
\frac{\alpha-1}{2} v_{-1}(x) &= \lim\limits_{\varepsilon \searrow 0} \frac{e(x)}{\varepsilon}\mathbb{P}_\circ^{x}(\xi_{\underline{m}} \in (-(1+\varepsilon),-1)), \quad x \notin [-1,1],
\end{align*}
which follows from a computation similar to the one for \eqref{eq_as1_>1}. Using $v=v_1+v_{-1}$, the second claim follows.
\end{proof}
\begin{proof}[Proof of Theorems \ref{thm_cond_v1_altern_>1} and \ref{thm_cond_v_altern_>1}]
Since the process conditioned to avoid $0$ is a strong Markov process (this follows from general theory of $h$-transforms, see e.g. Chung and Walsh \cite{Chu_Wal_01}), we can use arguments analogous to the case $\alpha<1$ to obtain, for all $x \notin [-1,1]$,
\begin{align*}
&\quad\mathbb{P}_\circ^x(\Lambda, t< T_{(-(1+\delta),1+\delta)} \, |\,\xi_{\underline m} \in (1,1+\varepsilon))\\
&= \mathbb{E}_\circ^x \left\lbrack \mathds{1}_\Lambda \mathds{1}_{\left\lbrace t< T_{(-(1+\delta),1+\delta)}\right\rbrace} \frac{\mathbb{P}_\circ^{\xi_t}(\xi_{\underline m} \in (1,1+\varepsilon))}{\mathbb{P}_\circ^x(\xi_{\underline m} \in (1,1+\varepsilon))} \right\rbrack.
\end{align*}
In the proof of Proposition \ref{prop_as_altern_>1} we have already seen that
$$\mathbb{P}_\circ^{y}(\xi_{\underline{m}} \in (1,1+\varepsilon)) = \frac{\alpha-1}{2 \sin(\pi\alpha\hat\rho)} \int\limits_{1 \lor \frac{y}{1+\varepsilon}}^y u^{-\alpha}v_{1}(u) \, \dd u$$
for $y>1+\varepsilon$. Analogously we can show
$$\mathbb{P}_\circ^{y}(\xi_{\underline{m}} \in (-(1+\varepsilon),-1)) = \frac{\alpha-1}{2 \sin(\pi\alpha\hat\rho)} \int\limits_{1 \lor \frac{y}{1+\varepsilon}}^y u^{-\alpha}v_{-1}(u) \, \dd u$$
for $y>1+\varepsilon$ and hence, we have
$$\mathbb{P}_\circ^{y}(|\xi_{\underline{m}}| \in (1,1+\varepsilon)) = \frac{\alpha-1}{2\sin(\pi\alpha\hat\rho)} \int\limits_{1 \lor \frac{y}{1+\varepsilon}}^y u^{-\alpha}v(u) \, \dd u$$
for $y>1+\varepsilon$. Now we fix $\delta>0$ and assume that $\varepsilon$ is so small that $\frac{1+\delta}{1+\varepsilon} \geq 1+\frac{\delta}{2}$. We define again $C_\delta:= \sup_{|u| \geq 1+\frac{\delta}{2}} v(u)$, which is finite. Note that, for $y>1+\delta$, we have:
\begin{align*}
\mathbb{P}_\circ^{y}(|\xi_{\underline{m}}| \in (1,1+\varepsilon)) &=\frac{\alpha-1}{2\sin(\pi\alpha\hat\rho)}\int\limits_{\frac{y}{1+\varepsilon}}^y u^{-\alpha} v(u) \, \dd u \\
&\leq \frac{\alpha-1}{2\sin(\pi\alpha\hat\rho)} \frac{y\varepsilon}{1+\varepsilon} \Big(\frac{y}{1+\varepsilon}\Big)^{-\alpha} \sup_{u \in [\frac{y}{1+\varepsilon},\infty)} v(u)\\
&\leq \frac{C_\delta(\alpha-1)}{2\sin(\pi\alpha\hat\rho)} \frac{\varepsilon}{(1+\varepsilon)^{1-\alpha}}y^{1-\alpha}\\
&\leq \frac{C_\delta(\alpha-1)}{2\sin(\pi\alpha\hat\rho)} \varepsilon (1+\delta)^{\alpha-1 }y^{1-\alpha}.
\end{align*}
So we can estimate on $\{ t< T_{[-(1+\delta),1+\delta]}, \xi_t > 1 \}$:
\begin{align*}
\frac{\mathbb{P}_\circ^{\xi_t}(\xi_{\underline m} \in (1,1+\varepsilon))}{\varepsilon} &\leq \frac{C_\delta(\alpha-1)}{2\sin(\pi\alpha\hat\rho)} (1+\delta)^{\alpha-1 }\xi_t^{1-\alpha}.
\end{align*}
On $\{ t< T_{[-(1+\delta),1+\delta]}, \xi_t <- 1 \}$ an analogous argument shows
\begin{align*}
\frac{\mathbb{P}_\circ^{\xi_t}(\xi_{\underline m} \in (1,1+\varepsilon))}{\varepsilon} &\leq \frac{C_\delta(\alpha-1)}{2\sin(\pi\alpha\rho)} (1+\delta)^{\alpha-1 }|\xi_t|^{1-\alpha}.
\end{align*}
Further it holds that
\begin{align*}
\frac{1}{\sin(\pi\alpha\hat\rho)}\mathbb{E}_\circ^x \big[ \mathds{1}_\Lambda \mathds{1}_{\{ t< T_{[-(1+\delta),1+\delta]}, \xi_t > 1 \}} \xi_t^{1-\alpha} \big]
&= \frac{1}{e(x)} \mathbb{E}^x \big[ \mathds{1}_\Lambda \mathds{1}_{\{ t< T_{[-(1+\delta),1+\delta]}, \xi_t > 1 \}} \xi_t^{\alpha-1} \xi_t^{1-\alpha} \big] \\
&\leq \frac{1}{e(x)}
\end{align*}
and, analogously,
$$\frac{1}{\sin(\pi\alpha\rho)}\mathbb{E}_\circ^x \big[ \mathds{1}_\Lambda \mathds{1}_{\{ t< T_{[-(1+\delta),1+\delta]}, \xi_t < -1 \}} |\xi_t|^{1-\alpha} \big] \leq \frac{1}{e(x)}.$$
So we can use dominated convergence and the Markov property as follows:
\begin{align*}
&\quad\lim_{\varepsilon \searrow 0} \mathbb{P}_\circ^x(\Lambda, t< T_{[-(1+\delta),1+\delta]} \, |\,\xi_{\underline m} \in (1,1+\varepsilon)) \\
&= \lim_{\varepsilon \searrow 0}\frac{\varepsilon}{{\mathbb{P}_\circ^x(\xi_{\underline m} \in (1,1+\varepsilon))}} \\
&\quad \quad \times \lim_{\varepsilon \searrow 0} \mathbb{E}_\circ^x \Big[ \mathds{1}_\Lambda \mathds{1}_{\{ t< T_{[-(1+\delta),1+\delta]}\}} \frac{\mathbb{P}_\circ^{\xi_t}(\xi_{\underline m} \in (1,1+\varepsilon))}{\varepsilon} \Big] \\
&= \lim_{\varepsilon \searrow 0}\frac{\varepsilon}{{\mathbb{P}_\circ^x(\xi_{\underline m} \in (1,1+\varepsilon))}} \\
&\quad \quad \times \mathbb{E}_\circ^x \Big[ \mathds{1}_\Lambda \mathds{1}_{\{ t< T_{[-(1+\delta),1+\delta]}\}} \lim_{\varepsilon \searrow 0} \frac{\mathbb{P}_\circ^{\xi_t}(\xi_{\underline m} \in (1,1+\varepsilon))}{\varepsilon} \Big] \\
&= \mathbb{E}_\circ^x \Big[ \mathds{1}_\Lambda \mathds{1}_{\{ t< T_{[-(1+\delta),1+\delta]}\}} \frac{e(x)v_1(\xi_t)}{e(\xi_t)v_1(x)} \Big] \\
&= \mathbb{E}^x \Big[ \mathds{1}_\Lambda \mathds{1}_{\{ t< T_{[-(1+\delta),1+\delta]}\}} \frac{v_1(\xi_t)}{v_1(x)} \Big].
\end{align*}
In the second-to-last step we used Proposition \ref{prop_as_altern_>1}. Theorem \ref{thm_cond_v_altern_>1} can be proven similarly using the same dominating function.
\end{proof}
\textbf{Acknowledgements.} The authors would like to thank Dr. Watson for useful discussions on this topic.
\bibliographystyle{abbrvnat}
Heavy quarkonium provides a unique laboratory to study Quantum Chromodynamics (QCD) for the bound
states of a heavy quark-antiquark system. The fact that heavy quarkonium admits a
non-relativistic treatment has been known for a long time \cite{QR}. Although non-relativistic
QCD (NRQCD), an effective field theory, is a powerful theoretical tool to separate the high energy
modes from the low energy contributions, the calculations of the low energy hadronic matrix elements
rely on model-dependent non-perturbative methods in most cases. From the point of view of
non-perturbative QCD, no single method is uniquely superior to the others. Many methods
have been employed in heavy quarkonium physics, such as lattice QCD, the quark-potential model, etc. (for a
recent review see \cite{HQ}). The light-front quark model, in which a hadronic matrix element is
represented as an overlap of wave functions, offers many insights into the internal structure of
the bound states. In this study, we will explore heavy quarkonium with a quark model on the light
front.
Light-front QCD has been developed as a promising analytic
method for solving the non-perturbative problems of hadron physics
\cite{BPP}. The aim of light-front QCD is to describe the
hadronic bound states in terms of their fundamental quark and gluon
degrees of freedom. It may be the only framework in which the low
energy quark model and the high energy parton model can be
reconciled. For hard processes with large momentum transfer,
light-front QCD reduces to perturbative QCD (pQCD), which
factorizes a physical quantity into a convolution of the hard
scattering kernel and the distribution amplitudes (or functions). In
general, the basic ingredient in light-front QCD is the relativistic
hadron wave function, which generalizes the distribution amplitudes
(or functions) by including the transverse momentum distributions.
It contains all the information about a hadron in terms of its constituents. The
hadronic quantities are represented by overlaps of wave
functions and can in principle be derived from them.
The light-front quark model is the only relativistic quark model in
which a consistent and fully relativistic treatment of quark spins
and the center-of-mass motion can be carried out \cite{LFQM}. This
model has many advantages. For example, the light-front wave
function is manifestly Lorentz invariant as it is expressed in terms
of the internal momentum fraction variables, which are independent
of the total hadron momentum. Moreover, the hadron spin can be
correctly constructed using the so-called Melosh rotation. This
model has been successfully applied to calculate many
phenomenologically important meson decay constants and hadronic form
factors \cite{Jaus1, CCH1, Jaus2, CCH2, Hwang}.
On the light front, the non-relativistic nature of a heavy
quarkonium is reflected in the fact that the light-front momentum
fractions of the quark and antiquark are close to $1/2$ and that the
relative transverse and $z$-direction momenta are much smaller
than the heavy quark mass. The Lorentz invariant light-front wave
function and the light-front formulations provide a systematic way
to include the relativistic corrections. There is no conceptual
problem in extending the light-front approach to heavy
quarkonium. We will apply the covariant light-front approach
\cite{Jaus2, CCH2} to the ground-state $s$-wave mesons, which include
the $^1S_0$ pseudoscalar mesons ($P$) $\eta_c, \eta_b$ and the $^3S_1$
vector mesons ($V$) $J/\psi, \Upsilon(1S)$, as a first step
in this direction. The main purposes of this study are threefold:
(1) Is the light-front approach applicable to heavy
quarkonium? In concept, the light-front quark model is the
relativistic generalization of the non-relativistic quark model. The
phenomenological success of the earlier non-relativistic
quark-potential model should be reproduced in the light-front
approach. In particular, we will examine the validity of the
light-front approach for three types of quantities: decay constants,
the two-photon annihilation $P\to \gamma\gamma$ and the magnetic dipole
transition $V\to P\gamma$. In most of the literature, these processes have been
explored separately \cite{KMRR, AB, BJV, DER}. Studying them
simultaneously better constrains the phenomenological parameters
and checks the consistency of the theory predictions. (2) The
$\eta_b$ meson has still not been observed in experiment
\cite{ALEPH}. We will present numerical predictions for the
branching ratios of the $\eta_b\to\gamma\gamma$ and
$\Upsilon\to\eta_b\gamma$ processes. (3) What is the relation of the
light-front approach to other approaches? In the
non-relativistic approximation, the light-front approach is
closely related to the non-relativistic quark-potential approach.
For the process $P\to\gamma\gamma$, which is light-front
dominated, the light-front approach reduces to the model-independent
pQCD.
The paper is organized as follows. In Sec. II, we give a detailed
presentation of the covariant light-front approach for heavy
quarkonium. It contains a brief review of the light-front framework
and the light-front analysis for the decay constants of $P$ and $V$
mesons and the processes $P\to\gamma\gamma$, $V\to P\gamma$. In Sec.
III, the relations of the light-front approach with the
non-relativistic approach and pQCD are discussed. In Sec. IV, the
numerical results and discussions are presented. Finally, the
conclusions are given in Sec. V.
\section{Formalism of covariant light-front approach}
\subsection{General formalism}
A heavy quarkonium is the hadronic bound state of a heavy quark and
antiquark. In this system, the valence quarks have equal masses
$m_1=m_2=m$, with $m$ the mass of the heavy quark $c$ or $b$. Thus the
terms proportional to $(m_1-m_2)$ vanish, which leads to some
simplifications. In this section, we give the formulae
specialized to the quarkonium system.
The momentum of a particle is given in terms of light-front
components by $k=(k^-, k^+, k_\perp)$, where $k^{\pm}=k^0\pm k^3$ and
$k_\perp=(k^1, k^2)$, and the light-front vector is written as $\tilde
k=(k^+, k_{\perp})$. The longitudinal component $k^+$ is restricted to
be positive, i.e., $k^+> 0$ for a massive particle. In this way,
the physical vacuum of light-front QCD is trivial except for the zero
longitudinal momentum modes (zero modes). We will study a meson with
total momentum $P$ and two constituents, a quark and an antiquark, whose
momenta are $p_1$ and $p_2$, respectively. In order to describe the
internal motion of the constituents, it is crucial to introduce the
intrinsic variables $(x_i, p_{\perp})$ through
\begin{eqnarray}
&& p_1^+=x_1 P^+, \qquad \qquad p_{1\perp}=x_1 P_{\bot}+ p_{\perp}; \nonumber \\
&& p_2^+=x_2 P^+, \qquad \qquad p_{2\perp}=x_2 P_{\bot}- p_{\perp},
\end{eqnarray}
where $x_i$ are the light-front momentum fractions and they satisfy
$0<x_1, x_2<1$ and $x_1+x_2=1$. The invariant mass $M_0$ of the
constituents and the relative momentum in $z$ direction $p_z$ can be
written as
\begin{eqnarray} \label{eq:Mpz}
M_0^2=\frac{p_{\perp}^2+m^2}{x_1 x_2}, \qquad ~~~
p_z=(x_2-\frac{1}{2})M_0.
\end{eqnarray}
The invariant mass $M_0$ of the $q\bar q$ pair is in general different from
the meson mass $M$, which satisfies $M^2=P^2$. This is due to the
fact that the meson, quark and antiquark cannot be on-shell
simultaneously. The momenta $p_{\perp}$ and $p_z$ constitute a momentum
vector $\vec p=(p_{\perp}, p_z)$ which represents the relative momenta
in the transverse and $z$ directions, respectively. The energy of
the quark and antiquark $e_1=e_2\equiv e$ can be obtained from their
relative momenta,
\begin{eqnarray}
e=\sqrt{m^2+p_{\perp}^2+p_z^2}.
\end{eqnarray}
It is straightforward to find that
\begin{eqnarray}
x_1=\frac{e-p_z}{2e},~
x_2=\frac{e+p_z}{2e},~ e=\frac{M_0}{2}.
\end{eqnarray}
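For completeness, one can check that these expressions are consistent with Eq. (\ref{eq:Mpz}): with $x_{1,2}=(e\mp p_z)/(2e)$ and $e^2=m^2+p_{\perp}^2+p_z^2$,
\begin{eqnarray}
x_1 x_2=\frac{(e-p_z)(e+p_z)}{4e^2}=\frac{m^2+p_{\perp}^2}{4e^2},
\end{eqnarray}
so that $M_0^2=(p_{\perp}^2+m^2)/(x_1x_2)=4e^2$, i.e., $e=M_0/2$, and then $p_z=(2x_2-1)e=(x_2-\frac{1}{2})M_0$, as stated in Eq. (\ref{eq:Mpz}).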
To calculate the decay constants or decay amplitude, the Feynman
rules for the vertices of quark-antiquark coupling to the meson
bound state are required. In the following formulations, we will
follow the notation of \cite{CCH2}. The vertices $\Gamma_M$ for the
incoming meson $M$ are given as
\begin{eqnarray} \label{eq:HM}
H_P \gamma_5 \qquad \qquad \qquad \qquad
&&{\rm for~ }P; \nonumber \\
iH_V\Big [ \gamma_{\mu}-\frac{1}{W_V}(p_1-p_2)_{\mu} \Big ]
\qquad &&{\rm for~ }V.
\end{eqnarray}
After performing a one-loop contour integral, to be discussed below, which amounts to putting one quark or
antiquark on its mass shell, the function $H_M$ and the parameter $W_V$ are reduced to $h_M$ and
$w_V$, respectively, and they read
\begin{eqnarray} \label{eq:htowf}
h_P=h_V=(M^2-M_0^2)\sqrt{\frac{x_1 x_2}{N_c}}\frac{1}{\sqrt 2
M_0}\phi(x_2,p_\bot),
\end{eqnarray}
and
\begin{eqnarray}
w_V=M_0+2m.
\end{eqnarray}
The form of the function $h_M$ and the Feynman rule for $\Gamma_M$
are derived from the light-front wave function which describes a
meson bound state in terms of a quark $q_1$ and an antiquark $\bar
q_2$. The light-front wave function contains two parts: one is the
momentum distribution amplitude $\phi(x_2,p_\bot)$ which is the
central ingredient in light-front QCD, the other is a spin wave
function which constructs a state of definite spin ($S, S_z$) out of
light front helicity eigenstates ($\lambda_1, \lambda_2$). The spin
wave function is constructed by using the Melosh transformation and
its spin structure is contained in Eq. (\ref{eq:HM}).
The momentum distribution amplitude $\phi(x_2,p_\bot)$ is the
generalization of the distribution amplitude $\phi(x)$ of the pQCD
method, and can be chosen to be normalized, i.e., it satisfies
\begin{eqnarray} \label{eq:Norm1}
\int \frac{dx d^2 p_{\bot}}{2 (2\pi)^3}
|\phi(x,p_{\bot})|^2=1.
\end{eqnarray}
In principle, $\phi(x_2,p_\bot)$ is obtained by solving the light-front QCD bound state equation
$H_{LF}|\Psi\rangle=M|\Psi\rangle$, which is the analogue of the familiar Schr\"{o}dinger equation in ordinary quantum
mechanics, with $H_{LF}$ the light-front Hamiltonian. To see the explicit form of the light-front
bound state equation, let us consider a quarkonium wave function. The light-front bound state
equation can be expressed as:
\begin{equation}
\begin{array}{l}
\biggl ( M^2-\displaystyle\sum_i \displaystyle\frac{\,k_{i\bot}^2+m^2_i\,}{x_i}\biggr )
\left (
\begin{array}{c}
\Psi_{q\bar{q}} \\
\Psi_{q\bar{q}g} \\
\vdots
\end{array}
\right ) \\[2.5\eqnskip]
\hspace{0.8cm} = \left (
\begin{array}{ccc}
\langle q\bar{q}|H_{int}|q\bar{q} \rangle & \langle
q\bar{q}|H_{int}|q\bar{q}g\rangle & \cdots \\
\langle q\bar{q}g|H_{int}|q\bar{q} \rangle & \cdots & \\
\vdots & &
\end{array}
\right ) \left (
\begin{array}{c}
\Psi_{q\bar{q}} \\
\Psi_{q\bar{q}g} \\
\vdots
\end{array}
\right ) \, .
\end{array}
\label{exact}
\end{equation}
Of course, exactly solving the above equation in the whole Fock space is still impossible.
Currently, two approaches have been developed. One is given by Brodsky and Pauli
\cite{PB,TBP,KP,KPP}, the so-called discretized light-front approach; the other, by Perry, Harindranath
and Wilson \cite{PHW,PH,GHPSW}, is based on the old idea of the Tamm-Dancoff approach \cite{T,D}, which
truncates the Fock space to include only those Fock states with a small number of particles.
Furthermore, if one can (approximately) eliminate all the higher Fock sectors in favor of an
effective two-body interaction kernel, the light-front bound state equation reduces to the
light-front Bethe-Salpeter equation:
\begin{equation}
\biggl ( M^2-\displaystyle\frac{\,k_{\bot}^2+m^2\,}{\,x(1-x)\,}\biggr )
\Psi_{q\bar{q}}(x,k_{\bot})= \displaystyle\int \displaystyle\frac{\,dyd^2k'_{\bot}\,}{2(2\pi)^3}
V_{eff}(x,k_{\bot},y,k'_{\bot})\Psi_{q\bar{q}}(y,k'_{\bot}) \, . \label{Bethe}
\end{equation}
One may solve the Bethe-Salpeter equation to find the relativistic bound states. However, the
Bethe-Salpeter equation only provides the amplitude of a single Fock sector of the bound state, so that it
cannot be normalized. In other words, the Bethe-Salpeter amplitudes do not have the precise meaning
of wave functions for particles. In addition, the advantage of Eq. (\ref{exact}) with the
Tamm-Dancoff approximation is that it provides a reliable way to study, step by step, the contribution of Fock
states containing more particles by increasing the size of the truncated Fock space,
while the Bethe-Salpeter equation (\ref{Bethe}) lacks such an ability. Some studies of
nonperturbative features of light-front dynamics have focused on 1+1 dimensional field theories. Typical
examples are the discretized light-front quantization approach for bound states in 1+1 dimensional field
theory developed by Pauli and Brodsky \cite{PB,EPB}, and the light-front Tamm-Dancoff approach for bound
state Fock space truncation discussed by Perry {\em et al}. \cite{PHW,PH}. However, at the present
time, how to solve for the bound states of 3+1 dimensional QCD is still unknown. We therefore content
ourselves with phenomenological momentum distribution amplitudes which have been constructed
to describe hadrons. One widely used form is the Gaussian type, which we will
employ in our application of the covariant light-front approach.
\subsection{Decay constants}
In general, the decay constants of mesons $f_{P,V}$ are defined by the
matrix elements for $P$ and
$V$ mesons
\begin{eqnarray}
\langle 0|A_\mu |P(P) \rangle &=& if_P P_\mu, \nonumber \\
\langle 0|V_\mu |V(P) \rangle &=& M_V f_V \epsilon_\mu.
\end{eqnarray}
where $P_\mu$ is the momentum of the meson and $\epsilon_\mu$ is the
polarization vector of the $V$ meson. The Feynman diagram which
contributes to $f_{P,V}$ is depicted in Fig. \ref{fig:dc}.
\begin{figure}[htbp]
\includegraphics*[width=2.1in]{dc} \caption{Feynman diagram for meson
decay constants, where $P$ is the momentum of the meson, $p_1$ is the
quark momentum, $p_2$ is the antiquark momentum and $\Gamma$ denotes
the corresponding $V$-$A$ current.}
\label{fig:dc}
\end{figure}
The meson decay constant plays an important role in determining the
parameters of the distribution function $\phi(x_2,p_\bot)$, in
particular, the quark mass and a parameter $\beta$ characterizing
the hadronic ``size" for a Gaussian wave function. The decay
constants have been calculated in \cite{Jaus2, CCH2} and agree with
our results. Thus we simply quote the formulae for
$f_{P,V}$ here.
For a pseudoscalar quarkonium, the decay constant is represented by
\begin{eqnarray} \label{eq:dcP}
f_P&=&\frac{\sqrt{2N_c}}{8\pi^3}\int dx_2 d^2 p_\perp
\frac{m}{\sqrt{x_1x_2} M_0}\phi_P(x_2,p_{\bot})\nonumber\\
&=&\frac{\sqrt{2N_c}}{8\pi^3}\int dx_2 d^2 p_\perp
\frac{m}{\sqrt{m^2+p_\perp^2}}\phi_P(x_2,p_{\bot}).
\end{eqnarray}
where $N_c=3$ is the number of colors and $m$ denotes the mass of the heavy quark. In Eq. (\ref{eq:dcP}),
we have used the relation
\begin{eqnarray}
M_0\sqrt{x_1x_2}=\sqrt{m^2+p_\perp^2}
\end{eqnarray}
for a quarkonium.
For the vector meson, the decay constant in the covariant approach
is represented by
\begin{eqnarray} \label{eq:dcV}
f_V=\frac{\sqrt{2N_c}}{8\pi^3 M}\int dx_2 d^2 p_\perp
\frac{1}{\sqrt{m^2+p_\perp^2}}\left [ x_1 M_0^2-
p_\perp^2+\frac{2m}{w_V}p_\perp^2 \right ]\phi_V(x_2,p_{\bot}).
\end{eqnarray}
Eq. (\ref{eq:dcV}) coincides with the result in \cite{Jaus2} when $m_1=m_2$. Note that the $1/w_V$
part of Eq. (\ref{eq:dcV}) is different from that in the conventional approach, for example,
\cite{CCH1}. The reason is that the conventional approach is not covariant and contains a spurious
dependence on the orientation of the light front. The relevant calculations are not free of spurious
contributions for transitions involving a vector meson. Zero modes, which are related to the $p^-$
integration for $p^+=0$, are required to eliminate the spurious dependence and contribute to the
$1/w_V$ part of Eq. (\ref{eq:dcV}). More detailed discussions about this point can be found in
\cite{Jaus2, CCH2}. The decay constant $f_V$ is related to the electromagnetic decay of vector meson
$V\to e^+e^-$ by \cite{NS}
\begin{eqnarray} \label{eq:dcVexp}
\Gamma(V\to e^+e^-)=\frac{4\pi}{3}\frac{\alpha^2}{M_V}c_V f_V^2.
\end{eqnarray}
where $c_V$ is a factor related to the electric charge of the quarks
that make up the vector meson.
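As an illustration, Eq. (\ref{eq:dcVexp}) can be inverted to extract $f_V$ from a measured leptonic width; the following minimal sketch assumes $c_{J/\psi}=e_c^2=4/9$ and a PDG-type input width $\Gamma(J/\psi\to e^+e^-)\approx 5.55$ keV, which are inputs taken for illustration rather than results of this work.
\begin{verbatim}
# Invert Eq. (eq:dcVexp), Gamma(V->e+e-) = (4 pi/3) alpha^2 c_V f_V^2 / M_V,
# to get the decay constant f_V (GeV) from the leptonic width.
# Inputs (width in keV, mass in GeV, c_V = e_q^2) are illustrative.
import math

ALPHA = 1.0 / 137.036

def f_V(gamma_ee_keV, M_V, c_V):
    gamma = gamma_ee_keV * 1e-6          # keV -> GeV
    return math.sqrt(3.0 * M_V * gamma / (4.0 * math.pi * ALPHA**2 * c_V))

print(f_V(5.55, 3.097, 4.0 / 9.0))       # ~0.416 GeV for the J/psi
\end{verbatim}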
\subsection{$P\to \gamma\gamma$}
Invariance under charge conjugation requires that a state coupling
to two photons have $C=+1$. Thus only the pseudoscalar meson can decay
into two photons, while this is forbidden for the vector meson. In the process
of $P\to \gamma\gamma$, the final two photons are both on-shell. For
the purpose of illustration, it is useful to consider a more general
process $P\to\gamma\gamma^*$ with one photon off-shell. We introduce
a transition form factor $F_{P\gamma}(q^2)$ arising from the
$P\gamma\gamma^*$ vertex. The $P\to \gamma\gamma$ process is related
to the form factor at $q^2=0$, i.e., $F_{P\gamma}(0)$. The form
factor $F_{P\gamma}(q^2)$ is defined by
\begin{eqnarray}
{\cal A}_\mu=-ie^2F_{{P\gamma}}(q^2)\epsilon_{\mu\nu\rho\sigma}
P^{\nu}q_1^{\rho}\epsilon^{\sigma}.
\end{eqnarray}
where ${\cal A}_\mu$ is the decay amplitude of the process
$P\to\gamma\gamma^*$ and $q_1(\epsilon)$ the momentum (polarization) of
the on-shell photon.
\begin{figure}[htbp]
\includegraphics*[width=5in]{Prr}
\caption{Feynman diagram for $P\to\gamma\gamma^*$ process where $P$
in the parenthesis denotes the momentum of meson. The diagram (b) is
related to (a) by the exchange of two photons.}
\label{fig:Prr}
\end{figure}
The transition amplitude for the process of $P\to\gamma\gamma^*$ can
be derived from the standard Feynman rules and the vertices for the
meson-quark-antiquark coupling given in Eq. (\ref{eq:HM}). In the
covariant light-front approach, the meson is on-shell while the
constituent quarks are off-shell and the momentum satisfies
$P=p_1+p_2$. To the lowest order approximation, $P\to\gamma\gamma^*$
is described by a one-loop diagram, depicted in Fig. \ref{fig:Prr}. The
amplitude is given as a momentum integral
\begin{eqnarray} \label{eq:APrr}
{\cal A}_{\mu}=i e_q^2 e^2 N_c \int \frac{d^4 p_1}{(2\pi)^4}
\Big \{
\frac{H_P}{N_1 N_2 N_{ia}} {\rm Tr}[{\gamma_5(-\not \!p_2+m)
\not \!\epsilon(\not \!p_{ia}+m)\gamma_{\mu}}(\not \!p_1+m)] \nonumber \\
+\frac{H_P}{N_1 N_2 N_{ib}} {\rm Tr}[{\gamma_5(-\not \!p_2+m)
\gamma_{\mu}} (\not \!p_{ib}+m)\not \!\epsilon(\not \!p_1+m)]
\Big \},
\end{eqnarray}
where
\begin{eqnarray}
& p_{ia}=p_1-q, ~~~~~~~~~~~~~~~~~ & p_{ib}=q-p_2, \nonumber \\
& N_1=p_1^2-m^2+i\epsilon, ~~~~~~~~~~ & N_2=p_2^2-m^2+i\epsilon, \nonumber \\
& N_{ia}=p_{ia}^2-m^2+i\epsilon, ~~~~~~~~~~ & N_{ib}=p_{ib}^2-m^2+i\epsilon,
\end{eqnarray}
and $e_q$ is the electric charge of the quark: $e_q=2/3$ for the $c$ quark
and $e_q=-1/3$ for the $b$ quark. The first and second terms in Eq.
(\ref{eq:APrr}) come from diagrams Fig. \ref{fig:Prr} (a) and (b),
respectively.
For the calculation of the form factor $F_{P\gamma}(q^2)$, it is
convenient to choose the purely transverse frame $q^+=0$, i.e.,
$q^2=-q_{\bot}^2\leq0$. The advantage of this choice is that there
is no the so-called Z-diagram contributions. The price is that only
the form factor at space-like regions can be calculated directly.
The values at the time-like momentum transfer $q^2>0$ regions are
obtained by analytic continuation. In this study, the continuation
is not necessary because we only need the form factors at $q^2=0$
for the $P\to \gamma\gamma$ and $V\to P\gamma$ processes.
First, we discuss the calculation of Fig. \ref{fig:Prr}(a). The
factors $N_1$, $N_2$ and $N_{ia}$ produce three singularities in the
$p_1^-$ complex plane: one lies in the upper plane; the other two in
the lower plane. By closing the contour in the upper $p_1^-$ complex
plane, the momentum integral can be easily calculated since there is
only one singularity in the plane. This corresponds to putting the
antiquark on the mass-shell. Given this restriction, the momentum
$p_2\to \hat p_2$ with $\hat p_2^2-m^2=0$, and $\hat p_1=P-\hat
p_2$. The on-shell restriction and the requirement of covariance
lead to the following replacements:
\begin{eqnarray} \label{eq:N1}
N_1 &\to& \hat N_1=x_1(M^2-M_0^2), \nonumber \\
N_{ia} &\to& \hat N_{ia}=x_2 q^2-x_1 M_0^2+2p_{\bot}\cdot q_{\bot},
\nonumber \\
N_2 &\to& \hat N_2=\hat N_1+(1-2 x_1)M^2=x_2 M^2-x_1 M_0^2, \nonumber \\
\int \frac{d^4 p_1}{(2\pi)^4}\frac{H_P}{N_1N_2N_{ia}} &\to&
-i\pi\int \frac{dx_2 d^2p_{\bot}}{(2\pi)^4}\frac{h_P}
{x_2 \hat N_1 \hat N_{ia}}.
\end{eqnarray}
For Fig. \ref{fig:Prr}(b), the contour is closed in the lower
$p_1^-$ complex plane. It corresponds to putting the quark on the
mass-shell and the momentum $p_1\to \hat p_1$ with $\hat
p_1^2-m^2=0$. In this case, we need to do the following replacements
\begin{eqnarray} \label{eq:N2}
N_2 &\to& \hat N_2=x_2(M^2-M_0^2), \nonumber \\
N_{ib} &\to& \hat N_{ib}=x_1 q^2-x_2 M_0^2
-2p_{\bot}\cdot q_{\bot}, \nonumber \\
N_1 &\to& \hat N_1=x_1 M^2-x_2 M_0^2, \nonumber \\
\int \frac{d^4 p_1}{(2\pi)^4}\frac{H_P}{N_1N_2N_{ib}} &\to&
-i\pi\int \frac{dx_2 d^2p_{\bot}}{(2\pi)^4}\frac{h_P}
{x_1 \hat N_2 \hat N_{ib}}.
\end{eqnarray}
From Eqs. (\ref{eq:N1}) and (\ref{eq:N2}), we see that $\hat
N_{ib}$ is obtained from $\hat N_{ia}$ by the exchange of $x_1
\leftrightarrow x_2$ and the change of the sign of $p_{\bot}$.
After the above treatments, the transition amplitude of $P\to
\gamma\gamma^*$ is obtained as
\begin{eqnarray}
{\cal A}_{\mu}=&&-ie^2\epsilon_{\mu\nu\rho\sigma}
P^{\nu}q_1^{\rho}\epsilon^{\sigma} \int \frac{dx_2 d^2
p_{\bot}}{4\pi^3} \frac{N_c e_q^2 m~ h_P}{x_1 x_2 (M^2-M_0^2)}\nonumber\\
&&~~~~\times\left [ \frac{1}{-x_2 q^2+x_1 M_0^2-2p_{\bot}\cdot q_{\bot}}
+\frac{1}{-x_1 q^2+x_2 M_0^2+2p_{\bot}\cdot q_{\bot}}
\right ],
\end{eqnarray}
Thus, the final formula for the form factor $F_{P\gamma}(q^2)$ is
\begin{eqnarray} \label{eq:Pr2}
F_{P\gamma}(q^2)=&&\frac{e_q^2\sqrt{ 2N_c}}{8\pi^3}\int dx_2 d^2p_{\bot}
\phi_P(x_2,p_{\bot})\frac{m}{\sqrt{m^2+p_{\bot}^2}} \nonumber\\
&&~~~~\times\left [ \frac{1}{x_1 M_0^2-x_2 q^2-2p_{\bot}\cdot q_{\bot}}
+\frac{1}{x_2 M_0^2-x_1 q^2+2p_{\bot}\cdot q_{\bot}}
\right ],
\end{eqnarray}
and $F_{P\gamma}(0)$ is
\begin{eqnarray} \label{eq:Pr20}
F_{P\gamma}(0)=\frac{e_q^2\sqrt{ 2N_c}}{8\pi^3}\int dx_2 d^2p_{\bot}
\phi_P(x_2,p_{\bot})\frac{m}{\sqrt{m^2+p_{\bot}^2}}\frac{1}{x_1 x_2 M_0^2}.
\end{eqnarray}
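Using the relation $x_1x_2M_0^2=m^2+p_{\bot}^2$ quoted above, Eq. (\ref{eq:Pr20}) can be written more compactly as
\begin{eqnarray}
F_{P\gamma}(0)=\frac{e_q^2\sqrt{ 2N_c}}{8\pi^3}\int dx_2 d^2p_{\bot}\,
\phi_P(x_2,p_{\bot})\frac{m}{(m^2+p_{\bot}^2)^{3/2}}.
\end{eqnarray}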
Comparing Eq. (\ref{eq:Pr20}) with Eq. (\ref{eq:dcP}), we see that they
share some similarities, apart from the propagators $N_{ia(b)}$ (and a trivial
factor $e_q^2$). This point will become clearer when we discuss the
relation between the light-front QCD and pQCD methods.
The decay rate for $P\to \gamma\gamma$ is obtained from the
transition form factor by
\begin{eqnarray} \label{Ptogg}
\Gamma(P\to \gamma\gamma)=\frac{M_P^3}{64
\pi}(4\pi\alpha)^2|F_{P\gamma}(0)|^2.
\end{eqnarray}
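For orientation, Eq. (\ref{Ptogg}) converts a form-factor value directly into a width, as in the short sketch below; the input value of $F_{P\gamma}(0)$ there is a hypothetical number chosen only to set the scale.
\begin{verbatim}
# Width from Eq. (Ptogg): Gamma = M^3/(64 pi) (4 pi alpha)^2 |F(0)|^2.
import math

ALPHA = 1.0 / 137.036

def width_P_gg_keV(M_P, F0):
    # M_P in GeV, F0 in GeV^-1; returns Gamma in keV
    return M_P**3 / (64.0 * math.pi) * (4.0 * math.pi * ALPHA)**2 * F0**2 * 1e6

print(width_P_gg_keV(2.980, 0.07))   # hypothetical F(0): a few keV
\end{verbatim}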
\subsection{$V\to P\gamma$}
Similar to the analysis of $P\to \gamma\gamma$, we also consider a
more general process of $V\to P\gamma^*$ where the final photon is
off-shell. The $V\to P\gamma^*$ transition is parameterized in terms
of a vector current form factor $V(q^2)$ by
\begin{eqnarray}
\Gamma_{\mu}=ie\epsilon_{\mu\nu\alpha\beta}\epsilon^{\nu}q^{\alpha}P^{\beta}V(q^2).
\end{eqnarray}
where $\Gamma_{\mu}$ is the amplitude of the $V\to P\gamma^*$ process.
$P$ ($\epsilon$) is the momentum (polarization vector) of the initial
vector meson, $P'$ denotes the momentum of the final pseudoscalar
meson, and the momentum transfer $q=P-P'$.
\begin{figure}[htbp]
\includegraphics*[width=5in]{VPr}
\caption{Feynman diagram for $V\to P\gamma^*$ process where $P$ in
the parenthesis denotes the momentum of initial meson and $P'$
denotes the momentum of final meson. }
\label{fig:VPr}
\end{figure}
To the lowest order approximation, the $V\to P\gamma^*$ transition
is depicted in Fig. \ref{fig:VPr}. The amplitude $\Gamma_{\mu}$ is
given by a one-loop momentum integral
\begin{eqnarray} \label{eq:AVPr}
\Gamma_{\mu}=-iee_qN_c \int\frac{d^4 p_1}{(2\pi)^4}
\Bigg\{ \frac{H_VH_P'}{N_1N_2N_1'}S^a_{\mu\nu}+
\frac{H_VH_P'}{N_1N_2N_2'}S^b_{\mu\nu}
\Bigg\} \epsilon^{\nu},
\end{eqnarray}
where
\begin{eqnarray}
S^a_{\mu\nu}&=&{\rm Tr}
\Big[ \Big(\gamma_{\nu}-\frac{1}{W_V}(p_1-p_2)_{\nu}\Big)
(-\not\! p_2+m)\gamma_5(\not \! p_1^{~\prime}+m)
\gamma_{\mu}(\not \! p_1+m)
\Big ], \nonumber\\
S^b_{\mu\nu}&=&{\rm Tr}
\Big[ \Big(\gamma_{\nu}-\frac{1}{W_V}(p_1-p_2)_{\nu}\Big)
(-\not \! p_2+m)\gamma_{\mu}(-\not \! p_2^{~\prime}+m)
\gamma_5(\not \! p_1+m)
\Big ],
\end{eqnarray}
and
\begin{eqnarray}
N_1'=p_1'^2-m^2+i\epsilon; \qquad \qquad N_2'=p_2'^2-m^2+i\epsilon.
\end{eqnarray}
The first and second terms in Eq. (\ref{eq:AVPr}) arise from
diagrams (a) and (b) of Fig. \ref{fig:VPr}, respectively. We have
used the momentum relations: $P=p_1+p_2$, $P'=p^{\prime}_1+p_2'$,
$q=P-P'$; $p_2=p_2'$ for diagram Fig. \ref{fig:VPr}(a); $p_1=p_1'$
for diagram Fig. \ref{fig:VPr}(b). It is easy to find that
$S^a_{\mu\nu}=S^b_{\mu\nu}$.
The momentum integrals of Eq. (\ref{eq:AVPr}) are performed analogously
to the case of $P\to \gamma\gamma^*$. The contour integrals are
closed in the upper $p_1^-$ half-plane for the first term in Eq.
(\ref{eq:AVPr}), which corresponds to putting the antiquark on the
mass-shell, and in the lower half-plane for the second term, which
corresponds to putting the quark on the mass-shell. For the first term,
this leads to the replacements
\begin{eqnarray}
N_1^{(\prime)} &\to& \hat N_1^{(\prime)}=x_1 \big(
M^{(\prime)2}-M_0^{(\prime)2} \big), \nonumber \\
\int \frac{d^4 p_1}{(2\pi)^4}\frac{H_VH_P'}{N_1N_2N_1'} &\to&
-i\pi\int \frac{dx_2 d^2p_{\bot}}{(2\pi)^4}\frac{h_Vh_P'}
{x_2 \hat N_1 \hat N_1'}
\end{eqnarray}
In order to preserve the covariance of the decay amplitude, we also
need the replacements
\begin{eqnarray}
p_1^{\alpha}&\to& x_1 P^{\alpha}-q^{\alpha}
\frac{p_{\bot}\cdot q_{\bot}}{q^2}, \qquad \qquad
p_1^{\alpha}p_1^{\beta} \to -g^{\alpha\beta}\left( p_{\bot}^2+
\frac{(p_{\bot}\cdot q_{\bot})^2}{q^2}\right).
\end{eqnarray}
Similar treatments can be applied to the second term. After using the
above replacements and Eq. (\ref{eq:htowf}), we obtain the
formula for the form factor $V(q^2)$ as
\begin{eqnarray} \label{eq:Vq2}
V(q^2)=&&\frac{e_q}{8\pi^3}\int dx_2 d^2p_{\bot}
\frac{\phi_V(x_2,p_\bot) \phi'_P(x_2,p'_\bot)}{x_1x_2M_0M_0^{\prime}}
\left \{ m-\frac{2}{w_V}\left( p_{\bot}^2+
\frac{(p_{\bot}\cdot q_{\bot})^2}{q^2} \right ) \right \}.
\end{eqnarray}
The rate for $V\to P\gamma$ is
\begin{eqnarray} \label{eq:VPr2}
\Gamma(V\to P\gamma)=\frac{1}{3}\frac{(M_V^2-M_P^2)^3}{32
\pi M_V^3}(4\pi\alpha)|V(0)|^2.
\end{eqnarray}
\section{Non-relativistic approximation and perturbative QCD}
It is well known that the heavy quarkonium system can be
treated non-relativistically \cite{QR}. A relativistically invariant
theory, light-front QCD in our case, should reproduce the previous
results in the non-relativistic approximation. Here, we will
explore the non-relativistic approximation of light-front QCD.
This is similar to the studies of the heavy quark limit for heavy mesons
\cite{CCH1} within the light-front approach. In addition,
light-front QCD is related to perturbative QCD at large momentum
transfers, such as in the $P\to \gamma\gamma$ process. Both limits
exhibit different aspects of light-front QCD.
First, we discuss the non-relativistic approximation of the
light-front approach. In the rest frame of the heavy quarkonium, the
momenta of the quark and antiquark are dominated by their rest mass
$m\gg \Lambda_{\rm QCD}$ ($\Lambda_{\rm QCD}$ is the hadronic scale). The momentum fractions
$x_1, x_2$ are peaked around $\frac{1}{2}$, and $x_2-\frac{1}{2}$ is
of order $\Lambda_{\rm QCD}/m$. In NRQCD, the velocity of the heavy quark is chosen
as the expansion parameter. Neglecting the terms suppressed by
$1/m$, the invariant mass $M_0$ and $p_z$ can be approximated as
\begin{eqnarray} \label{eq:MA}
M_0\cong 2m\cong M, \qquad \qquad p_z=(x_2-\frac{1}{2})M_0\sim \Lambda_{\rm QCD}.
\end{eqnarray}
Compared with $m$, we have neglected the transverse momentum
$p_{\bot}$ because it is of the order of $\Lambda_{\rm QCD}$. Thus, the
magnitude of the relative momentum $\vec p$ will be much smaller
than $m$, i.e., $|\vec p|\sim \Lambda_{\rm QCD}$, which constitutes the basis of
the non-relativistic treatment.
Under the non-relativistic approximations, the dependence of the
hadron wave function on $x_2$ is replaced by its dependence on $p_z$
since $M_0\cong M$ is a constant. In this way, the hadron wave
function will depend on the relative momentum $\vec p$ only, in
other words, it can be represented by $\psi(\vec p)$. The relation
between the non-relativistic function $\psi(\vec p)$ and the
relativistic one $\phi(x_2, p_{\bot})$ can be established as
follows. From Eq. (\ref{eq:Mpz}), we obtain
\begin{eqnarray} \label{eq:app1}
dp_z\cong M~ dx_2, \qquad \qquad
d^3 p=M~ dx_2 d^2p_{\bot},
\end{eqnarray}
As usual, the function $\psi(\vec p)$ is normalized as
\begin{eqnarray} \label{eq:NRnormal}
\int \frac{d^3 p}{(2\pi)^3}|\psi(\vec p)|^2=1.
\end{eqnarray}
Comparing Eqs. (\ref{eq:app1}) and (\ref{eq:NRnormal}) with Eq.
(\ref{eq:Norm1}), it is straightforward to derive a relation
\begin{eqnarray}
\phi(x_2, p_{\bot})\doteq \sqrt{2M}\psi(\vec p).
\end{eqnarray}
Note that the above relation is valid within the non-relativistic
approximation and is not correct in the general case.
The hadron wave function in the coordinate space $\Psi(\vec r)$ is
obtained by using the Fourier transformation
\begin{eqnarray}
\Psi(\vec r)=\int \frac{d^3 p}{(2\pi)^3}~\psi(\vec p)~
e^{i\vec p\cdot \vec r}.
\end{eqnarray}
At the origin $\vec r=0$, $\Psi(0)=\int \frac{d^3
p}{(2\pi)^3}~\psi(\vec p)$ is an important parameter which gives the
magnitude of quark-antiquark coupling to the quarkonium. In the
non-relativistic approximations, one can safely neglect $\vec p$
compared to $m$. For example,
\begin{eqnarray} \label{delta}
\sqrt{m^2+p_{\bot}^2}\to m,~~~ x_2-\frac{1}{2} \to 0.
\end{eqnarray}
After these approximations, we can rewrite the decay constants Eqs.
(\ref{eq:dcP}) and (\ref{eq:dcV}) as
\begin{eqnarray} \label{ffPV}
f_P \doteq 2\sqrt{N_c}\frac{\Psi_P(0)}{\sqrt{M_P}}, \qquad \qquad
f_V \doteq 2\sqrt{N_c}\frac{\Psi_V(0)}{\sqrt{M_V}}.
\end{eqnarray}
Thus,
\begin{eqnarray}
\frac{f_P^2}{f_V^2}=\frac{M_V}{M_P}\frac{|\Psi_P(0)|^2}{|\Psi_V(0)|^2}.
\end{eqnarray}
This is just the so-called Van Royen-Weisskopf formula \cite{RW}.
Since the differences between the vector and pseudoscalar mesons arise
at higher orders in $1/m$, these differences vanish in the
limit $m\to \infty$; thus $M_V=M_P=2 m$ and $\Psi_V(\vec p)=\Psi_P(\vec
p)$. The ratio of the decay constants is equal to 1 in this limit.
For the form factor $V(0)$ of the $V\to P\gamma$ process, Eq.
(\ref{eq:Vq2}) can be reduced to
\begin{eqnarray}
V(0)\doteq e_q\int \frac{d^3\vec p}{(2\pi)^3}\frac{2\sqrt{M_P M_V}}{M_V}
\frac{\Psi_V(\vec p)\Psi_P(\vec p)}{m},
\end{eqnarray}
Similarly, in the non-relativistic limit $m\to \infty$, the form
factor $V(0)$ can be further written in a simple form as
\begin{eqnarray} \label{eq:v0}
V(0)=2e_q/m.
\end{eqnarray}
Thus $V(0)$ is a constant, independent of $\Psi(0)$, because of the
normalization condition of $\Psi(\vec p)$. The physical picture is that
the heavy quark and antiquark in the initial and final quarkonium
are in the same momentum configuration at the $q^2=0$ point. This is
analogous to the meson system with a single heavy quark, where the
Isgur-Wise function is normalized to 1 at the zero-recoil point in
the infinite heavy quark mass limit. From Eqs. (\ref{eq:VPr2}) and
(\ref{eq:v0}), the rate for the $V\to P\gamma$ process reduces
to
\begin{eqnarray} \label{eq:lVPr}
\Gamma(V\to P\gamma)=\frac{16}{3}\alpha e_q^2\frac{k_\gamma^3}{M_V^2}.
\end{eqnarray}
where $k_\gamma=(M_V^2-M_P^2)/(2 M_V)$ is the energy of the photon.
This is the leading-order result of Eq. (37) in \cite{BJV}.
Next, we discuss why pQCD is applicable to the $P\to \gamma\gamma$
process. In the rest frame of the heavy quarkonium, the total energy
is $2m\gg \Lambda_{\rm QCD}$. Each final photon carries a large energy of order $m$, and the
two photons move in opposite light-front directions. When a high-energy
photon hits a nearly static constituent of the quarkonium, it
induces a large virtuality of order $m^2$. In particular, the
virtuality of the internal quark is about $2 m^2$, as seen from Eqs.
(\ref{eq:N1}) and (\ref{eq:N2}). The transverse momentum in the
propagator of the virtual quark can therefore be neglected. Up to leading
order in $\Lambda_{\rm QCD}/m$, the transition form factor $F_{P\gamma}(0)$ is
represented by
\begin{eqnarray}
F_{P\gamma}(0)&=&e_q^2\sqrt{2N_c}\int\frac{dx_2d^2p_{\bot}}{(2\pi)^3}
\phi(x_2,p_{\bot})~T_H(x_2) \nonumber\\
&\propto& \int dx_2~ \Phi(x_2) T_H(x_2).
\end{eqnarray}
where $\Phi(x_2)$ is the hadron distribution amplitude obtained from
the wave function by integrating over the transverse momentum, and
$T_H(x_2)$ is the hard scattering kernel from the subprocess of
$q\bar q\to \gamma\gamma$.
The hard scattering kernel depends on the momentum fraction $x_2$ when
loop corrections are taken into account. At tree level, however, the
hard scattering kernel is
\begin{eqnarray}
T_H=\frac{1}{m^2},
\end{eqnarray}
It is independent not only of the transverse momentum $p_{\bot}$ but
also of the longitudinal fraction $x_2$. We thus have the further result
\begin{eqnarray} \label{Ftof}
F_{P\gamma}(0)=e_q^2\frac{f_P}{m^2}.
\end{eqnarray}
This equation means that the form factor $F_{P\gamma}(0)$ is
proportional to the decay constant $f_P$ at leading order in $\Lambda_{\rm QCD}/m$
and at leading order in the strong coupling constant $\alpha_s$. After combining
Eqs. (\ref{eq:dcVexp}), (\ref{Ptogg}), (\ref{ffPV}), and
(\ref{Ftof}), we finally obtain the decay rates for the processes
$V\to e^+e^-$ and $P\to \gamma\gamma$ as
\begin{eqnarray}
\Gamma(V\to e^+e^-) &=& {16\over{3}}N_c\pi\alpha^2 c_V \frac{|\Psi_V(0)|^2}{M_V^2},
\nonumber \\
\Gamma(P\to \gamma\gamma) &=& 16N_c\pi\alpha^2e_q^4
\frac{|\Psi_P(0)|^2}{M_P^2}.
\end{eqnarray}
These results are the same as those in Table III of the
non-relativistic quark-potential model \cite{KMRR}.
\section{Numerical results and discussions}
In order to obtain the numerical results, the crucial thing is to
determine the momentum distribution amplitude $\phi(x_2,p_{\bot})$.
One wave function that has often been used in the literature for
mesons is the Gaussian type
\begin{eqnarray}
\phi(x_2,p_{\bot})=N\sqrt{\frac{dp_z}{dx_2}}~{\rm exp}\left(
-\frac{p_{\bot}^2+p_z^2}{2\beta^2} \right),
\end{eqnarray}
with $N=4(\pi / {\beta^2})^{3/4}$ and
\begin{eqnarray}
\frac{dp_z}{dx_2}=\frac{e^2}{x_1x_2 M_0}.
\end{eqnarray}
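To make the integrals concrete, the following minimal numerical sketch evaluates the normalization Eq. (\ref{eq:Norm1}) and the decay constant Eq. (\ref{eq:dcP}) for this Gaussian wave function; the parameter values in the example call anticipate the $\eta_c$ parameters extracted below and are illustrative inputs, not independent results.
\begin{verbatim}
# Sketch: normalization (eq:Norm1) and f_P (eq:dcP) for the Gaussian
# wave function; p denotes |p_perp| and d^2p_perp = 2 pi p dp.
import numpy as np
from scipy.integrate import dblquad

def phi(x2, p, m, beta):
    x1 = 1.0 - x2
    M0 = np.sqrt((p**2 + m**2) / (x1 * x2))       # Eq. (eq:Mpz)
    pz = (x2 - 0.5) * M0
    dpz_dx2 = M0 / (4.0 * x1 * x2)                # e^2/(x1 x2 M0), e = M0/2
    N = 4.0 * (np.pi / beta**2) ** 0.75
    return N * np.sqrt(dpz_dx2) * np.exp(-(p**2 + pz**2) / (2.0 * beta**2))

def norm(m, beta):
    f = lambda p, x2: 2.0 * np.pi * p * phi(x2, p, m, beta) ** 2 \
        / (2.0 * (2.0 * np.pi) ** 3)
    return dblquad(f, 1e-6, 1.0 - 1e-6, 0.0, np.inf)[0]

def f_P(m, beta, Nc=3):
    f = lambda p, x2: 2.0 * np.pi * p * m / np.sqrt(m**2 + p**2) \
        * phi(x2, p, m, beta)
    return np.sqrt(2.0 * Nc) / (8.0 * np.pi**3) * dblquad(
        f, 1e-6, 1.0 - 1e-6, 0.0, np.inf)[0]

m_c, beta = 1.2, 0.652                # GeV; eta_c values used in the text
print("norm:", norm(m_c, beta))       # should be ~1
print("f_P :", f_P(m_c, beta))        # compare with f_{eta_c} quoted below
\end{verbatim}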
The required input parameters include the quark masses, $m_c$ for the $c$ quark and $m_b$ for the $b$ quark, and a hadronic
scale parameter $\beta$ for $\eta_{c(b)}$ and $J/\psi(\Upsilon)$. The quark mass entering our
analysis is the constituent mass. For light quarks (u and d), the constituent mass, which is several
hundred MeV, is much larger than the current mass of only several MeV obtained from chiral
perturbation theory. For the heavy quarks, however, the difference between the two is small. From the PDG
\cite{PDG06}, the current masses are $1~{\rm GeV}\leq m_c\leq 1.4~{\rm GeV}$ and $4~{\rm GeV}\leq
m_b\leq 4.5~{\rm GeV}$ in the ${\rm \overline{MS}}$ renormalization scheme. For our purpose, we will
choose the heavy quark constituent masses as
\begin{eqnarray}
m_c=1.2 ~{\rm GeV} , \qquad \qquad m_b=4.3 ~{\rm GeV} .
\end{eqnarray}
Our choices are smaller than the parameters given in \cite{CCH2},
but they are consistent with them within an uncertainty of order $\Lambda_{\rm QCD}$. For the
meson masses, $M_{\eta_c}=2.980~{\rm GeV}$, $M_{J/\psi}=3.097~{\rm GeV}$ and
$M_{\Upsilon}=9.460~{\rm GeV} $ \cite{PDG06}. The mass of $\eta_b$ is
still unknown and it is parameterized as $\Delta
m=M_{\Upsilon}-M_{\eta_b}$. From the references in \cite{ALEPH}, the
range of $\Delta m$ is $\Delta m=30-150$ MeV.
After fixing the quark and meson masses, the remaining task is to
determine the parameters $\beta$. For the vector meson, $\beta_V$ is
extracted from the decay constant $f_V$ which is obtained directly
from the process $V\to e^+e^-$ by Eq. (\ref{eq:dcVexp}). For the
pseudoscalar meson $\eta_c$, $\beta_{\eta_c}$ is extracted from the
decay constant $f_{\eta_c}$ which is obtained from the process $B
\to \eta_c K$.
For the $c\bar c$ charmonium system, there are some experimental data which provide a testing ground
for the applicability of the Gaussian-type wave function to heavy quarkonium. From $J/\psi\to
e^+e^-$, we obtain $f_{J/\psi}=416\pm 6$ MeV, and extract $\beta_{J/\psi}=0.639\pm 0.006$ GeV. From
$B \to \eta_c K$, one obtains $f_{\eta_c}=335\pm 75$ MeV \cite{fetac}, and we extract
$\beta_{\eta_c}=0.652^{+0.165}_{-0.143}$ GeV. It is apparent that the dominant errors in the
following calculations will derive from the uncertainty of $f_{\eta_c}$. Using the above
parameters, we obtain the numerical results $Br(\eta_c \to
\gamma\gamma)=(1.78\sim 3.05)\times 10^{-4}$ and $Br(J/\psi \to
\eta_c \gamma)=(2.38\sim 2.84)\times 10^{-2}$. The experimental data are $Br(\eta_c \to
\gamma\gamma)=(2.8\pm 0.9)\times 10^{-4}$ and $Br(J/\psi \to \eta_c \gamma)=(1.3\pm 0.4)\times
10^{-2}$. Obviously the former fits experiment very well but the latter does not. This inconsistency
persists even if we adjust the quark mass $m_c$ within the range $1\sim 1.4~{\rm GeV}$.
One may consider a power-law wave function similar to the one employed in Ref. \cite{BS} to fit the
data; however, the Gaussian-type wave function has been widely used in phenomenological analyses
related to mesons. Thus we modify the Gaussian-type wave function by simply multiplying it by a factor $(x_1
x_2)^n$
\begin{eqnarray}
\tilde{\phi}(x_2,p_\perp)=\tilde{N}(x_1 x_2)^n \sqrt{\frac{dp_z}{dx_2}}
~{\rm exp}\left(-\frac{p_{\bot}^2+p_z^2}{2\tilde{\beta}^2} \right).
\end{eqnarray}
The peak of the distribution around $x_{2}={1\over{2}}$ will be
sharpened or broadened for $n>0$ or $n<0$, respectively. In the
non-relativistic limit, Eq. (\ref{delta}) shows that the distribution is
close to a delta function $\delta (x_2-{1\over{2}})$. Therefore the
case $n>0$ seems suitable for heavy quarkonium. In fact, if
$n=5$ and $m_c=1.2$ GeV, we can extract
$\tilde{\beta}_{J/\psi}=0.786\pm 0.008$ GeV and
$\tilde{\beta}_{\eta_c}=0.807^{+0.273}_{-0.211}$ GeV. The numerical
results $Br(\eta_c \to \gamma\gamma)=(1.56\sim 2.06)\times 10^{-4}$
and $Br(J/\psi \to \eta_c \gamma)=(1.62\sim 2.41)\times 10^{-2}$ are
both consistent with the experimental data. Thus we deduce
that, for heavy quarkonium, the momentum fraction $x_2$ is
more sharply centered on ${1\over{2}}$ than it is in the Gaussian-type wave
function. We show the $x$-dependent behaviors of these two types of
wave functions in Fig. \ref{fig:x-dep} and the numerical results in
Table \ref{tab:data-c}.
\begin{figure}[htbp]
\includegraphics*[width=4.5in]{x-dep}
\caption{The $x$-dependent behaviors of $\phi$ (dashed line) and
$\tilde{\phi}$~(solid line, n=5) at $p^2_\perp=0.1$ GeV$^2$. }
\label{fig:x-dep}
\end{figure}
\begin{table}[h!]
\caption{\label{tab:data-c} The comparisons between the experimental
data and theory predictions for charmonium decays.}
\begin{ruledtabular}
\begin{tabular}{ccc}
& $Br(\eta_c \to \gamma\gamma)$
& $Br(J/\psi\to \eta_c\gamma)$
\\ \hline
experiment data
& $(2.8\pm 0.9)\times 10^{-4}$
& $(1.3\pm 0.4)\%$
\\
this work $(\phi)$
& $(1.78\sim3.05)\times 10^{-4}$
& $(2.38\sim2.84)\%$
\\
this work $(\tilde{\phi})$
& $(1.56\sim 2.06)\times 10^{-4}$
& $(1.62\sim2.41)\%$
\end{tabular}
\end{ruledtabular}
\end{table}
For the $b\bar b$ bottomonium system, the experimental data are relatively less. From
$\Upsilon(1S)\to e^+e^-$, we obtain $f_{\Upsilon}=708\pm 8$ MeV, then extract
$\beta_{\Upsilon}=1.323\pm 0.010$ GeV and $\tilde{\beta}_{\Upsilon}=1.463\pm 0.012$ GeV. However,
the $\eta_b$ meson has not been observed experimentally, so the decay constant
$f_{\eta_b}$ cannot be determined from data. As discussed above, the relation $f_{\eta_b}=f_{\Upsilon}$
holds in the non-relativistic limit. Since the corrections to this relation are suppressed by
$\Lambda_{\rm QCD}/m_b$, it is reasonable to use it to determine the parameter
$\beta_{\eta_b}$ ($\tilde{\beta}_{\eta_b}$). We thereby obtain $\beta_{\eta_b}=1.433\pm 0.014$ GeV and
$\tilde{\beta}_{\eta_b}=1.607\pm 0.018$ GeV. To obtain the decay widths $\Gamma(\eta_b \to
\gamma\gamma)$ and $\Gamma(\Upsilon\to \eta_b\gamma)$, we need the value of $\Delta m$.
However, the sensitivities of these two decay widths to $\Delta m$ are quite different. On the one
hand, $\Gamma(\eta_b \to \gamma\gamma)$ is insensitive to $\Delta m$ because $M_{\eta_b} \gg \Delta
m$ (see Eq. (\ref{Ptogg})). On the other hand, $\Gamma(\Upsilon\to \eta_b\gamma)$ is very sensitive
to $\Delta m$ because it is proportional to $(\Delta m)^3$ (see Eq. (\ref{eq:VPr2})). Thus here we
list the values of $\Gamma(\eta_b \to \gamma\gamma)$ for $\Delta m=0.09\pm 0.06$ GeV and
$\Gamma(\Upsilon\to \eta_b\gamma)$ for $\Delta m=0.09$ GeV in Table \ref{tab:data}. The dependences
of $\Upsilon\to \eta_b\gamma$ on $\Delta m$ are also shown in Fig. \ref{fig:VP-dm}.
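To give a feeling for this sensitivity, the following sketch rescales the width by the cubic factor in Eq. (\ref{eq:VPr2}) alone, taking the $\phi$-based central value from Table \ref{tab:data} as reference; the residual $\Delta m$-dependence of the form factor is neglected here, so this is only a rough estimate around the reference point.
\begin{verbatim}
def width_VP(dm, width_ref=33.2, dm_ref=0.09):
    # Gamma(Upsilon -> eta_b gamma) in eV,
    # rescaled by (dm / dm_ref)**3
    return width_ref * (dm / dm_ref) ** 3

for dm in (0.03, 0.06, 0.09, 0.12, 0.15):  # dm in GeV
    print(dm, round(width_VP(dm), 1))
    # -> 1.2, 9.8, 33.2, 78.7, 153.7 eV
\end{verbatim}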
\begin{table}[h!]
\caption{\label{tab:data} Comparison among several theoretical
predictions for bottomonium decays.}
\begin{ruledtabular}
\begin{tabular}{ccc}
& $\Gamma(\eta_b \to \gamma\gamma) $ (eV)
& $\Gamma(\Upsilon\to \eta_b\gamma)$ (eV)
\\ \hline
this work $(\phi)$
& $453 \pm 17$
& $33.2 \pm 0.1$
\\
this work $(\tilde{\phi})$
& $422 \pm 15$
& $31.5 \pm 0.1$
\\
used in \cite{ALEPH}
& $557\pm 85$
& -
\\
NRQCD \cite{NRQCD1}, ${\cal O}(\alpha_{\mathrm s})$
& $460$
& -
\\
potential model \cite{potential}
& $466\pm 101$
& -
\\
\end{tabular}
\end{ruledtabular}
\end{table}
\begin{figure}[htbp]
\includegraphics*[width=4.5in]{VP-dm}
\caption{ The dependences of $\Gamma(\Upsilon\to \eta_b\gamma)$ on $\Delta
m=M_{\Upsilon}-M_{\eta_b}$.}
\label{fig:VP-dm}
\end{figure}
For the numerical results, some comments are in order:
(1) The decay constant for $\eta_c$ is $f_{\eta_c}=335\pm 75$ MeV
\cite{fetac}, and we obtain
\begin{eqnarray}
\left(\frac{f_{\eta_c}}{f_{J/\psi}}\right)^2\approx 0.65\pm 0.31.
\end{eqnarray}
The difference between the pseudoscalar and vector mesons in the
light-front approach comes from power-suppressed terms: the
transverse momentum $p_\perp$, the factor $x_2-\frac{1}{2}$, and the wave function
$\phi(x,p_\perp)$. The deviation of the results from 1 shows that
$\Lambda_{\rm QCD}/m_c\sim 30\%$ corrections cannot be neglected.
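The quoted ratio follows from simple Gaussian error propagation, as in the sketch below; the slightly larger interval above presumably reflects the asymmetric input uncertainties.
\begin{verbatim}
from math import sqrt

f_eta, df_eta = 0.335, 0.075   # f_{eta_c} in GeV
f_psi, df_psi = 0.416, 0.006   # f_{J/psi} in GeV
r  = (f_eta / f_psi) ** 2
dr = r * sqrt((2 * df_eta / f_eta) ** 2
              + (2 * df_psi / f_psi) ** 2)
print(f"{r:.2f} +- {dr:.2f}")  # -> 0.65 +- 0.29
\end{verbatim}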
(2) For the M1 transition $J/\psi\to \eta_c\gamma$, the leading order prediction from Eq.
(\ref{eq:lVPr}) for the branching ratio is $4.2\%$ which is about a factor of 3 larger than the
experimental data. This means that the next-to-leading order $\Lambda_{\rm QCD}/m_c$ corrections are so
substantial that they must be included in the calculations.
(3) For the decay width $\Gamma(\eta_b \to \gamma\gamma)$, it is
insensitive to the variations of $\Delta m$ but proportional to
$f_{\eta_b}^2$. Thus an adjustment of $f_{\eta_b}$ by $10\%$ will
correspond to a variation of the theory prediction by about $20\%$.
So far we have assumed $(f_{\eta_b}/f_{\Upsilon})^2=1$, whereas
this ratio is $0.99\pm 0.04$ in \cite{HK} and $1.16\pm 0.06$ in \cite{AM}. Taking
these uncertainties into account, our prediction is consistent
with the previous results listed in \cite{ALEPH,NRQCD1,potential}.
\section{Conclusions}
In this article we have studied the decay constants, two-photon annihilation $P\to \gamma\gamma$ and
magnetic dipole transition $V\to P\gamma$ processes for the ground-state heavy quarkonium within the
covariant light-front approach. The phenomenological parameters and wave functions are determined
from the experiment. The predictions agree with the measured data within the theoretical and
experimental errors. The quark mass we use is very close to the current mass, which differs from
the choices in non-relativistic quark models. The difference between the $J/\psi$ and $\eta_c$ decay
constants and the study in $J/\psi\to \eta_c\gamma$ both show that the power corrections from wave
functions and transverse momentum effects are important. In order to fit the
experimental data better, we adjust the wave function so that the longitudinal momentum fraction is
centered more strongly around $1/2$. We also give numerical predictions for $\eta_b\to \gamma\gamma$ and $\Upsilon\to
\eta_b\gamma$. The branching ratio for $\Upsilon\to \eta_b\gamma$ is too small to be observed.
$\eta_b\to \gamma\gamma$ may be a good process to observe the $\eta_b$ and determine its mass. QCD
corrections are neglected in this study; including them would slightly change the wave-function inputs
but would not alter our conclusions.
The light-front approach shows different aspects of QCD. Under the
non-relativistic approximations, the light-front approach reproduces
the results in the non-relativistic quark-potential model. For $P\to
\gamma\gamma$, where the two final photons move along opposite
light-front directions, the process is perturbatively dominated and the
light-front approach reduces to model-independent pQCD. The
light-front method unifies the perturbative and non-perturbative QCD
into the same framework.
We have considered only s-wave heavy quarkonium in the light-front approach; applications to
other quantities and higher resonances are in progress. One interesting direction is to explore the
light-front approach within NRQCD (or pNRQCD). This would provide an alternative non-perturbative method
to calculate the hadronic matrix elements defined in NRQCD.
\vspace{0.5cm} {\bf Acknowledgments}\\
We thank Hai-Yang Cheng and Chun-Khiang Chua for many valuable
discussions. We also wish to thank the National Center for
Theoretical Sciences (South) for its hospitality during our summer
visits where this work started. This work was supported in part by
the National Science Council of R.O.C. under Grant No.
NSC94-2112-M-017-004.
\section{Introduction}
Many supervised learning problems today are solved with deep neural networks exploiting large-scale labeled data. The computational and memory demands associated with the large amount of parameters of deep models can be alleviated by using \emph{sparse} models.
Applying sparseness can be seen as a form of regularization, as it leads to a reduced amount of model parameters\footnote{The sparseness focused on in this work, occurs on the level of trainable parameters, i.e., we do not consider data sparsity.}, for given layer widths or representation sizes.
Current successful approaches gradually induce sparseness during training, starting from densely initialized networks, as detailed in \secref{sec:relatedwork}. However, we propose that models can also be built with \emph{predefined sparseness}, i.e., such models are already sparse by design and do not require sparseness inducing training schemes.
The main benefit of such an approach is memory efficiency, even at the start of training.
Especially in the area of natural language processing, in line with the hypothesis by \citet{Yang_2017} that natural language is ``high-rank'', it may be useful to train larger sparse representations, even when facing memory restrictions.
For example, in order to train word representations for a large vocabulary using limited computational resources, predefined sparseness would allow training larger embeddings more effectively compared to strategies inducing sparseness from dense models.
The contributions of this paper are
\begin{enumerate*}[label=(\roman*)]
\item a predefined sparseness model for recurrent neural networks,
\item as well as for word embeddings, and
\item proof-of-concept experiments on part-of-speech tagging and language modeling, including an analysis of the memorization capacity of dense vs.\ sparse networks.
\end{enumerate*}
An overview of related work is given in the next \secref{sec:relatedwork}. We subsequently present predefined sparseness in recurrent layers (\secref{sec:sparse_rnns}), as well as embedding layers (\secref{sec:sparse_emb}), each illustrated by experimental results. This is followed by an empirical investigation of the memorization capacity of language models with predefined sparseness (\secref{sec:L2R}). \secref{sec:conclusion} summarizes the results, and points out potential areas of follow-up research.
The code for running the presented experiments is publicly available.\footnote{https://github.com/tdmeeste/SparseSeqModels}
\section{Related Work}\label{sec:relatedwork}
A substantial body of work has explored the benefits of using sparse neural networks.
In deep convolutional networks, common approaches include sparseness regularization (e.g., using decomposition \cite{liu_sparse_2015} or variational dropout \cite{molchanov_variational_2017}), pruning of connections \cite{han_deep_2015, han_learning_2015, guo_dynamic_2016} and low rank approximations \cite{jaderberg_speeding_2014, tai_convolutional_2015}. Regularization and pruning often lead to mostly random connectivity, and therefore to irregular memory accesses, with little practical effect in terms of hardware speedup. Low rank approximations are structured and thus do achieve speedups, with the works of \citet{wen_learning_2016} and \citet{lebedev_fast_2016} as notable examples.
Whereas above-cited papers specifically explored convolutional networks, our work focuses on recurrent neural networks (RNNs). Similar ideas have been applied there, e.g., see \citet{lu_learning_2016} for a systematic study of various new compact architectures for RNNs, including low-rank models, parameter sharing mechanisms and structured matrices. Also pruning approaches have been shown to be effective for RNNs, e.g., by \citet{narang_exploring_2017}.
Notably, in the area of audio synthesis, \citet{kalchbrenner_2018} showed that
large sparse networks perform better than small dense networks. Their sparse models were obtained by pruning, and importantly,
a significant speedup was achieved through an efficient implementation.
For the domain of natural language processing (NLP), recent work by \citet{wang_deep_2016} provides an overview of sparse learning approaches, and in particular noted that ``application of sparse coding in language processing is far from extensive, when compared to speech processing''.
Our current work attempts to further fill that gap. In contrast to aforementioned approaches (that either rely on inducing sparseness starting from a denser model, or rather indirectly try to impose sparseness by enforcing constraints), we explore ways to predefine sparseness.
In the future, we aim to design models where predefined sparseness will allow using very large representation sizes at a limited computational cost. This could be interesting for training models on very large datasets \cite{Chelba2013, Shazeer2017}, or for more complex applications such as joint or multi-task prediction scenarios \cite{Miwa2016, bekoulis2018, hashimoto2017}.
\section{Predefined Sparseness in RNNs}\label{sec:sparse_rnns}
Our first objective is designing a recurrent network cell with fewer trainable parameters than a standard cell, with given input dimension $i$ and hidden state size $h$.
In \secref{subsec:sparse_rnn}, we describe one way to do this, while still allowing the use of fast RNN libraries in practice. This is illustrated for the task of language modeling in \secref{subsec:LM}.
\subsection{Sparse RNN Composed of Dense RNNs}\label{subsec:sparse_rnn}
The weight matrices in RNN cells can be divided into input-to-hidden matrices $\mathbf{W}_{hi}\in \mathbb{R}^{h\times i}$ and
hidden-to-hidden matrices $\mathbf{W}_{hh}\in \mathbb{R}^{h\times h}$ (assuming here the output dimension corresponds to the hidden state size $h$), adopting the terminology used in \cite{Goodfellow_2016}.
A \emph{sparse} RNN cell can be obtained by
introducing sparseness in
$\mathbf{W}_{hh}$ and $\mathbf{W}_{hi}$.
Note that our experiments make use of the Long Short-Term Memory (LSTM) cell \cite{hochreiter_97},
but our discussion should hold for any type of recurrent network cell. For example, an LSTM
contains 4 matrices $\mathbf{W}_{hh}$ and $\mathbf{W}_{hi}$, whereas the Gated Recurrent Unit (GRU) \cite{Chung2014_GRU} only has 3.
We first propose
to organize the hidden dimensions in several disjoint groups, i.e, $N$ segments with lengths $s_n\; (n=1,\ldots,N)$, with $\sum_n s_n=h$.
We therefore reduce $\mathbf{W}_{hh}$ to a block-diagonal matrix. For example, a uniform segmentation would reduce the number of trainable parameters in $\mathbf{W}_{hh}$ to a fraction $1/N$.
\Figref{fig:rnn_sparse} illustrates an example $\mathbf{W}_{hh}$ for $N=3$.
One would expect that this simplification has a significant regularizing effect, given that the number of possible interactions between hidden dimensions is strongly reduced.
However, our experiments (see \secref{subsec:LM}) show that a larger sparse model may still be more expressive than its dense counterpart with the same number of parameters.
Yet, \citet{merity_2017} showed that applying weight dropping (i.e., DropConnect, \citet{Wan:2013}) in an LSTM's $\mathbf{W}_{hh}$ matrices
has a stronger positive effect on language models than other ways to regularize them. Sparsifying $\mathbf{W}_{hh}$ upfront can hence be seen as a similar way to avoid the model's `over-expressiveness' in its recurrent weights.
\begin{figure}[!t]
\begin{overpic}[width=\columnwidth]{config_sparse_rnn.pdf}
\put(21,52){$h$}
\put(80,52){$i$}
\put(48,14){$h$}
\put(86,13){$\gamma i$}
\put(4,10){$\mathbf{W}_{hh}$}
\put(61,10){$\mathbf{W}_{hi}$}
\put(19,1){(a)}
\put(76,1){(b)}
\put(19,13){$s_3$}
\put(16,26){$s_2$}
\put(19,41){$s_1$}
\end{overpic}
\caption{Predefined sparseness in hidden-to-hidden ($\mathbf{W}_{hh}$) and input-to-hidden ($\mathbf{W}_{hi}$) matrices in RNNs.
Trainable parameters (yellow) vs.\ zeros (white).}
\label{fig:rnn_sparse}
\end{figure}
As a second way to sparsify the RNN cell, we propose to not provide all hidden dimensions with explicit access to each input dimension. In each row of $\mathbf{W}_{hi}$ we limit the number of trainable parameters to a fraction $\gamma\in\ ]0,1]$.
Practically, we choose to organize the $\gamma i$ trainable parameters in each row within a window that gradually moves from the first to the last input dimension, when advancing in the hidden (i.e., row) dimension.
Furthermore, we segment the hidden dimension of $\mathbf{W}_{hi}$ according to the segmentation of $\mathbf{W}_{hh}$, and move the window of $\gamma i$ trainable parameters discretely per segment, as illustrated in \figref{fig:rnn_sparse}(b).
Because of the proposed practical arrangement of sparse and dense blocks in $\mathbf{W}_{hh}$ and $\mathbf{W}_{hi}$, the sparse RNN cell is equivalent to a composition of smaller dense RNN's operating in parallel on (partly) overlapping input data segments, with concatenation of the individual hidden states at the output. This will be illustrated at the end of \secref{sec:L2R}.
As a result, fast libraries like CuDNN \cite{Chetlur_2014} can be used directly. Further research is required to investigate the potential benefit in terms of speed and total cell capacity, of physically distributing computations for the individual dense recurrent cells.
Note that this is only possible because of the initial requirement that the output dimensions are divided into disjoint segments. Whereas inputs can be shared entirely between different components, joining overlapping segments in the $h$ dimension would need to be done within the cell, before applying the gating and output non-linearities. This would make the proposed model less interesting for practical use.
We point out two special cases:
\begin{enumerate*}[label=(\roman*)]
\item dense $\mathbf{W}_{hi}$ matrices ($\gamma=1$) lead to $N$ parallel RNNs that share the inputs but with separate contributions to the output, and
\item organizing $\mathbf{W}_{hi}$ as a block matrix (e.g., $\gamma=1/N$ for $N$ same-length segments), leads to $N$ isolated parallel RNNs.
In the latter case, the reduction in trainable parameters is highest, for a given number of segments, but there is no more influence from any input dimension in a given segment to output dimensions in non-corresponding segments.
\end{enumerate*}
We recommend option (i) as the most rational way to apply our ideas: the sparse RNN output is a concatenation of individual outputs of a number of RNN components connected in parallel, all sharing the entire input.
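A minimal PyTorch sketch of this recommended option (i) is given below; the class name and the uniform segmentation are illustrative choices of ours, not part of the released implementation.
\begin{verbatim}
import torch
import torch.nn as nn

class ParallelSparseLSTM(nn.Module):
    # option (i): N parallel LSTMs share the full input;
    # their hidden states are concatenated into disjoint
    # output segments
    def __init__(self, input_size, hidden_size, n_segments):
        super().__init__()
        assert hidden_size % n_segments == 0
        seg = hidden_size // n_segments
        self.cells = nn.ModuleList(
            [nn.LSTM(input_size, seg, batch_first=True)
             for _ in range(n_segments)])

    def forward(self, x):   # x: (batch, time, input_size)
        outs = [cell(x)[0] for cell in self.cells]
        return torch.cat(outs, dim=-1)
\end{verbatim}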
\subsection{Language Modeling with Sparse RNNs}\label{subsec:LM}
We apply predefined sparse RNNs to language modeling. Our baseline approach is the AWD-LSTM model introduced by \citet{merity_2017}.
The recurrent unit consists of a three-layer stacked LSTM (Long Short-Term Memory network \cite{hochreiter_97}), with 400-dimensional inputs and outputs, and intermediate hidden state sizes of 1150.
Since the vocabulary contains only $10$k words, most trainable parameters are in the recurrent layer ($20$M out of a total of $24$M).
In order to cleanly measure the impact of predefined sparseness in the recurrent layer, we maintain the original word embedding layer dimensions, and sparsify the recurrent layer.\footnote{Alternative models could be designed for comparison, with modifications in both the embedding and output layer. Straightforward ideas include an ensemble of smaller independent models, or a mixture-of-softmaxes output layer to combine hidden states of the parallel LSTM components, inspired by \cite{Yang_2017}.}
In this example, we experiment with increasing dimensions in the recurrent layer while maintaining the number of trainable parameters, whereas in \secref{subsec:POS} we increase sparseness while maintaining dimensions.
Specifically, each LSTM layer is made sparse in such a way that the hidden dimension 1150 is increased by a factor 1.5 (chosen \emph{ad hoc}) to 1725, but the embedding dimensions and total number of parameters remain the same
(within error margins from rounding to integer dimensions for the dense blocks).
We use uniform segments.
The number of parameters for the middle LSTM layer can be calculated as:%
\footnote{This follows from an LSTM's 4 $\mathbf{W}_{hh}$ and 4 $\mathbf{W}_{hi}$ matrices, as well as bias vectors. However, depending on the implementation the equations may differ slightly in the contribution from the bias terms. We assume the standard Pytorch implementation~\cite{paszke_2017}.}
\begin{alignat*}{2}
&\text{\# params. LSTM layer 2 } &&\nonumber\\
&\qquad = 4(h_d\ i_d + h_d^2 +2h_d) &&\text{\emph{(dense)}}\nonumber\\
&\qquad = 4N(\frac{h_s}{N}\gamma i_s + \frac{h_s^2}{N^2} + 2\frac{h_s}{N})\quad &&\text{\emph{(sparse)}}\nonumber
\end{alignat*}
in which the first expression represents the general case (e.g., the \emph{dense} case has input and state sizes $i_d = h_d = 1150$),
and the second part is the \emph{sparse} case composed of $N$ parallel LSTMs with input size $\gamma i_s$, and state size $h_s/N$ (with $i_s = h_s = 1725$).
\emph{Dense} and \emph{sparse} variants have the same number of parameters for
$N=3$ and $\gamma=0.555$. These values are obtained by equating the two expressions. Note that the equality in model parameters for the dense and sparse case holds only approximately, due to rounding errors in $(\gamma i_s)$ and $(h_s/N)$.
\Figref{fig:rnn_sparse} displays $\mathbf{W}_{hh}$ and $\mathbf{W}_{hi}$ for the middle layer, which has close to $11$M parameters out of the total of $24$M in the whole model.
A dense model with hidden size $h=1725$ would require $46$M parameters, with $24$M in the middle LSTM alone.
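These counts are easily verified in a few lines of Python (assuming the PyTorch parameter convention from the footnote above):
\begin{verbatim}
def dense_lstm_params(i, h):
    return 4 * (h * i + h * h + 2 * h)

def sparse_lstm_params(i, h, n_seg, gamma):
    seg = h // n_seg
    return n_seg * 4 * (seg * round(gamma * i)
                        + seg * seg + 2 * seg)

print(dense_lstm_params(1150, 1150))   # 10589200
print(sparse_lstm_params(1725, 1725, 3, 0.555))
# -> 10584600, equal up to rounding
\end{verbatim}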
\begin{table}[!t]
\centering
\begin{tabular}{@{\extracolsep{4pt}}lcc@{}}
\toprule
& finetune & test perplexity \\
\midrule
\cite{merity_2017} & no & $58.8$ \\
baseline & no & $58.8 \pm 0.3$ \\
sparse LSTM & no & $\mathbf{57.9 \pm 0.3}$ \\
\midrule
\cite{merity_2017} & yes & $57.3$ \\
baseline & yes & $\mathbf{56.6 \pm 0.2}$ \\
sparse LSTM & yes & $57.0 \pm 0.2$ \\
\bottomrule
\end{tabular}
\caption{Language modeling for PTB \\(mean $\pm$ stdev). }
\label{tab:LMresults}
\end{table}
Given the strong hyperparameter dependence of the AWD-LSTM model, and the known issues in objectively evaluating language models \cite{Melis_2017}, we decided to keep all hyperparameters (i.e., dropout rates and optimization scheme) as in the implementation from \citet{merity_2017}\footnote{Our implementation extends \url{https://github.com/salesforce/awd-lstm-lm}.}, including the weight dropping with $p=0.5$ in the sparse $\mathbf{W}_{hh}$ matrices.
\tabref{tab:LMresults} shows the test perplexity on a processed version~\cite{Mikolov_2010} of the Penn Treebank (PTB) \cite{Marcus_1993}, both with and without the `finetune' step\footnote{The `finetune' step indicates hot-starting the Averaged Stochastic Gradient Descent optimization once more, after convergence in the initial optimization step \cite{merity_2017}.}, displaying mean and standard deviation over 5
different runs.
Without finetuning, the sparse model consistently performs around 1 perplexity point better, whereas after finetuning, the original remains slightly better, although less consistently so over different random seeds. We observed that the sparse model overfits more strongly than the baseline, especially during the finetune step.
We hypothesize that the regularization effect of \textit{a priori} limiting interactions between dimensions does not compensate for the increased expressiveness of the model due to the larger hidden state size.
Further experimentation, with tuned hyperparameters, is needed to determine the actual benefits of predefined sparseness, in terms of model size, resulting perplexity, and sensitivity to the choice of hyperparameters.
\section{Sparse Word Embeddings}\label{sec:sparse_emb}
Given a vocabulary with $V$ words, we want to construct vector representations of length $k$ for each word such that the total number of parameters needed (i.e., non-zero entries), is smaller than $k\, V$.
We introduce one way to do this based on word frequencies (\secref{subsec:freqbased_sparse_emb}), and present part-of-speech tagging experiments (\secref{subsec:POS}).
\subsection{Word-Frequency based Embedding Size}\label{subsec:freqbased_sparse_emb}
Predefined sparseness in word embeddings amounts to deciding which positions in the word embedding matrix $\mathbf{E}\in \mathbb{R}^{V \times k}$ should be fixed to zero, prior to training.
We define the fraction of trainable entries in $\mathbf{E}$ as the embedding density $\delta_E$.
We hypothesize that rare words can be represented with fewer parameters than frequent words, since they only appear in very specific contexts. This will be investigated experimentally in \secref{subsec:POS}.
Word occurrence frequencies have a typical Zipfian nature \cite{Manning_2008}, with many rare and few highly frequent terms.
Thus, representing the long tail of rare terms with short embeddings should greatly reduce memory requirements.
In the case of a low desired embedding density $\delta_E$, we want to economize on the rare words when assigning trainable parameters, and focus on the fewer, more frequent words. An exponential decay in the number of words that are assigned longer representations is one possible way to implement this. In other words, we propose to let the number of words that receive a trainable parameter at dimension $j$ decrease by a factor $\alpha^j$ ($\alpha \in\ ]0, 1]$).
For a given fraction $\delta_E$, the parameter $\alpha$ can be determined by requiring the total number of non-zero embedding parameters to amount to a fraction $\delta_E$ of all parameters:
\begin{align}
\text{\# embedding params.}
= \sum_{j=0}^{k-1}\alpha^j V = \delta_E\, k\, V \nonumber
\end{align}
and numerically solving for $\alpha$.
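Since the left-hand side is monotonic in $\alpha$, this amounts to one-dimensional root finding, e.g. (for $\delta_E<1$):
\begin{verbatim}
from scipy.optimize import brentq

def decay_factor(delta_E, k):
    # solve (1 - a**k) / (1 - a) = delta_E * k, a in (0, 1)
    f = lambda a: (1.0 - a**k) / (1.0 - a) - delta_E * k
    return brentq(f, 1e-9, 1.0 - 1e-9)

print(decay_factor(0.2, 20))
# -> ~0.75, the configuration shown in Fig. 2
\end{verbatim}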
\begin{figure
\includegraphics[width=.9\columnwidth]{sparse_emb.pdf}
\caption{Visualization of sparse embedding matrices for different densities $\delta_E$ (with $k = 20$). Colored region: non-zero entries.
Rows represent word indices, sorted from least frequent (LF) to highly frequent (HF).
}
\label{fig:embmatrix}
\end{figure}
\Figref{fig:embmatrix} gives examples of embedding matrices with varying~$\delta_E$.
For a vocabulary of $44$k terms and maximum embedding length $k=20$, the density $\delta_E=0.2$ leads to 25\% of the words with embedding length 1 (corresponding $\alpha=0.75$), only 7.6\% with length of 10 or higher, and with the maximum length 20 for only the 192 most frequent terms.
The particular configurations shown in \figref{fig:embmatrix} are used for the experiments in \secref{subsec:POS}.
In order to set a minimum embedding length for the rarest words, as well as for computational efficiency, we note that this strategy can also be applied on $M$ bins of embedding dimensions, rather than per individual dimensions. The width of the first bin then indicates the minimum embedding length. Say bin $m$ has width $\kappa_m$ (for $m=0,\ldots,M-1$, and $\sum_m \kappa_m = k$). The multiplicative decay factor $\alpha$ can then be obtained by solving
\begin{equation}
\delta_E = \frac{1}{k}\sum_{m=0}^{M-1}\kappa_m\alpha^m, \label{eq:deltaE}
\end{equation}
while numerically compensating for rounding errors in the number $V\alpha^m$ of words that are assigned trainable parameters in the $m^\textrm{th}$ bin.
\subsection{Part-of-Speech Tagging Experiments}\label{subsec:POS}
We now study the impact of sparseness in word embeddings, for a basic POS tagging model, and report results on the PTB Wall Street Journal data.
We embed 43,815 terms in 20-dimensional space, as input for a BiLSTM layer with hidden state size 10
for both forward and backward directions.
The concatenated hidden states go into a fully connected layer with $\tanh$ non-linearity (down to dimension 10), followed by a \emph{softmax} classification layer with 49 outputs (i.e., the number of POS tags).
The total number of parameters is $880$k, of which $876$k in the embedding layer.
Although character-based models are known to outperform pure word embedding based models \cite{Ling_2015}, we wanted to investigate the effect of sparseness in word embeddings, rather than creating more competitive but larger or complex models, risking a smaller resolution in the effect of changing
individual building blocks. To this end we also limited the dimensions, and hence the expressiveness, of the recurrent layer.\footnote{With LSTM state sizes of 50, the careful tuning of dropout parameters gave an accuracy of 94.7\% when reducing the embedding size to $k=2$, a small gap compared to 96.8\% for embedding size 50. The effect of larger sparse embeddings was therefore much smaller in absolute value than the one visualized in \figref{fig:pos_sparse}, because of the much more expressive recurrent layer.}
Our model is similar to but smaller than the `word lookup' baseline by \citet{Ling_2015}.
\begin{figure
\includegraphics[width=.9\columnwidth]{sparse_dense_seq.pdf}
\caption{POS tagging accuracy on PTB data: dense (red) vs.\ sparse (green). X-axis: embedding size $k$ for the dense case, and average embedding size (or $20\;\delta_E$) for the sparse case.
Shaded bands indicate \emph{stdev} over 4 randomly seeded runs.
}
\label{fig:pos_sparse}
\end{figure}
\Figref{fig:pos_sparse} compares the accuracy for variable densities $\delta_E$ (for $k=20$) vs.\ different embedding sizes (with $\delta_E=1$).
For easily comparing sparse and dense models with the same number of embedding parameters, we scale $\delta_E$, the x-axis for the sparse case, to the average embedding size of $20\;\delta_E$.
Training models with shorter dense embeddings appeared more difficult. In order to make a fair comparison, we therefore tuned the models over a range of regularization hyperparameters, provided in \tabref{tab:POSparams}.
\begin{table*}[t]
\centering
\begin{tabular}{ll}
\toprule
hyperparameter & value(s) \\
\midrule
optimizer & Adam \cite{kingma_2014} \\
learning rate & 0.001 \\
epochs & 50 \\
word level embedding dropout \dag & [0.0, 0.1, 0.2] \\
variational embedding dropout \dag & [0.0, 0.1, 0.2, 0.4] \\
DropConnect on $\mathbf{W}_{hh}$ \dag & [0.0, 0.2, 0.4]\\
batch size & 20 \\
\bottomrule
\end{tabular}
\caption{Hyperparameters for POS tagging model (\dag as introduced in \cite{merity_2017}). A list indicates tuning over the given values was performed.}
\label{tab:POSparams}
\end{table*}
We observe that the sparse embedding layer allows lowering the number of parameters in $\mathbf{E}$ down to a fraction of 15\% of the original amount, with little impact on the effectiveness, provided $\mathbf{E}$ is sparsified rather than reduced in size.
The reason for that is that with sparse 20-dimensional embeddings, the BiLSTM still receives 20-dimensional inputs, from which a significant subset only transmits signals from a small set of frequent terms. In the case of smaller dense embeddings, information from all terms is uniformly present over fewer dimensions, and needs to be processed with fewer parameters at the encoder input.
Finally, we verify the validity of our hypothesis from \secref{subsec:freqbased_sparse_emb} that frequent terms need to be embedded with more parameters than rare words.
Indeed, one could argue in favor of the opposite strategy. It would be computationally more efficient if the terms most often encountered had the smallest representation. Also, stop words are the most frequent ones but are said to carry little information content.
However, \tabref{tab:POSresults} confirms our initial hypothesis. Applying the introduced strategy to sparsify embeddings on randomly ordered words (`no sorting') leads to a significant decrease in accuracy compared to the proposed sorting strategy (`up').
When the most frequent words are encoded with the shortest embeddings (`down' in the table), the accuracy goes down even further.
\begin{table*}[t!]
\centering
\begin{tabular}{@{\extracolsep{4pt}}lccc@{}}
\toprule
& $\delta_E=1.0$ & $\delta_E=0.25$ & $\delta_E=0.1$ \\
\midrule
\# params. ($\mathbf{E}$; all) & $876$k; $880$k & $219$k; $222$k & $88$k ; $91$k \\
\midrule
up & & $\mathbf{96.1 \pm 0.1}$ & $\mathbf{95.6 \pm 0.1}$ \\
no sorting & $96.0 \pm 0.3$ & $94.3 \pm 0.4$ & $93.0 \pm 0.3$ \\
down & & $89.8 \pm 2.2$ & $90.6 \pm 0.5$ \\
\bottomrule
\end{tabular}
\caption{Impact of vocabulary sorting on POS accuracy with sparse embeddings: up vs.\ down (most frequent words get longest vs.\ shortest embeddings, resp.) or not sorted, for different embedding densities $\delta_E$.
}
\label{tab:POSresults}
\end{table*}
\section{Learning To Recite}\label{sec:L2R}
\begin{table*}[t!]
\centering
\begin{tabular}{@{\extracolsep{4pt}}lccccc@{}}
\toprule
&\multicolumn{2}{c}{embeddings} & hidden state & \multirow{2}{*}{\# parameters} & memorization \\
& size $k$, & density $\delta_E$ & size $h$ & & accuracy (\%) \\
\midrule
dense model (orig. dims.) & $400$ & $1$ & $1150$ & $24.22$M & $\mathbf{100.0}$ \\
\midrule
dense model (see \figref{fig:rnn_composite}(a)) & $200$ & $1$ & $575$ & $7.07$M & $99.33$ \\
sparse RNN (see \figref{fig:rnn_composite}(b)) & $200$ & $1$ & $1150$ & $7.07$M & $\mathbf{99.95}$ \\
sparse RNN + sparse emb. & $400$ & $1/2$ & $1150$ & $7.07$M & $99.74$ \\
\midrule
dense model & $133$ & $1$ & $383$ & $3.59$M & $81.48$ \\
sparse RNN & $133$ & $1$ & $1150$ & $3.59$M & $76.37$ \\
sparse RNN + sparse emb. & $400$ & $1/3$ & $1150$ & $3.59$M & $\mathbf{89.98}$ \\
\bottomrule
\end{tabular}
\caption{PTB train set memorization accuracies for dense models vs.\ models with predefined sparseness in recurrent and embedding layers with comparable number of parameters.}
\label{tab:L2R}
\end{table*}
From the language modeling experiments in \secref{subsec:LM}, we hypothesized that an RNN layer becomes more expressive, when the dense layer is replaced by a larger layer with predefined sparseness and the same number of model parameters. In this section, we design an experiment to further investigate this claim. One way of quantifying an RNN's capacity is in measuring how much information it can memorize. We name our experimental setup \emph{learning to recite}: we investigate to what extent dense vs.\ sparse models are able to learn an entire corpus by heart in order to recite it afterwards.
We note that this toy problem could have interesting applications, such as the design of neural network components that keep entire texts or even knowledge bases available for later retrieval, encoded in the component's weight matrices.\footnote{It is likely that recurrent networks are not the best choice for this purpose, but here we only wanted to measure the LSTM-based model's capacity to memorize with and without predefined sparseness.}
\subsection{Experimental Results}
The initial model for our \emph{learning to recite} experiment is the baseline language model used in \secref{subsec:LM} \cite{merity_2017}, with the PTB data. We set all regularization parameters to zero, to focus on memorizing the training data. During training, we measure the ability of the model to correctly predict the next token at every position in the training data, by selecting the token with the highest predicted probability. When the model reaches an accuracy of 100\%, it is able to recite the entire training corpus.
We propose the following optimization setup (tuned and tested on dense models with different sizes):
minibatch SGD (batch size 20, momentum 0.9, and best initial learning rate among 5 or 10). An exponentially decaying learning rate factor (0.97 every epoch) appeared more suitable for memorization than other learning rate scheduling strategies, and we report the highest accuracy in 150 epochs.
We compare the original model (in terms of network dimensions) with versions that have fewer parameters,
by either reducing the RNN hidden state size $h$ or by sparsifying the RNN, and similarly for the embedding layer. For making the embedding matrix sparse, $M=10$ equal-sized segments are used (as in eq.~\ref{eq:deltaE}).
\Tabref{tab:L2R} lists the results.
The dense model with the original dimensions has $24$M parameters to memorize a sequence of in total `only' $930$k tokens, and is able to do so. When the model's embedding size and intermediate hidden state size are halved, the number of parameters drops to $7$M, and the resulting model now makes 67 mistakes out of 10k predictions. If $h$ is kept, but the recurrent layers are made sparse to yield the same number of parameters, only 5 mistakes are made for every 10k predictions. Making the embedding layer sparse as well introduces new errors.
If the dimensions are further reduced to a third the original size, the memorization capacity goes down strongly, with less than $4$M trainable parameters. In this case, sparsifying both the recurrent and embedding layer yields the best result, whereas the dense model works better than the model with sparse RNNs only.
A possible explanation for that is the strong sparseness in the RNNs. For example, in the middle layer only 1 out of 10 recurrent connections is non-zero. In this case, increasing the size of the word embeddings (at least, for the frequent terms) could provide an alternative for the model to memorize parts of the data, or maybe it makes the optimization process more robust.
\begin{figure}[!t]
\begin{overpic}[width=\columnwidth]{sparse_rnn_composed.pdf}
\put(16,98){\small{layer 1}}
\put(32,98){\small{layer 2}}
\put(48,98){\small{layer 3}}
\put(0,82){(a)}
\put(4,89){\small{$k\!=\!200$}}
\put(22,95){\small{$h\!=\!575$}}
\put(40,95){\small{$h\!=\!575$}}
\put(57,89){\small{$k\!=\!200$}}
\put(16,80){\footnotesize{$R_1$}}
\put(32,80){\footnotesize{$R_2$}}
\put(49,80){\footnotesize{$R_3$}}
\put(14,76){\scriptsize{$200\!\rightarrow\!575$}}
\put(31,76){\tiny{$575\!\rightarrow\!575$}}
\put(48,76){\tiny{$575\!\rightarrow\!200$}}
\put(14,73){\tiny{$1.79$M par.}}
\put(31,73){\tiny{$2.65$M par.}}
\put(48,73){\tiny{$0.62$M par.}}
\put(0,35){(b)}
\put(4,44){\small{$k\!=\!200$}}
\put(22,62){\small{$h\!=\!1150$}}
\put(40,62){\small{$h\!=\!1150$}}
\put(57,44){\small{$k\!=\!200$}}
\put(16,53){\footnotesize{$R_{1,1}$}}
\put(16,41){\footnotesize{$R_{1,2}$}}
\put(16,29){\footnotesize{$R_{1,3}$}}
\put(16,16){\footnotesize{$R_{1,4}$}}
\put(33,55){\footnotesize{$R_{2,1}$}}
\put(33,45){\footnotesize{$R_{2,2}$}}
\put(33,35){\footnotesize{$R_{2,3}$}}
\put(33,24.5){\footnotesize{$R_{2,4}$}}
\put(33,14){\footnotesize{$R_{2,5}$}}
\put(50,38){\footnotesize{$R_{3,1}$}}
\put(50,31){\footnotesize{$R_{3,2}$}}
\put(14,12){\tiny{$99\!\rightarrow\!288$}}
\put(31,10){\tiny{$244\!\rightarrow\!230$}}
\put(48,27){\tiny{$675\!\rightarrow\!100$}}
\put(14,6){\footnotesize{sparse $R_{1}$}}
\put(14,3){\tiny{$200\!\rightarrow\!1150$}}
\put(31,6){\footnotesize{sparse $R_{2}$}}
\put(31,3){\tiny{$1150\!\rightarrow\!1150$}}
\put(48,6){\footnotesize{sparse $R_{3}$}}
\put(48,3){\tiny{$1150\!\rightarrow\!200$}}
\put(14,0){\tiny{$1.79$M par.}}
\put(31,0){\tiny{$2.65$M par.}}
\put(48,0){\tiny{$0.62$M par.}}
\end{overpic}
\caption{Schematic overview of 3-layer stacked (a) dense vs.\ (b) sparse LSTMs with the same number of parameters (indicated with `par.'). Sparse layers are effectively composed of smaller dense LSTMs. `$R_{i,j}$' indicates component $j$ within layer $i$, and `$675\!\rightarrow\!100$' indicates an LSTM compoment with input size $675$ and output size $100$.}
\label{fig:rnn_composite}
\end{figure}
\subsection{Visualization}
Finally, we provide an illustration of the high-level composition of the recurrent layers in two of the models used for this experiment. \Figref{fig:rnn_composite}(a) sketches the stacked 3-layer LSTM network from the `dense RNN' model (see \Tabref{tab:L2R}) with $k=200$ and $h=575$.
As already mentioned, our proposed sparse LSTMs are equivalent to a well-chosen composition of smaller dense LSTM components with overlapping inputs and disjoint outputs. This composition is shown in \figref{fig:rnn_composite}(b) for the model `sparse RNN' (see \Tabref{tab:L2R}), which in every layer has the same number of parameters as the dense model with reduced dimensions.
\section{Conclusion and Future Work}\label{sec:conclusion}
This paper introduces strategies to design word embedding layers and recurrent networks with predefined sparseness. Effective sparse word representations can be constructed by
encoding less frequent terms with smaller embeddings and vice versa. A sparse recurrent neural network layer can be constructed by applying multiple smaller recurrent cells in parallel, with partly overlapping inputs and concatenated outputs.
The presented ideas can be applied to build models with larger representation sizes
for a given number of parameters, as illustrated with a language modeling example.
Alternatively, they can be used
to reduce the number of parameters
for given representation sizes, as investigated with a part-of-speech tagging model.
We introduced ideas on predefined sparseness in sequence models, as well as proof-of-concept experiments, and analysed the memorization capacity of sparse networks in the `learning to recite' toy problem.
More elaborate experimentation is required to investigate the benefits of predefined sparseness on more competitive tasks and datasets in NLP. For example, language modeling results on the Penn Treebank rely on heavy regularization due to the small corpus. Follow-up work could therefore investigate to what extent language models for large corpora can be trained with limited computational resources, based on predefined sparseness.
Other ideas for future work include the use
of predefined sparseness for pretraining word embeddings,
or other neural network components besides
recurrent models, as well as their use in alternative
applications such as sequence-to-sequence
tasks or in multi-task scenarios.
\section*{Acknowledgments}
We thank the anonymous reviewers for their time and effort, and the valuable feedback.
\section{Introduction}
One-dimensional systems
in proximity to s-wave superconductors have recently been extensively investigated as candidates for topological superconductivity \cite{Kitaev2001UnpairedMajoranaFermionsInQuantumWires,Kitaev2009,Ryu2010}.
This includes experiments and calculations on semiconducting nanowires in a magnetic field \cite{Zuo2012,Ronen2012,Oreg2010,Zhang2021}, self-organized atomic chains \cite{Perge2014}, and atomically constructed magnetic chains \cite{Kim2018,Schneider2020,Schneider2021a,Steiner} on superconducting substrates, e.g., Fe on Re \cite{Kim2018, Schneider2020} and Mn on Nb \cite{Schneider2021a}.
These systems are furthermore promising platforms for
odd-frequency and triplet superconductivity \cite{Kashuba2017,Linder2019,Kuzmanovski2020}.
A central model for analyzing the electronic and magnetic properties of the above mentioned systems is the spinful one-band model with proximity-induced s-wave superconductivity, including local magnetic Zeeman fields and Rashba-spin-orbit coupling \cite{Oreg2010,Perge2013,Klinovaja2013,Vazifeh,Heimes2015,Hu,Minami}.
This model describes 1D systems that can exhibit topological superconductivity and host Majorana zero modes at its ends.
Despite its simplicity, there is an ongoing discussion about the magnetic ground state of such systems in dependence on its parameters.
Klinovaja et al. \cite{Klinovaja2013} and Vazifeh et al. \cite{Vazifeh} found by an effective spin model that the system self-organizes into a topological state in the limit of weak magnetic interactions. Hu et al. \cite{Hu} assumed harmonic spin spirals and identified the energetically most favorable ones among them. In contrast to this approach, Minami et al. \cite{Minami} additionally found ground state spin configurations in non-superconducting systems that are not represented by harmonic spirals but either by collinear or by non-coplanar configurations. To this end, they performed Monte-Carlo simulations with an effective spin model at vanishingly small temperatures.
Furthermore, there are models including electron-electron interactions and continuum electron models to predict the magnetic phases in one-dimensional superconductors with magnetic impurities and Rashba-spin-orbit coupling, which point towards a stable, self-organized spiral magnetic phase giving rise to one-dimensional topological superconductivity \cite{Braunecker2009a,Braunecker2013,Braunecker2010,Braunecker2009}.\\
In this paper, we present Monte-Carlo calculations of the magnetic ground state of a 1D magnetic chain with proximity-induced s-wave superconductivity. We show that non-spiral non-collinear phases exist, and analyze how they are affected by superconductivity and how they affect the topological electronic phases of the system in return. In the limit of vanishing superconductivity, we identify magnetic phases of complex order and complex collinear phases in addition to previously known harmonic spirals and collinear phases.
Our calculations are first performed in a tight-binding model, where we consider the magnetization as a free parameter and do not limit it by any assumption about the magnetic ground state. Second, we introduce a computationally efficient method for approximately determining the magnetic ground states of large tight-binding systems, which we use to gain
insight into the driving forces behind the complex magnetic states. To this end, we fit the parameters of a classical Heisenberg model to our tight-binding model, showing that four-spin interactions become relevant for understanding the magnetic phases.\\
The paper is structured as follows. In Section II, we explain the tight-binding model and the methods used to identify its magnetic ground states. In Section III, we discuss the magnetic ground state and the resulting electronic topological phases. In Section IV, we introduce the classical Heisenberg model for approximately finding the magnetic ground state of a tight-binding model. Finally, in Section V, we summarize our findings and give an outlook for future research.
\begin{figure*}
\subfloat{\includegraphics[width=1.0\linewidth]{examples_one_plot.png}}
\caption{Ground state spin configurations of finite-size chains. Relative angle between neighboring spins along the chain for representative examples of ground states of finite-size chains with open boundary conditions with $\Delta=0$ and $L=30$. The periodic parts that can be used as a unit cell are marked in orange. The insets show a 2D projection of the spins. The color denotes the relative angle between the $j$-th spin and the first spin $\theta_{1,j}$. (a) $J=1.4t$, $\mu=0.5t$, (b) $J=0.2t$, $\mu=1.0t$, (c) $J=0.6t$, $\mu=1.4t$, (d) $J=1.6t$, $\mu=0.6t$.}
\label{examples}
\end{figure*}
\section{Model and Method}
We investigate a one-dimensional atomic chain with classical local magnetic moments and proximity-induced s-wave superconductivity. The system is described by the Hamiltonian
\begin{align}
H=& \sum_{j=1}^L c^{\dagger }_j\left(-J{\tau }_0\vec{m}_j\cdot \vec{\sigma }+\left(2t-\mu \right){\tau }_z{\sigma }_0+\Delta\tau_x\sigma_0\right)c_j\nonumber \\
& +\sum_{<i,j>}{c^{\dagger }_i\left(t{\tau }_z{\sigma }_0+\lambda\tau_z\sigma_y\right)c_j},\label{H_TB}
\end{align}
with the Nambu spinor
$c=(c_{\uparrow },c_{\downarrow },c^{\dagger }_{\downarrow },{-c}^{\dagger }_{\uparrow })$ \cite{nambu}, the coupling $J$ between a magnetic moment on a given site and the spin of an electron, the orientation of the local magnetic moments on the $j$-th site $\vec{m}_j$, the chemical potential $\mu$, the hopping amplitude $t$, the superconducting order parameter $\mathrm{\Delta }$, and the strength of Rashba-spin-orbit coupling $\lambda$. The Pauli matrices $\sigma$ and $\tau $ operate in spin and particle-hole space, respectively. $L$ is the length of the chain. This Hamiltonian effectively includes spin interactions mediated by the itinerant electrons and neglects direct interactions between the spins.
We choose a Rashba-spin-orbit coupling in $\sigma_y$-direction without loss of generality.
By a standard local gauge transformation $c=e^{ij\alpha\sigma_y}c'$ \cite{Braunecker2010}, the Rashba-spin-orbit coupling can be rotated into the magnetic moments $m_j'=R(2j\alpha)m_j$, where $R$ is the rotation matrix around the y-axis by an angle of $2j\alpha$. To fully rotate Rashba-spin-orbit coupling of strength $\lambda$ into the local magnetic moments, one has to set $\alpha=\arctan(\lambda/t)$, which rescales the hopping to $t'=t\sqrt{1+\frac{\lambda^2}{t^2}}$ and rotates the magnetic moments around the y-axis by an angle of $2j\arctan(\lambda/t)$. In the following, we therefore restrict our analysis to $\lambda=0$ and $t=1$. The results for non-vanishing Rashba-spin-orbit coupling can be obtained from the presented results by a backrotation of the magnetic moments by $-2j\arctan(\lambda/t)$ and rescaling of all energies by $\sqrt{1+\frac{\lambda^2}{t^2}}^{-1}$.\\
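For concreteness, the backrotation of the magnetic moments can be sketched as follows; the sign convention of the rotation is tied to our choice of $\lambda\tau_z\sigma_y$ and should be adapted accordingly.
\begin{verbatim}
import numpy as np

def rotate_out_rashba(moments, lam, t=1.0):
    # rotate moment j by 2*j*arctan(lam/t) about y; the
    # hopping is rescaled to t' = t*sqrt(1 + lam**2/t**2)
    alpha = np.arctan2(lam, t)
    out = np.empty_like(moments)
    for j, m in enumerate(moments):  # moments: (L, 3)
        a = 2.0 * j * alpha
        R = np.array([[np.cos(a), 0.0, np.sin(a)],
                      [0.0, 1.0, 0.0],
                      [-np.sin(a), 0.0, np.cos(a)]])
        out[j] = R @ m
    return out
\end{verbatim}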
This model can host topological electronic phases despite being an s-wave superconductor depending on the magnetic configuration because the combination of hopping, s-wave pairing, and local magnetic moments can lead to an effective p-wave pairing \cite{Perge2013}. Here, we consider the magnetization as a free parameter, not limited by any a priori assumption about the magnetic ground state, and identify the energetically most favorable configuration of the magnetization $\textbf{m}_i$ for a given $J$ and $\mu$ at zero temperature with a Metropolis Monte-Carlo algorithm \cite{MC2, MC1},
and subsequently calculate ground state properties, e.g., the topological number of the electronic system and the electronic gap.
To prevent magnetic frustration induced by incommensurate magnetic structures, we use open boundary conditions.
Details on our method are explained in Appendix A and B. The tight-binding calculations have been performed using the \textit{Kwant} code \cite{kwant}.
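A stripped-down single-spin-update version of such a ground-state search, with a dense-matrix diagonalization of Eq.~(\ref{H_TB}) at each step, is sketched below; the parameter values are placeholders, and the full procedure (annealing schedule, convergence criteria) is described in the appendices.
\begin{verbatim}
import numpy as np

s0 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1.0, -1.0]).astype(complex)
# sz and sx also serve as tau_z and tau_x below

def chain_energy(m, J, mu, Delta, t=1.0):
    # ground-state energy of eq. (1):
    # half the sum of negative BdG eigenvalues
    L = len(m)
    H = np.zeros((4 * L, 4 * L), dtype=complex)
    hop = t * np.kron(sz, s0)  # ordering tau x sigma
    for j in range(L):
        msig = m[j, 0]*sx + m[j, 1]*sy + m[j, 2]*sz
        H[4*j:4*j+4, 4*j:4*j+4] = (
            -J * np.kron(s0, msig)
            + (2*t - mu) * np.kron(sz, s0)
            + Delta * np.kron(sx, s0))
        if j + 1 < L:
            H[4*j:4*j+4, 4*j+4:4*j+8] = hop
            H[4*j+4:4*j+8, 4*j:4*j+4] = hop.conj().T
    ev = np.linalg.eigvalsh(H)
    return 0.5 * ev[ev < 0].sum()

def metropolis(L=20, steps=20000, T=1e-3,
               J=0.5, mu=1.0, Delta=0.1, seed=0):
    rng = np.random.default_rng(seed)
    m = rng.normal(size=(L, 3))
    m /= np.linalg.norm(m, axis=1, keepdims=True)
    E = chain_energy(m, J, mu, Delta)
    for _ in range(steps):
        j = rng.integers(L)
        old = m[j].copy()
        v = rng.normal(size=3)
        m[j] = v / np.linalg.norm(v)  # random new direction
        E_new = chain_energy(m, J, mu, Delta)
        if E_new < E or rng.random() < np.exp((E - E_new)/T):
            E = E_new                 # accept
        else:
            m[j] = old                # reject
    return m, E
\end{verbatim}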
\section{Magnetic ground states and topological phases}
In this section, we investigate the magnetic ground state, starting with vanishing superconductivity ($\Delta=0$) and further proceeding to the superconducting case. At the end, we investigate how the magnetic states affect the electronic topological phases.\\
We start with Monte-Carlo simulations of finite chains to generate trial configurations for infinitely long chains.
Scanning through the parameter space $(J,\mu)$ with vanishing superconductivity $\Delta=0$, we find complex collinear structures at zero temperature. All ground states are globally rotationally invariant, meaning that rotating all spins simultaneously by the same angle does not affect the energy, and we do not find a spontaneous breaking of this symmetry. We observe collinear $\uparrow\up\downarrow\down$, $\uparrow\up\uparrow\downarrow\down\downarrow$, $\uparrow\up\uparrow\downarrow$ and $\uparrow\up\uparrow\downarrow\uparrow\downarrow\down\downarrow\uparrow\downarrow$-states. Here, the short-hand notation $\uparrow \uparrow \downarrow \downarrow $ denotes that the ground state is a periodic repetition of two parallel spins and two spins that are anti-parallel to the first two spins.
We also observe structures that are dependent on finite-size effects. Representative example configurations for finite chains are shown in Fig. \ref{examples}, which shows the relative angle $\theta_{j,j+1}$ between neighboring spins along the chain. At the ends of the chains, the spins align mostly collinear and assume different structures towards the interior of the chains. Some structures (a),(b) appear to change towards harmonic spirals, but the finite-size effects can suppress this behavior (b). We also observe periodic structures, that are sequences of multiple repeating relative angles between neighboring spins (c, d). When using the inner periodic parts as unit cells for an infinite chain, the total energy of the obtained magnetic configurations is lower than that of any of the found collinear structures or spirals, implying that these structures are not caused by finite-size effects.
In contrast to Minami et al. \cite{Minami}, we did not observe any non-coplanar configurations.\\
To clarify if these configurations also exist as magnetic phases in infinite chains,
we compare the zero-temperature total energy for harmonic spirals and that of all identified collinear configurations, which we use as trial configurations for infinite chains. We do so by extracting magnetic unit cells, which are periodically repeated to perform a k-space transformation. In k-space, we choose a resolution that corresponds to an effective length of 11000 atoms.
We additionally compare these configurations to results from a modified Monte-Carlo calculation that makes use of a spin basis rotation.
Fig. \ref{magn SC off} shows the magnetic phases of infinite chains identified with this method in dependence on the magnetic coupling $J$ and the chemical potential $\mu$ for vanishing superconductivity. The ground state is ferromagnetic (A) for small or negative chemical potentials $\mu$ and antiferromagnetic (B) for large $J$ or $\mu$. For $J\lesssim 0.5 t$ and $0<\mu<2t$ (C) we find a spin spiral phase, which gets interrupted around $\mu\sim0.6t$ for $J\lesssim 1.3t$ by a collinear $\uparrow \uparrow \downarrow \downarrow $-phase (D). Area E marks a collinear $\uparrow \uparrow \uparrow \downarrow \downarrow \downarrow $-phase.
The $\uparrow\up\downarrow\down$- and $\uparrow\up\uparrow\downarrow\down\downarrow$-phases have as well been reported by Minami et al. \cite{Minami}.
In addition, we identify an $\uparrow\up\uparrow\downarrow$-phase (F) and an $\uparrow\up\uparrow\downarrow\uparrow\downarrow\down\downarrow\uparrow\downarrow$-phase (G) close to the $\uparrow\up\uparrow\downarrow\down\downarrow$-phase. The total energy of the collinear phases is $\sim0.05 J$ per atom lower than that of the most favorable harmonic spiral.
Similar magnetic structures have been experimentally observed with spin polarized scanning tunneling microscopy. Spin spirals (C) have been identified in Fe chains on a Re surface \cite{spirals}. Mn chains on a Nb(110) surface show ferromagnetic or anti-ferromagnetic behavior depending on their direction \cite{FM-AFM}. An $\uparrow\up\downarrow\down$-structure has been found in GeCu$_2$O$_4$ \cite{UUDD}.
Between the $\uparrow\up\downarrow\down$-phase and the AFM phase, we find complex non-collinear structures, that can be described as sequences of relative angles resulting in at least some non-collinear spins (H). Fig. \ref{examples} (c) and (d) represent finite-size examples of these configurations. The difference in total energy between this phase and the most favorable harmonic spiral varies from $\sim 0.03 J$ per atom close to the AFM phase to $\sim 0.002 J$ per atom close to the $\uparrow\up\downarrow\down$-phase.
We identified this area with a modified Monte-Carlo method, explained in Appendix D. The energetically favorable sequences of angles found with this method align well with the results from the finite-size calculations.
Finally, for $\mu <-|J|$ no bands are occupied, which is why this region is blacked out.\\
\begin{figure}
\raisebox{-0.5 \height}{\subfloat{\includegraphics[width=0.5\textwidth]{inf_phases_D0_zoom_topo.png}}}
\caption{Magnetic phases for vanishing superconductivity in dependence on $J$ and $\mu$ for infinite chains. The shades from white to dark blue denote a spiral, where saturation describes the spiral pitch $\theta_{\text{spiral}}$, see left color bar. The right color bar labels magnetic phases. The shortened notation 3$\uparrow$ refers to $\uparrow\uparrow\uparrow$. In the hatched area, we find a negative Majorana number $M=-1$ and an opening of a spectral gap for infinitesimal superconductivity, calculated with $\Delta=0.001t$, while the other regions remain gapless or have a positive Majorana number $M=+1$.}
\label{magn SC off}
\end{figure}
\begin{figure*}
\centering
\subfloat{\includegraphics[width=0.9\linewidth]{theta_change_D_01_02_03_04_L40_w.png}}
\subfloat{\includegraphics[width=0.054\linewidth]{colorbar_theta.png}}
\subfloat{\includegraphics[width=0.9\linewidth]{collin_counter_D_01_02_03_04_L40_w.png}}
\subfloat{\includegraphics[width=0.054\linewidth]{colorbar_Ncol.png}}
\subfloat{\includegraphics[width=0.9\linewidth]{det_r_finite_01_02_03_04_L40_w.png}}
\subfloat{\includegraphics[width=0.054\linewidth]{colorbar_detr.png}}
\caption{Magnetic properties and topological phases of finite-size chains for non-vanishing superconducting order parameters $\Delta>0$ with respect to $J$ and $\mu$, calculated for a chain of length $L=40$ with open boundary conditions. (a) $\theta_{\text{change}}$. (b) Number of collinear spins $N_{\text{col}}$. (c) The determinant of the reflection matrix $\det(r)$ in the magnetic ground state. Negative values (blue) indicate that the system is in a non-trivial state.}
\label{finite SC}
\end{figure*}
Superconductivity, i.e, $\Delta\neq 0$, can open a spectral gap and cause the electronic system to become topologically non-trivial. Superconductivity also affects the magnetic ground states, which is investigated in the following.
Fig. \ref{finite SC} (a),(b) show the number of collinear spins $N_{\text{col}}$ and the average change of angles between neighboring spins $\theta_{\text{change}}$ for a chain of $L=40$ atoms and open boundary conditions. Here, two spins $\textbf{s}_i$ and $\textbf{s}_j$ are considered to be collinear if $|\textbf{s}_i\cdot \textbf{s}_j|>0.99$. The quantity $\theta_{\text{change}}$ is a measure of how much a chain differs from a harmonic spiral, i.e., a chain in which the relative angle between neighboring spins remains the same along the whole chain. We define $\theta_{\text{change}}$ by
\begin{equation} \label{theta_change}
\begin{split}
\theta_{\text{change}}=\frac{1}{L-2}\sum^{L-2}_{j=1}|\arccos\left(\vec{m}_j\cdot \vec{m}_{j+1}\right)\\
-\arccos\left(\vec{m}_{j+1}\cdot \vec{m}_{j+2}\right)|.
\end{split}
\end{equation}
For small $\mathrm{\Delta }$, a significant fraction of the parameter space cannot be described by harmonic spirals, i.e., the magnetic ground states have a non-vanishing $\theta_{\text{change}}$. For $\mathrm{\Delta }\gtrsim 0.35t$ the $\uparrow \uparrow \downarrow \downarrow $-phase disappears. For $\mathrm{\Delta }\gtrsim 1.5t$, the $\uparrow \uparrow \uparrow \downarrow \downarrow \downarrow$-phase and $\uparrow \uparrow \uparrow \downarrow$-phase disappear. Thus, for $\Delta\gtrsim 1.5 t$, the system converges towards harmonic spirals for all $J$ and $\mu$.
In the following, we investigate the electronic topological phases of this system.
As our model is a one-dimensional class D material with the time reversal symmetry being broken by the magnetic moments and the particle-hole symmetry squaring to one, it has a $\mathbb{Z}_2$-invariant \cite{tenfold}.
We employ the Majorana number $M$ \cite{Kitaev2001UnpairedMajoranaFermionsInQuantumWires} for the topological classification of infinitely long chains, which is
\begin{equation}
M=\sgn(\Pf(\tilde{H}(k=0)))\cdot \sgn(\Pf(\tilde{H}(k=\pi))),
\end{equation}
with the Pfaffian $\Pf$ and the $k$-space Hamiltonian $\tilde{H}$ in a Majorana basis. The Hamiltonian is brought into the Majorana basis by the unitary transformation $\tilde{H}=U^\dagger H U$ with
\begin{equation}
U=\frac{1}{\sqrt{2}}\begin{pmatrix}
1 & 0 & 0 & i\\
0 & 1 & i & 0\\
0 & 1 & -i & 0\\
-1 & 0 & 0 & i
\end{pmatrix}.
\end{equation}
The system is topologically non-trivial, when it has a non-zero spectral gap and $M=-1$.
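For the four-band Bloch Hamiltonians considered here, the Pfaffian at the particle-hole symmetric momenta can be evaluated in closed form. The following is a minimal sketch, assuming a user-supplied function \texttt{H\_k} that returns the $4\times4$ Bloch Hamiltonian, the matrix $U$ given above, and the convention $\tilde{H}=\hat\imath A$ with $A$ real and antisymmetric in the Majorana basis (prefactor conventions may differ):
\begin{verbatim}
import numpy as np

def pf4(A):
    # closed-form Pfaffian of a 4x4 antisymmetric matrix
    return A[0,1]*A[2,3] - A[0,2]*A[1,3] + A[0,3]*A[1,2]

def majorana_number(H_k, U):
    signs = []
    for k in (0.0, np.pi):
        Ht = U.conj().T @ H_k(k) @ U   # rotate to the Majorana basis
        A = np.real(-1j * Ht)          # H~ = i A, A real antisymmetric
        signs.append(np.sign(pf4(A)))
    return signs[0] * signs[1]
\end{verbatim}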
We calculate the Majorana number $M$ for the magnetic phases shown in Fig. \ref{magn SC off} at $\Delta=0.001t$ to investigate which regions of the parameter space become non-trivial for infinitesimal superconductivity, which we expect to leave the magnetic phases unchanged.
The different magnetic ground states result in different topological electronic phases. Hu et al. \cite{Hu} found a large topologically non-trivial regime by assuming harmonic spiral ground states. Yet, the collinear and complex-order phases that we find affect the electronic states differently than harmonic spirals and cause the electronic system to become topologically trivial, despite having an open gap. Thus, significant portions of the parameter space are in fact
topologically trivial in the ground state.
As the $\uparrow \uparrow \downarrow \downarrow $-phase (D) and the phase with complex orders (H) lie inside the harmonic spiral phase, this adds a topological phase transition that does not exist without a magnetic phase transition.\\
We further investigate the topological number of finite-size chains for larger superconducting order parameters $\Delta$. Majorana zero-modes are localized at the ends of the chain, where we find significant finite-size effects in the form of a magnetic configuration at the boundaries that differs from that in the center of the chain. Thus, the magnetic finite-size effects can potentially affect the formation of Majorana modes even for parameters ($J, \mu, \Delta$) that would lead to non-trivial states in infinite chains.
To calculate the topological number of finite-size chains, we use the reflection matrix $r$, following Ref. \cite{scatterD}, defined as the matrix that connects an incoming zero-energy mode from an infinite lead with the reflected outgoing mode. The lead is defined by the Hamiltonian
\begin{equation}
H_{\text{lead}}=\sum_{<i,j>}{c^{\dagger }_i\left(t{\tau }_z{\sigma }_0\right)c_j}.
\end{equation}
The topological number is then calculated as
\begin{equation}
Q=\sgn(\det(r))
\end{equation}
in a Majorana basis by using the same unitary transformation as above. This approach is equivalent to the Majorana number as defined by Kitaev for translationally invariant systems, but can be applied to systems that cannot be extended to infinity \cite{scatterD}, which is the case here because of the aperiodic relaxation of the magnetization towards the ends of the chain, see Fig. \ref{examples}.\\
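Given the zero-energy reflection block $r$ from a scattering solver (for instance one built with the kwant package; the variable \texttt{r} below is such an assumed input, with lead modes ordered consistently with $U$), the invariant reduces to a single determinant. A minimal sketch:
\begin{verbatim}
import numpy as np

def topological_number(r, U):
    # rotate the reflection matrix to the Majorana basis
    r_maj = U.conj().T @ r @ U
    det = np.linalg.det(r_maj).real
    # Q = sgn(det r); det itself is kept to monitor |det| ~ 0
    return np.sign(det), det
\end{verbatim}
Returning the determinant itself, rather than only its sign, makes near-zero values easy to monitor.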
Fig. \ref{finite SC} (c) shows the topological phases for representative values of $\Delta$ for finite chains with length $L=40$. The determinant $\det(r)$ is shown instead of $Q=\sgn\det(r)$, because $Q$ can change sign for determinants close to zero owing to the finite numerical precision of the scattering-matrix calculation. For $\Delta=0.1t$, we find topologically trivial regions that would be expected to be non-trivial for infinite chains.
The fraction of collinearly aligned neighbors appears to be strongly correlated with the topological number $Q$: the more collinear neighbors there are, the less likely the system is to be topologically non-trivial. As the collinear neighbors are usually found at the ends of the chain, where Majorana modes would localize, we interpret this as the magnetic finite-size effects disrupting the formation of Majorana modes. Note that the absence of collinear neighbors does not guarantee non-trivial phases.
\section{Classical Heisenberg fit}
Calculating the magnetic ground states directly with a tight-binding Monte-Carlo method like the ones used in Sections II and III is computationally demanding and does not grant physical understanding of the origin of the magnetic states. To improve upon both of these problems, we fit a classical Heisenberg model to the tight-binding model as follows. First, we generate a set of 3000 random configurations of the magnetization, where the magnetization $\textbf{m}_i$ on each site is chosen uniformly at random on the unit sphere. Then, we calculate the total energy of each configuration for the tight-binding Hamiltonian in Eq. \ref{H_TB} as explained in Appendix A, and construct the Heisenberg Hamiltonian $H_{\text{HB}}$ that best reproduces this data.
The Heisenberg Hamiltonian we employ is
\begin{equation}\label{H_H}
\begin{split}
H_{\text{HB}}= & \sum_{i,j}J_{i,j}\vec{s}_i\cdot \vec{s}_j+\sum_{i,j} A_{i,j} (\vec{s}_i\cdot \vec{s}_j)^2 \\
& + \sum_{i,j,k} B_{i,j,k} (\vec{s}_i\cdot(\vec{s}_j\times \vec{s}_k))\\
& + \sum_{i,j,k,l} C_{i,j,k,l} \big([\vec{s}_i\cdot \vec{s}_j][ \vec{s}_k \cdot \vec{s}_l]+[ \vec{s}_i \cdot \vec{s}_k][ \vec{s}_j\cdot \vec{s}_l] \\
& + [\vec{s}_i\cdot \vec{s}_l][ \vec{s}_j\cdot \vec{s}_k]\big) \\
& + \sum_{i,j,k,l} D_{i,j,k,l} \big([\vec{s}_i\cdot \vec{s}_j][ \vec{s}_k \cdot \vec{s}_l] + [\vec{s}_i \cdot \vec{s}_k][ \vec{s}_j\cdot \vec{s}_l] \\
&- 2 [\vec{s}_i\cdot \vec{s}_l ][\vec{s}_j\cdot \vec{s}_k]\big).
\end{split}
\end{equation}
This Heisenberg Hamiltonian includes all isotropic 2-, 3- and 4-spin interactions \cite{tensors}.
$J_{i,j}$, $A_{i,j}$, $B_{i,j,k}$, $C_{i,j,k,l}$ and $D_{i,j,k,l}$ are translationally invariant, e.g., $J_{i+a,j+a}=J_{i,j}$. The summations run over all combinations of spins up to the 5th nearest neighbor. The coefficients are determined by a least-squares fit of the Heisenberg energies to the tight-binding total energies of $N\approx 10^3-10^4$ uniformly random spin configurations (see Appendix C for details).\\
We only include rotationally invariant terms because the tight-binding Hamiltonian is rotationally invariant. For example, for the 2-spin interaction we include the magnetic exchange \cite{exchange} but not the Dzyaloshinskii–Moriya interaction (DMI) \cite{exchange}, as DMI is not rotationally invariant. We check that neglecting DMI is valid by temporarily allowing DMI in the fitting process, which consistently returns vanishing DMI values.
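A minimal sketch of this fitting procedure is given below (Python, illustrative only; \texttt{tight\_binding\_energy} is an assumed handle to the total-energy calculation of Appendix A, and only the translationally invariant 2-spin features are written out, the 3- and 4-spin features being built analogously):
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(0)

def random_spins(L):
    # L unit vectors drawn uniformly from the sphere
    v = rng.normal(size=(L, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

def features(m):
    # for each distance a = 1..5: sum_i s_i.s_{i+a} and its square
    f = []
    for a in range(1, 6):
        d = np.einsum('ij,ij->i', m[:-a], m[a:])
        f += [d.sum(), (d**2).sum()]
    return np.array(f)

L, N = 40, 3000
confs = [random_spins(L) for _ in range(N)]
X = np.array([features(m) for m in confs])
# E = np.array([tight_binding_energy(m) for m in confs])
# coeffs, *_ = np.linalg.lstsq(X, E, rcond=None)  # least squares
\end{verbatim}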
We tested the quality of the Heisenberg fits by generating the magnetic ground state using the fitted Heisenberg model.
The resulting ground states are in good agreement with the results directly obtained from the tight-binding model as shown in Fig. \ref{TB vs MC}.
\begin{figure}
\centering
\raisebox{-0.5 \height}{\subfloat{ \includegraphics[width=0.5\textwidth]{angle2_TB_vs_MC.png}}}
\caption{Average angle between neighboring spins with respect to $J$ and $\mu$ calculated with the fitted Heisenberg model (a,b) and the tight-binding model (c,d) using Monte-Carlo. Parameters: $L=40$, $\Delta=0$ (a,c) and $\Delta=1.0t$ (b,d). Fig. \ref{variance} shows a measure of the reliability of the underlying fits.}
\label{TB vs MC}
\end{figure}
\begin{figure}
\includegraphics[width=0.5\textwidth]{figure5.png}
\caption{Ratio of the 4-spin interaction $R_{\text{4spin}}$. Parameters: $L=40$, $\Delta=0$ (a), $\Delta=1.0t$ (b). Fig. \ref{variance} shows a measure of the reliability of the underlying fits.}
\label{spin4}
\end{figure}
\noindent We can understand the emergence of the complex collinear phases qualitatively with this model.
In Fig. \ref{spin4}, the ratio of the 4-spin interactions and the sum of all interactions is shown, calculated as
\begin{equation}
R_{\text{4spin}}=\frac{\sum_i (|C_i|+|D_i|)}{\sum_i(|J_i|+|A_i|+ |B_i|+|C_i|+|D_i|)},
\end{equation}
where the summation runs over all unique parameters of the Heisenberg model. This ratio quantifies the strength of the 4-spin interaction relative to the total magnetic interaction.
In the AFM phase, 2-spin interactions are dominant. In all other phases, 2-spin and 4-spin interactions are of similar strength (factor of 0.7 to 1.5). This demonstrates that higher order spin interactions are very important in this system. Among the 4-spin interactions, the anti-symmetric terms have slightly stronger contributions than the symmetric terms by a factor of 1.2 to 2 in all non-AFM phases.
Furthermore, the 3-spin interactions vanish in the whole parameter space. This is reasonable because the only rotationally invariant 3-spin interaction is the scalar triple product, which favors non-coplanar structures. This is in agreement with our findings from the previous section, which do not indicate any non-coplanar ground states.\\
Besides granting insight into physical properties of the magnetism in superconducting atomic chains, the Heisenberg model has the advantage of being computationally more efficient: the time required for a single energy update in the Monte-Carlo part does not scale with the system size $L$, since only local changes in energy need to be considered. As larger systems require linearly more steps to converge in Monte-Carlo, the total computation time scales linearly with $L$ using this approach.
For comparison, calculating the total energy directly in tight-binding requires calculating all eigenvalues of an $L\times L$ matrix, which scales as $O(L^3)$ for a divide-and-conquer algorithm. Thus, the whole Monte-Carlo calculation in tight-binding scales as $O(L^4)$.\\
More details on this Heisenberg method can be found in Appendix C.
\section{Discussion and Conclusions}
In this study, we numerically determine the magnetic ground state of finite and infinite suspended magnetic chains with proximity-induced s-wave superconductivity, finding a number of complex collinear, complex unharmonic, and harmonic spin spiral ground states.
For finite Rashba spin-orbit coupling, the magnetic ground states are superposed by a non-coplanar conical spiral with the $y$-axis as rotation axis. Here, the conical opening angle is uniformly random, reflecting the rotational symmetry of the chains at vanishing Rashba spin-orbit coupling.
Contrary to previous results, our investigations show that harmonic spirals are not the magnetic ground state for small to medium values of the superconducting order parameter in large regions of the parameter space. Only for large superconducting order parameters $\Delta>1.5t$, the assumption of harmonic spirals as ground states holds. While the harmonic spiral phases lead to a non-trivial electronic topological phase, the other magnetic ground states result in trivial electronic topological phases.
We present an approximative method to find the magnetic ground state of tight-binding models, which scales better with system size than tight-binding calculations and grants physical insights into the magnetic interactions, by setting up a classical Heisenberg model that reconstructs the system's energy from random spin configurations. We find that the 4-spin interactions play an important role for the formation of the complex collinear phases.\\
The demonstration that simple tight-binding models host complex magnetic structures motivates further research on magnetic tight-binding models and experiments on atomic magnetic chains. Parametric regions where a small change in parameters leads to large changes in the magnetic and electronic topological phases might be of special interest for additional research regarding the control of the location of topological boundary modes. Furthermore, our findings on magnetic ground states also facilitate experiments with spin polarized scanning tunneling microscopy as knowledge about the structure of expectable magnetic states helps in identifying magnetic states experimentally.
Finally, the presented classical Heisenberg approximation allows us to investigate the magnetic ground state of more complex and larger tight-binding models like 2D surfaces, magnetic chains on non-magnetic 3D-bulk systems, or models that account for large numbers of electronic orbitals. As long as the tight-binding model can be solved often enough to generate a sample for the fit ($N_{\text{sample}}\approx 10^3-10^4$) in reasonable computation times, it can be well approximated with the presented method.
\section*{Acknowledgments}
J.N.-S. and R.W. gratefully acknowledge financial support from the European Union via the ERC Advanced Grant ADMIRE (project No. 786020).
T.P. acknowledges support by the Deutsche Forschungsgemeinschaft (DFG) (project no. 420120155).
R.W. gratefully acknowledges funding by the Cluster of Excellence 'Advanced Imaging of Matter' (EXC 2056 - project ID 390715994) of the Deutsche Forschungsgemeinschaft (DFG).
\section*{Appendix A: Total energy of a superconducting system}
\renewcommand{\theequation}{A.\arabic{equation}}
\setcounter{equation}{0}
We start from a generic superconducting Hamiltonian in the BCS mean field description
\begin{equation}
H=\textbf{c}^\dagger h \textbf{c} +\textbf{c} \Delta \textbf{c} +\textbf{c}^\dagger \Delta^\dagger \textbf{c}^\dagger,
\end{equation}
where $h$ and $\Delta$ are matrices and $\textbf{c}=(c_1, c_2,...,c_N)^T$ is a vector containing all fermionic annihilation operators. This is transformed to
\begin{equation}
H=\textbf{p}^\dagger\begin{pmatrix}
h/2 & \Delta/2 \\
\Delta^\dagger/2 & -h/2
\end{pmatrix}\textbf{p} + \frac{1}{2}\Tr(h),
\end{equation}
with $\textbf{p}=(\textbf{c},\textbf{c}^\dagger)^T$ and the trace $\Tr$.\\
We then use a unitary transformation $U$ to diagonalize the Hamiltonian
\begin{equation}
\begin{split}
H=(U\textbf{p})^\dagger U
\begin{pmatrix}
h/2 & \Delta/2 \\
\Delta^\dagger/2 & -h/2
\end{pmatrix}
U^\dagger (U\textbf{p})+\frac{1}{2}\Tr(h) \\
= \textbf{d}^\dagger \begin{pmatrix}
\epsilon/2 & 0 \\
0 & -\epsilon/2
\end{pmatrix}\textbf{d} +\frac{1}{2}\Tr(h),
\end{split}
\end{equation}
where $\epsilon$ is a matrix that contains all positive eigenvalues and $\textbf{d}=(\textbf{b}, \textbf{b}^\dagger)^T$ with the Bogoliubons $\textbf{b}$. Using fermionic algebra, the Hamiltonian becomes
\begin{equation}
H=\textbf{b}^\dagger\epsilon \textbf{b} +\frac{1}{2}(\Tr(h)-\Tr(\epsilon)).
\end{equation}
In this representation all Bogoliubons have positive energies and the ground state energy is
\begin{equation}
E_{\text{total}}=\sum_i \frac{\epsilon_i-\mu}{2}.
\end{equation}
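A minimal numerical sketch of this recipe (Python, illustrative only; $h$, $\Delta$ and $\mu$ are assumed inputs, with the single-particle matrix $h$ built from the tight-binding Hamiltonian):
\begin{verbatim}
import numpy as np

def total_energy(h, Delta, mu):
    # BdG matrix of the text; its spectrum is symmetric, +-eps_i/2
    HBdG = np.block([[h / 2, Delta / 2],
                     [Delta.conj().T / 2, -h / 2]])
    w = np.linalg.eigvalsh(HBdG)
    eps = 2.0 * w[w > 0]        # positive Bogoliubov energies
    return np.sum((eps - mu) / 2.0)
\end{verbatim}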
\section*{Appendix B: Monte-Carlo Method}
\renewcommand{\theequation}{B.\arabic{equation}}
\setcounter{equation}{0}
We follow the standard Metropolis Monte-Carlo algorithm \cite{MC2}.
We start with a random spin configuration, where each spin is chosen uniformly at random from the unit sphere. Then, in each step, one randomly chosen spin is replaced by another random spin from the unit sphere. The total energy of the new spin configuration, $E_{\text{new}}$, calculated in tight-binding (Appendix A), is compared to the total energy of the old configuration, $E_{\text{old}}$. If the new configuration has a lower total energy, it is accepted. If it has a higher total energy, it is accepted with a probability given by the Boltzmann factor $\exp\left(\frac{E_{\text{old}}-E_{\text{new}}}{k_B T}\right)$. The temperature is progressively reduced towards zero. At low and zero temperature, the spins are only updated by small changes in direction, remaining on the unit sphere. We consider the simulation to be converged when increasing the number of steps does not systematically reduce the total energy of the final configuration. For this, we made spot checks with a ten times larger number of Monte-Carlo steps. We used $100000$ Monte-Carlo steps in the first part of the cooling, then $50000$ Monte-Carlo steps in the second part of the cooling, where only small changes are allowed, and finally $10000$ Monte-Carlo steps in the final part at zero temperature. One Monte-Carlo step represents testing a number of random spin changes equal to the number of spins in the system $L$.
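A minimal sketch of one such Monte-Carlo step (Python, illustrative only; \texttt{energy} is an assumed handle to the tight-binding total energy of Appendix A, and $k_B=1$):
\begin{verbatim}
import numpy as np
rng = np.random.default_rng()

def mc_step(m, energy, T, local=False, dtheta=0.1):
    # one Monte-Carlo step = L single-spin update attempts
    L, E = len(m), energy(m)
    for attempt in range(L):
        i = rng.integers(L)
        old = m[i].copy()
        if local:   # small move, used at low and zero temperature
            m[i] = m[i] + dtheta * rng.normal(size=3)
        else:       # fresh direction drawn uniformly from the sphere
            m[i] = rng.normal(size=3)
        m[i] /= np.linalg.norm(m[i])
        E_new = energy(m)
        accept = E_new < E or (T > 0 and
                               rng.random() < np.exp((E - E_new) / T))
        if accept:
            E = E_new
        else:
            m[i] = old
    return m, E
\end{verbatim}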
\section*{Appendix C: Heisenberg fits}
\renewcommand{\theequation}{C.\arabic{equation}}
\setcounter{equation}{0}
\renewcommand{\thefigure}{C.\arabic{figure}}
\setcounter{figure}{0}
\begin{figure}
\raisebox{-0.5 \height}{\subfloat{\includegraphics[width=0.5\textwidth]{tensors_variance_001-1.png}}}
\caption{Variance $V$ of the total energy difference between the Heisenberg and the tight-binding model with respect to $J$ and $\mu$ for $\Delta=0$ (a) and $\Delta=1.0t$ (b).}
\label{variance}
\end{figure}
To find the optimal coefficients $J$, $A$, $B$, $C$ and $D$ of the Heisenberg Hamiltonian (Eq. \ref{H_H} in the main text), we first create a sample of $N$ random spin configurations and calculate the respective total energies with the tight-binding model in Eq. \ref{H_TB}. We tested $N$ on the order of $10^3$ to $10^5$ and chose $N=3000$ for the calculations shown in Section IV. The required number of samples and corresponding free parameters does not scale with the system size when the Heisenberg model is chosen translationally invariant, as is the case for our model. Then we fit the Heisenberg model to this sample using the Levenberg-Marquardt algorithm, which is a least squares method.\\
To ensure that we do not overfit the results from the tight-binding calculations, we increase the sample size until the fitting parameters do not change anymore. For the calculations shown in Section IV, we set $N=3000$. Increasing the sample size to $N=50000$ results only in minimal changes to the fitting parameters. We find
\begin{equation}
\frac{\mathrm{var}(\textbf{F}(N=3000)-\textbf{F}(N=50000))}{\mathrm{var}(\textbf{F}(N=50000))} < 0.01
\end{equation}
for all tested $J$, $\mu$ and $\Delta$, where $\textbf{F}$ is a vector that contains all fitting parameters.\\
To judge the quality of the fits, we use a normalized variance calculated as
\begin{equation}
V=\frac{\mathrm{var}(\textbf{E}_{\text{HB}}-\textbf{E}_{\text{tb}})}{\mathrm{var}(\textbf{E}_{\text{tb}})},
\end{equation}
where $\textbf{E}_{\text{HB}}$ and $\textbf{E}_{\text{tb}}$ contain the energies calculated via the Heisenberg Hamiltonian and the tight-binding model for the same spin configurations, respectively. The variance for $\Delta=0$ and $\Delta=1.0t$ is shown in Fig. \ref{variance}. For $J\approx -\mu$ the fitting process fails at $\Delta=0$, because for many spin configurations no electronic states are occupied. Within the potentially topologically non-trivial region (see Fig. \ref{finite SC}), the fitting quality is the lowest. The magnetism in that region appears to be highly complex and might require even higher order spin interactions to be fully captured by a classical Heisenberg model. Yet, regarding the general structure of the magnetic ground states, we find good agreement with the tight-binding calculations in this parameter region as well.\\
\section*{Appendix D: Monte-Carlo method for infinite chains}
\renewcommand{\theequation}{D.\arabic{equation}}
\setcounter{equation}{0}
In this section, we explain the modified Monte-Carlo method that we use to identify in which region non-harmonic spin structures with sequences of multiple different relative angles are energetically more favorable than harmonic spirals.\\
We use a spin basis rotation $R_{\theta_j}$
\begin{equation}
R_{\theta_j}=\begin{pmatrix}
\cos (\theta_j/2) & \sin (\theta_j/2)\\
-\sin (\theta_j/2) & \cos (\theta_j/2)
\end{pmatrix},
\end{equation}
corresponding to a rotation of the magnetization in the xy-plane.
This removes the spin directions from the onsite potential and adds a change in the spin direction to the hopping term between the $j$-th and $(j+1)$-th sites.
With this the tight-binding Hamiltonian becomes
\begin{equation}
\centering
\begin{split}
H= & \sum_j{c^{\dagger }_j\left(-J{\tau }_0\sigma_z+\left(2t-\mu \right){\tau }_z{\sigma }_0+\Delta\tau_x\sigma_0\right)c_j}\\
&+\sum_{<i,j>}{c^{\dagger }_i\left(t{\tau }_z\otimes R_{\theta_i}\right)c_j},
\end{split}
\end{equation}
where $\theta_j$ is the relative angle between the $j$-th and the $(j+1)$-th site. Writing the Hamiltonian in this way allows us to extend to infinity any spin structure that can be described by a finite sequence of relative angles \cite{Martin2012}.\\
We then employ a Monte-Carlo method that varies these relative angles for unit cells of size $L=1,2,3...8$ for an infinite chain in k-space, sampling 10000 k-points; a sketch of this procedure is given below. The Monte-Carlo updates are the same as described in Appendix B. We then compare the minimal total energy found for each size of the unit cell and determine the array of $\theta_i$ that achieves the lowest total energy. When two unit cells of different lengths coincide in total energy (to within the order of $0.001 J$), the smaller unit cell is preferred.\\
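A minimal sketch of the building blocks (Python, illustrative only; \texttt{mc\_minimize} and \texttt{total\_energy\_k} are assumed handles to the Appendix B updates acting on the relative angles and to the k-summed total energy, respectively):
\begin{verbatim}
import numpy as np

tau_z = np.diag([1.0, -1.0])

def R(theta):
    # spin rotation R_theta in the xy-plane (2x2 block)
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, s], [-s, c]])

def hopping_block(t, theta):
    # t * tau_z kron R(theta): the rotated hopping term
    return t * np.kron(tau_z, R(theta))

# scan unit cells and keep the smallest one on (near-)ties:
# energies = {Lc: total_energy_k(mc_minimize(Lc)) for Lc in range(1, 9)}
# Lc_opt = min(energies, key=lambda Lc: (round(energies[Lc], 3), Lc))
\end{verbatim}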
If the ground state is a harmonic spiral, FM, or AFM, one finds unit cells of size $L=1$ with this method but $L\geq 2$ for more complex structures. We also calculate how much $\theta_i$ changes along the unit cell, for unit cells with $L\geq 2$, and quantify this by the average change of the relative angle between neighboring pairs of spins:
\begin{equation}
\theta_{\text{change}}=\sum_{j=1}^{L-1} \frac{|\theta_j-\theta_{j+1}|}{L-1}.
\end{equation}
In the case of a harmonic spiral, FM or AFM one finds $\theta_{\text{change}}=0$.
The region marked as (H) in Fig. \ref{magn SC off} reflects the parameters for which the derived unit cell has $L\geq 2$, $\theta_{\text{change}}>0.05$ and not all $|\theta_j|=\pi$, i.e., area H is a phase that can neither be described by a harmonic spiral nor by a collinear structure.
\bibliographystyle{apsrev4-2}
invariance}
\label{sec:cft}
The SU($N$) Wess--Zumino--Witten (WZW) models have been found to capture
the low energy behavior of a family of critical quantum spin
chains~\cite{affleck-87prb5291}. WZW models are conformal field
theories, meaning that the Lagrangians are invariant under conformal
mappings. These are all combinations of translation, rotation, and
dilatation in two--dimensional space-time. For field theories
with conformal invariance, it suffices to specify the scaling of
the fields or rather the scaling of their correlation functions to
characterize the theory
completely~\cite{DiFrancescoMathieuSenechal97}. As such, once a CFT
is identified, there is no immediate need to work with the
associated Lagrangian.
Our emphasis in this article will be on the relation between the
universal parameters of the CFT and numerically accessible measures,
which we extract from the DMRG studies of the corresponding spin
chain models. As a general structure, a WZW model consists of a
non-linear sigma model term and $k$ times a topological Wess-Zumino
term, where $k$ is a non-zero positive
integer~\cite{DiFrancescoMathieuSenechal97}. The SU($N$) WZW model of
level $k$ (denoted SU($N)_k$ WZW in the following) can be
characterized by the central charge and the scaling dimension of the
primary field, both of which we will evaluate numerically in
Section~\ref{sec:num} below. In the following formulas, subleading
finite size contributions are neglected if they appear.
\subsection{Central charge}
The central charge $c$ is defined in the framework of the Virasoro
algebra of the CFT~\cite{DiFrancescoMathieuSenechal97}.
Alternatively, $c$ is also named conformal anomaly number. It appears
in the correlation function of the energy momentum tensor $T(z)$ of the
theory, where $z$ denotes a complex space-time variable. This
correlation has a singularity as $z\rightarrow 0$, with a prefactor
proportional to $c$, $\langle T(z) T(0) \rangle \sim
\frac{c/2}{z^4}$. For the SU($N$$)_k$ WZW, $c$ is given by
\begin{equation}
c=\frac{k(N^2-1)}{k+N}.
\label{ccharge}
\end{equation}
The feature of the central charge most relevant for our purposes is that $c$ appears as a universal scaling factor in the microscopically accessible entanglement entropy~\cite{korepin92prl096402}.
Let $i$ denote a site, $L$ the total length of the spin chain,
$i=1,\dots, L$, and $\rho_\alpha$ the reduced density matrix where all
the degrees of freedom on sites $i>\alpha$ are traced out, { i.e.},\
$\rho_\alpha=\text{Tr}_{i>\alpha} \rho$. For this case, the
entanglement entropy is given by
\begin{equation}
S_{\alpha,L}=-\text{Tr}\big[\rho_\alpha \log \rho_\alpha\big].
\label{ee}
\end{equation}
For periodic boundary conditions and central charge $c$, the entropy
then takes the form~\cite{calabrese-04jsm06002}
\begin{equation}
S_{\alpha,L}=\frac{c}{3} \log \left[\left(\frac{L}{\pi }\right)
\sin{\left(\frac{\pi \alpha }{L}\right)}\right]
+ c_1,
\label{cc}
\end{equation}
where $c_1$ is a non-universal constant and the lattice spacing is set
to unity, so that $L$ is simply the total number of sites. The
entanglement entropy then obeys the symmetry
relation $S_{\alpha,L}=S_{L-\alpha,L}$ and has its maximum at
$\alpha=L/2$. By virtue of~\eqref{cc}, $c$ can be extracted directly
from the entanglement entropy calculated via DMRG.
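In practice, this extraction amounts to a two-parameter fit of~\eqref{cc} to the DMRG block entropies. A minimal sketch (Python, illustrative only; the array \texttt{S} of entropies $S_{\alpha,L}$, $\alpha=1,\dots,L-1$, is assumed to come from the DMRG run):
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

L = 60
alpha = np.arange(1, L)

def cc_formula(a, c, c1):
    # Calabrese-Cardy form for periodic boundary conditions
    return (c / 3) * np.log((L / np.pi) * np.sin(np.pi * a / L)) + c1

# popt, pcov = curve_fit(cc_formula, alpha, S)
# c_fit, c1_fit = popt   # pcov yields the quoted fitting error
\end{verbatim}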
\subsection{Scaling dimension}
The scaling dimension $x$ is a property of the fields $\phi$
of the CFT~\cite{cardy84jpa385}. Conformal invariance implies that the
two-point correlation function of the field must satisfy
\begin{equation}
\langle \phi(z_1) \phi(z_2) \rangle = \vert f'(z_1)\vert^x
\vert f'(z_2)\vert^x \langle \phi(f(z_1)) \phi(f(z_2)) \rangle ,
\end{equation}
where we constrain the conformal mapping $f(z)$ to a dilatation, and
$f'(z)$ is its derivative at point $z$. For the finite systems we
study numerically, we can use that
the low energy spectrum, and hence the energies of the finite system,
can be classified by the associated CFTs.
The lowest excited states (labeled by $p$) above the ground state
($p=0$) belong spectrally to a conformal tower~\cite{affleck-89jpa511},
with energies which obey the relation
\begin{equation}
E_{p,L}-E_{0,L}=\frac{2\pi v}{L} x_p,
\label{x}
\end{equation}
where $x_p$ denotes the scaling dimension of the field associated with
the $p$th state, and $v$ is the Fermi velocity. The dependence on $v$
reflects that in the low energy limit, the only relevant momentum
scale of the spin chain is provided by the linearized dispersion
around the Fermi points. This allows us to extract the scaling dimension
times the Fermi velocity, $x_pv$, as the energies in the l.h.s.\
of~\eqref{x} are numerically accessible through DMRG. In the
following, we shall focus on the first excited state $x_1 \equiv x$,
{ i.e.},\ the scaling dimension of the primary field.
\subsection{Fermi velocity parameter}
In view of~\eqref{cc} and~\eqref{x}, it is clear that we need one
further relation, to extract the Fermi velocity $v$ from our numerical
studies. The required relation is
\begin{equation}
E_{0,L}=E_{0,\infty}-\frac{\pi c v}{6L},
\label{ediff}
\end{equation}
where $E_{0,L}$ and $E_{0,\infty}$ denote the ground state energies of
the finite and the infinite chain, respectively. This relation can be
easily understood from the field theoretical point of
view~\cite{affleck-89jpa511}: For a finite length and temperature
$T=0$, $L$ sets the inverse energy scale of the system. This scale
can be rephrased in terms of a field theory at finite temperature and
no length scale, { i.e.},\ an infinite chain at temperature $T=v/L$.
Writing~\eqref{ediff} in terms of the free energy density for this
finite temperature field theory, we obtain the correct specific heat
linear in $T$, as we expect for a gapless spectrum. As we calculate
$E_{0,L}$ directly and extract $e_{0,\infty}=E_{0,L}/L$ for
$L\to\infty$ by finite size scaling, we can obtain $v$ from
\eqref{ediff} once we have obtained $c$ from~\eqref{cc}.
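Both steps reduce to linear fits of the DMRG energies. A minimal sketch (Python, illustrative only; the arrays \texttt{E0}, \texttt{E1} of ground- and first-excited-state energies for the chain lengths \texttt{Ls} are assumed inputs, and $c$ is taken from the entanglement-entropy fit):
\begin{verbatim}
import numpy as np

def fit_v_and_x(Ls, E0, E1, c):
    # E0/L = e0 - (pi c v / 6) / L^2  -> slope gives v
    slope0, e0 = np.polyfit(1.0 / Ls**2, E0 / Ls, 1)
    v = -6.0 * slope0 / (np.pi * c)
    # E1 - E0 = (2 pi v / L) x        -> slope gives x
    slope1, offset = np.polyfit(1.0 / Ls, E1 - E0, 1)
    x = slope1 / (2.0 * np.pi * v)
    return v, x
\end{verbatim}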
\section{The DMRG method}
\label{sec:dmrg}
\begin{vchfigure}[h]
\centering
\includegraphics[scale=0.9]{conv-plot-new.eps}
\vchcaption{Periodic Boundary Conditions: logarithmic plot for the
energy difference of the finite size ground state energy and the
thermodynamic limit $L\rightarrow \infty$ versus the inverse
number of states $m$ kept in the DMRG sweeps. Shown are the lines
for the nearest neighbor Heisenberg model in the fundamental
representations of SU(2), SU(3), and SU(4), as well as for the
$S=1$ TB model for comparison. The length of the chain is $L=48$.}
\label{fig:convergence-PBC}
\end{vchfigure}
In the last decade, the DMRG was successfully applied to numerous
SU(2) spin models. Very recently, the DMRG was further used to
investigate the
SU(3) representation $\bs{3}$ Heisenberg model~\cite{corboz-07prb220404}.
Here we generalize
to the six--dimensional representation $(2,0)\equiv\bs{6}$ of SU(3),
which is formed by symmetric combination of two fundamental
representations of SU(3). We also present DMRG studies of a spin
chain with spins transforming under the fundamental representation of
SU(4). Our work hence requires the explicit implementation of the
su(3) and su(4) spin algebras with its $N^2-1$ generators, which are
explicitly given in Apps.~\ref{app:SU3R6matrices}
and~\ref{app:SU4R4_matrices}. Note that this implementation is more
involved than the implementation of SU($N$) Hubbard models, where the
explicit spin algebra does not enter, and the SU($N$) symmetry enters
only through the number of different fermionic species.
\begin{vchfigure}[t]
\centering
\includegraphics[scale=0.9]{conv-OBC.eps}
\vchcaption{Hard Wall Boundary Conditions: logarithmic plot for the
energy difference of the finite size ground state energy and the
thermodynamic limit $L\rightarrow \infty$ versus inverse number of
states $m$ kept in the DMRG sweeps. Shown are the lines for the
fundamental representations of SU(2), SU(3), and SU(4). The
length of the chain is again $L=48$. As compared to the case of
PBCs shown in Fig.~\ref{fig:convergence-PBC}, the system converges
much faster for a comparable number of states kept in the DMRG
sweeps.}
\label{fig:convergence-OBC}
\end{vchfigure}
An important problem with numerical studies of SU($N$) spin chains in
general is the increasing dimensionality of the subspace of one site
of the chain, due to either larger values for $N$ or higher spin
representations (like rep $\bf 6$ for SU(3)). For DMRG, of course,
this dimensionality limits the system sizes we can access. In
Fig.~\ref{fig:convergence-PBC}, we have plotted the convergence of the
DMRG iteration as the number $m$ of states kept in the effective
density matrix is increased. We observe that for comparable $m$, the
convergence deteriorates rapidly as we go to higher SU($N$), owing
to the exponential increase of the Hilbert space as the number of
states per site grows. However, as confirmed by our numerical results
reported below, our DMRG code is capable of at least handling critical
spin chains up to SU(4) with reasonable convergence and accuracy.
While the plots in Fig.~\ref{fig:convergence-PBC} are obtained using
periodic boundary conditions (PBCs), the convergence behavior for hard
wall boundary conditions (HWBCs) is shown in
Fig.~\ref{fig:convergence-OBC}. Note that HWBCs rather than PBCs are
the natural choice for DMRG, as the number of DMRG states $m$ we need
to keep to achieve a similar level of precision for PBCs is, according
to our calculations, roughly the square of the number of states we
need to keep for HWBCs. Nonetheless, the results we present below are
obtained with PBCs, as PBCs allow a more convenient treatment of the
finite size corrections for the quantities we extract. In particular,
\eqref{ediff} is valid only for PBCs. (There is a relation
corresponding to~\eqref{cc} for HWBCs~\cite{calabrese-04jsm06002}.)
We have used 10 DMRG sweeps for all the calculations we present.
\begin{vchfigure}[h]
\centering
\includegraphics[scale=1.0]{ee-comp.eps}
\vchcaption{The entanglement entropy $S_{\alpha,60}$ as another example
for the convergence behavior of the SU(3) Heisenberg model. (a)
shows the EE $S_{\alpha,60}$. The different curves correspond to
different number of kept DMRG states (200, 300, 500, 900, 1500,
3500 states). The system with 3500 kept DMRG states is fully
converged and provides a benchmark. The truncated Hilbert space for this converged job contains about 8 million states.}
\label{fig:ee-comparison-su3}
\end{vchfigure}
As a demonstration of convergence, Fig.~\ref{fig:ee-comparison-su3}
shows the entanglement entropy for different numbers of kept DMRG
states $m$ for the SU(3) representation ${\bf 3}$ Heisenberg model. The result
displays the $S_{\alpha,L}=S_{L-\alpha,L}$ symmetry mentioned above
and fits the prediction \eqref{cc} to astonishing accuracy. We find,
however, that this accuracy requires a number $m$ of states kept which
is large in comparison with standard applications of the DMRG method,
and which demands large computational resources. This is partially
due to the criticality of the models we study. With a spectrum that
is gapless in the thermodynamic limit, a large subspace of the entire
Hilbert space contributes to the long range correlations, which is
reflected in a large number of relevant weights in the density matrix.
Nonetheless, with a sufficiently large number of kept states, very
accurate results can be extracted from the DMRG computations. Even
for rather small systems consisting of $\mathcal{O}$(100) sites, we
obtain highly accurate estimates for the central charges of the
critical models described in the following section. As the
entanglement entropy is not directly accessible by other numerical
methods,
the DMRG method is preeminent to our purposes.
\section{Integrable models of critical SU($\bs N$) chains}
\label{sec:mod}
The SU($N$) spin chain models we investigate numerically in this work
are described by a family of Hamiltonians $\mathcal{H}^{[N,m]}$, which
are amenable to the transfer matrix
method~\cite{takhtajan82pl479,babudjan82pl479,babudjan83npb317,
andrei-84pl370,johannesson85npb235,sutherland75prb3795}.
Note that some of the models $\mathcal{H}^{[N,m]}$ were investigated
by numerical and analytical solutions of the Bethe ansatz
equations~\cite{alcaraz-88jpa4397,alcaraz-89jpal865,martins90prl2091}.
The representations $[N,m]$ of SU($N$) are given by the totally
symmetric combination of $m$ fundamental representations of SU($N$).
The corresponding Young tableau is
\begin{displaymath}
\setlength{\unitlength}{8pt}
\begin{picture}(5,5)(0,0.15)
\put(0,4){\line(1,0){5}}
\put(0,3){\line(1,0){5}}
\put(0,3){\line(0,1){1}}
\put(1,3){\line(0,1){1}}
\put(2,3){\line(0,1){1}}
\multiput(2.7,3.48)(0.3,0){3}{\circle*{0.1}}
\put(4,3){\line(0,1){1}}
\put(5,3){\line(0,1){1}}
\put(2.5,1.5){\makebox(0,0)%
{$\underbrace{\quad\text{~~~}\qquad}_{\text{\small $m$~boxes}}$}}
\put(5.8,3.06){\circle*{0.1}}
\end{picture}
\end{displaymath}
For SU(2), all the representations are of this form, with $m=2S$. For
SU(3), the symmetric representations include the fundamental
representation $\bf 3$ and the representation $\bf 6$, for $m=1$ and
$2$, respectively. The dimensionality $n$ of the totally symmetric
representation $[N,m]$ is in general given by
\begin{equation}
n\,\equiv\, {\rm dim}[N,m]\,=\,{N-1+m\choose m}.
\end{equation}
The Hamiltonians $\mathcal{H}^{[N,m]}$ contain two-site interactions
only and are invariant under global SU($N$) spin rotations, { i.e.},\
they consist of Heisenberg interaction terms raised to arbitrary
powers~\cite{takhtajan82pl479,babudjan82pl479,babudjan83npb317,
andrei-84pl370,johannesson85npb235,sutherland75prb3795}.
Note that all the models $\mathcal{H}^{[N,m]}$ are integrable, due to
an infinite number of operators which commute with the Hamiltonians.
In this work, we consider the models with $[N,m]=[2,1]$, $[2,2]$,
$[2,3]$, $[3,1]$, $[3,2]$, and $[4,1]$.
The Hamiltonians for $[N,1]$, { i.e.},\ the fundamental representations, are
just the nearest-neighbor Heisenberg models,
\begin{equation}
\label{nnHM-SU(N)}
\mathcal{H}^{[N,1]}=\sum_{i=1}^{\mathcal{N}} \bs{S}_i\bs{S}_{i+1}.
\end{equation}
In general, $\bs{S}_i$ is an SU($N$) representation $[N,m]$ spin
operator at site $i$. Since the dimension of the Lie algebra su($N$)
is $N^2-1$, the spin operator $\bs{S}_i$ consists of the $N^2-1$
generators,
\begin{equation}
\label{sun-spin-operators}
S_i^{\alpha} = \frac{1}{2} \sum_{\sigma,\sigma'=f_1,\ldots,f_n}
c_{i\sigma}^{\dagger}V^{\alpha}_{\sigma\sigma'}c_{i\sigma'}^{\phantom{\dagger}},
\end{equation}
where $\alpha=1,\ldots,N^2-1$, $V^{\alpha}_{\sigma\sigma'}$ are the
SU($N$) Gell-Mann matrices, and $f_1,\ldots,f_n$ denote the $n$
different spin states~\cite{Cornwell84vol2}. Trivially,
$\bs{S}_i\bs{S}_{i+1}\equiv\sum_{\alpha=1}^{N^2-1} S_i^{\alpha}
S_{i+1}^{\alpha}$.
For the fundamental representation $[2,1]$ of SU(2), the $V$'s are
just the Pauli matrices and the two spin states can be classified by
the eigenstates $f_1=\uparrow$, $f_2=\downarrow$ of $S^z$. For the
fundamental representation $[3,1]$ of SU(3), the $V$'s are given by
the eight Gell-Mann matrices. The matrices $V$ for representations
$[3,2]$ ({ i.e.},\ SU(3) representation $\bs{6}$) and $[4,1]$ ({ i.e.},\ SU(4)
representation $\bs{4}$) are written out in
Apps.~\ref{app:SU3R6matrices} and~\ref{app:SU4R4_matrices}. In our
numerical implementations, we have scaled the Hamiltonians such that
the pre-factor of the bilinear Heisenberg term $\bs{S}_i\bs{S}_{i+1}$
is $\pm 1$ and we have dropped the constant term.
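For concreteness, the two-site Heisenberg coupling entering these Hamiltonians can be assembled directly from the generator matrices. A minimal sketch (Python, illustrative only), written out for the SU(2) fundamental representation with the Pauli matrices as the $V^\alpha$; for SU(3) and SU(4) one substitutes the (generalized) Gell-Mann matrices of Apps.~\ref{app:SU3R6matrices} and~\ref{app:SU4R4_matrices}:
\begin{verbatim}
import numpy as np

V = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]]),
     np.array([[1, 0], [0, -1]], dtype=complex)]

def heisenberg_bond(V):
    # S_i . S_{i+1} = sum_alpha (V^a / 2) kron (V^a / 2)
    n = V[0].shape[0]
    H = np.zeros((n * n, n * n), dtype=complex)
    for Va in V:
        H += np.kron(Va, Va) / 4
    return H
\end{verbatim}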
As we confirm numerically below, the low-energy behavior of the
models $\mathcal{H}^{[N,1]}$ is described by the SU($N$)$_1$ WZW
model, with topological coupling constant $k=1$.
With~\eqref{ccharge}, we expect to find $c=N-1$ for the central
charge. The integrable spin $S=1$ model we investigate, the
Takhtajan--Babudjan model~\cite{takhtajan82pl479,
babudjan82pl479,babudjan83npb317}, is given by
\begin{equation}
\label{TB-model}
\mathcal{H}^{[2,2]}=\sum_{i=1}^{\mathcal{N}} \left[ \bs{S}_i\bs{S}_{i+1}
-\left( \bs{S}_i\bs{S}_{i+1} \right)^2 \right].
\end{equation}
The low energy physics is described by the SU(2)$_2$ WZW model.
With~\eqref{ccharge}, we expect to find $c=\frac{3}{2}$. Note that
the criticality of this integer spin model is not inconsistent with
the Haldane gap, as Haldane's classification applies to {\it generic}
integer spin chains, while the Takhtajan--Babudjan
model~\eqref{TB-model} is tuned to criticality.
The next higher dimensional integrable SU(2) model from the
Takhtajan--Babudjan series is given by the spin 3/2 Hamiltonian
\begin{equation}
\label{S=3/2-model}
\mathcal{H}^{[2,3]}=\sum_{i=1}^{\mathcal{N}} \left[ -\bs{S}_i\bs{S}_{i+1}
+\frac{8}{27}\left( \bs{S}_i\bs{S}_{i+1} \right)^2
+\frac{16}{27}\left( \bs{S}_i\bs{S}_{i+1} \right)^3 \right].
\end{equation}
The corresponding CFT is the SU(2)$_3$ WZW model, which implies that the
central charge is $c=\frac{9}{5}$.
Finally, the Andrei--Johannesson~\cite{andrei-84pl370,johannesson85npb235}
model consists of SU(3) spins transforming under the six--dimensional
representation $\bs{6}$, and is given by
\begin{equation}
\label{su3rep6-model}
\mathcal{H}^{[3,2]}=\sum_{i=1}^{\mathcal{N}} \left[ \bs{S}_i\bs{S}_{i+1}
-\frac{3}{5}\left( \bs{S}_i\bs{S}_{i+1} \right)^2 \right].
\end{equation}
The corresponding CFT is the SU(3)$_2$ WZW model, which implies
$c=\frac{16}{5}$. We now turn to our numerical results for these models.
\section{Numerical results}
\label{sec:num}
\subsection{Central charge}
\begin{vchfigure}[h!]
\centering
\includegraphics[scale=0.85]{ee-s2-100-4000.eps}
\vchcaption{Entanglement entropy (block entropy) of the integrable
SU(2) $S=1$ Hamiltonian with PBCs. The solid line corresponds to
the formula \eqref{cc} with the fit parameter $c$, the central
charge of the corresponding CFT. The uniform bond entropy, { i.e.},\ the
nearest neighbor entanglement entropy, indicates a homogeneous and
translationally invariant ground state.}
\label{fig:ee-s2-100-4000}
\end{vchfigure}
\begin{vchfigure}[h!]
\centering
\includegraphics[scale=0.9]{ee-su4-60-8000.eps}
\vchcaption{Entanglement entropy (block entropy) of the SU(4)
nearest neighbor Heisenberg model with PBCs. The solid line
constitutes a fit of the data using~\eqref{cc}, yielding a central
charge close to the predicted value $c=3$ (for details see
Tab.\,\ref{tab:num-results}).}
\label{fig:ee-su4-60-8000}
\end{vchfigure}
As noted above, the entanglement entropy is provided quite naturally
in DMRG, since in each sweep we explicitly compute reduced density
matrices, from which we easily obtain the entanglement entropy
via~\eqref{ee}. From the plots of the entanglement entropy vs.\ the
site index, we obtain a numerical value for the central charge
via~\eqref{cc}. In Fig.~\ref{fig:ee-s2-100-4000}, we show the
entanglement entropy (also called block entropy) for the integrable
$S=1$ Hamiltonian, the Takhtajan--Babudjan model, as an illustrative
example. The fit yields a central charge of $c=1.50717 \pm 0.0003$,
where the error corresponds to the fitting error shown in
Tab.~\ref{tab:num-results}. The result is in excellent agreement with
the value $c=\frac{3}{2}$ predicted by CFT. We have also plotted the
bond entropy, which is the entanglement entropy of two neighboring
sites $\alpha$ and $\alpha+1$ with the remainder of the system. In
general, a bond entropy which is not site independent indicates a
spontaneous breakdown of translational invariance (like { e.g.}\
dimerization) in the ground state.
Despite being a quantity of its own interest to extract information
from finite systems~\cite{molina-07prb235104}, we attach no
significance to the bond entropy beyond the confirmation of
translational invariance.
Other quantities, { e.g.}\ the ground state
stiffness~\cite{schmitteckert-04prb195115}, could in principle be
studied within DMRG to supplement the ground state studies, but are
not our point of consideration in this work.
\begin{vchtable}[t]
\vchcaption{Theoretical predictions and numerical results for the
central charge for the Hamiltonians we considered. The error
quoted are due to inaccuracies when fitting the data for
entanglement entropy obtained numerically to~\eqref{cc}. An
additional systematic error, which we have not estimated
separately, arises from the states discarded within the DMRG.}
\label{tab:num-results}\renewcommand{\arraystretch}{1.5}
\begin{tabular}{ccccc} \hline
Hamiltonian $[N,m]$ & $N$ & $k$ & $c$ & $c_{\rm DMRG}$ \\%[3pt]
\hline
$[2,1]$ & 2 & 1 & 1 & $1.0001 \pm 0.0012$ \\%[3pt]
$[2,2]$ & 2 & 2 & $\frac{3}{2}$ & $1.5072 \pm 0.0003$ \\%[3pt]
$[2,3]$ & 2 & 3 & $\frac{9}{5}$ & $1.8002 \pm 0.0211$ \\%[3pt]
$[3,1]$ & 3 & 1 & 2 & $2.0001 \pm 0.0102$ \\%[3pt]
$[3,2]$ & 3 & 2 & $\frac{16}{5}$ & $3.2214 \pm 0.0437$ \\%[3pt]
$[4,1]$ & 4 & 1 & 3 & $2.9527 \pm 0.0237$ \\%[3pt]
\hline
\end{tabular}
\end{vchtable}
The discrepancy between the data and the fit to~\eqref{cc} is most
visible at the maximum of the entanglement entropy, { i.e.},\ for half of
the sites traced out, as this discrepancy is due to entropy we have
discarded by discarding states. While there is no difference visible
in Fig.~\ref{fig:ee-s2-100-4000}, a discrepancy can be discerned in
Fig.~\ref{fig:ee-su4-60-8000}, where we have plotted the entanglement
entropy of the SU(4) Heisenberg model in the fundamental
representation. This small discrepancy is present even though we keep
8000 DMRG states for the sweep iterations which results in a truncated
Hilbert space containing 22 million states. Note that the DMRG
calculation for the ground state of this model with 8000 kept states
has taken 68 hours of computer time (4 CPU cores) with ca.\,20 gigabyte
of memory, while the same calculation with 5500 kept states has taken
28 hours (4 CPU cores) with ca.\,9 gigabyte of memory.
Here we only exploited the abelian quantum numbers of SU($N$). By
using the square of the total spin $\bs{S}^2$ as additional quantum
number and representing the states according to the Wigner--Eckart
theorem, one should be able to reduce the required Hilbert space
significantly. However, already for SU(2), the Clebsch--Gordan
coefficients make the implementation cumbersome~\cite{mcculloch}, and
for SU(3) and SU(4) the corresponding Clebsch--Gordan coefficients are
more complicated. Therefore we decided not to implement them. Note
that the use of non--abelian quantum numbers does not automatically
lead to better performance, { e.g.}\ in SU(2) it helps for small $S$
sectors only, since otherwise the sum over reduced states gets too
involved.
The value we obtain for the central charge, $c=2.95268 \pm 0.02368$,
however, is reasonably close to the predicted value $c=3$. In
addition to the fitting error we quote in Tab.\,\ref{tab:num-results},
there is a systematic error due to the entropy we have discarded by
discarding states in DMRG. As all the contributions to the
entanglement entropy are positive definite, this systematic error
leads to a slight underestimate for the numerically obtained central
charge.
Our results for the models introduced in Section~\ref{sec:mod} are
presented in Tab.~\ref{tab:num-results}. In general, we find excellent
agreement between analytical values and numerical data.
\subsection{Scaling dimension of critical models}
\begin{vchfigure}[b]
\includegraphics[scale=0.8]{sd-su2.eps}
\vchcaption{Scaling dimension for the spin 1/2 Heisenberg model.
According to Eq. \eqref{ediff} the Fermi velocity is fitted in the
main picture ($v \sim \frac{\pi}{2}$), which is used as a parameter
for the fit of the scaling dimension done in the inset. In the
inset, $E_{1,L}-E_{0,L}$ is plotted vs. $1/L$ and according to Eq.
\eqref{x} we have fitted the scaling dimension $x=0.443\pm 0.0020$.
The data points correspond to chains (PBCs) from 20 to 320 sites.}
\label{fig:spin1/2-sd}
\end{vchfigure}
\begin{vchtable}[t]
\vchcaption{Theoretical predictions and numerical results for the
scaling dimension of the primary fields for SU(2) and SU(3)
Hamiltonians. The deviations from the analytical values are higher
than the deviations for central charges discussed above. The errors
quoted again refer to inaccuracies of the fits only.}
\label{tab:num-results-x}\renewcommand{\arraystretch}{1.5}
\begin{tabular}{ccccc} \hline
Hamiltonian $[N,m]$ & $N$ & $k$ & $x$ & $x_{\rm DMRG}$ \\%[3pt]
\hline
$[2,1]$ & 2 & 1 & $\frac{1}{2}$ & $0.443 \pm 0.0020$ \\%[3pt]
$[2,2]$ & 2 & 2 & $\frac{3}{8}$ & $0.338 \pm 0.0006$ \\%[3pt]
$[3,1]$ & 3 & 1 & $\frac{2}{3}$ & $0.638 \pm 0.0010$ \\%[3pt]
\hline
\end{tabular}
\end{vchtable}
From the spectrum we numerically calculate via DMRG, it takes two
steps to obtain an estimate for the scaling dimension of the primary
field. First, with the central charge obtained through the
entanglement entropy, we use \eqref{ediff} to arrive at an estimate
for the Fermi velocity $v$. Second, we use~\eqref{x} to obtain an
approximate value for the scaling dimension $x$. The values we obtain
for some of the models we study are listed in
Tab.~\ref{tab:num-results-x}.
Both steps require only linear fits, which are easily accomplished.
For the fundamental representations of SU(2) and SU(3), these fits are
shown in Figs.~\ref{fig:spin1/2-sd} and \ref{fig:su3-sd}. In the
process, however, we omit significant finite size corrections. To
begin with, in~\eqref{ediff}, marginal sub-leading contributions of
the order of $\mathcal{O}(\frac{1}{L (\log L)^3})$ are
omitted~\cite{cardy86jpa1093}. As we consider spin chains with on the
order of 100 sites, these corrections are of the order of $1\%$ and
thus for our purposes negligible. In~\eqref{x}, however, the error
due to omitting marginal contributions is of order
$\mathcal{O}(\frac{1}{L \log L})$, and hence significantly larger.
This error is essentially responsible for the discrepancy between the
analytical results and the numerical findings in
Tab.~\ref{tab:num-results-x}. For these reasons, our numerical
results for the scaling dimension are not nearly as accurate as for
the central charges, where finite size corrections did not enter. By
use of non-Abelian bosonization~\cite{affleck-89jpa511}, the
logarithmic correction can be calculated in principle. For the case
$S=1$ of SU(2), this has been carried out by Hijii and
Nomura~\cite{hijii-02prb104413}. For our purposes, however, such an
analysis is not required.
\begin{vchfigure}[b]
\includegraphics[scale=0.8]{sd-su3.eps}
\vchcaption{Scaling dimension for the SU(3) representation $\bs{3}$
Heisenberg model. Fewer data points have been computed than for the
$S=1/2$ SU(2) case in Fig.~\ref{fig:spin1/2-sd}, yet the numerical
fit precision is similar. The Fermi velocity is fitted to $v \sim
\frac{\pi}{3}$. In the inset the energy difference $E_{1,L}-E_{0,L}$
is plotted vs. $1/L$ and according to Eq. \eqref{x} we have fitted
the scaling dimension $x=0.638\pm 0.0010$. The data points
correspond to chains (PBCs) with 30, 60, 90, and 120 sites.}
\label{fig:su3-sd}
\end{vchfigure}
The fits to leading order for the cases
$S=1/2$ and $S=1$ of SU(2) as well as the fundamental
representation~$\bs{3}$ of SU(3) are sufficiently conclusive to
identify the scaling dimension of the primary fields of the
corresponding WZW model. For a more refined spectral analysis or
calculation of the various scaling dimensions of the descendants of
the primary fields, however, it would be indispensable to include the
marginal contributions into the fits as well.
\section{Conclusion}
\label{sec:con}
To summarize, we have investigated critical spin models of higher
representations of SU(2), SU(3) and SU(4) by DMRG, extracting the
central charge as well as the scaling dimension of the primary field
from our numerical results. These results agree accurately with the
predictions of the associated conformal field theories, the
SU($N$)$_k$ WZW models. We have thus shown that the study of block
entropies within DMRG is a suitable numerical tool to investigate
SU($N$) spin chains including higher representations. It thus
represents a fruitful method to complement analytical approaches to
these models and, prospectively, to provide important information on
models where analytical methods may not be practicable or even applicable.
\begin{acknowledgement}
We thank D.~Schuricht for useful discussions and a critical reading
of the manuscript. SR is supported by the Cusanuswerk, RT by the
Studienstiftung des deutschen Volkes. MG and PS are supported by the
Deutsche Forschungsgemeinschaft (DFG) through grant FOR 960.
\end{acknowledgement}
\vspace{20pt}
\section{Introduction} \label{sec:introduction}
``Scale models'' are ubiquitous in fields such as fluid dynamics.
They are physical or numerical models that preserve some of the
important properties of an object being modeled while not preserving
the original dimensions of the object. The main goal of this paper is
to formulate and study a miniature scale model of an optical fiber
laser amplifier. Our scale model reduces fiber length to increase
computational efficiency. While unable to preserve all properties of
the original electromagnetic solution, our numerical scale model is
able to approximately replicate the original fiber's power
distribution, as we shall see in later sections. After this
introductory section, we will begin by describing a simplified model
of beam propagation in fibers. This model will then be used to derive,
justify, and verify the scale model.
The importance of fiber amplifiers in enabling our current world of
long-distance fiber optics and submarine telecommunications cannot be
overlooked~\cite{Chesn18}. High power fiber amplifiers also have many
other uses, for example, as defensive speed-of-light weapons. High
output powers have been achieved by solid-state optical fiber laser
amplifiers~\cite{JaureLimpeTunne13}. Numerical modeling of these
optical devices has also been effectively used by
many~\cite{JaureOttoStutz15, NaderDajanMadde13, SmithSmith11, Ward13}. Yet,
simulation of full length fibers remains cumbersome and far from being
routine. This is because of the long simulation times and the large
computational resources required. Simulations using the full Maxwell
system are too expensive since there are millions of wavelengths
within any realistically long fiber. An an example, consider the full
Maxwell simulation of Raman gain attempted
in~\cite{NagarGrosePetri19}: more than five million degrees of freedom
was needed to simulate an extremely short fiber containing 80
wavelengths (less than 0.0001~m). Although a full Maxwell model of a
realistically long (10~m) fiber can be written out (and we shall do so
in Subsection~\ref{ssec:full-maxwell-model}), its numerical solution
is beyond the reach of today's simulation capabilities.
Therefore, simplified models form the current state of the art. It is
somewhat surprising how unreasonably effective these models have
proved to be, despite the drastic simplifications used in their
derivation. The state of the art in fiber amplifier simulation
consists of beam propagation methods using coupled mode theory
(CMT). We shall introduce the reader to the CMT model in
Subsection~\ref{ssec:cmt} as a simplification of the full Maxwell
model. To facilitate cross-disciplinary readership, we make an effort
to enunciate the assumptions behind such simplifications. Even though
it is not common in the optics literature to view CMT in the backdrop
of emerging developments in reduced-order models, one may view it as
essentially an example of a {\em physics-based reduced-order
model}. Indeed, in CMT, the electromagnetic solution is expressed
using a ``reduced basis'' consisting of transverse guided modes of the
fiber that encapsulates the energy-transport mechanism in fibers.
Even the simplified CMT model is computationally too demanding to ably
assist with the important open issues in the subject today. One of
these issues is what is currently recognized to be the main roadblock
to power scaling of beam combinable fiber amplifiers, namely the
nonlinear transverse mode
instability (TMI).
TMI can be described as a
sudden breakdown in beam quality at high power operation, first
observed experimentally~\cite{EidamWirthJaure11}. As pointed out in
the review~\cite{JaureLimpeTunne13}, when attempting to design
highly coherent lasers
capable of sustained high (average) powers, a practically uncrossable
limit was encountered due to the TMI. After intensive speculations on the
cause of TMI, the prevailing theory seems to be that the cause is a
temperature-induced grating.
We believe that numerical modeling is essential for investigating the TMI, and other nonlinearities that arise inside fiber amplifiers, since experimental evidence is mostly limited to examining the amplifier output, rather than the onset of physical effects that occur inside of the glass fiber along its length.
The current difficulty in using numerical models is the
excessive simulation times: indeed any numerical technique used
must be able to
solve for the electromagnetic field within a long fiber a
vast number of times.
Given the great computational burden of capturing length scales as small as 10 $\mu$m, and time scales as small as 10 $\mu$sec (for the thermal problem), techniques that further accelerate the numerical simulations have the potential to significantly enhance the ability for computer modeling to inform experimental designs and configurations in a timely manner.
It is our intent to contribute such
an acceleration technique by developing the above-mentioned scale
model (in Sections~\ref{sec:equiv}--\ref{sec:results}). Studies of
its application to TMI investigations are postponed to the future.
The models are tested using two commercially available examples of
doped step-index fibers, one with ytterbium (Yb) doping in the fiber
core, and another with thulium (Tm) doping. Both are examples of large
mode area (LMA) fibers which support more than one guided transverse
core mode. LMA fibers are of great interest since they permit
greater light amplification per unit length and help mitigate the
onset of other detrimental optical nonlinearities.
Unfortunately, they are also more
susceptible to the TMI, and hence stand most to benefit from advances
in numerical simulation. Our active gain model for these fibers
utilizes
the population dynamics of Yb and Tm ions.
Active gain in fiber amplifiers appears as a nonlinear coupling term
between the Maxwell systems for the (1) less coherent ``pump'' light
that supplies energy for amplification, and the (2) highly coherent
``signal'' (laser) light. The gain mechanism involves exciting the outer most electron of the dopant (Yb or Tm) by absorbing the pump light, and producing more coherent signal light via stimulated emission that allows the excited electrons to return to their ground state.
We have included
a simplified, yet very typical,
mathematical formulation for the dopant ion population dynamics in
Section~\ref{sec:Tm+Yb}. A few of the initial results obtained in this
work for the simpler Yb-doped case were announced earlier in the
conference proceedings~\cite{GopalGoswaGrose19}. We begin by deriving
the CMT model next.
\section{The CMT model}
\label{sec:model}
Physics-based reduced-order models are now being used successfully in
various simulation techniques~\cite{SwiscMainiPeher18}. In this
section, we introduce such a model for an optical fiber amplifier.
We start from the Maxwell system
and describe the assumptions that lead us to the simplified CMT model
consisting of a system of ordinary differential equations (ODEs).
\subsection{The full Maxwell model}
\label{ssec:full-maxwell-model}
Suppose the optical fiber amplifier to be modeled is aligned so that
it is longitudinally centered along the $z$-axis; the transverse
coordinates will be denoted by $x$ and $y$ in the Cartesian coordinate
system. The core region of the fiber,
$\{ (x,y, z): x^2 + y^2 < r_{\text{core}}^2\}$, is enveloped by a cladding that
extends to radius $r_{\text{clad}}$. The fiber is a step-index fiber, i.e., its
refractive index is a piecewise constant function that takes the value
$n_{\text{core}}$ in the core region and $n_{\text{clad}}$ in the cladding region.
There is usually another polymer coating that surrounds this inner cladding (composed of fused silica); however, this second cladding/coating can readily be neglected for this analysis since the laser light is mostly guided in the fiber core region.
We want to model a continuous wave, weakly guided
($n_{\text{core}} - n_{\text{clad}} \ll 1 $), polarization maintaining, large mode area
(LMA) fiber.
There are various
arrangements in which this fiber amplifier could be seeded and
pumped. We consider the {\em co-pumped/clad-pumped} configuration, wherein
highly coherent laser light -- which we shall refer to as the {\em signal} -- is injected into the fiber core area at the beginning of
the fiber ($z=0$). The {\em pump} light is also injected into the fiber at
$z=0$, but unlike the signal, it enters both core and cladding.
Let $\vec{\mathcal{E}}_s, \vec{\mathcal{H}}_s$ and $\vec{\mathcal{E}}_p, \vec{\mathcal{H}}_p$ denote the electric and magnetic fields of the signal and pump light, respectively. The signal and pump fields are assumed to be time harmonic of frequencies $\omega_s$ and $\omega_p$ respectively, i.e.,
\[
\begin{aligned}
\vec{\mathcal{E}}_\ell (x,y,z,t)
& = \text{Re} \Big[ \vec{{E}}_\ell(x,y,z)e^{-\hat\imath \omega_\ell t} \Big],
&\quad
\vec{\mathcal{H}}_\ell (x,y,z,t)
& = \text{Re} \Big[ \vec{{H}}_\ell(x,y,z)e^{-\hat\imath \omega_\ell t} \Big].
\end{aligned}
\]
Here and throughout, we use the subscript $\ell \in \{s, p\}$ to
distinguish between signal and pump fields. Note that the $\vec{\mathcal{E}}_\ell$
and $\vec{\mathcal{H}}_\ell$ are real valued while $\vec{{E}}_\ell$ and $\vec{{H}}_\ell$ are
complex valued. The signal field $\vec{{E}}_s, \vec{{H}}_s$ and the pump field
$\vec{{E}}_p, \vec{{H}}_p,$ are assumed to independently satisfy Maxwell
equations, but are coupled through
the electric polarization terms of the form
$\vec{P}_\ell \equiv \vec{P}_\ell(\vec{{E}}_s, \vec{{E}}_p)$, $\ell \in \{s, p\}$,
which appear in the following time-harmonic Maxwell system,
\begin{equation}
\label{eq:Maxwell-1}
\begin{aligned}
{\ensuremath{\text{curl}\,}} \vec{{E}}_\ell - \hat\imath \omega_\ell \mu_0 \vec{{H}}_\ell
& = 0,
\\
{\ensuremath{\text{curl}\,}} \vec{{H}}_\ell + \hat\imath \omega_\ell \epsilon_0 \vec{{E}}_\ell
& = -\hat\imath\omega_\ell \vec{P}_\ell,
\end{aligned}
\hspace{2cm} \ell \in \{ s, p\},
\end{equation}
where $\epsilon_0$ is the vacuum electric permittivity and $\mu_0$ is the
vacuum magnetic permeability.
All interactions between the propagation medium and the
electromagnetic field are modeled through electric polarization terms.
The traditional polarization model includes linear susceptibility,
namely the background material interaction $\PPP^\text{bg}_\ell$ given as a
function of the index of refraction of the medium that the light
propagates through. Other examples of polarization terms include those
that account for linear loss,
active laser gain
($\PPP^\text{ag}_\ell$), thermal effects, and optical nonlinearities such as Brillouin scattering, Raman scattering, and Kerr effects. Here we
focus on active gain polarization and the linear background
polarization, namely,
\[
\begin{aligned}
\PPP^\text{bg}_\ell(\vec{{E}}_\ell)
& = \epsilon_0 (n^2 -1) \vec{{E}}_\ell,
&&&
\PPP^\text{ag}_\ell(\vec{{E}}_\ell)
& = - \frac{\hat\imath \epsilon_0 c n}{\omega_\ell} g_\ell \vec{{E}}_\ell,
&& \quad \ell \in \{ s, p\},
\end{aligned}
\]
where $c$ is the speed of light and $g_\ell$ is the {\em active gain
term}, which depends on $\vec{{E}}_\ell$ in some nonlinear fashion.
Examples of $g_\ell$ are given in Section~\ref{sec:Tm+Yb}. Typical
optical operating frequencies imply that within a fiber of realistic
length there are several millions of wavelengths. Even if a mesh fine
enough to capture the wave oscillations is used, the pollution
effect~\cite{BabusSaute00} in wave propagation simulations destroys
the accuracy of finite element solutions at the end of millions of
wavelengths. Hence, without further simplifications, the
above-described full Maxwell model is not a feasible simulation tool
for realistic fiber lengths. We proceed to develop a physics-based
reduced model.
\subsection{Coupled mode theory} \label{ssec:cmt}
Experiments indicate that the vast majority of the laser signal is contained within the guided core modes of the fiber, and, likewise, most of the pump light is within the guided cladding modes. This is the basis of an electric field ansatz that
CMT uses.
Before giving the ansatz, let us eliminate
$\vec{{H}}_\ell$ from~\eqref{eq:Maxwell-1}, to obtain the second order
equation
\begin{equation} \label{eqn: simplified maxwell}
{\ensuremath{\text{curl}\,}} {\ensuremath{\text{curl}\,}} \vec{{E}}_\ell - \omega_\ell^2 \epsilon_0 \mu_0 \vec{{E}}_\ell =
\omega_\ell^2 \mu_0 \vec{P}_\ell
\end{equation}
solely for the electric field. Substituting
\begin{equation} \label{eqn: total polarization}
\vec{P}_\ell = \PPP^\text{bg}_\ell + \PPP^\text{ag}_\ell = \epsilon_0 (n^2 -1) \vec{{E}}_\ell - \frac{\hat\imath \epsilon_0 c n}{\omega_\ell} g_\ell \vec{{E}}_\ell
\end{equation}
into~\eqref{eqn: simplified maxwell}, using $c = 1/\sqrt{\epsilon_0 \mu_0}$ and simplifying we get,
\begin{equation}
\label{eq:Maxwell-2}
{\ensuremath{\text{curl}\,}} {\ensuremath{\text{curl}\,}} \vec{{E}}_\ell - k_\ell^2 n^2 \vec{{E}}_\ell + \hat\imath k_\ell n g_\ell \vec{{E}}_\ell =0,
\end{equation}
where $k_\ell = \omega_\ell/c$ is the wavenumber corresponding to the
frequency $\omega_\ell$.
Next, we assume that the electric field $\vec{{E}}_\ell$ can be expressed
as
\[ \vec{{E}}_\ell (x,y,z) = U_\ell(x,y,z) \hat{e}_x,
\]
i.e., it is linearly polarized in a fixed transverse direction, which
is taken above to be the $x$-direction (where $\hat{e}_x$ denotes the
unit vector in the $x$-direction). Furthermore, since $\vec{{E}}_\ell$ has
high frequency oscillations along the $z$-direction, its variations
along the transverse directions may be considered negligible. It is
therefore standard in optics to neglect $\ensuremath{\mathop{{\text{grad}}}} \div \vec{{E}}_\ell.$ These
assumptions imply that the vector equation~\eqref{eq:Maxwell-2}
becomes the following scalar Helmholtz equation for $U_\ell$,
\begin{equation}
\label{eqn: scalar helmholtz}
-\Delta U_\ell - k_\ell^2 n^2 U_\ell +
\hat\imath k_\ell n g_\ell U_\ell = 0.
\end{equation}
Due to the high wave number $k_\ell$, even this simplified scalar
field problem is
computationally intensive. We now proceed to further reduce this
scalar model using CMT.
CMT is usually useful in the analysis of the interaction between
several near-resonance guided modes. For step-index fiber waveguides
these modes are called linearly polarized transverse guided core
modes~\cite{Agraw13}, often referred to simply as {\em LP
modes}. Mathematically speaking, these modes are finitely many
non-trivial functions $\varphi_m(x,y)$, $m=1,2,\ldots, M_\ell$,
that decay exponentially at the edge of the
cladding region and satisfy
\begin{equation} \label{eqn: eigen problem}
(\Delta_{xy} + k_{{\ell}}^2 n^2) \varphi_m = \beta_m^2 \varphi_m,
\qquad
m = 1, \ldots, M_\ell,
\end{equation}
where $\beta_m$ is the corresponding \textit{propagation constant} and
$\Delta_{xy} = \partial_{xx} + \partial_{yy}$ denotes the transverse
Laplacian {operator}.
The CMT approach to solve~\eqref{eqn: scalar
helmholtz} expresses the solution using the ansatz
\begin{equation} \label{eqn: cmt ansatz}
U_{{\ell}}(x,y,z) = \sum\limits_{m=1}^{M_\ell} \A{\ell}_m(z) \varphi_m(x, y) e^{\hat\imath \beta_m z},
\end{equation}
where $\A{\ell}_m(z)$ denotes the complex field amplitude {of mode $m$}.
{Therefore, the wavenumber ($k_\ell n$) for the entire electric field envelope ($U_\ell$) is now decomposed into individual propagation constants ($\beta_m$) corresponding to each guided mode, and the field envelope is decomposed into parts with amplitudes~$\A{\ell}_m$ having transverse profiles described by $\varphi_m$.}
Knowledge of the
form of the solution is thus incorporated {\it a priori} into the
ansatz. In particular, the physical intuition that the
$\varphi_m$-component should oscillate longitudinally at an
approximate frequency of $\beta_m$ is built in. This justifies the
next assumption that $\A{\ell}_m(z)$ is a slowly varying function of $z$
(having built the fast variations in $z$ into the $ e^{\hat\imath \beta_m z}$
term). Accordingly, for each $\A{\ell}_m$, we neglect the second-order
derivative $d^2 \A{\ell}_m/d z^2$ for all $m=1, \ldots, M_\ell$. Doing so after
substituting~\eqref{eqn: cmt ansatz} into~\eqref{eqn: scalar
helmholtz} and using~\eqref{eqn: eigen problem} we obtain
\begin{align} \label{eqn: BPM}
\sum\limits_{m=1}^{M_\ell} \frac{d \A{\ell}_m}{d z} \varphi_m \beta_m e^{\hat\imath \beta_m z}
& =
\frac{1}{2} \sum\limits_{m=1}^{M_\ell} \A{\ell}_m k_{{\ell}} \varphi_m n g_{{\ell}} e^{\hat\imath \beta_m z},
&& 0 < z < L.
\end{align}
The next step is to multiply both sides of~\eqref{eqn: BPM} by the
complex conjugate of $\varphi_l$, namely $\overline \varphi_l$, and
integrate. We integrate over $\varOmega_z,$ which represents the fiber cross
section having the constant longitudinal coordinate value of~$z$.
Then, simplifying using
the $L^2(\varOmega_z)$-orthogonality of the modes,
\begin{align} \label{cmt}
\frac{d \A{\ell}_l }{d z}
& = \sum_{m=1}^{M_\ell} e^{\hat\imath (\beta_m - \beta_l)z } \,
\K{\ell}_{lm}({I_\ell}, I_{{\ell^{c}}}) \; \A{\ell}_m,
&& 0 < z < L,
\end{align}
for $l = 1, \ldots, M_\ell$,
where $\K{\ell}_{lm}$ is the mode coupling coefficient, given by
\begin{equation} \label{eq:Klm}
\K{\ell}_{lm}({I_\ell}, I_{{\ell^{c}}}) = \frac {k_{{\ell}}}{2\beta_l} \int_{\varOmega_z}
g_{{\ell}}({I_\ell, I_{\ell^{c}}})\,
n(x,y) \varphi_m(x,y) \overline{ \varphi_l(x,y) } \; dx \,dy,
\end{equation}
{$\ell^{c} \in \{ s,p \} \setminus \{ \ell \}$}, and $I_s, I_p$ denote the signal and pump irradiance,
respectively,
{which are formulated later in this subsection}.
For the pump light, the number of guided cladding modes is exceedingly
large: $M_p > 10^5$. Rather than modeling each of these modes,
it is
sufficient to approximate the pump field as a plane wave, which
effectively acts as the composition of all of the pump guided
modes~\cite{NaderDajanMadde13,SmithSmith11}.
Accordingly, we set
$M_p = 1$ and the normalized mode $\varphi_1^p = (\sqrt{\pi}r_{\text{clad}})^{-1}$ (without a transverse
dependence). Since the cladding region is many times larger than the
core, the corresponding propagation constant is
estimated as if this mode travels in a uniform medium of refractive
index $n_{\text{clad}}$, i.e., we set
$\beta_1 = k_p n_{\text{clad}} = \omega_p n_{\text{clad}} /
c$. Then~\eqref{cmt} yields
\begin{equation} \label{pumpODE}
\frac{d \A{p}_1}{d z} = K_{11}^{p}(I_p, I_s)\A{p}_1,
\end{equation}
for $0 < z < L$, where
\begin{equation} \label{eq:pumpKlm}
K_{11}^p(I_p, I_s) = \frac{1}{2 \pi r_{\text{clad}}^2 n_{\text{clad}}} \int_{\varOmega_z}
g_{p}(I_p, I_s) \,n_{\text{clad}} \ dx \,dy = \frac 1 2 \mean{g_p}.
\end{equation}
Here $\mean{g_p} = |\varOmega_z|^{-1} \int_{\varOmega_z} g_p\; dx \,dy$ denotes
the mean value of $g_p$ taken over $\varOmega_z$, the fiber
cross section out to $r = r_{\text{clad}}$.
The irradiance is proportional to the square of the field envelope
magnitude, $I_\ell = n |U_\ell|^{2} / (\mu_0 c)$. Using~\eqref{eqn:
cmt ansatz},
\begin{equation}\label{eq:Irradiance}
{I_\ell(x,y,z) = \dfrac{n}{\mu_0 c} \left|\sum_{m=1}^{M_\ell} \A{\ell}_m(z) e^{\hat\imath \beta_m z} \varphi_m(x,y) \right|^2.}
\end{equation}
{For the pump plane wave, this reduces to}
\[
{I_p(z) = \dfrac{n}{\mu_0 c \pi r_{\text{clad}}^2} \left| \A{p}_1(z) \right|^2.}
\]
Using the equation~\eqref{pumpODE} and its complex conjugate,
elementary simplifications lead to the following
governing ODE for the pump irradiance:
\begin{equation}\label{eqn:Ip}
\frac{d I_p}{d z} = \mean{g_p} I_p.
\end{equation}
In view of~\eqref{eqn:Ip}, instead of $\A{p}_1$, we shall use $I_p(z)$ as our
pump unknown. There is no need for the amplitude $\A{p}_1$ in the
remainder of the model. Hence from now on, we write $A_m$ for $\A{s}_m$
dropping the superscript. We shall also simply write $M$ for $M_s$
and $K_{lm}$ for $\K{s}_{lm}$.
Next, consider the signal irradiance, namely the $\ell=s$ case
in~\eqref{eq:Irradiance}. To highlight the dependence of $I_s$ on $A_m
\equiv \A{s}_m$, we use $A \equiv [A_1(z), \ldots, A_M(z)]^t$
to collectively denote the set of all signal mode amplitudes
and write
\begin{equation}
\label{eq:Is}
I_s \equiv I_s(x,y,z, A) = \dfrac{n}{\mu_0c}
\left|\sum_{m=1}^M A_m(z) e^{\hat\imath \beta_m z} \varphi_m(x,y) \right|^2.
\end{equation}
Note that
the modes $\varphi_l(x,y)$ and the propagation constants $\beta_l$ may
be precomputed (and the cost of this precomputation corresponds to the
``off-line'' computational cost in this reduced-order model).
{In order to} complete the CMT model (assuming we have expressions for
$g_\ell$), we need to provide initial conditions at $z=0$,
the {beginning of the} fiber. What is usually known is the power
contained in the pump and signal light.
The initial pump irradiance $I_p^0 = I_p(0)$ can be calculated
from the initial pump power $P_p^0$ provided at the inlet in a
co-pumped configuration, by $I_p^0 = |\varOmega_0|^{-1} P_p^0$. We assume
that we also
know how the signal light is split into various modes at the inlet,
i.e., we may set
$A(0)$ to some given $A^0 = [A_1^0, \ldots, A_M^0]^t.$ In
practice, most of the signal power is usually carried in the first
fundamental mode.
\begin{center}
\framebox{\parbox{\textwidth}{ %
To summarize, the CMT model computes
\[
Y(z) = [I_p(z), A_1(z), A_2(z), \ldots, A_M(z)]^t, \qquad 0< z<L,
\]
{where each $A_m(z)$ is a signal mode amplitude in the fiber core, and $Y(z)$ satisfies}
the ODE system
\begin{subequations}
\label{eq:summary}
\begin{align}
\label{eq:summary-ODE}
\frac{d Y}{d z}
& =
\begin{bmatrix}
\langle g_p(Y)\rangle & 0
\\
0 & \phi(z)\cdot K(Y)
\end{bmatrix}
Y,
&& 0< z < L,
\\
\label{eq:summary-IC}
Y(0) & =
[I_p^0, A^0]^t
&& z = 0,
\end{align}
\end{subequations}
where $ \phi(z)$ is an $M \times M$ matrix defined by
$\phi_{lm}(z) = e^{\hat\imath (\beta_m - \beta_l) z},$ $ K(Y) $ is a
matrix of the same size whose $(l,m)$th entry is
$\K{s}_{lm}({I_s}, I_p)$ defined in~\eqref{eq:Klm}, and
$\phi(z)\cdot K(Y)$ denotes the Hadamard product of $\phi$ and
$K$, i.e., $ [\phi \cdot K]_{lm} = \phi_{lm}K_{lm}$. }}
\end{center}
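
To make the structure of~\eqref{eq:summary} concrete, the following
Python sketch assembles its right hand side from precomputed mode
data. It is an illustration only, under stated assumptions: the arrays
\texttt{modes}, \texttt{weights}, and \texttt{n} hold the mode
profiles, quadrature weights, and refractive index sampled at
quadrature points of the cross section, and \texttt{g\_s},
\texttt{g\_p} are pointwise gain functions; all of these names are our
own conventions, not part of any library.
\begin{verbatim}
import numpy as np

MU0_C = 4e-7 * np.pi * 2.99792458e8   # mu_0 times c

def cmt_rhs(z, Y, k_s, betas, modes, weights, n, g_s, g_p):
    # Y[0] = pump irradiance I_p;  Y[1:] = signal amplitudes A_1..A_M.
    I_p, A = Y[0].real, Y[1:]
    M = len(A)
    # Signal irradiance at the quadrature points, eq. (Irradiance).
    U = (A[:, None] * np.exp(1j * betas[:, None] * z) * modes).sum(axis=0)
    I_s = n * np.abs(U) ** 2 / MU0_C
    gs, gp = g_s(I_s, I_p), g_p(I_s, I_p)
    # Coupling coefficients K_lm of eq. (Klm), computed by quadrature.
    K = np.array([[k_s / (2 * betas[l])
                   * np.sum(weights * gs * n * modes[m] * modes[l].conj())
                   for m in range(M)] for l in range(M)])
    # Hadamard product with the phase matrix phi_lm, eq. (summary-ODE).
    phase = np.exp(1j * (betas[None, :] - betas[:, None]) * z)
    dA = (phase * K) @ A
    dIp = I_p * np.sum(weights * gp) / np.sum(weights)   # <g_p> I_p
    return np.concatenate(([dIp], dA))
\end{verbatim}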
\section{Thulium and ytterbium doped fiber amplifiers}
\label{sec:Tm+Yb}
Thulium (Tm)-doped fiber amplifiers~\cite{Jacks04, JacksKing99} can
operate in eye-safe laser wavelengths (larger than 1.4~{$\mu$m}) and can
reach an atmospheric transmission window (2.1--2.2~{$\mu$m}). There are
efficient high-power laser diodes that operate in the range of
0.79--0.793~{$\mu$m}, which is a peak absorption bandwidth for Tm-doped
fibers. Cross-relaxations and upconversions occur in Tm-doped
amplifiers. Even though Tm-doped fibers usually have better TMI suppression
compared to other rare-earth ion doped fibers~\cite{SmithSmith11},
ytterbium (Yb)-doped fiber amplifiers have also emerged as excellent
candidates for high power operation due to their high-efficiencies and
low amplified spontaneous emission
gain. Yb-doped amplifiers are usually pumped at 976~nm and can lase
around 1064~nm very efficiently. The dynamics of both these ion
populations are explained below. They complete our model by giving
expressions for $g_\ell$ to be used in~\eqref{eq:summary}.
\subsection{Tm-dopant ion dynamics}
The Tm ion population dynamics are schematically represented in
Figure~\ref{fig:Tm_manifolds}. The model involves four manifolds. The
total number of Tm ions (per volume) is
\begin{equation}
\label{eq:Nt-Tm}
N_{\text{total}} = N_0(x, y, z, t) + N_1(x, y, z, t) + N_2(x, y, z, t) + N_3(x, y, z, t)
\end{equation}
where $N_0$ represents the ground state (manifold 0) ion-population
concentration, while $N_1$, $N_2,$ and $N_3$ denote ion concentrations
at excitation manifolds 1, 2, and 3, respectively. What we have named
energy manifolds 0, 1, 2, and 3 represent the Tm energy levels usually
written as ${}^3H_6$, ${}^3F_4$, ${}^3H_5$, and ${}^3H_4$, respectively.
\begin{figure}[h]
\centering
\begin{tikzpicture}[scale=3, line width=1mm, >=stealth]
\coordinate (3H6) at (0,0);
\coordinate (3F4) at (0, 0.52);
\coordinate (3H5) at (0, 0.826);
\coordinate (3H4) at (0, 1.26);
\node[left] at (3H6) {$\mathrm{{}^3H_6}$};
\node[left] at (3F4) {$\mathrm{{}^3F_4}$};
\node[left] at (3H5) {$\mathrm{{}^3H_5}$};
\node[left] at (3H4) {$\mathrm{{}^3H_4}$};
\draw (3H6) --++ (1,0) node [right] {ground};
\draw ($(3H6)+(1.5,0)$) --++ (1,0);
\draw [name path=3f4line]
(3F4) --++ (1,0) -- ($(3F4)+(1.5,0)$) --++ (1,0);
\draw (3H5) --++ (1,0) -- ($(3H5)+(1.5,0)$) --++ (1,0);
\draw (3H4) --++ (1,0) -- ($(3H4)+(1.5,0)$) --++ (1,0);
\draw[blue, dotted, ->, opacity=0.7] ($(3H4)+(2,0)$)--($(3H5)+(2,0)$);
\draw[blue, dotted, ->, opacity=0.7] ($(3H5)+(2,0)$)--($(3F4)+(2,0)$);
\draw[blue, ->, opacity=0.7] ($(3F4)+(2,0)$)--($(3H6)+(2,0)$)
node[right, midway] {1700--2100nm};
\draw [purple, ->, opacity=0.7]($(3H6)+(0.75,0)$)--($(3F4)+(0.75,0)$)
node[right, midway] {1550--1900nm};
\draw[purple, ->, opacity=0.7] ($(3H6)+(0.5,0)$)--($(3H5)+(0.5,0)$)
node[right, very near end] {1210nm};
\draw[purple, ->, opacity=0.7] ($(3H6)+(0.25,0)$)--($(3H4)+(0.25,0)$)
node[right, very near end] {793nm};
\draw [red, opacity=0.0, name path=xr]
($(3H4)+(0.95,0)$)--($(3H6)+(1.75,0)$);
\fill [red, name intersections={of=xr and 3f4line}]
(intersection-1) coordinate (intr) circle (0.1pt);
\draw [red, ->, opacity=0.7] ($(3H4)+(0.95,0)$)--(intr);
\draw [red, ->, opacity=0.7] ($(3H6)+(1.75,0)$)--(intr);
\node at ($(3H6)+(3,0)$) {$N_0$};
\node at ($(3F4)+(3,0)$) {$N_1$};
\node at ($(3H5)+(3,0)$) {$N_2$};
\node at ($(3H4)+(3,0)$) {$N_3$};
\end{tikzpicture}
\caption{Simplified diagram of Tm energy levels}
\label{fig:Tm_manifolds}
\end{figure}
Pump light of wavelength $\lambda_p=793$~nm excites the Tm ground state
ions into higher energy manifolds, thus depleting manifold 0 at the
rate $\nu_p \sigma^{\text{abs}}(\omega_p) N_0$, while stimulated emission depletes the excited
manifold $j$ at the rate $\nu_p \sigma^{\text{ems}}(\omega_p) N_j$, where $\sigma^{\text{abs}}$ and $\sigma^{\text{ems}}$
represent measurable absorption and emission cross sections of Tm
\cite{AggerPovls06}, and
\[
\nu_\ell= \frac{I_\ell} {\hbar \omega_\ell}, \qquad \ell \in \{s, p\}
\]
represents the flux of photons of frequency $\omega_\ell$. We must also
take into account the fact that an excited ion in manifold $j$ can
decay spontaneously to a lower energy manifold $k$ at the rate
$1/\tau_{jk}$. An excited ion in manifold $j$ can also decay
non-radiatively to the next lower energy manifold at the rate
$\Gamma_j$. Finally, an excited Tm ion can also undergo
cross-relaxation, wherein it transfers part of its energy to a ground
state ion so both can end up in an intermediate energy
level. Cross-relaxation is represented by the slanted arrows in
Figure~\ref{fig:Tm_manifolds}, while the other processes are
represented by up/down arrows. The rate constant for the
cross-relaxation is denoted by $\kappa_R$. Cross-relaxation, which
creates two excited Tm ions for every pump photon (a two-for-one
process), increases the amplifier efficiency (while upconversions,
which are neglected in our model, decrease fiber efficiency).
Following~\cite{McCom09}, these processes are modeled by
\begin{subequations}
\label{eq:Tm-dyn}
\begin{align}
\label{eq:Tm-dyn-N3}
\d_tN_3
& = \psi^{\text{abs}}_p N_0 - \Big(
\psi^{\text{ems}}_p + \frac{1}{\tau_{32}} + \frac{1}{\tau_{31}} +
\frac{1}{\tau_{30}} + \Gamma_3 + \kappa_R N_0 \Big) N_3
\\ \label{eq:Tm-dyn-N2}
\d_t N_2
& = \Big( \frac{1}{\tau_{32}} + \Gamma_3 \Big) N_3 - \Big(
\frac{1}{\tau_{21}} + \frac{1}{\tau_{20}} + \Gamma_2 \Big) N_2
\\ \label{eq:Tm-dyn-N1}
\d_t N_1
&= \psi^{\text{abs}}_s N_0 + \Big(
\frac{1}{\tau_{21}} + \Gamma_2 \Big) N_2 +
\Big(\frac{1}{\tau_{31}} + 2 \kappa_R N_0 \Big) N_3 -
\Big( \frac{1}{\tau_{10}} + \Gamma_1 +
\psi^{\text{ems}}_s \Big) N_1
\\ \label{eq:Tm-dyn-N0}
N_{\text{total}} & = N_0 + N_1 + N_2 + N_3
\end{align}
\end{subequations}
where
\[
\psi^{\text{abs}}_\ell = \sigma^{\text{abs}}(\omega_\ell) \nu_\ell, \qquad \psi^{\text{ems}}_\ell =
\sigma^{\text{ems}}(\omega_\ell)\nu_\ell, \qquad \ell \in \{s, p\}.
\]
In our simulations, we have set~$\omega_s$ to correspond to signal light
of wavelength 2100~nm.
Next, we make the simplifying assumption that all the time derivatives
$\d_t$ in~\eqref{eq:Tm-dyn} may be neglected. By doing so, we are
neglecting the time variations in the ion populations that occur at an
extremely small time scale of around
$10^{-5}$~s. Equations~\eqref{eq:Tm-dyn-N3}--\eqref{eq:Tm-dyn-N1}
after setting $\d_t=0$ immediately yield $N_1, N_2, N_3$ in terms of
$N_0$. The last equation~\eqref{eq:Tm-dyn-N0} then gives a quadratic
equation for $N_0$. To express this solution, first define
\begin{gather*}
\delta_i = \sum\limits_{j=0}^{i-1}\tau_{ij}^{-1} + \Gamma_i,
\qquad
\gamma_0 = \frac{1}{\psi^{\text{ems}}_p + \delta_3},
\qquad
\gamma_1 = \psi^{\text{abs}}_p \gamma_0,
\qquad
\gamma_2 = \frac{\tau_{32}^{-1} + \Gamma_3}{\delta_2},
\\
\gamma_3 = \frac{\tau_{31}^{-1} + \gamma_2 (\tau_{21}^{-1} + \Gamma_2)
+ \gamma_1^{-1} \psi^{\text{abs}}_s}
{\psi^{\text{ems}}_s + \delta_1},
\qquad
\gamma_4 = \frac{2 \psi^{\text{abs}}_p + \psi^{\text{abs}}_s}
{\psi^{\text{abs}}_p(\psi^{\text{ems}}_s + \delta_1)}.
\end{gather*}
Then, the steady-state solution is given explicitly by
\begin{subequations}
\label{eq:N0123}
\begin{align}
\nonumber
N_0 & = \frac{\gamma_0 \kappa_R N_{\text{total}} - \gamma_1 (1 + \gamma_2 + \gamma_3) -1}
{2 \kappa_R (\gamma_0 + \gamma_1 \gamma_4)}
\\ \label{eq:N0}
& + \frac{\sqrt{(1 - \gamma_0 \kappa_R N_{\text{total}} + \gamma_1(1 + \gamma_2 +\gamma_3))^2
+ 4(\gamma_0 + \gamma_1 \gamma_4) \kappa_R N_{\text{total}}}}{2 \kappa_R (\gamma_0 + \gamma_1 \gamma_4)},
\\
\label{eq:N123}
N_1 &= \frac{(\gamma_3 + \gamma_4 \kappa_R N_0)\gamma_1 N_0}{1 + \gamma_0 \kappa_R N_0}, \qquad
N_2 = \frac{\gamma_2 \gamma_1 N_0}{1 + \gamma_0 \kappa_R N_0}, \qquad
N_3 = \frac{\gamma_1 N_0}{1 + \gamma_0 \kappa_R N_0}.
\end{align}
\end{subequations}
Using this, we set the gain expressions by
\begin{eqnarray}
\label{Tm: signal gain}
g_s &=& \sigma^{\text{ems}}(\omega_s) N_1 - \sigma^{\text{abs}}(\omega_s) N_0 \\
\label{Tm: pump gain}
g_p &=& \sigma^{\text{ems}}(\omega_p) N_3 - \sigma^{\text{abs}}(\omega_p) N_0.
\end{eqnarray}
This completes the prescription of the CMT model~\eqref{eq:summary}
for the Tm-doped fiber amplifier.
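
For concreteness, a direct Python transcription of the steady-state
populations~\eqref{eq:N0123} and the gains~\eqref{Tm: signal
gain}--\eqref{Tm: pump gain} is sketched below. The parameter
dictionary \texttt{p} and all names in it are our own conventions, and
the routine assumes nonzero pump irradiance (so that $\gamma_1 \neq 0$).
\begin{verbatim}
import numpy as np

HBAR = 1.054571817e-34

def tm_gains(I_s, I_p, p):
    nu_s, nu_p = I_s / (HBAR * p['omega_s']), I_p / (HBAR * p['omega_p'])
    psa_s, pse_s = p['sa_s'] * nu_s, p['se_s'] * nu_s   # psi_abs/ems, signal
    psa_p, pse_p = p['sa_p'] * nu_p, p['se_p'] * nu_p   # psi_abs/ems, pump
    d1 = 1/p['tau10'] + p['G1']                          # delta_1..delta_3
    d2 = 1/p['tau20'] + 1/p['tau21'] + p['G2']
    d3 = 1/p['tau30'] + 1/p['tau31'] + 1/p['tau32'] + p['G3']
    g0 = 1.0 / (pse_p + d3)
    g1 = psa_p * g0                                      # nonzero if I_p > 0
    g2 = (1/p['tau32'] + p['G3']) / d2
    g3 = (1/p['tau31'] + g2*(1/p['tau21'] + p['G2']) + psa_s/g1) / (pse_s + d1)
    g4 = (2*psa_p + psa_s) / (psa_p * (pse_s + d1))
    kR, Nt = p['kappa_R'], p['N_total']
    # Quadratic steady-state solution, eq. (N0).
    b = 1 - g0*kR*Nt + g1*(1 + g2 + g3)
    N0 = (-b + np.sqrt(b**2 + 4*(g0 + g1*g4)*kR*Nt)) / (2*kR*(g0 + g1*g4))
    N3 = g1*N0 / (1 + g0*kR*N0)                          # eq. (N123)
    N2 = g2*N3
    N1 = (g3 + g4*kR*N0) * g1*N0 / (1 + g0*kR*N0)
    gs = p['se_s']*N1 - p['sa_s']*N0                     # signal gain
    gp = p['se_p']*N3 - p['sa_p']*N0                     # pump gain
    return gs, gp
\end{verbatim}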
\subsection{Yb-dopant ion dynamics}
\begin{figure}
\centering
\begin{tikzpicture}[scale=3, line width=1mm, >=stealth]
\coordinate (2F72) at (0,0);
\coordinate (2F52) at (0,1);
\node[left] at (2F72) {$\mathrm{{}^2F_{7/2}}$};
\node[left] at (2F52) {$\mathrm{{}^2F_{5/2}}$};
\draw (2F72) --++ (1,0) node [right] {ground};
\draw ($(2F72)+(1.5,0)$) --++ (1,0);
\draw (2F52) --++ (1,0) -- ($(2F52)+(1.5,0)$) --++ (1,0);
\draw[blue, ->, opacity=0.7] ($(2F52)+(2,0)$)--($(2F72)+(2,0)$)
node[right, midway] {emission};
\draw[purple, ->, opacity=0.7]($(2F72)+(0.75,0)$)--($(2F52)+(0.75,0)$)
node[left, midway] {absorption};
\node at ($(2F72)+(3,0)$) {$N_{\text{ground}}$};
\node at ($(2F52)+(3,0)$) {$N_{\text{excited}}$};
\end{tikzpicture}
\caption{A simplified diagram of two Yb energy levels}
\label{fig:Yb_manifolds}
\end{figure}
The model for population dynamics of Yb ions is simpler as it can be
modeled using only two energy states, the ground state and one excited
state manifold, as shown in Figure~\ref{fig:Yb_manifolds}. Hence,
instead of~\eqref{eq:Nt-Tm}, we now have
$$
N_{\text{total}} = N_{\text{ground}}(x, y, z, t) + N_{\text{excited}}(x, y, z, t)
$$
where $N_{\text{total}}$ denotes the total population concentration in the fiber,
$N_{\text{ground}}$ represents the ground state ion-population (in
$\mathrm{{}^2F_{7/2}}$) and $N_{\text{excited}}$ denotes the excited state
ion-population (in $\mathrm{{}^2F_{5/2}}$). The absorption and
emission processes that model the two-state dynamics now result in
\begin{subequations}
\label{eq:Yb-dyn}
\begin{eqnarray}
\frac{\partial N_{\text{excited}}}{\partial t} & = & \psi^{\text{abs}}_s N_{\text{ground}} - \psi^{\text{ems}}_s N_{\text{excited}}
\\ \nonumber
& + & \psi^{\text{abs}}_p N_{\text{ground}} - \psi^{\text{ems}}_p N_{\text{excited}} - \frac{N_{\text{excited}}}{\tau},
\\
N_{\text{total}} & = & N_{\text{ground}} + N_{\text{excited}},
\end{eqnarray}
\end{subequations}
where now we must use the absorption and emission cross section
values~\cite{PaskCarmaHanna95} of Yb for $\sigma^{\text{abs}}, \sigma^{\text{ems}}$ while computing
$\psi^{\text{abs}}_\ell, \psi^{\text{ems}}_\ell$. The parameter $\tau$ is the upper level
radiative lifetime of the excited state. As in the Tm case, we assume
that the system has already reached the steady-state solution. Setting
the time derivative in \eqref{eq:Yb-dyn} to zero, a simple calculation
shows that
\begin{eqnarray}
\label{steady-state Ne}
N_{\text{excited}} & = & N_{\text{total}}\; \dfrac{ \psi^{\text{abs}}_s +\psi^{\text{abs}}_p}{ \psi^{\text{abs}}_s + \psi^{\text{ems}}_s
+ \psi^{\text{abs}}_p + \psi^{\text{ems}}_p + \tau^{-1}}.
\end{eqnarray}
Finally, the active gain expressions
are modeled in terms of the above $N_{\text{ground}}$ and $N_{\text{excited}}$ by
\begin{align}
\label{gain Yb}
g_\ell = \sigma^{\text{ems}}(\omega_\ell) N_{\text{excited}} - \sigma^{\text{abs}}(\omega_\ell) N_{\text{ground}}, && \text{for } \ell \in \{ s, p\}.
\end{align}
When this is substituted into~\eqref{eq:summary}, the model for
Yb-doped fiber amplifiers is complete.
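
The analogous Python sketch for the Yb case is simpler; again, the
parameter naming is our own assumption, matching the Tm sketch above.
\begin{verbatim}
HBAR = 1.054571817e-34

def yb_gains(I_s, I_p, p):
    nu_s, nu_p = I_s / (HBAR * p['omega_s']), I_p / (HBAR * p['omega_p'])
    psa = p['sa_s']*nu_s + p['sa_p']*nu_p     # total absorption rate
    pse = p['se_s']*nu_s + p['se_p']*nu_p     # total emission rate
    N_exc = p['N_total'] * psa / (psa + pse + 1.0/p['tau'])
    N_gnd = p['N_total'] - N_exc              # eq. (steady-state Ne)
    gs = p['se_s']*N_exc - p['sa_s']*N_gnd    # eq. (gain Yb), ell = s
    gp = p['se_p']*N_exc - p['sa_p']*N_gnd    # eq. (gain Yb), ell = p
    return gs, gp
\end{verbatim}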
\subsection{Basic simulations} \label{ssec:basic-simulations}
We report the results obtained from simulation of the CMT model for
two 10~m long fibers, one doped with Yb and the other with Tm. The
fiber parameters are collected from data sheets of commercially
available exemplars of these fibers (specifically
Nufern\texttrademark\ fibers -- see nufern.com). All parameters used
for the simulation of both the fibers are reported in
Tables~\ref{tab:Yb} and~\ref{tab:Tm}.
\begin{table}
\centering
\begin{footnotesize}
\begin{tabular}{|c|l|l||c|l|l|}
\hline
Parameter & Value & Units
& Parameter & Value & Units \\
\hline
$\lambda_p
=2\pi c/\omega_p$ & \num{976e-9} & m
& $\lambda_s
=2\pi c/\omega_s$ & \num{1064e-9} & m
\\
$\sigma^{\text{abs}}(\omega_p)$ & \num{1.429E-24} & $\text{m}^2$
& $\sigma^{\text{ems}}(\omega_p)$ & \num{1.776E-24} & $\text{m}^2$
\\
$\sigma^{\text{abs}}(\omega_s)$ & \num{6E-27} & $\text{m}^2$
& $\sigma^{\text{ems}}(\omega_s)$& \num{3.58E-25} & $\text{m}^2$
\\
$N_{\text{total}}$ & \num{3E+26} & $\text{ions/m}^3$
&$\tau$ & \num{8E-4} & s
\\
$n_{\text{core}}$ & \num{1.450971} & --
& NA & 0.06 & --
\\
$r_{\text{core}}$ & \num{1.25E-5} & m
& $r_{\text{clad}}$ & \num{2E-4} & m
\\
$P_p^0$ & 1000 & W
& $P_s^0$& 25 & W \\
\hline
\end{tabular}
\end{footnotesize}
\caption{Parameters used in Yb-doped fiber simulation}
\label{tab:Yb}
\end{table}
\begin{table}
\centering
\begin{footnotesize}
\begin{tabular}{|c|l|l||c|l|l|}
\hline
Parameter & Value & Units
& Parameter & Value & Units\\
\hline
$\lambda_p=2\pi c/\omega_p$ & \num{793E-9} & m
& $\lambda_s=2\pi c/\omega_s$ & \num{2110E-9} & m
\\
$\sigma^{\text{abs}}(\omega_p)$ & \num{4.4686E-25} & $\text{m}^2$
& $\sigma^{\text{ems}}(\omega_p)$ & 0 & $\text{m}^2$
\\
$\sigma^{\text{abs}}(\omega_s)$ & \num{1.7423E-27} & $\text{m}^2$
& $\sigma^{\text{ems}}(\omega_s)$ & \num{1.17397E-25} & $\text{m}^2$
\\
$\tau_{10}$ & \num{6.2232E-03} & s
& $\tau_{20}$ & \num{5.5179E-03} & s
\\
$\tau_{21}$ & \num{2.5707E-01} & s
& $\tau_{30}$ & \num{1.3949E-03} & s
\\
$\tau_{31}$ & \num{1.7033E-02} & s
& $\tau_{32}$ & \num{6.8446E-02} & s
\\
$\Gamma_1$ & \num{2.59288E+03} & Hz
& $\Gamma_2$ & \num{2.92755E+07} & Hz
\\
$\Gamma_3$ & \num{8.05943E+04} & Hz
& -- & -- & --
\\
$N_{\text{total}}$ & \num{3E+26} & $\text{ions/m}^3$
& $\kappa_R$ & \num{1.17E-21} & $\text{m}^3$
\\
$n_{\text{core}}$ & \num{1.439994} & --
& NA & 0.1 & --
\\
$r_{\text{core}}$ & \num{1.25E-5} & m
& $r_{\text{clad}}$ & \num{2E-4} & m
\\
$P_p^0$ & 1100 & W
& $P_s^0$ & 30 & W \\
\hline
\end{tabular}
\end{footnotesize}
\caption{Parameters used in Tm-doped fiber simulation}
\label{tab:Tm}
\end{table}
We solve the CMT system~\eqref{eq:summary} using the classical
fourth-order explicit Runge--Kutta (RK4) method (in complex arithmetic).
The phase terms $\phi_{lm}(z) = e^{\hat\imath (\beta_m - \beta_l)z}$ in the ODE
system oscillate at a wavelength not smaller than the so-called {\em
mode beat length}
\begin{equation}
\label{eq:mode_beat_len}
\frac{ 2 \pi}{ \displaystyle \max_{l,m=1, \ldots, M} | \beta_l -
\beta_m|}.
\end{equation}
An ODE solver applied to~\eqref{eq:summary} must take a sufficient
number of steps per mode beat length to capture the effect of these
oscillations in the solution. Prevailing theories
\cite{NaderDajanMadde13} point to the potential importance of the mode
beating term in thermal effects, so we must be careful to treat these
oscillations with the needed accuracy if the model is to be extendable
to incorporate thermal effects in the future. In all our simulations,
we used 50 ODE steps per mode beat length.
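
A minimal Python sketch of this marching strategy follows; it presumes
a right hand side routine of the form of \texttt{cmt\_rhs} sketched
earlier and chooses the uniform step size from the mode beat
length~\eqref{eq:mode_beat_len}.
\begin{verbatim}
import numpy as np

def solve_cmt(rhs, Y0, betas, L, steps_per_beat=50):
    gaps = np.abs(betas[:, None] - betas[None, :])
    beat = 2*np.pi / gaps.max() if gaps.max() > 0 else L  # beat length
    nsteps = max(1, int(np.ceil(steps_per_beat * L / beat)))
    h, z = L / nsteps, 0.0
    Y = np.asarray(Y0, dtype=complex)
    for _ in range(nsteps):                  # classical RK4 march
        k1 = rhs(z, Y)
        k2 = rhs(z + h/2, Y + h/2 * k1)
        k3 = rhs(z + h/2, Y + h/2 * k2)
        k4 = rhs(z + h, Y + h * k3)
        Y = Y + (h/6) * (k1 + 2*k2 + 2*k3 + k4)
        z += h
    return Y
\end{verbatim}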
Before running the ODE solver, we precompute the propagation constants
$\beta_j$, the mode beat length, and of course, the modes. For
step-index fibers, we can compute the modes $\varphi_l$ exactly in
closed form (see~\cite{Agraw13,Reide16}), as briefly described next.
One first computes the propagation constants by solving the
characteristic equation of the fiber as follows. Let $\mathcal{J}_i$
and $\mathcal{K}_i$ denote, respectively, the standard Bessel function
and the modified Bessel function of second kind of order~$i$. Then we
solve for $X$ satisfying the so-called ``characteristic equation'' of
the fiber, namely, setting the fiber's numerical aperture
$\text{NA}=\sqrt{n_{\text{core}}^2 - n_{\text{clad}}^2}$ and normalized frequency
$V = k_s r_{\text{core}}\, \text{NA}$, we solve
$X \mathcal{J}_{i-1}(X) \mathcal{K}_i( \sqrt{V^2 - X^2}) +
\sqrt{V^2 - X^2} \,\mathcal{J}_i(X) \mathcal{K}_{i-1}(
\sqrt{V^2 - X^2}) = 0$ by a bisection-based root-finding
method. This equation arises from the matching conditions at the
core-cladding interface. For each~$i$, enumerating the roots of the
characteristic equation as $X_{ij}$, $j=0,1,\ldots$, the propagation
constants are given by
\[
\beta_{ij} = \sqrt{ n_{\text{core}}^2 k_s^2 - X_{ij}^2 / r_{\text{core}}^2}.
\]
Set $\mathcal{R}_{ij} = X_{ij}/r_{\text{core}}$ and
$\mathcal{G}_{ij} = \sqrt{\beta_{ij}^2 - n_{\text{clad}}^2 k_s^2 }$. The exact
LP modes take the following form in polar coordinates:
\begin{equation}
\varphi_{ij}(r, \theta) =
\begin{cases}
\mathcal{K}_i(\mathcal{G}_{ij} r_{\text{core}}) \mathcal{J}_i(\mathcal{R}_{ij}
r) \cos (i \theta), & \hspace{1 cm} 0\le r<r_{\text{core}}
\\
\mathcal{J}_i(\mathcal{R}_{ij} r_{\text{core}}) \mathcal{K}_i(\mathcal{G}_{ij}
r) \cos (i \theta), & \hspace{1 cm} r_{\text{core}} \le r < r_{\text{clad}}.
\end{cases}
\end{equation}
The mode $\varphi_{ij}$ is usually called the ``LP$ij$'' mode.
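
As an illustration, a bisection-based search for the roots $X_{ij}$ on
$(0, V)$ may be sketched in Python as follows (using
\texttt{scipy.special}); the sampling density and iteration counts are
ad hoc choices, not tuned values.
\begin{verbatim}
import numpy as np
from scipy.special import jv, kv

def lp_roots(i, V, samples=2000, bisections=60):
    def f(X):
        Y = np.sqrt(V**2 - X**2)
        return X*jv(i-1, X)*kv(i, Y) + Y*jv(i, X)*kv(i-1, Y)
    xs = np.linspace(1e-9, V*(1 - 1e-9), samples)
    roots = []
    for a, b in zip(xs[:-1], xs[1:]):
        fa, fb = f(a), f(b)
        if fa * fb < 0:                      # sign change: bisect
            for _ in range(bisections):
                m = 0.5*(a + b)
                if fa * f(m) <= 0:
                    b = m
                else:
                    a, fa = m, f(m)
            roots.append(0.5*(a + b))
    return roots

# e.g., beta_ij = np.sqrt((n_core*k_s)**2 - (X_ij/r_core)**2)
\end{verbatim}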
For the particular case of the Tm parameters in Table~\ref{tab:Tm}, we
find that the fiber only has the LP01 and LP11 modes, while for the Yb
fiber with the parameters set in Table~\ref{tab:Yb}, we find four modes
LP01, LP11, LP21 and LP02. In our simulation the fiber geometry was
meshed using finite elements (with curved elements at the cladding
boundary and at the core-cladding interface) and the relevant LP modes
were interpolated into the degree $p$ Lagrange finite element space
based on the mesh. Integration involving finite element functions is
broken into a sum of integrals over the mesh elements, and a
sufficiently high-order quadrature rule is used to approximate each element
integral. This is how we approximate all required integrals, such as
in the computation of the coupling coefficient~\eqref{eq:Klm}, as well
as in power computations. Note that each step of the multi-stage ODE
solver requires many such integrations.
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{Tm_L10_signal_pump.png}
\hspace{-0.2cm}
\includegraphics[width=0.5\textwidth]{Yb_L10_signal_pump.png}
\caption{The simulated distribution of powers along the Tm-doped
(left) and the Yb-doped (right) fiber amplifier. The pump power
$P_p$ and the signal power $P_s$, as defined in~\eqref{eq:Ps_Pp},
are shown. The black dotted line plots $P_s+P_p$.}
\label{fig:Tm_Yb_10m}
\end{figure}
To quantitatively describe the light amplification results of the
simulation, we compute the signal and pump power, after the
approximate $Y(z) = [I_p(z), {A(z)}^t]^t$ has been computed, as follows:
\begin{align}
\label{eq:Ps_Pp}
P_s(z)
& = \int_{\varOmega_z} I_s(x, y, {z}) \;dx dy,
&&&
P_p(z)
& = \int_{\varOmega_z} I_p(z) \;dx dy = |\varOmega_z| I_p(z).
\end{align}
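
With the quadrature data used earlier for the coupling coefficients,
these powers may be evaluated as in the following sketch (reusing the
naming conventions of our earlier snippets).
\begin{verbatim}
import numpy as np
MU0_C = 4e-7 * np.pi * 2.99792458e8

def powers(z, Y, betas, modes, weights, n):
    I_p, A = Y[0].real, Y[1:]
    U = (A[:, None] * np.exp(1j * betas[:, None] * z) * modes).sum(axis=0)
    P_s = np.sum(weights * n * np.abs(U)**2) / MU0_C   # eq. (Ps_Pp)
    P_p = np.sum(weights) * I_p                        # |Omega_z| I_p
    return P_s, P_p
\end{verbatim}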
The initial condition $Y(0)$ is set so that the entire signal power is
fed into the LP01 mode at the inlet $z=0$. The initial pump power $P_p^0$
was set to 1000~W for the Yb case and 1100~W for the Tm case.
Figure~\ref{fig:Tm_Yb_10m} shows the distribution of the computed
$P_s$ and $P_p$ (marked ``signal'' and ``pump'' there) for the Tm and
Yb-doped fibers. The energy transfer from the pump light to the
signal light is clearly evident. We used $p=5$ Lagrange elements for
these plots. The use of 50 steps per mode beat length implies that
the Yb case required 421014 RK4 steps, while the Tm case required
302340 steps of the ODE solver to cover the 10~m fiber.
Each of these hundreds of thousands of steps required (multiple)
integrations over the fiber cross section (to compute integrals such
as the one in~\eqref{eq:Klm}). As mentioned above, these integrations
were performed using finite element quadratures. In unreported
experiments, we have attempted to reduce the cost of these
integrations by hyper-reduction techniques common in reduced-order
models~\cite{Rycke09}. One such technique is to use reduced-order
quadratures to approximate the cross-section integrals instead of
using finite elements to perform the integration precisely. Our pilot
studies into this used Gaussian quadrature rules on a disc (core) and
an annulus (cladding) of order as high as 20. In cases where this
resulted in substantial reductions in computational cost, we
unfortunately also observed unacceptably large deviations from the
results presented above. Further studies are needed to determine whether
other hyper-reduced quadratures, specifically taking the modes into
account, might prove more useful. In the next section, we describe a
completely different line of inquiry that has yielded considerable
acceleration in our simulations.
\section{The equivalent short fiber concept}
\label{sec:equiv}
In this section, we present the concept of
{a nearly} equivalent short fiber,
which is an artificially short fiber with unphysical parameters that
can mimic a longer physical fiber in some respects. Being shorter, the
equivalent fiber can be solved using fewer steps of an ODE solver,
thus providing significant reductions in computational cost.
To explain the rationale behind the equivalent short fiber approach,
first consider applying an ODE solver to solve the CMT
model~\eqref{eq:summary}. As mentioned in the previous section, a very
large number of ODE steps is needed to solve the CMT
system~\eqref{eq:summary} on a 10~m long fiber. Therefore, it would
be extremely useful to reduce the fiber length (and hence the number
of ODE steps) while still preserving the relevant physical processes
in the fiber amplifier. We shall now show that this is possible to
some extent using the computational scale model of an equivalent short
fiber described below.
To begin with, one might consider shortening the $z$-domain
in~\eqref{cmt} using a dimensional analysis. Note that the amplitudes
$A_m$ in~\eqref{cmt} have dimension $\mathrm{V/m}$ (volts per meter),
and $K_{lm}$ has units of $\mathrm{m}^{-1}$. Therefore, by
non-dimensionalization, one is led to believe that a shorter fiber of
length $\tilde L \ll L$ might, in some ways, behave similarly to the
original fiber of length $L$, provided its coupling coefficient is
magnified by $L / \tilde{L}$. However, not all nonlinear systems admit
scale models that are perfect replicas of the original. Below we shall
identify what properties of such a shorter fiber can be expected to be
close to the original.
We introduce the variable change
\[
\zeta (\tilde z) = \tilde z L/ \tilde L.
\]
A fiber of length $L$, under the variable change
$\tilde z = \zeta^{-1}(z) = z \tilde L/ L$ becomes one of length
$\tilde L$.
Under this variable change, \eqref{cmt} and \eqref{eqn:Ip} become
\begin{eqnarray}
\frac{\tilde{L}}{L} \; \frac{d}{d \tilde{z}} A_l \Big( \frac{\tilde{z} L}{\tilde{L}} \Big)
&=&
\sum\limits_{m=1}^M e^{\hat\imath (\beta_m - \beta_l) \tilde{z} L / \tilde{L}} \; K_{lm} \Big( A \Big( \frac{\tilde{z} L}{\tilde{L}} \Big), I_p \Big( \frac{\tilde{z} L}{\tilde{L}} \Big) \Big) \; A_m \Big( \frac{\tilde{z} L}{\tilde{L}} \Big) \\
\frac{\tilde{L}}{L} \; \frac{d}{d \tilde{z}} I_p \Big( \frac{\tilde{z} L}{\tilde{L}} \Big)
&=&
\mean{g_p} \; I_p \Big( \frac{\tilde{z} L}{\tilde{L}} \Big)
\end{eqnarray}
for all $ 0 < \tilde{z} < \tilde{L}$. In other words, defining
$\hat A_l = A_l \circ \zeta$ and $\hat I_p = I_p \circ \zeta$, the
above system may be rewritten as the following system on the shorter
domain $ 0 < \tilde{z} < \tilde{L}$ for
$\hat Y = [\hat{I}_p, \hat{A}_1, \ldots, \hat{A}_M]^t$,
\begin{align} \label{eqn: exact equiv cmt}
\frac{d \hat{Y}}{d \tilde{z}}
&=
\begin{bmatrix}
(L/\tilde{L})\; \mean{g_p (\hat Y)} \;
\hat{I}_p
\\\displaystyle\sum\limits_{m=1}^M e^{\hat\imath (\beta_m - \beta_l) \tilde{z}
L / \tilde{L}} \; (L/\tilde{L}) \; K_{lm} ( \hat Y )
\; \hat{A}_m
\end{bmatrix}.
\end{align}
Supplemented with the same initial data at $ z = \tilde{z} = 0$,
\eqref{eqn: exact equiv cmt} is exactly equivalent
to~\eqref{eq:summary}, i.e.,
\begin{equation}
\label{eq:replica}
\hat Y = Y \circ \zeta.
\end{equation}
In other words, the solution of~\eqref{eqn: exact equiv cmt}, being
the pull back of the original solution $Y$ to the shorter domain, is a
perfect replica of the original solution $Y$.
Unfortunately,~\eqref{eqn: exact equiv cmt} on
$0< \tilde z < \tilde L$ offers no computational advantages over the
original system~\eqref{eq:summary} on $0<z<L$. This is because the
mode beat length of \eqref{eqn: exact equiv cmt} has been reduced by a
factor of $\tilde{L}/L$ due to the variable change. So in order
to solve the ODE system \eqref{eqn: exact equiv cmt}, keeping the same
number of steps per mode beat length, the total number of steps needed
to solve the system has not been reduced.
This leads us to consider another mode coupling system with the same
mode beat length as the original system~\eqref{eq:summary}.
\begin{center}
\framebox{\parbox{\textwidth}{ %
Let
$
\tilde{Y} (\tilde z) = [\tilde I_p(\tilde z), \tilde{A}_1(\tilde z),
\cdots, \tilde{A}_M(\tilde z)]^t
$ solve
\begin{subequations}
\label{eqn:equiv cmt}
\begin{align}
\label{eqn:equiv-cmt-1}
\frac{d \tilde Y}{d \tilde z}
& =
\begin{bmatrix}
\langle (L/\tilde{L}) g_p(\tilde Y)\rangle & 0
\\
0 & \phi(\tilde z)\cdot (L/\tilde{L}) K(\tilde Y)
\end{bmatrix}
\tilde Y,
&& 0< \tilde z < \tilde L,
\\
\tilde Y(0) & =
[I_p^0, A^0]^t
&& \tilde z = 0.
\end{align}
\end{subequations}
}}
\end{center}
Clearly, \eqref{eqn:equiv cmt} is not the same as~\eqref{eqn: exact
equiv cmt} due to the differences in the phase factors. Therefore,
unlike the solution $\hat Y$ of~\eqref{eqn: exact equiv cmt}, the
solution $\tilde Y$ of~\eqref{eqn:equiv cmt} is not a perfect replica
of the original solution $Y$. Nonetheless, we shall now proceed to
argue that~\eqref{eqn:equiv cmt} is a practically useful scale model
of~\eqref{eq:summary} as it approximately preserves the power
distribution from the original. Power, unlike {the amplitude} $A$, is the {quantity that can be, and actually is, experimentally measured}.
Let $P_l$ and $\tilde{P}_l$ be respectively the powers contained in
the $l^{th}$ mode for the physical and equivalent fiber, defined by
\begin{eqnarray*}
P_l(z)
& =
\displaystyle\int_{\varOmega_z} \frac{n}{\mu_0 c} |A_l (z) \varphi_l(x,y)|^2 \; dx\,dy,
&
\qquad 0 < z < L,
\\
\tilde{P}_l (\tilde z)
& =
\displaystyle
\int_{\varOmega_z} \frac{n}{\mu_0 c} |\tilde{A}_l (\tilde z) \varphi_l(x,y)|^2 \; dx\,dy,
&
\qquad 0 < \tilde z < \tilde L.
\end{eqnarray*}
One may express these in terms of
\[
\Phi_l = \int_{\varOmega_z} \frac{n}{\mu_0 c} |\varphi_l|^2\;
dx\,dy,
\]
as $ P_l(z) = |a_l|^2 \Phi_l$, where
$ a_l(z) = A_l(z) e^{\hat\imath \beta_l z}$.
To obtain an equation for $P_l(z)$, we may start from the second equation of
the block system~\eqref{eq:summary}, or equivalently from \eqref{cmt},
which can
be rewritten as
\[
e^{\hat\imath \beta_l z}
d A_l/d z
= \sum_{m=1}^M K_{lm}(z) e^{\hat\imath \beta_m z} A_m(z).
\]
Then using $da_l/dz = e^{\hat\imath \beta_l z} \d_z A_l + \hat\imath \beta_l a_l,$
we have
\[
\frac{d a_l}{d z} = \hat\imath \beta_l a_l + \sum_{m=1}^M K_{lm}(z) a_m(z).
\]
Using also the complex conjugate of this equation,
we have
\begin{align*}
\frac{d |a_l|^2}{d z}
& = a_l \frac{d \overline{a}_l }{d z}
+ \overline{a}_l \frac{d a_l }{d z}
=
\hat\imath \beta_l a_l \overline{a}_l
- \hat\imath \beta_l \overline{a}_l a_l
+ \sum_{m=1}^M \big( \overline{K}_{lm} a_l \overline{a}_m +
{K}_{lm} \overline{a}_l {a}_m \big),
\end{align*}
i.e.,
\[
\frac{d |a_l|^2}{ d z}
=
2\sum_{m =1}^M \Re \big[ K_{lm}(Y) \,\overline{a}_l a_m\big],
\]
for all $l=1, \ldots, M$, or equivalently,
\begin{equation}
\label{eqn: pwr cmt}
\frac{d P_l}{d z}
= 2K_{ll}(Y) P_l + \rho_l(Y),
\end{equation}
where
\begin{equation}
\label{eq:rho}
\rho_l(Y) = 2\Phi_l \sum^M_{\substack{m = 1 \\ m \ne l}}
\Re \big[ K_{lm}(Y) \,\overline{a}_l a_m\big],
\end{equation}
for $l =1, \ldots, M$.
To the system~\eqref{eqn: pwr cmt}, let us also add the pump power
using the index $l=0$, i.e., let $P_0(z) \equiv P_p(z)$ as defined
in~\eqref{eq:Ps_Pp}. Then integrating~\eqref{eqn:Ip}, we obtain
$ d P_0/d z = \langle g_p \rangle P_0.$ Altogether, we have thus
obtained an equation for $P_l$ for all $l=0, \ldots, M$,
\begin{equation}
\label{eq:Power}
\frac{d P }{d z}
=
\begin{bmatrix}
\langle g_p(Y)\rangle & 0
\\
0 & 2 \text{diag} [K(Y)]
\end{bmatrix}
P
+
\begin{bmatrix}
0 \\
\rho(Y)
\end{bmatrix},
\end{equation}
where $P = [P_0, P_1, \ldots, P_M]^t$ and $\text{diag}[\cdot]$ denotes
the diagonal part of a matrix.
To understand the motivation for the remaining arguments, we now
highlight an observation concerning~\eqref{eq:Power}. A scale model
providing a perfect replica of the original power distribution is easy
to obtain if the system~\eqref{eq:Power} were an autonomous system:
indeed, if there exists a function $F$ of $P$ alone such that
$d P/ dz = F(P)$, then by merely scaling $F$ by $L/\tilde{L}$, we
obtain an equivalent system that provides perfect replicas of the
original power distribution on the shorter fiber of length $\tilde
L$. However~\eqref{eq:Power} is not autonomous, in general. Yet, for
practical fibers, our numerical experience suggests that
\eqref{eq:Power} behaves almost like an autonomous
system. Therefore our strategy now is to view~\eqref{eq:Power} as a
perturbation of an autonomous system.
{Of particular interest is the fact that if the fiber amplifier was robustly single-mode ($M = 1$ for the laser signal), then the governing system}~\eqref{eq:Power} {would be autonomous.
This can be achieved by not using a LMA amplifier, but one with a smaller fiber core size and/or a lower numerical aperture (NA) such that the fiber core supports only one guided core mode, the fundamental mode (indexed by $m = 1$), at the signal wavelength.
However, even with a LMA fiber, if one were to account for fiber bending effects, which cause the higher-order core modes (indexed by $1 < m \leq M$) to leak into the cladding region more so than for the fundamental mode, then the fiber would operate nearly as a single-mode fiber.
Actual fiber amplifiers are almost always wrapped on a spool rather than stretched out straight, thus ensuring this fiber bending effect.
This provides us with greater confidence of autonomous system-like behavior, even in real-world implementations of fiber laser amplifier systems.}
Recall from~\eqref{eq:Klm} that $K_{lm}$ is defined using
$ g_s(I_s, I_p)$, where $I_s$ takes the form
in~\eqref{eq:Irradiance}. We define the following perturbation of $I_s$,
\[
{\mathcal{I}_s}(P) =
\sum_{m=1}^M \frac{n}{\mu_0c} \left| a_m \varphi_m \right|^2
=
\sum_{m=1}^M \frac{n}{\mu_0c \Phi_m} P_m \left|\varphi_m \right|^2.
\]
It seems difficult to characterize when $I_s - {\mathcal{I}_s}$ is small {\it a
priori} (as it depends, e.g., on the localization and orthogonality
of the specific fiber modes) but after a CMT calculation, we may check
if this {difference} is small {\it a posteriori}. Deferring for the moment the
matter of the size of $I_s - {\mathcal{I}_s}$, let us proceed to
define
$\gamma_\ell(P) = g_\ell({\mathcal{I}_s}(P), I_p) = g_\ell({\mathcal{I}_s}(P),
P_0/|\varOmega_z|),$ for $\ell \in \{s, p\}.$ They represent the gain
functions obtained by replacing $I_s$ by ${\mathcal{I}_s}$. The new gain
functions in turn prompt the definition of a new mode coupling
coefficient: instead of~\eqref{eq:Klm}, we now consider
\[
\kappa_{lm}(P) = \frac {k_s}{2\beta_l} \int_{\varOmega_z}
\gamma_s(P)
\,
n(x,y) \varphi_m(x,y) \overline{ \varphi_l(x,y) } \; dx \,dy,
\]
for all $l, m =1, \ldots, M$. Additionally let
\[
\kappa_{00}(P) = \frac 1 2 \mean{\gamma_p(P)},
\]
and $\kappa_{0l} = \kappa_{l0} = 0,$ for all $l=1, \ldots, M$. We may
now view these $\kappa_{lm}$ as entries of an $(M+1)\times (M+1)$
matrix, using which \eqref{eq:Power} can be expressed as
\begin{align}
\label{eq:dPdz}
\frac{d P }{d z}
& =
2\kappa(P) P + \eta
\end{align}
where $\eta \in \mathbb{R}^{M+1}$ is defined
by
\[
\eta(z) =
\begin{bmatrix}
\langle g_p(Y) - \gamma_p(P)\rangle & 0 \\
0 & 2\,\text{diag}[K(Y) - \kappa(P) ]
\end{bmatrix}
P +
\begin{bmatrix}
0 \\ \rho(Y)
\end{bmatrix}.
\]
We view $\eta$ as a function of $z$, i.e.,
$\eta: [0, L] \to \mathbb{R}^{M+1}$. The $z$-dependence is clear once we
express the $z$-dependence of the solution $Y \equiv Y(z)$ and power
$P \equiv P(z).$ Equation~\eqref{eq:dPdz} shows that power is governed
by a perturbation of an autonomous system whenever $\eta$ is small
enough to be viewed as a perturbation.
Returning to consider~\eqref{eqn:equiv cmt}, we define analogous
quantities for the short fiber, namely
\[
\tilde{a}_l(\tilde z) = \tilde{A}_l(\tilde z) e^{\hat\imath \beta_l \tilde z},
\quad
\tilde{P}_0 = \int_{\varOmega_z} \tilde{I}_p \; dx\,dy,
\quad
\tilde{P}_l = |\tilde{a}_l |^2 \Phi_l,
\]
for $l=1, \ldots, M$. Then we may repeat the above arguments starting
from \eqref{eqn:equiv cmt} to obtain the following analogue of~\eqref{eq:dPdz}.
\begin{align}
\label{eqn: equiv cmt prtrb}
\frac{d \tilde{P}}{d \tilde z}
& =
2\frac{L}{\tilde L} \kappa(\tilde P) \tilde P + \tilde \eta,
\end{align}
where $\tilde{\eta} : [0, \tilde{L} ] \to \mathbb{R}^{M+1} $ is now
given by
\[
\tilde\eta
= \begin{bmatrix}
\langle g_p(\tilde Y) - \gamma_p( \tilde P)\rangle & 0 \\
0 & 2\,\text{diag}[K(\tilde Y) - \kappa(\tilde P) ]
\end{bmatrix}
\tilde P +
\begin{bmatrix}
0 \\ \rho(\tilde Y)
\end{bmatrix}.
\]
Note that $\rho(\tilde Y)$ is defined by~\eqref{eq:rho} after
replacing not only $Y$ by $\tilde Y,$ but also
$a_l$ (which depends on $Y$) by $\tilde{a}_l$ (which depends on
$\tilde Y$).
To conclude this analysis, it now suffices to compare~\eqref{eqn:
equiv cmt prtrb} and~\eqref{eq:dPdz}. Applying the change of
variable $\zeta$ to~\eqref{eq:dPdz}, we get
\begin{equation}
\label{eqn: equiv pwr cmt var chng}
\frac{d }{ d\tilde z} (P \circ \zeta)
= 2\frac{L}{\tilde L} \kappa( P \circ \zeta) P \circ \zeta +
\frac{L}{\tilde L} \; \eta \circ \zeta.
\end{equation}
Comparing \eqref{eqn: equiv cmt prtrb} and \eqref{eqn: equiv pwr cmt
var chng} we see that when $\eta$ and $\tilde \eta $ are negligibly
small compared to the other terms, $P \circ \zeta$ and $\tilde{P}$ solve
approximately the same equation, and consequently
\begin{equation}
\label{eq:1}
P\circ \zeta \approx \tilde{P}.
\end{equation}
We summarize this discussion as follows.
\begin{center}
\framebox{\parbox{\textwidth}{ %
The system \eqref{eqn:equiv cmt} is an equivalent short fiber
model of \eqref{cmt} in the sense {that the power $P_l$
contained in the $l^\text{th}$ mode is approximately preserved
from the original fiber model \eqref{cmt} through a change of
variable, under the above assumptions}. }}
\end{center}
\section{Computational verification of equivalent fiber concept}
\label{sec:results}
In this section, we perform extensive numerical experiments to verify
the practical utility of the equivalent fiber concept introduced in
Section~\ref{sec:equiv}. We shall compare the relative differences in
the powers obtained from the original fiber and its equivalent short
fiber for various settings to gauge the practical effectiveness of the
approximation~\eqref{eq:1}. In Subsections~\ref{ssec:Tm-equiv-realize}
and~\ref{ssec:Yb-equiv-realize}, we show a way to understand the
equivalent short fiber as a fiber with artificial parameters (with
values not physically realizable) for the Tm and Yb cases,
respectively.
\subsection{Realizing the equivalent short fiber for the Tm-doped case}
\label{ssec:Tm-equiv-realize}
The equations of the equivalent short fiber, namely~\eqref{eqn:equiv
cmt}, can be realized for a dopant medium if we can find a set of
``artificial'' parameters that would scale the original $g_p$ and the
original $K$ by $L/\tilde L$. In view of~\eqref{eq:Klm}, this effect
is achieved by scaling the original $g_\ell$ by $L/\tilde L$ for
$\ell \in \{s, p\}$. Now consider the expressions for $g_\ell$ for
Tm-doped fiber, given in \eqref{Tm: signal gain} and \eqref{Tm: pump
gain}. Clearly, in view of these expressions, $g_\ell$ will be scaled
by $L/\tilde L$ if all the ion populations $N_i$ are so scaled.
This observation, in turn, leads us to consider the expressions for
$N_i$ we derived in~\eqref{eq:N0123}. Let
\[
\tilde{N}_{\text{total}} = \frac{L}{\tilde L} N_{\text{total}}, \qquad
\tilde{\kappa}_R = \frac{\tilde L}{L} \kappa_R.
\]
The value of the expression for $N_0$ in~\eqref{eq:N0}
will be scaled by $L /\tilde L$ if we replace $\kappa_R$ by $\tilde{\kappa}_R$ and
$N_{\text{total}}$ by $\tilde{N}_{\text{total}}$, i.e., \eqref{eq:N0} implies
\begin{align}
\label{eq:Nt0}
\frac{L}{\tilde L} N_0
& = \frac{\gamma_0 \tilde{\kappa}_R \tilde{N}_{\text{total}} - \gamma_1 (1 + \gamma_2 + \gamma_3) -1}
{2 \tilde{\kappa}_R (\gamma_0 + \gamma_1 \gamma_4)}
\\ \nonumber
& + \frac{\sqrt{(1 - \gamma_0 \tilde{\kappa}_R \tilde{N}_{\text{total}} + \gamma_1(1 + \gamma_2 +\gamma_3))^2
+ 4(\gamma_0 + \gamma_1 \gamma_4) \tilde{\kappa}_R \tilde{N}_{\text{total}}}}{2 \tilde{\kappa}_R (\gamma_0 + \gamma_1 \gamma_4)}.
\end{align}
Let $\tilde{N}_0 = L N_0 / \tilde{L}$ denote the left hand side above.
Proceeding to analyze the expressions in~\eqref{eq:N123}, we find that the
same change in $\kappa_R$ and $N_{\text{total}}$, and the consequent change in
$N_0$ to $\tilde{N}_0$ per~\eqref{eq:Nt0}, also scales all other $N_i$ by
$L /\tilde L,$ i.e.,
\begin{align*}
\frac{L}{\tilde L} N_1
&= \frac{(\gamma_3 + \gamma_4 \tilde{\kappa}_R \tilde{N}_0)\gamma_1 \tilde{N}_0}{1 + \gamma_0 \tilde{\kappa}_R \tilde{N}_0}, \quad
\frac{L}{\tilde L} N_2 = \frac{\gamma_2 \gamma_1 \tilde{N}_0}{1 + \gamma_0 \tilde{\kappa}_R \tilde{N}_0}, \quad
\frac{L}{\tilde L} N_3 = \frac{\gamma_1 \tilde{N}_0}
{1 + \gamma_0 \tilde{\kappa}_R \tilde{N}_0}.
\end{align*}
Therefore, all the ion populations $N_i$ are scaled by $L/\tilde{L}$,
and so are $g_s$ and $g_p$. We have thus arrived at our main
observation of this subsection:
\begin{center}
\framebox{\parbox{\textwidth}{ %
A short fiber of length $\tilde L$ is equivalent to a Tm-doped
fiber of length $L$ if the fiber's original parameters $N_{\text{total}}$ and
$\kappa_R$ are changed to $\tilde{N}_{\text{total}} = L N_{\text{total}} / \tilde L$ and
$\tilde{\kappa}_R = \tilde L \kappa_R /L$, respectively, i.e., this change
realizes~\eqref{eqn:equiv cmt}.
}}
\end{center}
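
In code, realizing the equivalent short fiber therefore requires no
new solver: only two parameters are rescaled before invoking the same
routines, as the following sketch (in our earlier naming) indicates.
\begin{verbatim}
def tm_equivalent_params(p, L, L_short):
    q = dict(p)
    q['N_total'] = p['N_total'] * (L / L_short)   # scaled up
    q['kappa_R'] = p['kappa_R'] * (L_short / L)   # scaled down
    return q

# Then solve the same CMT system on [0, L_short] with parameters q;
# by eq. (1), its powers approximate the original P_l pulled back by zeta.
\end{verbatim}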
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{DiffTmPowAllinone_o1.png}
\hspace{-0.2cm}
\includegraphics[width=0.5\textwidth]{DiffTmPowEquidistrib_o1.png}
\caption{A comparison between a {\bf{Tm-doped fiber}} and its
equivalent short counterpart. The left panel shows the case where
the input signal power was wholly contained in the LP01 mode,
while the right panel shows the case where it was equally
distributed between the two modes.}
\label{fig:Tm-individual}
\end{figure}
To see how this idea works in practice, we consider two scenarios,
both with an equivalent short fiber of $\tilde{L} = 0.1$~m
representing the 10~m long Tm fiber we simulated in
Figure~\ref{fig:Tm_Yb_10m}. (All parameters are as in
Table~\ref{tab:Tm} except for $N_{\text{total}}$ and $\kappa_R$, which were
modified for the equivalent fiber as stated above.) In the first
scenario, 100\% of the input signal power is carried in the LP01 mode
at the inlet (the same setting as in the computation reported in
Figure~\ref{fig:Tm_Yb_10m}). In the left panel of
Figure~\ref{fig:Tm-individual}, we find that the plots of the computed
powers for the equivalent short fiber and the real fiber are virtually
identical. Even though the difference between them appears to be zero
visually, we have quantified this difference in the bottom left plot
of Figure~\ref{fig:Tm-individual}: since the domains of the two power
functions to be compared are different, we pull back the original
powers to the shorter domain and plot $P_l \circ \zeta - \tilde{P}_l$
(for the two modes, LP01 and LP11) on the shorter domain. Clearly,
from the scale of the plot, the absolute values of these differences
are found to be of the order of $10^{-9}$, so indeed the differences
between the two sets of power curves are negligible. The
practical value of the
equivalent short fiber calculation lies in the fact that it gave
essentially the same power curves about 100 times faster than the real-length fiber
calculation of Figure~\ref{fig:Tm_Yb_10m}.
In the second scenario, the total input power of 30~W is distributed
equally between the LP01 and LP11 modes. From the top right panel of
Figure~\ref{fig:Tm-individual}, we find that LP01 mode amplifies more
than the LP11 mode. Moreover, as in the left panel, the results from
the real and equivalent short fiber are visually
indistinguishable. However, a more careful examination of the
difference $P_l \circ \zeta - \tilde{P}_l$ in the bottom right plot
shows that maximal absolute power differences are about 0.3 near the
inlet of the fiber. Although this is manyfold larger than in the first
scenario, the relative power error of $3 \times 10^{-4}$ is still
small enough to make the equivalent short fiber a useful
practical tool. Note that the difference
$P_l \circ \zeta - \tilde{P}_l$ is now highly oscillatory, due to the
interactions between the two modes.
\subsection{Realizing the equivalent short fiber for the Yb-doped case}
\label{ssec:Yb-equiv-realize}
The equivalent short fiber in the Yb-doped case is more easily
realizable than the Tm-case as the Yb population dynamics is simpler.
The following conclusion is easily arrived at by proceeding as in
Subsection~\ref{ssec:Tm-equiv-realize}.
\begin{center}
\framebox{\parbox{\textwidth}{ %
A short fiber of length $\tilde L$ is equivalent to a Yb-doped
fiber of length $L$ if the fiber's original parameter $N_{\text{total}}$ is
changed to $\tilde{N}_{\text{total}} = L N_{\text{total}} / \tilde L$, i.e., this change
realizes~\eqref{eqn:equiv cmt}. }}
\end{center}
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{DiffYbPowAllinone_o1.png}
\hspace{-0.2cm}
\includegraphics[width=0.5\textwidth]{DiffYbPowEquidistrib_o1.png}
\caption{A comparison between a {\bf{Yb-doped fiber}} and its
equivalent short counterpart. The left panel shows the case where
the input signal power was wholly contained in the LP01 mode,
while the right panel shows the case where it was equally
distributed between all four modes.}
\label{fig:Yb-individual}
\end{figure}
Figure~\ref{fig:Yb-individual} gives some indication of the practical
performance of this equivalent short fiber. As in the experiments for
the Tm-fiber reported in Figure~\ref{fig:Tm-individual}, here we
consider two scenarios, the first where all input signal power is
given to the LP01 mode, and the second where the input power is
distributed to the four LP modes equally (25\% each). The left panel
in Figure~\ref{fig:Yb-individual} shows the former, while the right
panel shows the latter. The equivalent fiber is less faithful in the
latter case, but the scale of the errors observed in the bottom plots
in both cases is well within the acceptable error ranges in
engineering practice. (Laboratory power measurement uncertainties tend to be about $\pm5\%$.)
\subsection{Increase of error with respect to some parameters}
We want to understand how relative power differences between the
equivalent and real fiber vary with respect to two important input
parameters, the initial pump power $P_p^0$ and the short fiber length $\tilde L$. We consider
both the Tm and Yb fibers, holding the
original fiber length $L$ fixed to 10~m.
The solutions of the original and equivalent fiber models vary as
initial conditions are changed. Therefore to compare one with the
other in the {\em worst case} scenario, we take the maximum of the
power error measures over the set
\[
\mathcal{A} = \left\{ \alpha \in \mathbb{C}^M: \; \int_{\varOmega_z}
I_s(x, y, 0, \alpha) \; dx dy = P_s^0 \right\},
\]
i.e., the set $\mathcal{A}$ is the set of all input distributions yielding the
same initial signal power~$P_s^0$, which is set for the Tm and Yb fibers
per Tables~\ref{tab:Tm} and~\ref{tab:Yb}, respectively. The initial
pump power $P_p^0$ is varied in the range 1000--5000~W (thus providing a
corresponding range of initial values for the $I_p$-component in the
model). We solve the full CMT model and the equivalent short fiber
model, not only for this range of $P_p^0$, but also for decreasing
values of the short fiber length $\tilde L$. The following quantity
is then computed across all such solutions:
\begin{equation}
\label{eq:eps-def}
\varepsilon(P_p^0, \tilde{L}) =
\max_{A^0 \in \mathcal{A} }\;
\frac
{\displaystyle \max_{l=0,1, \ldots, M} \; \max_{0 \le z \le L}
\big| (P_l - \tilde{P}_l \circ \zeta^{-1} )(z) \big|}
{\displaystyle \max_{l=0,1, \ldots, M} \;\max_{0 \le z \le L}
\big| P_l(z)\big| }.
\end{equation}
Thus $\varepsilon$ represents the maximal possible power deviations
between the equivalent and original models over all input signal
distributions and over all mode components, as a function of initial
pump power $P_p^0$ and the fictitious length $\tilde{L}$. Values of
$\varepsilon$ then inform us of the ranges of $P_p^0$ and
$\tilde L$ where the equivalent short fiber is most useful.
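To make the definition concrete, the following NumPy sketch (our illustration, not the code used for the reported runs; the array layout and the handling of the pull-back $\zeta^{-1}$ are assumptions) computes the inner ratio of~\eqref{eq:eps-def} for a single input distribution; the outer maximum over $\mathcal{A}$ is a loop over sampled inputs, as described next.
\begin{verbatim}
import numpy as np

def power_deviation(P, Pt_pulled_back):
    # Inner ratio of eq. (eps-def).  Both arrays have shape
    # (num_modes, num_z) on a common z-grid in [0, L]; the short-fiber
    # powers are assumed to be already composed with zeta^{-1}.
    return np.abs(P - Pt_pulled_back).max() / np.abs(P).max()

# For one (Pp0, Lt) pair:
# eps = max(power_deviation(P[A0], Pt[A0]) for A0 in sampled_inputs)
\end{verbatim}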
To practically compute $\varepsilon$, we replace the maximum over the
infinite set $\mathcal{A}$ by a computable maximum over a finite set obtained
by assigning each mode component all possible values from 0 to 100\%
in 10\% increments (while constraining the total signal power to
$P_s^0$). In the case of the 2-mode thulium fiber, this resulted in
11 input power distributions, %
while for the ytterbium-doped fiber having 4 modes, 286 distributions
were required. The maximum over $z$ in~\eqref{eq:eps-def} is replaced
by the maximum over the points traversed by the ODE solver. We used
polynomial degree $p=5$ for the finite element approximation of modes
and the 7-stage Dormand--Prince Runge--Kutta method for solving the ODE
system. Collecting data from hundreds of simulations, we then plot
$\varepsilon$ in a two-dimensional grid of $P_p^0$ and $\tilde L$ values.
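For reference, a short Python sketch (again our illustration) enumerates the sampled input distributions; it reproduces the counts quoted above, 11 for the 2-mode Tm fiber and 286 for the 4-mode Yb fiber.
\begin{verbatim}
from itertools import combinations_with_replacement

def input_distributions(num_modes, step=0.1):
    # All ways to split 100% of the signal power among the modes
    # in increments of `step` (fractions summing to 1).
    n = round(1/step)
    dists = []
    for c in combinations_with_replacement(range(num_modes), n):
        dists.append([c.count(m)*step for m in range(num_modes)])
    return dists

print(len(input_distributions(2)))  # 11
print(len(input_distributions(4)))  # 286
\end{verbatim}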
\begin{figure}
\centering
\includegraphics[scale=.39]{pwr_rel_err_tm.png}\hspace{-0.3cm}
\includegraphics[scale=.39]{pwr_rel_err_yb.png}
\caption{
Maximal relative power differences between a 10~m long real
fiber and equivalent short fibers of various lengths $\tilde L$,
for various initial pump powers $P_p^0$. The Tm case is shown on
the left and Yb case on the right.
}
\label{fig:rel_pwr_diff}
\end{figure}
The resulting contour plots of the function $\varepsilon$ are given in
Figure~\ref{fig:rel_pwr_diff} for Yb and Tm fibers, for a range of
$P_p^0$ and $\tilde L$ values. We find that the relative error
$\varepsilon$ varies mildly with respect to $P_p^0$ for any fixed
$\tilde L$, indicating that the absolute error in the powers increases
more or less linearly as $P_p^0$ is increased. Looking vertically at
the plots of Figure~\ref{fig:rel_pwr_diff}, we find that holding
$P_p^0$ fixed, there are significant variations in $\varepsilon$ with
respect to $\tilde L$. The errors decidedly increase as $\tilde{L}$
decreases. Figure~\ref{fig:rel_pwr_diff} clearly indicates that
excessively short equivalent fiber lengths are not advisable.
\section{Introduction}
\label{S:Intro}
We consider the integral (Fig.~\ref{F:Intro})
\begin{equation}
I(a_1,a_2,a_3,a_4,a_5) = \frac{(k^2)^{\sum a_i-d}}{\pi^d}
\int \frac{d^d k_1\,d^d k_2}%
{[(k_1-k)^2]^{a_1} [(k_2-k)^2]^{a_2} [(k_1-k_2)^2]^{a_3} (k_2^2)^{a_4} (k_1^2)^{a_5}}
\label{Intro:mom}
\end{equation}
in $d$-dimensional Euclidean momentum space, with $d=4-2\varepsilon$.
It has a long and interesting history.
For many years, most of the information we had about
perturbative quantum field theory came (directly or indirectly)
from this integral.
All massless three-loop self-energy integrals (with integer indices)
reduce to 6 master integrals~\cite{CT:81},
5 of which are particular cases of $I$~(\ref{Intro:mom}).
Only one master integral (the non-planar one)
does not reduce to $I$;
however, using the gluing method~\cite{CT:81},
one can easily show that its value at $\varepsilon=0$
is equal to the ladder integral,
which reduces to $I(1,1,\varepsilon,1,1)$.
At four loops~\cite{BC:10}, 15 master integrals (of 28) reduce to $I$,
and thus can be easily expanded in $\varepsilon$ up to high powers.
\begin{figure}[ht]
\begin{center}
\begin{picture}(124,38)
\put(28.5,19){\makebox(0,0){\includegraphics{mom.eps}}}
\put(4.75,16){\makebox(0,0){$k$}}
\put(52.25,16){\makebox(0,0){$k$}}
\put(27.5,19){\makebox(0,0)[r]{$k_1-k_2$}}
\put(16,5){\makebox(0,0){$k_1$}}
\put(41,5){\makebox(0,0){$k_2$}}
\put(12,33){\makebox(0,0){$k_1-k$}}
\put(45,33){\makebox(0,0){$k_2-k$}}
\put(29.5,19){\makebox(0,0)[l]{3}}
\put(18,10){\makebox(0,0){5}}
\put(39,10){\makebox(0,0){4}}
\put(18,28){\makebox(0,0){1}}
\put(39,28){\makebox(0,0){2}}
\put(95.5,19){\makebox(0,0){\includegraphics{coord.eps}}}
\put(74.5,17){\makebox(0,0)[r]{$0$}}
\put(116.5,17){\makebox(0,0)[l]{$x$}}
\put(95.5,2){\makebox(0,0){$x_2$}}
\put(95.5,36){\makebox(0,0){$x_1$}}
\put(94.5,19){\makebox(0,0)[r]{$\bar{3}$}}
\put(83,5){\makebox(0,0){$\bar{5}$}}
\put(108,5){\makebox(0,0){$\bar{4}$}}
\put(83,33){\makebox(0,0){$\bar{1}$}}
\put(108,33){\makebox(0,0){$\bar{2}$}}
\end{picture}
\end{center}
\caption{Two-loop self-energy diagram in momentum and coordinate space.}
\label{F:Intro}
\end{figure}
This integral in coordinate space (Fig.~\ref{F:Intro})
\begin{align}
I(a_1,a_2,a_3,a_4,a_5) &\sim
\int \frac{d^d x_1\,d^d x_2}%
{(x_1^2)^{\bar{a}_1} [(x_1-x)^2]^{\bar{a}_2} [(x_1-x_2)^2]^{\bar{a}_3}
[(x_2-x)^2]^{\bar{a}_4} (x_2^2)^{\bar{a}_5}}
\nonumber\\
&{} \sim I(\bar{a}_2,\bar{a}_4,\bar{a}_3,\bar{a}_5,\bar{a}_1)\,,
\label{Intro:coord}
\end{align}
has the same form~(\ref{Intro:mom}) if we rename $x_i\to p_i$;
here trivial $\Gamma$-functions from Fourier transforms
are not explicitly shown, and
\begin{equation}
\bar{a}_i = \frac{d}{2} - a_i\,.
\label{Intro:dual}
\end{equation}
We can perform inversion of the integration momenta in~(\ref{Intro:mom})
\begin{equation*}
k_i = \frac{k_i'}{k_i^{\prime2}}\,,\quad
k_i^2 = \frac{1}{k_i^{\prime2}}\,,\quad
d^d k_i = \frac{d^d k_i'}{(k_i^{\prime2})^d}\,,\quad
(k_1-k_2)^2 = \frac{(k_1'-k_2')^2}{k_1^{\prime2}\,k_2^{\prime2}}\,,
\end{equation*}
and obtain
\begin{equation}
\raisebox{-9.25mm}{\begin{picture}(30,21)
\put(15,10.5){\makebox(0,0){\includegraphics{small.eps}}}
\put(17,10){\makebox(0,0){$a_3$}}
\put(7,4){\makebox(0,0){$a_5$}}
\put(23,4){\makebox(0,0){$a_4$}}
\put(7,16.5){\makebox(0,0){$a_1$}}
\put(23,16.5){\makebox(0,0){$a_2$}}
\end{picture}} =
\raisebox{-9.25mm}{\begin{picture}(50,21)
\put(25,10.5){\makebox(0,0){\includegraphics{small.eps}}}
\put(27,10){\makebox(0,0){$a_3$}}
\put(10,8){\makebox(0,0){$d-a_5$}}
\put(10,4){\makebox(0,0){${}-a_1-a_3$}}
\put(40,8){\makebox(0,0){$d-a_4$}}
\put(40,4){\makebox(0,0){${}-a_2-a_3$}}
\put(17,17){\makebox(0,0){$a_1$}}
\put(33,17){\makebox(0,0){$a_2$}}
\end{picture}}\quad.
\label{Intro:inv}
\end{equation}
Inversion relations can also be derived in coordinate space, of course.
The integrals
\begin{equation*}
\raisebox{-7.25mm}{\begin{picture}(30,17)
\put(15,8.5){\makebox(0,0){\includegraphics{ibpa.eps}}}
\end{picture}}\,,
\end{equation*}
where the dashed lines have integer indices,
were calculated in~\cite{CKT:80}
via Gegenbauer polynomials.
Of course, we now know that it is trivial to calculate them
using IBP (Sect.~\ref{S:IBP}).
\section{Integration by parts}
\label{S:IBP}
The IBP relations for this particular class of integrals
first appeared in~\cite{VPK:81}.
They are described in the text below formula~(15) of that paper;
the formula itself is the homogeneity relation
(which is a consequence of the IBP relations).
Soon IBP relations evolved into a fantastically universal
and efficient method for reducing all scalar integrals of a given topology
to a few master integrals~\cite{CT:81}.
The IBP relations allow one to trivially reduce integrals
with integer indices in the left (or right) triangle
to one-loop integrals expressible via $\Gamma$-functions:
\begin{equation}
\raisebox{-7.25mm}{\begin{picture}(30,17)
\put(15,8.5){\makebox(0,0){\includegraphics{ibpa.eps}}}
\end{picture}} \to
\raisebox{-7.25mm}{\begin{picture}(30,17)
\put(15,8.5){\makebox(0,0){\includegraphics{ibpb.eps}}}
\end{picture}}\,,
\raisebox{-7.25mm}{\begin{picture}(30,17)
\put(15,8.5){\makebox(0,0){\includegraphics{ibpc.eps}}}
\end{picture}}\,.
\label{IBP:red}
\end{equation}
If $a_3$ is not an integer, things are more difficult.
The combination~\cite{CT:81} of the IBP relations
\begin{equation}
\bigl[ (d - 2 a_3 - 4) \3+ + 2 (d - a_3 -3) \bigr]
\raisebox{-4mm}{\begin{picture}(22,11)
\put(11,5.5){\makebox(0,0){\includegraphics{ibpd.eps}}}
\put(11.5,5){\makebox(0,0)[l]{$a_3$}}
\end{picture}}
= 2 \1+ (\5- - \2- \3+)
\raisebox{-4mm}{\begin{picture}(22,11)
\put(11,5.5){\makebox(0,0){\includegraphics{ibpd.eps}}}
\put(11.5,5){\makebox(0,0)[l]{$a_3$}}
\end{picture}}
\label{IBP:a3}
\end{equation}
allows one to shift $a_3$ by $\pm1$
(if all integer indices are 1,
all integrals on the right-hand side of~(\ref{IBP:a3})
are trivial).
\section{Uniqueness}
\label{S:Uni}
Many interesting results for massless self-energy integrals were obtained
using the method of uniqueness~\cite{VPK:81,U:83,K:84,K:85}
(see also the textbook~\cite{V:98}).
It is based on the following relations.
In coordinate space
\begin{equation}
\raisebox{-5.75mm}{\begin{picture}(22,14)
\put(11,7){\makebox(0,0){\includegraphics{uni3.eps}}}
\put(11,12.5){\makebox(0,0)[b]{$a_1$}}
\put(11,1.5){\makebox(0,0)[t]{$a_2$}}
\end{picture}} =
\raisebox{-5.75mm}{\begin{picture}(22,14)
\put(11,7){\makebox(0,0){\includegraphics{uni1.eps}}}
\put(11,6.5){\makebox(0,0)[t]{$a_1+a_2$}}
\end{picture}}\,,\qquad
\raisebox{-2.75mm}{\begin{picture}(22,8)
\put(11,4){\makebox(0,0){\includegraphics{uni2.eps}}}
\put(6,3.5){\makebox(0,0)[t]{$a_1$}}
\put(16,3.5){\makebox(0,0)[t]{$a_2$}}
\end{picture}} \sim
\raisebox{-2.75mm}{\begin{picture}(22,8)
\put(11,4){\makebox(0,0){\includegraphics{uni1.eps}}}
\put(11,3.5){\makebox(0,0)[t]{$a_1+a_2-\frac{d}{2}$}}
\end{picture}}
\label{Uni:comb}
\end{equation}
(in momentum space the second formula becomes trivial,
and the first one contains some factor;
these combinations of $\Gamma$-functions from Fourier transforms
are not explicitly shown here).
The main element of the method is the star--triangle relation
which is valid if $a_1 + a_2 + a_3 = d$:
\begin{equation}
\raisebox{-7.25mm}{\begin{picture}(19,20)
\put(9.5,11.5){\makebox(0,0){\includegraphics{star.eps}}}
\put(10,14){\makebox(0,0)[l]{$a_1$}}
\put(5.5,7){\makebox(0,0)[br]{$a_2$}}
\put(13.5,7){\makebox(0,0)[bl]{$a_3$}}
\end{picture}} \sim
\raisebox{-7.25mm}{\begin{picture}(19,20)
\put(9.5,11.5){\makebox(0,0){\includegraphics{triangle.eps}}}
\put(9.5,3){\makebox(0,0)[t]{$\bar{a}_1$}}
\put(4.6,11){\makebox(0,0)[br]{$\bar{a}_3$}}
\put(14.4,11){\makebox(0,0)[bl]{$\bar{a}_2$}}
\end{picture}}\,,
\label{Uni:st}
\end{equation}
where $\bar{a}_i$ are defined by~(\ref{Intro:dual})
(note that $\bar{a}_1 + \bar{a}_2 + \bar{a}_3 = d/2$).
It can be easily derived using inversion.
Kazakov~\cite{K:85} has calculated a non-trivial integral
\begin{equation}
I(a) = \raisebox{-4mm}{\begin{picture}(22,11)
\put(11,5.5){\makebox(0,0){\includegraphics{ibpd.eps}}}
\put(11.5,5){\makebox(0,0)[l]{$a$}}
\end{picture}}
\label{Uni:Ia}
\end{equation}
(the dashed lines have indices 1)
via hypergeometric functions of the argument $-1$%
\footnote{Earlier, a particular case of this family, $I(\varepsilon)$,
was calculated via hypergeometric functions of 1~\cite{H:82},
see Appendix~\ref{S:Hathrell}.}.
It has a symmetry property
\begin{equation}
I(1+a) = I(1-a-3\varepsilon)
\label{Uni:sym}
\end{equation}
(see Sect.~\ref{S:Sym});
$I(0)$ is known.
The IBP relation~(\ref{IBP:a3}) gives
\begin{equation}
I(1+a) = \frac{1-a-2\varepsilon}{a+\varepsilon} I(a)
- 2
\frac{(1-2a-3\varepsilon) \Gamma^2(1-\varepsilon) \Gamma(-a-\varepsilon) \Gamma(a+2\varepsilon)}%
{(a+\varepsilon) \Gamma(1+a) \Gamma(2-a-3\varepsilon)}\,.
\label{Uni:IBP}
\end{equation}
If we write $I(1+a)$ via a new function $G(1+a)$
\begin{equation*}
I(1+a) = 2
\frac{\Gamma^2(1-\varepsilon) \Gamma(-a-\varepsilon) \Gamma(a+2\varepsilon)}%
{\Gamma(1+a) \Gamma(1-a-3\varepsilon)}
G(1+a)\,,
\end{equation*}
then this recurrence relation becomes simpler:
\begin{equation*}
G(1+a) = \frac{a}{1-a-3\varepsilon} G(a)
+ \frac{1}{a-1+3\varepsilon}
\left( \frac{1}{a+\varepsilon} + \frac{1}{a-1+2\varepsilon} \right)\,.
\end{equation*}
Writing this function as a sum over its poles
\begin{equation*}
G(1+a) = \sum_{n=1}^\infty f^{(1)}_n
\left( \frac{1}{n+a+\varepsilon} + \frac{1}{n-a-2\varepsilon} \right)
+ \sum_{n=1}^\infty f^{(2)}_n
\left( \frac{1}{n+a} + \frac{1}{n-a-3\varepsilon} \right)
\end{equation*}
(where the symmetry~(\ref{Uni:sym}) is taken into account),
we obtain recurrence relations for the residues:
\begin{equation*}
f^{(1)}_n = - \frac{n+\varepsilon}{n+1-2\varepsilon} f^{(1)}_{n+1}\,,\quad
f^{(2)}_n = - \frac{n}{n+1-3\varepsilon} f^{(2)}_{n+1}\,.
\end{equation*}
Their solution is
\begin{equation*}
f^{(1)}_n = (-1)^n \frac{\Gamma(n+1-2\varepsilon)}{\Gamma(n+\varepsilon)} c_1(\varepsilon)\,,\quad
f^{(2)}_n = (-1)^n \frac{\Gamma(n+1-3\varepsilon)}{\Gamma(n)} c_2(\varepsilon)\,,
\end{equation*}
where the constants are obtained from the initial condition:
\begin{equation*}
c_1(\varepsilon) = \frac{\Gamma(\varepsilon)}{\Gamma(2-2\varepsilon)}\,,\quad
c_2(\varepsilon) = -
\frac{\Gamma(\varepsilon) \Gamma(1-\varepsilon) \Gamma(1+\varepsilon)}%
{\Gamma(2-2\varepsilon) \Gamma(1-2\varepsilon) \Gamma(1+2\varepsilon)}\,.
\end{equation*}
Therefore we arrive at
\begin{align*}
&I(1+a) = 2
\frac{\Gamma^2(1-\varepsilon) \Gamma(\varepsilon) \Gamma(a+2\varepsilon) \Gamma(-a-\varepsilon)}%
{\Gamma(2-2\varepsilon) \Gamma(1+a) \Gamma(1-a-3\varepsilon)}\\
&{}\times\Biggl[ \sum_{n=1}^\infty (-1)^n \frac{\Gamma(n+1-2\varepsilon)}{\Gamma(n+\varepsilon)}
\left( \frac{1}{n+a+\varepsilon} + \frac{1}{n-a-2\varepsilon} \right)\\
&\qquad{} -
\frac{\Gamma(1-\varepsilon) \Gamma(1+\varepsilon)}%
{\Gamma(1-2\varepsilon) \Gamma(1+2\varepsilon)}
\sum_{n=1}^\infty (-1)^n \frac{\Gamma(n+1-3\varepsilon)}{\Gamma(n)}
\left( \frac{1}{n+a} + \frac{1}{n-a-3\varepsilon} \right)
\Biggr]\,.
\end{align*}
This result can be written via hypergeometric functions:
\begin{align}
&I(1+a) = 2
\frac{\Gamma(\varepsilon) \Gamma^2(1-\varepsilon) \Gamma(a+2\varepsilon) \Gamma(-a-\varepsilon)}%
{\Gamma(2-2\varepsilon)}
\Biggl\{
\frac{\Gamma(2-2\varepsilon)}%
{\Gamma(1+\varepsilon) \Gamma(1+a) \Gamma(1-a-3\varepsilon)}
\nonumber\\
&{}\times\Biggl[ \frac{1}{1+a+\varepsilon}
\F{3}{2}{1,2-2\varepsilon,1+a+\varepsilon\\1+\varepsilon,2+a+\varepsilon}{-1}
\nonumber\\
&\qquad{} + \frac{1}{1-a-2\varepsilon}
\F{3}{2}{1,2-2\varepsilon,1-a-2\varepsilon\\1+\varepsilon,2-a-2\varepsilon}{-1}
\Biggr]
- \cos(\pi\varepsilon) \Biggr\}\,.
\label{Uni:F32}
\end{align}
Kazakov~\cite{K:84,K:85} also derived several terms of the expansion
of $I(a_i)$ with $a_i=1+n_i\varepsilon$ in $\varepsilon$
using symmetry properties of $I$;
we shall discuss this expansion in Sect.~\ref{S:Sym}.
\section{Symmetry}
\label{S:Sym}
Symmetries of the integrals~(\ref{Intro:mom})
which follow from inversion~(\ref{Intro:inv}),
duality between $x$ and $p$ space~(\ref{Intro:coord}),
and the star--triangle relation~(\ref{Uni:st})
were considered in~\cite{VPK:81}.
Gorishnii and Isaev~\cite{GI:84} discovered the tetrahedron symmetry group $S_4$
of the integrals $I$.
Let's consider the vacuum diagram in Fig.~\ref{F:tetra},
in which all lines have mass $m$.
If we integrate over the momentum of line 6 last, then
\begin{equation*}
I = \frac{1}{\pi^{d/2}} \int
\frac{F(k^2)\,d^d k}{(k^2+m^2)^{a_6}}\,,
\end{equation*}
where $F(k^2)$ is the self-energy diagram with external momentum $k$
obtained by cutting line 6.
Its asymptotics is
\begin{equation*}
F(k^2\to\infty) \to
\frac{I(a_1,a_2,a_3,a_4,a_5)}{(k^2)^{a_1+a_2+a_3+a_4+a_5-d}}
\end{equation*}
(it comes from the hard region;
other regions give contributions with different powers of $k^2$).
Hence the vacuum diagram has the ultraviolet pole
\begin{equation*}
I_{\text{UV}} = \frac{1}{\Gamma(d/2)}
\frac{I(a_1,a_2,a_3,a_4,a_5)}{a_1+a_2+a_3+a_4+a_5+a_6-\frac{3}{2}d}
\end{equation*}
(other regions produce poles at different places).
But we can equally well cut some other line;
$I_{\text{UV}}$ must remain intact.
Therefore $I(a_1,a_2,a_3,a_4,a_5)$ (which also depends on $d$)
can be considered as a function of $a_1,a_2,a_3,a_4,a_5,a_6$,
where $a_6$ is defined by
\begin{equation}
a_1 + a_2 + a_3 + a_4 + a_5 + a_6 = \frac{3}{2} d\,,
\label{Sym:a6}
\end{equation}
with the tetrahedron symmetry.
\begin{figure}[ht]
\begin{center}
\begin{picture}(45,39)
\put(22.5,19.5){\makebox(0,0){\includegraphics[width=45mm]{tetra.eps}}}
\put(17.5,20){\makebox(0,0){\textcolor{blue}{1}}}
\put(35.5,22){\makebox(0,0){\textcolor{blue}{2}}}
\put(24.5,21){\makebox(0,0){\textcolor{blue}{3}}}
\put(30,13.5){\makebox(0,0){\textcolor{blue}{4}}}
\put(19,9){\makebox(0,0){\textcolor{blue}{5}}}
\put(27,4){\makebox(0,0){\textcolor{blue}{6}}}
\put(13,2.5){\makebox(0,0){\textcolor{red}{7}}}
\put(43,5){\makebox(0,0){\textcolor{red}{8}}}
\put(19,15){\makebox(0,0){\textcolor{red}{9}}}
\put(25.5,36){\makebox(0,0){\textcolor{red}{10}}}
\end{picture}
\end{center}
\caption{The tetrahedron diagram.}
\label{F:tetra}
\end{figure}
Gorishnii and Isaev also considered symmetry relations
following from the star--triangle relation~(\ref{Uni:st})
(which were discussed in~\cite{VPK:81,K:85}).
Taken together, these symmetry transformations are sufficient
for generating the complete symmetry group of the integrals $I$.
But they could not identify this group.
A complete solution of this problem was obtained in~\cite{B:86,BB:88}.
Let's introduce notation
\begin{equation}
I(a_1,a_2,a_3,a_4,a_5) =
\left[\prod_{i=1}^{10} G(a_i)\right]^{1/2}
\frac{\bar{I}(a_1,a_2,a_3,a_4,a_5,a_6)}%
{(d-3) \Gamma^2\left(\frac{d}{2}-1\right)}\,,
\label{Sym:barI}
\end{equation}
where $a_6$ is defined by~(\ref{Sym:a6}),
$\bar{a}_i$ by~(\ref{Intro:dual}),
indices of the vertices (Fig.~\ref{F:tetra}) are
\begin{equation*}
a_7 = a_1 + a_5 + a_6 - \frac{d}{2}\,,\quad
a_8 = a_2 + a_4 + a_6 - \frac{d}{2}\,,\quad
a_9 = a_3 + a_4 + a_5 - \frac{d}{2}\,,\quad
a_{10} = a_1 + a_2 + a_3 - \frac{d}{2}\,,
\end{equation*}
and
\begin{equation*}
G(a) = \frac{\Gamma(\bar{a})}{\Gamma(a)}\,.
\end{equation*}
The pre-factor in~(\ref{Sym:barI}) is chosen in such a way
that the $\Gamma$-function factors
in~(\ref{Uni:comb}) and (\ref{Uni:st}) cancel in the symmetry relations.
The full symmetry group is generated by 3 transformations.
The first two generators are elements of the tetrahedron group:
\begin{align}
&\raisebox{-19mm}{\begin{picture}(45,39)
\put(22.5,19.5){\makebox(0,0){\includegraphics[width=45mm]{tetra.eps}}}
\put(17.5,20){\makebox(0,0){\textcolor{blue}{1}}}
\put(35.5,22){\makebox(0,0){\textcolor{blue}{2}}}
\put(24.5,21){\makebox(0,0){\textcolor{blue}{3}}}
\put(30,13.5){\makebox(0,0){\textcolor{blue}{4}}}
\put(19,9){\makebox(0,0){\textcolor{blue}{5}}}
\put(27,4){\makebox(0,0){\textcolor{blue}{6}}}
\put(13,2.5){\makebox(0,0){\textcolor{red}{7}}}
\put(43,5){\makebox(0,0){\textcolor{red}{8}}}
\put(19,15){\makebox(0,0){\textcolor{red}{9}}}
\put(25.5,36){\makebox(0,0){\textcolor{red}{10}}}
\end{picture}}
\hspace{5mm}\to
\raisebox{-19mm}{\begin{picture}(45,39)
\put(22.5,19.5){\makebox(0,0){\includegraphics[width=45mm]{tetra.eps}}}
\put(17.5,20){\makebox(0,0){\textcolor{blue}{3}}}
\put(35.5,22){\makebox(0,0){\textcolor{blue}{5}}}
\put(24.5,21){\makebox(0,0){\textcolor{blue}{4}}}
\put(30,13.5){\makebox(0,0){\textcolor{blue}{6}}}
\put(19,9){\makebox(0,0){\textcolor{blue}{2}}}
\put(27,4){\makebox(0,0){\textcolor{blue}{1}}}
\put(13,2.5){\makebox(0,0){\textcolor{red}{10}}}
\put(43,5){\makebox(0,0){\textcolor{red}{7}}}
\put(19,15){\makebox(0,0){\textcolor{red}{8}}}
\put(25.5,36){\makebox(0,0){\textcolor{red}{9}}}
\end{picture}}\,,
\label{Sym:gen1}\\
&\raisebox{-19mm}{\begin{picture}(45,39)
\put(22.5,19.5){\makebox(0,0){\includegraphics[width=45mm]{tetra.eps}}}
\put(17.5,20){\makebox(0,0){\textcolor{blue}{1}}}
\put(35.5,22){\makebox(0,0){\textcolor{blue}{2}}}
\put(24.5,21){\makebox(0,0){\textcolor{blue}{3}}}
\put(30,13.5){\makebox(0,0){\textcolor{blue}{4}}}
\put(19,9){\makebox(0,0){\textcolor{blue}{5}}}
\put(27,4){\makebox(0,0){\textcolor{blue}{6}}}
\put(13,2.5){\makebox(0,0){\textcolor{red}{7}}}
\put(43,5){\makebox(0,0){\textcolor{red}{8}}}
\put(19,15){\makebox(0,0){\textcolor{red}{9}}}
\put(25.5,36){\makebox(0,0){\textcolor{red}{10}}}
\end{picture}}
\hspace{5mm}\to
\raisebox{-19mm}{\begin{picture}(45,39)
\put(22.5,19.5){\makebox(0,0){\includegraphics[width=45mm]{tetra.eps}}}
\put(17.5,20){\makebox(0,0){\textcolor{blue}{2}}}
\put(35.5,22){\makebox(0,0){\textcolor{blue}{1}}}
\put(24.5,21){\makebox(0,0){\textcolor{blue}{3}}}
\put(30,13.5){\makebox(0,0){\textcolor{blue}{5}}}
\put(19,9){\makebox(0,0){\textcolor{blue}{4}}}
\put(27,4){\makebox(0,0){\textcolor{blue}{6}}}
\put(13,2.5){\makebox(0,0){\textcolor{red}{8}}}
\put(43,5){\makebox(0,0){\textcolor{red}{7}}}
\put(19,15){\makebox(0,0){\textcolor{red}{9}}}
\put(25.5,36){\makebox(0,0){\textcolor{red}{10}}}
\end{picture}}\,.
\label{Sym:gen2}
\end{align}
The last one comes from uniqueness.
First we introduce an extra dot on line 3 to make vertex 10 unique;
then use the star--triangle relation~(\ref{Uni:st});
and then combine two lines:
\begin{align*}
&\raisebox{-15mm}{\begin{picture}(26,31)
\put(13,14.25){\makebox(0,0){\includegraphics{u1.eps}}}
\put(7,16.5){\makebox(0,0){\textcolor{blue}{1}}}
\put(19,16.5){\makebox(0,0){\textcolor{blue}{2}}}
\put(13.5,9.5){\makebox(0,0)[l]{\textcolor{blue}{3}}}
\put(25.124,8.5){\makebox(0,0){\textcolor{blue}{4}}}
\put(0.876,8.5){\makebox(0,0){\textcolor{blue}{5}}}
\put(13,29.5){\makebox(0,0){\textcolor{blue}{6}}}
\put(2.608,21.5){\makebox(0,0){\textcolor{red}{7}}}
\put(23.392,21.5){\makebox(0,0){\textcolor{red}{8}}}
\put(13,3.5){\makebox(0,0){\textcolor{red}{9}}}
\put(13,15.5){\makebox(0,0){\textcolor{red}{10}}}
\end{picture}}
\quad\to\quad
\raisebox{-15mm}{\begin{picture}(26,31)
\put(13,14.25){\makebox(0,0){\includegraphics{u2.eps}}}
\put(7,16.5){\makebox(0,0){\textcolor{blue}{1}}}
\put(19,16.5){\makebox(0,0){\textcolor{blue}{2}}}
\put(13.5,13.125){\makebox(0,0)[l]{\textcolor{blue}{$\overline{1}+\overline{2}$}}}
\put(13.5,8.375){\makebox(0,0)[l]{\textcolor{blue}{10}}}
\put(25.124,8.5){\makebox(0,0){\textcolor{blue}{4}}}
\put(0.876,8.5){\makebox(0,0){\textcolor{blue}{5}}}
\put(13,29.5){\makebox(0,0){\textcolor{blue}{6}}}
\put(2.608,21.5){\makebox(0,0){\textcolor{red}{7}}}
\put(23.392,21.5){\makebox(0,0){\textcolor{red}{8}}}
\put(13,3.5){\makebox(0,0){\textcolor{red}{$\overline{6}$}}}
\end{picture}}
\quad\to{}\displaybreak\\
&\raisebox{-15mm}{\begin{picture}(26,31)
\put(13,14.25){\makebox(0,0){\includegraphics{u3.eps}}}
\put(7,16.5){\makebox(0,0){\textcolor{blue}{$\overline{2}$}}}
\put(19,16.5){\makebox(0,0){\textcolor{blue}{$\overline{1}$}}}
\put(13.5,9.5){\makebox(0,0)[l]{\textcolor{blue}{10}}}
\put(25.124,8.5){\makebox(0,0){\textcolor{blue}{4}}}
\put(0.876,8.5){\makebox(0,0){\textcolor{blue}{5}}}
\put(13,29.5){\makebox(0,0){\textcolor{blue}{6}}}
\put(13,3.5){\makebox(0,0){\textcolor{red}{$\overline{6}$}}}
\put(13,15.5){\makebox(0,0){\textcolor{red}{3}}}
\put(13,22){\makebox(0,0)[b]{\textcolor{blue}{$1+2$}}}
\end{picture}}
\quad\to\quad
\raisebox{-15mm}{\begin{picture}(26,31)
\put(13,14.25){\makebox(0,0){\includegraphics{u1.eps}}}
\put(7,16.5){\makebox(0,0){\textcolor{blue}{$\overline{2}$}}}
\put(19,16.5){\makebox(0,0){\textcolor{blue}{$\overline{1}$}}}
\put(13.5,9.5){\makebox(0,0)[l]{\textcolor{blue}{10}}}
\put(25.124,8.5){\makebox(0,0){\textcolor{blue}{4}}}
\put(0.876,8.5){\makebox(0,0){\textcolor{blue}{5}}}
\put(13,29.5){\makebox(0,0){\textcolor{blue}{9}}}
\put(2.608,21.5){\makebox(0,0){\textcolor{red}{7}}}
\put(23.392,21.5){\makebox(0,0){\textcolor{red}{8}}}
\put(13,3.5){\makebox(0,0){\textcolor{red}{$\overline{6}$}}}
\put(13,15.5){\makebox(0,0){\textcolor{red}{3}}}
\end{picture}}\quad.
\end{align*}
I.\,e., the third generator is
\begin{equation}
\raisebox{-19mm}{\begin{picture}(45,39)
\put(22.5,19.5){\makebox(0,0){\includegraphics[width=45mm]{tetra.eps}}}
\put(17.5,20){\makebox(0,0){\textcolor{blue}{1}}}
\put(35.5,22){\makebox(0,0){\textcolor{blue}{2}}}
\put(24.5,21){\makebox(0,0){\textcolor{blue}{3}}}
\put(30,13.5){\makebox(0,0){\textcolor{blue}{4}}}
\put(19,9){\makebox(0,0){\textcolor{blue}{5}}}
\put(27,4){\makebox(0,0){\textcolor{blue}{6}}}
\put(13,2.5){\makebox(0,0){\textcolor{red}{7}}}
\put(43,5){\makebox(0,0){\textcolor{red}{8}}}
\put(19,15){\makebox(0,0){\textcolor{red}{9}}}
\put(25.5,36){\makebox(0,0){\textcolor{red}{10}}}
\end{picture}}
\hspace{5mm}\to
\raisebox{-19mm}{\begin{picture}(45,39)
\put(22.5,19.5){\makebox(0,0){\includegraphics[width=45mm]{tetra.eps}}}
\put(17.5,20){\makebox(0,0){\textcolor{blue}{$\overline{2}$}}}
\put(35.5,22){\makebox(0,0){\textcolor{blue}{$\overline{1}$}}}
\put(24.5,21){\makebox(0,0){\textcolor{blue}{10}}}
\put(30,13.5){\makebox(0,0){\textcolor{blue}{4}}}
\put(19,9){\makebox(0,0){\textcolor{blue}{5}}}
\put(27,4){\makebox(0,0){\textcolor{blue}{$\overline{9}$}}}
\put(13,2.5){\makebox(0,0){\textcolor{red}{7}}}
\put(43,5){\makebox(0,0){\textcolor{red}{8}}}
\put(19,15){\makebox(0,0){\textcolor{red}{$\overline{6}$}}}
\put(25.5,36){\makebox(0,0){\textcolor{red}{3}}}
\end{picture}}\quad.
\label{Sym:gen3}
\end{equation}
The structure of the group becomes apparent if we introduce new variables:
\begin{align}
&\bar{I}(a_1,a_2,a_3,a_4,a_5,a_6) = \tilde{I}(b_1,b_2,b_3,b_4,b_5,b_6)\,,
\label{Sym:b}\\
&\left(\begin{array}{c}
b_1\\b_2\\b_3\\b_4\\b_5\\b_6
\end{array}\right) =
\frac{1}{3}
\left(\begin{array}{rrrrrr}
1 & 2 & 0 & 1 & -1 & 0 \\
0 & 1 & 2 & 0 & 1 & -1 \\
-1 & 0 & 1 & 2 & 0 & 1 \\
1 & -1 & 0 & 1 & 2 & 0 \\
0 & 1 & -1 & 0 & 1 & 2 \\
2 & 0 & 1 & -1 & 0 & 1
\end{array}\right)
\left(\begin{array}{c}
a_1\\a_2\\a_3\\a_4\\a_5\\a_6
\end{array}\right)\,.
\nonumber
\end{align}
Then our 3 generators transform $\tilde{I}(b_1,b_2,b_3,b_4,b_5,b_6)$ to
\begin{align*}
&\hat{P}_1 \tilde{I}(b_1,b_2,b_3,b_4,b_5,b_6) =
\tilde{I}(\bar{b}_1,\bar{b}_6,\bar{b}_2,\bar{b}_4,\bar{b}_3,\bar{b}_5)\,,\\
&\hat{P}_2 \tilde{I}(b_1,b_2,b_3,b_4,b_5,b_6) =
\tilde{I}(\bar{b}_3,\bar{b}_5,\bar{b}_1,\bar{b}_6,\bar{b}_2,\bar{b}_4)\,,\\
&\hat{P}_3 \tilde{I}(b_1,b_2,b_3,b_4,b_5,b_6) =
\tilde{I}(b_3,b_2,b_1,b_4,b_5,b_6)\,.
\end{align*}
We can combine them into 3 better generators
\begin{equation*}
\hat{Q}_1 = (\hat{P}_3 \hat{P}_1^2)^2 \hat{P}_3 \hat{P}_2 \hat{P}_1\,,\quad
\hat{Q}_2 = \hat{P}_1^3 \hat{P}_3 \hat{P}_1\,,\quad
\hat{Q}_3 = (\hat{P}_3 \hat{P}_2 \hat{P}_1^2)^2\,;
\end{equation*}
they transform $\tilde{I}(b_1,b_2,b_3,b_4,b_5,b_6)$ to
\begin{align*}
&\hat{Q}_1 \tilde{I}(b_1,b_2,b_3,b_4,b_5,b_6) =
\tilde{I}(b_2,b_3,b_4,b_5,b_6,b_1)\,,\\
&\hat{Q}_2 \tilde{I}(b_1,b_2,b_3,b_4,b_5,b_6) =
\tilde{I}(b_2,b_1,b_3,b_4,b_5,b_6)\,,\\
&\hat{Q}_3 \tilde{I}(b_1,b_2,b_3,b_4,b_5,b_6) =
\tilde{I}(\bar{b}_1,\bar{b}_2,\bar{b}_3,\bar{b}_4,\bar{b}_5,\bar{b}_6)\,.
\end{align*}
The first two generate the symmetric group $S_6$
of permutations of 6 variables $b_{1,\ldots,6}$;
and $\hat{Q}_3$ generates $Z_2$.
So, the symmetry group of the integrals $I$ is $S_6\times Z_2$~\cite{BB:88};
it contains $6!\cdot2=1440$ elements~\cite{B:86}.
The most useful information is the expansion
of $I(a_1,a_2,a_3,a_4,a_5)$~(\ref{Intro:mom}) in $\varepsilon$
at $d=4-2\varepsilon$ and $a_{1,\ldots,5}=1+\mathcal{O}(\varepsilon)$.
This means that all six $a_i$, including $a_6$, are $1+\mathcal{O}(\varepsilon)$;
the same is true for $b_i$
(note that $\sum_{i=1}^6 b_i = \frac{3}{2} d$).
The function~(\ref{Sym:b}) is invariant with respect to $S_6\times Z_2$;
therefore, the expansion can be written entirely via invariants of this group.
The invariants are
\begin{equation*}
I_1 = 1 - \frac{d}{4} = 1 - \frac{1}{6} \sum_{i=1}^6 b_i
\end{equation*}
and
\begin{equation*}
I_n = \sum_{i=1}^6 \left(b_i - \frac{d}{4}\right)^n
\end{equation*}
for $n=2$, 3, 4, 5, 6.
The total degree in $I_3$ and $I_5$ must be even,
because they change their signs under $\hat{Q}_3$:
\begin{equation}
\bar{I}(a_1,a_2,a_3,a_4,a_5,a_6) =
\sum_{i_3+i_5\;\text{even}} C_{i_1 i_2 i_3 i_4 i_5 i_6}
I_1^{i_1} I_2^{i_2} I_3^{i_3} I_4^{i_4} I_5^{i_5} I_6^{i_6}\,.
\label{Sym:exp}
\end{equation}
All unknown coefficients $C_{i_1 i_2 i_3 i_4 i_5 i_6}$
needed for expanding up to $\varepsilon^4$
can be fixed by considering the integrals~(\ref{IBP:red})
(the indices in the left triangle are 1).
IBP reduces these integrals to $\Gamma$-functions,
and hence all these coefficients are expressed via $\zeta_n$.
So, the symmetry allows one to obtain,
practically for free~\cite{K:84,K:85,B:86,BB:88},
\begin{align}
&\bar{I}(a_1,a_2,a_3,a_4,a_5,a_6) =
6 \zeta_3 + 18 \zeta_4 I_1 + 3 \zeta_5 \left(I_1^2 + \frac{5}{2} I_2\right)
\label{Sym:e4}\\
&{} - 15 (7 \zeta_6 + 2 \zeta_3^2) I_1^3
+ 3 \left(\frac{25}{2} \zeta_6 - \zeta_3^2\right) I_1 I_2
\nonumber\\
&{} - 9 \left(\frac{439}{8} \zeta_7 + 20 \zeta_4 \zeta_3\right) I_1^4
+ 3 \left(\frac{211}{8} \zeta_7 - 6 \zeta_4 \zeta_3\right) I_1^2 I_2
+ \frac{9}{8} \zeta_7 \left(\frac{35}{4} I_2^2 - 7 I_4\right)
+ \cdots
\nonumber
\end{align}
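As an illustration of how~(\ref{Sym:e4}) is used, the following sympy sketch (ours, not from the cited papers) builds the variables $b_i$ from~(\ref{Sym:b}) for $a_{1,\ldots,5}=1$, evaluates the invariants, and reproduces the familiar expansion $\bar{I}=6\zeta_3+9\zeta_4\varepsilon+42\zeta_5\varepsilon^2+\mathcal{O}(\varepsilon^3)$ of the master integral; only the terms of~(\ref{Sym:e4}) contributing through $\varepsilon^2$ are kept.
\begin{verbatim}
import sympy as sp

eps = sp.symbols('epsilon')
d = 4 - 2*eps
a = sp.Matrix([1, 1, 1, 1, 1, sp.Rational(3, 2)*d - 5])  # a6 from (Sym:a6)
M = sp.Rational(1, 3)*sp.Matrix([
    [ 1,  2,  0,  1, -1,  0],
    [ 0,  1,  2,  0,  1, -1],
    [-1,  0,  1,  2,  0,  1],
    [ 1, -1,  0,  1,  2,  0],
    [ 0,  1, -1,  0,  1,  2],
    [ 2,  0,  1, -1,  0,  1]])
b = M*a
I1 = 1 - d/4
I2 = sum((bi - d/4)**2 for bi in b)
z3, z4, z5 = sp.symbols('zeta3 zeta4 zeta5')
Ibar = 6*z3 + 18*z4*I1 + 3*z5*(I1**2 + sp.Rational(5, 2)*I2)
print(sp.expand(Ibar))  # 6*zeta3 + 9*epsilon*zeta4 + 42*epsilon**2*zeta5
\end{verbatim}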
\section{Gegenbauer polynomials}
\label{S:Geg}
If the indices of 3 lines forming a triangle are 1,
$I$ reduces to $\Gamma$-functions by IBP~(\ref{IBP:red}).
In the more general case when 2 adjacent lines have indices 1,
$I$ can be expressed via hypergeometric functions of 1.
Due to the tetrahedron symmetry, it does not matter
which 2 adjacent lines have indices 1 (Fig.~\ref{F:Kotikov}).
These integrals were calculated by Kotikov~\cite{K:96}
by an ingenious use of $x$-space Gegenbauer polynomials~\cite{CKT:80}.
\begin{figure}[ht]
\begin{center}
\begin{picture}(98,24)
\put(15,12){\makebox(0,0){\includegraphics{kot1.eps}}}
\put(1.5,12){\makebox(0,0){0}}
\put(28.5,12){\makebox(0,0){$z$}}
\put(15,22.5){\makebox(0,0){$x$}}
\put(15,1.5){\makebox(0,0){$y$}}
\put(6,4){\makebox(0,0){$a$}}
\put(24,20){\makebox(0,0){$b$}}
\put(6,20){\makebox(0,0){$c$}}
\put(52,12){\makebox(0,0){\includegraphics{kot2.eps}}}
\put(86,12){\makebox(0,0){\includegraphics{kot3.eps}}}
\end{picture}
\end{center}
\caption{Integrals with 2 adjacent lines having indices 1.}
\label{F:Kotikov}
\end{figure}
Let's consider the first of these diagrams in $x$-space:
\begin{align}
&I(c,b,1,1,a) =
\frac{G^2(1) G(a) G(b) G(c)}{G(a+b+c+2-d)}
A(\bar{a},\bar{b},\bar{c})\,,
\label{Geg:A}\\
&A(a,b,c) = \frac{1}{\pi^d} \int \frac{d^d x\,d^d y}%
{(y^2)^a [(z-y)^2]^\lambda [(z-x)^2]^b (x^2)^c [(x-y)^2]^\lambda}\,,
\nonumber
\end{align}
where $\lambda=d/2-1$ and $z^2=1$.
Expanding the propagator as
\begin{equation}
\frac{1}{[(y-z)^2]^\lambda} = \frac{1}{\Gamma(\lambda)}
\sum_{n=0}^\infty \frac{\Gamma(\lambda+n)}{n!}
y^{\mu_1\ldots\mu_n} z_{\mu_1\ldots\mu_n}
\left[ \theta(1-y^2) + \frac{\theta(y^2-1)}{(y^2)^{n+\lambda}} \right]
\label{Geg:exp}
\end{equation}
(where $y^{\mu_1\ldots\mu_n}$ is the traceless part of $y^{\mu_1}\cdots y^{\mu_n}$)
we can integrate in $d^d y$:
\begin{align*}
&A(a,b,c) = \frac{1}{\Gamma^2(\lambda) (a-1)}
\sum_{n=0}^\infty \frac{2^n \Gamma(n+\lambda)}{n!} \frac{z_{\mu_1\ldots\mu_n}}{\pi^{d/2}}
\int \frac{d^d x\,x^{\mu_1\ldots\mu_n}}{(x^2)^c [(z-x)^2]^b}\\
&\left[ \frac{1}{n+\lambda-a+1}
\left( \frac{\theta(1-x^2)}{(x^2)^{a-1}} + \frac{\theta(x^2-1)}{(x^2)^{n+\lambda}} \right)
- \frac{1}{n+\lambda+a-1}
\left( \theta(1-x^2) + \frac{\theta(x^2-1)}{(x^2)^{n+\lambda+a-1}} \right) \right]\,.
\end{align*}
Now the integral in $d^d x$ can be calculated:
\begin{align*}
&A(a,b,c) = \frac{1}{\Gamma(\lambda) \Gamma(2\lambda) \Gamma(b) \Gamma(b-\lambda) (a-1)}
\sum_{n=0}^\infty \frac{2^n \Gamma(n+\lambda)}{n!}
\sum_{m=0}^\infty \frac{\Gamma(m+n+b) \Gamma(m+b-\lambda)}{m! \Gamma(m+n+\lambda+1)}\\
&\biggl[ \frac{1}{n+\lambda-a+1}
\left( \frac{1}{m+n-a-c+\lambda+2} + \frac{1}{m+n+b+c-1} \right)\\
&\quad{} - \frac{1}{n+\lambda+a-1}
\left( \frac{1}{m+n-c+\lambda+1} + \frac{1}{m+n+a+b+c-2} \right) \biggr]\,.
\end{align*}
It appears to be possible to transform this result
into a form containing only single sums~\cite{K:96}:
\begin{equation}
A(a,b,c) = \frac{A_1 - A_2}{\Gamma\left(\frac{d}{2}-1\right) (a-1)}
\label{Geg:res}
\end{equation}
where
\begin{align}
&A_1 = 2 \frac{\Gamma\left(\frac{d}{2}-b\right)}{\Gamma(b)}
\biggl\{ \frac{1}{d-2a}
\label{Geg:A1}\\
&\biggl[
\frac{\Gamma\left(\frac{d}{2}-a-c+1\right) \Gamma\left(a+b+c-\frac{d}{2}-1\right)}%
{\Gamma(a+c-1) \Gamma(d-a-b-c+1)}
\F{3}{2}{d-2,\frac{d}{2}-a,\frac{d}{2}-a-c+1\\\frac{d}{2}-a+1,d-a-b-c+1}{1}
\nonumber\\
&\quad{} +
\frac{\Gamma(1-c) \Gamma(b+c-1)}%
{\Gamma\left(c+\frac{d}{2}-1\right) \Gamma\left(\frac{d}{2}-b-c+1\right)}
\F{3}{2}{d-2,\frac{d}{2}-a,b+c-1\\\frac{d}{2}-a+1,c+\frac{d}{2}-1}{1}
\biggr]
\nonumber\\
&{} - \frac{1}{2a+d-4} \biggl[
\frac{\Gamma\left(\frac{d}{2}-c\right) \Gamma\left(b+c-\frac{d}{2}\right)}%
{\Gamma(c) \Gamma(d-b-c)}
\F{3}{2}{d-2,a+\frac{d}{2}-2,\frac{d}{2}-c\\a+\frac{d}{2}-1,d-b-c}{1}
\nonumber\displaybreak\\
&{} +
\frac{\Gamma(2-a-c) \Gamma(a+b+c-2)}%
{\Gamma\left(a+c+\frac{d}{2}-2\right) \Gamma\left(\frac{d}{2}-a-b-c+2\right)}
\F{3}{2}{d-2,a+\frac{d}{2}-2,a+b+c-2\\a+\frac{d}{2}-1,a+c+\frac{d}{2}-2}{1}
\biggr] \biggr\}\,,
\nonumber\\
&A_2 =
\frac{\Gamma(1-b) \Gamma(1-c) \Gamma\left(\frac{d}{2}-a\right) \Gamma\left(a+\frac{d}{2}-2\right)
\Gamma\left(\frac{d}{2}-b\right) \Gamma\left(a+b+c-\frac{d}{2}-1\right)}%
{\Gamma(d-2) \Gamma(a+c-1) \Gamma\left(a+b-\frac{d}{2}\right)
\Gamma\left(\frac{d}{2}-a-b+1\right) \Gamma\left(\frac{d}{2}-b-c+1\right)}
\nonumber\\
&{} - \frac{2 \Gamma(1-b)}{(2a+d-4) \Gamma\left(b-\frac{d}{2}+1\right)}
\nonumber\\
&\biggl[
\frac{\Gamma(2-a-c) \Gamma\left(a+b+c-\frac{d}{2}-1\right)}%
{\Gamma(3-a-b-c) \Gamma\left(a+c+\frac{d}{2}-2\right)}
\F{3}{2}{d-2,a+\frac{d}{2}-2,a+b+c-2\\a+\frac{d}{2}-1,a+c+\frac{d}{2}-2}{1}
\nonumber\\
&\quad{} +
\frac{\Gamma(1-c) \Gamma\left(b+c-\frac{d}{2}\right)}%
{\Gamma\left(c-\frac{d}{2}+1\right) \Gamma(d-b-c)}
\F{3}{2}{d-2,a+\frac{d}{2}-2,\frac{d}{2}-c\\a+\frac{d}{2}-1,d-b-c}{1}
\biggr]
\label{Geg:A21}\\
&{} = -
\frac{\Gamma(1-b) \Gamma(2-a-c) \Gamma\left(\frac{d}{2}-a\right) \Gamma\left(a+\frac{d}{2}-2\right)
\Gamma\left(\frac{d}{2}-b\right) \Gamma\left(b+c-\frac{d}{2}\right)}%
{\Gamma(c) \Gamma(d-2) \Gamma\left(b-a-\frac{d}{2}+2\right) \Gamma\left(a-b+\frac{d}{2}-1\right)
\Gamma\left(\frac{d}{2}-a-b-c+2\right)}
\nonumber\\
&{} + \frac{2 \Gamma(1-b)}{(d-2a) \Gamma\left(b-\frac{d}{2}+1\right)}
\biggl[
\frac{\Gamma(1-c) \Gamma\left(b+c-\frac{d}{2}\right)}%
{\Gamma(2-b-c) \Gamma\left(c+\frac{d}{2}-1\right)}
\F{3}{2}{d-2,\frac{d}{2}-a,b+c-1\\\frac{d}{2}-a+1,c+\frac{d}{2}-1}{1}
\nonumber\\
&{} +
\frac{\Gamma(2-a-c) \Gamma\left(a+b+c-\frac{d}{2}-1\right)}%
{\Gamma\left(a+c-\frac{d}{2}\right) \Gamma(d-a-b-c+1)}
\F{3}{2}{d-2,\frac{d}{2}-a,\frac{d}{2}-a-c+1\\\frac{d}{2}-a+1,d-a-b-c+1}{1}
\biggr]
\label{Geg:A22}\\
&{} =
\frac{\Gamma(1-b) \Gamma(1-c) \Gamma(2-a-c) \Gamma\left(\frac{d}{2}-a\right)
\Gamma\left(a+\frac{d}{2}-2\right) \Gamma\left(\frac{d}{2}-b\right)}%
{\Gamma(1-a) \Gamma(a) \Gamma(d-2) \Gamma\left(\frac{d}{2}-a-b+1\right)
\Gamma\left(\frac{d}{2}-a-b-c+2\right)}
\nonumber\\
&{} + \frac{2 \Gamma(1-b)}{\Gamma\left(b-\frac{d}{2}+1\right)}
\biggl[
\frac{\Gamma(2-a-c) \Gamma\left(a+b+c-\frac{d}{2}-1\right)}%
{(d-2a) \Gamma\left(a+c-\frac{d}{2}\right) \Gamma(d-a-b-c+1)}
\nonumber\\
&\qquad{}
\F{3}{2}{d-2,\frac{d}{2}-a,\frac{d}{2}-a-c+1\\\frac{d}{2}-a+1,d-a-b-c+1}{1}
\nonumber\\
&{} -
\frac{\Gamma(1-c) \Gamma\left(b+c-\frac{d}{2}\right)}%
{(2a+d-4) \Gamma\left(c-\frac{d}{2}+1\right) \Gamma(d-b-c)}
\F{3}{2}{d-2,a+\frac{d}{2}-2,\frac{d}{2}-c\\a+\frac{d}{2}-1,d-b-c}{1}
\biggr]
\label{Geg:A23}\\
&{} =
\frac{\Gamma(1-b) \Gamma\left(\frac{d}{2}-a\right) \Gamma\left(a+\frac{d}{2}-2\right)
\Gamma\left(\frac{d}{2}-b\right) \Gamma\left(b+c-\frac{d}{2}\right)
\Gamma\left(a+b+c-\frac{d}{2}-1\right)}%
{\Gamma(1-a) \Gamma(a) \Gamma(c) \Gamma(d-2) \Gamma\left(c+\frac{d}{2}-2\right)}
\nonumber\\
&{} + \frac{2 \Gamma(1-b)}{\Gamma \left(b-\frac{d}{2}+1\right)}
\biggl[
\frac{\Gamma(1-c) \Gamma\left(b+c-\frac{d}{2}\right)}%
{(d-2a) \Gamma(2-b-c) \Gamma\left(c+\frac{d}{2}-1\right)}
\F{3}{2}{d-2,\frac{d}{2}-a,b+c-1\\\frac{d}{2}-a+1,c+\frac{d}{2}-1}{1}
\nonumber\\
&{} -
\frac{\Gamma(2-a-c) \Gamma\left(a+b+c-\frac{d}{2}-1\right)}%
{(2a+d-4) \Gamma(3-a-b-c) \Gamma\left(a+c+\frac{d}{2}-2\right)}
\nonumber\\
&\qquad{}
\F{3}{2}{d-2,a+\frac{d}{2}-2,a+b+c-2\\a+\frac{d}{2}-1,a+c+\frac{d}{2}-2}{1}
\biggr]\,.
\label{Geg:A24}
\end{align}
In particular, for the integral $I(a)$~(\ref{Uni:Ia}) we obtain
\begin{align}
&I(a) = 2
\Gamma\left(\tfrac{d}{2}-1\right) \Gamma\left(\tfrac{d}{2}-a-1\right)
\Gamma(a-d+3)
\label{Geg:Ia}\\
&\biggl[
\frac{2\Gamma\left(\frac{d}{2}-1\right)}%
{(d-2a-4)\Gamma(a+1)\Gamma\left(\frac{3}{2}d-a-4\right)}
\F{3}{2}{1,d-2,a-\frac{d}{2}+2\\a+1,a-\frac{d}{2}+3}{1}
- \frac{\pi\cot\pi(d-a)}{\Gamma(d-2)} \biggr]\,;
\nonumber
\end{align}
this form is equivalent to~(\ref{Uni:F32}),
though a direct mathematical proof is not known.
No expression for $I$ with unit indices of 2 non-adjacent lines
is known.
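Pending a proof of the equivalence just mentioned, it can at least be verified numerically. The mpmath sketch below (ours; the sample point $\varepsilon=0.12$, $a=0.3$ is an arbitrary choice away from the poles, where the ${}_3F_2(1)$ is convergent) transcribes~(\ref{Uni:F32}) and~(\ref{Geg:Ia}) and compares them.
\begin{verbatim}
from mpmath import mp, gamma, cos, cot, pi, hyp3f2

mp.dps = 30
eps, a = mp.mpf('0.12'), mp.mpf('0.3')
d = 4 - 2*eps

# Eq. (Uni:F32): I(1+a) via 3F2's of argument -1
pre = 2*gamma(eps)*gamma(1-eps)**2*gamma(a+2*eps)*gamma(-a-eps)/gamma(2-2*eps)
I_uni = pre*(gamma(2-2*eps)/(gamma(1+eps)*gamma(1+a)*gamma(1-a-3*eps))
             *(hyp3f2(1, 2-2*eps, 1+a+eps, 1+eps, 2+a+eps, -1)/(1+a+eps)
               + hyp3f2(1, 2-2*eps, 1-a-2*eps, 1+eps, 2-a-2*eps, -1)
                 /(1-a-2*eps))
             - cos(pi*eps))

# Eq. (Geg:Ia): the same quantity as I(ap) with ap = 1 + a
ap = 1 + a
I_geg = (2*gamma(d/2-1)*gamma(d/2-ap-1)*gamma(ap-d+3)
         *(2*gamma(d/2-1)/((d-2*ap-4)*gamma(ap+1)*gamma(3*d/2-ap-4))
           *hyp3f2(1, d-2, ap-d/2+2, ap+1, ap-d/2+3, 1)
           - pi*cot(pi*(d-ap))/gamma(d-2)))

print(I_uni - I_geg)  # vanishes to working precision
\end{verbatim}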
\section{Solving IBP for 3 non-integer indices}
\label{S:Beyond}
Expressions for $I$ with 2 indices of adjacent lines equal to 1
via hypergeometric functions of 1 were also derived by guessing
the solution of IBP for such integrals~\cite{BGK:97}.
Let's consider
\begin{equation}
I(a_1,a_2,a_3,a_4) =
\raisebox{-7.5mm}{\begin{picture}(30,17)
\put(15,8.5){\makebox(0,0){\includegraphics{i4.eps}}}
\put(15.5,8){\makebox(0,0)[l]{$a_3$}}
\put(7,15){\makebox(0,0){$a_1$}}
\put(23,15){\makebox(0,0){$a_2$}}
\end{picture}}
\label{Beyond:I}
\end{equation}
where $a_4 = a_1 + a_2 + a_3 - \frac{d}{2}$.
The IBP relations are
\begin{align}
&a_1 I(a_1+1,a_2,a_3,a_4+1) - (a_1+a_2-d+2) I(a_1,a_2,a_3,a_4)
\nonumber\\
&{} = a_3 G(1,a_4+1)
\left( \frac{a_1 G(a_1+1,a_3+1)}{a_4-a_2+1} - G(a_2,a_3+1) \right)\,,
\nonumber\\
&(a_3-d+2) I(a_1,a_2,a_3,a_4)
\nonumber\\
&\quad{} + \frac{(a_1+a_3-d+1) (a_2+a_3-d+1)}{a_3-d/2+1} I(a_1,a_2,a_3-1,a_4-1)
\nonumber\\
&{} = a_3 (a_3+a_4-d+1) G(1,a_4)
\left( \frac{G(a_1,a_3+1)}{a_4-a_2} + \frac{G(a_2,a_3+1)}{a_4-a_1} \right)\,,
\label{Beyond:IBP}
\end{align}
where $G(a_1,a_2)=G(a_1)\,G(a_2)\,G(\bar{a}_1+\bar{a}_2)$
is the standard massless one-loop self-energy.
If we express $I(a_1,a_2,a_3,a_4)$ as
\begin{align}
&\frac{d-3}{a_3 a_4 G(1,a_4+1)} I(a_1,a_2,a_3,a_4)
\nonumber\\
&{} = G(a_1,a_2+1)
S({\textstyle\frac{d}{2}}-a_1-1,a_2-1,{\textstyle\frac{d}{2}}+a_1-a_4-2,a_4-a_2)
+ (a_1\leftrightarrow a_2)
\label{Beyond:Ansatz}
\end{align}
via a new function $S(a_1,a_2,a_3,a_4)$ satisfying
\begin{align}
&S(a_1,a_2,a_3,a_4) = S(a_2,a_1,a_3,a_4) = - S(a_3,a_4,a_1,a_2)\,,
\nonumber\\
& a_1 S(a_1,a_2,a_3,a_4) = 1 + \frac{(a_1+a_3) (a_1+a_4)}{a_1+a_2+a_3+a_4} S(a_1-1,a_2,a_3,a_4)\,,
\label{Beyond:Sprop}
\end{align}
then~(\ref{Beyond:IBP}) holds.
The solution of~(\ref{Beyond:Sprop}) can be written as
\begin{equation}
S(a_1,a_2,a_3,a_4) =
\frac{\pi \cot \pi a_3}{H(a_1,a_2,a_3,a_4)} - \frac{1}{a_3}
- \frac{a_2+a_3}{a_2 a_3} F(a_1+a_3,-a_2,-a_3,a_2+a_4)
\label{Beyond:S}
\end{equation}
where
\begin{equation*}
H(a_1,a_2,a_3,a_4) =
\frac{\Gamma(1+a_1) \Gamma(1+a_2) \Gamma(1+a_3) \Gamma(1+a_4) \Gamma(1+a_1+a_2+a_3+a_4)}%
{\Gamma(1+a_1+a_3) \Gamma(1+a_1+a_4) \Gamma(1+a_2+a_3) \Gamma(1+a_2+a_4)}
\end{equation*}
and
\begin{equation}
F(a_1,a_2,a_3,a_4) = \F{3}{2}{1,-a_1,-a_2\\1+a_3,1+a_4}{1} - 1\,.
\label{Beyond:F32}
\end{equation}
Expansion of
\begin{equation*}
\raisebox{-4mm}{\begin{picture}(30,17)
\put(15,8.5){\makebox(0,0){\includegraphics{i4.eps}}}
\put(15.5,8.5){\makebox(0,0)[l]{$1+b$}}
\put(4.5,15){\makebox(0,0){$1+a$}}
\put(25.5,15){\makebox(0,0){$1+a$}}
\end{picture}}
\end{equation*}
to all orders in $a$, $b$ at $\varepsilon=0$
can be expressed via $\zeta_{2n+1}$~\cite{BGK:97}
(some particular cases were known earlier~\cite{K:85,B:86,BB:88}).
Elegant symmetry-based methods to derive several terms of $\varepsilon$ expansion
of hypergeometric functions~(\ref{Beyond:F32}) algebraically
are presented in~\cite{BGK:97}.
However, these methods cannot be extended to higher orders.
They are no longer necessary: the algorithms constructed in~\cite{MUW:02}
allow one to expand such hypergeometric functions to any order;
the results are expressible via multiple $\zeta$ values.
The knowledge of expansions of these ${}_3F_2(1)$ allows one to reconstruct
all unknown coefficients in the general expansion~(\ref{Sym:exp})
up to $\varepsilon^9$~\cite{BGK:97,B:03}.
The $\varepsilon^5$ term
\begin{align}
&\bar{I}(a_1,a_2,a_3,a_4,a_5,a_6) = \cdots
\nonumber\\
&{} + 3 \left( \frac{378}{5} \zeta_{5\,3} - \frac{33523}{40} \zeta_8 + 10 \zeta_5 \zeta_3 \right) I_1^5
- 3 \left( \frac{54}{5} \zeta_{5\,3} - \frac{1009}{40} \zeta_8 + 42 \zeta_5 \zeta_3 \right) I_1^3 I_2
\nonumber\\
&\quad{} - \frac{3}{2} \left( \frac{9}{5} \zeta_{5\,3} - \frac{4023}{80} \zeta_8 + 7 \zeta_5 \zeta_3 \right) I_1 I_2^2
+ 3 \left( \frac{18}{5} \zeta_{5\,3} - \frac{1083}{40} \zeta_8 + 8 \zeta_5 \zeta_3 \right) I_1 I_4
+ \cdots
\label{Beyond:e5}
\end{align}
is the first term where a depth-2 value $\zeta_{5\,3}$ appears.
In particular, we obtain several equivalent results for $I(a)$:
\begin{align}
&\frac{(d-3)(d-4)\Gamma(a) \Gamma\left(\frac{3}{2}d-a-4\right)}%
{2 \Gamma^2\left(\frac{d}{2}-1\right) \Gamma(a-d+3) \Gamma\left(\frac{d}{2}-a-1\right)}
I(a)
\nonumber\\
&{} = \frac{3d-2a-10}{d-a-3}
\F{3}{2}{1,\frac{d}{2}-2,a-d+3\\a,a-d+4}{1}
+ A \pi \cot\pi(a-d) - 2
\label{Beyond:Ia1}\\
&{} = - \frac{3d-2a-10}{d-a-3}
\F{3}{2}{1,1-a,d-a-3\\3-\frac{d}{2},d-a-2}{1}
+ A \pi \cot\pi\tfrac{d}{2} + \frac{d-4}{d-a-3}
\label{Beyond:Ia2}\\
&{} = 4 \frac{a-1}{d-2a-2}
\F{3}{2}{1,a-\frac{3}{2}d+5,a-\frac{d}{2}+1\\3-\frac{d}{2},a-\frac{d}{2}+2}{1}
+ A \pi \cot\pi\tfrac{d}{2} - 2 \frac{d-4}{d-2a-2}
\label{Beyond:Ia3}\\
&{} = - 4 \frac{a-1}{d-2a-2}
\F{3}{2}{1,\frac{d}{2}-2,\frac{d}{2}-a-1\\\frac{3}{2}d-a-4,\frac{d}{2}-a}{1}
+ A \pi \cot\pi\left(\tfrac{d}{2}-a\right) - 2
\label{Beyond:Ia4}
\end{align}
where
\begin{equation*}
A = \frac{\Gamma(a)\Gamma\left(\frac{3}{2}d-a-4\right)}%
{\Gamma(d-4)\Gamma\left(\frac{d}{2}-1\right)}\,.
\end{equation*}
A curious integral belonging to the current class was considered in~\cite{DHP:90}.
The symmetry allows one to write it in 12 equivalent forms:
\begin{align}
&\bar{I}(1,1,\lambda,1,\lambda,\lambda)
= \bar{I}(1,1,\lambda,\lambda,1,\lambda)
= \bar{I}(1,\lambda,1,1,\lambda,\lambda)
= \bar{I}(1,\lambda,1,\lambda,\lambda,1)
\nonumber\\
={}&\bar{I}(1,\lambda,\lambda,1,1,\lambda)
= \bar{I}(1,\lambda,\lambda,1,\lambda,1)
= \bar{I}(\lambda,1,1,\lambda,1,\lambda)
= \bar{I}(\lambda,1,1,\lambda,\lambda,1)
\nonumber\\
={}&\bar{I}(\lambda,1,\lambda,1,1,\lambda)
= \bar{I}(\lambda,1,\lambda,\lambda,1,1)
= \bar{I}(\lambda,\lambda,1,1,\lambda,1)
= \bar{I}(\lambda,\lambda,1,\lambda,1,1)
\label{Beyond:Pismak}
\end{align}
where $\lambda=d/2-1$.
It reduces to $I(1,\lambda,\lambda,\lambda)$.
However, it cannot be directly calculated using the above formulas:
one of the arguments should be shifted by $x$,
and the limit $x\to0$ should be taken.
In the paper~\cite{DHP:90},
recurrence relations shifting $d$ by $\pm2$ were derived
(they were used in~\cite{KSV:94}).
This method became popular later.
\section{Mellin--Barnes representation}
\label{S:MB}
The integral $I$ can be written as ($k^2=1$)
\begin{align}
&\raisebox{-4.25mm}{\begin{picture}(22,11)
\put(11,5.5){\makebox(0,0){\includegraphics{d1.eps}}}
\put(11.5,5.5){\makebox(0,0)[l]{$a_3$}}
\put(6,10){\makebox(0,0){$a_1$}}
\put(16,10){\makebox(0,0){$a_2$}}
\put(16,1){\makebox(0,0){$a_4$}}
\put(6,1){\makebox(0,0){$a_5$}}
\end{picture}}
=
\raisebox{-3.75mm}{\begin{picture}(18,10)
\put(9,5){\makebox(0,0){\includegraphics{d2.eps}}}
\put(9,9.5){\makebox(0,0){$a_1$}}
\put(9,0.5){\makebox(0,0){$a_5$}}
\end{picture}}
\quad\text{where}\quad
\raisebox{-1.75mm}{\begin{picture}(10,6)
\put(5,3){\makebox(0,0){\includegraphics{d3.eps}}}
\end{picture}}
=
\raisebox{-4.75mm}{\begin{picture}(16,12)
\put(8,6){\makebox(0,0){\includegraphics{d4.eps}}}
\put(9,9){\makebox(0,0){$a_2$}}
\put(9,3){\makebox(0,0){$a_4$}}
\put(4.5,6){\makebox(0,0)[r]{$a_3$}}
\end{picture}}\,,
\nonumber\\
&I(a_1,a_2,a_3,a_4,a_5) = \frac{1}{\pi^{d/2}}
\int \frac{d^d k_1}{[(k_1-k)^2]^{a_1} (k_1^2)^{a_5}}
V((k_1-k)^2,k_1^2)\,,
\nonumber\\
&V((k_1-k)^2,k_1^2) = \frac{1}{\pi^{d/2}}
\int \frac{d^d k_2}{[(k_2-k)^2]^{a_2} [(k_1-k_2)^2]^{a_3} (k_2^2)^{a_4}}\,.
\label{MB:nest}
\end{align}
Substituting the Mellin--Barnes representation of the one-loop vertex~\cite{BD:91}%
\footnote{For $d=4$ and $a_i=1$ it was obtained in~\cite{U:75}.}
\begin{equation}
V((k_1-k)^2,k_1^2) = \frac{1}{(2\pi i)^2} \int d z_1\,d z_2\,
[(k_1-k)^2]^{z_1} (k_1^2)^{z_2} v(z_1,z_2)
\label{MB:V}
\end{equation}
(where $v(z_1,z_2)$ is a combination of $\Gamma$-functions),
we can easily calculate the loop integral in $k_1$~\cite{BW:03}%
\footnote{For $d=4$ and $a_i=1$ this was also done in~\cite{U:75}.}:
\begin{equation}
I(a_1,a_2,a_3,a_4,a_5) = \frac{1}{(2\pi i)^2} \int d z_1\,d z_2\,
G(a_1-z_1,a_5-z_2) v(z_1,z_2)
\label{MB:I}
\end{equation}
($G(a_1,a_2)$ is the standard massless one-loop self-energy integral).
The result is
\begin{align}
&I(a_1,a_2,a_3,a_4,a_5) = \frac{1}%
{(2\pi i)^2 \Gamma(a_2) \Gamma(a_4) \Gamma(a_3) \Gamma(d-a_2-a_4-a_3)}
\nonumber\\
&{}\times\int d z_1\,d z_2\,
\frac{\Gamma(-z_1) \Gamma(\frac{d}{2}-a_4-a_3-z_1) \Gamma(\frac{d}{2}-a_1+z_1)}%
{\Gamma(a_1-z_1)}
\nonumber\\
&\quad\frac{\Gamma(-z_2) \Gamma(\frac{d}{2}-a_2-a_3-z_2) \Gamma(\frac{d}{2}-a_5+z_2)}%
{\Gamma(a_5-z_2)}
\nonumber\\
&\quad{}\frac{\Gamma(a_1+a_5-\frac{d}{2}-z_1-z_2)
\Gamma(a_3+z_1+z_2) \Gamma(a_2+a_4+a_3-\frac{d}{2}+z_1+z_2)}%
{\Gamma(d-a_1-a_5+z_1+z_2)}\,.
\label{MB:Ires}
\end{align}
This double Mellin--Barnes integral can be expressed via double sums~\cite{BW:03}.
First we close the $z_1$ integration contour to the right.
There are 3 series of poles: $\Gamma(-z_1)$,
$\Gamma(\frac{d}{2}-a_4-a_3-z_1)$, $\Gamma(a_1+a_5-\frac{d}{2}-z_1-z_2)$,
and we obtain 3 sums over residues
\begin{equation*}
I = I_1 + I_2 + I_3\,.
\end{equation*}
Then we close the $z_2$ integration contour to the right,
and get double sums:
\begin{equation*}
\begin{split}
I &{}= I_{1\,1} + I_{1\,2} + I_{1\,3}\\
&{} + I_{2\,1} + I_{2\,2} + I_{2\,3}\\
&{} + I_{3\,1} + I_{3\,2} + I_{3\,3} + I_{3\,4} + I_{3\,5}\,.
\end{split}
\end{equation*}
These nested sums belong to the classes which can be expanded in $\varepsilon$
to any order in terms of multiple $\zeta$ values
by the algorithms constructed in~\cite{MUW:02}.
These algorithms were implemented in the packages
\texttt{NestedSums}~\cite{W:02} (in \texttt{C++} with \texttt{GiNaC})
and \texttt{XSUMMER}~\cite{MU:06} (in \texttt{FORM}).
Therefore, expansion of the integrals $I$ to any order in $\varepsilon$
can be written in terms of multiple $\zeta$ values~\cite{BW:03}.
\textbf{Acknowledgements}.
I am grateful to
P.\,A.~Baikov,
D.\,J.~Broadhurst,
K.\,G.~Che\-tyrkin,
A.\,I.~Davydychev,
M.\,Yu.~Kalmykov,
A.\,V.~Kotikov,
V.\,A.~Smirnov
for numerous discussions of various questions related to the present topic;
to Yu.\,M.~Pismak, A.\,P.~Isaev, R.\,N.~Lee, N.\,A.~Kivel
for constructive comments;
to T.~Huber, D.~Ma\^{\i}tre
for their help in using \texttt{HypExp} and \texttt{HPL};
and to D.\,I.~Kazakov and the members of the organizing committee
for organizing the conference and inviting me to present a talk.
This work was supported by the BMBF through Grant No. 05H09VKE.
\section{Introduction}
Connections between non-integrability, many-body physics, complexity, ergodicity, and entropy generation are the cornerstones of statistical mechanics. The aim of quantum chaos is to extend these questions to the quantum domain.
Foundational works in this regard include semiclassical methods connecting classical periodic orbits to the density of states,
level statistics \cite{Berry375}, properties of Wigner functions \cite{Berry77a}, quantum scars in ergodic phase spaces \cite{heller1984bound}, and connections to random matrix theory.
The search for these footprints of chaos, and the characterization of ``true'' quantum chaos independent of any classical limit, have important consequences both from a foundational point of view and for quantum information processing.
For example, such studies address complexity in quantum systems and play a potentially crucial role in information processing protocols like quantum simulations that are superior to their classical counterparts.
Characterization of chaos in the quantum domain has been much contested since, unlike its classical counterpart, unitary quantum evolution preserves the overlap between two initial state vectors and hence rules out hypersensitivity to initial conditions. However, a deeper study reveals chaos in quantum systems.
These issues have been extensively studied in the last few decades, and several quantum signatures of classical chaos have been discovered. This interestingly coincides with exquisite control of individual quantum systems in the laboratory and the ability to coherently drive these systems with non-integrable/chaotic Hamiltonians. Recent trends include studies connecting quantum chaos to out-of-time-ordered correlators (OTOCs) and the rate of scrambling of quantum information in many-body systems, with consequences ranging from the foundations of quantum statistical mechanics, quantum phase transitions, and thermalization on the one hand to information scrambling inside a black hole on the other \cite{manybody1, manybody2, manybody3, manybody4, qgravity1, chaos1,chaos2, shock1, pawan, shenker2, shenker3, kitaev1, kitaev2}.
OTOCs have received much attention in the quantum information community recently, and a number of ways to measure them have been proposed, including a protocol employing an interferometric scheme in cold atoms \cite{swingle}. An alternative method involving two-point projective measurements was proposed in \cite{campisi}; it measures OTOCs using the two-point measurement scheme developed in the field of non-equilibrium quantum thermodynamics, elucidating the connections between information scrambling and thermodynamics.
Various other protocols are reported in \cite{meas1,meas2,meas3}. Measuring OTOCs in experiments is not easy, as the implementation of perfect time reversal in an experimental setting is impossible because of dissipation. However, experimental implementations have been achieved in some systems. Measurement of OTOCs for an Ising spin chain in an NMR simulator has been reported \cite{nmr,wei}. A many-body time-reversal protocol using trapped ions has been proposed and demonstrated \cite{garttner}, which, though universal, is not scalable.
These experiments measure infinite-temperature OTOCs, an observation that will be important for us.
In order to explore any quantum signatures of chaos, one has to numerically process data structures whose computational complexity scales
exponentially with the number of qubits required to simulate the system.
In this paper, we give a quantum algorithm that provides an exponential speed-up in measuring OTOCs, provided that the number of gates, $K$, required in the decomposition of the time evolution operator of the system scales \textit{polynomially} with $n$, where $n$ is the number of qubits used in the implementation and $N = 2^n$ is the dimension of the Hilbert space. This implies that the algorithm measures the OTOCs in a time that scales as poly($n$), which is exponentially faster than any classical algorithm. Our algorithm is based on the Deterministic Quantum Computation with one pure qubit (DQC1) algorithm, which is the first mixed-state scheme of quantum computation. Therefore, it can be naturally implemented by a high-temperature NMR-based quantum information processor. It involves deterministic quantum control of a one-qubit model, using the scattering circuit \cite{knill,scattering}. This algorithm is also called the `power of one qubit', as the primary resource required for it is one pure qubit. Moreover, state initialization and readout, essential parts of simulations that are often quite involved in other models of quantum computation \cite{van2001powerful}, are simple here. We give a quantum circuit to evaluate OTOCs that bypasses the need to prepare a complex initial state and requires only a very simple measurement.
Applications include estimation of fidelity decay and density of states in quantum chaos \cite{exponential, PhysRevA.68.022302}, computing Jones polynomials from knot theory \cite{shor2007estimating, jones2004nuclear}, and phase estimation in quantum metrology \cite{PhysRevA.77.052320}. Although the DQC1 model of quantum information processing (QIP) is believed to be less powerful than a universal quantum computer, its natural implementation in high-temperature NMR makes it an ideal candidate for probing OTOCs and mixed-state quantum computation protocols.
\section{Out-of-time-ordered correlators (OTOCs)}
OTOCs were first proposed by Larkin and Ovchinnikov in the context of semiclassical approximations in the theory of superconductivity \cite{larkin}. They later reemerged in the study of many-body systems \cite{manybody1,manybody2, manybody3, manybody4}, quantum gravity \cite{qgravity1}, and quantum chaos \cite{chaos1,chaos2, shock1, pawan, shenker2, shenker3, kitaev1, kitaev2}. In the quantum information literature, the OTOC is used as a probe to study the dynamics of information. One can probe the macroscopic irreversibility of the dynamics, the spread of quantum information from a localized point to the rest of the system via entanglement and correlations, and also aspects of thermalization \cite{unscrambling,scrambling1, scrambling2}. Consider a chain of interacting spins. Then a correlator of two operators acting at two different sites can be defined as
\begin{equation}\label{eqq}
C_{W,V}(\tau)=\dfrac{1}{2}\langle[W(x, \tau), V(y, 0)]^{\dagger}[W(x, \tau), V(y, 0)]\rangle,
\end{equation}
where the local operators $W$ and $V$ are unitary and/or Hermitian operators acting on sites $x$ and $y$, respectively, and $W(x, \tau)=U^{\dagger}(\tau)W(x, 0)U(\tau)$ is the Heisenberg evolution of the operator $W$ under the time evolution operator $U(\tau)$. The average is taken with respect to the thermal state at some temperature, which we take to be infinite. In particular, if the operators $W$ and $V$ are unitary, the above equation becomes,
\begin{equation}
C_{W, V}(\tau)=1-\mathrm{Re}\langle W(x,\tau)^{\dagger}V(y, 0)^{\dagger}W(x, \tau)V(y, 0)\rangle.
\end{equation}
In classical physics, chaos is defined as sensitive dependence on initial conditions. If we replace $W$ and $V$ in Eq.~(\ref{eqq}) with the position ($Q$) and momentum ($P$) operators and take the semiclassical limit, we notice that $\hbar^2\{Q(\tau), P(0) \}^2=\left(\hbar\frac{\delta Q(\tau)}{\delta Q(0)}\right)^{2} \approx \exp(2\lambda \tau)$. The quantum-classical correspondence principle implies that the quantity $C_{W, V}(\tau)$ grows exponentially until the Ehrenfest time $\tau_{Eh}$. However, unlike in classical systems, the Lyapunov exponent $\lambda$ calculated from the OTOC is bounded by $\frac{2\pi}{\beta}$ \cite{chaos1}. Beyond $\tau_{Eh}$, the quantum corrections start dominating and the quantum-classical correspondence breaks down.
An interesting feature of the OTOC is that it measures the spreading of initially localized operators across the system's degrees of freedom as the operator evolves in the Heisenberg fashion \cite{pawan,ope1, ope2, ope3, ope4, ope5}. Consider a pair of local operators $W$ and $V$ that act on different subspaces of the total Hilbert space ($\mathcal{H}$) under a chaotic time evolution $U(\tau)=\exp(iH\tau)$. We assume that the Hamiltonian is generic with local interactions. Under this evolution, the operator $W$ evolves in time, and it can be expanded in a Taylor series around $\tau=0$ as
\begin{eqnarray}\label{eqq2}
W(\tau)&=&\sum_{n}\dfrac{\tau^n}{n!}\dfrac{d^n W}{d\tau^n}\nonumber\\
&=& W(0)+i\tau[H, W]+\dfrac{(i\tau)^2}{2!}[H,[H, W]]+\cdots
\end{eqnarray}
This implies that the operators $W(\tau)$ and $V$ in general do not commute for time $\tau \neq 0$. For example, consider a one-dimensional Ising spin chain with nearest-neighbor interactions. Let $W(i, \tau=0)=\sigma_{z}^i$ act on site $i$ at time $\tau =0$. On substituting $W$ in the second line of the series in Eq.~(\ref{eqq2}), the first-order commutator gives a sum of products of local operators acting on sites $i-1$, $i$ and $i+1$, i.e., $[H, \sigma_{z}^{i}]=f(i-1, i, i+1)$. As time flows, the higher-order nested commutators also contribute to the expansion of $W(\tau)$, thus making $[W(\tau), V]\neq 0$ \cite{shock1}.
Lieb and Robinson \cite{lieb1972finite} showed that for short-range interacting Hamiltonians the quantity $C_{W, V}$ is bounded, i.e., $C_{W,V}(\tau)\leq ce^{-a(i-v\tau)}$, where $a$ and $c$ are constants and $v$ is called the Lieb-Robinson velocity. This bound on the OTOC implies a light-cone-like structure in quantum lattice models.
It is worthwhile to note that the growth of the OTOC is a quantum measure and can be used in systems with no obvious classical limit.
\section{Deterministic Quantum Computation with one pure qubit (DQC1)}
Single-qubit quantum computation, although limited in applicability, is interesting from a fundamental point of view. Despite involving minimal entanglement, DQC1 gives an advantage over classical computing: it has been shown that no known classical model simulates DQC1 efficiently \cite{animesh}. In this model, we start with a known state of an ancilla or probe qubit and couple it to the system. If the system state is known, we can perform spectroscopy of the controlled operation acting on the system. If instead the operation is known, one can do tomography with the same circuit \cite{scattering}. In both cases, a measurement performed on the ancilla qubit after the interaction reveals information about the system or the operation. The circuit diagram for DQC1 is shown below.
\begin{figure}[!ht]
\centering
\begin{quantikz}
\lstick{$\ket{0}$} & \gate{H} & \ctrl{1} & \gate{H} & \meter{} \\
\lstick{$\ket{\psi_0}$ or $\mathbb{I}/2^{n}$} & \qwbundle[alternate]{} & \gate{U} \qwbundle[alternate]{}& \qwbundle[alternate]{} & \qwbundle[alternate]{}
\end{quantikz}
\caption{Quantum circuit for the DQC1 protocol (when the input is $\mathbb{I}/2^{n}$). The circuit gives an efficient algorithm for trace estimation of a unitary with only one qubit of quantum information.}
\end{figure}
The top qubit (the pure qubit, which is also the control qubit) is acted upon by a Hadamard gate. This transforms the state $\ket{0}$ to $\frac{(\ket{0} +\ket{1})}{\sqrt{2}}$. Then a controlled unitary $U$ is applied, followed by another Hadamard gate. It is to be noted that the controlled unitary $U$ and the state $\ket{\psi_0}$ can belong to an arbitrarily large Hilbert space.
Measuring the control qubit, we observe $\ket{0}$ and $\ket{1}$ with probabilities
\begin{align}
P(0)= \frac{1}{2}(1+ \mathtt{Re} \bra{\psi_0} U \ket{\psi_0}) \nonumber \\
P(1)= \frac{1}{2}(1 - \mathtt{Re} \bra{\psi_0} U \ket{\psi_0}) .
\end{align}
Instead of the pure state $\ket{\psi_0}$, if the lower register of qubits is in a completely mixed state, with density matrix $\rho = \mathbb{I}/2^{n}$, we get
\begin{align}
P(0)= \frac{1}{2}\left(1+ \frac{1}{2^n}\mathtt{Re} (\mathtt{tr}\, U)\right) \nonumber \\
P(1)= \frac{1}{2}\left(1 - \frac{1}{2^n}\mathtt{Re} (\mathtt{tr}\, U)\right) .
\end{align}
By a trivial modification of this scheme, one can make these probabilities depend on $\mathtt{Im} (\mathtt{tr}\, U)$,
and therefore this gives a quantum algorithm to estimate the trace of a unitary matrix. $L$ measurements of the top qubit give an estimate of the trace with fluctuations of size $1/\sqrt{L}$. Therefore, to achieve an accuracy $\epsilon$ one requires $L \sim 1/\epsilon^2$ implementations of the circuit. If $P_e$ is the probability that the estimate departs from the actual value by an amount $\epsilon$, then one needs to run the experiment $L \sim \ln(1/P_e)/\epsilon^2$ times.
This accuracy in the estimate does not scale with the size of the unitary matrix and hence provides an exponential speed-up over the best known classical algorithm, provided the unitary admits an efficient gate decomposition. It is known that if the gate decomposition scales as \textit{poly(n)}, the controlled version of these gates also scales polynomially in $n$.
Moreover, the result is obtained by a measurement of only the top qubit and is hence independent of the size of the readout register.
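This sampling behavior is easy to emulate classically. The following sketch (an illustration with an arbitrary random unitary, not the NMR implementation) simulates $L$ runs of the DQC1 circuit and exhibits the $1/\sqrt{L}$ shrinkage of the error:
\begin{verbatim}
# Sketch: Monte-Carlo simulation of DQC1 trace estimation for a
# random unitary; the choice of U is an arbitrary illustration.
import numpy as np

rng = np.random.default_rng(1)
n = 6
N = 2 ** n
# random unitary via QR decomposition of a complex Gaussian matrix
A = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
U, _ = np.linalg.qr(A)

true_val = np.trace(U).real / N          # quantity DQC1 estimates
p0 = 0.5 * (1 + true_val)                # P(outcome 0) on the probe

for L in (10 ** 2, 10 ** 4, 10 ** 6):
    zeros = rng.binomial(L, p0)          # L runs of the circuit
    estimate = 2 * zeros / L - 1         # invert P(0) = (1 + x)/2
    print(f"L = {L:>7}: estimate = {estimate:+.4f}, "
          f"error = {abs(estimate - true_val):.4f}")
print(f"true Re(tr U)/2^n = {true_val:+.4f}")
\end{verbatim}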
As a last remark, it is worthwhile to note that, while we have assumed the probe qubit to be in a pure state, this is not necessary. With the probe qubit in the state $\alpha \ket{0}\bra{0} + \frac{(1- \alpha)}{2}\mathbb{I}$, the model with even a tiny fraction of a pure qubit is computationally equivalent to the DQC1 circuit described above.
More specifically, the number of runs of the trace estimation algorithm grows as $L \sim \ln(1/P_e)/\alpha^2\epsilon^2$. Therefore, as long as $\alpha$ is non-zero, the circuit provides an efficient estimate of the trace.
\section{Using DQC1 to calculate OTOC}
We now adapt the DQC1 algorithm to measure OTOCs. The circuit is shown in Fig.~\ref{fig2}.
\begin{figure*}[!ht]
\centering
\begin{quantikz}
\lstick{$\ket{0}$} & \gate{H}\slice{$t_1$} & \ctrl{1} &\ifthenelse{\the\pgfmatrixcurrentcolumn>1}{\arrow[arrows]{l}}{}& \ctrl{1} &\ifthenelse{\the\pgfmatrixcurrentcolumn>1}{\arrow[arrows]{l}}{}& \ctrl{1} &\ifthenelse{\the\pgfmatrixcurrentcolumn>1}{\arrow[arrows]{l}}{}& \ctrl{1} & \gate{H} \slice{$t_2$} & \meter{} \\
\lstick{$\ket{\psi_0}$}&\qwbundle[alternate]{} &\gate{V}\qwbundle[alternate]{}&\gate{U_{\tau}}\qwbundle[alternate]{}&\gate{W}\qwbundle[alternate]{}&\gate{U_{\tau}^\dagger}\qwbundle[alternate]{}&\gate{V^\dagger}\qwbundle[alternate]{}&\gate{U_{\tau}^\dagger}\qwbundle[alternate]{}&\gate{W}\qwbundle[alternate]{}&\gate{U_{\tau}}\qwbundle[alternate]{}&\qwbundle[alternate]{}
\end{quantikz}
\caption{This circuit evaluates the expectation value of OTOC with respect to $\ket{\psi_0}.$ Time progresses along the horizontal line. The top register is the single-qubit ancilla or probe. The bottom register is the system on which controlled gates act. When the probe qubit is $\ket{0}$, the system is left unchanged, whereas when the probe is $\ket{1}$, controlled operations take place. Measurement of $\sigma_z$ or $\sigma_y$ is performed on the probe qubit, in the end, revealing the value of OTOC. }
\label{fig2}
\end{figure*}
Here we initialize the probe to $\ket{0}$ and, for simplicity, assume the system is prepared in a pure state $\ket{\psi_0}$. The controlled gates act on the system only when the control qubit is $\ket{1}$. $H$ is the Hadamard gate, and $U_\tau$ is the unitary, determined by a Hamiltonian, which evolves the system up to time $\tau$. The state of the probe $+$ system at time $t_1$ is $\frac{(\ket{0} +\ket{1})}{\sqrt{2}} \otimes \ket{\psi_0}.$ After the controlled operations and the second Hadamard on the probe, at time $t_2$, the combined state is $\frac{1}{2}\ket{0} \otimes (1+\mathcal{U}) \ket{\psi_0}+\frac{1}{2}\ket{1} \otimes (1-\mathcal{U}) \ket{\psi_0}$, where $\mathcal{U}= W_\tau^\dagger V^\dagger W_\tau V.$ Measurement of $\sigma_z \otimes \mathbb{I}$, with $\sigma_z$ on the probe qubit, then yields $\mathtt{Re} \bra{\psi_0}W_\tau^\dagger V^\dagger W_\tau V \ket{\psi_0}$, and measurement of $\sigma_y$ on the probe yields $\mathtt{Im}\bra{\psi_0}W_\tau^\dagger V^\dagger W_\tau V \ket{\psi_0}$. If we perform the circuit sufficiently many times, then we get
\begin{align}
\langle \sigma_z \rangle &= \mathtt{Re} \bra{\psi_0}W_\tau^\dagger V^\dagger W_\tau V \ket{\psi_0} \nonumber \\ \langle \sigma_y \rangle &= \mathtt{Im} \bra{\psi_0}W_\tau^\dagger V^\dagger W_\tau V \ket{\psi_0}
\end{align}
Thus we have obtained the OTOC values. As mentioned previously, assuming we have an efficient gate decomposition and fixing the size of the fluctuations in our answer, the complexity of this algorithm does not scale with the dimension of the Hilbert space of the physical system under consideration. This is not an unreasonable assumption, as efficient decompositions of some quantized chaotic systems are known \cite{benenti2001efficient, emerson2003pseudo, PhysRevA.57.1634} and used in quantum simulations \cite{PhysRevA.68.022302, emerson2002fidelity}.
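The identity $\langle \sigma_z \rangle = \mathtt{Re}\bra{\psi_0}\mathcal{U}\ket{\psi_0}$ can be verified numerically by propagating the circuit state directly. The sketch below does so for a small random Hamiltonian, with random diagonal unitaries standing in for $W$ and $V$ (all of these choices are illustrative assumptions):
\begin{verbatim}
# Sketch: direct check that the probe expectation value of the
# circuit in Fig. 2 reproduces the OTOC matrix element.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
N = 8                                     # 3-qubit "system"
Hs = rng.normal(size=(N, N)); Hs = (Hs + Hs.T) / 2
U_tau = expm(-1j * Hs * 1.0)

# two stand-in unitaries (random diagonal phase unitaries)
Wg = np.diag(np.exp(1j * rng.normal(size=N)))
Vg = np.diag(np.exp(1j * rng.normal(size=N)))
W_tau = U_tau.conj().T @ Wg @ U_tau       # Heisenberg-evolved W
calU = W_tau.conj().T @ Vg.conj().T @ W_tau @ Vg

psi0 = rng.normal(size=N) + 1j * rng.normal(size=N)
psi0 /= np.linalg.norm(psi0)

# state after the second Hadamard:
# (|0>(1 + calU)|psi0> + |1>(1 - calU)|psi0>)/2
plus = 0.5 * (psi0 + calU @ psi0)
minus = 0.5 * (psi0 - calU @ psi0)
sz = plus.conj() @ plus - minus.conj() @ minus    # <sigma_z>
otoc = psi0.conj() @ (calU @ psi0)
print(np.allclose(sz.real, otoc.real))    # True: <sigma_z> = Re OTOC
\end{verbatim}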
In the above, the inherent assumption is that the initial state of the system is perfectly known. By taking the initial state $\ket{\psi_0}\bra{\psi_0}$ to be completely mixed, that is, proportional to $\mathbb{I}$, we instead obtain the trace of the OTOC, which is its expectation value with respect to a thermal state at infinite temperature. Therefore, the OTOC with respect to the thermal state at infinite temperature is a perfect candidate for implementation with DQC1, which employs only one qubit of quantum information: a happy accident.
\section{Estimating the eigenvalue spectrum of OTOC}
Not only the expectation value of OTOCs but also their eigenvalue spectrum is of interest. Just as the energy level spacings of integrable and chaotic systems form distinct distributions, the level spacings of OTOCs also show marked differences \cite{spectrum2, spectrum1}. One can obtain the eigenvalue density of OTOCs using a DQC1 algorithm. The circuit is similar to the previous one, but now, apart from the $n$-qubit register for the system, we also need an extra $n_2$-qubit ancilla and perform discrete Fourier transforms.
The circuit is shown in Fig. \ref{fig3}.
\begin{figure*}[!ht]
\begin{quantikz}
\lstick{$\ketbra{0}{0}$} & \gate{H} & \ctrl{1} &\ifthenelse{\the\pgfmatrixcurrentcolumn>1}{\arrow[arrows]{l}}{}& \ctrl{1} &\ifthenelse{\the\pgfmatrixcurrentcolumn>1}{\arrow[arrows]{l}}{}& \ctrl{1} & \gate{H} &\ifthenelse{\the\pgfmatrixcurrentcolumn>1}{\arrow[arrows]{l}}{}& \meter{} \\
\lstick{$\ketbra{u}{u}$} & \qwbundle[alternate]{}& \gate{FT}\qwbundle[alternate]{} & \qwbundle[alternate]{} &\ctrl{1}\qwbundle[alternate]{}&\qwbundle[alternate]{}& \gate{FT}\qwbundle[alternate]{}& \qwbundle[alternate]{}&\qwbundle[alternate]{}&\qwbundle[alternate]{} \\
\lstick{$\rho_0$} & \qwbundle[alternate]{} &\qwbundle[alternate]{}&\qwbundle[alternate]{}& \gate{W_\tau^\dagger V^\dagger W_\tau V}\qwbundle[alternate]{} &\qwbundle[alternate]{} &\qwbundle[alternate]{}&\qwbundle[alternate]{}&\qwbundle[alternate]{}&\qwbundle[alternate]{}
\end{quantikz}
\caption{Circuit for obtaining the spectral density of OTOC. Now there are two ancillas. Controlled Fourier transform is applied twice on the second ancilla. The operation $W_\tau^\dagger V^\dagger W_\tau V$ which acts on the system is written in a condensed form and should be implemented by decomposing into constituent gates as in Fig. \ref{fig2}. Only the single-qubit probe/ancilla is measured in the end as before. }
\label{fig3}
\end{figure*}
In this circuit, $\ket{u}$ is a computational-basis state of the second ancilla register of $n_2$ qubits, labeled by the integer $u$. The OTOC operator, $W_\tau^\dagger V^\dagger W_\tau V$, can be implemented as before.
Let $N_2=2^{n_2}$. At the end of the circuit, measuring $\sigma_z$ and $\sigma_y$ on the probe qubit as before, we get
\begin{equation}
f(u)= \frac{1}{N_2} \sum_{s=0}^{N_2-1} \mathrm{exp}(i4 \pi u s/N_2)\, \mathtt{tr}[(W_\tau^\dagger V^\dagger W_\tau V)^s\rho_0],
\end{equation}
where $s$ is the Fourier-domain variable of $u$. The spectral information is contained in the phases and can thus be estimated. Normalizing so that $\sum_{u=0}^{N_2-1} f(u)=1$, we get the probability distribution of the eigenvalues. The resolution of the spectral density is determined by the number of ancilla qubits $n_2$. As in the previous case, the DQC1 implementation provides an exponential speed-up in obtaining the spectral density over any known classical algorithm.
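A classical emulation of this spectral estimation is sketched below: the quantities $\mathtt{tr}[(W_\tau^\dagger V^\dagger W_\tau V)^s\rho_0]$, which the circuit estimates run by run, are Fourier transformed over $s$. A $2\pi$ phase convention and a random stand-in unitary are our assumptions here; the real part of the transform gives the (Dirichlet-kernel-smeared) eigenphase density.
\begin{verbatim}
# Sketch: classical emulation of the spectral-density circuit.
import numpy as np

rng = np.random.default_rng(2)
N, n2 = 8, 5
N2 = 2 ** n2
A = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
otoc, _ = np.linalg.qr(A)                 # stand-in unitary OTOC
rho0 = np.eye(N) / N                      # infinite-temperature state

# traces tr[otoc^s rho0] for s = 0 .. N2-1 (what the probe estimates)
traces, M = [], np.eye(N, dtype=complex)
for s in range(N2):
    traces.append(np.trace(M @ rho0))
    M = M @ otoc

u = np.arange(N2)
f = np.array([np.sum(np.exp(2j * np.pi * u_ * np.arange(N2) / N2)
                     * traces) / N2 for u_ in u]).real
f /= f.sum()                              # normalize to a probability
print("eigenphase histogram f(u):", np.round(f, 3))
\end{verbatim}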
\section{Conclusion}
We have shown that, using a single bit of quantum information, one can estimate OTOCs with an exponential speed-up over the best known classical algorithm. In the spirit of the slogan ``classical chaos generates classical information, as captured by classical Lyapunov exponents and the classical Kolmogorov-Sinai entropy, quantum chaos generates quantum information'', OTOCs, whose growth (till the Ehrenfest time) captures this generation of quantum information, have become popular quantifiers of quantum chaos. In this work, we have given an efficient quantum algorithm for estimating OTOCs and capturing the growth of quantum complexity.
One possible avenue is to estimate semiclassical formulas, like the Gutzwiller trace formula, on a quantum computer.
There are existing algorithms for this \cite{georgeot08} that give a
polynomial speed-up over similar implementations on a classical
computer.
We aim to explore the possibility of such computations using the DQC1 model of quantum computation, which can even operate on highly mixed initial states. One can also consider a perturbed OTOC, in which the operator $W_\tau^\dagger$ that occurs in $W_\tau^\dagger V^\dagger W_\tau V$ undergoes time evolution under a slightly perturbed Hamiltonian compared to $W_\tau$, and which therefore provides a direct analog to classically chaotic systems under stochastic noise.
Moreover, understanding the power behind DQC1 is still an open question. Future directions include determining the nature of the resources quantum mechanics provides for information processing tasks that are superior to their classical counterparts, as well as other avenues where mixed-state quantum computation can be applied.
\section{Introduction}
Solar prominences, also called filaments when they are viewed on-disk, are long-observed but still not well-understood structures in the solar atmosphere. Since they are outstanding features in multiple-wavelength observations of the Sun and have close relationships with various solar eruptive phenomena, prominences are always among the major topics in solar and space physics. The key issues in prominence/filament studies are their formation, maintenance, dynamic processes, and their roles in other related solar activities, e.g., coronal mass ejections (CMEs) and flares. With the aid of modern sensor technology, many facts about prominences have been revealed \citep[e.g.,][and the references therein]{Poland_1986, Tandberg-Hanssen_1995, Martin_1998, Patsourakos_Vial_2002}. Prominences are dense (electron density $\sim10^{9}-10^{11}$ cm$^{-3}$) and cool ($\sim5000-8000$ K) plasmas floating in the hot and diluted solar corona \citep[e.g.,][]{Engvold_Brynildsen_1986, Hiei_etal_1986, Hirayama_1986, Madjarska_etal_1999}. They can appear anywhere from active regions to polar regions, and live for days to months. They have spines and barbs, and always straddle polarity inversion lines. There are sometimes strong counterstreamings along spines and very dynamic vertical flows. The chirality of prominences/filaments obeys the pattern that most prominences in the northern hemisphere are dextral while most in the southern hemisphere are sinistral. The association rate of eruptive prominences with CMEs is more than about 70\% \citep[e.g.,][]{Gilbert_etal_2000, Gopalswamy_etal_2003}.
Many of the above findings were made through statistical investigations combined with case studies. A continuously updated catalog of prominences with unbiased parameters is undoubtedly helpful for such research, especially in the age of the explosive growth of observational data. For instance, the successful launch of the STEREO (Solar Terrestrial Relations Observatory) spacecraft (A and B) in 2006 led to the amount of solar observations growing explosively to more than 12 GB a day, and it has now increased to about 2 TB a day from SDO (Solar Dynamics Observatory), which was launched in February 2010. NOAA/SWPC\footnote{http://www.swpc.noaa.gov/Data/index.html} routinely compiles a list of solar events, in which on-disk filaments and limb eruptive prominences are included, but the list is far from complete. Thanks to the unique properties of prominences/filaments, they can be clearly observed at multiple wavelengths, such as H$\alpha$, He I 10830~\AA, He II 304~\AA, radio waves, etc. \citep[e.g.,][]{Schmahl_etal_1974, Hanaoka_etal_1994, Penn_etal_1994, ChiuderiDrago_etal_2001, Labrosse_Gouttebroze_2001}, and it is therefore possible to extract them from the vast amount of data automatically and consistently.
The recognition of on-disk filaments and that of limb prominences are different tasks. The former is mainly accomplished by studying H$\alpha$ data. For example, \citet{Gao_etal_2002}, \citet{Shih_Kowalski_2003}, \citet{Fuller_etal_2005} and \citet{Zharkova_etal_2005} developed codes to automatically detect filaments in full-disk H$\alpha$ images. The automated system developed by \citet{Bernasconi_etal_2005} is able to detect, classify and track H$\alpha$ filaments efficiently. EUV observations are much more difficult to use for detecting on-disk filaments due to the low contrast of filaments and the involvement of coronal holes. However, EUV observations are suitable for limb prominence detection. Through the use of Fe IX/X 171~\AA, Fe XII 195~\AA, Fe XV 284~\AA\ and He II 304~\AA\ images from the SOHO/EIT instrument \citep{Delaboudiniere_etal_1995}, \citet{Foullon_Verwichte_2006} developed algorithms to recognize limb prominences. In their method, He II 304~\AA\ data provide the basic criteria for the selection of candidate prominence regions, and other emission lines are used to remove active regions, which also appear bright in EUV 304~\AA. Most recently, \citet{Labrosse_etal_2010} also developed an efficient code to detect limb prominences in EUV 304~\AA\ images.
In this paper, we present an automated system for detecting and tracking solar limb prominences based on He II 304~\AA\ data alone, as well as a resultant on-line catalog, which can be continuously updated. The performance and limitations of the system are presented in section \ref{sec_performance}. Based on our catalog, some preliminary statistical results on solar limb prominences are also presented. The reasons we choose the He II 304~\AA\ emission line rather than H$\alpha$ are the following. First, for prominence/filament observations, the He II 304~\AA\ line is the only one uninterruptedly imaging the Sun with high cadence (operated by the space-borne instruments SECCHI/EUVI on board the STEREO twins, and AIA on board SDO). A complete database of limb prominences is therefore possible to establish. Second, the high time resolution of the data allows us to track their evolution, even small changes. Third, the projection effect can be minimized for certain parameters, such as height, radial speed, etc. Fourth, the results are complementary to the catalogs of on-disk filaments. Fifth, there is so far no well-established on-line catalog for limb prominences.
\section{Method}
Our system consists of five modules. The first module is to select prominence candidates; the second one is to extract necessary parameters for further usage; the third one is to discriminate prominences from other non-prominence features, such as active regions and noise; the fourth one is to track the prominences for the evolution; and the last one is to generate a catalog of prominences with final parameters. Here we use EUVI 304~\AA\ data from STEREO-B/SECCHI to illustrate these processes.
\subsection{Module 1: Prominence Candidate Selection}\label{sec_module_1}
The functions of module 1 are illustrated in Figure \ref{fg_image_processing}. A raw EUVI 304~\AA\ image is shown in Figure \ref{fg_image_processing}a. The background brightness above the limb generally decreases with increasing distance, $r$, from the solar center. Similarly, the prominences near the solar surface are much brighter than those at high altitude (for example, compare prominences A and B marked in the image). The variation of prominence brightness with $r$ is further discussed in Sec.\ref{sec_fading}. Although region B is too dark to be noticed in the raw image, we still consider it a prominence candidate because of its higher density compared to the ambient coronal plasma.
The first step of processing is to use a technique similar to the normalizing-radial-graded filter \citep{Morgan_etal_2006} to rescale the brightness so that the contrast is independent of $r$. To do this, a background image is first created, which is a circularly symmetric image with respect to the center of the solar disk, as shown in Figure \ref{fg_image_processing}b. The pixel value at any $r$ is simply the average value of all the pixels along the circle at $r$ in the original image. It is obvious that the brightness of the background plasma drops quickly as $r$ increases. Then, we obtain the rescaled image (Fig.\ref{fg_image_processing}c) by using the following formula
\begin{eqnarray}
\mathrm{Rescaled\ Image}=\frac{\mathrm{Original\ Image\ (Fig.\ref{fg_image_processing}a)}+\delta}{\mathrm{Background\ Image\ (Fig.\ref{fg_image_processing}b)}+\delta}
\end{eqnarray}
Here, $\delta$ is a small value to avoid division by near-zero values. Both prominences A and B become much clearer in the rescaled image. For the STEREO-B/EUVI 304~\AA\ images, $\delta$ is chosen to be 5 through trial and error; however, this number may change for other instruments, depending on the signal-to-noise ratio.
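For reference, a minimal NumPy re-implementation of this rescaling step might look as follows (the production code is written in IDL; \texttt{xc}, \texttt{yc} and \texttt{rsun} are assumed to come from the FITS header):
\begin{verbatim}
# Sketch of the radial rescaling in module 1 (an independent NumPy
# re-implementation). `img` is a full-disk image; `xc`, `yc`, `rsun`
# are the disk center and radius in pixels.
import numpy as np

def rescale_radial(img, xc, yc, rsun, delta=5.0):
    ny, nx = img.shape
    y, x = np.mgrid[0:ny, 0:nx]
    r = np.hypot(x - xc, y - yc)                # radius of each pixel
    rbin = np.round(r).astype(int)              # 1-pixel radial bins
    # circularly symmetric background: mean brightness at each radius
    sums = np.bincount(rbin.ravel(), weights=img.ravel())
    counts = np.bincount(rbin.ravel())
    background = (sums / np.maximum(counts, 1))[rbin]
    return (img + delta) / (background + delta) # (img+d)/(bg+d)
\end{verbatim}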
\begin{figure*}[tbh]
\centering
\includegraphics[width=\hsize]{f01.eps}
\caption{A sample image on 2007 October 8 to illustrate the processes of selecting prominence candidates. (a) The raw EUVI 304~\AA\ image,
(b) circular symmetrical background image, (c) rescaled image, (d) binary image of
selected kernels, (e) binary image of possible prominence regions,
and (f) the rescaled image with the boundaries of the recognized
regions. The bright patch above the east limb in (f) is an active
region, which will be removed by module 3.}\label{fg_image_processing}
\end{figure*}
The further recognition, which applies the technique of region growing with certain thresholds, is based on the rescaled image. The following process is similar to those of, e.g., \citet{Gao_etal_2002} and \citet{Bernasconi_etal_2005}, and thus we only briefly describe it here. First, we set a threshold $th_{knl}$ to pick all the pixels with larger values as kernels. The searching region is from 1 $R_\sun$ to $r_{max1}$, where $r_{max1}$ is an upper boundary, above which no kernel is selected. The boundary $r_{max1}$ is needed because the signal-to-noise ratio becomes low when approaching the edge of the telescope's field of view. For STEREO-B/SECCHI EUVI images, we choose the value of 1.7 $R_\sun$. The selected kernels serve as the seeds, from which the whole prominence regions grow out. Figure \ref{fg_image_processing}d is a binary image showing the kernels. Here some small kernels, which are isolated pixels due to the presence of noise, have been removed by applying a morphological {\it opening} operator with a box size of $s_n\times s_n$. Second, we let these kernels grow by setting another, smaller threshold $th_{pro}$, i.e., all neighboring pixels with values larger than $th_{pro}$ are included in the growing regions. For cases where several regions are close to each other but not connected, we use a morphological {\it closing} operator with a box size of $s_m\times s_m$ to merge them together. Regions whose areas are smaller than $th_{area}$ are discarded to further prevent noise-like features from being included. The resultant regions are the candidate prominences, as shown in Figure \ref{fg_image_processing}e. Figure \ref{fg_image_processing}f is obtained by superimposing the boundaries of the recognized regions on the rescaled image. The arguments discussed above are listed in Table \ref{tb_arguments}; a code sketch of this step is given below.
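A possible re-implementation with \texttt{scipy.ndimage} is the following sketch (again illustrative; here \texttt{r} is assumed to be an array of pixel radii in units of $R_\sun$, and the area threshold is expressed in pixels rather than Mm$^2$):
\begin{verbatim}
# Sketch of the kernel selection and region growing of module 1;
# th_knl, th_pro, s_n, s_m follow Table 1.
import numpy as np
from scipy import ndimage

def candidate_regions(rescaled, r, th_knl=2.0, th_pro=1.7,
                      r_max1=1.7, s_n=5, s_m=5, th_area_pix=500):
    kernels = (rescaled > th_knl) & (r > 1.0) & (r < r_max1)
    kernels = ndimage.binary_opening(kernels, np.ones((s_n, s_n)))
    grow = rescaled > th_pro                 # allowed growth area
    labels, nlab = ndimage.label(grow)
    # keep only grown regions that contain at least one kernel seed
    seeded = np.unique(labels[kernels])
    mask = np.isin(labels, seeded[seeded > 0])
    mask = ndimage.binary_closing(mask, np.ones((s_m, s_m)))
    labels, nlab = ndimage.label(mask)       # relabel merged regions
    areas = ndimage.sum(mask, labels, index=np.arange(1, nlab + 1))
    keep = np.isin(labels, 1 + np.flatnonzero(areas >= th_area_pix))
    return keep                              # binary candidate map
\end{verbatim}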
\subsection{Module 2: Parameter Extraction}\label{sec_module_2}
Once we have the boundary of a region of interest, the extraction of the parameters of the region is straightforward. According to the scaling information in the header of the FITS file of the image, we can calculate the area ($A$) and average brightness ($F$) of the region, the minimum and maximum positions in both the radial and azimuthal directions ($r_{bot}$, $r_{top}$, $\theta_{min}$, and $\theta_{max}$), and the centroid of brightness ($r_{cen}$ and $\theta_{cen}$). It should be noted that, as the signal-to-noise ratio decreases significantly near the edge of the field of view, we set another upper boundary $r_{max2}$, slightly larger than $r_{max1}$, and consider the parameters of any prominence extending into the region above it to be potentially unreliable.
\begin{figure}[tbhp]
\centering
\includegraphics[width=0.5\hsize]{f02.eps}
\caption{Extracting the spine of a prominence. (a) Original region, (b) Skeleton, (c) Spine, (d) Smoothed spine.}\label{fg_skeleton}
\end{figure}
Further, we linearize (i.e., extract the spine of) the region to obtain certain morphological information. Figure \ref{fg_skeleton} presents a sample. We use the morphological {\it thin} operator \citep[refer to, e.g.,][]{Lam_etal_1992} to obtain the skeleton (panel b) of the prominence of interest (panel a). Usually, a skeleton is too intricate because many branches are involved. To remove trivial branches, we first calculate the length (weighted by the rescaled brightness) of each branch. Then, for branches connecting to the same node, we compare their lengths and keep the longest one. The above steps are iterated until only two ends remain (panel c). The resultant curve is further smoothed to obtain the spine (panel d) by applying a 2-dimensional mean filter. One should note that any region of interest will finally be simplified to a line with only two ends, even if it actually has three or more ends/footpoints.
Since most prominences are loop-like structures in morphology, we take the length of the spine as the characteristic length ($L$) of the recognized region. Meanwhile, the obtained spine can be used in the 3D reconstruction of a prominence if it is viewed from two different visual angles at the same time (e.g., by the STEREO twins or combined with SOHO), which will be specifically studied in another paper. A simplified sketch of the spine extraction is given below.
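The sketch replaces the iterative brightness-weighted pruning with the longest endpoint-to-endpoint skeleton path, which yields a similar two-ended spine (an illustrative simplification, not the catalog's IDL implementation):
\begin{verbatim}
# Sketch of spine extraction (module 2) with scikit-image/networkx.
import numpy as np
import networkx as nx
from skimage.morphology import skeletonize

def spine(mask):
    """Longest endpoint-to-endpoint path of the region's skeleton."""
    skel = skeletonize(mask)
    pix = set(zip(*np.nonzero(skel)))
    G = nx.Graph()
    for (y, x) in pix:                   # 8-connected skeleton graph
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                if (dy or dx) and (y + dy, x + dx) in pix:
                    G.add_edge((y, x), (y + dy, x + dx))
    ends = [p for p in G if G.degree(p) == 1]
    best, best_len = [], -1              # longest end-to-end path
    for i, a in enumerate(ends):
        for b in ends[i + 1:]:
            if nx.has_path(G, a, b):
                path = nx.shortest_path(G, a, b)
                if len(path) > best_len:
                    best_len, best = len(path), path
    return best                          # list of (y, x) spine pixels
\end{verbatim}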
\subsection{Module 3: Non-prominence Feature Removal}
While photons in EUV 304~\AA\ images mainly come from He II emission, they also contain contamination from hot coronal lines \citep[e.g.,][]{Zhang_etal_1999}. As a result, prominences are not the only bright features at EUV 304~\AA\ wavelength; active regions also appear bright. In the work of \citet{Foullon_Verwichte_2006}, the authors realized this fact and used observations in other wavelengths to exclude active regions from their detected bright regions. The regions recognized through our first two modules also contain active regions and some noisy features. We do not, however, try to involve other observations in our detection, which would make the system more intricate and prone to additional errors. In our system, the previously extracted parameters of each recognized region are used to discriminate real prominences from these non-prominence features, as discussed below.
Prominences have a different appearance from other features. For example, in morphology, a prominence usually looks like a loop or stick, while an active region is shaped like a round blob. In brightness, a prominence is almost flat over radial distance in the rescaled image, while an active region is not. Thus one can use a classification method to remove the non-prominence features. There are many classification methods, e.g., linear discriminant analysis (LDA), support vector machines (SVM) and neural networks (NN) \citep[e.g.,][]{Meyer_etal_2003}. The method we adopt here is linear discriminant analysis. One can refer to the paper by, e.g., \citet{Fisher_1936} for the principle of linear discriminant analysis.
Through many tests, the parameters $\ln A$ (standing for the size of a region), $\ln\frac{A}{L}$ (for the shape) and $\ln \chi_F^2$ (for the variation in brightness, where $\chi_F^2$ is the value of the chi-square goodness-of-fit statistic for the brightness $F$ as a linear function of distance $r$) are chosen to construct the linear discriminant function (LDF). Our sample contains 5066 regions from a total of 3780 images (4 images per day, near 00:00, 06:00, 12:00 and 18:00 UT, respectively, from 2007 April 1 to 2009 October 31). Each region is checked by eye to determine which group it belongs to, prominences or non-prominences. On the basis of this large collection of features of known classification, or the truth table, we derive the LDF as
\begin{eqnarray}
X=1.460\ln A+1.103\ln\frac{A}{L}-0.491\ln \chi_F^2 \label{eq_lda}
\end{eqnarray}
or
\begin{eqnarray}
X=14.20\frac{\ln A}{\langle\ln A\rangle}+4.70\frac{\ln\frac{A}{L}}{\langle\ln\frac{A}{L}\rangle}+1.36\frac{\ln \chi_F^2}{\langle\ln \chi_F^2\rangle}
\end{eqnarray}
where the quantity $\langle f\rangle$, the mean value of $f$ calculated from our truth table, is used to normalize the parameters so that we can learn the importance of the parameters from their coefficients. The area $A$ is the most important parameter for discriminating a prominence from other features because it has the largest coefficient, 14.20. It should be noted, however, that some very big prominences might be missed due to the dominant role of area in the discrimination (for example, the erupting prominence on 2009 November 2). But such misses usually occur in only a few frames, and therefore do not significantly affect the tracking of the whole evolution process of prominences identified in other frames.
Figure \ref{fg_lda} shows the discriminant result. It can be seen that the group of prominences (labeled G1) generally have different LDF values from the group of non-prominence features (G0). Since there is still an overlap between the two groups, we evaluate the goodness of LDF by
\begin{eqnarray}
G=1-\frac{n_o}{n}
\end{eqnarray}
where $n_o$ is the number of regions whose LDF value falls within the overlap, and $n$ is the total number of regions in the truth table. In other words, the value of $G$ is the ratio of the area of non-overlapped regions to the sum of the areas occupied by the two groups in Figure \ref{fg_lda}. $G=1$ means the LDF can completely discriminate the two groups. In our case, the goodness is about 0.86.
\begin{figure}[tbh]
\centering
\includegraphics[width=0.9\hsize]{f03.eps}
\caption{Result of the linear discriminant analysis of the truth table. The two groups, prominences (labeled G1) and non-prominence
features (G0), are indicated in red and blue colors,
respectively.}\label{fg_lda}
\end{figure}
Based on Eq.\ref{eq_lda}, we can calculate the LDF value of any recognized region, and compare it with the derived distribution of the LDF values, which is fitted with Gaussian distribution functions as shown by the curves in Figure \ref{fg_lda}, to determine how likely the region is a prominence. The likelihood of a region being a prominence is given by
\begin{eqnarray}
P=\frac{m_1}{m_0+m_1}
\end{eqnarray}
where $m_0$ and $m_1$ are the values of the fitted Gaussian distribution functions corresponding to the LDF value for group 0 and 1, respectively. A region with $P\leq50\%$ is treated as a non-prominence feature and discarded.
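The whole of module 3 can be prototyped in a few lines with \texttt{scikit-learn} and \texttt{scipy} (a sketch under the assumption that \texttt{features} holds the three parameters of Eq.~(\ref{eq_lda}) for the truth-table regions, as a NumPy array, and \texttt{is\_prom} their manual 0/1 classification):
\begin{verbatim}
# Sketch of module 3: fit an LDA direction on the truth table, then
# convert a new region's LDF value into the likelihood P via
# Gaussians fitted to the two groups.
import numpy as np
from scipy.stats import norm
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def train(features, is_prom):
    lda = LinearDiscriminantAnalysis(n_components=1)
    lda.fit(features, is_prom)
    x = lda.transform(features).ravel()     # LDF values X
    g0 = norm.fit(x[is_prom == 0])          # (mean, sigma) of group 0
    g1 = norm.fit(x[is_prom == 1])          # (mean, sigma) of group 1
    return lda, g0, g1

def likelihood(lda, g0, g1, feat):
    x = lda.transform(np.atleast_2d(feat)).ravel()
    m0 = norm.pdf(x, *g0)
    m1 = norm.pdf(x, *g1)
    return m1 / (m0 + m1)                   # P; discard if P <= 0.5
\end{verbatim}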
\subsection{Module 4: Prominence Tracking}\label{sec_module_4}
Our method of tracking the evolution of a prominence is quite simple. Figure \ref{fg_tracking} is the flow chart showing how we track a prominence. The top of the flow chart is a prominence to be tracked in an image, and the bottom of the chart gives the four possible results. Since the flow chart is detailed enough, we do not repeat it here. There are only a few points that we would like to make.
\begin{figure*}[tbh]
\centering
\includegraphics[width=\hsize]{f04.eps}
\caption{Flow chart to illustrate the prominence tracking process.}\label{fg_tracking}
\end{figure*}
First, the criterion used to judge whether a prominence region has evolved from a prominence region in the previous image is to check whether or not there is an overlap between them in the spatial domain. This requires that the cadence of the data be high enough, especially when studying a fast erupting prominence. According to our statistical results (see Fig.\ref{fg_cadence}), which are plotted based on our catalog (refer to Sec.\ref{sec_catalog}), most prominences move with a speed of about 4 km/s or less in the radial or azimuthal direction; a few may reach more than one hundred km/s. Considering that their characteristic length is $L\approx 60$ Mm, it is inferred that a cadence of 4 hours (for a 4 km/s speed, or a cadence of roughly 15 minutes for 100 km/s) is sufficient for prominence tracking, which is much coarser than the 10-minute cadence of the STEREO/SECCHI EUVI 304~\AA\ data, as the simple estimate below illustrates.
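The estimate is just the time for a prominence to move past its own characteristic length (an illustrative computation, not part of the pipeline):
\begin{verbatim}
# Back-of-the-envelope check of the tracking requirement.
L_char = 60e3        # km (~60 Mm characteristic length)
for v in (4.0, 100.0):                    # km/s, typical and extreme
    hours = L_char / v / 3600.0
    print(f"v = {v:5.1f} km/s -> overlap lost after ~{hours:.2f} h")
# 4 km/s -> ~4.2 h; 100 km/s -> ~0.17 h (about 10 min); in both
# cases comparable to or coarser than the 10-min EUVI cadence, so
# consecutive detections overlap.
\end{verbatim}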
\begin{figure*}[tbh]
\centering
\includegraphics[width=0.32\hsize]{f05a.eps}
\includegraphics[width=0.32\hsize]{f05b.eps}
\includegraphics[width=0.32\hsize]{f05c.eps}
\caption{Histograms of the radial ({\it left panel}) and azimuthal speed ({\it middle panel}) of the centroid of prominences and the characteristic length ({\it right panel}). The average values are marked in the plots. The upper limits of the $x$-axes are chosen to make the plots readable, but do not mean the maximum values (The same treatment is made to the Fig.\ref{fg_duration} and \ref{fg_compare}).}\label{fg_cadence}
\end{figure*}
Second, we use a time threshold $th_{dis}$ to determine whether or not a prominence has disappeared; i.e., if a previously named prominence has not been found in the successive images for a duration of $th_{dis}$, it is treated as having disappeared. A previously detected prominence may temporarily and intermittently `disappear'. Such a `disappearance' may not be real; it may result from the unstable quality or jittering of images (although we have applied some treatments to the original data, as mentioned in Sec.\ref{sec_module_1}) and/or small changes of the prominence itself, which cause the brightness of the prominence to drop below the threshold $th_{knl}$ or even $th_{pro}$ temporarily. Note that this situation only happens to some small and/or faint prominences, not to major ones. Setting a relatively long duration $th_{dis}$ allows the entire evolution process of a prominence to be tracked efficiently. Here we let $th_{dis}=2$ hours.
Third, in our tracking process, the case that a prominence splits into two or more parts is considered (see the third result in the flow chart). However, we do not deal with the case of merging, in which more than one prominence region (for example, A and B) in the previous image merges and is associated with only one prominence region (say C) in the current image. The merging of prominences is ambiguous, as the phenomenon can also be interpreted as prominence region A (or B) disappearing and region B (or A) evolving into region C. In that scenario, no region merging takes place.
Fourth, if a prominence is identified as a new one (see the left side of the flow chart), we will check if it connects to the solar surface. Only those rooted on the Sun are considered as real prominences. This justification is based on the assumption that no newly emerged prominence is disconnected from the Sun.
\subsection{Module 5: Catalog Generating}\label{sec_catalog}
\begin{table*}[t]
\caption{List of the arguments used by SLIPCAT for STEREO-B/EUVI
304~\AA\ data} \label{tb_arguments}
\begin{tabular}{lccp{285pt}}
\hline
Arguments & Values &Units & Interpretation \\
\hline
$\delta$ $^a$ & 5.0 & & A small value used in creating rescaled images (see Sec.\ref{sec_module_1}). \\
$th_{knl}$ $^a$ & 2.0 & & A threshold for kernel selection (see Sec.\ref{sec_module_1}). \\
$th_{pro}$ $^a$ & 1.7 & & A threshold for region growing (see Sec.\ref{sec_module_1}). \\
$r_{max1}$ $^b$ & 1.7 & $R_\sun$ & An upper boundary in $r$, above which there is no selected kernel (see Sec.\ref{sec_module_1}). \\
$r_{max2}$ $^b$ & 1.73 & $R_\sun$ & An upper boundary in $r$. The parameters of any prominence extending into the region above it is considered to be unreliable (see Sec.\ref{sec_module_2}). \\
$s_n$ $^c$ & 5 & pixels & Define a box used to remove noise-like kernels (see Sec.\ref{sec_module_1}). \\
$s_m$ $^c$ & 5 & pixels & Define a box used to merge regions which are very close to each other (see Sec.\ref{sec_module_1}). \\
$th_{area}$ & 500 & Mm$^2$ & A threshold to remove very small regions (see Sec.\ref{sec_module_1}). \\
$th_{dis}$ & 2 & hours & A threshold to judge if a prominence has disappeared (see Sec.\ref{sec_module_4}). \\
\hline
\end{tabular}\\
$^a$ Pixel values in rescaled images, probably changing for different instruments.\\
$^b$ Depending on the range of field of view and the signal-to-noise ratio. \\
$^c$ Need to be changed for different spatial resolution of images.
\end{table*}
\begin{table*}[t]
\caption{List of the primary parameters extracted for each prominence at a certain time}
\label{tb_parameters}
\begin{tabular}{lp{395pt}}
\hline
Parameter & Interpretation \\
\hline
($r_{cen}$, $\theta_{cen}$) & Coordinates of the centroid of the brightness. \\
$r_{bot}$, $r_{top}$ & Give the span of a prominence in the radial direction. \\
$\theta_{min}$, $\theta_{max}$ & Give the span of a prominence in the angular direction. \\
$A$ & Area of a prominence in the units of Mm$^2$. \\
$L$ & Characteristic length of a prominence in the units of Mm. \\
$F$ & Average brightness recorded in the original image. \\
$P$ & Likelihood of a recognized region to be a prominence. \\
\hline
\end{tabular}
\end{table*}
Solar prominences above the limb are identified and tracked by the above four modules. All the input arguments to SLIPCAT for the STEREO-B/SECCHI EUVI 304~\AA\ data are summarized in Table \ref{tb_arguments}, and the primary parameters extracted for each prominence at a certain time are summarized in Table \ref{tb_parameters}. Since we have the parameters of each prominence in time sequence, it becomes possible to automatically extract its kinematic evolution. For example, we can derive the velocity $\ve v_c$ and acceleration $\ve a_c$ of the centroid from the variations of $r_{cen}$ and $\theta_{cen}$ with time. We can also obtain peak and average values of the above parameters, such as $A_{max}$, $A_{ave}$, etc. Moreover, for each prominence, we assign a confidence level by the following formula
\begin{eqnarray}
C=\left\{\begin{array}{lc}
1, & \overline{P}>90\% \\
2, & 75\%<\overline{P}\le 90\% \\
3, & 50\%<\overline{P}\le75\% \\
\end{array}\right.
\end{eqnarray}
where $\overline{P}$ is the average value of the likelihood of the prominence over its period of tracking. A resultant online catalog is established at \url{http://space.ustc.edu.cn/dreams/slipcat/}, where one can find all the final output parameters. The analyses in the following sections are based on the parameters in the catalog.
We note that it is straightforward to apply the system to data from other spacecraft, e.g., SDO/AIA data (which is in our plans), though we present here only STEREO-B data. The only part we need to modify is the set of input arguments listed in Table \ref{tb_arguments}. Besides, the code is written in the IDL language, and modules 1-3 and modules 4-5 can run separately. Modules 1-3 do not depend on the causal relationship among the images, thus they can process images in serial or in parallel. Using a computer with 2.33 GHz Intel Xeon CPUs, 3 GB memory and a Linux operating system, it takes 36 seconds on average for modules 1-3 to process one full-size EUVI image and only 0.29 seconds for modules 4-5. Thus, to process a sequence of images over one day with the cadence of 10 minutes, about 1.45 hours are needed. The SDO/AIA data have higher resolution ($4096\times4096$ pixels) and faster cadence (10 seconds), which will require modules 1-3 to spend much more time. It is estimated that it would take about 347 hours to serially process one day of such images on the machine mentioned above. Thus, to reduce the processing time to less than 24 hours, a small cluster with 15 or more CPUs is needed to run modules 1-3 in parallel, which should be affordable.
\section{Performance and Limitations}\label{sec_performance}
SLIPCAT detects 19140 prominences in the STEREO-B data from the beginning of April 2007 to the end of October 2009. Figure \ref{fg_duration} shows the distributions of the duration of the prominences and of the number of frames in which a prominence is detected. It is found that 6348 prominences (33\%) are detected in only one frame (upper panels), and the rest exhibit a roughly linear correlation between duration and frame number, as shown in the lower panel of Fig.\ref{fg_duration}. The solid line in the scatter-plot marks the expected relationship between duration and frame number for an imaging cadence of 10 minutes. A point above the line means the instrument operated in a higher-cadence mode and the prominence is well tracked, while a point below the line implies that the prominence is missing in some frames. Most points are distributed around the solid line. We define a prominence as poorly-tracked when it matches one of the following two criteria: (1) it is detected in only one frame, or (2) it is missing in 2/3 or more of the expected frames (marked by the dashed line in the scatter-plot). Note that the radial speeds of prominences are no more than 160 km/s (see Fig.\ref{fg_cadence}), which implies that a prominence is expected to appear in at least 5 frames. Thus a prominence detected in only 2 frames is also treated as poorly-tracked. There are in total 9663 (50.5\%) poorly-tracked prominences during the period of interest. The rest we call well-tracked prominences.
These poorly-tracked prominences are generally small and their top portions (or leading edges) lie low. This is revealed in Figure \ref{fg_compare}. The average value of the maximum areas of poorly-tracked prominences is 976 Mm$^2$, about 3 times smaller than that of well-tracked ones; the average value of the maximum lengths of poorly-tracked prominences is about 60 Mm, nearly half that of well-tracked ones; and the average top position of poorly-tracked prominences is about 21 Mm lower than that of well-tracked ones. As early as 1932, \citet{Pettit_1932} had concluded that prominences are usually about 60 Mm long or more, 10 Mm thick and 50 Mm high. These numbers are close to what we obtain here using modern data. By checking the movies in the catalog, we find that such poor tracking probably results from the features being marginal (close to the detection thresholds), contamination by nearby non-prominence features, and, of course, the limitations of the detection algorithm. The parameters of these prominences may not be extracted correctly, and therefore they are excluded from our statistical analysis in Sec.\ref{sec_statistics}.
\begin{figure*}[tbhp]
\centering
\includegraphics[width=0.49\hsize]{f06a.eps}
\includegraphics[width=0.49\hsize]{f06b.eps}
\includegraphics[width=0.98\hsize]{f06c.eps}
\caption{({\it Upper panels}) Histograms of the duration and the number of frames. The average values are marked in the plots.
({\it Lower panel}) Relationship between the duration and frames.
The solid line indicates the expected relationship at a cadence of
10 minutes.}\label{fg_duration}
\end{figure*}
\begin{figure*}[tbh]
\centering
\includegraphics[width=0.32\hsize]{f07a.eps}
\includegraphics[width=0.32\hsize]{f07b.eps}
\includegraphics[width=0.32\hsize]{f07c.eps}
\caption{Histograms of the maximum area ({\it left panel}), characteristic length ({\it middle panel}) and top position ({\it right panel}) for the well-tracked (solid line) and poorly-tracked (dotted line) prominences. The average values (red for well-tracked and blue for poorly-tracked) are marked in the plots.}\label{fg_compare}
\end{figure*}
\begin{figure}[ptbh]
\centering
\includegraphics[width=0.8\hsize]{f08.eps}
\caption{Histograms of the latitude for long ($\geq80$ hours, dotted line) and short ($<80$ hours, solid line) duration prominences, respectively. The average values (red for short-duration and blue for long-duration) are marked in the plots.}\label{fg_latitude}
\end{figure}
\begin{figure}[ptbh]
\centering
\includegraphics[width=0.8\hsize]{f09.eps}
\caption{Distribution of daily counts of limb prominences.}\label{fg_daily_counts}
\end{figure}
The upper-left panel of Figure \ref{fg_duration} suggests that some prominences may appear for up to 260 hours, which seems too long to be possible. Due to solar rotation, a prominence at a height of about 56 Mm (the average value of the top position) above the equator can stay visible for no more than 80 hours in theory; a simple estimate of this limit is sketched below. With increasing latitude, this duration may increase. If the prominence extends along the longitudinal direction, the duration can be even longer. Figure \ref{fg_latitude} shows that the prominences with long durations generally appear at high latitudes, where quiescent or polar crown prominences are usually present. Thus, it is possible to have such long-duration prominences.
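The geometric estimate is the following: a feature at height $h$ stays above the limb over a rotation angle of $2\arccos[R_\sun/(R_\sun+h)]$. Evaluating this (the rotation period below is an approximate low-latitude synodic value, our assumption):
\begin{verbatim}
# Visibility limit of a limb feature at height h above the surface.
import numpy as np

R = 696.0            # Mm, solar radius
h = 56.0             # Mm, average top position of prominences
T_syn = 27.3 * 24.0  # hours, approximate synodic rotation period

angle = 2.0 * np.degrees(np.arccos(R / (R + h)))
print(f"visible over {angle:.1f} deg -> "
      f"{angle / 360.0 * T_syn:.0f} hours")   # ~80 hours
\end{verbatim}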
Figure \ref{fg_daily_counts} shows the daily counts of the well-tracked prominences. On most days, about 14 prominences are detected. However, in extreme cases, there may be as many as 32 or as few as zero prominences a day. By checking the H$\alpha$ images from BBSO, it is found that there are more filaments during solar maximum than during solar minimum. During 2007--2009, the extreme solar minimum, there are usually only a few filaments in an H$\alpha$ image, which seems inconsistent with our results. To make sure that the prominences identified by SLIPCAT are not non-prominence features, we compare the EUVI 304~\AA\ images with an H$\alpha$ image, as shown in Figure \ref{fg_ha_slipcat}. The date of these images is 2009 October 7, which was arbitrarily chosen; on this day, the STEREO-B spacecraft was 58 degrees behind the Earth. Thus the west limb in the STEREO-B/EUVI 304~\AA\ image corresponds to about 33 degrees west of the central meridian in the H$\alpha$ image. One can find that the western hemisphere in the H$\alpha$ image is largely free of features except for three prominences standing at high latitude (denoted by arrows). Actually, there is a prominence at low latitude, which can be clearly seen in the EUVI images (marked by a circle). The comparison demonstrates that there are probably some small prominences visible in the EUV 304~\AA\ emission line but invisible in H$\alpha$. This is also supported by many observational facts that solar filaments are generally more extended in EUV lines than in H$\alpha$ \citep[e.g.,][]{Heinzel_etal_2001}. Thus, it can be concluded that SLIPCAT is sensitive enough to recognize prominences, even those invisible in H$\alpha$.
\begin{figure*}[tbh]
\centering
\includegraphics[width=\hsize]{f10.eps}
\caption{Comparison of the EUV 304~\AA\ prominences viewed by STEREO with H$\alpha$ filaments viewed from the Earth.
From the left to right, they are original EUV 304~\AA\ image,
rescaled image, rescaled image with selected prominences, and
H$\alpha$ image from BBSO.}\label{fg_ha_slipcat}
\end{figure*}
\begin{figure*}[ptbh]
\centering
\includegraphics[width=\hsize]{f11.eps}
\caption{An erupting prominence on 2008 November 13, which split
into three parts during the eruption. One escaped from the Sun, one
remained on the Sun, and the other erupted but returned back
to the Sun.}\label{fg_20081113_prom}
\end{figure*}
Further, the accuracy of the parameters listed in the catalog should be addressed. First of all, it should be noted that the values of these parameters give us information about prominences only during the period in which they are detected, not necessarily throughout their whole lifetime. Second, some parameters suffer from the projection effect; e.g., the value of the area depends on the viewing angle. Third, some parameters, e.g., velocity and the change rates of area and brightness, are automatically derived by fitting data points. Thus their accuracy depends on the number of frames, the fitting function and the complexity of the prominence. As an example, Figure \ref{fg_20081113_prom} presents an erupting prominence observed on 2008 November 13. This prominence rose from the south-east limb and partially erupted. During its eruption, the prominence split into 3 major parts (see Fig.\ref{fg_20081113_prom}d): one escaped from the Sun, one remained on the Sun, and the other erupted but later returned to the Sun. SLIPCAT tracks the whole eruption process. The left panel of Figure \ref{fg_20081113_kin} displays the height-time profile of the leading edge of the prominence. The solid line is a quadratic fit through the data points and the dashed line is a linear fit. Due to the splitting of the prominence, the fitting curves obviously do not reflect reality. The correct treatment is to study the evolution of the split parts separately. The right panel shows the height-time profile of the escaping part. However, the fitting result for that part is still not satisfactory. The time around 11:00 UT is a critical point, before which the escaping part was slowly rising, and after which it experienced a fast acceleration and erupted quickly. Thus a two-stage linear fit would be more appropriate than a pure linear or quadratic fit. However, we still choose linear and quadratic functions to fit all the detected prominences, because this treatment can easily be automated. One should keep in mind that the fitting results give only a coarse estimate of the average speed (or change rates of area and brightness) and acceleration.
\begin{figure*}[ptbh]
\centering
\includegraphics[width=0.495\hsize]{f12a.eps}
\includegraphics[width=0.495\hsize]{f12b.eps}
\caption{Height-time profiles of the leading edge of the prominence presented in Fig.\ref{fg_20081113_prom}.
The left panel is for the whole prominence system, while the right panel is for the escaping part only. The solid line is a quadratic fit through the data points and the
dashed line is a linear fit.}\label{fg_20081113_kin}
\end{figure*}
\section{Preliminary Statistical Results}\label{sec_statistics}
SLIPCAT now has a complete dataset for the STEREO-B data. A catalog of prominences seen from STEREO-A will be generated soon. Here we present some statistical results on the STEREO-B prominences. Since the extracted parameters are probably not very accurate, as discussed in the last section, the results obtained here are preliminary. From a statistical point of view, however, these results should be significant. In the following analysis, all the poorly-tracked prominences are removed. Such prominences are generally extremely small and stay at low altitude, as discussed in Sec.\ref{sec_performance}; thus the statistical results might suffer from a selection bias, and one should treat this as a statistics of moderate and major prominences. Moreover, we include prominences of all confidence levels; there are only 257 (2.7\%) prominences with a confidence level of 2 or 3, which represents a rather small fraction of the database.
\subsection{Static Parameters}
\begin{figure*}[ptbh]
\centering
\includegraphics[width=0.49\hsize]{f13a.eps}
\includegraphics[width=0.49\hsize]{f13b.eps}
\includegraphics[width=0.49\hsize]{f13c.eps}
\includegraphics[width=0.49\hsize]{f13d.eps}
\caption{Histograms of static parameters: ({\it upper-left panel}) heliographic latitude at the first appearance,
mean values of ({\it upper-right panel}) height of centroid, ({\it lower-left panel}) area and
({\it lower-right panel}) brightness. The average values of these distribution are marked in the plots.}\label{fg_hist_static}
\end{figure*}
First of all, the static parameters of the 9477 well-tracked prominences are investigated. These parameters are (1) the heliographic latitude of the centroid of a prominence at its first detection, (2) the mean value of the height of the centroid, (3) the mean value of the area, and (4) the mean value of the brightness of a prominence. The distribution of heliographic latitudes (folded at the equator) shown in the upper-left panel of Figure \ref{fg_hist_static} suggests that 99\% of prominences appear below 60 degrees. One may suspect that the lower count at high latitude is caused by the perception that a prominence at high latitude is usually a quiescent or polar crown prominence and generally long-lived and extended. However, the scatter-plot of latitude versus duration in the upper-left panel of Figure \ref{fg_stat_lat} indicates that this perception is wrong, as it clearly shows that the durations of the prominences above 60 degrees are short. The long-duration prominences appear at around 30 to 60 degrees, which probably implies that long extended prominences arise there. A similar result can also be found in Figure \ref{fg_latitude}, though there we include the poorly-tracked prominences. Moreover, from the rescaled EUV 304~\AA\ images, one can clearly find that the regions above 60 degrees are generally occupied by polar coronal holes. By checking the catalog and movies, we find that the detected `prominences' above 60 degrees are usually polar jets. Since their number is small, the statistical results obtained below are not affected by including these `false' prominences.
The distribution of the heights of centroids suggests that about 82\% of prominences stay at around 26 Mm above the solar surface. There is no obvious dependence of the height on the latitude as shown by the scatter-plot in Figure \ref{fg_stat_lat}. The previous statistical study by \citet{Ananthakrishnan_1961} suggested that the heights of prominences vary between 15 and 150 Mm. \citet{Pettit_1932} gave a value of about 50 Mm, and \citet{Kim_etal_1988} showed that there is a peak at about 28 Mm in the distribution of heights. Theoretical work suggests that the height of a prominence depends on the gradient of ambient coronal magnetic field strength with height \citep[e.g.,][]{Filippov_Den_2000}. Our result is consistent with these studies.
\begin{figure*}[tbh]
\centering
\includegraphics[width=0.49\hsize]{f14a.eps}
\includegraphics[width=0.49\hsize]{f14b.eps}
\includegraphics[width=0.49\hsize]{f14c.eps}
\includegraphics[width=0.49\hsize]{f14d.eps}
\caption{Correlations of heliographic latitude with ({\it upper-left panel}) duration,
mean values of ({\it upper-right panel}) the height of centroid,
({\it lower-left panel}) area, and ({\it lower-right}) brightness.}\label{fg_stat_lat}
\end{figure*}
The area and brightness can usually be used to evaluate whether a prominence is a major one or not. Similarly, we use their mean values to show the distributions. The average projected area on the plane of the sky of all prominences is about 1072 Mm$^2$. Nearly 60\% of prominences have an area smaller than the average value. Figure \ref{fg_stat_lat} suggests that the area is unrelated to the latitude. The brightness is recorded as digital numbers (DN) by the CCD camera. Its distribution is close to Gaussian, with an average value of around 275 DN. It can be as low as 20 DN or as high as 960 DN. The scatter-plot in Figure \ref{fg_stat_lat} shows a weak dependence of the brightness on the latitude: low-latitude prominences may reach a higher brightness than middle- to high-latitude prominences.
\subsection{Dynamic Parameters}
For dynamic properties, we investigate the velocities (both radial and azimuthal) and the change rates of area and brightness of prominences. Here we use the leading edge rather than the centroid in the analysis of the radial speed. One could also use the radial speed of the centroid, but it would bring a large error in the case that a prominence splits into two parts, one erupting and the other staying on the Sun. The histograms in Figure \ref{fg_hist_dynamic} show the radial and azimuthal speeds. An azimuthal speed of 10 arcsec/s (in heliocentric angle) corresponds to about 33 km/s at the solar surface. More than 80\% of prominences have no obvious motion in either the radial or the azimuthal direction, and zero is the most probable speed (as shown in the insets). A few prominences may move upward at more than 100 km/s, and there are also 37 ($\approx0.4$\%) prominences having a radial speed $<-20$ km/s or an azimuthal speed $>10$ arcsec/s. The former can be easily understood, as an erupting prominence may have a large outward speed, but the cause of the latter is not obvious.
\begin{figure*}[tbh]
\centering
\includegraphics[width=0.49\hsize]{f15a.eps}
\includegraphics[width=0.49\hsize]{f15b.eps}
\includegraphics[width=0.49\hsize]{f15c.eps}
\includegraphics[width=0.49\hsize]{f15d.eps}
\caption{Histograms of dynamic parameters: ({\it upper-left panel}) radial speed of leading edge,
({\it upper-right panel}) azimuthal speed of centroid, ({\it lower-left panel}) change rate of area and
({\it lower-right panel}) brightness. The averaged values are marked in the plots.}\label{fg_hist_dynamic}
\end{figure*}
We have mentioned in Sec.\ref{sec_performance} that the quality or precision of the fitting results largely depends on the number of measurements. More measurements can efficiently reduce errors, and therefore make the fitting results more reliable. By checking the movies of the 37 prominences with unusual speeds, we find that 34 of them are detected in no more than 6 frames, and the speeds of most (though not all) of these 34 prominences are not correctly reflected by the fitting. However, since we have a large sample in the statistics, such a small fraction of corrupted events does not distort the overall picture shown in Figure \ref{fg_hist_dynamic}. On the other hand, there are indeed some prominences having a large downward or azimuthal speed. A large downward speed may either represent a real motion or simply result from the shrinkage of the prominence. Further analyses of such extreme events will be pursued in the future.
Similarly, for most prominences the change rates of area and brightness are quite small, although the average value of the change rate of area is about $1.35\times10^4$ Mm$^2$/s. There are 106 (1.1\%) prominences with an absolute change rate of area $>10^6$ Mm$^2$/s or of brightness $>0.25$ DN/s. The movies reveal that some prominences do change this fast.
\subsection{Fading of Prominences}\label{sec_fading}
It is well known that prominences generally become dimmer as they rise. The reason could be heating of the prominence material \citep[e.g.,][]{Mouradian_Martres_1986, Ofman_etal_1998, Hanaoka_Shinkawa_1999}, mass loss \citep[e.g.,][]{Rusin_Rybansky_1982} and/or expansion \citep[e.g.,][]{Bemporad_2009}. The first is a thermal process, while the other two are dynamic processes \citep[e.g.,][]{Mouradian_Martres_1986, Mouradian_etal_1995, Tandberg-Hanssen_1995}. Here we look into this issue in a statistical way. Among our parameters, we have information on the altitude, brightness and area of prominences, but no direct information about the heating or mass of prominences. Thus it is impossible to make a comprehensive study of the causes of prominence fading, but we can learn how significant the expansion factor is.
The first panel in Figure \ref{fg_stat_lbs} shows a strong anti-correlation between the brightness and the height of prominences: the higher the altitude, the dimmer the prominence. The diamond symbols with error bars indicate the average values of brightness at certain altitudes. These points are fitted to obtain the following empirical formula
\begin{eqnarray}
F=7.47\times10^{3}(h+1.29)^{-0.891} \mathrm{\ DN} \label{eq_d-b}
\end{eqnarray}
where $h\geq35$ Mm is the height above the solar surface in units of Mm. It describes the dependence of the brightness on the altitude. The points below 35 Mm are excluded from the fitting as they seem to follow another pattern.
The second panel exhibits an evident positive linear correlation between the area and the height, which means that a prominence at a higher altitude tends to be larger. This phenomenon supports the picture that, when a prominence rises or erupts, it expands as well. The expansion is probably caused by the weaker constraint of the ambient atmosphere at higher altitudes. Similarly, we fit the data points marked by the diamond symbols, and get
\begin{eqnarray}
A=64h-906 \mathrm{\ Mm}^2 \label{eq_d-s}
\end{eqnarray}
The opposite trends of Eq.\ref{eq_d-b} and \ref{eq_d-s} suggest that the expansion of prominences is one cause of their fading when they are rising or erupting.
\begin{figure}[tbh]
\centering
\includegraphics[width=0.49\hsize]{f16a.eps}
\includegraphics[width=0.49\hsize]{f16b.eps}
\includegraphics[width=0.49\hsize]{f16c.eps}
\caption{Scatter plots between minimum brightness, maximum area and maximum height
of leading edge.}\label{fg_stat_lbs}
\end{figure}
Combining the two equations (substituting $h=(A+906)/64$ from Eq.\ref{eq_d-s} into Eq.\ref{eq_d-b}), we derive the relationship between the brightness and the area (indicated by the dashed line in the last panel of Fig.\ref{fg_stat_lbs})
\begin{eqnarray}
F=3.04\times10^{5}(A+9.88\times10^2)^{-0.891} \mathrm{\ DN} \label{eq_b-s1}
\end{eqnarray}
where $A$ is in units of Mm$^2$. We can also directly fit the data in the last panel, which leads to
\begin{eqnarray}
F=3.96\times10^{4}(A+2.42\times10^3)^{-0.631} \mathrm{\ DN} \label{eq_b-s2}
\end{eqnarray}
as shown by the solid line. Approximately, the area is proportional to $V^{2/3}$, where $V$ is the volume of a prominence. If there is no mass loss or gain, the density of a prominence $\rho$ is inversely proportional to the volume, i.e., $\rho\propto V^{-1} \propto A^{-1.5}$. The brightness can serve as a proxy for the density of the plasma in the temperature window corresponding to the EUV 304~\AA\ emission line. If there is no heating or cooling, the function $F=c_0(A+c_1)^{-1.5}$ is thus expected to describe the relationship between the brightness and the area. By fitting the data points, we obtain the dotted line given by
\begin{eqnarray}
F=4.76\times10^{8}(A+1.66\times10^4)^{-1.5} \mathrm{\ DN} \label{eq_b-s3}
\end{eqnarray}
It is found that the solid and dashed lines are close to the dotted one, which implies that, from a statistical point of view, the expansion is probably one of the major causes of the fading of prominences during their rise or eruption. Of course, this statistical conclusion need not hold for all individual cases. As revealed by, e.g., \citet{Ofman_etal_1998} and \citet{Hanaoka_Shinkawa_1999}, heating may play an important role in the disappearance of some prominences.
\section{Summary}
We have developed an automated system for catching and tracking solar limb prominences in EUV 304~\AA\ images. The system, called SLIPCAT, is able to generate (1) a catalog of solar limb prominences and (2) characteristic parameters of each detected prominence, including the height, position angle, area, length, brightness and their first and second derivatives with respect to time. SLIPCAT is composed of five modules: (1) prominence candidate selection, (2) parameter extraction, (3) non-prominence feature removal, (4) prominence tracking, and (5) catalog generation. At present, an online catalog for STEREO-B/EUVI 304~\AA\ data has been generated (refer to \url{http://space.ustc.edu.cn/dreams/slipcat/}), and catalogs for STEREO-A/SECCHI/EUVI and SDO/AIA data are in preparation.
Based on the STEREO-B/EUVI 304~\AA\ data, SLIPCAT proved to perform well in detecting limb prominences.
\begin{enumerate}
\item It can distinguish real prominences from non-prominence features, e.g., active regions, without observations at other wavelengths, by using the technique of linear discriminant analysis. The overall classification has a success rate of about 86\%.
\item It detects as many as 9477 well-tracked prominences during 2007 April -- 2009 October, i.e., about 10 events per day on average. Compared to H$\alpha$ data, it is found that SLIPCAT is sensitive enough to recognize almost all prominences, even very small ones or those invisible in H$\alpha$ images.
\item Thanks to the high-cadence EUV 304~\AA\ data, SLIPCAT is able to provide the detailed evolution of prominences quantitatively without manual intervention. The upper-right panel of Figure \ref{fg_duration} implies that a well-tracked prominence is detected in at least 28 images on average; and the case in Figure \ref{fg_20081113_kin} shows that such high-cadence detection allows us to make a detailed analysis of its evolution, including its eruption, oscillation, etc.
\end{enumerate}
However, not all the parameters extracted by SLIPCAT precisely reveal the real behavior of prominences. The limitations have been addressed in the last paragraph of Sec.\ref{sec_performance}. In summary, (1) the parameters characterize the properties of prominences during the period they are detected, not over their whole lifetime, (2) they suffer from the projection effect, and (3) the speeds and change rates of area, length and brightness, which are derived from linear and quadratic fittings, may not be accurate.
By applying SLIPCAT to the STEREO-B/EUVI 304~\AA\ data from 2007 April to 2009 October, we obtain the following preliminary statistical results of solar limb prominences.
\begin{enumerate}
\item On average, there are about 10 prominences standing above the solar limb per day during the solar minimum. For most days, about 14 prominences are expected to be detected, and sometimes the number can be as large as 32 or as small as zero.
\item Most (99\%) prominences appear below a latitude of 60 degrees, and long extended prominences tend to arise between latitudes of 30 and 60 degrees.
\item Most (82\%) prominences have a height of about 26 Mm from the solar surface.
\item The projected area of a prominence on the plane of the sky is about 1072 Mm$^2$ on average, and nearly 60\% of prominences have a smaller area.
\item Most prominences are quite stable during the period they are detected; no obvious change in position, area or brightness can be found.
\item Particularly, more than 80\% of prominences do not show obvious motion in either radial or azimuthal direction. However, some prominences have an upward speed of more than 100 km/s, and a few prominences present a significant downward or azimuthal speed.
\item The brightness of prominences is anti-correlated with the height. The prominences at higher altitude look dimmer.
\item The area of prominences is positively correlated with the height. The prominences at higher altitude are generally larger.
\item From a statistical point of view, the expansion of prominences is probably one of the major causes of their fading during the rise or eruption.
\end{enumerate}
\paragraph{Acknowledgments.}
We acknowledge the use of the data from STEREO/SECCHI. This research is supported by grants from NSFC 40525014, 973 key project 2006CB806304, FANEDD 200530, and the fundamental research funds for the central universities.
\bibliographystyle{agufull}
\section{Introduction}
Lifelong learning considers systems that continually learn new tasks, from one or more domains, over the course of a lifetime. Lifelong learning is a large, open problem and is of great importance to the development of general purpose Artificially Intelligent (AI) agents. A formal definition of lifelong learning follows.\\
\begin{define}
\label{def:lifelong}
Lifelong Learning is the continued learning of tasks, from one or more domains, over the course of a lifetime, by a lifelong learning system. A lifelong learning system efficiently and effectively (1) retains the knowledge it has learned; (2) selectively transfers knowledge to learn new tasks; and (3) ensures the effective and efficient interaction between (1) and (2)\cite{Silver2013}.\\
\end{define}
A truly general lifelong learning system, shown in Figure \ref{fig:lls}, therefore has the following attributes:
(1) \textbf{Efficient retention of learned task knowledge:}
A lifelong learning system should minimize the retention of erroneous knowledge.
In addition, it should also be computationally efficient when storing knowledge in long-term memory.
(2) \textbf{Selective transfer:} A lifelong learning system needs the ability to choose relevant prior knowledge for solving new tasks, while casting aside irrelevant or obsolete information.
(3) \textbf{Systems approach:} Ensures the effective and efficient interaction of the retention and transfer elements.\\
\begin{figure}
\begin{center}
\includegraphics[width=0.46\textwidth]{minecraft.png}
\caption{A screenshot from \textbf{Minecraft}, a popular video game which poses a challenging lifelong learning problem.}
\label{fig:minecraft}
\end{center}
\end{figure}
Lifelong learning systems in real-world domains suffer from the curse of dimensionality. That is, as the state and action spaces increase, it becomes more and more difficult to model and solve new tasks as they are encountered. In addition, planning over potentially infinite time-horizons as well as efficiently retaining and reusing knowledge pose non-trivial challenges. A challenging, high-dimensional domain that incorporates many of the elements found in lifelong learning is Minecraft. Minecraft is a popular video game whose goal is to build structures, travel on adventures, hunt for food and avoid zombies. An example screenshot from the game is seen in Figure \ref{fig:minecraft}. Minecraft is an open research problem as it is impossible to solve the entire game using a single AI technique \cite{Smith2016,Oh2016}. Instead, the solution to Minecraft may lie in solving sub-problems, using a divide-and-conquer approach, and then providing a synergy between the various solutions. Once an agent learns to solve a sub-problem, it has acquired a \textit{skill} that can then be reused when a similar sub-problem is subsequently encountered. \\
\begin{figure}
\begin{center}
\includegraphics[width=0.47\textwidth]{lifelong.pdf}
\caption{\textbf{Lifelong Learning:} A lifelong learning system (1) efficiently retains knowledge and (2) selectively transfers knowledge to solve new tasks. Upon solving a task, the knowledge base is refined and new knowledge is added to the system. A systems approach ensures efficient and effective interaction between (1) and (2).}
\label{fig:lls}
\end{center}
\end{figure}
\vspace{1cm}
Many of the tasks that are encountered by an agent in a lifelong learning setting can be naturally decomposed into \textit{skill hierarchies} \cite{Stone2000,Stone2005,Bai2015}. In Minecraft for example, consider building a wooden house as seen in Figure \ref{fig:minecraft}. This task can be decomposed into sub-tasks (a.k.a.\ skills) such as chopping trees, sanding the wood, cutting the wood into boards and finally nailing the boards together. Here, the knowledge gained from chopping trees can also be partially \textit{reused} when cutting the wood into boards. In addition, if the agent receives a new task to build a small city, then the agent can reuse the \textit{skills} it acquired during the `building a house' task.\\
In a high-dimensional, lifelong learning setting such as Minecraft, learning skills and learning when to reuse them is non-trivial. This is key to efficient knowledge retention and transfer, increased exploration, efficiently solving tasks and, ultimately, advancing the capabilities of the Minecraft agent. \\
Reinforcement Learning (RL) provides a generalized approach to skill learning through the options framework \cite{Sutton1999}. Options are Temporally Extended Actions (TEAs) and are also referred to as skills \cite{daSilva2012} and macro-actions \cite{Hauskrecht1998}. Options have been shown both theoretically \cite{Precup1997,Sutton1999} and experimentally \cite{Mann2013,Mann2014b} to speed up the convergence rate of RL algorithms. From here on in, we will refer to options as skills.\\
\vspace{1cm}
In order to learn reusable skills in a lifelong learning setting, the framework needs to be able to (1) learn skills, (2) learn a controller which determines when a skill should be used and \textit{reused} and (3) be
able to efficiently accumulate reusable skills.
There are recent works that perform skill learning \cite{Mankowitz2016a,Mankowitz2016b,Mnih2016,Bacon2015}, but these works have focused on learning good skills and have not explicitly shown the ability to reuse skills, nor to scale with respect to the number of skills in lifelong learning domains.\\
With the emergence of Deep RL, specifically Deep Q-Networks (DQNs), RL agents are now equipped with a powerful non-linear function approximator that can learn rich and complex policies (or skills). Using these networks the agent learns policies (or skills) from raw image pixels, requiring less domain-specific knowledge to solve complicated tasks (e.g., Atari video games). While different variations of the DQN algorithm exist \cite{Van2015,schaul2015prioritized,wang2015dueling,bellemare2015increasing}, we will refer to the vanilla version unless otherwise stated. There are deep learning approaches that perform sub-goal learning \cite{Rusu2016,Kulkarni2016}, yet these approaches rely on providing the task or sub-goal to the agent prior to making a decision. \citeauthor{Kulkarni2016} (\citeyear{Kulkarni2016}) also rely on manually constructing sub-goals a-priori for tasks and utilize intrinsic motivation, which may be problematic for complicated problems where designing good intrinsic motivations is unclear and non-trivial. \\
In our paper, we present our novel lifelong learning system called the Hierarchical Deep Reinforcement Learning (RL) Network (H-DRLN) architecture shown in Figure \ref{fig:hdrln} (It is defined formally in the Hierarchical Deep RL Network Section). While we do not claim to provide an end-to-end solution, the H-DRLN contains all the basic building blocks of a truly general lifelong learning framework (see the Related Work Section for an in-depth overview). The H-DRLN controller learns to solve complicated tasks in Minecraft by learning reusable RL skills in the form of pre-trained Deep Skill Networks (DSNs). Knowledge is \textbf{retained} by incorporating reusable skills into the H-DRLN via a Deep Skill module. There are two types of Deep Skill Modules: (1) a DSN array (Figure \ref{fig:hdrln}, Module $A$) and (2) a \textit{multi-skill distillation} network (Figure \ref{fig:hdrln}, Module $B$), our novel variation of policy distillation \cite{Rusu2015} applied to learning skills. Multi-skill distillation enables the H-DRLN to \textit{efficiently} retain knowledge and therefore scale in lifelong learning, by encapsulating multiple reusable skills into a single distilled network. When solving a new task, the H-DRLN \textbf{selectively transfers} knowledge in the form of temporal abstractions (skills) to solve the given task. By taking advantage of temporally extended actions (skills), the H-DRLN learns to solve tasks with lower sample complexity and superior performance compared to vanilla DQNs. \\
\textbf{Main Contributions:} (1) A novel Hierarchical Deep Reinforcement Learning Network (H-DRLN) architecture which includes an H-DRLN controller and a Deep Skill Module. The H-DRLN contains all the basic building blocks for a truly general lifelong learning framework. (2) We show the potential to learn \textit{reusable} Deep Skill Networks (DSNs) and perform knowledge transfer of the learned DSNs to new tasks to obtain an optimal solution. We also show the potential to transfer knowledge between related tasks without any additional learning. (3) We efficiently retain knowledge in the H-DRLN by performing skill distillation, our variation of policy distillation, for learning skills and incorporate it into the Deep Skill Model to solve complicated tasks in Minecraft. (4) Empirical results for learning an H-DRLN in sub-domains of Minecraft with a DSN array and a distilled skill network. We also verify the improved convergence guarantees for utilizing reusable DSNs (a.k.a options) within the H-DRLN, compared to the vanilla DQN.
\section{Previous Research on Lifelong Learning in RL}
\label{sec:related_work}
Designing a truly general lifelong learning agent is a challenging task. Previous works on lifelong learning in RL have focused on solving specific elements of the general lifelong learning system as shown in Table \ref{tab:lifelong}. \\
According to Definition \ref{def:lifelong}, a lifelong learning agent should be able to \textbf{efficiently retain knowledge}. This is typically done by sharing a representation among tasks, using distillation \cite{Rusu2015} or a latent basis \cite{Ammar2014}. The agent should also learn to \textbf{selectively use} its past knowledge to solve new tasks efficiently. Most works have focused on a \textit{spatial transfer} mechanism, i.e., they suggested learning differentiable weights from a shared representation to the new tasks \cite{2016arXiv161105397J,Rusu2016}. In contrast, \citeauthor{Brunskill2014} (\citeyear{Brunskill2014}) suggested a \textit{temporal transfer} mechanism, which identifies an optimal set of skills in past tasks and then learns to use these skills in new tasks. Finally, the agent should have a \textbf{systems approach} that allows it to efficiently retain the knowledge of \textit{multiple tasks} as well as an efficient mechanism to \textit{transfer} knowledge for solving new tasks.\\
Our work incorporates all of the basic building blocks necessary to performing lifelong learning. As per the lifelong learning definition, we efficiently transfer knowledge from previous tasks to solve a new target task by utilizing RL skills \cite{Sutton1999}. We show that skills reduce the sample complexity in a complex Minecraft environment and suggest an efficient mechanism to retain the knowledge of multiple skills that is scalable with the number of skills.
\begin{table}
\scalebox{0.6}{
\begin{tabular}{|c|c|c|c|c|c|c|c|}
\hline
& \textbf{Works} & H-DRLN & Ammar & Brunskill & Rusu & Rusu & Jaderberg \tabularnewline
& & (this work) & et. al & and Li & & et. al. & et. al.\tabularnewline
\textbf{Attribute} & & & (2014) & (2014) & (2015) & (2016) & (2016)\tabularnewline
\hline
& Memory & & & & & & \tabularnewline
& efficient & \checkmark & \checkmark & \ding{55} & \checkmark & \ding{55} & \ding{55}\tabularnewline
\textbf{Knowledge} & architecture & & & & & & \tabularnewline
\cline{2-8}
\textbf{Retention} & Scalable to & & & & & & \tabularnewline
& high & \checkmark & \ding{55} & \ding{55} & \checkmark & \checkmark & \checkmark\tabularnewline
& dimensions & & & & & & \tabularnewline
\hline
& Temporal & & & & & & \tabularnewline
& abstraction & \checkmark & \ding{55} & \checkmark & \ding{55} & \ding{55} & \ding{55}\tabularnewline
\textbf{Selective} & transfer & & & & & & \tabularnewline
\cline{2-8}
\textbf{Transfer} & Spatial & & & & & & \tabularnewline
& abstraction & \ding{55} & \checkmark & \ding{55} & \checkmark & \checkmark & \checkmark\tabularnewline
& transfer & & & & & & \tabularnewline
\hline
& & & & & & & \tabularnewline
& Multi-task & \checkmark & \checkmark & \checkmark & \checkmark & \ding{55} & \checkmark\tabularnewline
\textbf{Systems} & & & & & & & \tabularnewline
\cline{2-8}
\textbf{Approach} & & & & & & & \tabularnewline
& Transfer & \checkmark & \checkmark & \checkmark & \ding{55} & \checkmark & \checkmark\tabularnewline
& & & & & & & \tabularnewline
\hline
\end{tabular}
}
\caption{\textbf{Previous works} on lifelong learning in RL.}
\label{tab:lifelong}
\end{table}
\section{Background}
\label{sec:background}
\textbf{Reinforcement Learning:} The goal of an RL agent is to maximize its expected return by learning a policy $\pi:S \rightarrow \Delta_A$, a mapping from states $s \in S$ to a probability distribution over the actions $A$. At time $t$ the agent observes a state $s_t \in S$, selects an action $a_t \in A$, and receives a bounded reward $r_t \in [0, R_{\max}]$, where $R_{\max}$ is the maximum attainable reward and $\gamma\in[0,1]$ is the discount factor. Following the agent's action choice, it transitions to the next state $s_{t+1} \in S$. We consider infinite horizon problems where the cumulative return at time $t$ is given by $R_t = \sum_{t'=t}^\infty \gamma^{t'-t}r_{t'}$. The action-value function $Q^{\pi}(s,a) = \mathbb{E} [R_t|s_t = s, a_t = a, \pi]$ represents the expected return after observing state $s$ and taking action $a$ under
a policy $\pi$. The optimal action-value function obeys a fundamental recursion known as the Bellman equation:
\begin{equation*}
Q^* (s_t,a_t)=\mathbb{E}
\left[r_t+\gamma \underset{a'}{\mathrm{max}}Q^*(s_{t+1},a')
\right] \enspace .
\end{equation*}
\textbf{Deep Q Networks:} The DQN algorithm \cite{Mnih2015} approximates the optimal Q function with a Convolutional Neural Network (CNN) \cite{Krizhevsky2012}, by optimizing the network weights such that the expected Temporal Difference (TD) error of the optimal bellman equation (Equation \ref{DQN_loss}) is minimized:
\begin{equation}
\label{DQN_loss}
\mathbb{E}_{s_t,a_t,r_t,s_{t+1}}\left\Vert Q_{\theta}\left(s_{t},a_{t}\right)-y_{t}\right\Vert _{2}^{2} \enspace ,
\end{equation}
where
\begin{equation*}
y_{t}=
\begin{cases}
r_{t} & \mbox{if } s_{t+1} \mbox{ is terminal}\\
r_{t}+\gamma\underset{\mbox{\mbox{\ensuremath{a}'}}}{\mbox{max}}Q_{\theta_{target}}\left(s_{t+1},a^{'}\right) & \mbox{otherwise}
\end{cases}
\end{equation*}
Notice that this is an off-policy learning algorithm, meaning that the tuples $\left\{ s_{t},a_{t},r_{t},s_{t+1},\gamma\right\}$ are collected from the agent's experience and are stored in the \textbf{Experience Replay (ER)} \cite{lin1993reinforcement}. The ER is a buffer that stores the agent's experiences at each time-step $t$, for the purpose of ultimately training the DQN parameters to minimize the loss function. When we apply minibatch training updates, we sample tuples of experience at random from the pool of stored samples in the ER. The DQN maintains two separate Q-networks: the current Q-network with parameters $\theta$, and the target Q-network with parameters $\theta_{target}$. The parameters $\theta_{target}$ are set to $\theta$ every fixed number of iterations. In order to capture the game dynamics, the DQN represents the state by a sequence of image frames.\\
\textbf{Double DQN \cite{Van2015}: } Double DQN (DDQN) prevents overly optimistic estimates of the value function. This is achieved by performing action selection with the current network $\theta$ and evaluating the action with the target network $\theta_{target}$, yielding the DDQN target update $y_t=r_t$ if $s_{t+1}$ is terminal, otherwise $y_t = r_t + \gamma Q_{\theta_{target}}\big(s_{t+1}, \arg\max_{a} Q_{\theta}(s_{t+1}, a)\big)$. DDQN is utilized in this paper to improve learning performance.\\
\textbf{Skills, Options, Macro-actions \cite{Sutton1999}:} A skill $\sigma$ is a temporally extended control structure defined by a triple $\sigma = <I,\pi,\beta>$ where I is the set of states where the skill can be initiated, $\pi$ is the intra-skill policy, which determines how the skill behaves in encountered states, and $\beta$ is the set of termination probabilities determining when a skill will stop executing. The parameter $\beta$ is typically a function of state $s$ or time $t$.\\
\textbf{Semi-Markov Decision Process (SMDP):} Planning with skills can be performed using SMDP theory. More formally, an SMDP can be defined by a five-tuple $<S, \Sigma, P, R, \gamma>$, where $S$ is a set of states, $\Sigma$ is a set of skills, and $P$ is the transition probability kernel. We assume rewards received at each timestep are bounded by $[0, R_{\max}]$. $R:S \times \Sigma \rightarrow [0,\frac{R_{\max}}{1-\gamma}]$ represents the expected discounted sum of rewards received during the execution of a skill $\sigma$ initialized from a state $s$. The solution to an SMDP is a skill policy $\mu$.\\
\textbf{Skill Policy:} A skill policy $\mu : S \rightarrow \Delta_\Sigma$ is a mapping from states to a probability distribution over skills $\Sigma$. The action-value function $Q_\mu : S \times \Sigma \rightarrow \mathbb{R}$ represents the long-term value of taking a skill $\sigma \in \Sigma$ from a state $s \in S$ and thereafter always selecting skills according to policy $\mu$, and is defined by $Q_\mu(s, \sigma) = \mathbb{E} [\sum ^\infty _{t=0} \gamma ^t R_t |(s, \sigma), \mu] $.
We denote the skill reward as $R_s^{\sigma} = \mathbb{E}[r_{t+1} + \gamma r_{t+2} + \cdot\cdot\cdot + \gamma ^{k-1} r_{t+k} | s_t=s,\sigma]$ and transition probability as $P_{s,s'}^{\sigma} = \sum_{j=0}^\infty \gamma^j Pr[k=j,s_{t+j}=s'|s_t=s,\sigma]$. Under these definitions the optimal skill value function is given by the following equation \cite{stolle2002learning}:
\begin{equation}
\label{OptionBellman}
Q_{\Sigma}^*(s,\sigma) = \mathbb{E} [R_s^{\sigma} + \gamma ^k \underset{\sigma'\in \Sigma}{\mathrm{max}} Q_{\Sigma}^*(s',\sigma')] \enspace .
\end{equation}
\textbf{Policy Distillation \cite{Rusu2015}: }
Distillation \cite{hinton2015distilling} is a method to transfer knowledge from a teacher model $T$ to a student model $S$. This process is typically done via supervised learning. For example, when both the teacher and the student are separate deep neural networks, the student network is trained to predict the teacher's output layer (which acts as labels for the student). Different objective functions have been previously proposed. In this paper we input the teacher output into a softmax function and train the distilled network using the Mean-Squared-Error (MSE) loss: $\mbox{cost}(s)=\Vert \mbox{Softmax}_\tau (Q_{T}(s)) - Q_{S}(s)\Vert^2$ where $Q_{T}(s)$ and $Q_{S}(s)$ are the action values of the teacher and student networks respectively and $\tau$ is the softmax temperature. During training, this cost function is differentiated with respect to the student network weights.
Policy distillation can be used to transfer knowledge from $N$ teachers $T_i, i=1,\cdots N$ into a single student (multi-task policy distillation). This is typically done by switching between the $N$ teachers every fixed number of iterations during the training process. When the student is learning from multiple teachers (i.e., multiple policies), a separate student output layer is assigned to each teacher $T_i$, and is trained for each task, while the other layers are shared.
\section{Hierarchical Deep RL Network}
In this Section, we present an in-depth description of the H-DRLN (Figure \ref{fig:hdrln}); a new architecture that extends the DQN and facilitates skill reuse in lifelong learning. Skills are incorporated into the H-DRLN via a Deep Skill Module that can incorporate either a DSN array or a distilled multi-skill network.\\
\textbf{Deep Skill Module: }
The pre-learned skills are represented as deep networks and are referred to as Deep Skill Networks (DSNs). They are trained a-priori on various sub-tasks using our version of the DQN algorithm and the regular Experience Replay (ER), as detailed in the Background Section. Note that the DQN is one choice of architecture and, in principle, other suitable networks may be used in its place. The Deep Skill Module represents a set of $N$ DSNs. Given an input state $s \in S$ and a skill index $i$, it outputs an action $a$ according to the corresponding DSN policy $\pi_{DSN_{i}}$. We propose two different Deep Skill Module architectures: (1) The DSN Array (Figure \ref{fig:hdrln}, module $A$): an array of pre-trained DSNs where each DSN is represented by a separate DQN. (2) The Distilled Multi-Skill Network (Figure \ref{fig:hdrln}, module $B$), a single deep network that represents multiple DSNs. Here, the different DSNs share all of the hidden layers while a separate output layer is trained for each DSN via policy distillation \cite{Rusu2015}. The distilled multi-skill network allows us to incorporate multiple skills into a single network, making our architecture scalable to lifelong learning with respect to the number of skills.\\
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.45\textwidth]{hdlrn_arch_final.pdf}
\caption{\textbf{The H-DRLN architecture:} It has outputs that correspond to primitive actions ($a_1,a_2,...,a_m$) and DSNs ($DSN_1, DSN_2, ... ,DSN_n$). The Deep Skill Module (bottom) represents a set of skills. It receives an input and a skill index and outputs an action according to the corresponding skill policy. The architecture of the deep skill module can be either a DSN array or a Distilled Multi-Skill Network. }
\label{fig:hdrln}
\end{center}
\end{figure}
\textbf{H-DRLN architecture: } A diagram of the H-DRLN architecture is presented in Figure \ref{fig:hdrln} (top). Here, the outputs of the H-DRLN consist of primitive actions as well as skills. The H-DRLN learns a policy that determines when to execute primitive actions and when to \textbf{reuse} pre-learned skills. If the H-DRLN chooses to execute a primitive action $a_t$ at time $t$, then the action is executed for a single timestep. However, if the H-DRLN chooses to execute a skill $\sigma_i$ (and therefore DSN $i$ as shown in Figure \ref{fig:hdrln}), then DSN $i$ executes its policy, $\pi_{DSN_{i}}(s)$ until it terminates and then gives control back to the H-DRLN. This gives rise to two necessary modifications that we needed to make in order to incorporate skills into the learning procedure and generate a truly hierarchical deep network: (1) Optimize an objective function that incorporates skills; (2) Construct an ER that stores skill experiences. \\
\textbf{Skill Objective Function:} As mentioned previously, an H-DRLN extends the vanilla DQN architecture to learn control between primitive actions and skills. The H-DRLN loss function has the same structure as Equation~\ref{DQN_loss}; however, instead of minimizing the standard Bellman equation, we minimize the Skill Bellman equation (Equation~\ref{OptionBellman}). More specifically, for a skill $\sigma_t$ initiated in state $s_t$ at time $t$ that has executed for a duration $k$, the H-DRLN target function is given by:
\begin{equation}\nonumber
\resizebox{0.45\textwidth}{!}{$
y_{t}=
\begin{cases}
\sum_{j=0}^{k-1} \left[ \gamma^j r_{j+t} \right] & \mbox{if } s_{t+k} \mbox{ terminal}\\
\sum_{j=0}^{k-1} \left[ \gamma^j r_{j+t} \right]+
\gamma^k\underset{\mbox{\mbox{\ensuremath{\sigma}'}}}{\mbox{max}}Q_{\theta_{target}}\left(s_{t+k},\sigma^{'}\right) & \mbox{else}
\end{cases}
$}
\end{equation}
This is the first work to incorporate an SMDP cost function into a deep RL setting.\\
\textbf{Skill - Experience Replay: } We extend the regular ER \cite{lin1993reinforcement} to incorporate skills and term this the Skill Experience Replay (S-ER).
There are two differences between the standard ER and our S-ER. First, for each sampled skill tuple, we calculate the sum of discounted cumulative rewards, $\tilde{r}$, generated whilst executing the skill. Second, since the skill is executed for $k$ timesteps, we store the transition to state $s_{t+k}$ rather than $s_{t+1}$. This yields the skill tuple $(s_t,\sigma_t, \tilde{r}_t, s_{t+k})$, where $\sigma_t$ is the skill executed at time $t$.
\section{Experiments}
\label{sec:experiments}
To solve new tasks as they are encountered in a lifelong learning scenario, the agent needs to be able to adapt to new game dynamics and learn when to \textit{reuse} skills that it has learned from solving previous tasks. In our experiments, we show (1) the ability of the Minecraft agent to learn DSNs on sub-domains of Minecraft (shown in Figure \ref{fig:domains_dsn}$a-d$). (2) The ability of the agent to reuse a DSN from navigation domain 1 (Figure \ref{fig:domains_dsn}$a$) to solve a new and more complex task, termed the \textit{two-room} domain (Figure \ref{fig:domains_complex}$a$). (3) The potential to transfer knowledge between related tasks without any additional learning. (4) We demonstrate the ability of the agent to reuse multiple DSNs to solve the \textit{complex-domain} (Figure \ref{fig:domains_complex}$b$). (5) We use two different Deep Skill Modules and demonstrate that our architecture scales for lifelong learning.\\
\textbf{State space} - As in \citeauthor{Mnih2015} (\citeyear{Mnih2015}), the state space is represented as raw image pixels from the last four image frames which are combined and down-sampled into an $84 \times 84$ pixel image. \textbf{Actions} - The primitive action space for the DSN consists of six actions: (1) Move forward, (2) Rotate left by $30^{\circ}$, (3) Rotate right by $30^{\circ}$, (4) Break a block, (5) Pick up an item and (6) Place it. \textbf{Rewards} - In all domains, the agent gets a small negative reward signal after each step and a non-negative reward upon reaching the \textbf{final} goal (See Figure \ref{fig:domains_dsn} and Figure \ref{fig:domains_complex} for the different domain goals). \\
\textbf{Training} - Episode lengths are $30, 60$ and $100$ steps for single DSNs, the two-room domain and the complex domain, respectively. The agent is initialized in a random location in each DSN domain and in the first room for the two-room and complex domains. \textbf{Evaluation} - The agent is evaluated during training using the current learned architecture every 20k (5k) optimization steps (a single epoch) for the DSNs (two-room and complex domains). During evaluation, we averaged the agent's performance over 500 (1000) steps, respectively. \textbf{Success percentage:} The $\%$ of successful task completions during evaluation.\\
\begin{figure}[h]
\begin{center}
\includegraphics[width=0.40\textwidth]{alldomains_dsn.png}
\caption{\textbf{The domains:} ($a$)-($d$) are screenshots for each of the domains we used to train the DSNs.}
\label{fig:domains_dsn}
\end{center}
\end{figure}
\subsection{Training a DSN}
Our first experiment involved training DSNs in sub-domains of Minecraft (Figure \ref{fig:domains_dsn}$a-d$), including two navigation domains, a pickup domain and a placement domain, respectively. The break domain is the same as the placement domain, except that it ends with the break action. Each of these domains comes with different learning challenges. The Navigation $1$ domain is built with identical walls, which provides a significant learning challenge since there are visual ambiguities with respect to the agent's location (see Figure \ref{fig:domains_dsn}$a$). The Navigation $2$ domain provides a different learning challenge since there are obstacles that occlude the agent's view of the exit from different regions in the room (Figure \ref{fig:domains_dsn}$b$). The pickup (Figure \ref{fig:domains_dsn}$c$), break and placement (Figure \ref{fig:domains_dsn}$d$) domains require navigating to a specific location and ending with the execution of a primitive action (Pickup, Break or Place, respectively). \\
In order to train the different DSNs, we used the vanilla DQN architecture \cite{Mnih2015} and performed a grid search to find the optimal hyper-parameters for learning DSNs in Minecraft. The best parameter settings that we found include: (1) a higher learning ratio (iterations between emulator states, \textit{n-replay} = 16), (2) a higher learning rate (\textit{learning rate} = 0.0025) and (3) less exploration (\textit{eps{\_}endt} = 400K). We implemented these modifications since the standard Minecraft emulator has a slow frame rate (approximately $400$ ms per emulator timestep), and they enabled the agent to learn more between game states. We also found that a smaller experience replay (\textit{replay{\_}memory} = 100K) provided improved performance, probably because our tasks have a relatively short time horizon (approximately $30$ timesteps). The rest of the parameters from the vanilla DQN remained unchanged. After tuning the hyper-parameters, all the DSNs managed to solve the corresponding sub-domains with almost $100\%$ success, as shown in Table \ref{table:distilled} (see supplementary material for learning curves).
\subsection{Training an H-DRLN with a DSN}
In this experiment, we train the H-DRLN agent to solve a complex task, the two-room domain, by reusing a single DSN (pre-trained on the navigation $1$ domain). \\
\textbf{Two room Domain: } This domain consists of two-rooms (Figure \ref{fig:domains_complex}$a(iii)$). The first room is shown in Figure \ref{fig:domains_complex}$a(i)$ with its corresponding exit (Figure \ref{fig:domains_complex}$a(ii)$). Note that the exit of the first room is not identical to the exit of the navigation $1$ domain (Figure \ref{fig:domains_dsn}$a$). The second room contains a goal (Figure \ref{fig:domains_complex}$a (iii)$) that is the same as the goal of the navigation $1$ domain (Figure \ref{fig:domains_dsn}$a$). The agent's available action set consists of the primitive movement actions and the Navigate $1$ DSN. \\
\begin{figure}
\begin{center}
\includegraphics[width=0.47\textwidth]{alldomains_complex.png}
\caption{\textbf{Composite domains:} ($a$) The two-room domain and ($b$) the complex domain with three different tasks, ($i$) navigation, ($ii$) pickup and ($iii$) placement}
\label{fig:domains_complex}
\end{center}
\end{figure}
\textbf{Skill Reusability/Knowledge Transfer: } We trained the H-DRLN architecture as well as the vanilla DQN on the two-room domain. We made two important observations.
\textbf{(1)} The H-DRLN architecture solves the task after a single epoch and generates a significantly higher reward compared to the vanilla DQN. This is because the H-DRLN makes use of knowledge transfer by \textit{reusing} the DSN trained on the one-room domain to solve the two-room domain. This DSN is able to identify the exit of the first room (which is different from the exit on which the DSN was trained) and navigates the agent to this exit. The DSN is also able to navigate the agent to the exit of the second room and complete the task. The DSN is a temporally extended action, as it lasts for multiple time steps, and therefore increases the exploration of the RL agent, enabling it to learn to solve the task faster than the vanilla DQN. \textbf{(2)} After $39$ epochs, the vanilla DQN completes the task with a $50\%$ success rate. This sub-optimal performance is due to wall ambiguities, causing the agent to get stuck in sub-optimal local minima. After the same number of epochs, the agent completes the task using the H-DRLN with $76\%$ success.\\
\begin{figure}[b]
\begin{center}
\includegraphics[width=0.35\textwidth]{bar_success_2r.pdf}
\caption{Two room domain \textbf{success percentages} for the vanilla DQN, the single DSN, the H-DRLN after a single epoch (START) and in the last epoch (END).}
\label{fig:bars}
\end{center}
\end{figure}
\textbf{Knowledge Transfer without Learning:} We then evaluated the DSN (which we trained on the navigation $1$ domain) in the two-room domain \textbf{without} performing any additional learning on this network. Surprisingly, the DSN, without any training on the two-room domain, generated a higher reward compared to the vanilla DQN, which was specifically trained on the two-room domain for $39$ epochs. Figure~\ref{fig:bars} summarizes the success percentage comparison between the different architectures in the two-room domain. The vanilla DQN, DSN, H-DRLN START and H-DRLN END had average success percentages of $50\%, 67.65\%, 73.08\%$ and $76\%$, respectively. The DSN performance is sub-optimal compared to the H-DRLN architecture, but it still manages to solve the two-room domain. This is an exciting result, as it shows the potential for DSNs to identify and solve related tasks without performing any additional learning.\\
\subsection{Training an H-DRLN with a Deep Skill Module}
In this section, we discuss our results for training and utilizing the H-DRLN with a Deep Skill Module to solve the complex Minecraft domain. In each of the experiments in this section, we utilized DDQN to train the H-DRLN and the DDQN baseline unless otherwise stated.\\
\textbf{Complex Minecraft Domain:}
This domain (Figure \ref{fig:domains_complex}$b$) consists of three rooms. Within each room, the agent is required to perform a specific task. Room $1$ (Figure \ref{fig:domains_complex}$b(i)$) is a navigation task, where the agent needs to navigate around the obstacles to reach the exit. Room $2$ (Figure \ref{fig:domains_complex}$b(ii)$) contains two tasks: (1) a pickup task, whereby the agent is required to navigate to and collect a block in the center of the room; and (2) a break task, where the agent needs to navigate to the exit and break a door. Finally, Room $3$ (Figure \ref{fig:domains_complex}$b(iii)$) is a placement task, whereby the agent needs to place the block that it collected in the goal location. The agent receives a non-negative reward if it successfully navigates through room $1$, collects the block and breaks the door in room $2$, and places the block in the goal location in room $3$ (arrow path in Figure \ref{fig:domains_complex}$b$). Otherwise, the agent receives a small negative reward at each timestep. Note that the agent needs to complete three separate tasks before receiving a sparse, non-negative reward. The agent's available action set consists of the original primitive actions as well as the DSNs: (1) Navigate $2$, (2) Pickup, (3) Break and (4) Placement.\\
\textbf{Training and Distilling Multiple DSNs:}
As mentioned in the H-DRLN Section, there are two ways to incorporate skills into the Deep Skill Module: (1) DSN Array and (2) Multi-Skill Distillation. For both the DSN array and multi-skill distillation, we utilize four pre-trained DSNs (Navigate $2$, Pickup, Break and Placement).
These DSNs collectively form the DSN array. For the multi-skill distillation, we utilized the pre-trained DSNs as teachers and distilled these skills directly into a single network (the student) using the distillation setup shown in Figure \ref{fig:distillation_fig}, as described in the Background Section. Once trained, we tested the distilled network separately in each of the three individual rooms (Figure \ref{fig:domains_dsn}$b-d$). The performance for each room is shown in Table \ref{table:distilled} for temperatures $\tau=0.1$ and $\tau=1$. The high success percentages indicate that the agent is able to successfully complete each task using a single distilled network. In contrast to policy distillation, our novelty lies in the ability not only to distill skills into a single network, but also to learn a control rule (using the H-DRLN) that switches between the skills to solve a given task.
\begin{table}[h]
\centering
\begin{tabular}{| c | c | c | c |}
\hline
Domain & $\tau = 0.1$ & $\tau = 1$ & Original DSN \\ \hline
Navigation & 81.5 & 78.0 & 94.6 \\ \hline
Pick Up & 99.6 & 83.3 & 100\\ \hline
Break & 78.5 & 73.0 & 100\\ \hline
Placement & 78.5 & 73.0 & 100\\ \hline
\end{tabular}
\caption{The \textbf{success $\%$} performance of the distilled multi-skill network on each of the four tasks (Figures \ref{fig:domains_dsn}$b-d$). }
\label{table:distilled}
\end{table}
\begin{figure}[h!]
\begin{center}
\includegraphics[width=0.40\textwidth]{distillation}
\caption{Multi-skill distillation.}
\label{fig:distillation_fig}
\end{center}
\end{figure}
\textbf{Training the H-DRLN:}
We now show results for training (1) the H-DRLN with a DSN array, (2) the H-DRLN with DDQN and a DSN array and (3) the H-DRLN with DDQN and a distilled multi-skill network (with $\tau=0.1$), compared to (4) a DDQN baseline. The learning curves can be seen in Figure \ref{fig:learningcurves}. We performed these trials 5 times for each architecture and measured success rates of $85\pm10\%$, $91\pm4\%$ and $94\pm4\%$ ($mean\% \pm std$) for the H-DRLN, the H-DRLN with DDQN, and the H-DRLN with DDQN and a distilled multi-skill network, respectively. To calculate these values we averaged the success percentages over the final $10$ epochs. Note that the distilled H-DRLN has a higher average success rate and both H-DRLNs with DDQN have lower variance. The DDQN was unable to solve the task, due to a combination of wall ambiguities (as in the two-room domain) and a need for more training time. The H-DRLN is able to overcome the ambiguities and also learns to reuse skills. We also trained the DDQN with intrinsic rewards, which enabled it to solve the task; however, this required a significantly larger amount of training time compared to the H-DRLN, and the result was therefore omitted.\\
\begin{figure}
\begin{center}
\includegraphics[width=0.47\textwidth]{comparison}
\caption{The success $\%$ \textbf{learning curves} for the (1) H-DRLN with a DSN array (blue), (2) H-DRLN with DDQN and a DSN array (orange), and (3) H-DRLN with DDQN and multi-skill distillation (black). This is compared with (4) the DDQN baseline (yellow).}
\label{fig:learningcurves}
\end{center}
\end{figure}
\textbf{Skill usage:}
Figure \ref{fig:optionselection} presents the usage $\%$ of skills by the H-DRLN agent during training. We can see that around training epoch $50$, the agent starts to use skills more frequently (black curve). As a result, the H-DRLN agent's performance is significantly improved, as can be seen by the increase in reward (yellow curve). After epoch $93$, the agent's skill usage decreases with time as it needs to utilize more primitive actions. This observation makes sense, since planning only with skills will yield a sub-optimal policy if the skills themselves are sub-optimal. However, planning with both primitive actions and skills always guarantees convergence to an optimal policy (utilizing only primitive actions in the worst case) \cite{Mann2013}. In our case, the skills that were trained on the one-room domains helped the agent to learn in the complex domain but were sub-optimal due to small changes between the one-room domains and the complex domain. Thus, the agent learned to refine its policy by using primitive actions. To conclude, Figures \ref{fig:learningcurves} and \ref{fig:optionselection} tell us that, while skills are used approximately $20\%$ of the time by the final H-DRLN policy, they have a significant impact on accelerating the agent's learning capabilities.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=0.47\textwidth]{usage}
\caption{\textbf{Skill usage} $\%$ in the complex domain during training (black). The primitive actions usage $\%$ (blue) and the total reward (yellow) are displayed for reference.}
\label{fig:optionselection}
\end{center}
\end{figure}
\section{Discussion}
We presented our novel Hierarchical Deep RL Network (H-DRLN) architecture. This architecture contains all of the basic building blocks for a truly general lifelong learning framework: (1) Efficient knowledge retention via multi-skill distillation; (2) Selective transfer using temporal abstractions (skills); (3) Ensuring interaction between (1) and (2) with the H-DRLN controller. We see this work as a building block towards truly general lifelong learning using hierarchical RL and Deep Networks.\\
We have also provided the first results for learning Deep Skill Networks (DSNs) in Minecraft, a lifelong learning domain. The DSNs are learned using a Minecraft-specific variation of the DQN algorithm \cite{Mnih2015}. Our Minecraft agent also learns how to reuse these DSNs on new tasks via the H-DRLN. We incorporate multiple skills into the H-DRLN using (1) the DSN array and (2) the scalable distilled multi-skill network, our novel variation of policy distillation.\\
In addition, we show that the H-DRLN provides superior learning performance and faster convergence compared to the DDQN, by making use of skills. Our work can also be interpreted as a form of curriculum learning \cite{bengio2009curriculum} for RL. Here, we first train the network to solve relatively simple sub-tasks and then use the knowledge it obtained to solve the composite overall task. We also show the potential to perform knowledge transfer between related tasks without any additional learning. This architecture also has the potential to be utilized in other 3D domains such as Doom \cite{Kempka2016} and Labyrinth \cite{Mnih2016asynchronous}.\\
Recently, it has been shown that Deep Networks tend to implicitly capture the hierarchical composition of a given task \cite{Zahavy2016}. In future work, we plan to utilize this implicit hierarchical composition to learn DSNs. In addition, we aim to (1) learn the skills online whilst the agent is learning to solve the task. This could be achieved by training the teacher networks (DSNs), whilst simultaneously guiding learning in the student network (our H-DRLN); (2) Perform online refinement of the previously learned skills; (3) Train the agent in real-world Minecraft domains.
\section*{Acknowledgement}
This research was supported in part by the European Community’s Seventh Framework Programme (FP7/2007-2013) under grant agreement 306638 (SUPREL) and the Intel Collaborative Research Institute for Computational Intelligence (ICRI-CI).
\bibliographystyle{aaai}
\usepackage{subfigure}
\usepackage{amsthm}
\usepackage{amssymb}
\usepackage{amsmath}
\usepackage{graphicx}
\usepackage{xcolor}
\newcommand{\triangleq}{\triangleq}
\newcommand{\mathbb{E}}{\mathbb{E}}
\newcommand{\arabic{algorithm}}{\arabic{algorithm}}
\theoremstyle{definition}
\newtheorem{algorithm}{Algorithm}[section]
\newtheorem{conjecture}{Conjecture}[section]
\newtheorem{theorem}{Theorem}[section]
\newtheorem{assumption}[theorem]{Assumption}
\newtheorem{proposition}[theorem]{Proposition}
\newtheorem{definition}[theorem]{Definition}
\newtheorem{lemma}[theorem]{Lemma}
\newtheorem{corollary}[theorem]{Corollary}
\theoremstyle{remark}
\newtheorem{claim}{Claim}[section]
\newtheorem{example}{Example}[section]
\newtheorem{remark}{Remark}[section]
\begin{document}
\twocolumn[
\aistatstitle{Landing Probabilities of Random Walks for Seed-Set Expansion in Hypergraphs}
\aistatsauthor{ Eli Chien \And Pan Li \And Olgica Milenkovic }
\aistatsaddress{ Department ECE, UIUC \And Department CS, Purdue \And Department ECE, UIUC } ]
\begin{abstract}
We describe the first known mean-field study of landing probabilities for random walks on hypergraphs. In particular, we examine clique-expansion and tensor methods and evaluate their mean-field characteristics over a class of random hypergraph models for the purpose of seed-set community expansion. We describe parameter regimes in which the two methods outperform each other and propose a hybrid expansion method that uses partial clique-expansion to reduce the projection distortion and low-complexity tensor methods applied directly on the partially expanded hypergraphs. \footnote{Eli Chien and Pan Li contribute equally to this work. A short version of this paper appears in ITW 2021.}
\end{abstract}
\vspace{-0.1cm}
\section{Introduction}
Random walks on graphs are Markov random processes in which, given a starting vertex, one moves to a randomly selected neighbor and then repeats the procedure starting from the newly selected vertex~\cite{lovasz1993random}. Random walks are used in many graph-based learning algorithms such as PageRank~\cite{page1999pagerank} and Label Propagation~\cite{zhu2002learning}, and they have found a variety of applications in local community detection~\cite{andersen2006local,gleich2012vertex}, information retrieval~\cite{page1999pagerank} and semi-supervised learning~\cite{zhu2002learning}.
Random walks are also frequently used to characterize the topological structure of graphs via the hitting time of a vertex from a seed, the commute time between two vertices~\cite{von2014hitting} and the mixing time which also characterizes global graph connectivity~\cite{aldous1995reversible}. Recently, a new measure of vertex connectivity and similarity, termed a landing probability (LP), was introduced in~\cite{kloumann2017block}. The LP of a vertex is the probability of a random walk ending at the vertex after making a certain number of steps. Different linear combinations of LPs give rise to different forms of PageRanks (PRs), such as the standard PR~\cite{page1999pagerank} and the heat-kernel PR~\cite{chung2007heat}, both used for various graph clustering tasks. In particular, Kloumann et al.~\cite{kloumann2017block} also initiated the analysis of PRs based on LPs for seed-based community detection. Under the assumption of a generative stochastic block model (SBM)~\cite{holland1983stochastic} with two blocks, the authors of~\cite{kloumann2017block} proved that the empirical average of LPs within the seed community concentrates around a deterministic centroid. Similarly, the empirical averages of LPs outside the seed community also concentrate around another deterministic centroid. These deterministic centroids are the mean-field counterparts of the empirical averages. Kloumann et al.~\cite{kloumann2017block} also showed that the difference of the centroids decays geometrically with a rate that depends on the number of random walk steps and the SBM parameters. The above result implies that the standard PR is optimal for seed-set community detection from the perspective of marginal maximization, provided that only the first-order moments are available.
On the other hand, random walks on hypergraphs (RWoHs) have received significantly less attention in the literature, despite the fact that hyperedges more accurately capture higher-order relations between entities when compared to edges. Most of the work on hypergraph clustering has focused on subspace clustering~\cite{agarwal2005beyond}, network motif clustering~\cite{benson2016higher,li2017motif}, ranking data categorization~\cite{li2017inhomogeneous} and heterogeneous network analysis~\cite{yang2018meta}. Random walks on hypergraphs are mostly used indirectly, by replacing hyperedges with cliques, merging the cliques and then exploring random walks on standard graphs~\cite{zhou2007learning,chitra2019random}. We refer to this class of approaches as \emph{clique-expansion random walks on hypergraphs} (clique-expansion RWoHs); they were successfully used for community detection in~\cite{yin2017local}. However, it is well-known that clique-expansion can cause significant distortion in the clustering process~\cite{hein2013total,li2018quadratic,chien2018hs}. This motivated parallel studies on higher-order methods which work directly on the hypergraph. Higher-order Markov chain random walk methods were described in~\cite{wu2016general} and shown to have excellent empirical performance; for simplicity, we henceforth refer to this class of walks as \emph{tensor RWoHs}.
In a different direction, the authors of~\cite{chan2018spectral} defined RWoHs based on a non-linear Laplacian operator whose spectrum carries information about the conductance of a hypergraph. The method of~\cite{chan2018spectral} can also be used to address a number of semi-supervised learning problems on higher-order
data structures by solving convex optimization problems~\cite{li2018quadratic}. Using non-linear Laplacians requires highly non-trivial analytical techniques, and conductance is often not the only relevant performance metric for clustering and community detection.
Furthermore, convex optimization formulations often obscure our theoretical understanding of the underlying problem.
The focus of this work is on providing the first known characterization of LPs for RWoHs, determining various trade-offs between clique-expansion and tensor RWoHs for the task of seed-set community expansion and proposing means for combining the two methods when appropriate. We adopt a methodology similar to the one used in~\cite{kloumann2017block} for classical graphs: The hypergraphs are assumed to be generated according to a well-studied hypergraph stochastic block model (hSBM)~\cite{ghoshdastidar2017consistency,chien2018community,ahn2018hypergraph,chien2019minimax} and seed-expansion is performed via mean-field analysis of LPs of random walks of different lengths. Our contributions are as follows:
\begin{itemize}
\vspace{-0.2cm}
\item We derive asymptotic results which show that the empirical centroids of LPs concentrate around their mean-field counterparts.
\vspace{-0.1cm}
\item We prove that LPs of clique-expansion RWoHs behave similarly as the LPs of random walks on graphs. More precisely, the difference between the empirical centroids of LPs within and outside the seed community decays geometrically with the number of steps in the random walks.
\vspace{-0.1cm}
\item We show that the LPs of tensor RWoHs behave differently than those corresponding to clique-expanded graphs when the size
of the hyperedges is large: If the hyperedge density within a cluster is at least twice as large as that across clusters, the difference between the empirical centroids of LPs within and outside the seed community converges to a constant dependent on the model parameters. Otherwise, the difference decreases geometrically with the length of the random walk. Consequently, tensor RWoHs exhibit a phase transition phenomenon.
\item As explained in~\cite{kloumann2017block}, combining information about both the first and second moments of the LPs leads to a method that on the SBM performs as well as belief propagation, which is optimal. We combine this method with the LPs of clique-expansion RWoHs and tensor RWoHs and show that these two methods have different regimes in which they exhibit good performance; as expected, the regimes depend on the parameter settings of the hSBM. This is due to the fact that the LPs of tensor RWoHs have a larger centroid distance, while the LPs of clique-expansion RWoHs have a smaller (empirical) variance.
\vspace{-0.1cm}
\item We propose a novel hypergraph random walk technique that combines partial clique-expansion with tensor methods. The goal of this method is to simultaneously avoid large distortion introduced by clique-expansion and reduce the complexity of tensor methods by reducing the size of the hyperedges. The method builds upon the theoretical analysis of the means of LPs and \emph{empirical} evidence regarding the variance of the LPs and hence extends the work in~\cite{li2019optimizing}. A direct analysis including the variance of the LPs of the tensor method appears challenging.
\vspace{-0.1cm}
\item The analysis of tensor RWoHs proved to be difficult as it essentially requires tracking a large number of states in a standard high-order Markov chain. To mitigate this problem and make our analysis tractable, we introduce a novel state reduction strategy which significantly decreases the dimensionality of the problem. This technical contribution may be of independent interest in various tensor analysis problems.
\end{itemize}
The paper is organized as follows. In Section 2, we introduce the relevant notation and formally define the clique-expansion and tensor RWoHs. The same section explains the relationship between LPs and PR methods, and the importance of LPs for seed-set expansion community detection. In Section 3, we introduce the relevant hypergraph SBM, termed $d$-hSBM, and the ideas behind seed-set expansion and the mean-field LPs approach. Theoretical properties of the LPs for clique-expansion and tensor RWoHs are described in Sections 3.2 and 3.3, respectively. In Section 4,
we present the mean-field analysis for tensor RWoHs while the same analysis for clique-expansion RWoHs is deferred to the Supplement. We show how to leverage the information provided by the first and second moment of LPs for seed-set expansion in Section 5. Section 6 contains simulation results on synthetic datasets.
\vspace{-0.1cm}
\section{Preliminaries}
\vspace{-0.1cm}
\vspace{-0.1cm}
\subsection{Random walks on hypergraphs} \label{sec:RWoH}
\vspace{-0.1cm}
A hypergraph is an ordered pair of sets $G(V,E)$, where $V=\{v_1,v_2,\ldots,v_n\}$ is the set of vertices while $E$ is the set of hyperedges.
Each hyperedge $e\in E$ is a subset of $V$, i.e., $e \subseteq V$. Unlike an edge in a graph, a hyperedge $e$ may contain more than two vertices.
If $\forall \, e\in E$ one has $|e| \leq d$, the hypergraph $G$ is termed $d$-bounded. A $d$-bounded hypergraph can be represented by a $d$-dimensional supersymmetric tensor $\mathbf{A}$ such that $A_{v_1,...,v_d} = 1$ if $e = \{v_1,...,v_d\}\in E$, and $A_{v_1,...,v_d} = 0$ otherwise, for all $v_1,\ldots,v_d \in V$. Note that we consider the case where hyperedges may contain repeated vertices (i.e., they are multisets). It is straightforward to extend our analysis to the case where hyperedges cannot contain repeated vertices (i.e., sets), albeit the analysis becomes more tedious. Henceforth, we assume $G$ to be $d$-bounded with constant $d$, a model justified by numerous practical applications such as subspace clustering~\cite{agarwal2005beyond}, network motif clustering~\cite{li2017inhomogeneous} and natural language processing~\cite{wu2016general}.
We focus on two known forms of RWoHs.
\textbf{Clique-Expansion RWoHs} is a random walk based on representing the hypergraph via a ``projected'' weighted graph~\cite{chitra2019random,zhou2007learning}: Every hyperedge of $G(V,E)$ is replaced by a clique, resulting in an undirected weighted graph $G^{(ce)}$. The derived weighted graph $G^{(ce)}$ has the same vertex set as the original hypergraph, denoted by $V^{(ce)}=V$. The edge set $E^{(ce)}$ is the union of all the edges in the cliques, with the weight of each $e\in E^{(ce)}$ set to $|\{e'\in E: e\subseteq e'\}|$. The weighted adjacency matrix of $G^{(ce)}$, $\mathbf{A}^{(ce)}$, may be written as $A^{(ce)}_{v_{d-1},v_d} = \sum_{v_1,\ldots,v_{d-2}\in V}A_{v_1,...,v_{d}}$.
Let $y_{ce}^{(0)}\in [0,1]^{|V|}$ be the initial state vector describing which vertices may be used as the origins or seeds of the random walk and with what probability.
The $(k+1)$-th step random walk state vector equals
\begin{align}\label{eq:rw_CE}
y_{ce}^{(k+1)} =y_{ce}^{(k)}\mathbf{A}^{(ce)},
\end{align}
while the $k$-step LP of a vertex $v$ in the clique-expansion framework is defined as
$$x_{v;ce}^{(k)} = y_{v;ce}^{(k)}/\|y_{ce}^{(k)}\|_1.$$
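As a concrete illustration, the following Python sketch (our own; the function names are ours and \texttt{numpy} is assumed) builds $\mathbf{A}^{(ce)}$ from a hyperedge list and iterates~\eqref{eq:rw_CE} to obtain the $k$-step LPs:
\begin{verbatim}
import numpy as np
from itertools import combinations

def clique_expansion_adjacency(n, hyperedges):
    # Weight of a pair {u, v}: number of hyperedges containing both,
    # i.e., |{e' in E : {u, v} subset of e'}|.
    A_ce = np.zeros((n, n))
    for e in hyperedges:
        for u, v in combinations(sorted(set(e)), 2):
            A_ce[u, v] += 1.0
            A_ce[v, u] += 1.0
    return A_ce

def ce_landing_probabilities(A_ce, seed, k):
    # Iterate y^{(k+1)} = y^{(k)} A^{(ce)}, then normalize.
    y = np.zeros(A_ce.shape[0])
    y[seed] = 1.0
    for _ in range(k):
        y = y @ A_ce
    return y / y.sum()
\end{verbatim}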
\textbf{Tensor RWoHs} are described by a tensor $\mathbf{A}$ corresponding to a Markov chain of order $d-1$~\cite{wu2016general}. Each step of the walk is determined by the previous $d-1$ states and we use $y^{(k)}_{v_1,...,v_{d-1};t}$ to denote the number of paths of length $k$ whose last $d-1$ visited vertices equal $v_1, v_2, ..., v_{d-1}$. The number of paths of length $k+1$ may be computed according to the following expression:
\begin{align}\label{eq:rw_HG}
y^{(k+1)}_{v_2,...,v_{d};t} = \sum_{v_1=1}^{n} A_{v_1,...,v_{d}} y^{(k)}_{v_1,...,v_{d-1};t}.
\end{align}
The $k$-step LP of a vertex $v$ may be defined similarly as that of clique-expansion RWoHs,
$$x_{v, t}^{(k)} = \sum_{v_1,...,v_{d-2}}y_{v_1,...,v_{d-2}, v;t}^{(k)}/\|y_t^{(k)}\|_1.$$
The complexity of computing a one-step LP in a tensor RWoHs equals $O(n^d)$, while the used storage space equals $O(n^{d-1})$. In contrast, computing the one-step LP of a clique-expansion RWoHs has complexity $O(n^2)$ and it requires storage space equal to $O(n)$. To mitigate the computational and storage issues associated with tensors, one may use tensor approximation methods~\cite{gleich2015multilinear,benson2017spacey}; unfortunately, it is not well-understood theoretically how these approximations perform on various learning tasks.
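For $d=3$, one step of the tensor walk is a single tensor contraction; the sketch below (ours, with \texttt{numpy} assumed) makes the $O(n^d)$ work and $O(n^{d-1})$ storage explicit:
\begin{verbatim}
import numpy as np

def tensor_lp_step(A, Y):
    # Order-2 Markov recursion for d = 3:
    # Y_new[v2, v3] = sum_{v1} A[v1, v2, v3] * Y[v1, v2].
    return np.einsum('abc,ab->bc', A, Y)   # O(n^3) work

def tensor_landing_probabilities(A, s1, s2, k):
    n = A.shape[0]
    Y = np.zeros((n, n))                    # O(n^2) storage
    Y[s1, s2] = 1.0                         # seed pair (s1, s2)
    for _ in range(k):
        Y = tensor_lp_step(A, Y)
    # x_v^{(k)} = sum_{v1} Y[v1, v] / ||Y||_1
    return Y.sum(axis=0) / Y.sum()
\end{verbatim}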
In what follows, whenever clear from the context, we omit the subscripts indicating if the method uses clique-expansion or tensors, and write $x_{v}^{(k)}$ for either of the two types of LPs.
\vspace{-0.1cm}
\subsection{Seed-set expansion based on LPs}
\vspace{-0.1cm}
Seed-set expansion is a clustering problem which aims to identify subsets of vertices around seeds that are densely connected among themselves~\cite{xie2013overlapping,gleich2012vertex,kloumann2017block}. Seed-set expansion may be seen as a special form of local community detection, and some recent works~\cite{chien2018community,chien2019minimax,ahn2018hypergraph,paul2018higher,angelini2015spectral,kim2018stochastic} have also addressed community detection in hypergraphs using approaches that range from information theory to statistical physics.
Seed-set expansion community detection algorithms operate as follows: One starts from a seed set within one community of interest and performs a random walk. Since vertices within the community are densely connected, the values of the LPs of vertices within the community are in general higher than those of vertices outside of the community. Consequently, thresholding properly combined LP values may allow for classifying vertices as being inside or outside of the community. Formally, each vertex $v$ in a hypergraph $G(V,E)$ is associated with a vector of LPs $(x_v^{(0)},x_v^{(1)},...)$ of all possible lengths. The generalized PageRank (GPR) of a vertex $v$ with respect to a pre-specified set of weights $(\gamma_k)_{k=0}^{\infty}$ is defined as $\sum_{k=0}^{\infty}\gamma_k x_v^{(k)}$. The GPRs of vertices are compared to a threshold to determine whether they belong to the community of interest. Consequently, GPRs lead to linear classifiers that use LPs as vertex features. The above described GPR formulation includes Personalized PR (PPR)~\cite{andersen2006local}, where $\gamma_k = (1-\alpha)\alpha^k$, and heat-kernel PR (HPR)~\cite{chung2007heat}, where $\gamma_k = e^{-h}h^k/k!$, for properly chosen $\alpha,\, h$.
An important question that arises in seed-set expansion is how to choose the weights of the GPR in order to ensure near-optimal or optimal classification~\cite{kloumann2017block}. To this end, start with a partition of $V$ into two communities $V_0,V_1$. Let $\mathbf{a} = (a^{(0)},a^{(1)},...)$ denote the arithmetic mean (centroid) of the LPs of vertices $v\in V_0$, $a^{(k)} \triangleq \frac{1}{|V_0|}\sum_{v\in V_0}x_v^{(k)}$, and let $\mathbf{b} = (b^{(0)},b^{(1)},...)$ denote the arithmetic mean (centroid) of the LPs of vertices $v\in V_1$, $b^{(k)} \triangleq \frac{1}{|V_1|}\sum_{v\in V_1}x_v^{(k)}$. If the only available information about the distribution of the LPs is $\mathbf{a}$ and $\mathbf{b}$, a discriminant with weights $\gamma_k = a^{(k)} - b^{(k)}$ is optimal since the decision boundary is orthogonal to the line that connects the centroids of the two communities. Kloumann et al.~\cite{kloumann2017block} observed that for community detection over graphs generated by standard SBMs~\cite{holland1983stochastic}, such a discriminant corresponds to PPR with an adequately chosen parameter $\alpha$.
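In code, the GPR pipeline amounts to a weighted sum of LP features followed by thresholding. The following Python sketch (ours; all names are hypothetical, \texttt{numpy} assumed) instantiates the PPR, HPR and geometric-discriminant weights for truncated walks of length $K$:
\begin{verbatim}
import math
import numpy as np

def gpr_scores(X, gamma):
    # X[v, k] = x_v^{(k)}, k = 0, ..., K-1; gamma = GPR weights.
    # Vertices whose score exceeds a threshold are declared in-community.
    return X @ gamma

def ppr_weights(alpha, K):
    return (1.0 - alpha) * alpha ** np.arange(K)

def hpr_weights(h, K):
    return np.array([math.exp(-h) * h**k / math.factorial(k)
                     for k in range(K)])

def geometric_weights(X, inside, outside):
    # gamma_k = a^{(k)} - b^{(k)}, estimated from labeled vertices.
    return X[inside].mean(axis=0) - X[outside].mean(axis=0)
\end{verbatim}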
In what follows we study the statistical properties of the centroids $a^{(k)}$ and $b^{(k)}$ of RWoHs, where the hypergraphs are generated by a hSBM. The main goal of the analysis is to characterize the centroid difference $a^{(k)} - b^{(k)}$ which guides the choice of the weights $\gamma_k$. Some results related to the variance of the landing probabilities and comparisons of the discriminative power of the two types of LPs will be presented as well.
\vspace{-0.1cm}
\section{Statistical characterization of LPs}
\vspace{-0.1cm}
We start by introducing the $d$-hSBM of interest. Afterwards, we outline the mean-field approach for our analysis and use the obtained results to determine the statistical properties of LPs of clique-expansion and tensor RWoHs. In particular, we provide new concentration results for the corresponding LPs.
For notational simplicity, we focus on symmetric hSBMs with two blocks only. More general models may be analyzed using similar techniques.
\begin{definition}[$d$-hSBM]
The $d$-hSBM$(n, p,q)$ is a $d$-bounded hypergraph $G(V,E)$ such that $\forall e\in E, |e| \leq d$ and $|V| = n$. The hypergraph has the following properties. Let $\sigma$ be a binary labeling function $\sigma: V \mapsto \{0, 1\}$, which induces a partition $V = V_0 \cup V_1$, where $V_i = \left\{v\in V : \sigma(v) = i\right\}$ for $i\in\{0,1\}$, with $|V_0| = \lceil n/2 \rceil$ and $|V_1| = \lfloor n/2 \rfloor$. The hypergraph $G(V,E)$ is uniquely represented by an adjacency tensor $\mathbf{A}$ of dimension $d$, where for all indices $v_1\leq ... \leq v_d \in V$, $A_{v_1,...,v_d}$ are i.i.d. Bernoulli random variables and $\mathbf{A}$ is symmetric.
\begin{equation*}
\mathbb{P}\left (A_{v_1,...,v_d} = 1\right ) =
\begin{cases}
p, & \text{if } \sigma(v_1) = ... = \sigma(v_d)\\
q, & \text{otherwise},
\end{cases}
\end{equation*}
where $0 < q <p \leq 1$. In our subsequent asymptotic analysis for which $n\rightarrow \infty$, we assume that $\frac{p}{q} = \Theta(1)$ is a constant. This captures the regime of parameter values for which the problem is challenging to solve.
\end{definition}
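For illustration, a brute-force sampler of the adjacency tensor of a $d$-hSBM$(n,p,q)$ may be written as follows (our own sketch; it runs in $O(n^d)$ time and is intended only for small instances):
\begin{verbatim}
import numpy as np
from itertools import combinations_with_replacement, permutations

def sample_hsbm(n, p, q, d=3, seed=0):
    # sigma(v) = 0 on the first ceil(n/2) vertices, 1 on the rest.
    rng = np.random.default_rng(seed)
    sigma = np.array([0] * (n - n // 2) + [1] * (n // 2))
    A = np.zeros((n,) * d)
    # One Bernoulli draw per multiset of d vertices, then symmetrize.
    for idx in combinations_with_replacement(range(n), d):
        prob = p if len(set(sigma[list(idx)])) == 1 else q
        val = float(rng.random() < prob)
        for perm in set(permutations(idx)):
            A[perm] = val
    return A, sigma
\end{verbatim}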
\vspace{-0.1cm}
\subsection{Mean-field LPs}
\vspace{-0.1cm}
Next we perform a mean-field analysis of our model in which the random hypergraph topology is replaced by its expected topology.
This results in $\mathbf{A}$ and the clique-expansion matrix $\mathbf{A}^{(ce)}$ being replaced by $\mathbb{E}\mathbf{A}$ and $\mathbb{E}\mathbf{A}^{(ce)},$ respectively.
The mean-field values of the LPs are defined as follows: For clique-expansion RWoHs, the mean-field counterpart of~\eqref{eq:rw_CE} equals
\begin{align}\label{eq:rw_CE_MF}
\bar{y}_{ce}^{(k+1)} =\bar{y}_{ce}^{(k)} \mathbb{E}\mathbf{A}^{(ce)},
\end{align}
and the corresponding mean-field of a $k$-step LP for vertex $v$ reads as $\bar{x}_{v;ce}^{(k)} = \bar{y}_{v;ce}^{(k)}/\|\bar{y}_{ce}^{(k)}\|_1$.
For tensor RWoHs, the mean-field counterpart of~\eqref{eq:rw_HG} equals
\begin{align}\label{eq:rw_HG_MF}
\bar{y}^{(k+1)}_{v_2,...,v_{d};t} = \sum_{v_1=1}^{n} \bar{y}^{(k)}_{v_1,...,v_{d-1};t} \mathbb{E} A_{v_1,...,v_{d}} .
\end{align}
The $k$-step LP of a vertex $v$ equals $\bar{x}_{v;t}^{(k)} = \sum_{v_1,...,v_{d-2}}\bar{y}_{v_1,...,v_{d-2}, v;t}^{(k)}/\|\bar{y}_t^{(k)}\|_1$.
For non-degenerate random variables of interest in our study, $\bar{x}^{(k)}_{v}\neq \mathbb{E} x^{(k)}_v$, but one can nevertheless show that the geometric centroids of the LPs $a^{(k)}$ and $b^{(k)}$ concentrate around their mean-field counterparts $\bar{a}^{(k)}$ and $\bar{b}^{(k)}$, respectively. This concentration result guarantees
consistency of our method.
\vspace{-0.1cm}
\subsection{Concentration results}\label{subsec:CE}
\vspace{-0.1cm}
The mean-field of the LPs for the $d$-hSBM$(n,p,q)$ model in the clique-expansion setting is described in the following theorem.
\begin{theorem}\label{MAINTHM:CE}
Let $G$ be sampled from a $d$-hSBM$(n,p,q)$ model and let $G^{(ce)}$ be the graph obtained from $G$ through clique-expansion.
Let the initial state vector of the RWoHs be $y^{(0)}_{s; ce} = 1$ and $y^{(0)}_{v;ce} = 0$ otherwise, where $s$ is a vertex chosen uniformly at random from $V_0$. Set
$\bar{y}^{(0)}_{ce} = \mathbb{E} y^{(0)}_{ce}$. Then for all $k\geq 0$ we have
\begin{align*}
\bar{x}_{v; ce}^{(k)} =
\begin{cases}
\bar{a}^{(k)} &\text{if }v\in V_0\\
\bar{b}^{(k)} &\text{if }v\in V_1
\end{cases},
\end{align*}
where $\bar{a},\bar{b}$ satisfy the following recurrence relation
\begin{align}
&\begin{bmatrix}
\bar{a}^{(k)}\\
\bar{b}^{(k)}
\end{bmatrix}
=
\begin{bmatrix}
\frac{p+(2^{d-2}-1)q}{p+(2^{d-1}-1)q} & \frac{2^{d-2}q}{p+(2^{d-1}-1)q}\\
\frac{2^{d-2}q}{p+(2^{d-1}-1)q} & \frac{p+(2^{d-2}-1)q}{p+(2^{d-1}-1)q}
\end{bmatrix}
\begin{bmatrix}
\bar{a}^{(k-1)}\\
\bar{b}^{(k-1)}
\end{bmatrix}
, \nonumber\\
&\begin{bmatrix}
\bar{a}^{(0)}\\
\bar{b}^{(0)}
\end{bmatrix}
= \frac{2}{n}
\begin{bmatrix}
1\\
0
\end{bmatrix}. \label{eq:CE1}
\end{align}
\end{theorem}
\begin{remark}
The eigenvalue decomposition leads to
\begin{align*}
\bar{a}^{(k)} - \bar{b}^{(k)} = \frac{2}{n}\left[\frac{p-q}{p+(2^{d-1}-1)q}\right]^{k}, \;\forall k\geq 0.
\end{align*}
This result reveals that the geometric discriminant under the $d$-hSBM$(n,p,q)$ is of the same form as that of PPR with parameter $\alpha = \frac{p-q}{p+(2^{d-1}-1)q}$. The result is also consistent with the finding for the special case $d = 2$ described in~\cite{kloumann2017block}.
\end{remark}
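As a sanity check, the recurrence~\eqref{eq:CE1} and the closed form above are straightforward to verify numerically. The snippet below (ours; \texttt{numpy} assumed) iterates the recurrence and asserts agreement with the closed form:
\begin{verbatim}
import numpy as np

def check_ce_centroid_gap(n, p, q, d, K):
    denom = p + (2**(d - 1) - 1) * q
    M = np.array([[p + (2**(d - 2) - 1) * q, 2**(d - 2) * q],
                  [2**(d - 2) * q, p + (2**(d - 2) - 1) * q]]) / denom
    ab = np.array([2.0 / n, 0.0])           # (a^{(0)}, b^{(0)})
    for k in range(K + 1):
        closed = (2.0 / n) * ((p - q) / denom) ** k
        assert np.isclose(ab[0] - ab[1], closed)
        ab = M @ ab                          # recurrence of Theorem 1
\end{verbatim}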
Next we show that the geometric centroids of LPs of clique-expansion RWoHs will asymptotically concentrate around their mean-field counterparts, which establishes consistency of the mean-field analysis.
\begin{lemma}\label{lma:concentrate}
Assume that $G$ is sampled from a $d$-hSBM$(n,p,q)$ model, for some constant $d\geq 3$. Let $x_{v;ce}^{(k)}$ be the LPs of a clique-expansion RWoHs on $G^{(ce)}$ satisfying~\eqref{eq:rw_CE}. Also assume that $\frac{n^{d-1}q^2}{\log n}\rightarrow \infty$. Then,
for any constant $\epsilon>0$, $n$ sufficiently large and a bounded constant $k\geq 0$, one has
\begin{align*}
& a^{(k)} \triangleq \frac{1}{|V_0|}\sum_{v\in V_0}x_{v;ce}^{(k)} \in [(1-\epsilon)\bar{a}^{(k)},(1+\epsilon)\bar{a}^{(k)}]\\
& b^{(k)} \triangleq \frac{1}{|V_1|}\sum_{v\in V_1}x_{v;ce}^{(k)} \in [(1-\epsilon)\bar{b}^{(k)},(1+\epsilon)\bar{b}^{(k)}],
\end{align*}
with probability at least $1-o(1)$.
\end{lemma}
The proofs of Theorem~\ref{MAINTHM:CE} and Lemma~\ref{lma:concentrate} are presented in Supplements~\ref{sec:CEhRW} and~\ref{app:pflma1}, respectively.
In the tensor setting, one can also determine the distance between the centroids of LPs based on a recurrence relation.
However, a direct application of this method requires tracking $2^{d-1}$ states in the recurrence which makes the analysis intractable.
To address this issue, we introduce a new state reduction technique which allows us to track only $d-1$ states. The key insight used in our proof is that our goal is to characterize the distance between the centroids instead of $\bar{y}$ itself, and that the distance changes are dictated by a significantly smaller state-space recurrence relation.
The state reduction technique also allows us to describe the centroid distance in closed form for $d\leq 5$, as it arises as the solution of a polynomial equation.
Moreover, for large $d$, we justify the use of a heuristic approximation for the centroid distance and verify its quality through extensive numerical simulations.
\begin{theorem}\label{thm:abapprox}
Let $G$ be sampled from a $d$-hSBM$(n,p,q)$ model with $d\geq 3$ and set the initial vector of the Tensor RWoHs to $y^{(0)}_{s_1,...,s_{d-1}; t} = 1$ and
$y^{(0)}_{v_1...,v_{d-1};t} = 0$ otherwise, where $s_1,...,s_{d-1}$ are chosen independently and uniformly at random from $V_0$.
Furthermore, let $\bar{y}^{(0)}_{t} = \mathbb{E} y^{(0)}_{t}$. Then
\begin{align*}
\bar{w}_k = \bar{a}^{(k)} - \bar{b}^{(k)} = \frac{2}{n}\frac{\beta_1(k)}{\zeta_1(k)},
\end{align*}
where $\beta_1(k)$ and $\zeta_1(k)$ satisfy the following recurrence relations:
\small
\begin{equation}\label{thmeq:beta}
\begin{bmatrix}
\beta_1(k)\\
\vdots\\
\beta_{d-1}(k)
\end{bmatrix}
= \frac{n}{2}
\begin{bmatrix}
0 & \cdots & 0 & 0 & p-q\\
q & 0 & \cdots & 0 & p-q\\
0 & \ddots & \vdots & 0 & p-q\\
0 & \cdots & q & 0 & p-q\\
0 & \cdots & 0 & q & p-q\\
\end{bmatrix}
\begin{bmatrix}
\beta_1(k-1)\\
\vdots\\
\beta_{d-1}(k-1)
\end{bmatrix},
\end{equation}
\normalsize
and
\small
\begin{equation}\label{thmeq:zeta}
\begin{bmatrix}
\zeta_1(k)\\
\vdots\\
\zeta_{d-1}(k)
\end{bmatrix}
= \frac{n}{2}
\begin{bmatrix}
2q & \cdots & 0 & 0 & p-q\\
q & 0 & \cdots & 0 & p-q\\
0 & \ddots & \vdots & 0 & p-q\\
0 & \cdots & q & 0 & p-q\\
0 & \cdots & 0 & q & p-q\\
\end{bmatrix}
\begin{bmatrix}
\zeta_1(k-1)\\
\vdots\\
\zeta_{d-1}(k-1)
\end{bmatrix}.
\end{equation}
\normalsize
The initial conditions take the form
\begin{align*}
& \begin{bmatrix}
\zeta_1(0)\\
\vdots\\
\zeta_{d-1}(0)
\end{bmatrix} =
\begin{bmatrix}
\beta_1(0)\\
\vdots\\
\beta_{d-1}(0)
\end{bmatrix} = \frac{4}{n^2}
\begin{bmatrix}
1\\
\vdots\\
1
\end{bmatrix}.
\end{align*}
\end{theorem}
A closed-form expression for the distance between the centroids may be obtained through an eigenvalue decomposition of the matrices specifying the recurrences for $\beta$ and $\zeta$. This is demonstrated for $d=3$ in Supplement~\ref{app:d3case}. The Abel-Ruffini theorem~\cite{abel1824memoire} establishes that there are no algebraic solutions in terms of radicals for arbitrary polynomial equations of degree $\geq 5$, which implies that the centroid distance may not have a closed form unless $d-1< 5$.
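Even without a closed form, $\bar{w}_k$ is easy to evaluate numerically by iterating~\eqref{thmeq:beta} and~\eqref{thmeq:zeta}; this is also how the curves in Figures~\ref{fig:simul1} and~\ref{fig:simul2} can be reproduced. A Python sketch (ours; \texttt{numpy} assumed) for arbitrary $d$ follows:
\begin{verbatim}
import numpy as np

def tensor_centroid_distance(n, p, q, d, K):
    m = d - 1
    B = np.zeros((m, m)); Z = np.zeros((m, m))
    B[:, -1] = p - q; Z[:, -1] = p - q     # last column: p - q
    for i in range(1, m):
        B[i, i - 1] = q; Z[i, i - 1] = q   # subdiagonal: q
    Z[0, 0] += 2 * q                       # extra 2q entry in zeta
    B *= n / 2.0; Z *= n / 2.0
    beta = np.full(m, 4.0 / n**2)          # initial conditions
    zeta = beta.copy()
    w = []
    for _ in range(K + 1):
        w.append((2.0 / n) * beta[0] / zeta[0])
        beta, zeta = B @ beta, Z @ zeta
    return w                               # w[k] = bar{w}_k
\end{verbatim}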
For our subsequent analysis, we find the following corollary of Theorem~\ref{thm:abapprox} useful.
\begin{theorem}\label{cor:HgeqCE}
For all $d$-bounded hypergraphs with $d\geq 3$ and for all $k\geq 1$, the centroid distance for the $d$-hSBM$(n,p,q)$ model satisfies
$$\bar{a}^{(k)} - \bar{b}^{(k)} =\bar{w}_k \geq \frac{p-q}{p+q}\, \bar{w}_{k-1} =\frac{p-q}{p+q}\, (\bar{a}^{(k-1)} - \bar{b}^{(k-1)}).$$
\end{theorem}
Combining the above result with that of Theorem~\ref{MAINTHM:CE} reveals that the distance between the centroids of the tensor RWoHs is greater than that of the clique-expansion RWoHs whenever $d\geq 3$. Iterating the bound of Theorem~\ref{cor:HgeqCE} also produces the following bound
$$\bar{w}_k \geq \frac{2}{n}\left( \frac{p-q}{p+q}\right )^k.$$
Comparing this bound to the result of Theorem~\ref{MAINTHM:CE}, one can observe that the centroid distance $\bar{w}_k$ of the tensor RWoHs decays slower than that of the clique-expansion RWoHs with increasing $k$, and that the centroid distance of LPs of tensor RWoHs is larger than that of clique-expansion RWoHs.
We defer the proof of the results to Supplement~\ref{sec:pfthm3} and instead present simulation results in Figure~\ref{fig:simul1}.
\begin{figure}[!htb]
\centering
\includegraphics[width=0.49\linewidth,height=0.3\linewidth]{L2CD_04_01.pdf}
\includegraphics[width=0.49\linewidth,height=0.3\linewidth]{L2CD_04_03.pdf}
\vspace{-0.7cm}
\caption{Centroid distances for the two studied RWoHs. We ran $20$ steps of the random walk and used Theorems~\ref{MAINTHM:CE},~\ref{thm:abapprox} to calculate the centroid distance $||\bar{w}||_2 = \sqrt{\sum_{k=1}^{20}\bar{w}_k^2}$.
}\label{fig:simul1}
\vspace{-0.3cm}
\end{figure}
\begin{figure}[!htb]
\centering
\includegraphics[width=0.31\linewidth]{Gray.pdf}
\includegraphics[width=0.33\linewidth]{T-RWoH_10_4_3.pdf}
\includegraphics[width=0.33\linewidth]{T-RWoH_10_4_1.pdf}
\vspace{-0.7cm}
\caption{The phase transition following from Theorem~\ref{thm:abapprox}. (Right) The gray scale captures the magnitude of $\bar{w}_{50}$ for different values of $(p,q)$. The darker the shade, the smaller the $\bar{w}_{50}$; the separating line is $p=2q$. (Middle, Left) The decay of $\bar{w}_{k}$ for $(p,q) = (0.4,0.3)$ and $(0.4,0.1)$, respectively. }\label{fig:simul2}
\vspace{-0.3cm}
\end{figure}
Although there may not be a general closed-form characterization of the centroid distance when $d\geq 6$, we may still obtain simple approximation results for sufficiently large $d$ by analyzing the characteristic polynomials of the recurrences for $\beta$ and $\zeta$. In this regime, we have the following approximation: $\bar{w}_k \approx \frac{2}{n}\frac{C_3p^k}{C_1(2q)^k+C_2p^k}$,
where $C_1,C_2$ and $C_3$ are constants independent of $k$. We describe how this heuristic naturally arises from the characteristic polynomial of the recurrence and how it is supported by extensive simulations in Supplement~\ref{app:thm2larged}.
We also observe that there exists a phase transition at $p = 2q$, illustrated in Figure~\ref{fig:simul2}. When $p>2q$, we have $\bar{w}_k \rightarrow \frac{2}{n} \frac{C_3}{C_2}$. This implies that the centroid distance does not diminish as the number of steps $k$ increases. Thus, for this parameter setting the tensor RWoHs behave fundamentally differently from the clique-expansion RWoHs (see Theorem~\ref{MAINTHM:CE}). On the other hand, if $q<p<2q$, then $\bar{w}_k$ decays roughly geometrically, similar to the behavior established in Theorem~\ref{MAINTHM:CE}. We conjecture that the constants are as listed below.
\begin{conjecture}\label{conj:1}
\begin{align*}
& C_1 = \frac{-q}{p-2q},\;C_2 = \frac{p-q}{p-2q},\;C_3 = \frac{p-q}{p}.
\end{align*}
\end{conjecture}
Figure~\ref{fig:simul2} also shows that for $p>2q$ the centroid distance decays very slowly with $k$ and that the difference between our conjectured behavior and the result of the pertinent theorem is very small.
The next result shows that the empirical centroids asymptotically concentrate around their mean-fields.
\begin{lemma}\label{lma:Hconcentrate}
Assume that $G$ is sampled from a $d$-hSBM$(n,p,q)$ model, for some constant $d\geq 3$. Let the LPs of the tensor RWoHs be $x_{v;t}^{(k)}$ and assume that $\frac{nq^2}{\log n}\rightarrow \infty$.
For sufficiently large $n$, any constant $\epsilon>0$ and a bounded constant $k$,
\begin{align*}
&\frac{2}{n}\left (\sum_{v\in V_0}x_{v;t}^{(k)} - \sum_{v\in V_1}x_{v;t}^{(k)} \right )\\
&\in [(1-\epsilon)(\bar{a}^{(k)}-\bar{b}^{(k)}),(1+\epsilon)(\bar{a}^{(k)}-\bar{b}^{(k)})],
\end{align*}
with probability at least $1-o(1)$.
\end{lemma}
The proof of Lemma~\ref{lma:Hconcentrate} is deferred to Supplement~\ref{app:pflma2}. The above results show that if one sets $\gamma_k = \bar{w}_k$ in the GPR formulation, the resulting classifier asymptotically approaches the geometric discriminant function or a tight approximation thereof. Independently of the choice of the parameter $\alpha$, the geometric discriminant function of PPR does not match that of the tensor RWoHs on the $d$-hSBM; only the choice of $\gamma_k$ suggested by our analysis allows the aforementioned result to hold.
\vspace{-0.1cm}
\section{Proofs}\label{sec:T-RWoH}
\vspace{-0.1cm}
Due to space limitations, we relegate all relevant proofs pertaining to the clique-expansion method to the Supplement and exclusively focus on the tensor case.
The main technical difficulty associated with tensor RWoHs is the large size of the state space, equal to $2^{d-1}$, which makes an analysis akin to the one described in Theorem~\ref{MAINTHM:CE} difficult. The main finding of this section is that for the $d$-hSBM, the centroid distances are governed by small recurrences involving only $d-1$ states. The proof supporting this observation comprises two steps, the first step of which is similar to the proof of Theorem~\ref{MAINTHM:CE}. The second step of the proof describes how to reduce the state space of the recurrence. For simplicity, we start with $d=3$ and then generalize the analysis for arbitrary $d$. For notational convenience, we write $\mathbf{Y}^{(k)} = [Y_1^{(k)},Y_2^{(k)},Y_3^{(k)},Y_4^{(k)}]^T$.
\begin{theorem}\label{thm:post123456}
Let $G$ be sampled from $3$-hSBM$(n,p,q)$ and let the tensor RWoHs be associated with an initial vector $y^{(0)}_{s_1,s_2; t} = 1$ and $y^{(0)}_{v_1,v_2;t} = 0,\;\forall (v_1,v_2)\neq (s_1,s_2)$, where $s_1$ and $s_2$ are selected independently and uniformly at random from $V_0$. Furthermore, let $\bar{y}^{(0)}_{t} = \mathbb{E} y^{(0)}_{t}$. Then for all $k\geq 0$
\begin{align}
\bar{y}_{i,j;t}^{(k)} =
\begin{cases}
Y_1^{(k)} & \text{if } (i,j) \in V_0\times V_0,\\
Y_2^{(k)} & \text{if } (i,j) \in V_0\times V_1,\\
Y_3^{(k)} & \text{if } (i,j) \in V_1\times V_0,\\
Y_4^{(k)} & \text{if } (i,j) \in V_1\times V_1,\\
\end{cases}
\end{align}
where $\mathbf{Y}^{(0)} = \frac{4}{n^2}[1,0,0,0]^T$ and
\begin{align*}
&\begin{bmatrix}
Y_1^{(k+1)}\\
Y_2^{(k+1)}\\
Y_3^{(k+1)}\\
Y_4^{(k+1)}\\
\end{bmatrix}
=
\begin{bmatrix}
\frac{np}{2} & 0 & \frac{nq}{2} & 0\\
\frac{nq}{2} & 0 & \frac{nq}{2} & 0\\
0 & \frac{nq}{2} & 0 & \frac{nq}{2}\\
0 & \frac{nq}{2} & 0 & \frac{np}{2}
\end{bmatrix}
\begin{bmatrix}
Y_1^{(k)}\\
Y_2^{(k)}\\
Y_3^{(k)}\\
Y_4^{(k)}
\end{bmatrix}.
\end{align*}
\end{theorem}
\begin{proof}
The proof proceeds by induction: The base case $k = 0$ is clearly true. For the induction step, assume that the hypothesis holds for $1,2,...,k$. Then
\begin{align*}
& \forall i,j\in V_0, \; Y_1^{(k+1)} = \bar{y}_{i,j;t}^{(k+1)} = \sum_{l=1}^{n}\mathbb{E} A_{lij}\bar{y}_{l,i;t}^{(k)} \\
&= \sum_{l\in V_0}\mathbb{E} A_{lij}\bar{y}_{l,i;t}^{(k)}+\sum_{l\in V_1} \mathbb{E} A_{lij}\bar{y}_{l,i;t}^{(k)} \\
& = \sum_{l\in V_0}\mathbb{E} A_{lij}Y_1^{(k)}+\sum_{l\in V_1}\mathbb{E} A_{lij}Y_3^{(k)} = \frac{np}{2}Y_1^{(k)}+\frac{nq}{2}Y_3^{(k)}.
\end{align*}
Similar expressions may be derived for $Y_2^{(k+1)},Y_3^{(k+1)},Y_4^{(k+1)}$. This completes the proof.
\end{proof}
Next we show how to reduce the number of states to $d-1$. To this end, we simplify $\bar{x}_{j;t}^{(k)}$ as
\begin{align*}
& \bar{x}_{j;t}^{(k)} = \frac{ \sum_{i = 1}^{n}\bar{y}_{i,j;t}^{(k)}}{ \sum_{i,l = 1}^{n}\bar{y}_{i,l;t}^{(k)}} =
\begin{cases}
\bar{a}^{(k)} = \frac{2}{n}\frac{Y_1^{(k)} + Y_3^{(k)}}{\sum_{m=1}^{4}Y_m^{(k)}}, & \mbox{if } j\in V_0 \\
\bar{b}^{(k)} = \frac{2}{n}\frac{Y_2^{(k)} + Y_4^{(k)}}{\sum_{m=1}^{4}Y_m^{(k)}}, & \mbox{if } j\in V_1.
\end{cases}
\end{align*}
The centroid distance $\bar{w}= \bar{a} - \bar{b}$ may be written as
\begin{align*}
& \bar{w}_k= \bar{a}^{(k)} - \bar{b}^{(k)} = \frac{2}{n}\frac{\beta_1(k)}{\zeta_1(k)},\text{ where}\\
& \beta_1(k) = [1,-1,1,-1]\mathbf{Y}^{(k)},\,
\zeta_1(k) = [1,1,1,1]\mathbf{Y}^{(k)}.
\end{align*}
We introduce next the following auxiliary variables:
\begin{align*}
& \beta_2(k) = [1,0,0,-1]\mathbf{Y}^{(k)},\,
\zeta_2(k) = [1,0,0,1]\mathbf{Y}^{(k)}.
\end{align*}
In the expression for $\beta_1(k)$, we replace $\mathbf{Y}^{(k)}$ by $\mathbf{Y}^{(k-1)}$ by invoking the recurrence of Theorem~\ref{thm:post123456}. One can then show that the recurrence for $\mathbf{Y}^{(k)}$ may be replaced by a recurrence for $\beta_1(k)$ and $\beta_2(k)$:
\begin{align}
\begin{bmatrix}
\beta_1(k+1)\\
\beta_2(k+1)
\end{bmatrix}
= \frac{n}{2}
\begin{bmatrix}
0 & p-q\\
q & p-q
\end{bmatrix}
\begin{bmatrix}
\beta_1(k)\\
\beta_2(k)
\end{bmatrix}.
\end{align}
For $\zeta$, one can derive the following similar result:
\begin{align}
\begin{bmatrix}
\zeta_1(k+1)\\
\zeta_2(k+1)
\end{bmatrix}
= \frac{n}{2}
\begin{bmatrix}
2q & p-q\\
q & p-q
\end{bmatrix}
\begin{bmatrix}
\zeta_1(k)\\
\zeta_2(k)
\end{bmatrix}.
\end{align}
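The reduction can be verified numerically for $d=3$: iterating the full $4$-state recurrence of Theorem~\ref{thm:post123456} and the reduced $2$-state recurrences above must yield identical values of $\beta_1(k)$ and $\zeta_1(k)$. A short check (ours; \texttt{numpy} assumed):
\begin{verbatim}
import numpy as np

def check_reduction_d3(n, p, q, K):
    M = (n / 2.0) * np.array([[p, 0, q, 0],
                              [q, 0, q, 0],
                              [0, q, 0, q],
                              [0, q, 0, p]])
    B = (n / 2.0) * np.array([[0, p - q], [q, p - q]])
    Z = (n / 2.0) * np.array([[2 * q, p - q], [q, p - q]])
    Y = (4.0 / n**2) * np.array([1.0, 0.0, 0.0, 0.0])
    beta = np.array([Y @ [1, -1, 1, -1], Y @ [1, 0, 0, -1]])
    zeta = np.array([Y @ [1, 1, 1, 1], Y @ [1, 0, 0, 1]])
    for _ in range(K):
        Y = M @ Y
        beta, zeta = B @ beta, Z @ zeta
        assert np.isclose(beta[0], Y @ [1, -1, 1, -1])
        assert np.isclose(zeta[0], Y @ [1, 1, 1, 1])
\end{verbatim}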
This approach generalizes to arbitrary $d$, but we defer the detailed analysis to Supplement~\ref{app:remainpfthm2}. Let $e_{i} = [1,0,...,0,-1]$, where the run-length of zeros equals $2^{i}-2$.
Then, $\beta_i(k) = [e_i,e_i,...,e_i]\mathbf{Y}^{(k)}$; a similar expression is valid for $\zeta$ with all $-1$s changed to $1$s. As for the case $d=3$, one can establish the following recurrence relations:
\small
\begin{equation}\label{margsimp7}
\begin{bmatrix}
\beta_1(k)\\
\vdots\\
\beta_{d-1}(k)
\end{bmatrix}
= \frac{n}{2}
\begin{bmatrix}
0 & \cdots & 0 & 0 & p-q\\
q & 0 & \cdots & 0 & p-q\\
0 & \ddots & \vdots & 0 & p-q\\
0 & \cdots & q & 0 & p-q\\
0 & \cdots & 0 & q & p-q\\
\end{bmatrix}
\begin{bmatrix}
\beta_1(k-1)\\
\vdots\\
\beta_{d-1}(k-1)
\end{bmatrix},
\end{equation}
\normalsize
and
\small
\begin{equation}\label{margsimp8}
\begin{bmatrix}
\zeta_1(k)\\
\vdots\\
\zeta_{d-1}(k)
\end{bmatrix}
= \frac{n}{2}
\begin{bmatrix}
2q & \cdots & 0 & 0 & p-q\\
q & 0 & \cdots & 0 & p-q\\
0 & \ddots & \vdots & 0 & p-q\\
0 & \cdots & q & 0 & p-q\\
0 & \cdots & 0 & q & p-q\\
\end{bmatrix}
\begin{bmatrix}
\zeta_1(k-1)\\
\vdots\\
\zeta_{d-1}(k-1)
\end{bmatrix}.
\end{equation}
\normalsize
This completes the proof of Theorem~\ref{thm:abapprox}.
\vspace{-0.1cm}
\section{Construction of GPR based on landing probabilities}\label{sec:CoGPR}
\vspace{-0.1cm}
In what follows, we use our theoretical results to propose new GPR methods for hypergraph clustering. Following~\cite{kloumann2017block}, the geometric discriminant of interest equals $w^Tx_v$, where $x_v$ is the landing probability vector of the vertex $v$. If only the first moments of the LPs are available, the optimal choice of $w$ corresponding to the maximal marginal separator of the centroids is given in Theorems~\ref{MAINTHM:CE} and~\ref{thm:abapprox} for clique-expansion RWoHs and tensor RWoHs, respectively.
The geometric discriminant only takes the first-order moments of the LPs into account. As pointed out in~\cite{kloumann2017block}, the Fisher discriminant is expected to have better classification performance since it also makes use of the covariances of the LPs (see Figure~\ref{fig:explain}). More precisely, the Fisher discriminant takes the form $\left( \Sigma^{-1}w\right)^Tx_v$, where $x_v$ is the landing probability vector of the vertex $v$ and $\Sigma$ is the covariance matrix of the landing probability vector; a simple way to estimate it from samples is sketched after Figure~\ref{fig:explain}. The authors of~\cite{kloumann2017block} \emph{empirically} leveraged the information about the second-order moments of LPs and showed that the Fisher discriminant nearly matches the performance of belief propagation, the statistically optimal method for community detection on SBMs~\cite{abbe2015detection,zhang2014scalable,mossel2014belief}. We therefore turn our attention to the Fisher discriminants corresponding to clique-expansion and tensor RWoHs.
\begin{figure}[!htb]
\centering
\subfigure[\scriptsize{Geometric discriminant.}\label{fig:geodis_plot}]{\includegraphics[width=0.48\linewidth]{Geometric_discriminator.PNG}}
\subfigure[\scriptsize{Fisher discriminant.}\label{fig:Fishdis_plot}]{\includegraphics[width=0.48\linewidth]{Fisher_discriminator.PNG}}
\vspace{-0.5cm}
\caption{Illustration of geometric and Fisher discriminants. Consecutive step LPs are correlated as the random walks have memory and one needs to take the covariance into account. The gradations in the colors reflect the density of the LPs in the ambient space. }\label{fig:explain}
\vspace{-0.4cm}
\end{figure}
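One simple way to estimate the Fisher discriminant from sampled LPs is to plug in the empirical centroid difference and a regularized pooled covariance estimate, as in the following sketch (ours; \texttt{numpy} assumed, and the regularizer guards against a singular $\Sigma$):
\begin{verbatim}
import numpy as np

def fisher_scores(X, inside, outside, reg=1e-12):
    # X[v, k] = x_v^{(k)}; inside/outside index labeled vertices.
    w = X[inside].mean(axis=0) - X[outside].mean(axis=0)
    Xc = np.vstack([X[inside] - X[inside].mean(axis=0),
                    X[outside] - X[outside].mean(axis=0)])
    Sigma = np.cov(Xc, rowvar=False) + reg * np.eye(X.shape[1])
    return X @ np.linalg.solve(Sigma, w)   # (Sigma^{-1} w)^T x_v
\end{verbatim}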
Recall that our theoretical results show that tensor RWoHs lead to larger centroid distances than clique-expansion RWoHs. Most importantly, the difference between the centroid distances of the two methods increases with the hyperedge size $d$. Hence, for large hyperedge sizes the theoretical results suggest that one should not directly use clique-expansion combined with PR methods. On the other hand, clique-expansion leads to reductions in the variance of random walks. This is intuitively clear since entries of the clique-expanded adjacency matrix contain sums of entries of the original adjacency tensor; hence the adjacency matrix obtained through clique-expansion will be ``closer'' to its expectation, implying a smaller variance. This also follows from Lemmas~\ref{lma:concentrate} and~\ref{lma:Hconcentrate} by observing that the empirical centroids of clique-expansion RWoHs converge faster as $n$ grows. This points to an important bias-variance trade-off between clique-expansion and tensor RWoHs. We therefore propose the following hybrid random walk scheme combining clique-expansion and tensor methods, referred to as ``CET RWoHs''. The gist of the CET approach is not to replace a hyperedge by a complete graph as is done in clique-expansion, but to replace it by a complete lower-order hypergraph instead. On the reduced-order hypergraph one can then apply the tensor RWoHs, both to increase the centroid distance and to ensure smaller computational and space complexity. A sketch of this reduction is given below.
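The following snippet reflects our reading of the CET construction (the exact weighting used in experiments may differ): each hyperedge is replaced by the complete collection of its $d'$-subsets, with multiplicities retained as weights; $d'=2$ recovers full clique-expansion, while $d'=|e|$ leaves a hyperedge unchanged. The resulting $d'$-bounded weighted hypergraph is then processed with the order-$(d'-1)$ tensor RWoHs.
\begin{verbatim}
from itertools import combinations

def partial_clique_expansion(hyperedges, d_prime):
    # Replace every hyperedge by all of its d_prime-subsets,
    # keeping multiplicities as weights.
    reduced = {}
    for e in hyperedges:
        for sub in combinations(sorted(set(e)), d_prime):
            reduced[sub] = reduced.get(sub, 0) + 1
    return reduced   # feed to an order-(d_prime - 1) tensor walk
\end{verbatim}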
\vspace{-0.1cm}
\section{Simulations}
\vspace{-0.1cm}
In the examples that follow, all results are obtained by averaging over $20$ independent trials.
The first test illustrates the clustering performance of the geometric and Fisher discriminants corresponding to clique-expansion and tensor RWoHs for a $3$-hSBM$(100,p,q)$ with a uniform seed initialization. More precisely, we start with $\bar{y}_{ce}^{(0)}$ and $\bar{y}_{t}^{(0)}$, respectively. Subsequently, we use
$k=6$ steps of the random walk for both the clique-expansion RWoHs and the tensor RWoHs; our choice is governed by the fact that the centroid distance of the clique-expansion LPs with $k=6$ is close to $0$. Figure~\ref{fig:expResult_old} shows that for both the geometric and the Fisher discriminant, using the LPs of tensor RWoHs results in better clustering performance than using those of clique-expansion RWoHs. This supports our theoretical results.
\begin{figure}[!htb]
\centering
\subfigure[\scriptsize{Geometric discriminant.}\label{fig:geoperfomance_plot}]{\includegraphics[width=0.48\linewidth]{EXPfirst.pdf}}
\subfigure[\scriptsize{Fisher discriminant.}\label{fig:fisherperfomance_plot}]{\includegraphics[width=0.48\linewidth]{EXPsecond.pdf}}
\vspace{-0.5cm}
\caption{Clustering performance of the CE (clique-expansion) and T (tensor) methods with uniform initialization on a $3$-hSBM.}\label{fig:expResult_old}
\vspace{-0.2cm}
\end{figure}
However, in practice, one rarely uses a uniform initialization as it implies (partial) prior knowledge of the cluster structure: Seed-set expansion is usually of interest in applications where the seeds are user-defined. To illustrate the performance of the clique-expansion, tensor and CET methods in this setting, we also used single-vertex-seed initializations $y_{ce}^{(0)}$ and $y_{t}^{(0)}$, respectively. Figure~\ref{fig:expResult} provides simulations for a $4$-hSBM$(100,p,q)$, demonstrating that when only the first moment is used (i.e., when the discriminant is geometric), clique-expansion RWoHs offer the best performance.
This may be explained by observing that the LPs are correlated and that clique-expansion RWoHs have a smaller variance than the other methods. On the other hand, if we additionally use the second moment (i.e., when the discriminant is Fisher), then the CET RWoHs has the best performance in almost all parameter regimes, while the tensor RWoHs has the best performance only when $p-q$ is close to $0$. This finding matches our results and their interpretation in Section~\ref{sec:CoGPR}, indicating that CET RWoHs offer good bias-variance trade-offs. As the difference between the centroid distances of clique-expansion and tensor RWoHs grows with the hyperedge size $d$, the performance gain of CET RWoHs is expected to be even larger for higher-order hypergraphs. It remains an open question how to choose the best combination of hypergraph projections and tensor RWoHs with respect to both performance and computational complexity.
\begin{figure}[!htb]
\centering
\subfigure[]{\includegraphics[width=0.49\linewidth]{p2_NEW.pdf}}
\subfigure[]{\includegraphics[width=0.49\linewidth]{p6_NEW.pdf}}
\vspace{-0.5cm}
\caption{Clustering performance of the CE (clique-expansion), tensor and CET methods with single-seed-vertex initialization on $4$-hSBM.}\label{fig:expResult}
\end{figure}
\textbf{Acknowledgment:} The work was supported by the NSF grant 1956384 and the NSF Center for Science of Information (CSoI) housed at Purdue University.
\bibliographystyle{IEEEtran}
|
2,869,038,154,026 | arxiv | \section{Introduction}
\label{introduction}
When solving the Einstein equations numerically, the standard way is to
split the spacetime into space and time.
The most fundamental decomposition of the Einstein equations is the
Arnowitt-Deser-Misner (ADM) formulation \cite{ADM62, York78}.
However, it is well known that in long-term evolutions in strong
gravitational fields such as the coalescences of binary neutron stars and/or
black holes, simulations with the ADM formulation are unstable and are often
interrupted before producing physically interesting results.
Finding more robust and stable formulations is known as the ``formulation
problem'' in numerical relativity \cite{SY00,Shinkai09,SY02gr-qc}.
Many formulations have been proposed in the last two decades.
The most commonly used sets of evolution equations among numerical
relativists are the so-called Baumgarte-Shapiro-Shibata-Nakamura (BSSN)
formulation \cite{SN95,BS98}, the generalized harmonic (GH) formulation
\cite{PretriusCQG05, Garfinkle02}, the Kidder-Scheel-Teukolsky (KST)
formulation \cite{KST01}, and the Z4 formulation \cite{BLPZ03,BLPZ04}
(as references of their numerical application, we here cite only well-known
articles; \cite{BCCKM06, CLMZ06} for the BSSN formulation,
\cite{Pretorius05} for the GH formulation, \cite{SBCKMP09} for the KST
formulation, and \cite{ABBRP11} for the Z4 formulation).
All of the above modern formulations include the technique of
``constraint damping'', which attempts to control the violations of
constraints by adding the constraint terms to their evolution equations.
Using this technique, more stable and accurate systems are obtained
(see e.g. \cite{WBH11,GCHM05}).
This technique can be described as `adjustment' of the original system.
In \cite{YS01prd, YS02, SY02}, two of the authors systematically
investigated how the adjusted terms change the original systems by
calculating the constraint propagation equations.
The authors suggested some effective adjustments for the BSSN formulation
under the name ``adjusted BSSN formulation''\cite{YS02}.
The actual constraint-damping effect was confirmed by numerical tests
\cite{KS08}.
Fiske proposed a method of adjusting the original evolution system using the
norm of the constraints, $C^2$, \cite{Fiske04}, which we call a
``$C^2$-adjusted system.''
The new evolution equations force the constraints to evolve towards their
decay if the coefficient parameters of the adjusted terms are set as
appropriate positive values.
Fiske reported the damping effect of the constraint violations for the
Maxwell system \cite{Fiske04} and for the linearized ADM and BSSN
formulations \cite{Fiske_Phd}.
He also reported the limitation of the magnitude of the coefficient
parameters of the adjusted terms.
In \cite{TYS11}, we applied this $C^2$-adjusted system to the (full) ADM
formulation and presented some numerical tests.
We confirmed that the violations of the constraints are less than those in
the original system.
We also reported the differences of the effective range of the coefficient
of the adjusted terms.
In this article, we apply the $C^2$-adjusted system to the (full) BSSN
formulation and derive the constraint propagation equations in the flat
space.
We perform some numerical tests and compare them with three other types of
BSSN formulations: the standard BSSN formulation, the
$\widetilde{A}$-adjusted BSSN formulation, and the $C^2$-adjusted BSSN
formulation.
We use the gauge-wave and polarized Gowdy wave testbeds, which are the test
problems as is known to apples-with-apples testbeds for comparing evolution
systems \cite{Alcubierre_etc04}.
Since the models are precisely fixed up to the gauge conditions, boundary
conditions, and technical parameters, the testbeds are widely used for
comparisons \cite{KS08, Zumbusch09, BB10}.
The structure of this article is as follows.
We review the ideas of adjusted systems and $C^2$-adjusted system in
Sec.\ref{GeneralIdea}.
In Sec.\ref{ApplicationEinsteinEq}, we review the standard and adjusted BSSN
formulations and derive the $C^2$-adjusted version of the BSSN formulation.
In Sec.\ref{NumericalExamples}, we present some numerical tests of the
gauge-wave and polarized Gowdy wave testbeds.
We show the damping effect of the constraint violations, and confirm that
inclusion of algebraic constraints in $C^2$ make the violations of
constraints decrease.
We summarize this article in Sec.\ref{Summary}.
In this article, we only consider vacuum spacetime, but the inclusion of
matter is straightforward.
\section{Ideas of adjusted systems and $C^2$-adjusted systems}
\label{GeneralIdea}
\subsection{Idea of adjusted systems}
\label{GeneralIdea_AdjustedSystems}
Suppose we have dynamical variables $u^i$ that evolve with the evolution
equations
\begin{align}
&\partial_t u^i = f(u^i, \partial_j u^i, \cdots),
\label{eq:generalEvolveEquations}
\end{align}
and suppose also that the system has the (first class) constraint equations
\begin{align}
&C^a(u^i, \partial_j u^i, \cdots)\approx 0.
\label{eq:generalConstraintEquations}
\end{align}
We can then predict how the constraints are preserved by evaluating the
constraint propagation equations
\begin{align}
\partial_t C^a &=g(C^a,\partial_i C^a,\cdots),
\label{eq:generalConstraintPropagation}
\end{align}
which measure the violation behavior of constraints $C^a$ in time evolution.
Equation \eqref{eq:generalConstraintPropagation} is theoretically weakly
zero, i.e., $\partial_t C^a \approx 0$, since the system is supposed to be
of first class.
However, free numerical evolution with discretized grids introduces a
constraint violation, at least at the level of truncation error, which
sometimes grows and stops the simulations.
The unstable feature of ADM evolution can be understood on the basis of this
analysis \cite{Pretorius05}.
Such features of the constraint propagation equations,
\eqref{eq:generalConstraintPropagation}, change when we modify the original
evolution equations.
Suppose we add constraint terms to the right-hand-side of
\eqref{eq:generalEvolveEquations} as
\begin{align}
\partial_t u^i = f(u^i, \partial_j u^i, \cdots) + F(C^a, \partial_j C^a,
\cdots),
\label{eq:generalADjustedEvolutionEqs}
\end{align}
where $F(C^a,\cdots)$ is in principle zero, but is not exactly zero in
numerical evolutions.
With this adjustment, equation \eqref{eq:generalConstraintPropagation} will
also be modified to
\begin{align}
\partial_t C^a&=g(C^a,\partial_i C^a,\cdots) + G(C^a, \partial_i C^a,
\cdots).
\label{eq:generalADjustedConstraintPropagationEqs}
\end{align}
Therefore, we are able to control $\partial_t C^a$ by making an appropriate
adjustment $F(C^a, \partial_j C^a, \cdots)$ in
\eqref{eq:generalADjustedEvolutionEqs}.
If $\partial_t C^a<0$ is realized, then the system has the constraint
surface as an attractor.
This technique is also known as a constraint-damping technique.
Almost all the current popular formulations used in large-scale numerical
simulations include this implementation.
The purpose of this article is to find a better way of adjusting the
evolution equations to realize $\partial_t C^a\leq 0$.
\subsection{Idea of $C^2$-adjusted systems}
\label{theideaofC2}
Fiske \cite{Fiske04} proposed a way of adjusting the evolution equations
which we call ``$C^2$-adjusted systems'';
\begin{align}
\partial_t u^i = f(u^i, \partial_j u^i, \cdots)-\kappa^{i j}
\left(\frac{\delta C^2}{\delta u^j}\right),
\label{eq:adjutedGeneralEvolveEquations}
\end{align}
where $\kappa^{i j}$ is a positive-definite constant coefficient and $C^2$
is the norm of the constraints, which is defined as
$\displaystyle{C^2\equiv \int C_a C^a d^3 x}$.
The term $(\delta C^2/\delta u^j)$ is the functional derivative of $C^2$
with respect to $u^j$.
The associated constraint propagation equation becomes
\begin{align}
\partial_t C^2 = h(C^a, \partial_i C^a, \cdots) - \int d^3 x
\left(\frac{\delta C^2}{\delta u^i}\right)\kappa^{i j}
\left(\frac{\delta C^2}{\delta u^j}\right).
\label{eq:adjustedGeneralConstraintPropagation_of_C2}
\end{align}
The motivation for this adjustment is to naturally obtain the
constraint-damping system, $\partial_t C^2<0$.
If we set $\kappa^{i j}$ so that the second term of the right-hand side of
\eqref{eq:adjustedGeneralConstraintPropagation_of_C2} becomes larger than
the first term, then $\partial_t C^2$ becomes negative, which indicates that
constraint violations are expected to decay to zero.
Fiske presented numerical examples of the Maxwell system and the linearized
ADM and BSSN formulations, and concluded that this method actually reduces
constraint violations as expected.
In our previous work \cite{TYS11}, we applied the $C^2$-adjusted system to
the (full) ADM formulation and derived the constraint propagation
equations.
We confirmed that $\partial_t C^2<0$ is expected in the flat spacetime.
We performed numerical tests with the $C^2$-adjusted ADM formulation using
the Gowdy wave testbed, and confirmed that the violations of the constraint
are lower than those of the standard ADM formulation.
The simulation continues 1.7 times longer than that of the standard ADM
formulation, with the magnitude of the constraint violations kept below
order $O(10^0)$.
\section{Application to BSSN formulation}
\label{ApplicationEinsteinEq}
\subsection{Standard BSSN Formulation}
\label{standardBSSN}
We work with the widely used notation of the BSSN system.
That is, we use the dynamical variables
$(\varphi, K, \widetilde{\gamma}_{i j}, \widetilde{A}_{i j},
\widetilde{\Gamma}^i)$ in place of the ADM variables
$(\gamma_{i j}, K_{i j})$, where
\begin{align}
\varphi
& \equiv (1/12) \log({\rm det} (\gamma_{i j})),
\label{eq:phi_BSSNVariables}\\
K
& \equiv \gamma^{i j} K_{i j},
\label{eq:K_BSSNVariables}\\
\widetilde{\gamma}_{i j}
& \equiv e ^{-4 \varphi} \gamma_{i j},
\label{eq:gammaTilde_BSSNVariables}\\
\widetilde{A}_{i j}
& \equiv e ^{-4 \varphi}(K_{i j} - (1/3) \gamma_{i j} K), \,\,{\rm and}
\label{eq:A_BSSNVariables}\\
\widetilde{\Gamma}^i
& \equiv \widetilde{\gamma}^{m n} \widetilde{\Gamma}^i{}_{m n}.
\label{eq:Gamma_BSSNVariables}
\end{align}
The BSSN evolution equations are, then,
\begin{align}
\partial_t \varphi
& = - (1/6) \alpha K + (1/6) (\partial_i \beta^i) + \beta^i
(\partial_i \varphi),
\label{eq:phi_standardBSSNEvolutionEquations}\\
\partial_t K
& = \alpha \widetilde{A}_{i j} \widetilde{A}^{i j} + (1/3) \alpha K^2
- D_i D^i \alpha + \beta^i (\partial_i K),
\label{eq:K_standardBSSNEvolutionEquations}\\
\partial_t\widetilde{\gamma}_{i j}
& = -2 \alpha \widetilde{A}_{i j} - (2/3) \widetilde{\gamma}_{i j}
(\partial_\ell \beta^\ell)
\nonumber\\
&\quad
+ \widetilde{\gamma}_{j \ell} (\partial_i \beta^\ell)
+ \widetilde{\gamma}_{i \ell} (\partial_j \beta^\ell)
+ \beta^\ell (\partial_\ell \widetilde{\gamma}_{i j}),
\label{eq:gamma_standardBSSNEvolutionEquations}\\
\partial_t \widetilde{A}_{i j}
& = \alpha K \widetilde{A}_{i j}
- 2 \alpha \widetilde{A}_{i \ell} \widetilde{A}^\ell{}_j
+ \alpha e ^{-4 \varphi} R_{i j}{}^{\rm TF}
\nonumber\\
&\quad
- e ^{-4\varphi}(D_i D_j\alpha)^{\rm TF}
- (2/3) \widetilde{A}_{i j}(\partial_\ell \beta^\ell)
\nonumber\\
&\quad
+ (\partial_i \beta^\ell) \widetilde{A}_{j \ell}
+ (\partial_j \beta^\ell) \widetilde{A}_{i \ell}
+ \beta^\ell (\partial_\ell \widetilde{A}_{i j}),
\label{eq:A_standardBSSNEvolutionEquations}\\
\partial_t \widetilde{\Gamma}^i
& =
2 \alpha\{ 6 (\partial_j \varphi) \widetilde{A}^{i j}
+ \widetilde{\Gamma}^i{}_{j \ell} \widetilde{A}^{j \ell}
- (2/3) \widetilde{\gamma}^{i j} (\partial_j K)
\}
\nonumber\\
&\quad
-2 (\partial_j \alpha) \widetilde{A}^{i j}
+ (2/3) \widetilde{\Gamma}^i (\partial_j \beta^j)
+ (1/3) \widetilde{\gamma}^{i j} (\partial_\ell \partial_j \beta^\ell)
\nonumber\\
&\quad
+ \beta^\ell (\partial_\ell \widetilde{\Gamma}^i)
- \widetilde{\Gamma}^j (\partial_j \beta^i)
+ \widetilde{\gamma}^{j \ell} (\partial_j \partial_\ell \beta^i),
\label{eq:CGamma_standardBSSNEvolutionEquations}
\end{align}
where ${}^{\rm TF}$ denotes the trace-free part. The Ricci tensor in the
BSSN system is normally calculated as
\begin{align}
R_{i j}
& \equiv \widetilde{R}_{i j} + R^\varphi_{i j},
\label{eq:ricci_standardBSSNFormulation}
\end{align}
where
\begin{align}
\widetilde{R}_{i j}
&\equiv \widetilde{\gamma}_{n (i} \partial_{j)} \widetilde{\Gamma}^n
+ \widetilde{\gamma}^{\ell m} (2 \widetilde{\Gamma}^k{}_{\ell (i}
\widetilde{\Gamma}_{j) k m} + \widetilde{\Gamma}_{n \ell j}
\widetilde{\Gamma}^n{}_{i m})
\nonumber\\
&\quad
- (1/2) \widetilde{\gamma}^{m \ell} \widetilde{\gamma}_{i j, m \ell}
+ \widetilde{\Gamma}^n \widetilde{\Gamma}{}_{(i j) n},
\label{eq:ricciTilde_standardBSSNFormulation}\\
R_{i j}^\varphi
& \equiv - 2 \widetilde{D}_i \widetilde{D}_j \varphi
+ 4 (\widetilde{D}_i \varphi) (\widetilde{D}_j \varphi)
- 2 \widetilde{\gamma}_{i j} \widetilde{D}_m \widetilde{D}^m \varphi
\nonumber\\
&\quad
-4 \widetilde{\gamma}_{i j} (\widetilde{D}^m \varphi)
(\widetilde{D}_m \varphi).
\label{eq:ricciPhi_standardBSSNFormulation}
\end{align}
The BSSN system has five constraint equations.
The ``kinematic'' constraint equations, which are the Hamiltonian constraint
equation and the momentum constraint equations ($\mathcal{H}$-constraint and
$\mathcal{M}$-constraint, hereafter), are expressed in terms of the BSSN
basic variables as
\begin{align}
\mathcal{H}
& \equiv e ^{-4 \varphi} \widetilde{R} - 8 e ^{-4 \varphi}
(\widetilde{D}_i \widetilde{D}^i \varphi + (\widetilde{D}^m \varphi)
(\widetilde{D}_m \varphi))
\nonumber\\
&\quad
+ (2/3)K^2 - \widetilde{A}_{i j} \widetilde{A}^{i j} -(2/3) \mathcal{A} K
\approx 0
\label{eq:hamiltonian_standardBSSNConstraintEquations},\\
\mathcal{M}_i
& \equiv -(2/3) \widetilde{D}_i K + 6 (\widetilde{D}_j \varphi)
\widetilde{A}^j{}_i + \widetilde{D}_j \widetilde{A}^j{}_i
\nonumber\\
&\quad
-2(\widetilde{D}_i \varphi) \mathcal{A}\approx 0,
\label{eq:momentum_standardBSSNConstraintEquations}
\end{align}
respectively, where $\widetilde{D}_i$ is the covariant derivative associated
with $\widetilde{\gamma}_{i j}$ and $\widetilde{R}=\widetilde{\gamma}^{i j}
\widetilde{R}_{i j}$.
Because of the introduction of new variables, there are additional
``algebraic'' constraint equations:
\begin{align}
\mathcal{G}^i
& \equiv \widetilde{\Gamma}^i - \widetilde{\gamma}^{j \ell}
\widetilde{\Gamma}^i{}_{j \ell}\approx 0,
\label{eq:gConstraint_standardBSSNConstraintEquations}\\
\mathcal{A}
& \equiv \widetilde{A}^{i j} \widetilde{\gamma}_{i j}\approx 0,
\label{eq:aCosntraint_standardBSSNConstraintEquations}\\
\mathcal{S}
& \equiv{\rm det} (\widetilde{\gamma}_{i j}) - 1\approx 0,
\label{eq:sConstraint_standardBSSNConstraintEquations}
\end{align}
which we call the $\mathcal{G}$-, $\mathcal{A}$-, and
$\mathcal{S}$-constraints, respectively, hereafter.
If the algebraic constraint equations,
\eqref{eq:gConstraint_standardBSSNConstraintEquations}-\eqref{eq:sConstraint_standardBSSNConstraintEquations},
are not satisfied, the BSSN formulation and ADM formulation are not
equivalent mathematically.
\subsection{$C^2$-adjusted BSSN Formulation}
\label{c2adjustedBSSN}
The $C^2$-adjusted BSSN evolution equations are formally written as
\begin{align}
\partial_t \varphi
&= \eqref{eq:phi_standardBSSNEvolutionEquations}
- \lambda_\varphi \left(\frac{\delta C^2}{\delta \varphi}\right),
\label{eq:phi_c2adjusted_BSSN}\\
\partial_t K
&= \eqref{eq:K_standardBSSNEvolutionEquations}
- \lambda_K \left(\frac{\delta C^2}{\delta K}\right),
\label{eq:K_c2adjusted_BSSN}\\
\partial_t \widetilde{\gamma}_{i j}
&= \eqref{eq:gamma_standardBSSNEvolutionEquations}
- \lambda_{\widetilde{\gamma} i j m n}
\left(\frac{\delta C^2}{\delta \widetilde{\gamma}_{m n}}\right),
\label{eq:gamma_c2adjusted_BSSN}\\
\partial_t \widetilde{A}_{i j}
&= \eqref{eq:A_standardBSSNEvolutionEquations}
- \lambda_{\widetilde{A} i j m n}
\left(\frac{\delta C^2}{\delta \widetilde{A}_{m n}}\right),
\label{eq:A_c2adjusted_BSSN}\\
\partial_t \widetilde{\Gamma}^i
&= \eqref{eq:CGamma_standardBSSNEvolutionEquations}
- \lambda_{\widetilde{\Gamma}}^{i j}
\left(\frac{\delta C^2}{\delta \widetilde{\Gamma}^j}\right),
\label{eq:CGamma_c2adjusted_BSSN}
\end{align}
where all the coefficients $\lambda_\varphi$, $\lambda_K$,
$\lambda_{\widetilde{\gamma}}{}_{i j m n}$,
$\lambda_{\widetilde{A}}{}_{i j m n}$, and
$\lambda_{\widetilde{\Gamma}}^{i j}$ are positive definite.
$C^2$ is a function of the constraints $\mathcal{H}$, $\mathcal{M}_i$,
$\mathcal{G}^i$, $\mathcal{A}$, and $\mathcal{S}$, which we set as
\begin{align}
C^2
&= \int\Large(\mathcal{H}^2 + \gamma^{i j}\mathcal{M}_i \mathcal{M}_j
+ c_G\gamma_{i j}\mathcal{G}^i\mathcal{G}^j \nonumber\\
&\qquad+ c_A\mathcal{A}^2 + c_S\mathcal{S}^2\Large) d^3x,
\label{eq:c2Definition_tmp}
\end{align}
where $c_G$, $c_A$, and $c_S$ are Boolean parameters (0 or 1).
These three parameters are introduced to examine the necessity of the
algebraic constraint terms in \eqref{eq:c2Definition_tmp}.
The adjusted terms in
\eqref{eq:phi_c2adjusted_BSSN}-\eqref{eq:CGamma_c2adjusted_BSSN} are then
written down explicitly, as shown in Appendix \ref{Appendix_adjustTermsC2}.
The constraint propagation equations of this system are also derived for
the Minkowskii background, as shown in Appendix
\ref{Appendix_CP_Minkowskii}.
Now we discuss the effect of the algebraic constraints.
From \eqref{eq:CPH_Minkowskii}-\eqref{eq:CPS_Minkowskii}, we see that the
constraints affect each other.
The constraint propagation equations of the algebraic constraints,
\eqref{eq:CPG_Minkowskii}-\eqref{eq:CPS_Minkowskii}, include
$c_G(\lambda_{\widetilde{\gamma}}\Delta\delta^a{}_b
-2\lambda_{\widetilde{\Gamma}}\delta^a{}_b)\mathcal{G}^b$,
$-6c_{A} \lambda_{\widetilde{A}}\mathcal{A}$, and
$-6c_S\lambda_{\widetilde{\gamma}}\mathcal{S}$, respectively.
These terms act to reduce the violation of each constraint if
$c_G$, $c_A$, and $c_S$ are nonzero.
Therefore, we adopt $c_G=c_A=c_S=1$ in \eqref{eq:c2Definition_tmp}:
\begin{align}
C^2 = \int \left(\mathcal{H}^2 + \gamma^{i j}\mathcal{M}_i \mathcal{M}_j
+ \gamma_{i j}\mathcal{G}^i\mathcal{G}^j + \mathcal{A}^2
+\mathcal{S}^2\right)d^3x.
\label{eq:C2definition}
\end{align}
This discussion considers only the inclusion of the diffusion terms.
In order to validate this choice, we perform numerical tests
in Sec.\ref{NumericalExamples}.
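The mechanism of the $C^2$-adjustment is transparent in a
one-dimensional toy model.
The following sketch is our own illustration (not the BSSN system
itself): for a single variable $u(x)$ with the toy constraint
$\mathcal{C}=\partial_x u$ on a periodic grid, one has
$\delta C^2/\delta u = -2\partial_x^2 u$, so the adjustment acts as a
diffusion term and the constraint norm decays monotonically for
positive $\lambda$.
\begin{verbatim}
import numpy as np

# Toy C^2 adjustment: u_t = -lam*(delta C^2/delta u) with C = u_x.
# Since C^2 = \int (u_x)^2 dx, the flow is u_t = 2*lam*u_xx (diffusion).
N, lam, dt = 100, 1.0e-3, 1.0e-3
dx = 1.0 / N
x = np.arange(N) * dx
u = 0.1 * np.sin(2.0 * np.pi * x)   # seed a constraint violation

def c2(u):
    ux = (np.roll(u, -1) - np.roll(u, 1)) / (2.0 * dx)
    return np.sum(ux**2) * dx

print(c2(u))                        # initial constraint norm
for _ in range(2000):
    uxx = (np.roll(u, -1) - 2.0*u + np.roll(u, 1)) / dx**2
    u = u + dt * 2.0 * lam * uxx    # the adjustment term alone
print(c2(u))                        # smaller: the violation is damped
\end{verbatim}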
\subsection{$\widetilde{A}$-adjusted BSSN System}
\label{adjustedSystem}
In \cite{YS02}, two of the authors reported some examples of adjusted
systems for the BSSN formulation.
The authors investigated the signs of the eigenvalues of the coefficient
matrix of the constraint propagation equations, and concluded that
three of the examples were the best candidates for the adjustment.
The actual numerical tests were performed later \cite{KS08} using the
gauge-wave, linear-wave, and polarized Gowdy wave testbeds.
The most robust system among the three examples for these three testbeds was
the $\widetilde{A}$-adjusted BSSN formulation, which replaces
\eqref{eq:A_standardBSSNEvolutionEquations} in the standard BSSN system with
\begin{align}
\partial_t\widetilde{A}_{i j}
&=\eqref{eq:A_standardBSSNEvolutionEquations} +
\kappa_A\alpha \widetilde{D}_{(i}\mathcal{M}_{j)},
\label{eq:A-equation}
\end{align}
where $\kappa_A$ is a constant.
If $\kappa_A$ is positive, the violations of the constraints are
expected to be damped in flat spacetime \cite{YS02}.
We also use the $\widetilde{A}$-adjusted BSSN system for comparison in the
following numerical tests.
\section{Numerical Examples}
\label{NumericalExamples}
\begin{table*}[t]
\caption{List of figures.\label{table:figures}}
\begin{tabular}{llll}
\hline
& & gauge-wave test & Gowdy wave test \\
& & \S \ref{gauge-wave_testbed} & \S \ref{GOWDY_TEST}\\
\hline
(A) & standard BSSN
\eqref{eq:phi_standardBSSNEvolutionEquations}-\eqref{eq:CGamma_standardBSSNEvolutionEquations}
& Fig.\ref{fig:ConstraintViolations_GaugeWave} norm each
& Fig.\ref{fig:ConstraintViolationsGowdy} norm each\\
& (constraint propagation, see App. \ref{Appendix_CP_noshit})
& Fig.\ref{fig:C2_GaugeWave} norm all
& Fig.\ref{fig:DampingViolations_Gowdy} norm all\\
\hline
(B) & $\widetilde{A}$-adjusted BSSN
& Fig.\ref{fig:C2_GaugeWave} norm all
& Fig.\ref{fig:DampingViolations_Gowdy} norm all\\
& \eqref{eq:phi_standardBSSNEvolutionEquations}-\eqref{eq:gamma_standardBSSNEvolutionEquations},
\eqref{eq:CGamma_standardBSSNEvolutionEquations}, and \eqref{eq:A-equation}
& Fig.\ref{fig:ConstraintViolationsC2_GaugeWave}
norm each & \\
& (constraint propagation, see App. \ref{Appendix_CP_Minkowskii}) & &
\\ \hline
(C) & $C^2$-adjusted BSSN
\eqref{eq:phi_c2adjusted_BSSN}-\eqref{eq:CGamma_c2adjusted_BSSN}
& Fig.\ref{fig:C2_GaugeWave} norm all
& Fig.\ref{fig:DampingViolations_Gowdy} norm all \\
& (constraint propagation, see App. \ref{Appendix_CP_Minkowskii})
& Fig.\ref{fig:ConstraintViolationsC2_GaugeWave} norm each
& Fig.\ref{fig:C2Constraints} norm each\\
& & Fig.\ref{fig:DiffereneceGauge} adjusted ratio
& Fig.\ref{fig:MagnitudeTermsGowdy} adjusted ratio\\
& & Fig.\ref{fig:C2definition_GaugeWave} \eqref{eq:C2definition} test
& Fig.\ref{fig:C2Parameter_GowdyWave2} \eqref{eq:C2definition} test\\
\hline
\end{tabular}
\end{table*}
We test the three systems ($C^2$-adjusted BSSN, $\widetilde{A}$-adjusted
BSSN, and standard BSSN) in numerical evolutions using the gauge-wave
and polarized Gowdy wave spacetimes, which are the standard tests for
comparisons of formulations in numerical relativity, and are known as
apples-with-apples testbeds \cite{Alcubierre_etc04}.
We also performed the linear-wave test, but the violations of the
constraints were negligible; thus, we employ only the above two testbeds
in this article.
These tests have been used by several groups and were reported in the same
manner (e.g., \cite{Zumbusch09, BB10, KS08, PHK08}).
For simplicity, we set the coefficient parameters in
\eqref{eq:gamma_c2adjusted_BSSN}-\eqref{eq:CGamma_c2adjusted_BSSN} to
$\lambda_{\widetilde{\gamma} i j m n} = \lambda_{\widetilde{\gamma}}
\delta_{i m}\delta_{j n}$, $\lambda_{\widetilde{A} i j m n} =
\lambda_{\widetilde{A}} \delta_{i m}\delta_{j n}$, and
$\lambda_{\widetilde{\Gamma}}^{i j} = \lambda_{\widetilde{\Gamma}}
\delta^{ij}$ with non-negative coefficient constant parameters
$\lambda_{\widetilde{\gamma}}$, $\lambda_{\widetilde{A}}$, and
$\lambda_{\widetilde{\Gamma}}$.
Our code passes the convergence test with second-order accuracy.
We list the figures in this article in Table \ref{table:figures} for
the reader's convenience.
\subsection{Gauge-wave Testbed}
\label{gauge-wave_testbed}
\subsubsection{Metric and Parameters}
\label{metric_paramaters_gauge}
The metric of the gauge-wave test is
\begin{align}
ds^2 = -H dt^2 + H dx^2 + dy^2 + dz^2,
\end{align}
where
\begin{align}
H=1-A\sin(2\pi (x-t)/d),
\end{align}
which describes a sinusoidal gauge wave of amplitude $A$ propagating along
the $x$-axis.
The nontrivial extrinsic curvature is
\begin{align}
K_{xx} = -\frac{\pi A}{d}\frac{\cos(\frac{2\pi (x-t)}{d})}
{\sqrt{1-A\sin\frac{2\pi(x-t)}{d}}}.
\end{align}
Following \cite{Alcubierre_etc04}, we chose the numerical domain and
parameters as follows:
\begin{itemize}
\item Gauge-wave parameters: $d=1$ and $A=10^{-2}$.
\item Simulation domain: $x\in [-0.5, 0.5]$, $y=z=0$.
\item Grid: $x^n = -0.5+(n-1/2)dx$ with $n=1,\cdots, 100$, where $dx=1/100$.
\item Time step: $dt = 0.25 dx$.
\item Boundary conditions: Periodic boundary condition in $x$-direction
and planar symmetry in $y$- and $z$-directions.
\item Gauge conditions:
\begin{align}
\partial_t \alpha = -\alpha^2 K, \quad \beta^i = 0.
\end{align}
\item Scheme: second-order iterative Crank-Nicolson.
\end{itemize}
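As a cross-check of this setup, the following minimal sketch (our own
illustration, not the production code) builds the gauge-wave data on the
grid above and verifies the quoted $K_{xx}$ against
$K_{ij}=-\partial_t\gamma_{ij}/(2\alpha)$ with $\alpha=\sqrt{H}$ and
vanishing shift.
\begin{verbatim}
import numpy as np

# Gauge-wave data on the grid of this subsection (illustration only).
d, A, t = 1.0, 1.0e-2, 0.0
n = np.arange(1, 101)
x = -0.5 + (n - 0.5) / 100.0        # x^n = -0.5 + (n - 1/2) dx

H = 1.0 - A * np.sin(2.0*np.pi*(x - t)/d)
Kxx = -(np.pi*A/d) * np.cos(2.0*np.pi*(x - t)/d) / np.sqrt(H)

# K_ij = -(1/(2 alpha)) dt(gamma_ij), alpha = sqrt(H), beta^i = 0:
eps = 1.0e-6
Hp = 1.0 - A * np.sin(2.0*np.pi*(x - (t + eps))/d)
Hm = 1.0 - A * np.sin(2.0*np.pi*(x - (t - eps))/d)
Kxx_fd = -(Hp - Hm) / (2.0*eps) / (2.0*np.sqrt(H))
assert np.allclose(Kxx, Kxx_fd, atol=1e-8)
\end{verbatim}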
\subsubsection{Constraint Violations and Their Dampings}
\label{ConstraintviolationsGauge}
\begin{figure}[t]
\begin{center}
\includegraphics[keepaspectratio=true,width=85mm]{./fig1.eps}
\end{center}
\caption{
L2 norm of each constraint violation in the gauge-wave evolution using
the standard BSSN formulation.
The vertical axis is the logarithm of the L2 norm of the constraints
and the horizontal axis is time.
We see that the evolution stops at $t=110$ due to the growth of the
$\mathcal{M}$-constraint violation.
\label{fig:ConstraintViolations_GaugeWave}}
\end{figure}
Figure \ref{fig:ConstraintViolations_GaugeWave} shows the violations of
five constraint equations $\mathcal{H}$, $\mathcal{M}_i$, $\mathcal{G}^i$,
$\mathcal{A}$, and $\mathcal{S}$ for the gauge-wave evolution using the
standard BSSN formulation.
The violation of the $\mathcal{M}$-constraint, line (A-2), is the largest
during the evolution, while the violations of both the
$\mathcal{A}$-constraint and $\mathcal{S}$-constraint are negligible.
This is the starting point for improving the BSSN formulation.
\begin{figure}[t]
\begin{center}
\includegraphics[keepaspectratio=true,width=85mm]{./fig2.eps}
\end{center}
\caption{
L2 norm of all the constraints in gauge-wave evolution comparing
three BSSN formulations:
(A) standard BSSN formulation (solid line),
(B) $\widetilde{A}$-adjusted BSSN formulation (dotted line), and
(C) $C^2$-adjusted BSSN formulation (dot-dashed line).
The adopted parameters are $\kappa_A=10^{-1.6}$ for (B), and
$\lambda_\varphi = 10^{-8.5}$, $\lambda_{K} = 10^{-8.4}$,
$\lambda_{\widetilde{\gamma}} = 10^{-7.3}$, $\lambda_{\widetilde{A}} =
10^{-2.5}$, and $\lambda_{\widetilde{\Gamma}} = 10^{-1.8}$ for (C) to
minimize $C^2$ at $t=1000$.
The constraint violations of the $\widetilde{A}$-adjusted BSSN
formulation, (B), increase with time and the simulation stops before
$t=1300$, while those of the $C^2$-adjusted BSSN
formulation, (C), remain at $O(10^{-1})$ until $t=1300$ and the
simulation stops at $t=1350$.
\label{fig:C2_GaugeWave}}
\end{figure}
By applying the adjustment procedure, the lifetime of the standard BSSN
evolution is increased at least tenfold.
In Fig.\ref{fig:C2_GaugeWave}, we plot the L2 norm of the constraints,
\eqref{eq:C2definition}, of three BSSN evolutions: (A) the standard BSSN
formulation \eqref{eq:phi_standardBSSNEvolutionEquations}-\eqref{eq:CGamma_standardBSSNEvolutionEquations},
(B) the $\widetilde{A}$-adjusted BSSN formulation
\eqref{eq:phi_standardBSSNEvolutionEquations}-\eqref{eq:gamma_standardBSSNEvolutionEquations},
\eqref{eq:CGamma_standardBSSNEvolutionEquations}, and \eqref{eq:A-equation},
and (C) the $C^2$-adjusted BSSN formulation
\eqref{eq:phi_c2adjusted_BSSN}-\eqref{eq:CGamma_c2adjusted_BSSN}.
For the standard BSSN case, we see that the constraint violation
increases monotonically in the early stage, while the other two adjusted
cases keep it smaller.
We can say that the $C^2$-adjusted formulation is the most robust of the
three against the violation of constraints.
\begin{figure*}[t]
\begin{center}
\includegraphics[keepaspectratio=true,width=150mm]{./fig3.eps}
\end{center}
\caption{
L2 norm of each constraint in the gauge-wave evolution using the
$\widetilde{A}$-adjusted BSSN formulation [panel (a)] and $C^2$-adjusted
BSSN formulation [panel (b)].
The parameters $\kappa_{A}$, $\lambda_\varphi$, $\lambda_{K}$,
$\lambda_{\widetilde{\gamma}}$, $\lambda_{\widetilde{A}}$, and
$\lambda_{\widetilde{\Gamma}}$ are the same as those in
Fig.\ref{fig:C2_GaugeWave}.
In both panels, we see that the violations of the
$\mathcal{H}$-constraint [the lines (B-1) and (C-1)], the
$\mathcal{M}$-constraint [(B-2) and (C-2)], and the
$\mathcal{G}$-constraint [(B-3) and (C-3)] are less than those for the
standard BSSN formulation in
Fig.\ref{fig:ConstraintViolations_GaugeWave}.
However, the violations of the $\mathcal{A}$-constraint
[(B-4) and (C-4)] and the $\mathcal{S}$-constraint [(B-5) and (C-5)]
are larger.
Line (B-5) overlaps with line (B) in Fig.\ref{fig:C2_GaugeWave} after
$t=100$, and line (C-5) overlaps with line (C) in
Fig.\ref{fig:C2_GaugeWave} after $t=500$.
\label{fig:ConstraintViolationsC2_GaugeWave}}
\end{figure*}
We plot the norm of each constraint equation in
Fig.\ref{fig:ConstraintViolationsC2_GaugeWave}.
First, we see that the violations of the $\mathcal{M}$-constraint for the
two adjusted BSSN formulations [lines (B-2) and (C-2) in
Fig.\ref{fig:ConstraintViolationsC2_GaugeWave}] are less than that of the
standard BSSN formulation in Fig.\ref{fig:ConstraintViolations_GaugeWave}.
This behavior can be explained by the constraint propagation equations,
where we see the terms $\lambda_{\widetilde{A}}\Delta \mathcal{M}_a$ and
$(1/2)\kappa_A \Delta \mathcal{M}_i$ in \eqref{eq:CPM_Minkowskii} and
\eqref{eq:CPM_Minkowskii_Aadust}, respectively.
These terms act to reduce the violations of the
$\mathcal{M}$-constraint.
This is the main consequence of the two adjusted BSSN formulations.
Second, we also find that the violations of the $\mathcal{A}$-constraint
and $\mathcal{S}$-constraint are larger than those in
Fig.\ref{fig:ConstraintViolations_GaugeWave}.
From the constraint propagation equations \eqref{eq:CPA_Minkowskii} and
\eqref{eq:CPA}, the violation of the $\mathcal{A}$-constraint is triggered
by the $\mathcal{M}$- and $\mathcal{A}$-constraints.
The increase in the violations of the $\mathcal{A}$-constraint is caused by
the term $2\lambda_{\widetilde{A}}\delta^{i j}(\partial_i \mathcal{M}_j)$.
Similarly, in \eqref{eq:CPS_Minkowskii} and \eqref{eq:CPS}, the violation of
the $\mathcal{S}$-constraint is triggered only by the
$\mathcal{A}$-constraint since the magnitude of
$\lambda_{\widetilde{\gamma}}$ is negligible.
Therefore, the increase in the violation of the $\mathcal{S}$-constraint is
due to the violation of the $\mathcal{A}$-constraint.
\begin{figure}[t]
\begin{center}
\includegraphics[keepaspectratio=true,width=85mm]{./fig4.eps}
\end{center}
\caption{
L2 norm of the ratio (adjusted terms)/(original terms) of each evolution
equation of the $C^2$-adjusted BSSN formulation,
\eqref{eq:phi_c2adjusted_BSSN}-\eqref{eq:CGamma_c2adjusted_BSSN}, in
the gauge-wave test.
We see that the largest ratio is that for the evolution equation of
$\widetilde{A}_{i j}$.
The corrections to the $\varphi$, $K$, and $\widetilde{\gamma}_{i j}$
evolution equations are reasonably small.
\label{fig:DiffereneceGauge}}
\end{figure}
From \eqref{eq:appendix_BSSN_DeltaPhi} and
\eqref{eq:appendix_BSSN_DeltaGamma}, it can be seen that the adjusted terms
of the evolution equations of $\varphi$ and $\widetilde{\gamma}_{i j}$
include second-order derivative terms of the $\mathcal{H}$-constraint.
This means that these evolution equations include fourth-order derivative
terms of the dynamical variables.
In order to investigate the magnitudes of the adjusted terms, we show in
Fig.\ref{fig:DiffereneceGauge} the ratio of the adjusted terms to the
original terms in each evolution equation.
We see that the magnitudes of the adjusted terms of $\varphi$ and
$\widetilde{\gamma}_{i j}$ are reasonably small.
In the simulations with the $C^2$-adjusted BSSN formulation, the largest
violation is that of the $\mathcal{S}$-constraint.
The $\mathcal{S}$-constraint depends only on the dynamical variables
$\widetilde{\gamma}_{i j}$, so the only parameter available for
controlling the $\mathcal{S}$-constraint is $\lambda_{\widetilde{\gamma}}$,
as can be seen from \eqref{eq:CPS_Minkowskii}.
However, we must set $\lambda_{\widetilde{\gamma}}$ to a value as small as
possible since the adjusted term of $\widetilde{\gamma}_{i j}$ includes
higher derivatives of $\widetilde{\gamma}_{i j}$.
Therefore, it is hard to control the $\mathcal{S}$-constraint, and we
have not yet found an appropriate set of parameters.
This remains an open problem for the $C^2$-adjusted BSSN system.
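One rough way to see why $\lambda_{\widetilde{\gamma}}$ must stay small
is a toy stability estimate (ours, not a full analysis of the BSSN
system, and only indicative since the runs use an iterative
Crank-Nicolson scheme): the fourth-derivative part of the adjusted
$\widetilde{\gamma}_{i j}$ equation behaves like
$u_t=-\lambda\,\partial_x^4 u$, for which an explicit scheme requires
$\lambda\lesssim dx^4/(8\,dt)$.
\begin{verbatim}
# Toy von Neumann bound for u_t = -lam*u_xxxx with central differences:
# the discrete biharmonic operator has maximum eigenvalue 16/dx^4, so
# explicit updates need dt <= dx^4/(8*lam), i.e. lam <= dx^4/(8*dt).
dx, dt = 1.0/100, 0.25/100          # grid used in this test
print(dx**4 / (8.0*dt))             # ~5e-7; cf. lambda_gamma = 10^{-7.3}
\end{verbatim}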
We also investigated the sensitivity of the $C^2$-adjusted BSSN
evolutions to the parameters.
We compared evolutions in which only one of the parameters,
$(\lambda_{\varphi}, \lambda_K, \lambda_{\widetilde{\gamma}},
\lambda_{\widetilde{A}}, \lambda_{\widetilde{\Gamma}})$, was set nonzero.
Since the key to damping the constraint violations is the
$\mathcal{M}$-constraint, and $(\lambda_K, \lambda_{\widetilde{A}})$
control its violation directly through \eqref{eq:CPM_Minkowskii},
we mention here only the dependence on
$\lambda_K$ and $\lambda_{\widetilde{A}}$.
We found that the constraint-damping behavior depends sensitively on both
$\lambda_K$ and $\lambda_{\widetilde{A}}$; of the two,
$\lambda_{\widetilde{A}}$ is the more important for controlling the
$\mathcal{M}$-constraint violation.
The best-controlled evolution was obtained with
$\lambda_{\widetilde{A}}=10^{-3}$, rather than $10^{-2}$ or $10^{-4}$.
\subsubsection{Contribution of Algebraic Constraints \\in Definition of
$C^2$}
\label{C2DefinitionGauge}
\begin{figure}[t]
\begin{center}
\includegraphics[keepaspectratio=true,width=85mm]{./fig5.eps}
\end{center}
\caption{
Effect of the definition of $C^2$, \eqref{eq:C2definition}, on the
damping of each constraint violation, with $c_G=c_A=c_S=0$.
The parameters $\lambda_{\varphi}$, $\lambda_{K}$,
$\lambda_{\widetilde{\gamma}}$, $\lambda_{\widetilde{A}}$, and
$\lambda_{\widetilde{\Gamma}}$ are the same as those in
Fig.\ref{fig:C2_GaugeWave}.
The simulation stops due to a sudden increase in the constraint
violations at $t=800$.
\label{fig:C2definition_GaugeWave}}
\end{figure}
In Sec.\ref{c2adjustedBSSN}, we defined $C^2$, \eqref{eq:C2definition},
to include the algebraic constraints.
We check the validity of this choice by turning off the algebraic
constraints in \eqref{eq:C2definition}.
The result is shown in Fig.\ref{fig:C2definition_GaugeWave}, where we see
the simulation stops at $t=800$ due to a sudden increase in the
violation of the constraints.
This confirms that the algebraic constraints play an important role in
damping the constraint violations.
We also tested with other combinations of Boolean parameters $(c_G, c_A,
c_S)$, and confirmed that the best controlled evolution is realized when
$c_G=c_A=c_S=1$.
\subsection{Gowdy-wave Testbed}
\label{GOWDY_TEST}
\subsubsection{Metric and Parameters}
\label{numericalImplimentationGowdy}
The metric of the polarized Gowdy wave is given by
\begin{align}
d s^2=t^{-1/2} e ^{\lambda/2}(-d t^2+d x^2)+t( e ^P d y^2+ e ^{-P}dz^2),
\end{align}
where $P$ and $\lambda$ are functions of $x$ and $t$.
The forward direction of the time coordinate $t$ corresponds to the
expanding universe, and $t=0$ corresponds to the cosmological singularity.
For simple forms of the solutions, $P$ and $\lambda$ are given by
\begin{align}
P &= J_0(2\pi t)\cos(2\pi x),\\
\lambda
&= -2\pi t J_0(2\pi t)J_1(2\pi t)\cos^2(2\pi x)+2\pi^2t^2[J_0^2(2\pi t)
\nonumber\\
&\quad
+J_1^2(2\pi t)]
-(1/2)\{
(2\pi)^2[J_0^2(2\pi)+J_1^2(2\pi)]
\nonumber\\
&\quad
-2\pi J_0(2\pi)J_1(2\pi)
\},
\end{align}
where $J_n$ is the Bessel function of the first kind.
Following \cite{Alcubierre_etc04}, a new time coordinate $\tau$, which
satisfies harmonic slicing, is obtained by the coordinate transformation
\begin{align}
t(\tau) = k e ^{c\tau},
\end{align}
where $k$ and $c$ are arbitrary constants.
We also follow \cite{Alcubierre_etc04} by setting $k$, $c$, and the initial
time $t_0 $ as
\begin{align}
k &\sim 9.67076981276405,\quad
c \sim 0.002119511921460,\\
t_0 &=9.87532058290982,
\end{align}
so that the lapse function in the new time coordinate is unity and $t=\tau$
at the initial time.
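For reference, these closed forms are easy to evaluate numerically.
The following minimal sketch (our own illustration; the function names
are ours) computes $P$ and $\lambda$ with SciPy's Bessel functions and
checks that the quoted $(k, c, t_0)$ indeed give $t(\tau_0)=t_0$.
\begin{verbatim}
import numpy as np
from scipy.special import j0, j1

def P(t, x):
    return j0(2*np.pi*t) * np.cos(2*np.pi*x)

def lam(t, x):
    a = 2*np.pi*t
    return (-a*j0(a)*j1(a)*np.cos(2*np.pi*x)**2
            + 2*np.pi**2*t**2*(j0(a)**2 + j1(a)**2)
            - 0.5*((2*np.pi)**2*(j0(2*np.pi)**2 + j1(2*np.pi)**2)
                   - 2*np.pi*j0(2*np.pi)*j1(2*np.pi)))

k, c, t0 = 9.67076981276405, 0.002119511921460, 9.87532058290982
print(P(t0, 0.25), lam(t0, 0.25))   # sample values on the grid
print(k * np.exp(c * t0))           # ~9.8753..., i.e. t(tau_0=t_0)=t_0
\end{verbatim}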
We also use the following parameters specified in \cite{Alcubierre_etc04}.
\begin{itemize}
\item Simulation domain: $x\in[-0.5,0.5], y=z=0$.
\item Grid: $x_n = -0.5+(n-(1/2))dx$, $n=1,\cdots,100$, where $d x=1/100$.
\item Time step: $d t=0.25d x$.
\item Boundary conditions: Periodic boundary condition in $x$-direction
and planar symmetry in $y$- and $z$-directions.
\item Gauge conditions: $\partial_t\alpha = -\alpha^2 K$, $\beta^i=0$.
\item Scheme: second-order iterative Crank-Nicolson.
\end{itemize}
\subsubsection{Constraint Violations and Their Dampings}
\label{ConstraintviolationsGowdy}
\begin{figure}[t]
\begin{center}
\includegraphics[keepaspectratio=true,width=85mm]{./fig6.eps}
\end{center}
\caption{
L2 norm of each constraint equation in the polarized Gowdy wave
evolution using the standard BSSN formulation.
The vertical axis is the logarithm of the L2 norm of the constraint and
the horizontal axis is backward time.
\label{fig:ConstraintViolationsGowdy}}
\end{figure}
We begin with the case of the standard BSSN formulation,
\eqref{eq:phi_standardBSSNEvolutionEquations}-\eqref{eq:CGamma_standardBSSNEvolutionEquations}.
Figure \ref{fig:ConstraintViolationsGowdy} shows the L2 norm of the
violations of the constraints as a function of backward time $(-t)$.
We see that the violation of the $\mathcal{M}$-constraint is the largest at
all times and that all the violations of constraints increase monotonically
with time.
[Comparing with the result in \cite{KS08}, our code shows that the
$\mathcal{H}$-constraint (A-1) remains at the same level but the
$\mathcal{M}$-constraint (A-2) is smaller.]
\begin{figure}[t]
\begin{center}
\includegraphics[keepaspectratio=true,width=85mm]{./fig7.eps}
\end{center}
\caption{
L2 norm of the constraints, $C^2$, of the polarized Gowdy wave tests for
the standard BSSN and two adjusted formulations.
The vertical axis is the logarithm of the L2 norm of $C^2$ and the
horizontal axis is backward time.
The solid line (A) is the standard BSSN formulation, the dotted line (B)
is the $\widetilde{A}$-adjusted BSSN formulation with $\kappa_A =
-10^{-0.2}$, and the dot-dashed line (C) is the $C^2$-adjusted BSSN
formulation with $\lambda_{\varphi} = -10^{-10}$, $\lambda_{K} =
-10^{-4.6}$, $\lambda_{\widetilde{\gamma}} = -10^{-11}$,
$\lambda_{\widetilde{A}} = -10^{-1.2}$, and $\lambda_{\widetilde{\Gamma}} =
-10^{-14.3}$.
Note that the signs of $\kappa_A$ and the $\lambda$s are negative since
the simulations evolve backward in time.
We see that lines (A) and (C) are identical until $t=-200$.
Line (C) then decreases and maintains its magnitude under $O(10^{-2})$
after $t=-400$.
We confirm this behavior until $t=-1500$.
\label{fig:DampingViolations_Gowdy}}
\end{figure}
Similar to the gauge-wave test, we compare the violations of $C^2$ for
the three types of BSSN evolutions in Fig.\ref{fig:DampingViolations_Gowdy}.
In the case of the $\widetilde{A}$-adjusted BSSN formulation, the violation
of the constraints increases if we set $|\kappa_A|$ larger than $10^{-0.2}$.
In the case of the $C^2$-adjusted BSSN formulation, it increases if we set
$|\lambda_{\widetilde{A}}|$ larger than $10^{-1.2}$.
Note that the signs of the above $\kappa_A$ and $\lambda$s are
negative, contrary to the predictions in \cite{YS02} and
Sec.\ref{ApplicationEinsteinEq}, respectively.
This is because these simulations are performed in backward time.
\begin{figure}[t]
\begin{center}
\includegraphics[keepaspectratio=true,width=85mm]{./fig8.eps}
\end{center}
\caption{
The same as Fig.\ref{fig:ConstraintViolationsGowdy} but for the
$C^2$-adjusted BSSN formulation.
The parameters, ($\lambda_{\varphi}$, $\lambda_{K}$,
$\lambda_{\widetilde{\gamma}}$, $\lambda_{\widetilde{A}}$,
$\lambda_{\widetilde{\Gamma}}$), are the same as those for (C) in
Fig.\ref{fig:DampingViolations_Gowdy}.
We see that the violation of the $\mathcal{M}$-constraint decreases and
becomes the lowest after $t=-700$.
\label{fig:C2Constraints}}
\end{figure}
As shown in Fig.\ref{fig:DampingViolations_Gowdy}, the violations of $C^2$
for the standard BSSN formulation and the $\widetilde{A}$-adjusted BSSN
formulation increase monotonically with time, while that for the
$C^2$-adjusted BSSN formulation decreases after $t=-200$.
To investigate the reason for this rapid decay after $t=-200$, we plot each
constraint violation in Fig.\ref{fig:C2Constraints}.
We see that the violations of the $\mathcal{A}$-constraint and
$\mathcal{S}$-constraint increase with backward time, in contrast to the
standard BSSN formulation, and those of the $\mathcal{M}$-constraint and
$\mathcal{G}$-constraint decrease after $t=-200$.
The propagation equation of the $\mathcal{M}$-constraint,
\eqref{eq:CPM_Minkowskii}, includes the term $-2c_A\lambda_{\widetilde{A}}
\partial_a \mathcal{A}$, which contributes to constraint damping.
Similarly, the propagation equation of the $\mathcal{G}$-constraint,
\eqref{eq:CPG_Minkowskii}, includes $\delta^{a b}\{(1/2)
\lambda_{\widetilde{\gamma}}\partial_b\Delta + 2\lambda_{\widetilde{\Gamma}}
\partial_b\}\mathcal{H} -c_S\lambda_{\widetilde{\gamma}}\delta^{a b}
\partial_b \mathcal{S}$; the decay of the violations of the
$\mathcal{G}$-constraint is caused by these terms.
Therefore, these terms are considered to become significant at
approximately $t=-200$, when the violations of the $\mathcal{A}$-,
$\mathcal{H}$-, and $\mathcal{S}$-constraints reach a certain order of
magnitude.
\begin{figure}[t]
\begin{center}
\includegraphics[keepaspectratio=true,width=85mm]{./fig9.eps}
\end{center}
\caption{
L2 norm of the ratio (adjusted terms)/(original terms) of each evolution
equation for the $C^2$-adjusted BSSN formulation,
\eqref{eq:phi_c2adjusted_BSSN}-\eqref{eq:CGamma_c2adjusted_BSSN}.
We see that the largest ratio is that for the evolution of
$\widetilde{A}_{i j}$.
The corrections to the $\widetilde{\gamma}_{i j}$ and
$\widetilde{\Gamma}^i$ evolution equations are reasonably small.
\label{fig:MagnitudeTermsGowdy}}
\end{figure}
As for the gauge-wave testbed (Fig.\ref{fig:DiffereneceGauge}), we show
in Fig.\ref{fig:MagnitudeTermsGowdy} the magnitudes of the ratio of the
adjusted terms to the original terms.
Since the magnitudes of the adjusted terms in the $\varphi$ and
$\widetilde{\gamma}_{i j}$ equations are negligible, the
higher-derivative corrections they introduce can be disregarded.
Therefore, the $C^2$-adjusted BSSN evolution in the Gowdy wave spacetime
can be regarded as maintaining its original hyperbolicity.
We repeated the parameter-dependency survey of $(\lambda_{\varphi},
\lambda_{K}, \lambda_{\widetilde{\gamma}},\lambda_{\widetilde{A}},
\lambda_{\widetilde{\Gamma}})$ for this spacetime evolution.
Similar to Sec.\ref{ConstraintviolationsGauge}, we found that the
constraint-damping behavior is sensitive to both $\lambda_K$ and
$\lambda_{\widetilde{A}}$, of which $\lambda_{\widetilde{A}}$ works more
effectively than $\lambda_K$.
The best-controlled evolution was obtained with
$\lambda_{\widetilde{A}}=10^{-1}$, rather than $10^{0}$ or $10^{-2}$.
\subsubsection{Contribution of Algebraic Constraints \\in Definition of
$C^2$}
\label{C2DefinitionGowdy}
\begin{figure}[t]
\begin{center}
\includegraphics[keepaspectratio=true,width=85mm]{./fig10.eps}
\end{center}
\caption{
Violations of each constraint with $c_G=c_A=c_S=0$ in the definition
of $C^2$.
The coefficient parameters, $\lambda_{\varphi}$, $\lambda_{K}$,
$\lambda_{\widetilde{\gamma}}$, $\lambda_{\widetilde{A}}$ and
$\lambda_{\widetilde{\Gamma}}$, are all the same as those for (C) in
Fig.\ref{fig:DampingViolations_Gowdy}.
In comparison with Fig.\ref{fig:C2Constraints}, all the violations of
the constraints are larger.
\label{fig:C2Parameter_GowdyWave2}}
\end{figure}
In Sec.\ref{c2adjustedBSSN}, we investigated the effect of the definition of
$C^2$.
Similar to the gauge-wave tests in the previous subsection, we show
the effect of constraint damping caused by the algebraic constraints.
In Fig.\ref{fig:C2Parameter_GowdyWave2}, we plot the violations of all
the constraints with $c_G=c_A=c_S=0$.
We see that all the violations of the constraints are larger than those in
Fig.\ref{fig:C2Constraints}.
This result is consistent with the discussion in Sec.\ref{c2adjustedBSSN}.
\section{Summary and Discussion}
\label{Summary}
To obtain an evolution system robust against the violation of constraints,
we derived a new set of adjusted BSSN equations applying the idea proposed
by Fiske \cite{Fiske04}, which we call a ``$C^2$-adjusted system.''
That is, we added the functional derivatives of the norm of the constraints,
$C^2$, to the evolution equations
[\eqref{eq:phi_c2adjusted_BSSN}-\eqref{eq:CGamma_c2adjusted_BSSN}].
We performed numerical tests in the gauge-wave and Gowdy wave spacetimes
and confirmed that the violations of the constraints decrease as
expected, and that longer and more accurate simulations than with the
standard BSSN evolution are possible.
The construction of the $C^2$-adjusted system is straightforward.
However, in BSSN, besides the Hamiltonian and momentum constraints there
are three additional algebraic constraints compared to the ADM system;
thus, the definition of $C^2$ is a matter of concern.
By analyzing constraint propagation equations, we concluded that $C^2$
should include all the constraints.
This was also confirmed by numerical tests.
The importance of such algebraic constraints suggests a similar
treatment when this idea is applied to other formulations of the
Einstein equations.
To evaluate the reduction of the violations of the constraints, we also
compared evolutions with the
$\widetilde{A}$-adjusted BSSN formulation proposed in \cite{YS02}.
We concluded that the $C^2$-adjusted BSSN formulation exhibits superior
constraint damping to both the standard and $\widetilde{A}$-adjusted BSSN
formulations.
In particular, the lifetimes of the simulations with the $C^2$-adjusted
BSSN formulation in the gauge-wave and Gowdy wave testbeds are roughly
ten times and twice as long as those with the standard BSSN formulation,
respectively.
So far, many attempts to improve the BSSN formulation have been reported
(e.g., \cite{YS02, BH10}).
Recently, for example, a conformal-traceless Z4 formulation was proposed
with its test demonstrations \cite{ABBRP11}.
Among them, Fig.1 of \cite{ABBRP11} can be compared with our
Fig.\ref{fig:ConstraintViolationsC2_GaugeWave} [(B-1) and (C-1)], since
it shows the same gauge-wave test.
The violation of the $\mathcal{H}$-constraint in the $C^2$-adjusted
evolution looks smaller than that of the new Z4 evolution, but regarding
the blow-up time of the simulations, the new Z4 system has the advantage.
Fiske reported the applications of the idea of $C^2$-adjustment to
``linearized'' ADM and BSSN formulations in his dissertation
\cite{Fiske_Phd}.
(As he mentioned, his BSSN is not derived from the standard BSSN
equations but from a linearized ADM using a new variable, $\Gamma$.
His set of BSSN equations also does not include the $\mathcal{A}$- and
$\mathcal{S}$-constraints in our notation.)
He observed damping of the constraint violations by five orders of
magnitude, and equivalent solution errors, in his numerical evolution
tests.
Our studies show that the full BSSN set of
equations with fully adjusted terms also produces the desired
constraint-damping results (Fig.\ref{fig:C2_GaugeWave} and
Fig.\ref{fig:DampingViolations_Gowdy}),
although the apparent improvements span fewer orders of magnitude.
When we applied this idea to the ADM system \cite{TYS11}, we found that
the adjustment to the $K_{ij}$ evolution equation is essential.
In the present study, we found that the adjustment to the
$\widetilde{A}_{ij}$-evolution equation is essential for controlling the
constraints.
In both cases, the associated adjustment parameters
(Lagrange multipliers), $\lambda_{\widetilde{A}}$ in this study, are
sensitive and require fine-tuning.
In the future, an automatic control system that monitors the order of
the constraint violations and keeps them in check by tuning the
parameters automatically would be helpful.
Applications of control theory in this direction are being investigated.
The correction terms of the $C^2$-adjusted system include higher-order
derivatives and are not quasi-linear; thus, little is known mathematically
about such systems.
These additional terms might act effectively as artificial viscosity
terms do in fluid simulations, but might also enhance the growth of
errors.
To investigate this direction further, the next step is to apply the idea to
a system in which constraints do not include second-order derivatives of
dynamical variables. We are working on the Kidder-Scheel-Teukolsky
formulation \cite{KST01} as an example of such a system, which we will
report in the near future.
\begin{acknowledgments}
This work was partially supported by the Grant-in-Aid for Scientific
Research Fund of the Japan Society for the Promotion of Science,
No. 22540293 (HS).
Numerical computations were carried out on an Altix 3700 BX2 supercomputer
at YITP, Kyoto University, and on the RIKEN Integrated Cluster of Clusters
(RICC).
\end{acknowledgments}
An accurate and precise measurement of the Hubble constant at the few-percent level imposes significant constraints on the equation of state of dark energy and other cosmologically relevant parameters \citep{komatsu11}. The next generation of surveys aimed at improving our understanding of dark energy will benefit from an even tighter constraint on $H_0$ \citep{weinberg12} than the present bounds of 3.4\% \citep{riess11}.
Cosmological applications of the Extragalactic Distance Scale \citep{freedman10} primarily rely on the Period-Luminosity relation of Cepheid variables \citep[hereafter the ``Leavitt law'',][]{leavitt12} as the primary distance indicator. The upcoming {\it Gaia} mission \citep{prusti11} is expected to deliver a sub-percent calibration of the Leavitt law in the Milky Way \citep{windmark11}, which could in turn enable a 1\% measurement of $H_0$ if all sources of systematic error are properly accounted for.
One of these sources of systematic error occurs when two or more neighboring (but not necessarily physically associated) stars fall within the same resolution element of an instrument and cannot be fit with separate point-spread functions (PSFs). This effect is commonly referred to as {\it blending} and it is different from {\it crowding} or confusion noise, which results in improper PSF fitting and/or inaccurate background subtraction due to a very high stellar density. An extreme example of blending in the absence of crowding is a Cepheid in a binary system located in a low-surface brightness environment. Blending will bias the measured flux of a Cepheid towards larger values, shifting the Leavitt law to brighter magnitudes and leading to systematically shorter distances and larger values of $H_0$. Extreme blends can be readily identified by their effects on Cepheid colors and/or amplitude ratios and such tests are routinely carried out \citep{pellerin11,scowcroft09,macri06}. However, low-level blends are unlikely to be identified by such cuts and may affect studies of the metallicity dependence of the Leavitt law (another source of systematic uncertainty) since they could mimic the photometric changes expected from differences in chemical abundances.
The Local Group galaxy M33 is a good testbed for studies of Cepheid systematics thanks to its relative proximity \citep[$D=895-965$~kpc,][]{bonanos06,pellerin11}, moderate inclination angle \citep[$i=55^{\circ}$,][]{ho97} and recent episodes of star formation which have resulted in large numbers of Cepheids throughout its disk \citep{hartman06,pellerin11}. \citet{scowcroft09} used M33 Cepheids to study the ``metallicity effect'' of the Leavitt law, motivated by the large abundance gradient inferred from \ion{H}{2} regions \citep{zaritsky94,magrini07,magrini10}. However other studies \citep{urbaneja05,bresolin10,bresolin11} have determined a much shallower abundance gradient, which would make the metallicity effect considerably harder to measure.
The disk of M33 has been extensively imaged by the Hubble Space Telescope (HST) using the Wide-Field and Planetary Camera 2 (WFPC2) and the Advanced Camera for Surveys (ACS). The angular resolution of HST at optical wavelengths is 10-15 times better than the seeing at a good site on the surface of the Earth. Thus, a comparison of HST and ground-based images of the same Cepheids in M33 can yield useful insights into the nature of blending for more distant galaxies observed only with Hubble.
\begin{figure*}[t]
\centering
\includegraphics[angle=0,width=\textwidth]{fig01.eps}
\caption{Footprints of the HST fields used in this study overlaid on a DPOSS-II image of M33. The blue rectangles are ACS fields and the white boxes are WFPC2 fields. The field label names end in 'a' for ACS and 'w' for WFPC2.}
\label{fig:footprints}
\end{figure*}
\input{table1.tex}
Previous studies of the influence of blends on the Cepheid Distance Scale, based on comparisons between ground-based and HST images of nearby galaxies were carried out by \citet{mochejska00} in M31, by \citet{mochejska01} in M33, and by \citet{bresolin05} in NGC$\,$300. In the case of M33, \citet{mochejska01} used HST/WFPC2 images and the Cepheid sample of the DIRECT survey \citep{macri01}. During the intervening decade there have been numerous additional HST observations of M33 using both WFPC2 and the Advanced Camera for Surveys (ACS), which enable us to study more Cepheids than \citet{mochejska01} and, in the case of ACS, with greater depth and finer pixel scale. Furthermore, we rely on a new synoptic survey of M33 \citep{pellerin11} carried out at the WIYN 3.5-m telescope with more Cepheids and better angular resolution than the DIRECT catalog.
\citet{pellerin11} carried out extensive simulations based on the M33 ACS images to quantify the photometric bias due to crowding in their ground-based photometry. Considering the range of magnitudes and surface brightnesses spanned by the M33 Cepheid sample, they found that crowding bias increased as a function of magnitude but did not exhibit a dependence on surface brightness. Our paper complements their study by quantifying the photometric bias due to blending for Cepheids in M33.
We describe in \S\ref{sec:data} the data used in this paper and the photometry we measured; \S\ref{sec:method} describes the method used to quantify the level of blending; we discuss the results in \S\ref{sec:results} and compare them to previous work in \S\ref{sec:comp}. Our concluding remarks and suggestions for future work can be found in \S\ref{sec:conclusions}.
\section{Data and Analysis}\label{sec:data}
We based our analysis on the Cepheid sample published by the M33 Synoptic Stellar Survey \citep{pellerin11}. We identify these variables in HST images and search for companions unresolved in the ground-based data. We calculate blending statistics based on these companions.
\subsection{Cepheid Sample}
Our analysis is based on the sample of Cepheids listed in Table~3 of \citet{pellerin11}. The ground-based observations and analysis are described in detail in that publication, which we briefly summarize here. Data from the DIRECT survey of M33 \citep{macri01} were combined with new images obtained at the 3.5-m Wisconsin-Indiana-Yale-NOAO (WIYN) telescope with the Mini-Mosaic (MiniMo) camera to detect 563 Cepheids ranging in period from 2 to 110 days. The typical FWHM of the WIYN images was $0\farcs75$, sampled at a plate scale of $0\farcs28/$pix. The photometry and astrometry were calibrated using the catalogs of \citet{massey06}.
\subsection{HST Data}
We queried the Hubble Legacy Archive (HLA) and the Mikulski Archive for Space Telescopes (MAST)\footnote{The HLA and MAST are operated by the Space Telescope Science Institute (STScI).} for HST images of M33 obtained with either WFPC2 or ACS which had overlap with the WIYN images of \citet{pellerin11}. We selected observations with multiple exposures to allow for cosmic-ray removal. We also required a minimum of 100\,s of total exposure time, to ensure a depth that would enable the detection of faint companions around the Cepheids. We further restricted our study to fields that were imaged in $V$ (HST filters F555W or F606W) and $I$ (HST filter F814W).
The HST fields contained 149 ($\sim 25$\%) of the Cepheids listed in \citet{pellerin11}. The locations of these fields are shown in Figure \ref{fig:footprints} and listed in Table \ref{tb:fields}. The table also contains references to previously-published analyses of the data. Except for two ACS fields, all images were acquired on a single epoch and we therefore only have imaging of the Cepheids at a random phase within their pulsation cycle.
The ACS images were reprocessed through the MAST On-The-Fly-Recalibration pipeline to apply the most up-to-date calibrations, while the WFPC2 images had already been reprocessed using the final set of calibration frames in mid 2009 by STScI \citep{gonzaga10}. We downloaded the reprocessed images and used MultiDrizzle \citep{koekemoer02} to remove cosmic rays, correct for geometric distortions in the cameras, and co-add multiple observations into master images.
\subsection{Photometry and Cepheid Search \label{sec:photometry}}
We performed point-spread function (PSF) photometry using DAOPHOT and ALLSTAR \citep{stetson87}. We derived model PSFs using grids of artificial stars created with TinyTim \citep{krist04} for the appropriate bandpasses, cameras and CCDs. We ran the FIND algorithm twice on each image, removing all stars found on the first iteration before proceeding to the second one. This increased the detection efficiency of faint stars, such as possible companions of a Cepheid. ALLSTAR was run one final time on the merged star list. Based on the observed luminosity functions, the photometry is complete to $V\!\sim\!25.5, I\!\sim\!24.7$ and $V\!\sim\!24.3, I\!\sim\!23$~mag for ACS and WFPC2, respectively.
\vspace{2pt}
Instrumental magnitudes were converted to the HST VEGAMAG system using the equations listed in Appendix D of \citet{sirianni05} and the coefficients listed in Table 10 of \citet{sirianni05} and Table 2 of \citet{dolphin09} for ACS and WFPC2, respectively.
\vspace{2pt}
\begin{figure}[t]
\includegraphics[width=0.48\textwidth]{fig02.eps}
\caption{Comparison of HST and WIYN images of three Cepheids in M33, illustrating different blending values. Left column: WIYN $V$ images; center column: HST $V$ images; right column: HST $I$ images. Panels are $8\arcsec$ on a side and the black circles are $0.75\arcsec$ in diameter, equal to the WIYN FWHM.}
\label{fig:thumbnails}
\end{figure}
Given the vastly different resolution and depth of the HST and WIYN images, the former had significantly larger stellar densities. Furthermore, the astrometric solution provided by the automated STScI pipeline is only accurate to a few arcseconds \citep{koekemoer05}. We obtained a rough initial match between HST and WIYN images using the brightest few hundred stars in common. Once the gross astrometric offset had been removed, we matched the complete star lists using DAOMATCH and DAOMASTER \citep{stetson93} and refined the astrometric solution of the HST images. Cepheids were then selected based on the coordinates tabulated by \citet{pellerin11}. We visually inspected every Cepheid to ensure that the star in the HST frame was indeed a match to the same star in the WIYN image. This process helped to identify and correct a few erroneous matches where a faint star close to the Cepheid was originally identified as the variable in the HST frame. Lastly, we estimated the disk surface brightness by averaging the background flux values reported by ALLSTAR for stars within $7\arcsec$ of each Cepheid.
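Schematically, the refined matching step reduces to a nearest-neighbor search once both catalogs are on a common frame. The following is a minimal sketch of such a cross-match (our illustration only; the actual analysis used DAOMATCH and DAOMASTER):
\begin{verbatim}
import numpy as np
from scipy.spatial import cKDTree

def crossmatch(xy_hst, xy_wiyn, tol):
    """Indices of WIYN sources with an HST source within tol
    (coordinates and tol in the same units, e.g. arcsec)."""
    dist, idx = cKDTree(xy_hst).query(xy_wiyn, k=1)
    ok = dist <= tol
    return np.flatnonzero(ok), idx[ok]

# e.g.: wiyn_i, hst_i = crossmatch(xy_hst, xy_wiyn, tol=0.2)
\end{verbatim}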
\section{Blending calculation}\label{sec:method}
We quantify the level of blending following the prescription of \citet{mochejska00},
\vspace{3pt}
\begin{equation} \label{eq:blend}
S_{F}=\sum(f_i) / f_C
\end{equation}
\vspace{3pt}
\noindent{where $S_F$ is the total flux contribution from the companions relative to the Cepheid in filter $F$, $f_i$ is the flux of an individual companion star located within the critical radius and $f_C$ is the flux of the Cepheid.}
\vspace{3pt}
We calculated the values of $S$ separately for $V$ and $I$, using a critical radius of $0\farcs375$ which is the average value of the half-width at half-maximum (HWHM) of the WIYN PSF. We only include companions that contribute 4\% or more of the flux of a Cepheid in order to provide a conservative estimate of the blending value. This cut-off was adopted by \citet{mochejska00}, although \citet{mochejska01} raised it to 6\%. In practice, stars with $f_i \sim 0.05 f_C$ (or $\Delta$mag$\sim 3.25$) are near the completeness limit of the ACS images relative to the faintest, shortest-period Cepheids, which have $V\sim22.5, I\sim21.5$~mag.
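In code, evaluating Eqn.~\ref{eq:blend} amounts to a radius cut and a flux-ratio cut. A minimal sketch follows (our illustration; argument names are ours, with separations in arcsec and magnitudes in the band of interest):
\begin{verbatim}
import numpy as np

def blending(mag_cep, mags, r, r_crit=0.375, f_cut=0.04):
    """S_F: summed companion flux relative to the Cepheid for
    companions within r_crit contributing >= f_cut of its flux."""
    f_c = 10.0**(-0.4 * mag_cep)
    f_i = 10.0**(-0.4 * np.asarray(mags, dtype=float))
    keep = (np.asarray(r, dtype=float) <= r_crit) & (f_i >= f_cut*f_c)
    return f_i[keep].sum() / f_c

# The 4% cutoff corresponds to Delta(mag) = 2.5*log10(1/0.04) ~ 3.5;
# the criteria of Mochejska et al. (2001) are recovered with
# r_crit=0.75, f_cut=0.06.
\end{verbatim}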
\vspace{3pt}
In the case of Cepheids present in both ACS and WFPC2 images, we calculated blending values using the ACS data given its finer spatial sampling and increased depth. In the case of Cepheids present in multiple fields obtained with the same camera, we gave preference to the image with the deepest exposure time. If the exposures were of similar depth, we averaged the Cepheid magnitudes and the $S$ values.
\vspace{3pt}
Figure~\ref{fig:thumbnails} shows a comparison of HST and WIYN images for three Cepheids with different values of $S$. Each panel is $8\arcsec$ on a side, centered on a Cepheid. Circles with radii of $0\farcs375$ (typical WIYN HWHM) are drawn around the variables. The Cepheids were chosen to show the range of blending values, from $S_F=0$ (top row) to $S_F\sim0.6$ (bottom row). The left column shows the WIYN $V$ images, while the center and right columns show the $V$ and $I$ HST images.
\vspace{2pt}
The photometry and blending values are listed in Table \ref{tb:cephblend}. For each Cepheid, we list the ID and period from \citet{pellerin11}, the $V$ magnitude and its uncertainty, the value of $S_V$ and its uncertainty, and the corresponding information for the $I$ band. Additionally, we tabulate the $V$ and $I$ surface brightness values and the designations of the WIYN and HST fields where each Cepheid is located. The uncertainties in $S_F$ values are calculated by propagating the reported ALLSTAR photometric uncertainties through Eqn.~\ref{eq:blend}. HST field codes are based on the field name listed in the first column of Table~\ref{tb:fields}, followed by a letter to identify the camera (`a' for ACS, `w' for WFPC2).
\vspace{2pt}
Figure~\ref{fig:cmd} shows a color-magnitude diagram of the Cepheids and all companions located within the critical radius. As a reference, we also plot 3.5\% of all the stars with $I<26$~mag detected in the $V$ and $I$ ACS frames. The companions span a broad range of colors and magnitudes, but most are associated with the red giant branch and the red clump. These findings are not directly applicable to all Cepheid hosts, since different star-formation histories will alter the relative contributions of the upper main-sequence and the red giant branch.
\vspace{2pt}
\begin{figure}[t]
\includegraphics[width=0.48\textwidth]{fig03.ps}
\caption{Color-magnitude diagram of M33 Cepheids (in blue) and companions within $0\farcs 375$ (in red), contributing more (filled) or less (open) than 4\% of the Cepheid flux. Black dots are used to plot 3.5\% of the stars detected in the ACS frames with $I<26$~mag.}
\label{fig:cmd}
\end{figure}
We used the HST star lists obtained in \S\ref{sec:photometry} to tabulate all companions within a 2$\arcsec$ radius of each Cepheid, presented in Table~\ref{tb:comp}. Companions are labeled using the Cepheid ID from Table~\ref{tb:cephblend} and are numbered in increasing order of radial distance from the variable. We list the $x$-, $y$-, and radial distances from the Cepheid (in arcseconds), the $V$ magnitude and uncertainty, and the $I$ magnitude and uncertainty. Some companions were only detected in one band.
\vspace{2pt}
This extended dataset can be used for a variety of future studies. For example, comparisons of HST data with ground-based observations of M33 at different angular resolutions can be easily carried out by selecting the appropriate critical radius. Likewise, the sensitivity of blending values to the faint-companion cutoff limit can be explored. Lastly, suitable scaling of fluxes and angular separations can yield simulated HST images of Cepheids in similar environments out to $D\sim35$~Mpc, at which point $2\arcsec$ at the distance of M33 would be equivalent to the angular resolution of HST in the $V$ band.
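The last statement follows from the simple scaling of angular separations with distance (our arithmetic, adopting $D_{\rm M33}\simeq0.92$~Mpc within the range quoted above):
\begin{verbatim}
d_m33, d_target = 0.92e6, 35.0e6    # pc
theta = 2.0 * d_m33 / d_target      # a 2" separation rescaled
print(theta)                        # ~0.05", ~HST resolution at V
\end{verbatim}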
\vspace{2pt}
\section{Results}\label{sec:results}
We find mean blending values of $S_V=0.096\pm0.015$ and $S_I=0.083\pm0.013$ and median values of zero for both bands. Figure \ref{fig:Shist} shows cumulative distributions of blending values, while Figures~\ref{fig:SvP} and \ref{fig:Svsky} show the distribution of blending values as a function of period and surface brightness, respectively. Table~\ref{tb:blendstat} lists the fractions of Cepheids which meet several blending criteria as a function of period and surface brightness. We calculated the uncertainty in each fraction using the binomial distribution approximation,
\begin{equation}
\sigma(f)=\sqrt{f (1-f) /N}
\end{equation}
\noindent{where $f$ is the fraction value and $N$ is the number of Cepheids meeting a particular set of criteria. We cross-checked the validity of this approximation by performing 100,000 bootstrap resamplings with replacement, which yielded the same uncertainties. Lastly, we tested the sensitivity to outliers in the distributions by performing the same number of jackknife resamplings, keeping 90\% of the original sample. The derived fractions remained stable at the 2\% level.}
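For concreteness, the binomial estimate and its bootstrap cross-check can be reproduced in a few lines (a sketch under the same assumptions; \texttt{flags} marks the Cepheids meeting a given criterion):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(42)

def frac_and_err(flags):
    """Fraction of True flags with the binomial error above."""
    f, n = np.mean(flags), len(flags)
    return f, np.sqrt(f * (1.0 - f) / n)

def bootstrap_err(flags, n_resamp=10_000):
    """Bootstrap (resampling with replacement) cross-check."""
    flags = np.asarray(flags, dtype=float)
    means = [rng.choice(flags, size=len(flags), replace=True).mean()
             for _ in range(n_resamp)]
    return np.std(means)
\end{verbatim}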
\vspace{2pt}
The fraction of Cepheids with no blending is marginally lower ($\sim 1\sigma$) for Cepheids with $P<10$~d than for ones with $P>10$~d. Such a trend might be expected because the shorter-period, less luminous Cepheids can be affected (at a fixed flux ratio) by a larger fraction of disk stars. However, the difference vanishes when comparing the statistics of Cepheids affected at the 10\% level. There is no significant difference in the statistics of Cepheids located in areas with ``high'' or ``low'' surface brightness.
\vspace{2pt}
We also examined the effect of blends on the color of the Cepheids by calculating the value of $S_V-S_I$ for all Cepheids with non-zero values of either $S_V$ or $S_I$. The resulting histogram, presented in Figure \ref{fig:scolor}, shows that most blends do not appreciably change the color of the Cepheids: $\langle S_V-S_I\rangle = 0.03\pm0.27$.
\vspace{2pt}
\section{Comparison with previous work}\label{sec:comp}
\citet{mochejska01} analyzed WFPC2 images of M33 Cepheids discovered by the DIRECT project. We recalculated our blending values using the parameters adopted in that paper: a critical radius of $0\farcs75$ and
\input{table2s.tex}
\input{table3s.tex}
\begin{figure*}[htp]
\epsscale{0.9}\plottwo{fig04a.ps}{fig04b.ps}
\caption{Cumulative distributions of the blending values in the V and I bands (right and left panels, respectively). The solid lines represent the entire Cepheid sample while the dotted and dashed lines denote the short- and long-period ($P\leq10, P>10$~d) Cepheids, respectively.}
\label{fig:Shist}
\end{figure*}
\vspace{-24pt}
\begin{figure*}
\epsscale{0.9}\plottwo{fig05a.ps}{fig05b.ps}
\caption{Blending values as a function of the period of the Cepheid.}
\label{fig:SvP}
\end{figure*}
\vspace{-24pt}
\begin{figure*}[htp]
\epsscale{0.9}\plottwo{fig06a.ps}{fig06b.ps}
\caption{Blending values as a function of the sky background.}
\label{fig:Svsky}
\end{figure*}
\clearpage
\begin{figure}[t]
\includegraphics[width=0.49\textwidth]{fig07.ps}
\caption{Distribution of the ``color'' of the companions relative to their Cepheid.}
\label{fig:scolor}
\end{figure}
\input{table4.tex}
\noindent{a companion flux cutoff of 6\%. The results are tabulated in the rightmost column of Table~\ref{tb:blendstat}. We also compared the individual blending values measured for 33 variables in common in $V$ and 28 in $I$. As seen in Figure~\ref{fig:compm01}, there is good agreement, with $\langle\Delta S_F\rangle=-0.02\pm0.13$.}
The statistics derived using the criteria of \citet{mochejska01} are in excellent agreement with the values presented in their paper. For example, the fraction of Cepheids with $S_V<0.1$ becomes $45\pm4$\%, compared to their value of $\sim43\pm5$\% (inferred from their Fig.~4 and Table 2). We also obtain identical values for the mean and median blending levels ($24\pm3$\% and 13\%) and reproduce the difference in blending statistics for ``short'' and ``long'' period Cepheids. Clearly, the differences between the two sets of values presented in Table~\ref{tb:blendstat} are due to the $2\times$ smaller critical radius adopted in our study, and emphasize the importance of angular resolution.
\begin{figure}[t]
\includegraphics[width=0.49\textwidth]{fig08.ps}
\caption{Comparison of blending values for Cepheids in common (found in WFPC2 images) with \citet{mochejska01}; our analysis was redone using their criteria. Filled (open) circles are used to plot the blending values in the V (I) filter.}
\label{fig:compm01}
\end{figure}
We also compared the disk surface brightness values we derived with those determined by \citet{mochejska01} and found agreement at the level of 0.2~mag/\sq\arcsec. We note that our surface brightness calculation is based on the average background level of the HST images within $7\arcsec$ of the Cepheid, while \citet{mochejska01} used the DIRECT ground-based images to calculate the sky in an annulus about $6\arcsec$ from the Cepheid. Regardless of the method used to measure the background or the blending criteria adopted, there is little (if any) correlation between blending fraction and surface brightness for the range of values considered here.
\citet{bresolin05} calculated blending statistics for a small sample of 16 Cepheids in NGC$\,$300 using HST/ACS images. They found a median value of 0\% and an average value of 7\%. Our results are consistent with their findings.
\section{Concluding Remarks}\label{sec:conclusions}
We have presented a survey of Cepheids in M33 and their companions within 2$\arcsec$, as resolved by HST with the ACS and WFPC2 cameras. We calculated the flux contribution of the companions when they are blended (unresolved) in ground-based images with a seeing of $0\farcs75$. We find that more than half of the Cepheids in our sample exhibit no blending at $V$ and $I$, regardless of period or surface brightness. The majority of companion stars are located in the red giant branch and do not significantly alter the derived color of the Cepheids.
\clearpage
We plan to combine the ground-based photometry of \citet{pellerin11} with the blending values derived in this paper to investigate possible biases in the determination of distance moduli and ``metallicity corrections'' when using samples that lack such higher-resolution imaging. Additionally, our compilation of companions may be useful to derive empirical photometric bias corrections for Cepheids in more distant galaxies studied with the {\it Hubble Space Telescope}, provided the variables are located in similar environments to the M33 sample.
\vspace{-9pt}
\acknowledgements
JMC acknowledges support by the Department of Education through the GAANN Fellowship Program. AP and LMM acknowledge financial support through a Texas A\&M University faculty startup fund. We thank Profs. Jianhua Huang and Lan Zhou for useful discussions on statistical techniques and the referee, Dr. Barry Madore, for his very helpful comments.
\vspace{4pt}
Based on observations made with the NASA/ESA Hubble Space Telescope and obtained using the Mikulski Archive for Space Telescopes and the Hubble Legacy Archive (HLA) at STScI. STScI is operated by the Association of Universities for Research in Astronomy, Inc. under NASA contract NAS 5-26555. The HLA is a collaboration between STScI/NASA, the Space Telescope European Coordinating Facility (ST-ECF/ESA) and the Canadian Astronomy Data Centre (CADC/NRC/CSA).
\vspace{4pt}
{\it Facilities:} \facility{HST (WFPC2, ACS)}, \facility{WIYN (MiniMo)}
\bibliographystyle{apj}